Sample records for learning ML methods

  1. Predicting activities of daily living for cancer patients using an ontology-guided machine learning methodology.

    PubMed

    Min, Hua; Mobahi, Hedyeh; Irvin, Katherine; Avramovic, Sanja; Wojtusiak, Janusz

    2017-09-16

    Bio-ontologies are becoming increasingly important in knowledge representation and in the machine learning (ML) fields. This paper presents a ML approach that incorporates bio-ontologies and its application to the SEER-MHOS dataset to discover patterns of patient characteristics that impact the ability to perform activities of daily living (ADLs). Bio-ontologies are used to provide computable knowledge for ML methods to "understand" biomedical data. This retrospective study included 723 cancer patients from the SEER-MHOS dataset. Two ML methods were applied to create predictive models for ADL disabilities for the first year after a patient's cancer diagnosis. The first method is a standard rule learning algorithm; the second is that same algorithm additionally equipped with methods for reasoning with ontologies. The models showed that a patient's race, ethnicity, smoking preference, treatment plan and tumor characteristics including histology, staging, cancer site, and morphology were predictors for ADL performance levels one year after cancer diagnosis. The ontology-guided ML method was more accurate at predicting ADL performance levels (P < 0.1) than methods without ontologies. This study demonstrated that bio-ontologies can be harnessed to provide medical knowledge for ML algorithms. The presented method demonstrates that encoding specific types of hierarchical relationships to guide rule learning is possible, and can be extended to other types of semantic relationships present in biomedical ontologies. The ontology-guided ML method achieved better performance than the method without ontologies. The presented method can also be used to promote the effectiveness and efficiency of ML in healthcare, in which use of background knowledge and consistency with existing clinical expertise is critical.

  2. AstroML: Python-powered Machine Learning for Astronomy

    NASA Astrophysics Data System (ADS)

    Vander Plas, Jake; Connolly, A. J.; Ivezic, Z.

    2014-01-01

    As astronomical data sets grow in size and complexity, automated machine learning and data mining methods are becoming an increasingly fundamental component of research in the field. The astroML project (http://astroML.org) provides a common repository for practical examples of the data mining and machine learning tools used and developed by astronomical researchers, written in Python. The astroML module contains a host of general-purpose data analysis and machine learning routines, loaders for openly available astronomical datasets, and fast implementations of specific computational methods often used in astronomy and astrophysics. The associated website features hundreds of examples of these routines being used for analysis of real astronomical datasets, while the associated textbook provides a curriculum resource for graduate-level courses focusing on practical statistics, machine learning, and data mining approaches within astronomical research. This poster will highlight several of the more powerful and unique examples of analysis performed with astroML, all of which can be reproduced in their entirety on any computer with the proper packages installed.
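    The kind of catalog analysis astroML supports can be sketched with scikit-learn alone. The two-population color data below is synthetic and purely illustrative, a stand-in for a real photometric catalog rather than an astroML loader:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical stand-in for a photometric catalog: two synthetic
# populations in a 2D color space.
blue = rng.normal(loc=[0.3, 0.1], scale=0.05, size=(200, 2))
red = rng.normal(loc=[1.2, 0.5], scale=0.05, size=(300, 2))
colors = np.vstack([blue, red])

# Unsupervised clustering of the sort astroML wraps for real catalogs.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(colors)
sizes = sorted(np.bincount(km.labels_).tolist())
print(sizes)  # cluster sizes recover the two populations
```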

  3. Nonlinear Semi-Supervised Metric Learning Via Multiple Kernels and Local Topology.

    PubMed

    Li, Xin; Bai, Yanqin; Peng, Yaxin; Du, Shaoyi; Ying, Shihui

    2018-03-01

    Changing the metric on the data may change the data distribution; hence, a good distance metric can improve the performance of a learning algorithm. In this paper, we address the semi-supervised distance metric learning (ML) problem to obtain the best nonlinear metric for the data. First, we describe the nonlinear metric by a multiple-kernel representation. With this approach, we project the data into a high-dimensional space where the data can be well represented by linear ML. Then, we reformulate linear ML as a minimization problem on the positive definite matrix group. Finally, we develop a two-step algorithm for solving this model and design an intrinsic steepest descent algorithm to learn the positive definite metric matrix. Experimental results validate that our proposed method is effective and outperforms several state-of-the-art ML methods.
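    The premise that the choice of metric changes which points count as neighbors can be illustrated directly. The matrix M below is an arbitrary illustrative metric, not one learned by the paper's algorithm:

```python
import numpy as np

def mahalanobis(x, y, M):
    """Distance induced by a positive definite matrix M."""
    d = x - y
    return float(np.sqrt(d @ M @ d))

q = np.array([0.0, 0.0])
a = np.array([0.9, 0.0])
b = np.array([0.0, 1.0])

# Under the Euclidean metric (M = I), a is the nearer neighbor of q.
I = np.eye(2)
assert mahalanobis(q, a, I) < mahalanobis(q, b, I)

# A metric that stretches the first axis and shrinks the second
# (values are illustrative) reverses the neighbor ranking.
M = np.diag([4.0, 0.25])
print(mahalanobis(q, a, M), mahalanobis(q, b, M))  # now b is nearer
```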

  4. Multilayer Extreme Learning Machine With Subnetwork Nodes for Representation Learning.

    PubMed

    Yang, Yimin; Wu, Q M Jonathan

    2016-11-01

    The extreme learning machine (ELM), which was originally proposed for "generalized" single-hidden-layer feedforward neural networks, provides efficient unified learning solutions for clustering, regression, and classification applications. It offers competitive accuracy with superb efficiency in many applications. However, the ELM with a subnetwork-nodes architecture has not attracted much research attention. Recently, many methods have been proposed for supervised/unsupervised dimension reduction or representation learning, but these methods normally work for only one type of problem. This paper studies the general architecture of the multilayer ELM (ML-ELM) with subnetwork nodes, showing that: 1) the proposed method provides a representation learning platform supporting unsupervised/supervised and compressed/sparse representation learning and 2) in experiments on ten image datasets and 16 classification datasets, the proposed ML-ELM with subnetwork nodes performs competitively with, or much better than, other conventional feature learning methods.

  5. Automated diagnosis of myositis from muscle ultrasound: Exploring the use of machine learning and deep learning methods

    PubMed Central

    Burlina, Philippe; Billings, Seth; Joshi, Neil

    2017-01-01

    Objective: To evaluate the use of ultrasound coupled with machine learning (ML) and deep learning (DL) techniques for automated or semi-automated classification of myositis. Methods: Eighty subjects comprised of 19 with inclusion body myositis (IBM), 14 with polymyositis (PM), 14 with dermatomyositis (DM), and 33 normal (N) subjects were included in this study, where 3214 muscle ultrasound images of 7 muscles (observed bilaterally) were acquired. We considered three problems of classification including (A) normal vs. affected (DM, PM, IBM); (B) normal vs. IBM patients; and (C) IBM vs. other types of myositis (DM or PM). We studied the use of an automated DL method using deep convolutional neural networks (DL-DCNNs) for diagnostic classification and compared it with a semi-automated conventional ML method based on random forests (ML-RF) and “engineered” features. We used the known clinical diagnosis as the gold standard for evaluating performance of muscle classification. Results: The performance of the DL-DCNN method resulted in accuracies ± standard deviation of 76.2% ± 3.1% for problem (A), 86.6% ± 2.4% for (B) and 74.8% ± 3.9% for (C), while the ML-RF method led to accuracies of 72.3% ± 3.3% for problem (A), 84.3% ± 2.3% for (B) and 68.9% ± 2.5% for (C). Conclusions: This study demonstrates the application of machine learning methods for automatically or semi-automatically classifying inflammatory muscle disease using muscle ultrasound. Compared to the conventional random forest machine learning method used here, which has the drawback of requiring manual delineation of muscle/fat boundaries, DCNN-based classification by and large improved the accuracies in all classification problems while providing a fully automated approach to classification. PMID:28854220

  6. Automated diagnosis of myositis from muscle ultrasound: Exploring the use of machine learning and deep learning methods.

    PubMed

    Burlina, Philippe; Billings, Seth; Joshi, Neil; Albayda, Jemima

    2017-01-01

    To evaluate the use of ultrasound coupled with machine learning (ML) and deep learning (DL) techniques for automated or semi-automated classification of myositis. Eighty subjects comprised of 19 with inclusion body myositis (IBM), 14 with polymyositis (PM), 14 with dermatomyositis (DM), and 33 normal (N) subjects were included in this study, where 3214 muscle ultrasound images of 7 muscles (observed bilaterally) were acquired. We considered three problems of classification including (A) normal vs. affected (DM, PM, IBM); (B) normal vs. IBM patients; and (C) IBM vs. other types of myositis (DM or PM). We studied the use of an automated DL method using deep convolutional neural networks (DL-DCNNs) for diagnostic classification and compared it with a semi-automated conventional ML method based on random forests (ML-RF) and "engineered" features. We used the known clinical diagnosis as the gold standard for evaluating performance of muscle classification. The performance of the DL-DCNN method resulted in accuracies ± standard deviation of 76.2% ± 3.1% for problem (A), 86.6% ± 2.4% for (B) and 74.8% ± 3.9% for (C), while the ML-RF method led to accuracies of 72.3% ± 3.3% for problem (A), 84.3% ± 2.3% for (B) and 68.9% ± 2.5% for (C). This study demonstrates the application of machine learning methods for automatically or semi-automatically classifying inflammatory muscle disease using muscle ultrasound. Compared to the conventional random forest machine learning method used here, which has the drawback of requiring manual delineation of muscle/fat boundaries, DCNN-based classification by and large improved the accuracies in all classification problems while providing a fully automated approach to classification.
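    A minimal sketch of the conventional ML-RF side of this comparison, using scikit-learn; the synthetic features below are hypothetical stand-ins for the study's "engineered" ultrasound features, not real patient data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-ins for engineered ultrasound features (e.g.
# echo-intensity statistics per muscle region); the real study derived
# such features from manually delineated muscle/fat boundaries.
n = 400
affected = rng.normal(loc=1.0, scale=0.8, size=(n // 2, 5))
normal = rng.normal(loc=0.0, scale=0.8, size=(n // 2, 5))
X = np.vstack([affected, normal])
y = np.array([1] * (n // 2) + [0] * (n // 2))

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
acc = clf.score(Xte, yte)
print(round(acc, 2))  # well above chance on this separable toy data
```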

  7. 3D Visualization of Machine Learning Algorithms with Astronomical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2016-01-01

    We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Utilizing Python code to build trees based on galaxy catalogs, we can render the results with the visualization suite Blender to produce interactive 360 degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods relating to optimization and efficiency.
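    The MST-based clustering idea can be sketched with SciPy: build a minimum spanning tree over the catalog, cut edges longer than a threshold, and read clusters off the connected components. The 3D point groups and threshold below are illustrative, not a real galaxy catalog:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)

# Two synthetic "galaxy groups" standing in for a 3D catalog.
group1 = rng.normal(loc=[0, 0, 0], scale=0.1, size=(30, 3))
group2 = rng.normal(loc=[3, 3, 3], scale=0.1, size=(30, 3))
points = np.vstack([group1, group2])

# MST over the complete distance graph, then cut edges longer than a
# threshold to split the tree into clusters.
mst = minimum_spanning_tree(cdist(points, points)).toarray()
mst[mst > 1.0] = 0.0  # threshold chosen for this toy data
n_clusters, labels = connected_components(mst, directed=False)
print(n_clusters)  # 2
```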

  8. Using Machine Learning in Adversarial Environments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren Leon Davis

    Intrusion/anomaly detection systems are among the first lines of cyber defense. Commonly, they either use signatures or machine learning (ML) to identify threats, but fail to account for sophisticated attackers trying to circumvent them. We propose to embed machine learning within a game-theoretic framework that performs adversarial modeling, develops methods for optimizing operational response based on ML, and integrates the resulting optimization codebase into the existing ML infrastructure developed by the Hybrid LDRD. Our approach addresses three key shortcomings of ML in adversarial settings: 1) resulting classifiers are typically deterministic and, therefore, easy to reverse engineer; 2) ML approaches only address the prediction problem, but do not prescribe how one should operationalize predictions, nor account for operational costs and constraints; and 3) ML approaches do not model attackers' response and can be circumvented by sophisticated adversaries. The principal novelty of our approach is to construct an optimization framework that blends ML, operational considerations, and a model predicting attackers' reaction, with the goal of computing an optimal moving-target defense. One important challenge is to construct a model of an adversary that is tractable, yet realistic. We aim to advance the science of attacker modeling by considering game-theoretic methods, by engaging experimental subjects with red-teaming experience in trying to actively circumvent an intrusion detection system, and by learning a predictive model of such circumvention activities. In addition, we will generate metrics to test that a particular model of an adversary is consistent with available data.

  9. Machine Learning–Based Differential Network Analysis: A Study of Stress-Responsive Transcriptomes in Arabidopsis

    PubMed Central

    Ma, Chuang; Xin, Mingming; Feldmann, Kenneth A.; Wang, Xiangfeng

    2014-01-01

    Machine learning (ML) is an intelligent data mining technique that builds a prediction model based on the learning of prior knowledge to recognize patterns in large-scale data sets. We present an ML-based methodology for transcriptome analysis via comparison of gene coexpression networks, implemented as an R package called machine learning–based differential network analysis (mlDNA) and apply this method to reanalyze a set of abiotic stress expression data in Arabidopsis thaliana. The mlDNA first used a ML-based filtering process to remove nonexpressed, constitutively expressed, or non-stress-responsive “noninformative” genes prior to network construction, through learning the patterns of 32 expression characteristics of known stress-related genes. The retained “informative” genes were subsequently analyzed by ML-based network comparison to predict candidate stress-related genes showing expression and network differences between control and stress networks, based on 33 network topological characteristics. Comparative evaluation of the network-centric and gene-centric analytic methods showed that mlDNA substantially outperformed traditional statistical testing–based differential expression analysis at identifying stress-related genes, with markedly improved prediction accuracy. To experimentally validate the mlDNA predictions, we selected 89 candidates out of the 1784 predicted salt stress–related genes with available SALK T-DNA mutagenesis lines for phenotypic screening and identified two previously unreported genes, mutants of which showed salt-sensitive phenotypes. PMID:24520154

  10. Surgical robotics beyond enhanced dexterity instrumentation: a survey of machine learning techniques and their role in intelligent and autonomous surgical actions.

    PubMed

    Kassahun, Yohannes; Yu, Bingbin; Tibebu, Abraham Temesgen; Stoyanov, Danail; Giannarou, Stamatia; Metzen, Jan Hendrik; Vander Poorten, Emmanuel

    2016-04-01

    Advances in technology and computing play an increasingly important role in the evolution of modern surgical techniques and paradigms. This article reviews the current role of machine learning (ML) techniques in the context of surgery, with a focus on surgical robotics (SR). We also provide a perspective on future possibilities for enhancing the effectiveness of procedures by integrating ML in the operating room. The review is focused on ML techniques directly applied to surgery, surgical robotics, and surgical training and assessment. The widespread use of ML methods in diagnosis and medical image computing is beyond the scope of this review. Searches were performed on PubMed and IEEE Xplore using combinations of the keywords: ML, surgery, robotics, surgical and medical robotics, skill learning, skill analysis, and learning to perceive. Studies making use of ML methods in the context of surgery are increasingly being reported. In particular, there is growing interest in using ML to develop tools for understanding and modeling surgical skill and competence, or for extracting surgical workflow. Many researchers are beginning to integrate this understanding into the control of recent surgical robots and devices. ML is an expanding field. It is popular because it allows efficient processing of vast amounts of data for interpretation and real-time decision making. Already widely used in imaging and diagnosis, ML is also expected to play an important role in surgery and interventional treatments. In particular, ML could become a game changer in the conception of cognitive surgical robots. Such robots, endowed with cognitive skills, would assist the surgical team on a cognitive level as well, for instance by lowering the mental load of the team. For example, ML could help extract surgical skill, learned through demonstration by human experts, and transfer it to robotic skills. Such intelligent surgical assistance would significantly surpass the state of the art in surgical robotics, as current devices possess no intelligence whatsoever and are merely advanced and expensive instruments.

  11. Improving orbit prediction accuracy through supervised machine learning

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Bai, Xiaoli

    2018-05-01

    Due to the lack of information such as space environment conditions and resident space objects' (RSOs') body characteristics, current orbit predictions grounded solely in physics-based models may fail to achieve the accuracy required for collision avoidance, and have already led to satellite collisions. This paper presents a methodology to predict RSOs' trajectories with higher accuracy than current methods. Inspired by machine learning (ML) theory, in which models are learned from large amounts of observed data and prediction is conducted without explicitly modeling space objects and the space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. Using a simulation-based space catalog environment as the test bed, the paper demonstrates three types of generalization capability for the proposed ML approach: (1) the ML model can be used to improve the same RSO's orbit information that is not available during the learning process but shares the same time interval as the training data; (2) the ML model can be used to improve predictions of the same RSO at future epochs; and (3) the ML model based on one RSO can be applied to other RSOs that share some common features.
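    The core idea of pairing a physics-based propagator with a learned error model can be sketched in one dimension. The dynamics and feature choices below are toy assumptions, not the paper's models:

```python
import numpy as np

# Toy 1D stand-in for orbit prediction: a physics model that omits a
# drag-like deceleration, plus an ML step that learns the residual.
v, a = 7.5, 0.002  # hypothetical velocity and unmodeled deceleration

def truth(t):
    return v * t - 0.5 * a * t**2

def physics_model(t):
    return v * t  # ignores the drag-like term

# Fit the residual at past epochs (generalization type 2 above:
# improving the same object's predictions at future epochs).
t_train = np.linspace(0, 100, 50)
residual = truth(t_train) - physics_model(t_train)
features = np.column_stack([t_train, t_train**2])
coef, *_ = np.linalg.lstsq(features, residual, rcond=None)

t_future = np.array([150.0, 200.0])
correction = np.column_stack([t_future, t_future**2]) @ coef
err_physics = np.abs(truth(t_future) - physics_model(t_future))
err_ml = np.abs(truth(t_future) - (physics_model(t_future) + correction))
print(err_physics.max(), err_ml.max())  # corrected error is far smaller
```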

  12. Microelectrode Recordings Validate the Clinical Visualization of Subthalamic-Nucleus Based on 7T Magnetic Resonance Imaging and Machine Learning for Deep Brain Stimulation Surgery.

    PubMed

    Shamir, Reuben R; Duchin, Yuval; Kim, Jinyoung; Patriat, Remi; Marmor, Odeya; Bergman, Hagai; Vitek, Jerrold L; Sapiro, Guillermo; Bick, Atira; Eliahou, Ruth; Eitan, Renana; Israel, Zvi; Harel, Noam

    2018-05-24

    Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is a proven and effective therapy for the management of the motor symptoms of Parkinson's disease (PD). While accurate positioning of the stimulating electrode is critical for the success of this therapy, precise identification of the STN based on imaging can be challenging. We developed a method to accurately visualize the STN on a standard clinical magnetic resonance image (MRI). The method incorporates a database of 7-Tesla (T) MRIs of PD patients together with machine-learning methods (hereafter 7T-ML). Our objective was to validate the clinical application accuracy of the 7T-ML method by comparing it with identification of the STN based on intraoperative microelectrode recordings. Sixteen PD patients who underwent microelectrode-recording-guided STN DBS were included in this study (30 implanted leads and electrode trajectories). The length of the STN along the electrode trajectory and the position of its contacts (dorsal to, inside, or ventral to the STN) were compared between the microelectrode recordings and the 7T-ML method computed from each patient's clinical 3T MRI. All 30 electrode trajectories that intersected the STN based on microelectrode recordings also intersected it when visualized with the 7T-ML method. The average STN trajectory length was 6.2 ± 0.7 mm based on microelectrode recordings and 5.8 ± 0.9 mm for the 7T-ML method. We observed 93% agreement regarding contact location between the microelectrode recordings and the 7T-ML method. The 7T-ML method is highly consistent with microelectrode-recording data and provides a reliable and accurate patient-specific prediction for targeting the STN.

  13. A machine learning approach as a surrogate of finite element analysis-based inverse method to estimate the zero-pressure geometry of human thoracic aorta.

    PubMed

    Liang, Liang; Liu, Minliang; Martin, Caitlin; Sun, Wei

    2018-05-09

    Advances in structural finite element analysis (FEA) and medical imaging have made it possible to investigate the in vivo biomechanics of human organs such as blood vessels, for which organ geometries at the zero-pressure level need to be recovered. Although FEA-based inverse methods are available for zero-pressure geometry estimation, they typically require iterative computation, which is time-consuming and may not be suitable for time-sensitive clinical applications. In this study, using machine learning (ML) techniques, we developed an ML model to estimate the zero-pressure geometry of the human thoracic aorta given two pressurized geometries of the same patient at two different blood pressure levels. For the ML model development, an FEA-based method was used to generate a dataset of aorta geometries for 3125 virtual patients. The ML model, which was trained and tested on this dataset, is capable of recovering zero-pressure geometries consistent with those generated by the FEA-based method. This study thus demonstrates the feasibility and great potential of using ML techniques as a fast surrogate for FEA-based inverse methods to recover the zero-pressure geometries of human organs. Copyright © 2018 John Wiley & Sons, Ltd.

  14. Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time-to-Event Analysis.

    PubMed

    Gong, Xiajing; Hu, Meng; Zhao, Liang

    2018-05-01

    Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time-to-event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high-dimensional data featured by a large number of predictor variables. Our results showed that ML-based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high-dimensional data. The prediction performances of ML-based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML-based methods provide a powerful tool for time-to-event analysis, with a built-in capacity for high-dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. © 2018 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
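    The concordance index used above to assess prediction performance can be computed directly. A minimal implementation for right-censored data, counting only comparable pairs, might look like:

```python
import numpy as np

def concordance_index(time, event, risk):
    """Fraction of comparable pairs whose predicted risk ordering
    matches the observed survival ordering (risk ties count 0.5).
    A pair is comparable when the earlier time is an observed event."""
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy data: higher predicted risk should mean shorter survival.
time = np.array([2.0, 4.0, 6.0, 8.0])
event = np.array([1, 1, 0, 1])          # 0 marks a censored subject
perfect = np.array([4.0, 3.0, 2.0, 1.0])  # perfectly ordered risks
print(concordance_index(time, event, perfect))  # 1.0
```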

  15. Supervised Machine Learning for Population Genetics: A New Paradigm

    PubMed Central

    Schrider, Daniel R.; Kern, Andrew D.

    2018-01-01

    As population genomic datasets grow in size, researchers are faced with the daunting task of making sense of a flood of information. To keep pace with this explosion of data, computational methodologies for population genetic inference are rapidly being developed to best utilize genomic sequence data. In this review we discuss a new paradigm that has emerged in computational population genomics: that of supervised machine learning (ML). We review the fundamentals of ML, discuss recent applications of supervised ML to population genetics that outperform competing methods, and describe promising future directions in this area. Ultimately, we argue that supervised ML is an important and underutilized tool that has considerable potential for the world of evolutionary genomics. PMID:29331490

  16. A study of active learning methods for named entity recognition in clinical text.

    PubMed

    Chen, Yukun; Lasko, Thomas A; Mei, Qiaozhu; Denny, Joshua C; Xu, Hua

    2015-12-01

    Named entity recognition (NER), a sequential labeling task, is one of the fundamental tasks for building clinical natural language processing (NLP) systems. Machine learning (ML) based approaches can achieve good performance, but they often require large amounts of annotated samples, which are expensive to build because annotation requires domain experts. Active learning (AL), a sample selection approach integrated with supervised ML, aims to minimize the annotation cost while maximizing the performance of ML-based models. In this study, our goal was to develop and evaluate both existing and new AL methods for a clinical NER task to identify concepts of medical problems, treatments, and lab tests in clinical notes. Using the annotated NER corpus from the 2010 i2b2/VA NLP challenge, which contained 349 clinical documents with 20,423 unique sentences, we simulated AL experiments using a number of existing and novel algorithms in three categories: uncertainty-based, diversity-based, and baseline sampling strategies. They were compared with passive learning, which uses random sampling. Learning curves that plot performance of the NER model against the estimated annotation cost (based on number of sentences or words in the training set) were generated to evaluate the different active learning methods and the passive learning method, and the area under the learning curve (ALC) score was computed. Based on the learning curves of F-measure vs. number of sentences, uncertainty sampling algorithms outperformed all other methods in ALC. Most diversity-based methods also performed better than random sampling in ALC. To achieve an F-measure of 0.80, the best uncertainty sampling method could save 66% of annotations in sentences, as compared to random sampling. For the learning curves of F-measure vs. number of words, uncertainty sampling methods again outperformed all other methods in ALC. To achieve an F-measure of 0.80, in comparison to random sampling, the best uncertainty-based method saved 42% of annotations in words, whereas the best diversity-based method reduced annotation effort by only 7%. In the simulated setting, AL methods, particularly uncertainty-sampling-based approaches, appeared to substantially reduce annotation cost for the clinical NER task. The actual benefit of active learning in clinical NER should be further evaluated in a real-time setting. Copyright © 2015 Elsevier Inc. All rights reserved.
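    A minimal uncertainty-sampling loop of the kind evaluated in this study can be sketched with scikit-learn on synthetic data; the dataset, seed set, and query budget are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for an annotation task: labels are "expensive",
# so we query them one at a time by uncertainty sampling.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Seed set: five labeled examples per class.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labeled]

clf = LogisticRegression(max_iter=1000)
for _ in range(20):  # 20 query rounds, one "annotation" per round
    clf.fit(X[labeled], y[labeled])
    # Least-confident query: predicted probability closest to 0.5.
    proba = clf.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(proba - 0.5)))]
    labeled.append(query)
    pool.remove(query)

clf.fit(X[labeled], y[labeled])
print(round(clf.score(X, y), 2))  # reasonable accuracy from 30 labels
```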

  17. Case-Based Reasoning in Mixed Paradigm Settings and with Learning

    DTIC Science & Technology

    1994-04-30

    Learning Prototypical Cases: OFF-BROADWAY, MCI, and RMHC-* are three CBR-ML systems that learn case prototypes. We feel that methods that enable the … at Irvine Machine Learning Repository, including heart disease and breast cancer databases. OFF-BROADWAY, MCI, and RMHC-* made the following notable …

  18. Automating Construction of Machine Learning Models With Clinical Big Data: Proposal Rationale and Methods

    PubMed Central

    Stone, Bryan L; Johnson, Michael D; Tarczy-Hornoch, Peter; Wilcox, Adam B; Mooney, Sean D; Sheng, Xiaoming; Haug, Peter J; Nkoy, Flory L

    2017-01-01

    Background To improve health outcomes and cut health care costs, we often need to conduct prediction/classification using large clinical datasets (aka, clinical big data), for example, to identify high-risk patients for preventive interventions. Machine learning has been proposed as a key technology for doing this. Machine learning has won most data science competitions and could support many clinical activities, yet only 15% of hospitals use it for even limited purposes. Despite familiarity with data, health care researchers often lack machine learning expertise to directly use clinical big data, creating a hurdle in realizing value from their data. Health care researchers can work with data scientists with deep machine learning knowledge, but it takes time and effort for both parties to communicate effectively. Facing a shortage in the United States of data scientists and hiring competition from companies with deep pockets, health care systems have difficulty recruiting data scientists. Building and generalizing a machine learning model often requires hundreds to thousands of manual iterations by data scientists to select the following: (1) hyper-parameter values and complex algorithms that greatly affect model accuracy and (2) operators and periods for temporally aggregating clinical attributes (eg, whether a patient’s weight kept rising in the past year). This process becomes infeasible with limited budgets. Objective This study’s goal is to enable health care researchers to directly use clinical big data, make machine learning feasible with limited budgets and data scientist resources, and realize value from data. 
Methods This study will allow us to achieve the following: (1) finish developing the new software, Automated Machine Learning (Auto-ML), to automate model selection for machine learning with clinical big data and validate Auto-ML on seven benchmark modeling problems of clinical importance; (2) apply Auto-ML and novel methodology to two new modeling problems crucial for care management allocation and pilot one model with care managers; and (3) perform simulations to estimate the impact of adopting Auto-ML on US patient outcomes. Results We are currently writing Auto-ML’s design document. We intend to finish our study by around the year 2022. Conclusions Auto-ML will generalize to various clinical prediction/classification problems. With minimal help from data scientists, health care researchers can use Auto-ML to quickly build high-quality models. This will boost wider use of machine learning in health care and improve patient outcomes. PMID:28851678
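    The kind of automated model selection Auto-ML aims to scale up can be sketched with an ordinary cross-validated grid search. The tiny grid and the bundled dataset below are illustrative stand-ins, not the proposed software or clinical big data:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Hyper-parameter values greatly affect model accuracy; a search
# automates the manual iterations a data scientist would otherwise do.
X, y = load_breast_cancer(return_X_y=True)
grid = {"n_estimators": [50, 100], "max_depth": [3, None]}
search = GridSearchCV(RandomForestClassifier(random_state=0), grid, cv=3)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 2))
```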

  19. Comparing deep neural network and other machine learning algorithms for stroke prediction in a large-scale population-based electronic medical claims database.

    PubMed

    Chen-Ying Hung; Wei-Chen Chen; Po-Tsun Lai; Ching-Heng Lin; Chi-Chun Lee

    2017-07-01

    Electronic medical claims (EMCs) can be used to accurately predict the occurrence of a variety of diseases, which can contribute to precise medical interventions. While there is growing interest in applying machine learning (ML) techniques to clinical problems, the use of deep learning in healthcare has only recently gained attention. Deep learning, such as the deep neural network (DNN), has achieved impressive results in speech recognition, computer vision, and natural language processing in recent years. However, deep learning is often difficult to comprehend due to the complexity of its framework. Furthermore, this method has not yet been demonstrated to outperform other conventional ML algorithms in disease prediction tasks using EMCs. In this study, we utilize a large population-based EMC database of around 800,000 patients to compare a DNN with three other ML approaches for predicting 5-year stroke occurrence. The results show that the DNN and the gradient boosting decision tree (GBDT) achieve similarly high prediction accuracies, better than those of the logistic regression (LR) and support vector machine (SVM) approaches. Meanwhile, the DNN achieves optimal results using smaller amounts of patient data than the GBDT method.

  20. Statistical and Machine Learning forecasting methods: Concerns and ways forward

    PubMed Central

    Makridakis, Spyros; Assimakopoulos, Vassilios

    2018-01-01

    Machine Learning (ML) methods have been proposed in the academic literature as alternatives to statistical ones for time series forecasting. Yet, scant evidence is available about their relative performance in terms of accuracy and computational requirements. The purpose of this paper is to evaluate such performance across multiple forecasting horizons using a large subset of 1045 monthly time series used in the M3 Competition. After comparing the post-sample accuracy of popular ML methods with that of eight traditional statistical ones, we found that the former are dominated across both accuracy measures used and for all forecasting horizons examined. Moreover, we observed that their computational requirements are considerably greater than those of statistical methods. The paper discusses the results, explains why the accuracy of ML models is below that of statistical ones and proposes some possible ways forward. The empirical results found in our research stress the need for objective and unbiased ways to test the performance of forecasting methods that can be achieved through sizable and open competitions allowing meaningful comparisons and definite conclusions. PMID:29584784
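    The sMAPE accuracy measure used in the M-competitions, and a comparison of two simple forecasts under it, can be sketched as follows; the series is a toy trend, not an M3 series:

```python
import numpy as np

def smape(actual, forecast):
    """Symmetric MAPE as used in the M-competitions (in percent)."""
    actual = np.asarray(actual, float)
    forecast = np.asarray(forecast, float)
    return float(np.mean(2.0 * np.abs(forecast - actual)
                         / (np.abs(actual) + np.abs(forecast))) * 100.0)

# A toy monthly series with a linear trend; compare a naive (last
# value) forecast against a drift forecast over a 6-step horizon.
series = np.arange(1, 25, dtype=float)    # 24 "months" of history
horizon = np.arange(25, 31, dtype=float)  # next 6 true values

naive = np.repeat(series[-1], 6)          # flat forecast
drift = series[-1] + (series[-1] - series[0]) / (len(series) - 1) \
        * np.arange(1, 7)

print(smape(horizon, naive), smape(horizon, drift))  # drift wins here
```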

  1. Machine Learning of Parameters for Accurate Semiempirical Quantum Chemical Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter

    2015-05-12

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.

  2. Machine learning of parameters for accurate semiempirical quantum chemical calculations

    DOE PAGES

    Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter

    2015-04-14

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.
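The idea of using ML to push a cheap method toward accurate reference values can be sketched with a Δ-learning-style toy: a ridge regression learns the residual between a "cheap" and an "accurate" energy from molecular descriptors. Note that the paper above learns the SQC parameters themselves rather than an additive correction; this is a simplified stand-in, and the descriptors and energies below are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy descriptors and energies: the "cheap" method carries a systematic,
# descriptor-dependent error relative to the "accurate" reference
X = rng.normal(0, 1, (200, 5))                        # molecular descriptors
e_accurate = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.2])
e_cheap = e_accurate + 0.8 * X[:, 0] - 0.4            # systematic error

# learn a ridge-regression correction from descriptors to the residual
resid = e_accurate - e_cheap
lam = 1e-3
A = X.T @ X + lam * np.eye(X.shape[1])
w = np.linalg.solve(A, X.T @ (resid - resid.mean()))
b = resid.mean()

e_corrected = e_cheap + X @ w + b
mae_before = float(np.mean(np.abs(e_accurate - e_cheap)))
mae_after = float(np.mean(np.abs(e_accurate - e_corrected)))
```

The correction is only trustworthy for molecules whose descriptors resemble the training set, which is exactly the transferability concern the abstract raises.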

  3. Promises of Machine Learning Approaches in Prediction of Absorption of Compounds.

    PubMed

    Kumar, Rajnish; Sharma, Anju; Siddiqui, Mohammed Haris; Tiwari, Rajesh Kumar

    2018-01-01

    Machine learning (ML) is one of the fastest-developing techniques for the prediction and evaluation of important pharmacokinetic properties such as absorption, distribution, metabolism and excretion. The availability of a large number of robust validation techniques for prediction models devoted to pharmacokinetics has significantly enhanced the trust and authenticity in ML approaches. Over the last decade, a series of prediction models has been generated and used for rapid screening of compounds on the basis of absorption. Prediction of absorption of compounds using ML models has great potential across the pharmaceutical industry as a non-animal alternative for predicting absorption. However, these prediction models still have far to go before they earn confidence comparable to that placed in conventional experimental methods for estimating drug absorption. Some of the general concerns are the selection of appropriate ML methods and validation techniques, in addition to selecting relevant descriptors and authentic data sets for the generation of prediction models. The current review explores published ML models for the prediction of absorption using physicochemical properties as descriptors, along with their important conclusions. In addition, some critical challenges in the acceptance of ML models for absorption are also discussed.

  4. Machine Learning for Discriminating Quantum Measurement Trajectories and Improving Readout.

    PubMed

    Magesan, Easwar; Gambetta, Jay M; Córcoles, A D; Chow, Jerry M

    2015-05-22

    Current methods for classifying measurement trajectories in superconducting qubit systems produce fidelities systematically lower than those predicted by experimental parameters. Here, we place current classification methods within the framework of machine learning (ML) algorithms and improve on them by investigating more sophisticated ML approaches. We find that nonlinear algorithms and clustering methods produce significantly higher assignment fidelities that help close the gap to the fidelity possible under ideal noise conditions. Clustering methods group trajectories into natural subsets within the data, which allows for the diagnosis of systematic errors. We find large clusters in the data associated with T1 processes and show these are the main source of discrepancy between our experimental and ideal fidelities. These error diagnosis techniques help provide a path forward to improve qubit measurements.
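A clustering step like the one used above for error diagnosis can be sketched with a minimal k-means on toy integrated (I, Q) readout points, where a small third blob stands in for trajectories that suffered a T1 decay during readout. The data, blob positions, and the deterministic farthest-point initialization are illustrative choices, not the paper's setup.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal k-means with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        # next center: the point farthest from all chosen centers
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[int(d.argmax())])
    centers = np.array(centers)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# toy integrated (I, Q) readout points: ground- and excited-state blobs,
# plus a small third blob mimicking trajectories with a T1 decay event
rng = np.random.default_rng(3)
ground = rng.normal([0.0, 0.0], 0.2, (200, 2))
excited = rng.normal([3.0, 0.0], 0.2, (200, 2))
decayed = rng.normal([1.5, -2.0], 0.2, (30, 2))
X = np.vstack([ground, excited, decayed])

centers, labels = kmeans(X, 3)
sizes = sorted(np.bincount(labels, minlength=3))   # smallest cluster ≈ decay events
```

Finding an unexpected small cluster, as here, is the kind of signal that flags a systematic error channel such as T1 decay.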

  5. Synergies Between Quantum Mechanics and Machine Learning in Reaction Prediction.

    PubMed

    Sadowski, Peter; Fooshee, David; Subrahmanya, Niranjan; Baldi, Pierre

    2016-11-28

    Machine learning (ML) and quantum mechanical (QM) methods can be used in two-way synergy to build chemical reaction expert systems. The proposed ML approach identifies electron sources and sinks among reactants and then ranks all source-sink pairs. This addresses a bottleneck of QM calculations by providing a prioritized list of mechanistic reaction steps. QM modeling can then be used to compute the transition states and activation energies of the top-ranked reactions, providing additional or improved examples of ranked source-sink pairs. Retraining the ML model closes the loop, producing more accurate predictions from a larger training set. The approach is demonstrated in detail using a small set of organic radical reactions.

  6. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    NASA Astrophysics Data System (ADS)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if values of some parameters are known.

  7. Machine Learning Approaches for Predicting Radiation Therapy Outcomes: A Clinician's Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, John; Schwartz, Russell; Flickinger, John

    Radiation oncology has always been deeply rooted in modeling, from the early days of isoeffect curves to the contemporary Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC) initiative. In recent years, medical modeling for both prognostic and therapeutic purposes has exploded thanks to increasing availability of electronic data and genomics. One promising direction that medical modeling is moving toward is adopting the same machine learning methods used by companies such as Google and Facebook to combat disease. Broadly defined, machine learning is a branch of computer science that deals with making predictions from complex data through statistical models. These methods serve to uncover patterns in data and are actively used in areas such as speech recognition, handwriting recognition, face recognition, “spam” filtering (junk email), and targeted advertising. Although multiple radiation oncology research groups have shown the value of applied machine learning (ML), clinical adoption has been slow due to the high barrier to understanding these complex models by clinicians. Here, we present a review of the use of ML to predict radiation therapy outcomes from the clinician's point of view with the hope that it lowers the “barrier to entry” for those without formal training in ML. We begin by describing 7 principles that one should consider when evaluating (or creating) an ML model in radiation oncology. We next introduce 3 popular ML methods—logistic regression (LR), support vector machine (SVM), and artificial neural network (ANN)—and critique 3 seminal papers in the context of these principles. Although current studies are in exploratory stages, the overall methodology has progressively matured, and the field is ready for larger-scale further investigation.

  8. Overview of deep learning in medical imaging.

    PubMed

    Suzuki, Kenji

    2017-09-01

    The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. 
In our experience, MTANNs were substantially more efficient to develop, achieved higher performance, and required fewer training cases than CNNs. "Deep learning", or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades.

  9. Multivariate Analysis and Machine Learning in Cerebral Palsy Research

    PubMed Central

    Zhang, Jing

    2017-01-01

    Cerebral palsy (CP), a common pediatric movement disorder, causes the most severe physical disability in children. Early diagnosis in high-risk infants is critical for early intervention and possible early recovery. In recent years, multivariate analytic and machine learning (ML) approaches have been increasingly used in CP research. This paper aims to identify such multivariate studies and provide an overview of this relatively young field. Studies reviewed in this paper have demonstrated that multivariate analytic methods are useful in identification of risk factors, detection of CP, movement assessment for CP prediction, and outcome assessment, and ML approaches have made it possible to automatically identify movement impairments in high-risk infants. In addition, outcome predictors for surgical treatments have been identified by multivariate outcome studies. To make the multivariate and ML approaches useful in clinical settings, further research with large samples is needed to verify and improve these multivariate methods in risk factor identification, CP detection, movement assessment, and outcome evaluation or prediction. As multivariate analysis, ML and data processing technologies advance in the era of Big Data of this century, it is expected that multivariate analysis and ML will play a bigger role in improving the diagnosis and treatment of CP to reduce mortality and morbidity rates, and enhance patient care for children with CP. PMID:29312134

  10. Multivariate Analysis and Machine Learning in Cerebral Palsy Research.

    PubMed

    Zhang, Jing

    2017-01-01

    Cerebral palsy (CP), a common pediatric movement disorder, causes the most severe physical disability in children. Early diagnosis in high-risk infants is critical for early intervention and possible early recovery. In recent years, multivariate analytic and machine learning (ML) approaches have been increasingly used in CP research. This paper aims to identify such multivariate studies and provide an overview of this relatively young field. Studies reviewed in this paper have demonstrated that multivariate analytic methods are useful in identification of risk factors, detection of CP, movement assessment for CP prediction, and outcome assessment, and ML approaches have made it possible to automatically identify movement impairments in high-risk infants. In addition, outcome predictors for surgical treatments have been identified by multivariate outcome studies. To make the multivariate and ML approaches useful in clinical settings, further research with large samples is needed to verify and improve these multivariate methods in risk factor identification, CP detection, movement assessment, and outcome evaluation or prediction. As multivariate analysis, ML and data processing technologies advance in the era of Big Data of this century, it is expected that multivariate analysis and ML will play a bigger role in improving the diagnosis and treatment of CP to reduce mortality and morbidity rates, and enhance patient care for children with CP.

  11. Interactive machine learning for health informatics: when do we need the human-in-the-loop?

    PubMed

    Holzinger, Andreas

    2016-06-01

    Machine learning (ML) is the fastest growing field in computer science, and health informatics is among its greatest challenges. The goal of ML is to develop algorithms which can learn and improve over time and can be used for predictions. Most ML researchers concentrate on automatic machine learning (aML), where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from big data with many training sets. However, in the health domain we are sometimes confronted with a small number of data sets or rare events, where aML approaches suffer from insufficient training samples. Here interactive machine learning (iML) may be of help, having its roots in reinforcement learning, preference learning, and active learning. The term iML is not yet in wide use, so we define it as "algorithms that can interact with agents and can optimize their learning behavior through these interactions, where the agents can also be human." This "human-in-the-loop" can be beneficial in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization of health data, where human expertise can help to reduce an exponential search space through heuristic selection of samples. Therefore, what would otherwise be an NP-hard problem reduces greatly in complexity through the input and assistance of a human agent involved in the learning phase.
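The human-in-the-loop idea can be sketched as an uncertainty-sampling active-learning loop, where a simple oracle function stands in for the human expert labeling the queried samples. The 1-D threshold classifier, the oracle, and all data below are invented for illustration.

```python
import random

def train_threshold(labeled):
    """Fit a 1-D threshold classifier: midpoint between the class means."""
    xs0 = [x for x, y in labeled if y == 0]
    xs1 = [x for x, y in labeled if y == 1]
    return (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2

def most_uncertain(pool, threshold):
    """Query the unlabeled point closest to the decision boundary."""
    return min(pool, key=lambda x: abs(x - threshold))

random.seed(4)
oracle = lambda x: int(x > 0.6)            # stands in for the human expert
pool = [random.random() for _ in range(200)]

# seed with one example of each class, then query the "expert" iteratively
labeled = [(0.1, 0), (0.9, 1)]
for _ in range(10):
    t = train_threshold(labeled)
    x = most_uncertain(pool, t)
    pool.remove(x)
    labeled.append((x, oracle(x)))

final_t = train_threshold(labeled)         # drifts toward the true boundary 0.6
```

The point of the loop is label efficiency: a dozen well-chosen queries pin down the boundary that random labeling would need far more samples to locate, which matters precisely in the small-sample health settings described above.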

  12. Comparison of robotics, functional electrical stimulation, and motor learning methods for treatment of persistent upper extremity dysfunction after stroke: a randomized controlled trial.

    PubMed

    McCabe, Jessica; Monkiewicz, Michelle; Holcomb, John; Pundik, Svetlana; Daly, Janis J

    2015-06-01

    To compare response to upper-limb treatment using robotics plus motor learning (ML) versus functional electrical stimulation (FES) plus ML versus ML alone, according to a measure of complex functional everyday tasks for chronic, severely impaired stroke survivors. Single-blind, randomized trial. Medical center. Enrolled subjects (N=39) were >1 year post single stroke (attrition rate=10%; 35 completed the study). All groups received treatment 5d/wk for 5h/d (60 sessions), with unique treatment as follows: ML alone (n=11) (5h/d partial- and whole-task practice of complex functional tasks), robotics plus ML (n=12) (3.5h/d of ML and 1.5h/d of shoulder/elbow robotics), and FES plus ML (n=12) (3.5h/d of ML and 1.5h/d of FES wrist/hand coordination training). Primary measure: Arm Motor Ability Test (AMAT), with 13 complex functional tasks; secondary measure: upper-limb Fugl-Meyer coordination scale (FM). There was no significant difference found in treatment response across groups (AMAT: P≥.584; FM coordination: P≥.590). All 3 treatment groups demonstrated clinically and statistically significant improvement in response to treatment (AMAT and FM coordination: P≤.009). A group treatment paradigm of 1:3 (therapist/patient) ratio proved feasible for provision of the intensive treatment. No adverse effects were reported. Severely impaired stroke survivors with persistent (>1y) upper-extremity dysfunction can make clinically and statistically significant gains in coordination and functional task performance in response to robotics plus ML, FES plus ML, and ML alone in an intensive and long-duration intervention; no group differences were found. Additional studies are warranted to determine the effectiveness of these methods in the clinical setting.

  13. Dynamical analysis of contrastive divergence learning: Restricted Boltzmann machines with Gaussian visible units.

    PubMed

    Karakida, Ryo; Okada, Masato; Amari, Shun-Ichi

    2016-07-01

    The restricted Boltzmann machine (RBM) is an essential constituent of deep learning, but it is hard to train by using maximum likelihood (ML) learning, which minimizes the Kullback-Leibler (KL) divergence. Instead, contrastive divergence (CD) learning has been developed as an approximation of ML learning and widely used in practice. To clarify the performance of CD learning, in this paper, we analytically derive the fixed points where the ML and CDn learning rules converge in two types of RBMs: one with Gaussian visible and Gaussian hidden units and the other with Gaussian visible and Bernoulli hidden units. In addition, we analyze the stability of the fixed points. As a result, we find that the stable points of the CDn learning rule coincide with those of the ML learning rule in a Gaussian-Gaussian RBM. We also reveal that larger principal components of the input data are extracted at the stable points. Moreover, in a Gaussian-Bernoulli RBM, we find that both ML and CDn learning can extract independent components at one of the stable points. Our analysis demonstrates that the same feature components as those extracted by ML learning are extracted simply by performing CD1 learning. Expanding this study should elucidate the specific solutions obtained by CD learning in other types of RBMs or in deep networks.
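A CD-1 update of the kind analyzed above can be sketched for a Gaussian-visible/Bernoulli-hidden RBM (biases omitted and visible variance fixed at one for brevity). On toy data with one dominant variance direction, the weights grow along that direction, loosely consistent with the principal-component behavior described in the abstract. This is an illustrative sketch under simplifying assumptions, not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, v0, lr=0.01):
    """One CD-1 update for a Gaussian-visible / Bernoulli-hidden RBM
    (zero biases, unit visible variance)."""
    # positive phase: hidden activations given the data
    ph0 = sigmoid(v0 @ W)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # negative phase: one Gibbs step back to the visibles and up again
    v1 = h0 @ W.T + rng.normal(size=v0.shape)
    ph1 = sigmoid(v1 @ W)
    # contrastive-divergence gradient estimate
    grad = (v0.T @ ph0 - v1.T @ ph1) / len(v0)
    return W + lr * grad

# toy data with one dominant variance direction (the first visible unit)
v = rng.normal(size=(500, 4)) * np.array([3.0, 1.0, 1.0, 1.0])
W = rng.normal(0, 0.01, (4, 2))
for _ in range(200):
    W = cd1_step(W, v)

norms = np.abs(W).sum(axis=1)   # weight mass attached to each visible unit
```

After training, the weight mass concentrates on the high-variance visible unit, which is the kind of feature-extraction behavior the fixed-point analysis characterizes exactly.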

  14. Needs, Pains, and Motivations in Autonomous Agents.

    PubMed

    Starzyk, Janusz A; Graham, James; Puzio, Leszek

    This paper presents the development of a motivated learning (ML) agent with symbolic I/O. Our earlier work on the ML agent was enhanced, giving it autonomy for interaction with other agents. Specifically, we equipped the agent with drives and pains that establish its motivations to learn how to respond to desired and undesired events and create related abstract goals. The purpose of this paper is to explore the autonomous development of motivations and memory in agents within a simulated environment. The ML agent has been implemented in a virtual environment created within the NeoAxis game engine. Additionally, to illustrate the benefits of an ML-based agent, we compared the performance of our algorithm against various reinforcement learning (RL) algorithms in a dynamic test scenario, and demonstrated that our ML agent learns better than any of the tested RL agents.

  15. A Machine Learning Ensemble Classifier for Early Prediction of Diabetic Retinopathy.

    PubMed

    S K, Somasundaram; P, Alli

    2017-11-09

    Diabetic retinopathy (DR), a retinal vascular disease, is the main complication of diabetes and can lead to blindness. Regular screening for early DR detection is a labor- and resource-intensive task, so automatic, computational detection of DR is an attractive solution. An automatic method can determine the presence of abnormalities in fundus images (FI) more reliably, but classification performance has been poor. Recently, a few research works have analyzed texture-discrimination capacity in FI to distinguish healthy images; however, the feature extraction (FE) process performed poorly due to high dimensionality. Therefore, a Machine Learning Bagging Ensemble Classifier (ML-BEC) is designed to identify retinal features for DR diagnosis and early detection. The ML-BEC method comprises two stages. The first stage extracts candidate objects from Retinal Images (RI). The candidate objects, or features, for DR diagnosis include blood vessels, optic nerve, neural tissue, neuroretinal rim, optic disc size, thickness, and variance. These features are initially extracted by applying a machine learning technique called t-distributed Stochastic Neighbor Embedding (t-SNE). t-SNE generates a probability distribution across the high-dimensional images, separating them into similar and dissimilar pairs, and then describes a similar probability distribution across the points in a low-dimensional map. This minimizes the Kullback-Leibler divergence between the two distributions with respect to the locations of the points on the map. The second stage applies ensemble classifiers to the extracted features for accurate analysis of digital FI using machine learning. In this stage, an automatic DR screening system using a Bagging Ensemble Classifier (BEC) is investigated. Through its voting process, bagging in ML-BEC minimizes the error due to the variance of the base classifier. With publicly available retinal image databases, our classifier is trained with 25% of the RI. Results show that the ensemble classifier can achieve better classification accuracy (CA) than single classification models. Empirical experiments suggest that the machine learning-based ensemble classifier is efficient for further reducing DR classification time (CT).
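The bagging-with-voting stage can be sketched in miniature: weak learners (here 1-D decision stumps, an invented stand-in) are each trained on a bootstrap resample of the data and combined by majority vote, which is how bagging reduces the variance of the base classifier.

```python
import random

def train_stump(sample):
    """Fit a 1-D decision stump: best threshold (and orientation) on a sample."""
    best, best_acc = (0.5, 0), 0.0
    for t, _ in sample:
        for flip in (0, 1):
            acc = sum((int(x > t) ^ flip) == y for x, y in sample) / len(sample)
            if acc > best_acc:
                best_acc, best = acc, (t, flip)
    return best

def stump_predict(stump, x):
    t, flip = stump
    return int(x > t) ^ flip

random.seed(8)
data = [(random.gauss(y, 0.6), y) for y in [0, 1] * 100]

# bagging: each stump sees a bootstrap resample of the data
stumps = [train_stump(random.choices(data, k=len(data))) for _ in range(15)]

def bagged_predict(x):
    votes = sum(stump_predict(s, x) for s in stumps)
    return int(votes > len(stumps) / 2)     # majority vote

acc = sum(bagged_predict(x) == y for x, y in data) / len(data)
```

In ML-BEC the same pattern is applied to t-SNE-derived retinal features rather than a synthetic 1-D variable.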

  16. Brainstorming: weighted voting prediction of inhibitors for protein targets.

    PubMed

    Plewczynski, Dariusz

    2011-09-01

    The "Brainstorming" approach presented in this paper is a weighted voting method that can improve the quality of predictions generated by several machine learning (ML) methods. First, an ensemble of heterogeneous ML algorithms is trained on available experimental data, then all solutions are gathered and a consensus is built between them. The final prediction is performed using a voting procedure, whereby the vote of each method is weighted according to a quality coefficient calculated using multivariable linear regression (MLR). The MLR optimization procedure is very fast, therefore no additional computational cost is introduced by using this jury approach. Here, brainstorming is applied to selecting actives from large collections of compounds relating to five diverse biological targets of medicinal interest, namely HIV-reverse transcriptase, cyclooxygenase-2, dihydrofolate reductase, estrogen receptor, and thrombin. The MDL Drug Data Report (MDDR) database was used for selecting known inhibitors for these protein targets, and experimental data was then used to train a set of machine learning methods. The benchmark dataset (available at http://bio.icm.edu.pl/∼darman/chemoinfo/benchmark.tar.gz ) can be used for further testing of various clustering and machine learning methods when predicting the biological activity of compounds. Depending on the protein target, the overall recall value is raised by at least 20% in comparison to any single machine learning method (including ensemble methods like random forest) and unweighted simple majority voting procedures.

  17. Stable Atlas-based Mapped Prior (STAMP) machine-learning segmentation for multicenter large-scale MRI data.

    PubMed

    Kim, Eun Young; Magnotta, Vincent A; Liu, Dawei; Johnson, Hans J

    2014-09-01

    Machine learning (ML)-based segmentation methods are a common technique in the medical image processing field. Although numerous research groups have investigated ML-based segmentation frameworks, questions remain about the performance variability introduced by the choice of two key components: the ML algorithm and the intensity normalization. This investigation reveals that the choice of those elements plays a major part in determining segmentation accuracy and generalizability. The approach we have used in this study aims to evaluate the relative benefits of the two elements within a subcortical MRI segmentation framework. Experiments were conducted to contrast eight machine-learning algorithm configurations and 11 normalization strategies for our brain MR segmentation framework. For the intensity normalization, a Stable Atlas-based Mapped Prior (STAMP) was utilized to take better account of contrast along boundaries of structures. Comparing eight machine learning algorithms on down-sampled segmentation MR data, a significant improvement was obtained using ensemble-based ML algorithms (i.e., random forest) or ANN algorithms. Further investigation between these two algorithms also revealed that the random forest results provided exceptionally good agreement with manual delineations by experts. Additional experiments showed that STAMP-based intensity normalization also improved the robustness of segmentation for multicenter data sets. The constructed framework obtained good multicenter reliability and was successfully applied on a large multicenter MR data set (n>3000). Less than 10% of automated segmentations were recommended for minimal expert intervention. These results demonstrate the feasibility of using ML-based segmentation tools for processing large amounts of multicenter MR images. We demonstrated dramatically different result profiles in segmentation accuracy according to the choice of ML algorithm and intensity normalization.

  18. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data

    NASA Astrophysics Data System (ADS)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-01

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to minimize the least-squares difference between the measured and calculated values over time, which may encounter problems such as overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals directly with noisy data rather than trying to smooth the noise in the image. Also, owing to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which can find a balance between fitting the historical data and the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  19. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data.

    PubMed

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-07

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to minimize the least-squares difference between the measured and calculated values over time, which may encounter problems such as overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals directly with noisy data rather than trying to smooth the noise in the image. Also, owing to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which can find a balance between fitting the historical data and the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  20. Automating Construction of Machine Learning Models With Clinical Big Data: Proposal Rationale and Methods.

    PubMed

    Luo, Gang; Stone, Bryan L; Johnson, Michael D; Tarczy-Hornoch, Peter; Wilcox, Adam B; Mooney, Sean D; Sheng, Xiaoming; Haug, Peter J; Nkoy, Flory L

    2017-08-29

    To improve health outcomes and cut health care costs, we often need to conduct prediction/classification using large clinical datasets (aka, clinical big data), for example, to identify high-risk patients for preventive interventions. Machine learning has been proposed as a key technology for doing this. Machine learning has won most data science competitions and could support many clinical activities, yet only 15% of hospitals use it for even limited purposes. Despite familiarity with data, health care researchers often lack machine learning expertise to directly use clinical big data, creating a hurdle in realizing value from their data. Health care researchers can work with data scientists with deep machine learning knowledge, but it takes time and effort for both parties to communicate effectively. Facing a shortage in the United States of data scientists and hiring competition from companies with deep pockets, health care systems have difficulty recruiting data scientists. Building and generalizing a machine learning model often requires hundreds to thousands of manual iterations by data scientists to select the following: (1) hyper-parameter values and complex algorithms that greatly affect model accuracy and (2) operators and periods for temporally aggregating clinical attributes (eg, whether a patient's weight kept rising in the past year). This process becomes infeasible with limited budgets. This study's goal is to enable health care researchers to directly use clinical big data, make machine learning feasible with limited budgets and data scientist resources, and realize value from data. 
This study will allow us to achieve the following: (1) finish developing the new software, Automated Machine Learning (Auto-ML), to automate model selection for machine learning with clinical big data and validate Auto-ML on seven benchmark modeling problems of clinical importance; (2) apply Auto-ML and novel methodology to two new modeling problems crucial for care management allocation and pilot one model with care managers; and (3) perform simulations to estimate the impact of adopting Auto-ML on US patient outcomes. We are currently writing Auto-ML's design document. We intend to finish our study by around the year 2022. Auto-ML will generalize to various clinical prediction/classification problems. With minimal help from data scientists, health care researchers can use Auto-ML to quickly build high-quality models. This will boost wider use of machine learning in health care and improve patient outcomes. ©Gang Luo, Bryan L Stone, Michael D Johnson, Peter Tarczy-Hornoch, Adam B Wilcox, Sean D Mooney, Xiaoming Sheng, Peter J Haug, Flory L Nkoy. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 29.08.2017.
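
    The temporal-aggregation step described above (e.g., whether a patient's weight kept rising in the past year) can be sketched as a small set of aggregation operators applied to a timestamped series. The operator names and the example series are illustrative, not Auto-ML's actual API.

```python
import numpy as np

# Hypothetical aggregation operators over one clinical attribute
# (here: weight readings from the past year, oldest first).
def aggregate(values):
    values = np.asarray(values, dtype=float)
    x = np.arange(values.size)
    slope = np.polyfit(x, values, 1)[0]        # per-reading trend
    return {
        "mean": float(values.mean()),
        "max": float(values.max()),
        "slope": float(slope),
        "kept_rising": bool(np.all(np.diff(values) > 0)),
    }

weights = [70.2, 71.0, 71.8, 72.5, 73.9]       # illustrative readings
features = aggregate(weights)
print(features["kept_rising"])                  # True for this series
```

    Automating the choice of such operators and their time windows, alongside hyper-parameter values, is what makes the search space so large.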

  1. Materials Screening for the Discovery of New Half-Heuslers: Machine Learning versus ab Initio Methods.

    PubMed

    Legrain, Fleur; Carrete, Jesús; van Roekeghem, Ambroise; Madsen, Georg K H; Mingo, Natalio

    2018-01-18

    Machine learning (ML) is increasingly becoming a helpful tool in the search for novel functional compounds. Here we use classification via random forests to predict the stability of half-Heusler (HH) compounds, using only experimentally reported compounds as a training set. Cross-validation yields an excellent agreement between the fraction of compounds classified as stable and the actual fraction of truly stable compounds in the ICSD. The ML model is then employed to screen 71 178 different 1:1:1 compositions, yielding 481 likely stable candidates. The predicted stability of HH compounds from three previous high-throughput ab initio studies is critically analyzed from the perspective of the alternative ML approach. The incomplete consistency among the three separate ab initio studies and between them and the ML predictions suggests that additional factors beyond those considered by ab initio phase stability calculations might be determinant to the stability of the compounds. Such factors can include configurational entropies and quasiharmonic contributions.
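
    As a generic illustration of this classification setup (random forests validated by cross-validation), the sketch below uses scikit-learn on synthetic descriptors. The features and stability labels are invented stand-ins, not the ICSD training data or the paper's compositional descriptors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Toy stand-in for compositional descriptors of 1:1:1 candidates;
# real features (electronegativities, radii, valence counts, ...)
# would be derived from the composition. Labels: 1 = stable.
rng = np.random.default_rng(42)
X = rng.normal(size=(400, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic rule

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(round(float(scores.mean()), 2))
```

    With a trained forest, `clf.predict_proba` over a large grid of candidate compositions would yield the screening step the abstract describes.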

  2. Coupling Matched Molecular Pairs with Machine Learning for Virtual Compound Optimization.

    PubMed

    Turk, Samo; Merget, Benjamin; Rippmann, Friedrich; Fulle, Simone

    2017-12-26

    Matched molecular pair (MMP) analyses are widely used in compound optimization projects to gain insights into structure-activity relationships (SAR). The analysis is traditionally done via statistical methods but can also be employed together with machine learning (ML) approaches to extrapolate to novel compounds. The MMP/ML method introduced here combines a fragment-based MMP implementation with different machine learning methods to obtain automated SAR decomposition and prediction. To test the prediction capabilities and model transferability, two different compound optimization scenarios were designed: (1) "new fragments", which occurs when exploring new fragments for a defined compound series, and (2) "new static core and transformations", which resembles, for instance, the identification of a new compound series. Very good results were achieved by all employed machine learning methods, especially for the new fragments case, but overall deep neural network models performed best, allowing reliable predictions also for the new static core and transformations scenario, where comprehensive SAR knowledge of the compound series is missing. Furthermore, we show that models trained on all available data have a higher generalizability compared to models trained on focused series and can extend beyond the chemical space covered in the training data. Thus, coupling MMP with deep neural networks provides a promising approach to make high-quality predictions on various data sets and in different compound optimization scenarios.
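
    The core MMP idea, recording the average activity change associated with a fragment transformation and extrapolating it to a new series, can be sketched in a few lines. The cores, fragments, and activity values below are invented for illustration; a real implementation would fragment molecules (e.g., with a cheminformatics toolkit) rather than use string labels.

```python
from collections import defaultdict

# Toy matched-molecular-pair table: (core, fragment) -> activity.
activities = {
    ("coreA", "H"): 5.0, ("coreA", "Cl"): 5.8,
    ("coreB", "H"): 6.1, ("coreB", "Cl"): 6.8,
    ("coreC", "H"): 4.9,
}

# Average activity change for each fragment transformation,
# computed over all pairs that share a core
deltas = defaultdict(list)
for (core, frag), act in activities.items():
    for (core2, frag2), act2 in activities.items():
        if core == core2 and frag != frag2:
            deltas[(frag, frag2)].append(act2 - act)
avg = {k: sum(v) / len(v) for k, v in deltas.items()}

# Extrapolate the H -> Cl transformation to a series lacking it
pred = activities[("coreC", "H")] + avg[("H", "Cl")]
print(round(pred, 2))
```

    The ML layer in the paper generalizes this table-lookup step so that predictions also cover transformations and cores not seen together in the data.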

  3. Automatic Selection of Suitable Sentences for Language Learning Exercises

    ERIC Educational Resources Information Center

    Pilán, Ildikó; Volodina, Elena; Johansson, Richard

    2013-01-01

    In our study we investigated second and foreign language (L2) sentence readability, an area little explored so far in the case of several languages, including Swedish. The outcome of our research consists of two methods for sentence selection from native language corpora based on Natural Language Processing (NLP) and machine learning (ML)…

  4. Separation of pulsar signals from noise using supervised machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Bethapudi, S.; Desai, S.

    2018-04-01

    We evaluate the performance of four different machine learning (ML) algorithms: an Artificial Neural Network Multi-Layer Perceptron (ANN MLP), Adaboost, Gradient Boosting Classifier (GBC), and XGBoost, for the separation of pulsars from radio frequency interference (RFI) and other sources of noise, using a dataset obtained from the post-processing of a pulsar search pipeline. This dataset was previously used for the cross-validation of the SPINN-based machine learning engine, obtained from the reprocessing of the HTRU-S survey data (Morello et al., 2014). We have used the Synthetic Minority Over-sampling Technique (SMOTE) to deal with the high class imbalance in the dataset. We report a variety of quality scores from all four of these algorithms on both the non-SMOTE and SMOTE datasets. For all the above ML methods, we report high accuracy and G-mean for both the non-SMOTE and SMOTE cases. We study the feature importances using Adaboost, GBC, and XGBoost, and also from the minimum Redundancy Maximum Relevance approach, to report algorithm-agnostic feature ranking. From these methods, we find the signal-to-noise ratio of the folded profile to be the best feature. We find that all the ML algorithms report FPRs about an order of magnitude lower than the corresponding FPRs obtained in Morello et al. (2014), for the same recall value.
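
    SMOTE itself is easy to sketch: new minority-class samples are synthesized by interpolating between existing minority samples and their nearest minority neighbours. The toy below (plain NumPy, invented data) shows the idea; production work would typically use a library implementation such as imbalanced-learn.

```python
import numpy as np

def smote(X_min, n_new, k=3, seed=0):
    """Minimal SMOTE: synthesize n_new minority samples by linear
    interpolation between each sample and one of its k nearest
    minority-class neighbours."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]          # skip the point itself
        j = rng.choice(nbrs)
        gap = rng.random()
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)

rng = np.random.default_rng(1)
minority = rng.normal(loc=2.0, size=(10, 4))   # e.g. true pulsar candidates
synthetic = smote(minority, n_new=40)
print(synthetic.shape)
```

    Because each synthetic point lies on a segment between two minority samples, the oversampled set stays inside the minority class's region of feature space.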

  5. Testing a machine-learning algorithm to predict the persistence and severity of major depressive disorder from baseline self-reports.

    PubMed

    Kessler, R C; van Loo, H M; Wardenaar, K J; Bossarte, R M; Brenner, L A; Cai, T; Ebert, D D; Hwang, I; Li, J; de Jonge, P; Nierenberg, A A; Petukhova, M V; Rosellini, A J; Sampson, N A; Schoevers, R A; Wilcox, M A; Zaslavsky, A M

    2016-10-01

    Heterogeneity of major depressive disorder (MDD) illness course complicates clinical decision-making. Although efforts to use symptom profiles or biomarkers to develop clinically useful prognostic subtypes have had limited success, a recent report showed that machine-learning (ML) models developed from self-reports about incident episode characteristics and comorbidities among respondents with lifetime MDD in the World Health Organization World Mental Health (WMH) Surveys predicted MDD persistence, chronicity and severity with good accuracy. We report results of model validation in an independent prospective national household sample of 1056 respondents with lifetime MDD at baseline. The WMH ML models were applied to these baseline data to generate predicted outcome scores that were compared with observed scores assessed 10-12 years after baseline. ML model prediction accuracy was also compared with that of conventional logistic regression models. Area under the receiver operating characteristic curve based on ML (0.63 for high chronicity and 0.71-0.76 for the other prospective outcomes) was consistently higher than for the logistic models (0.62-0.70) despite the latter models including more predictors. A total of 34.6-38.1% of respondents with subsequent high persistence chronicity and 40.8-55.8% with the severity indicators were in the top 20% of the baseline ML-predicted risk distribution, while only 0.9% of respondents with subsequent hospitalizations and 1.5% with suicide attempts were in the lowest 20% of the ML-predicted risk distribution. These results confirm that clinically useful MDD risk-stratification models can be generated from baseline patient self-reports and that ML methods improve on conventional methods in developing such models.
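
    The ML-versus-logistic-regression AUC comparison reported above can be reproduced in miniature with scikit-learn. The dataset here is synthetic; the WMH survey data and models are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for baseline self-report predictors
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

aucs = {}
for model in (LogisticRegression(max_iter=1000),
              GradientBoostingClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    name = type(model).__name__
    # AUC on held-out data, as in the prospective validation above
    aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(name, round(aucs[name], 3))
```

    Whether the ML model wins depends on how nonlinear the true risk surface is, which is exactly what the validation study tests.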

  6. Testing a machine-learning algorithm to predict the persistence and severity of major depressive disorder from baseline self-reports

    PubMed Central

    Kessler, Ronald C.; van Loo, Hanna M.; Wardenaar, Klaas J.; Bossarte, Robert M.; Brenner, Lisa A.; Cai, Tianxi; Ebert, David Daniel; Hwang, Irving; Li, Junlong; de Jonge, Peter; Nierenberg, Andrew A.; Petukhova, Maria V.; Rosellini, Anthony J.; Sampson, Nancy A.; Schoevers, Robert A.; Wilcox, Marsha A.; Zaslavsky, Alan M.

    2015-01-01

    Heterogeneity of major depressive disorder (MDD) illness course complicates clinical decision-making. While efforts to use symptom profiles or biomarkers to develop clinically useful prognostic subtypes have had limited success, a recent report showed that machine learning (ML) models developed from self-reports about incident episode characteristics and comorbidities among respondents with lifetime MDD in the World Health Organization World Mental Health (WMH) Surveys predicted MDD persistence, chronicity, and severity with good accuracy. We report results of model validation in an independent prospective national household sample of 1,056 respondents with lifetime MDD at baseline. The WMH ML models were applied to these baseline data to generate predicted outcome scores that were compared to observed scores assessed 10–12 years after baseline. ML model prediction accuracy was also compared to that of conventional logistic regression models. Area under the receiver operating characteristic curve (AUC) based on ML (.63 for high chronicity and .71–.76 for the other prospective outcomes) was consistently higher than for the logistic models (.62–.70) despite the latter models including more predictors. 34.6–38.1% of respondents with subsequent high persistence-chronicity and 40.8–55.8% with the severity indicators were in the top 20% of the baseline ML predicted risk distribution, while only 0.9% of respondents with subsequent hospitalizations and 1.5% with suicide attempts were in the lowest 20% of the ML predicted risk distribution. These results confirm that clinically useful MDD risk stratification models can be generated from baseline patient self-reports and that ML methods improve on conventional methods in developing such models. PMID:26728563

  7. A new scheme for strain typing of methicillin-resistant Staphylococcus aureus on the basis of matrix-assisted laser desorption ionization time-of-flight mass spectrometry by using machine learning approach.

    PubMed

    Wang, Hsin-Yao; Lee, Tzong-Yi; Tseng, Yi-Ju; Liu, Tsui-Ping; Huang, Kai-Yao; Chang, Yung-Ta; Chen, Chun-Hsien; Lu, Jang-Jih

    2018-01-01

    Methicillin-resistant Staphylococcus aureus (MRSA), one of the most important clinical pathogens, causes increasing morbidity and mortality worldwide. Rapid and accurate strain typing of bacteria would facilitate epidemiological investigation and infection control in near real time. Matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry is a rapid and cost-effective tool for presumptive strain typing. Machine learning (ML) is a promising approach for constructing robust predictive models for strain typing based on MALDI-TOF spectra. In this study, a strategy of building templates of specific types was used to facilitate generating predictive models of MRSA strain typing through various ML methods. The strain types of the isolates were determined through multilocus sequence typing (MLST). The area under the receiver operating characteristic curve (AUC) and the predictive accuracy of the models were compared. ST5, ST59, and ST239 were the major MLST types, and ST45 was the minor type. For binary classification, the AUC values of the various ML methods ranged from 0.76 to 0.99 for the ST5, ST59, and ST239 types. In multiclass classification, the predictive accuracy of all generated models exceeded 0.83. This study demonstrates that ML methods can serve as a cost-effective and promising tool that provides preliminary strain typing information about major MRSA lineages on the basis of MALDI-TOF spectra.

  8. Machine learning algorithms for the prediction of hERG and CYP450 binding in drug development.

    PubMed

    Klon, Anthony E

    2010-07-01

    The cost of developing new drugs is estimated at approximately $1 billion; the withdrawal of a marketed compound due to toxicity can result in serious financial loss for a pharmaceutical company. There has been a greater interest in the development of in silico tools that can identify compounds with metabolic liabilities before they are brought to market. The two largest classes of machine learning (ML) models, which will be discussed in this review, have been developed to predict binding to the human ether-a-go-go related gene (hERG) ion channel protein and the various CYP isoforms. Being able to identify potentially toxic compounds before they are made would greatly reduce the number of compound failures and the costs associated with drug development. This review summarizes the state of modeling hERG and CYP binding towards this goal since 2003 using ML algorithms. A wide variety of ML algorithms that are comparable in their overall performance are available. These ML methods may be applied regularly in discovery projects to flag compounds with potential metabolic liabilities.

  9. AstroML: "better, faster, cheaper" towards state-of-the-art data mining and machine learning

    NASA Astrophysics Data System (ADS)

    Ivezic, Zeljko; Connolly, Andrew J.; Vanderplas, Jacob

    2015-01-01

    We present AstroML, a Python module for machine learning and data mining built on numpy, scipy, scikit-learn, matplotlib, and astropy, and distributed under an open license. AstroML contains a growing library of statistical and machine learning routines for analyzing astronomical data in Python, loaders for several open astronomical datasets (such as SDSS and other recent major surveys), and a large suite of examples of analyzing and visualizing astronomical datasets. AstroML is especially suitable for introducing undergraduate students to numerical research projects and for graduate students to rapidly undertake cutting-edge research. The long-term goal of astroML is to provide a community repository for fast Python implementations of common tools and routines used for statistical data analysis in astronomy and astrophysics (see http://www.astroml.org).

  10. Toward Intelligent Machine Learning Algorithms

    DTIC Science & Technology

    1988-05-01

    Machine learning is recognized as a tool for improving the performance of many kinds of systems, yet most machine learning systems themselves are not...directed systems, and with the addition of a knowledge store for organizing and maintaining knowledge to assist learning, a learning machine learning (L-ML) algorithm is possible. The necessary components of L-ML systems are presented along with several case descriptions of existing machine learning systems

  11. Quantum machine learning: a classical perspective

    NASA Astrophysics Data System (ADS)

    Ciliberto, Carlo; Herbster, Mark; Ialongo, Alessandro Davide; Pontil, Massimiliano; Rocchetto, Andrea; Severini, Simone; Wossnig, Leonard

    2018-01-01

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed.

  12. Quantum machine learning: a classical perspective

    PubMed Central

    Ciliberto, Carlo; Herbster, Mark; Ialongo, Alessandro Davide; Pontil, Massimiliano; Severini, Simone; Wossnig, Leonard

    2018-01-01

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed. PMID:29434508

  13. Quantum machine learning: a classical perspective.

    PubMed

    Ciliberto, Carlo; Herbster, Mark; Ialongo, Alessandro Davide; Pontil, Massimiliano; Rocchetto, Andrea; Severini, Simone; Wossnig, Leonard

    2018-01-01

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed.

  14. Learning Physics-based Models in Hydrology under the Framework of Generative Adversarial Networks

    NASA Astrophysics Data System (ADS)

    Karpatne, A.; Kumar, V.

    2017-12-01

    Generative adversarial networks (GANs), that have been highly successful in a number of applications involving large volumes of labeled and unlabeled data such as computer vision, offer huge potential for modeling the dynamics of physical processes that have been traditionally studied using simulations of physics-based models. While conventional physics-based models use labeled samples of input/output variables for model calibration (estimating the right parametric forms of relationships between variables) or data assimilation (identifying the most likely sequence of system states in dynamical systems), there is a greater opportunity to explore the full power of machine learning (ML) methods (e.g., GANs) for studying physical processes currently suffering from large knowledge gaps, e.g. ground-water flow. However, success in this endeavor requires a principled way of combining the strengths of ML methods with physics-based numerical models that are founded on a wealth of scientific knowledge. This is especially important in scientific domains like hydrology where the number of data samples is small (relative to Internet-scale applications such as image recognition, where machine learning methods have found great success), and the physical relationships are complex (high-dimensional) and non-stationary. We will present a series of methods for guiding the learning of GANs using physics-based models, e.g., by using the outputs of physics-based models as input data to the generator-learner framework, and by using physics-based models as generators trained using validation data in the adversarial learning framework. These methods are being developed under the broad paradigm of theory-guided data science that we are developing to integrate scientific knowledge with data science methods for accelerating scientific discovery.

  15. Non-Contact Heart Rate and Blood Pressure Estimations from Video Analysis and Machine Learning Modelling Applied to Food Sensory Responses: A Case Study for Chocolate.

    PubMed

    Gonzalez Viejo, Claudia; Fuentes, Sigfredo; Torrico, Damir D; Dunshea, Frank R

    2018-06-03

    Traditional methods to assess heart rate (HR) and blood pressure (BP) are intrusive and can affect results in sensory analysis of food as participants are aware of the sensors. This paper aims to validate a non-contact method to measure HR using the photoplethysmography (PPG) technique and to develop models to predict the real HR and BP based on raw video analysis (RVA) with an example application in chocolate consumption using machine learning (ML). The RVA used a computer vision algorithm based on luminosity changes on the different RGB color channels using three face-regions (forehead and both cheeks). To validate the proposed method and ML models, a home oscillometric monitor and a finger sensor were used. Results showed high correlations with the G color channel (R² = 0.83). Two ML models were developed using three face-regions: (i) Model 1 to predict HR and BP using the RVA outputs with R = 0.85 and (ii) Model 2 based on time-series prediction with HR, magnitude and luminosity from RVA inputs to HR values every second with R = 0.97. An application for the sensory analysis of chocolate showed significant correlations between changes in HR and BP with chocolate hardness and purchase intention.
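
    The PPG-from-video principle is straightforward to sketch: average the green channel over a face region per frame, detrend, and read the heart rate off the dominant spectral peak in the physiological band. The trace below is synthetic (a 1.2 Hz pulse plus drift and noise); a real trace would come from the video frames, and the band limits are illustrative.

```python
import numpy as np

fps = 30.0                                   # camera frame rate
t = np.arange(0, 20, 1 / fps)                # 20 s of video
# Synthetic green-channel mean over a face region
g = (0.05 * np.sin(2 * np.pi * 1.2 * t) + 0.01 * t
     + np.random.default_rng(0).normal(0, 0.02, t.size))

g = g - np.polyval(np.polyfit(t, g, 1), t)   # remove linear drift
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
spectrum = np.abs(np.fft.rfft(g))
band = (freqs > 0.7) & (freqs < 4.0)         # plausible HR range
hr_bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
print(round(hr_bpm))  # ~72 bpm for this synthetic trace
```

    The ML models in the paper go further, mapping such raw-video features to calibrated HR and BP values.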

  16. Merged or monolithic? Using machine-learning to reconstruct the dynamical history of simulated star clusters

    NASA Astrophysics Data System (ADS)

    Pasquato, Mario; Chung, Chul

    2016-05-01

    Context. Machine-learning (ML) solves problems by learning patterns from data with limited or no human guidance. In astronomy, ML is mainly applied to large observational datasets, e.g. for morphological galaxy classification. Aims: We apply ML to gravitational N-body simulations of star clusters that are either formed by merging two progenitors or evolved in isolation, planning to later identify globular clusters (GCs) that may have a history of merging from observational data. Methods: We create mock-observations from simulated GCs, from which we measure a set of parameters (also called features in the machine-learning field). After carrying out dimensionality reduction on the feature space, the resulting datapoints are fed into various classification algorithms. Using repeated random subsampling validation, we check whether the groups identified by the algorithms correspond to the underlying physical distinction between mergers and monolithically evolved simulations. Results: The three algorithms we considered (C5.0 trees, k-nearest neighbour, and support-vector machines) all achieve a test misclassification rate of about 10% without parameter tuning, with support-vector machines slightly outperforming the others. The first principal component of feature space correlates with cluster concentration. If we exclude it from the regression, the performance of the algorithms is only slightly reduced.
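
    The pipeline described in the Methods section, dimensionality reduction followed by a classifier, validated with repeated random subsampling, can be sketched with scikit-learn. The features and merger/monolithic labels below are synthetic stand-ins, not measurements from the simulations.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for features measured on mock-observations:
# the first two (high-variance) features carry the toy "merger" signal.
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 8))
X[:, 0] *= 3.0
X[:, 1] *= 2.0
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # 1 = merger (toy rule)

# PCA then SVM, scored by repeated random subsampling validation
pipe = make_pipeline(PCA(n_components=4), SVC())
cv = ShuffleSplit(n_splits=20, test_size=0.3, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv)
print(round(float(scores.mean()), 2))
```

    Dropping a principal component, as the authors do with the concentration-related one, amounts to changing `n_components` or masking columns after the PCA step.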

  17. Machine learning & artificial intelligence in the quantum domain: a review of recent progress

    NASA Astrophysics Data System (ADS)

    Dunjko, Vedran; Briegel, Hans J.

    2018-07-01

    Quantum information technologies, on the one hand, and intelligent learning systems, on the other, are both emergent technologies that are likely to have a transformative impact on our society in the future. The respective underlying fields of basic research—quantum information versus machine learning (ML) and artificial intelligence (AI)—have their own specific questions and challenges, which have hitherto been investigated largely independently. However, in a growing body of recent work, researchers have been probing the question of the extent to which these fields can indeed learn and benefit from each other. Quantum ML explores the interaction between quantum computing and ML, investigating how results and techniques from one field can be used to solve the problems of the other. Recently we have witnessed significant breakthroughs in both directions of influence. For instance, quantum computing is finding a vital application in providing speed-ups for ML problems, critical in our ‘big data’ world. Conversely, ML already permeates many cutting-edge technologies and may become instrumental in advanced quantum technologies. Aside from quantum speed-up in data analysis, or classical ML optimization used in quantum experiments, quantum enhancements have also been (theoretically) demonstrated for interactive learning tasks, highlighting the potential of quantum-enhanced learning agents. Finally, works exploring the use of AI for the very design of quantum experiments and for performing parts of genuine research autonomously, have reported their first successes. Beyond the topics of mutual enhancement—exploring what ML/AI can do for quantum physics and vice versa—researchers have also broached the fundamental issue of quantum generalizations of learning and AI concepts. This deals with questions of the very meaning of learning and intelligence in a world that is fully described by quantum mechanics. 
In this review, we describe the main ideas, recent developments and progress in a broad spectrum of research investigating ML and AI in the quantum domain.

  18. Machine learning & artificial intelligence in the quantum domain: a review of recent progress.

    PubMed

    Dunjko, Vedran; Briegel, Hans J

    2018-07-01

    Quantum information technologies, on the one hand, and intelligent learning systems, on the other, are both emergent technologies that are likely to have a transformative impact on our society in the future. The respective underlying fields of basic research-quantum information versus machine learning (ML) and artificial intelligence (AI)-have their own specific questions and challenges, which have hitherto been investigated largely independently. However, in a growing body of recent work, researchers have been probing the question of the extent to which these fields can indeed learn and benefit from each other. Quantum ML explores the interaction between quantum computing and ML, investigating how results and techniques from one field can be used to solve the problems of the other. Recently we have witnessed significant breakthroughs in both directions of influence. For instance, quantum computing is finding a vital application in providing speed-ups for ML problems, critical in our 'big data' world. Conversely, ML already permeates many cutting-edge technologies and may become instrumental in advanced quantum technologies. Aside from quantum speed-up in data analysis, or classical ML optimization used in quantum experiments, quantum enhancements have also been (theoretically) demonstrated for interactive learning tasks, highlighting the potential of quantum-enhanced learning agents. Finally, works exploring the use of AI for the very design of quantum experiments and for performing parts of genuine research autonomously, have reported their first successes. Beyond the topics of mutual enhancement-exploring what ML/AI can do for quantum physics and vice versa-researchers have also broached the fundamental issue of quantum generalizations of learning and AI concepts. This deals with questions of the very meaning of learning and intelligence in a world that is fully described by quantum mechanics. 
In this review, we describe the main ideas, recent developments and progress in a broad spectrum of research investigating ML and AI in the quantum domain.

  19. Bridging paradigms: hybrid mechanistic-discriminative predictive models.

    PubMed

    Doyle, Orla M; Tsaneva-Atansaova, Krasimira; Harte, James; Tiffin, Paul A; Tino, Peter; Díaz-Zuccarini, Vanessa

    2013-03-01

    Many disease processes are extremely complex and characterized by multiple stochastic processes interacting simultaneously. Current analytical approaches have included mechanistic models and machine learning (ML), which are often treated as orthogonal viewpoints. However, to facilitate truly personalized medicine, new perspectives may be required. This paper reviews the use of both mechanistic models and ML in healthcare as well as emerging hybrid methods, which are an exciting and promising approach for biologically based, yet data-driven advanced intelligent systems.
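
    One common hybrid pattern is to feed a mechanistic model's output to a discriminative learner alongside the raw measurements. The sketch below uses an invented one-parameter pharmacokinetic "model" and synthetic outcomes purely to illustrate the wiring; it is not drawn from the review.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
dose = rng.uniform(0.0, 10.0, 500)
clearance = rng.uniform(0.5, 2.0, 500)

def mechanistic_exposure(dose, clearance):
    return dose / clearance            # toy mechanistic step

exposure = mechanistic_exposure(dose, clearance)
# Synthetic outcome driven by the mechanistic quantity plus noise
y = (exposure + rng.normal(0.0, 0.5, 500) > 4.0).astype(int)

# Hybrid feature set: raw measurements plus the mechanistic output
X_hybrid = np.column_stack([dose, clearance, exposure])
score = cross_val_score(LogisticRegression(max_iter=1000),
                        X_hybrid, y, cv=5).mean()
print(round(float(score), 3))
```

    The mechanistic feature encodes domain knowledge (here, a ratio the linear classifier could not form on its own), which is the essence of the hybrid approach the review surveys.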

  20. Impact of pixel-based machine-learning techniques on automated frameworks for delineation of gross tumor volume regions for stereotactic body radiation therapy.

    PubMed

    Kawata, Yasuo; Arimura, Hidetaka; Ikushima, Koujirou; Jin, Ze; Morita, Kento; Tokunaga, Chiaki; Yabu-Uchi, Hidetake; Shioyama, Yoshiyuki; Sasaki, Tomonari; Honda, Hiroshi; Sasaki, Masayuki

    2017-10-01

    The aim of this study was to investigate the impact of pixel-based machine learning (ML) techniques, i.e., the fuzzy c-means clustering method (FCM), artificial neural network (ANN), and support vector machine (SVM), on an automated framework for delineation of gross tumor volume (GTV) regions of lung cancer for stereotactic body radiation therapy. The morphological and metabolic features for GTV regions, which were determined based on the knowledge of radiation oncologists, were fed on a pixel-by-pixel basis into the respective FCM, ANN, and SVM techniques. The ML techniques were then incorporated into the automated delineation framework of GTVs followed by an optimum contour selection (OCS) method, which we proposed in a previous study. The three ML-based frameworks were evaluated for 16 lung cancer cases (six solid, four ground glass opacity (GGO), six part-solid GGO) with datasets of planning computed tomography (CT) and 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET)/CT images using the three-dimensional Dice similarity coefficient (DSC), which denotes the degree of region similarity between the GTVs contoured by radiation oncologists and those estimated using the automated framework. The FCM-based framework achieved the highest DSC of 0.79±0.06, whereas the DSCs of the ANN-based and SVM-based frameworks were 0.76±0.14 and 0.73±0.14, respectively. The FCM-based framework provided the highest segmentation accuracy and precision without a learning process (lowest calculation cost), and can therefore be useful for delineation of tumor regions in practical treatment planning. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
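The evaluation above hinges on the Dice similarity coefficient (DSC). A minimal sketch of how a DSC can be computed, assuming binary GTV masks represented as sets of voxel coordinates; the variable names and toy regions are illustrative, not from the study:

```python
def dice_similarity(mask_a, mask_b):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) between two
    binary masks given as sets of voxel coordinates."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty regions are identical by convention
    return 2.0 * len(a & b) / (len(a) + len(b))

# Toy example: two overlapping "tumor" regions on a voxel grid.
gtv_oncologist = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)}
gtv_framework = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0)}
print(round(dice_similarity(gtv_oncologist, gtv_framework), 3))  # 0.75
```

A DSC of 1.0 means perfect overlap with the oncologist's contour; the study's reported values (0.73 to 0.79) fall on this same 0-to-1 scale.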

  1. Evaluation of mobile learning: Students' experiences in a new rural-based medical school

    PubMed Central

    2010-01-01

    Background Mobile learning (ML) is an emerging educational method whose success depends on many factors, including the ML device, physical infrastructure and user characteristics. At Gippsland Medical School (GMS), students are given a laptop at the commencement of their four-year degree. We evaluated the educational impact of the ML program from students' perspectives. Methods Questionnaires and individual interviews explored students' experiences of ML. All students were invited to complete questionnaires. Convenience sampling was used for interviews. Quantitative data were entered into SPSS 17.0 and descriptive statistics computed. Free-text comments from questionnaires and transcriptions of interviews were thematically analysed. Results Fifty students completed the questionnaire (response rate 88%). Six students participated in interviews. More than half the students owned a laptop prior to commencing studies, would recommend the laptop and took the laptop to GMS daily. Modal daily use of laptops was four hours. The most frequent use was for access to the internet and email, while the most frequently used applications were Microsoft Word and PowerPoint. Students appreciated the laptops for several reasons; in particular, the reduced financial burden was valued. Students were largely satisfied with the laptop specifications, although design elements of teaching spaces limited functionality. Although students valued aspects of the virtual learning environment (VLE), they also made many suggestions for improvement. Conclusions Students reported many educational benefits from school provision of laptops, in particular the quick and easy access to electronic educational resources as and when they were needed. Laptop use would be further enhanced by improved design of physical facilities, a more logical layout of the VLE, new computer-based resources and activities promoting interaction. PMID:20701752

  2. A universal strategy for the creation of machine learning-based atomistic force fields

    NASA Astrophysics Data System (ADS)

    Huan, Tran Doan; Batra, Rohit; Chapman, James; Krishnan, Sridevi; Chen, Lihua; Ramprasad, Rampi

    2017-09-01

    Emerging machine learning (ML)-based approaches provide powerful and novel tools to study a variety of physical and chemical problems. In this contribution, we outline a universal strategy to create ML-based atomistic force fields, which can be used to perform high-fidelity molecular dynamics simulations. This scheme involves (1) preparing a big reference dataset of atomic environments and forces with sufficiently low noise, e.g., using density functional theory or higher-level methods, (2) utilizing a generalizable class of structural fingerprints for representing atomic environments, (3) optimally selecting diverse and non-redundant training datasets from the reference data, and (4) proposing various learning approaches to predict atomic forces directly (and rapidly) from atomic configurations. From the atomistic forces, accurate potential energies can then be obtained by appropriate integration along a reaction coordinate or along a molecular dynamics trajectory. Based on this strategy, we have created model ML force fields for six elemental bulk solids, including Al, Cu, Ti, W, Si, and C, and show that all of them can reach chemical accuracy. The proposed procedure is general and universal, in that it can potentially be used to generate ML force fields for any material using the same unified workflow with little human intervention. Moreover, the force fields can be systematically improved by adding new training data progressively to represent atomic environments not encountered previously.
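Step (4) of the strategy above, predicting atomic forces directly from fingerprinted atomic environments, can be sketched with a toy nearest-neighbor lookup standing in for the actual learning approaches; the fingerprints, force vectors, and reference data below are entirely hypothetical:

```python
def euclidean(a, b):
    """Euclidean distance between two fingerprint vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_force(fp, reference):
    """Predict the force on an atom by looking up the most similar
    fingerprinted environment in the reference data. A real force field
    would use a regression model fit to this data; this is a stand-in."""
    return min(reference, key=lambda pair: euclidean(pair[0], fp))[1]

# Hypothetical reference data: (fingerprint, force vector) pairs,
# e.g. derived from DFT calculations (arbitrary units).
reference = [
    ([1.0, 0.0], [0.5, 0.0, 0.0]),
    ([0.0, 1.0], [0.0, -0.3, 0.0]),
]
print(predict_force([0.9, 0.1], reference))  # [0.5, 0.0, 0.0]
```

The key design point the abstract stresses is that forces, not energies, are the learned target; energies are then recovered by integrating predicted forces along a trajectory or reaction coordinate.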

  3. Comparison between stochastic and machine learning methods for hydrological multi-step ahead forecasting: All forecasts are wrong!

    NASA Astrophysics Data System (ADS)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2017-04-01

    Machine learning (ML) is considered to be a promising approach to hydrological processes forecasting. We conduct a comparison between several stochastic and ML point estimation methods by performing large-scale computational experiments based on simulations. The purpose is to provide generalized results, whereas the respective comparisons in the literature are usually based on case studies. The stochastic methods used include simple methods and models from the frequently used families of Autoregressive Moving Average (ARMA), Autoregressive Fractionally Integrated Moving Average (ARFIMA) and Exponential Smoothing models. The ML methods used are Random Forests (RF), Support Vector Machines (SVM) and Neural Networks (NN). The comparison refers to the multi-step ahead forecasting properties of the methods. A total of 20 methods are used, nine of which are ML methods. Twelve simulation experiments are performed, each using 2,000 simulated time series of 310 observations. The time series are simulated using stochastic processes from the families of ARMA and ARFIMA models. Each time series is split into a fitting set (first 300 observations) and a testing set (last 10 observations). The comparative assessment of the methods is based on 18 metrics that quantify the methods' performance according to several criteria related to the accurate forecasting of the testing set, the capturing of its variation and the correlation between the testing and forecasted values. The most important outcome of this study is that there is no uniformly better or worse method. However, there are methods that are regularly better or worse than others with respect to specific metrics. It appears that, although a general ranking of the methods is not possible, their classification based on their similar or contrasting performance in the various metrics is possible to some extent.
Another important conclusion is that more sophisticated methods do not necessarily provide better forecasts compared to simpler methods. It is pointed out that the ML methods do not differ dramatically from the stochastic methods, while it is interesting that the NN, RF and SVM algorithms used in this study offer potentially very good performance in terms of accuracy. It should be noted that, although this study focuses on hydrological processes, the results are of general scientific interest. Another important point in this study is the use of several methods and metrics. Using fewer methods and fewer metrics would have led to a very different overall picture, particularly if those fewer metrics corresponded to fewer criteria. For this reason, we consider that the proposed methodology is appropriate for the evaluation of forecasting methods.
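The experimental design described above (simulate a series, split it into a 300-observation fitting set and a 10-observation testing set, then score multi-step forecasts) can be sketched as follows, with an AR(1) generator and two simple baseline forecasters standing in for the study's 20 methods and 18 metrics:

```python
import random

def simulate_ar1(n, phi=0.7, seed=42):
    """Simulate an AR(1) process x_t = phi * x_{t-1} + e_t, a toy
    stand-in for the ARMA/ARFIMA generators used in the study."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n - 1):
        x.append(phi * x[-1] + rng.gauss(0, 1))
    return x

def rmse(actual, forecast):
    """Root mean squared error, one of many possible accuracy metrics."""
    return (sum((a - f) ** 2 for a, f in zip(actual, forecast))
            / len(actual)) ** 0.5

series = simulate_ar1(310)
fit, test = series[:300], series[300:]       # first 300 / last 10, as in the study

# Two "simple" baselines for the 10-step-ahead horizon:
naive = [fit[-1]] * len(test)                # repeat the last observed value
mean_fc = [sum(fit) / len(fit)] * len(test)  # forecast the fitting-set mean

print(rmse(test, naive), rmse(test, mean_fc))
```

Repeating this over thousands of simulated series and many metrics, rather than a single case study, is what lets the authors make generalized statements about the methods.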

  4. Effect of normalization methods on the performance of supervised learning algorithms applied to HTSeq-FPKM-UQ data sets: 7SK RNA expression as a predictor of survival in patients with colon adenocarcinoma.

    PubMed

    Shahriyari, Leili

    2017-11-03

    One of the main challenges in machine learning (ML) is choosing an appropriate normalization method. Here, we examine the effect of various normalization methods on analyzing FPKM upper quartile (FPKM-UQ) RNA sequencing data sets. We collect the HTSeq-FPKM-UQ files of patients with colon adenocarcinoma from the TCGA-COAD project. We compare the three most common normalization methods: scaling, standardizing using the z-score, and vector normalization, by visualizing the normalized data set and evaluating the performance of 12 supervised learning algorithms on it. Additionally, for each of these normalization methods, we use two different normalization strategies: normalizing samples (files) or normalizing features (genes). Regardless of normalization method, a support vector machine (SVM) model with the radial basis function kernel had the maximum accuracy (78%) in predicting the vital status of the patients. However, the fitting time of the SVM depended on the normalization method, reaching its minimum when files were normalized to unit length. Furthermore, among all 12 learning algorithms and 6 different normalization techniques, the Bernoulli naive Bayes model after standardizing files had the best performance in terms of maximizing the accuracy as well as minimizing the fitting time. We also investigated the effect of dimensionality reduction methods on the performance of the supervised ML algorithms. Reducing the dimension of the data set did not increase the maximum accuracy of 78%; however, it led to the discovery of 7SK RNA expression as a predictor of survival in patients with colon adenocarcinoma, with an accuracy of 78%. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
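The three normalization methods and the two strategies (per-sample vs. per-feature) compared above can be sketched on a toy 2×2 expression matrix; the helper names and data are illustrative, not from TCGA-COAD:

```python
def minmax_scale(v):
    """Scaling: map values linearly onto [0, 1]."""
    lo, hi = min(v), max(v)
    return [(x - lo) / (hi - lo) for x in v]

def zscore(v):
    """Standardizing: zero mean, unit (population) standard deviation."""
    m = sum(v) / len(v)
    sd = (sum((x - m) ** 2 for x in v) / len(v)) ** 0.5
    return [(x - m) / sd for x in v]

def unit_length(v):
    """Vector normalization: divide by the Euclidean norm."""
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

def normalize(matrix, fn, axis="samples"):
    """Apply fn per row ('samples', i.e. files) or per column
    ('features', i.e. genes)."""
    if axis == "samples":
        return [fn(row) for row in matrix]
    cols = [fn(list(c)) for c in zip(*matrix)]
    return [list(r) for r in zip(*cols)]

expr = [[3.0, 4.0], [6.0, 8.0]]                   # 2 samples x 2 genes (toy)
print(normalize(expr, unit_length, "samples"))    # [[0.6, 0.8], [0.6, 0.8]]
print(normalize(expr, zscore, "features"))        # [[-1.0, -1.0], [1.0, 1.0]]
```

Note how "normalizing files to unit length" (the setting that minimized SVM fitting time in the study) and "standardizing genes" operate on different axes of the same matrix.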

  5. Machine learning for epigenetics and future medical applications.

    PubMed

    Holder, Lawrence B; Haque, M Muksitul; Skinner, Michael K

    2017-07-03

    Understanding epigenetic processes holds immense promise for medical applications. Advances in Machine Learning (ML) are critical to realize this promise. Previous studies used epigenetic data sets associated with the germline transmission of epigenetic transgenerational inheritance of disease and novel ML approaches to predict genome-wide locations of critical epimutations. A combination of Active Learning (ACL) and Imbalanced Class Learning (ICL) was used to address past problems with ML to develop a more efficient feature selection process and address the imbalance problem in all genomic data sets. The power of this novel ML approach and our ability to predict epigenetic phenomena and associated disease is suggested. The current approach requires extensive computation of features over the genome. A promising new approach is to introduce Deep Learning (DL) for the generation and simultaneous computation of novel genomic features tuned to the classification task. This approach can be used with any genomic or biological data set applied to medicine. The application of molecular epigenetic data in advanced machine learning analysis to medicine is the focus of this review.

  6. Challenges in the Verification of Reinforcement Learning Algorithms

    NASA Technical Reports Server (NTRS)

    Van Wesel, Perry; Goodloe, Alwyn E.

    2017-01-01

    Machine learning (ML) is increasingly being applied to a wide array of domains from search engines to autonomous vehicles. These algorithms, however, are notoriously complex and hard to verify. This work looks at the assumptions underlying machine learning algorithms as well as some of the challenges in trying to verify ML algorithms. Furthermore, we focus on the specific challenges of verifying reinforcement learning algorithms. These are highlighted using a specific example. Ultimately, we do not offer a solution to the complex problem of ML verification, but point out possible approaches for verification and interesting research opportunities.

  7. Machine Learning-based Virtual Screening and Its Applications to Alzheimer's Drug Discovery: A Review.

    PubMed

    Carpenter, Kristy A; Huang, Xudong

    2018-06-07

    Virtual Screening (VS) has emerged as an important tool in the drug development process, as it conducts efficient in silico searches over millions of compounds, ultimately increasing yields of potential drug leads. As a subset of Artificial Intelligence (AI), Machine Learning (ML) is a powerful way of conducting VS for drug leads. ML for VS generally involves assembling a filtered training set of compounds comprised of known actives and inactives. After training, the model is validated and, if sufficiently accurate, used on previously unseen databases to screen for novel compounds with the desired drug target binding activity. This study aims to review ML-based methods used for VS and their applications to Alzheimer's disease (AD) drug discovery. To update the current knowledge on ML for VS, we review thorough backgrounds, explanations, and VS applications of the following ML techniques: Naïve Bayes (NB), k-Nearest Neighbors (kNN), Support Vector Machines (SVM), Random Forests (RF), and Artificial Neural Networks (ANN). All techniques have found success in VS, but the future of VS is likely to lean more heavily toward the use of neural networks, and more specifically Convolutional Neural Networks (CNN), a subset of ANN that utilize convolution. We additionally conceptualize a workflow for conducting ML-based VS for potential therapeutics for AD, a complex neurodegenerative disease with no known cure or prevention. This both serves as an example of how to apply the concepts introduced earlier in the review and as a potential workflow for future implementation. Different ML techniques are powerful tools for VS, albeit each with its own advantages and disadvantages. ML-based VS can be applied to AD drug development. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  8. Why so GLUMM? Detecting depression clusters through graphing lifestyle-environs using machine-learning methods (GLUMM).

    PubMed

    Dipnall, J F; Pasco, J A; Berk, M; Williams, L J; Dodd, S; Jacka, F N; Meyer, D

    2017-01-01

    Key lifestyle-environ risk factors are operative for depression, but it is unclear how these risk factors cluster. Machine-learning (ML) algorithms exist that learn, extract, identify and map underlying patterns to identify groupings of depressed individuals without constraints. The aim of this research was to use a large epidemiological study to identify and characterise depression clusters through "Graphing lifestyle-environs using machine-learning methods" (GLUMM). Two ML algorithms were implemented: unsupervised self-organised mapping (SOM) to create GLUMM clusters, and a supervised boosted regression algorithm to describe the clusters. Ninety-six "lifestyle-environ" variables were used from the National Health and Nutrition Examination Study (2009-2010). Multivariate logistic regression validated clusters and controlled for possible sociodemographic confounders. The SOM identified two GLUMM cluster solutions, each containing one dominant depressed cluster (GLUMM5-1, GLUMM7-1). Equal proportions of members in each cluster rated as highly depressed (17%). Alcohol consumption and demographics validated the clusters. Boosted regression identified GLUMM5-1 as more informative than GLUMM7-1. Members were more likely to have problems sleeping, eat unhealthily, have lived ≤2 years in their home, live in an old home, perceive themselves as underweight, be exposed to work fumes, have experienced sex at ≤14 years, and not perform moderate recreational activities. A positive relationship between GLUMM5-1 (OR: 7.50, P<0.001) and GLUMM7-1 (OR: 7.88, P<0.001) with depression was found, with significant interactions for those married/living with a partner (P=0.001). Using ML-based GLUMM to form ordered depressive clusters from multitudinous lifestyle-environ variables enabled a deeper exploration of the heterogeneous data, uncovering a better understanding of the relationships between complex mental health factors. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  9. SU-G-201-09: Evaluation of a Novel Machine-Learning Algorithm for Permanent Prostate Brachytherapy Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicolae, A; Department of Physics, Ryerson University, Toronto, ON; Lu, L

    Purpose: A novel, automated algorithm for permanent prostate brachytherapy (PPB) treatment planning has been developed. The novel approach uses machine learning (ML), a form of artificial intelligence, to substantially decrease planning time while simultaneously retaining the clinical intuition of plans created by radiation oncologists. This study seeks to compare the ML algorithm against expert-planned PPB plans to evaluate the equivalency of dosimetric and clinical plan quality. Methods: Plan features were computed from historical high-quality PPB treatments (N = 100) and stored in a relational database (RDB). The ML algorithm matched new PPB features to a highly similar case in the RDB; this initial plan configuration was then further optimized using a stochastic search algorithm. PPB pre-plans (N = 30) generated using the ML algorithm were compared to plan variants created by an expert dosimetrist (RT) and radiation oncologist (MD). Planning time and pre-plan dosimetry were evaluated using a one-way Student’s t-test and ANOVA, respectively (significance level = 0.05). Clinical implant quality was evaluated by expert PPB radiation oncologists as part of a qualitative study. Results: Average planning time was 0.44 ± 0.42 min for the ML algorithm compared to 17.88 ± 8.76 min for the RT, a significant advantage [t(9), p = 0.01]. A post-hoc ANOVA [F(2,87) = 6.59, p = 0.002] using Tukey-Kramer criteria showed a significantly lower mean prostate V150% for the ML plans (52.9%) compared to the RT (57.3%) and MD (56.2%) plans. Preliminary qualitative study results indicate comparable clinical implant quality between RT and ML plans, with a trend towards preference for ML plans. Conclusion: PPB pre-treatment plans highly comparable to those of an expert radiation oncologist can be created using a novel ML planning model. The use of an ML-based planning approach is expected to translate into improved PPB accessibility and plan uniformity.
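The RDB-matching step described above, retrieving the historical case whose plan features best match a new case before stochastic refinement, can be sketched as follows; the feature vectors and record fields are hypothetical, not from the study's database schema:

```python
def euclidean(a, b):
    """Euclidean distance between two plan-feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def retrieve_initial_plan(new_features, plan_db):
    """Return the stored historical plan whose features are most similar
    to the new case; this seeds the subsequent stochastic optimization."""
    return min(plan_db, key=lambda rec: euclidean(rec["features"], new_features))

# Hypothetical database of historical high-quality plans.
plan_db = [
    {"id": "case-017", "features": [42.0, 1.2], "seeds": 60},
    {"id": "case-088", "features": [30.0, 0.9], "seeds": 48},
]
print(retrieve_initial_plan([41.0, 1.1], plan_db)["id"])  # case-017
```

Seeding the search with a similar expert-approved plan, rather than a random configuration, is what lets the approach keep clinical intuition while cutting planning time.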

  10. Opportunistic Behavior in Motivated Learning Agents.

    PubMed

    Graham, James; Starzyk, Janusz A; Jachyra, Daniel

    2015-08-01

    This paper focuses on the novel motivated learning (ML) scheme and opportunistic behavior of an intelligent agent. It extends previously developed ML to opportunistic behavior in a multitask situation. Our paper describes the virtual world implementation of autonomous opportunistic agents learning in a dynamically changing environment, creating abstract goals, and taking advantage of arising opportunities to improve their performance. An opportunistic agent achieves better results than an agent based on ML only. It does so by minimizing the average value of all need signals rather than a dominating need. This paper applies to the design of autonomous embodied systems (robots) learning in real-time how to operate in a complex environment.

  11. Using Machine Learning to Predict MCNP Bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grechanuk, Pavel Aleksandrovi

    For many real-world applications in radiation transport where simulations are compared to experimental measurements, as in nuclear criticality safety, the bias (simulated keff minus experimental keff) in the calculation is an extremely important quantity used for code validation. The objective of this project is to accurately predict the bias of MCNP6 [1] criticality calculations using machine learning (ML) algorithms, with the intention of creating a tool that can complement current nuclear criticality safety methods. In the latest release of MCNP6, the Whisper tool is available for criticality safety analysts and includes a large catalogue of experimental benchmarks, sensitivity profiles, and nuclear data covariance matrices. These data, coming from 1100+ benchmark cases, are used in this study of ML algorithms for criticality safety bias predictions.

  12. ANI-1, A data set of 20 million calculated off-equilibrium conformations for organic molecules

    NASA Astrophysics Data System (ADS)

    Smith, Justin S.; Isayev, Olexandr; Roitberg, Adrian E.

    2017-12-01

    One of the grand challenges in modern theoretical chemistry is designing and implementing approximations that expedite ab initio methods without loss of accuracy. Machine learning (ML) methods are emerging as a powerful approach to constructing various forms of transferable atomistic potentials. They have been successfully applied in a variety of applications in chemistry, biology, catalysis, and solid-state physics. However, these models are heavily dependent on the quality and quantity of data used in their fitting. Fitting highly flexible ML potentials, such as neural networks, comes at a cost: a vast amount of reference data is required to properly train these models. We address this need by providing access to a large computational DFT database, which consists of more than 20 million off-equilibrium conformations for 57,462 small organic molecules. We believe it will become a new standard benchmark for comparison of current and future methods in the ML potential community.

  13. Machine learning for epigenetics and future medical applications

    PubMed Central

    Holder, Lawrence B.; Haque, M. Muksitul; Skinner, Michael K.

    2017-01-01

    Understanding epigenetic processes holds immense promise for medical applications. Advances in Machine Learning (ML) are critical to realize this promise. Previous studies used epigenetic data sets associated with the germline transmission of epigenetic transgenerational inheritance of disease and novel ML approaches to predict genome-wide locations of critical epimutations. A combination of Active Learning (ACL) and Imbalanced Class Learning (ICL) was used to address past problems with ML to develop a more efficient feature selection process and address the imbalance problem in all genomic data sets. The power of this novel ML approach and our ability to predict epigenetic phenomena and associated disease is suggested. The current approach requires extensive computation of features over the genome. A promising new approach is to introduce Deep Learning (DL) for the generation and simultaneous computation of novel genomic features tuned to the classification task. This approach can be used with any genomic or biological data set applied to medicine. The application of molecular epigenetic data in advanced machine learning analysis to medicine is the focus of this review. PMID:28524769

  14. Approximate, computationally efficient online learning in Bayesian spiking neurons.

    PubMed

    Kuhlmann, Levin; Hauser-Raspe, Michael; Manton, Jonathan H; Grayden, David B; Tapson, Jonathan; van Schaik, André

    2014-03-01

    Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM) which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy-efficient spike coding of BSNs.

  15. Rainfall Prediction of Indian Peninsula: Comparison of Time Series Based Approach and Predictor Based Approach using Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Dash, Y.; Mishra, S. K.; Panigrahi, B. K.

    2017-12-01

    Prediction of northeast/post-monsoon rainfall, which occurs during October, November and December (OND) over the Indian peninsula, is a challenging task due to the dynamic nature of the uncertain, chaotic climate. It is imperative to elucidate this issue by examining the performance of different machine learning (ML) approaches. The prime objective of this research is to compare a) statistical prediction using historical rainfall observations and global atmosphere-ocean predictors like Sea Surface Temperature (SST) and Sea Level Pressure (SLP) with b) empirical prediction based on a time series analysis of past rainfall data without using any other predictors. Initially, ML techniques were applied to SST and SLP data (1948-2014) obtained from the NCEP/NCAR reanalysis monthly means provided by the NOAA ESRL PSD. This study then investigated the applicability of ML methods using the OND rainfall time series for 1948-2014 and forecasted up to 2018. The predicted values of the aforementioned methods were verified using observed time series data collected from the Indian Institute of Tropical Meteorology, and the results revealed good performance of the ML algorithms with minimal error scores. Thus, both statistical and empirical methods are found to be useful for long-range climatic projections.

  16. Application of machine learning algorithms for clinical predictive modeling: a data-mining approach in SCT.

    PubMed

    Shouval, R; Bondi, O; Mishan, H; Shimoni, A; Unger, R; Nagler, A

    2014-03-01

    Data collected from hematopoietic SCT (HSCT) centers are becoming more abundant and complex owing to the formation of organized registries and incorporation of biological data. Typically, conventional statistical methods are used for the development of outcome prediction models and risk scores. However, these analyses carry inherent properties limiting their ability to cope with large data sets with multiple variables and samples. Machine learning (ML), a field stemming from artificial intelligence, is part of a wider approach for data analysis termed data mining (DM). It enables prediction in complex data scenarios, familiar to practitioners and researchers. Technological and commercial applications are all around us, gradually entering clinical research. In the following review, we would like to expose hematologists and stem cell transplanters to the concepts, clinical applications, strengths and limitations of such methods and discuss current research in HSCT. The aim of this review is to encourage utilization of the ML and DM techniques in the field of HSCT, including prediction of transplantation outcome and donor selection.

  17. Modeling Music Emotion Judgments Using Machine Learning Methods

    PubMed Central

    Vempala, Naresh N.; Russo, Frank A.

    2018-01-01

    Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments inclusive of neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches that were used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion. PMID:29354080

  18. Modeling Music Emotion Judgments Using Machine Learning Methods.

    PubMed

    Vempala, Naresh N; Russo, Frank A

    2017-01-01

    Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments inclusive of neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches that were used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion.

  19. Less is more: Sampling chemical space with active learning

    NASA Astrophysics Data System (ADS)

    Smith, Justin S.; Nebgen, Ben; Lubbers, Nicholas; Isayev, Olexandr; Roitberg, Adrian E.

    2018-06-01

    The development of accurate and transferable machine learning (ML) potentials for predicting molecular energetics is a challenging task. The process of data generation to train such ML potentials is a task neither well understood nor researched in detail. In this work, we present a fully automated approach for the generation of datasets with the intent of training universal ML potentials. It is based on the concept of active learning (AL) via Query by Committee (QBC), which uses the disagreement between an ensemble of ML potentials to infer the reliability of the ensemble's prediction. QBC allows the presented AL algorithm to automatically sample regions of chemical space where the ML potential fails to accurately predict the potential energy. AL improves the overall fitness of ANAKIN-ME (ANI) deep learning potentials in rigorous test cases by mitigating human biases in deciding what new training data to use. AL also reduces the training set size to a fraction of the data required when using naive random sampling techniques. To provide validation of our AL approach, we develop the COmprehensive Machine-learning Potential (COMP6) benchmark (publicly available on GitHub) which contains a diverse set of organic molecules. Active learning-based ANI potentials outperform the original random sampled ANI-1 potential with only 10% of the data, while the final active learning-based model vastly outperforms ANI-1 on the COMP6 benchmark after training to only 25% of the data. Finally, we show that our proposed AL technique develops a universal ANI potential (ANI-1x) that provides accurate energy and force predictions on the entire COMP6 benchmark. This universal ML potential achieves a level of accuracy on par with the best ML potentials for single molecules or materials, while remaining applicable to the general class of organic molecules composed of the elements CHNO.
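The Query by Committee criterion described above, using disagreement within an ensemble of potentials to decide which conformations need new reference calculations, can be sketched as follows; the committee members and threshold below are toy stand-ins, not the actual ANI ensemble:

```python
def committee_disagreement(preds):
    """Population standard deviation of the committee's predictions for
    one conformation; large values signal an unreliable prediction."""
    m = sum(preds) / len(preds)
    return (sum((p - m) ** 2 for p in preds) / len(preds)) ** 0.5

def select_for_labeling(candidates, committee, threshold):
    """Keep conformations where the committee disagrees beyond a threshold;
    these would be sent for new reference (e.g. DFT) calculations."""
    picked = []
    for conf in candidates:
        preds = [model(conf) for model in committee]
        if committee_disagreement(preds) > threshold:
            picked.append(conf)
    return picked

# Toy committee of three "potentials" that agree near x = 0
# and diverge as x grows.
committee = [lambda x: x, lambda x: 1.1 * x, lambda x: 0.9 * x]
candidates = [0.1, 1.0, 10.0]
print(select_for_labeling(candidates, committee, threshold=0.5))  # [10.0]
```

Only the high-disagreement conformation is selected, which is exactly how QBC concentrates expensive reference calculations on the regions of chemical space where the ensemble is unreliable.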

  20. Unintended consequences of machine learning in medicine?

    PubMed

    McDonald, Laura; Ramagopalan, Sreeram V; Cox, Andrew P; Oguz, Mustafa

    2017-01-01

    Machine learning (ML) has the potential to significantly aid medical practice. However, a recent article highlighted some negative consequences that may arise from using ML decision support in medicine. We argue here that whilst the concerns raised by the authors may be appropriate, they are not specific to ML, and thus the article may lead to an adverse perception of this technique in particular. Whilst ML, like any methodology, is not without its limitations, a balanced view is needed so as not to hamper its use in potentially enabling better patient care.

  1. A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.

    PubMed

    Zhang, Ai-bing; Feng, Jie; Ward, Robert D; Wan, Ping; Gao, Qiang; Wu, Jun; Zhao, Wei-zhong

    2012-01-01

    Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI) region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS) genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF) that take advantage of the power of machine learning and bioinformatics to address this issue for species assignment by both coding and non-coding sequences. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish) and two representing non-coding ITS barcodes (rust fungi and brown algae). Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed the existing Neighbor-joining (NJ) and Maximum likelihood (ML) methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also outperformed the NJ and ML methods for non-coding sequences when species coverage was potentially incomplete, although in that situation the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI) of 99.75-100%. The new methods also obtained a 96.29% success rate (95% CI: 91.62-98.40%) for 484 rust fungi queries and a 98.50% success rate (95% CI: 96.60-99.37%) for 1,094 brown algae queries, both using ITS barcodes.
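
    For contrast with the RBF-based classifiers described above, the simplest baseline for barcode assignment is nearest-reference matching on aligned sequences; a stdlib sketch with invented species names and toy barcodes (this is not the paper's DV-RBF/FJ-RBF method):

```python
def hamming(a, b):
    """Number of mismatched positions between two aligned sequences."""
    return sum(x != y for x, y in zip(a, b))

def assign_species(query, references):
    """Assign the query to the species whose reference barcode is closest
    in Hamming distance. A stand-in baseline only; the paper's methods
    are machine-learning classifiers, not nearest-neighbour lookups."""
    return min(references, key=lambda sp: hamming(query, references[sp]))

refs = {"bat_A": "ACGTACGT", "bat_B": "ACGTTTTT"}  # hypothetical COI fragments
print(assign_species("ACGTACGA", refs))
```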

  2. Comparison and optimization of machine learning methods for automated classification of circulating tumor cells.

    PubMed

    Lannin, Timothy B; Thege, Fredrik I; Kirby, Brian J

    2016-10-01

    Advances in rare cell capture technology have made possible the interrogation of circulating tumor cells (CTCs) captured from whole patient blood. However, locating captured cells in the device by manual counting bottlenecks data processing by being tedious (hours per sample) and compromises the results by being inconsistent and prone to user bias. Some recent work has been done to automate the cell location and classification process to address these problems, employing image processing and machine learning (ML) algorithms to locate and classify cells in fluorescent microscope images. However, the type of machine learning method used is a part of the design space that has not been thoroughly explored. Thus, we have trained four ML algorithms on three different datasets. The trained ML algorithms locate and classify thousands of possible cells in a few minutes rather than a few hours, representing an order of magnitude increase in processing speed. Furthermore, some algorithms have a significantly (P < 0.05) higher area under the receiver operating characteristic curve than do other algorithms. Additionally, significant (P < 0.05) losses to performance occur when training on cell lines and testing on CTCs (and vice versa), indicating the need to train on a system that is representative of future unlabeled data. Optimal algorithm selection depends on the peculiarities of the individual dataset, indicating the need for a careful comparison and optimization of algorithms for individual image classification tasks. © 2016 International Society for Advancement of Cytometry.
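
    The metric behind the comparison, the area under the receiver operating characteristic curve, can be computed directly from classifier scores via the Mann-Whitney identity; a small stdlib sketch with toy labels and scores:

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney identity: the probability that a randomly
    chosen positive scores higher than a randomly chosen negative,
    counting ties as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]   # one positive is outscored by a negative
print(roc_auc(labels, scores))
```

    A perfect ranking gives 1.0; the single inversion in the toy scores pulls the AUC down to 8/9.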

  3. Accelerating Chemical Discovery with Machine Learning: Simulated Evolution of Spin Crossover Complexes with an Artificial Neural Network.

    PubMed

    Janet, Jon Paul; Chan, Lydia; Kulik, Heather J

    2018-03-01

    Machine learning (ML) has emerged as a powerful complement to simulation for materials discovery by reducing time for evaluation of energies and properties at accuracy competitive with first-principles methods. We use genetic algorithm (GA) optimization to discover unconventional spin-crossover complexes in combination with efficient scoring from an artificial neural network (ANN) that predicts spin-state splitting of inorganic complexes. We explore a compound space of over 5600 candidate materials derived from eight metal/oxidation state combinations and a 32-ligand pool. We introduce a strategy for error-aware ML-driven discovery by limiting how far the GA travels away from the nearest ANN training points while maximizing property (i.e., spin-splitting) fitness, leading to discovery of 80% of the leads from full chemical space enumeration. Over a 51-complex subset, average unsigned errors (4.5 kcal/mol) are close to the ANN's baseline 3 kcal/mol error. By obtaining leads from the trained ANN within seconds rather than days from a DFT-driven GA, this strategy demonstrates the power of ML for accelerating inorganic material discovery.

  4. Evaluation of a Machine-Learning Algorithm for Treatment Planning in Prostate Low-Dose-Rate Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicolae, Alexandru; Department of Medical Physics, Odette Cancer Center, Sunnybrook Health Sciences Centre, Toronto, Ontario; Morton, Gerard

    Purpose: This work presents the application of a machine learning (ML) algorithm to automatically generate high-quality, prostate low-dose-rate (LDR) brachytherapy treatment plans. The ML algorithm can mimic characteristics of preoperative treatment plans deemed clinically acceptable by brachytherapists. The planning efficiency, dosimetry, and quality (as assessed by experts) of preoperative plans generated with an ML planning approach were retrospectively evaluated in this study. Methods and Materials: Preimplantation and postimplantation treatment plans were extracted from 100 high-quality LDR treatments and stored within a training database. The ML training algorithm matches similar features from a new LDR case to those within the training database to rapidly obtain an initial seed distribution; plans were then further fine-tuned using stochastic optimization. Preimplantation treatment plans generated by the ML algorithm were compared with brachytherapist (BT) treatment plans in terms of planning time (Wilcoxon rank sum, α = 0.05) and dosimetry (1-way analysis of variance, α = 0.05). Qualitative preimplantation plan quality was evaluated by expert LDR radiation oncologists using a Likert scale questionnaire. Results: The average planning time for the ML approach was 0.84 ± 0.57 minutes, compared with 17.88 ± 8.76 minutes for the expert planner (P=.020). Preimplantation plans were dosimetrically equivalent to the BT plans; the average prostate V150% was 4% lower for ML plans (P=.002), although the difference was not clinically significant. Respondents ranked the ML-generated plans as equivalent to expert BT treatment plans in terms of target coverage, normal tissue avoidance, implant confidence, and the need for plan modifications. Respondents had difficulty differentiating between plans generated by a human and those generated by the ML algorithm. Conclusions: Prostate LDR preimplantation treatment plans that have equivalent quality to plans created by brachytherapists can be rapidly generated using ML. The adoption of ML in the brachytherapy workflow is expected to improve LDR treatment plan uniformity while reducing planning time and resources.

  5. Machine learning in computational docking.

    PubMed

    Khamis, Mohamed A; Gomaa, Walid; Ahmed, Walaa F

    2015-03-01

    The objective of this paper is to highlight the state-of-the-art machine learning (ML) techniques in computational docking. The use of smart computational methods in the life cycle of drug design is a relatively recent development that has gained much popularity and interest over the last few years. Central to this methodology is the notion of computational docking, which is the process of predicting the best pose (orientation + conformation) of a small molecule (drug candidate) when bound to a target larger receptor molecule (protein) in order to form a stable complex molecule. In computational docking, a large number of binding poses are evaluated and ranked using a scoring function. The scoring function is a mathematical predictive model that produces a score representing the binding free energy, and hence the stability, of the resulting complex molecule. Generally, such a function should produce a set of plausible ligands ranked according to their binding stability along with their binding poses. In more practical terms, an effective scoring function should produce promising drug candidates which can then be synthesized and physically screened using a high-throughput screening process. Therefore, the key to computer-aided drug design is the design of an efficient, highly accurate scoring function (using ML techniques). The methods presented in this paper are specifically based on ML techniques. Although many traditional techniques have been proposed, their performance has generally been poor. Only in the last few years has ML technology been applied to the design of scoring functions, and the results have been very promising. The ML-based techniques are based on various molecular features extracted from the abundance of protein-ligand information in public molecular databases, e.g., the PDBbind database. In this paper, we present this paradigm shift, elaborating on the main constituent elements of the ML approach to molecular docking along with the state-of-the-art research in this area. For instance, the best random forest (RF)-based scoring function on PDBbind v2007 achieves a Pearson correlation coefficient between the predicted and experimentally determined binding affinities of 0.803, while the best conventional scoring function achieves 0.644. The best RF-based ranking power ranks the ligands correctly based on their experimentally determined binding affinities with an accuracy of 62.5% and identifies the top binding ligand with an accuracy of 78.1%. We conclude with open questions and potential future research directions that can be pursued in smart computational docking: using molecular features of different natures (geometrical, energy terms, pharmacophore), applying advanced ML techniques (e.g., deep learning), and combining more than one ML model. Copyright © 2015 Elsevier B.V. All rights reserved.
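
    The scoring-power numbers quoted above (0.803 vs. 0.644) are Pearson correlation coefficients between predicted and experimentally determined affinities; a stdlib sketch of the metric itself, with invented affinity values:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

measured  = [5.2, 6.1, 7.4, 8.0, 4.3]   # hypothetical measured affinities
predicted = [5.0, 6.4, 7.1, 8.3, 4.8]   # hypothetical scoring-function output
print(round(pearson(measured, predicted), 3))
```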

  6. Heterogeneous classifier fusion for ligand-based virtual screening: or, how decision making by committee can be a good thing.

    PubMed

    Riniker, Sereina; Fechner, Nikolas; Landrum, Gregory A

    2013-11-25

    The concept of data fusion - the combination of information from different sources describing the same object with the expectation of generating a more accurate representation - has found application in a very broad range of disciplines. In the context of ligand-based virtual screening (VS), data fusion has been applied to combine knowledge from either different active molecules or different fingerprints to improve similarity search performance. Machine-learning (ML) methods based on fusion of multiple homogeneous classifiers, in particular random forests, have also been widely applied in the ML literature. The heterogeneous version of classifier fusion - fusing the predictions from different model types - has been less explored. Here, we investigate heterogeneous classifier fusion for ligand-based VS using three ML methods (random forest (RF), naïve Bayes (NB), and logistic regression (LR)) with four 2D fingerprints (atom pairs, topological torsions, the RDKit fingerprint, and a circular fingerprint). The methods are compared using a previously developed benchmarking platform for 2D fingerprints which is extended to ML methods in this article. The original data sets are filtered for difficulty, and a new set of challenging data sets from ChEMBL is added. Data sets were also generated for a second use case: starting from a small set of related actives instead of diverse actives. The final fused model consistently outperforms the other approaches across the broad variety of targets studied, indicating that heterogeneous classifier fusion is a very promising approach for ligand-based VS. The new data sets together with the adapted source code for ML methods are provided in the Supporting Information.
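
    The simplest heterogeneous fusion rule is soft voting: average the probability that each model type assigns to the active class. A minimal sketch with invented probabilities (the paper's actual fusion rule may differ):

```python
def fuse(prob_lists):
    """Soft voting across heterogeneous models: average the probability
    each model assigns to the 'active' class, per molecule."""
    n_models = len(prob_lists)
    return [sum(probs) / n_models for probs in zip(*prob_lists)]

# hypothetical per-molecule activity probabilities from three model types
rf = [0.9, 0.2, 0.6]   # random forest
nb = [0.8, 0.4, 0.5]   # naive Bayes
lr = [0.7, 0.1, 0.7]   # logistic regression
print(fuse([rf, nb, lr]))
```

    Molecules on which the three model types agree keep confident fused scores, while disagreements are averaged out, which is the intuition behind combining complementary classifiers.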

  7. ML-o-Scope: A Diagnostic Visualization System for Deep Machine Learning Pipelines

    DTIC Science & Technology

    2014-05-16

    ML-o-scope: a diagnostic visualization system for deep machine learning pipelines. Daniel Bruckner, Electrical Engineering and Computer Sciences. The report presents the system as a support for tuning large-scale object-classification pipelines.

  8. Including crystal structure attributes in machine learning models of formation energies via Voronoi tessellations

    NASA Astrophysics Data System (ADS)

    Ward, Logan; Liu, Ruoqian; Krishna, Amar; Hegde, Vinay I.; Agrawal, Ankit; Choudhary, Alok; Wolverton, Chris

    2017-07-01

    While high-throughput density functional theory (DFT) has become a prevalent tool for materials discovery, it is limited by the relatively large computational cost. In this paper, we explore using DFT data from high-throughput calculations to create faster surrogate models with machine learning (ML) that can be used to guide new searches. Our method works by using decision tree models to map DFT-calculated formation enthalpies to a set of attributes consisting of two distinct types: (i) composition-dependent attributes of elemental properties (as have been used in previous ML models of DFT formation energies), combined with (ii) attributes derived from the Voronoi tessellation of the compound's crystal structure. The ML models created using this method have half the cross-validation error of, and training and evaluation speeds similar to, models created with the Coulomb matrix and partial radial distribution function methods. For a dataset of 435 000 formation energies taken from the Open Quantum Materials Database (OQMD), our model achieves a mean absolute error of 80 meV/atom in cross validation, which is lower than the approximate error between DFT-computed and experimentally measured formation enthalpies and below 15% of the mean absolute deviation of the training set. We also demonstrate that our method can accurately estimate the formation energy of materials outside of the training set and be used to identify materials with especially large formation enthalpies. We propose that our models can be used to accelerate the discovery of new materials by identifying the most promising materials to study with DFT at little additional computational cost.
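
    The type-(i), composition-dependent attributes are just statistics of tabulated elemental properties weighted by the formula; a sketch with a tiny, hypothetical electronegativity table (the type-(ii) Voronoi attributes require the crystal structure and are omitted here):

```python
# toy electronegativity table (a hypothetical subset of elemental properties)
CHI = {"Na": 0.93, "Cl": 3.16, "Mg": 1.31, "O": 3.44}

def composition_attributes(formula):
    """Type-(i) attributes: composition-weighted statistics of an elemental
    property (here: mean and range of electronegativity)."""
    total = sum(formula.values())
    mean = sum(CHI[el] * n for el, n in formula.items()) / total
    values = [CHI[el] for el in formula]
    return {"chi_mean": mean, "chi_range": max(values) - min(values)}

print(composition_attributes({"Na": 1, "Cl": 1}))
```

    In the paper such attributes feed decision-tree ensembles; here they simply illustrate how a formula becomes a fixed-length feature vector.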

  9. An introduction and overview of machine learning in neurosurgical care.

    PubMed

    Senders, Joeky T; Zaki, Mark M; Karhade, Aditya V; Chang, Bliss; Gormley, William B; Broekman, Marike L; Smith, Timothy R; Arnaout, Omar

    2018-01-01

    Machine learning (ML) is a branch of artificial intelligence that allows computers to learn from large complex datasets without being explicitly programmed. Although ML is already widely manifest in our daily lives in various forms, the considerable potential of ML has yet to find its way into mainstream medical research and day-to-day clinical care. The complex diagnostic and therapeutic modalities used in neurosurgery provide a vast amount of data that is ideally suited for ML models. This systematic review explores ML's potential to assist and improve neurosurgical care. A systematic literature search was performed in the PubMed and Embase databases to identify all potentially relevant studies up to January 1, 2017. All studies were included that evaluated ML models assisting neurosurgical treatment. Of the 6,402 citations identified, 221 studies were selected after subsequent title/abstract and full-text screening. In these studies, ML was used to assist surgical treatment of patients with epilepsy, brain tumors, spinal lesions, neurovascular pathology, Parkinson's disease, traumatic brain injury, and hydrocephalus. Across multiple paradigms, ML was found to be a valuable tool for presurgical planning, intraoperative guidance, neurophysiological monitoring, and neurosurgical outcome prediction. ML has started to find applications aimed at improving neurosurgical care by increasing the efficiency and precision of perioperative decision-making. A thorough validation of specific ML models is essential before implementation in clinical neurosurgical care. To bridge the gap between research and clinical care, practical and ethical issues should be considered parallel to the development of these techniques.

  10. A Comparison of different learning models used in Data Mining for Medical Data

    NASA Astrophysics Data System (ADS)

    Srimani, P. K.; Koti, Manjula Sanjay

    2011-12-01

    The present study aims to investigate different data mining learning models for different medical data sets and to give practical guidelines for selecting the most appropriate algorithm for a specific medical data set. In practical situations, it is absolutely necessary to take decisions with regard to the appropriate models and parameters for diagnosis and prediction problems. Learning models and algorithms are widely implemented for rule extraction and the prediction of system behavior. In this paper, some well-known machine learning (ML) systems are investigated for different methods and tested on five medical data sets. The practical criteria for evaluating different learning models are presented, and the potential benefits of the proposed methodology for diagnosis and learning are suggested.

  11. Evaluation of mobile learning: students' experiences in a new rural-based medical school.

    PubMed

    Nestel, Debra; Ng, Andre; Gray, Katherine; Hill, Robyn; Villanueva, Elmer; Kotsanas, George; Oaten, Andrew; Browne, Chris

    2010-08-11

    Mobile learning (ML) is an emerging educational method whose success depends on many factors, including the ML device, physical infrastructure and user characteristics. At Gippsland Medical School (GMS), students are given a laptop at the commencement of their four-year degree. We evaluated the educational impact of the ML program from students' perspectives. Questionnaires and individual interviews explored students' experiences of ML. All students were invited to complete questionnaires. Convenience sampling was used for interviews. Quantitative data were entered into SPSS 17.0 and descriptive statistics computed. Free text comments from questionnaires and transcriptions of interviews were thematically analysed. Fifty students completed the questionnaire (response rate 88%). Six students participated in interviews. More than half the students owned a laptop prior to commencing studies, would recommend the laptop and took the laptop to GMS daily. Modal daily use of laptops was four hours. Most frequent use was for access to the internet and email, while the most frequently used applications were Microsoft Word and PowerPoint. Students appreciated the laptops for several reasons. The reduced financial burden was valued. Students were largely satisfied with the laptop specifications. Design elements of teaching spaces limited functionality. Although students valued aspects of the virtual learning environment (VLE), they also made many suggestions for improvement. Students reported many educational benefits from the school's provision of laptops, in particular the quick and easy access to electronic educational resources as and when they were needed. Laptop use would be enhanced by improved design of physical facilities, together with a more logical layout of the VLE, new computer-based resources and activities promoting interaction.

  12. Use of machine learning to improve autism screening and diagnostic instruments: effectiveness, efficiency, and multi-instrument fusion

    PubMed Central

    Bone, Daniel; Bishop, Somer; Black, Matthew P.; Goodwin, Matthew S.; Lord, Catherine; Narayanan, Shrikanth S.

    2016-01-01

    Background Machine learning (ML) provides novel opportunities for human behavior research and clinical translation, yet its application can have noted pitfalls (Bone et al., 2015). In this work, we fastidiously utilize ML to derive autism spectrum disorder (ASD) instrument algorithms in an attempt to improve upon widely-used ASD screening and diagnostic tools. Methods The data consisted of Autism Diagnostic Interview-Revised (ADI-R) and Social Responsiveness Scale (SRS) scores for 1,264 verbal individuals with ASD and 462 verbal individuals with non-ASD developmental or psychiatric disorders (DD), split at age 10. Algorithms were created via a robust ML classifier, support vector machine (SVM), while targeting best-estimate clinical diagnosis of ASD vs. non-ASD. Parameter settings were tuned in multiple levels of cross-validation. Results The created algorithms were more effective (higher performing) than current algorithms, were tunable (sensitivity and specificity can be differentially weighted), and were more efficient (achieving near-peak performance with five or fewer codes). Results from ML-based fusion of ADI-R and SRS are reported. We present a screener algorithm for below (above) age 10 that reached 89.2% (86.7%) sensitivity and 59.0% (53.4%) specificity with only five behavioral codes. Conclusions ML is useful for creating robust, customizable instrument algorithms. In a unique dataset comprised of controls with other difficulties, our findings highlight limitations of current caregiver-report instruments and indicate possible avenues for improving ASD screening and diagnostic tools. PMID:27090613
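
    The multiple levels of cross-validation mentioned above rest on stratified splits that preserve the ASD/non-ASD mix in every fold; a stdlib sketch of stratified k-fold index generation with toy labels:

```python
import random

def stratified_kfold(labels, k, seed=0):
    """Return k folds of indices such that each fold keeps (roughly) the
    same class proportions as the full label list."""
    rng = random.Random(seed)
    folds = [[] for _ in range(k)]
    for cls in sorted(set(labels)):          # sort for reproducibility
        idx = [i for i, y in enumerate(labels) if y == cls]
        rng.shuffle(idx)
        for j, i in enumerate(idx):
            folds[j % k].append(i)           # round-robin within each class
    return folds

labels = ["ASD"] * 6 + ["DD"] * 3
for fold in stratified_kfold(labels, 3):
    print(sorted(fold))   # every fold holds two ASD and one DD index
```

    Tuning SVM parameters inside such folds, and evaluating on the held-out fold, is what keeps the reported sensitivities and specificities honest.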

  13. PyMVPA: A Unifying Approach to the Analysis of Neuroscientific Data

    PubMed Central

    Hanke, Michael; Halchenko, Yaroslav O.; Sederberg, Per B.; Olivetti, Emanuele; Fründ, Ingo; Rieger, Jochem W.; Herrmann, Christoph S.; Haxby, James V.; Hanson, Stephen José; Pollmann, Stefan

    2008-01-01

    The Python programming language is steadily increasing in popularity as the language of choice for scientific computing. The ability of this scripting environment to access a huge code base in various languages, combined with its syntactical simplicity, makes it an ideal tool for implementing and sharing ideas among scientists from numerous fields and with heterogeneous methodological backgrounds. The recent rise of reciprocal interest between the machine learning (ML) and neuroscience communities is an example of the desire for an inter-disciplinary transfer of computational methods that can benefit from a Python-based framework. For many years, a large fraction of both research communities has addressed, almost independently, very high-dimensional problems with almost completely non-overlapping methods. However, a number of recently published studies that applied ML methods to neuroscience research questions attracted a lot of attention from researchers from both fields, as well as the general public, and showed that this approach can provide novel and fruitful insights into the functioning of the brain. In this article we show how PyMVPA, a specialized Python framework for machine learning based data analysis, can help to facilitate this inter-disciplinary technology transfer by providing a single interface to a wide array of machine learning libraries and neural data-processing methods. We demonstrate the general applicability and power of PyMVPA via analyses of a number of neural data modalities, including fMRI, EEG, MEG, and extracellular recordings. PMID:19212459

  14. Deep Learning for Drug Design: an Artificial Intelligence Paradigm for Drug Discovery in the Big Data Era.

    PubMed

    Jing, Yankang; Bian, Yuemin; Hu, Ziheng; Wang, Lirong; Xie, Xiang-Qun Sean

    2018-03-30

    Over the last decade, deep learning (DL) methods have been extremely successful and widely used to develop artificial intelligence (AI) in almost every domain, especially after their high-profile success at computer Go. Compared with traditional machine learning (ML) algorithms, DL methods still have a long way to go to achieve recognition in small-molecule drug discovery and development, and much work remains to popularize and apply DL for research purposes such as small-molecule drug research and development. In this review, we mainly discuss several of the most powerful and mainstream architectures, including the convolutional neural network (CNN), recurrent neural network (RNN), and deep auto-encoder networks (DAENs), for supervised and unsupervised learning; summarize the most representative applications in small-molecule drug design; and briefly introduce how DL methods were used in those applications. The pros and cons of DL methods, as well as the main challenges that need to be tackled, are also discussed.

  15. Can machine learning complement traditional medical device surveillance? A case study of dual-chamber implantable cardioverter-defibrillators.

    PubMed

    Ross, Joseph S; Bates, Jonathan; Parzynski, Craig S; Akar, Joseph G; Curtis, Jeptha P; Desai, Nihar R; Freeman, James V; Gamble, Ginger M; Kuntz, Richard; Li, Shu-Xia; Marinac-Dabic, Danica; Masoudi, Frederick A; Normand, Sharon-Lise T; Ranasinghe, Isuru; Shaw, Richard E; Krumholz, Harlan M

    2017-01-01

    Machine learning methods may complement traditional analytic methods for medical device surveillance. Using data from the National Cardiovascular Data Registry for implantable cardioverter-defibrillators (ICDs) linked to Medicare administrative claims for longitudinal follow-up, we applied three statistical approaches to safety-signal detection for commonly used dual-chamber ICDs, using two propensity score (PS) models: one specified by subject-matter experts (PS-SME), and the other by machine-learning-based selection (PS-ML). The first approach used PS-SME and cumulative incidence (time-to-event), the second approach used PS-SME and cumulative risk (Data Extraction and Longitudinal Trend Analysis [DELTA]), and the third approach used PS-ML and cumulative risk (embedded feature selection). Safety-signal surveillance was conducted for eleven dual-chamber ICD models implanted at least 2,000 times over 3 years. Between 2006 and 2010, there were 71,948 Medicare fee-for-service beneficiaries who received dual-chamber ICDs. Cumulative device-specific unadjusted 3-year event rates varied for three surveyed safety signals: death from any cause, 12.8%-20.9%; nonfatal ICD-related adverse events, 19.3%-26.3%; and death from any cause or nonfatal ICD-related adverse event, 27.1%-37.6%. Agreement among safety signals detected/not detected between the time-to-event and DELTA approaches was 90.9% (360 of 396, k = 0.068), between the time-to-event and embedded feature selection approaches was 91.7% (363 of 396, k = -0.028), and between the DELTA and embedded feature selection approaches was 88.1% (349 of 396, k = -0.042). Three statistical approaches, including one machine learning method, identified important safety signals, but without exact agreement. Ensemble methods may be needed to detect all safety signals for further evaluation during medical device surveillance.
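
    Once a propensity model e(x) = P(treatment | covariates) has been fitted, a common next step is inverse-probability weighting; a minimal sketch of the weight computation with invented propensity scores (the study's own PS models and downstream analyses are more involved):

```python
def ipw_weights(treated, ps):
    """Inverse-probability-of-treatment weights from estimated propensity
    scores e(x): 1/e(x) for treated units, 1/(1 - e(x)) for controls."""
    return [1 / p if t else 1 / (1 - p) for t, p in zip(treated, ps)]

treated = [1, 1, 0, 0]
ps      = [0.8, 0.5, 0.5, 0.2]   # hypothetical estimated propensity scores
print(ipw_weights(treated, ps))
```

    Units that received an unlikely assignment get up-weighted, so the reweighted treated and control groups resemble each other on the modeled covariates.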

  16. Machine learning for prediction of all-cause mortality in patients with suspected coronary artery disease: a 5-year multicentre prospective registry analysis

    PubMed Central

    Motwani, Manish; Dey, Damini; Berman, Daniel S.; Germano, Guido; Achenbach, Stephan; Al-Mallah, Mouaz H.; Andreini, Daniele; Budoff, Matthew J.; Cademartiri, Filippo; Callister, Tracy Q.; Chang, Hyuk-Jae; Chinnaiyan, Kavitha; Chow, Benjamin J.W.; Cury, Ricardo C.; Delago, Augustin; Gomez, Millie; Gransar, Heidi; Hadamitzky, Martin; Hausleiter, Joerg; Hindoyan, Niree; Feuchtner, Gudrun; Kaufmann, Philipp A.; Kim, Yong-Jin; Leipsic, Jonathon; Lin, Fay Y.; Maffei, Erica; Marques, Hugo; Pontone, Gianluca; Raff, Gilbert; Rubinshtein, Ronen; Shaw, Leslee J.; Stehli, Julia; Villines, Todd C.; Dunning, Allison; Min, James K.; Slomka, Piotr J.

    2017-01-01

    Aims Traditional prognostic risk assessment in patients undergoing non-invasive imaging is based upon a limited selection of clinical and imaging findings. Machine learning (ML) can consider a greater number and complexity of variables. Therefore, we investigated the feasibility and accuracy of ML to predict 5-year all-cause mortality (ACM) in patients undergoing coronary computed tomographic angiography (CCTA), and compared the performance to existing clinical or CCTA metrics. Methods and results The analysis included 10 030 patients with suspected coronary artery disease and 5-year follow-up from the COronary CT Angiography EvaluatioN For Clinical Outcomes: An InteRnational Multicenter (CONFIRM) registry. All patients underwent CCTA as their standard of care. Twenty-five clinical and 44 CCTA parameters were evaluated, including segment stenosis score (SSS), segment involvement score (SIS), modified Duke index (DI), number of segments with non-calcified, mixed or calcified plaques, age, sex, standard cardiovascular risk factors, and Framingham risk score (FRS). Machine learning involved automated feature selection by information gain ranking, model building with a boosted ensemble algorithm, and 10-fold stratified cross-validation. Seven hundred and forty-five patients died during 5-year follow-up. Machine learning exhibited a higher area under the curve compared with the FRS or CCTA severity scores alone (SSS, SIS, DI) for predicting all-cause mortality (ML: 0.79 vs. FRS: 0.61, SSS: 0.64, SIS: 0.64, DI: 0.62; P < 0.001). Conclusions Machine learning combining clinical and CCTA data was found to predict 5-year ACM significantly better than existing clinical or CCTA metrics alone. PMID:27252451
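
    The information-gain ranking used for feature selection measures how much a candidate feature reduces the entropy of the outcome labels; a stdlib sketch with invented, binarized toy data:

```python
import math

def entropy(labels):
    """Shannon entropy (bits) of a label list."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(v) for v in set(labels)))

def information_gain(feature, labels):
    """Reduction in label entropy after splitting on a discrete feature."""
    n = len(labels)
    remainder = 0.0
    for v in set(feature):
        subset = [y for f, y in zip(feature, labels) if f == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

died    = [1, 1, 0, 0, 0, 0]   # hypothetical 5-year outcome labels
high_ss = [1, 1, 1, 0, 0, 0]   # hypothetical binarized stenosis score
print(round(information_gain(high_ss, died), 3))
```

    Ranking all candidate parameters by this score, then training a boosted ensemble on the top-ranked ones inside stratified cross-validation, mirrors the pipeline the abstract describes.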

  17. Role of artificial intelligence in the care of patients with nonsmall cell lung cancer.

    PubMed

    Rabbani, Mohamad; Kanevsky, Jonathan; Kafi, Kamran; Chandelier, Florent; Giles, Francis J

    2018-04-01

    Lung cancer is the leading cause of cancer death worldwide. In up to 57% of patients, it is diagnosed at an advanced stage, and the 5-year survival rate ranges from 10% to 16%. There has been a significant amount of research using machine learning to generate tools that use patient data to improve outcomes. This narrative review is based on research material obtained from PubMed up to November 2017. The search terms include "artificial intelligence," "machine learning," "lung cancer," "Nonsmall Cell Lung Cancer (NSCLC)," "diagnosis" and "treatment." Recent studies support the use of computer-aided systems and radiomic features to help diagnose lung cancer earlier. Other studies have looked at machine learning (ML) methods that offer prognostic tools to doctors and help them choose personalized treatment options for their patients based on molecular, genetic and histological features. Incorporating artificial intelligence approaches into health care may serve as a beneficial tool for patients with NSCLC, and this review outlines these benefits and current shortcomings throughout the continuum of care. We present a review of the various applications of ML methods in NSCLC as they relate to improving diagnosis, treatment and outcomes. © 2018 Stichting European Society for Clinical Investigation Journal Foundation.

  18. Water quality of Danube Delta systems: ecological status and prediction using machine-learning algorithms.

    PubMed

    Stoica, C; Camejo, J; Banciu, A; Nita-Lazar, M; Paun, I; Cristofor, S; Pacheco, O R; Guevara, M

    2016-01-01

    Environmental issues have a worldwide impact on water bodies, including the Danube Delta, the largest European wetland. Implementation of the Water Framework Directive (2000/60/EC) works toward solving environmental issues at the European and national levels. As a consequence of these pressures, water quality and biocenosis structure have been altered, especially the composition of the macroinvertebrate community, which is closely related to habitat and substrate heterogeneity. This study aims to assess the ecological status of the Southern Branch of the Danube Delta, Saint Gheorghe, using benthic fauna and a computational method as an alternative for monitoring water quality in real time. Analysis of the spatial and temporal variability of unicriterial and multicriterial indices was used to assess the current status of the aquatic systems. In addition, the chemical status was characterized. Coliform bacteria and several chemical parameters were used to feed machine-learning (ML) algorithms to simulate a real-time classification method. Overall, the assessment of the water bodies indicated a moderate ecological status based on the biological quality elements, but a good ecological status based on the chemical and ML-algorithm criteria.
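
    The abstract does not name the ML algorithms used; as a hedged sketch of the kind of real-time classifier that chemical and bacteriological parameters could feed, here is a toy k-nearest-neighbours status classifier (the samples, parameter scales and class labels below are invented):

```python
# invented water samples: (dissolved oxygen mg/L, coliforms CFU/100 mL) -> status class
train = [((9.0, 50), "good"), ((8.5, 80), "good"), ((8.8, 60), "good"),
         ((6.0, 900), "moderate"), ((5.5, 1200), "moderate"), ((6.2, 1000), "moderate")]

def classify(sample, k=3):
    # k-nearest neighbours over roughly normalized parameters
    def dist(a, b):
        return ((a[0] - b[0]) / 10.0) ** 2 + ((a[1] - b[1]) / 1500.0) ** 2
    nearest = sorted(train, key=lambda p: dist(p[0], sample))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)
```

    A new sensor reading can then be classified as it arrives, which is the "real-time" simulation the study describes.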

  19. On-line capacity-building program on "analysis of data" for medical educators in the South Asia region: a qualitative exploration of our experience.

    PubMed

    Dongre, A R; Chacko, T V; Banu, S; Bhandary, S; Sahasrabudhe, R A; Philip, S; Deshmukh, P R

    2010-11-01

    In medical education, using the World Wide Web is a new approach for building the capacity of faculty. However, there is little information available on medical education researchers' needs and their collective learning outcomes in such on-line environments. Hence, the present study attempted: 1) to identify the needs for capacity-building of fellows in a faculty development program on the topic of data analysis; and 2) to describe, analyze and understand the collective learning outcomes of the fellows during this need-based on-line session. The present research is based on quantitative (an on-line survey for needs assessment) and qualitative (contents of e-mails exchanged in a listserv discussion) data generated during the October 2009 Mentoring and Learning (M-L) Web discussion on the topic of data analysis. The data sources were shared e-mail responses during the process of planning and executing the M-L Web discussion. Content analysis was undertaken, and the categories of discussion were presented as a simple non-hierarchical typology representing the collective learning of the project fellows. We identified the types of learning needs on the topic 'Analysis of Data' to be addressed for faculty development in the field of education research. This need-based M-L Web discussion could then facilitate collective learning on topics such as basic concepts in statistics, tests of significance, Likert scale analysis, bivariate correlation, simple regression analysis and content analysis of qualitative data. Steps like identifying the learning needs for an on-line M-L Web discussion, addressing the immediate needs of learners and creating a flexible reflective learning environment on the M-L Web facilitated the collective learning of the fellows on the topic of data analysis. Our outcomes can be useful in the design of on-line pedagogical strategies for supporting research in medical education.

  20. Using a Guided Machine Learning Ensemble Model to Predict Discharge Disposition following Meningioma Resection.

    PubMed

    Muhlestein, Whitney E; Akagi, Dallin S; Kallos, Justiss A; Morone, Peter J; Weaver, Kyle D; Thompson, Reid C; Chambless, Lola B

    2018-04-01

    Objective  Machine learning (ML) algorithms are powerful tools for predicting patient outcomes. This study pilots a novel approach to algorithm selection and model creation, using prediction of discharge disposition following meningioma resection as a proof of concept. Materials and Methods  A diverse set of ML algorithms was trained on a single-institution database of meningioma patients to predict discharge disposition. Algorithms were ranked by predictive power, and top performers were combined to create an ensemble model. The final ensemble was internally validated on never-before-seen data to demonstrate generalizability. The predictive power of the ensemble was compared with a logistic regression. Further analyses were performed to identify how important variables impact the ensemble. Results  Our ensemble model predicted disposition significantly better than a logistic regression (area under the curve of 0.78 and 0.71, respectively, p = 0.01). Tumor size, presentation at the emergency department, body mass index, convexity location, and preoperative motor deficit most strongly influence the model, though the independent impact of individual variables is nuanced. Conclusion  Using a novel ML technique, we built a guided ML ensemble model that predicts discharge destination following meningioma resection with greater predictive power than a logistic regression, and that provides greater clinical insight than a univariate analysis. These techniques can be extended to predict many other patient outcomes of interest.
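
    A minimal sketch of the rank-then-combine idea described above, with invented held-out probabilities standing in for trained base models (not the authors' actual pipeline):

```python
# invented held-out probabilities from three already-trained base models
model_probs = {
    "gradient_boosting": [0.9, 0.2, 0.8, 0.3],
    "random_forest":     [0.8, 0.3, 0.7, 0.4],
    "naive_bayes":       [0.6, 0.6, 0.5, 0.5],
}
truth = [1, 0, 1, 0]   # validation labels (e.g. non-home discharge = 1)

def accuracy(probs, labels):
    return sum((p >= 0.5) == bool(l) for p, l in zip(probs, labels)) / len(labels)

# rank base models by validation performance and keep the top performers
ranked = sorted(model_probs, key=lambda m: -accuracy(model_probs[m], truth))
top = ranked[:2]
# ensemble prediction: average the top performers' probabilities
ensemble = [sum(model_probs[m][i] for m in top) / len(top) for i in range(len(truth))]
```

    The real study ranks many more algorithms and validates the combined model on held-out data.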

  1. Advanced methods in NDE using machine learning approaches

    NASA Astrophysics Data System (ADS)

    Wunderlich, Christian; Tschöpe, Constanze; Duckhorn, Frank

    2018-04-01

    Machine learning (ML) methods and algorithms have recently been applied with great success in quality control and predictive maintenance. Their goal, to build on new and/or existing algorithms that learn from training data and give accurate predictions or find patterns, particularly in new and unseen but similar data, fits non-destructive evaluation (NDE) perfectly. The advantages of ML in NDE are obvious in tasks such as pattern recognition in acoustic signals or automated processing of images from X-ray, ultrasonic or optical methods. Fraunhofer IKTS uses machine learning algorithms in acoustic signal analysis, and the approach has been applied to a variety of tasks in quality assessment. The principal approach is based on acoustic signal processing, with primary and secondary analysis steps followed by a cognitive system that creates model data. Already in the secondary analysis step, unsupervised learning algorithms such as principal component analysis are used to simplify data structures. In the cognitive part of the software, further unsupervised and supervised learning algorithms are trained. The sensor signals from unknown samples can then be recognized and classified automatically by the previously trained algorithms. Recently, the IKTS team was able to transfer the software for signal processing and pattern recognition to a small printed circuit board (PCB). Algorithms are still trained on an ordinary PC; however, the trained algorithms run on the digital signal processor and the FPGA chip. The identical approach will be used for pattern recognition in image analysis of OCT pictures. Some key requirements have to be fulfilled, however: a sufficiently large set of training data, a high signal-to-noise ratio, and an optimized and exact fixation of components. The automated testing can subsequently be done by the machine.
    By integrating the test data of many components along the value chain, further optimization, including lifetime and durability prediction based on big data, becomes possible, even if components are used in different versions or configurations. This is the promise behind German Industry 4.0.
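
    Principal component analysis, used in the secondary analysis step above to simplify data structures, can be sketched self-containedly; the toy "acoustic features" below are two correlated measurements per signal, and the first principal component is recovered by power iteration:

```python
import random
random.seed(0)

def first_principal_component(points, iters=200):
    # center the data
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    # 2x2 covariance matrix entries
    cxx = sum(x * x for x, _ in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    # power iteration converges to the dominant eigenvector (main variance direction)
    vx, vy = 1.0, 0.0
    for _ in range(iters):
        nx, ny = cxx * vx + cxy * vy, cxy * vx + cyy * vy
        norm = (nx * nx + ny * ny) ** 0.5
        vx, vy = nx / norm, ny / norm
    return vx, vy

# toy "acoustic features": two strongly correlated measurements per signal
points = [(float(i), 2.0 * i + random.uniform(-0.1, 0.1)) for i in range(20)]
vx, vy = first_principal_component(points)
```

    Projecting each sample onto this direction compresses the two correlated features into one, which is the data-simplification role PCA plays in the pipeline.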

  2. Learning a force field for the martensitic phase transformation in Zr

    NASA Astrophysics Data System (ADS)

    Zong, Hongxiang; Pilania, Ghanshyam; Ramprasad, Rampi; Lookman, Turab

    Atomic simulations provide an effective means to understand the underlying physics of martensitic transformations under extreme conditions. However, this is still a challenge for certain phase-transforming metals due to the lack of an accurate classical force field. Quantum molecular dynamics (QMD) simulations are accurate but expensive. During the course of QMD simulations, similar configurations are constantly visited and revisited. Machine learning can effectively learn from past visits and, therefore, eliminate such redundancies. In this talk, we will discuss the development of a hybrid ML-QMD method in which on-demand, on-the-fly quantum mechanical (QM) calculations are performed to accelerate calculations of interatomic forces at much lower computational cost. Using zirconium as a model system, for which accurate atomistic potentials are currently unavailable, we will demonstrate the feasibility and effectiveness of our approach. Specifically, the computed structural phase transformation behavior within the ML-QMD approach will be compared with available experimental results. Furthermore, results on phonons, stacking fault energies, and activation barriers for the homogeneous martensitic transformation in Zr will be presented.
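
    The on-demand, on-the-fly idea can be caricatured in a few lines: a surrogate reuses forces from previously visited, similar configurations and falls back to the expensive QM call only in unseen regions. The 1-D harmonic "QM" force, the nearest-neighbour lookup and the trust radius below are invented stand-ins for the authors' actual ML model:

```python
def qm_force(x):
    # stand-in for an expensive quantum-mechanical force call (1-D harmonic toy)
    return -2.0 * x

class OnTheFlyForceField:
    """Reuse forces from similar past configurations; call 'QM' only when extrapolating."""
    def __init__(self, trust_radius=0.5):
        self.trust_radius = trust_radius
        self.database = []        # (configuration, force) pairs from past QM calls
        self.qm_calls = 0

    def force(self, x):
        if self.database:
            xn, fn = min(self.database, key=lambda pair: abs(pair[0] - x))
            if abs(xn - x) <= self.trust_radius:
                return fn         # a similar configuration was already visited
        f = qm_force(x)           # unseen region: pay for one QM call, remember it
        self.qm_calls += 1
        self.database.append((x, f))
        return f

ff = OnTheFlyForceField()
trajectory = [0.0, 0.1, 0.2, 1.5, 1.6, 0.05]
forces = [ff.force(x) for x in trajectory]   # only 2 of 6 steps trigger a QM call
```

    The real method interpolates forces from many past configurations rather than copying the nearest one, but the economics are the same: redundant visits cost almost nothing.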

  3. Machine Learning Based Evaluation of Reading and Writing Difficulties.

    PubMed

    Iwabuchi, Mamoru; Hirabayashi, Rumi; Nakamura, Kenryu; Dim, Nem Khan

    2017-01-01

    The possibility of automatic evaluation of reading and writing difficulties was investigated using a non-parametric machine learning (ML) regression technique on URAWSS (Understanding Reading and Writing Skills of Schoolchildren) [1] test data from 168 children in grades 1-9. The results showed that the ML model gave better predictions than the ordinary rule-based decision.

  4. Memory effects of Aronia melanocarpa fruit juice in a passive avoidance test in rats.

    PubMed

    Valcheva-Kuzmanova, Stefka V; Eftimov, Miroslav Tz; Tashev, Roman E; Belcheva, Iren P; Belcheva, Stiliana P

    2014-01-01

    To study the effect of Aronia melanocarpa fruit juice on memory in male Wistar rats. The juice was administered orally for 7, 14, 21 and 30 days at doses of 2.5 ml/kg, 5 ml/kg and 10 ml/kg. Memory was assessed in the one-way passive avoidance task (step-through), which consisted of one training session and two retention tests (3 hours and 24 hours after training). The variables measured were the latency time to step into the dark compartment of the apparatus and the learning criterion (remaining in the illuminated compartment for at least 180 sec). Oral administration of Aronia melanocarpa fruit juice for 7 and 14 days resulted in a dose-dependent tendency to increase the latency time and the learning criterion compared with saline-treated controls, but the effect failed to reach statistical significance. After 21 days of treatment, the juice dose-dependently prolonged the latency time at the retention tests, the effect being significant at doses of 5 ml/kg and 10 ml/kg. Applied for 30 days, the juice at all tested doses significantly increased the latency time at the retention tests, and the dose of 10 ml/kg significantly increased the percentage of rats reaching the learning criterion. These findings suggest that Aronia melanocarpa fruit juice could improve memory in rats. The effect is probably due to the polyphenolic ingredients of the juice, which have been shown to be involved in learning and memory processes.

  5. Data mining in bioinformatics using Weka.

    PubMed

    Frank, Eibe; Hall, Mark; Trigg, Len; Holmes, Geoffrey; Witten, Ian H

    2004-10-12

    The Weka machine learning workbench provides a general-purpose environment for automatic classification, regression, clustering and feature selection: common data mining problems in bioinformatics research. It contains an extensive collection of machine learning algorithms and data pre-processing methods, complemented by graphical user interfaces for data exploration and for the experimental comparison of different machine learning techniques on the same problem. Weka can process data given in the form of a single relational table. Its main objectives are to (a) assist users in extracting useful information from data and (b) enable them to easily identify a suitable algorithm for generating an accurate predictive model from it. http://www.cs.waikato.ac.nz/ml/weka.
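
    Weka itself is a Java workbench; as a language-neutral illustration of the workflow it supports (comparing learners on the same problem under cross-validation), here is a toy leave-one-out comparison of two classifiers, with invented two-feature data:

```python
# invented dataset: (features, class label)
dataset = [((1.0, 1.0), "a"), ((1.2, 0.9), "a"), ((0.9, 1.1), "a"),
           ((3.0, 3.0), "b"), ((3.1, 2.9), "b"), ((2.9, 3.2), "b")]

def majority_baseline(train, x):
    # predicts the most frequent training label (ignores the features x)
    labels = [l for _, l in train]
    return max(set(labels), key=labels.count)

def one_nearest_neighbour(train, x):
    # predicts the label of the closest training point
    return min(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[1]

def leave_one_out_accuracy(classifier):
    hits = 0
    for i, (x, label) in enumerate(dataset):
        train = dataset[:i] + dataset[i + 1:]
        hits += classifier(train, x) == label
    return hits / len(dataset)

scores = {c.__name__: leave_one_out_accuracy(c)
          for c in (majority_baseline, one_nearest_neighbour)}
```

    Weka automates exactly this kind of side-by-side evaluation, for far larger algorithm collections and datasets.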

  6. Coronary CT Angiography-derived Fractional Flow Reserve: Machine Learning Algorithm versus Computational Fluid Dynamics Modeling.

    PubMed

    Tesche, Christian; De Cecco, Carlo N; Baumann, Stefan; Renker, Matthias; McLaurin, Tindal W; Duguay, Taylor M; Bayer, Richard R; Steinberg, Daniel H; Grant, Katharine L; Canstein, Christian; Schwemmer, Chris; Schoebinger, Max; Itu, Lucian M; Rapaka, Saikiran; Sharma, Puneet; Schoepf, U Joseph

    2018-04-10

    Purpose To compare two technical approaches for determination of coronary computed tomography (CT) angiography-derived fractional flow reserve (FFR): FFR derived from coronary CT angiography based on computational fluid dynamics (hereafter, FFR-CFD) and FFR derived from coronary CT angiography based on a machine learning algorithm (hereafter, FFR-ML), against coronary CT angiography and quantitative coronary angiography (QCA). Materials and Methods A total of 85 patients (mean age, 62 years ± 11 [standard deviation]; 62% men) who had undergone coronary CT angiography followed by invasive FFR were included in this single-center retrospective study. FFR values were derived on-site from coronary CT angiography data sets by using both FFR-CFD and FFR-ML. The performance of both techniques for detecting lesion-specific ischemia was compared against visual stenosis grading at coronary CT angiography, QCA, and invasive FFR as the reference standard. Results On a per-lesion and per-patient level, FFR-ML showed a sensitivity of 79% and 90% and a specificity of 94% and 95%, respectively, for detecting lesion-specific ischemia, while FFR-CFD resulted in a sensitivity of 79% and 89% and a specificity of 93% and 93%, respectively (P = .86 and P = .92). On a per-lesion level, the area under the receiver operating characteristic curve (AUC) of 0.89 for FFR-ML and 0.89 for FFR-CFD showed significantly higher discriminatory power for detecting lesion-specific ischemia compared with that of coronary CT angiography (AUC, 0.61) and QCA (AUC, 0.69) (all P < .0001). Also, on a per-patient level, FFR-ML (AUC, 0.91) and FFR-CFD (AUC, 0.91) performed significantly better than did coronary CT angiography (AUC, 0.65) and QCA (AUC, 0.68) (all P < .0001). Processing time for FFR-ML was significantly shorter compared with that of FFR-CFD (40.5 minutes ± 6.3 vs 43.4 minutes ± 7.1; P = .042).
    Conclusion The FFR-ML algorithm performs equally well in detecting lesion-specific ischemia when compared with the FFR-CFD approach. Both methods outperform the accuracy of coronary CT angiography and QCA in the detection of flow-limiting stenosis. © RSNA, 2018.

  7. Formal Validation of Fault Management Design Solutions

    NASA Technical Reports Server (NTRS)

    Gibson, Corrina; Karban, Robert; Andolfato, Luigi; Day, John

    2013-01-01

    The work presented in this paper describes an approach used to develop SysML modeling patterns to express the behavior of fault protection, test the model's logic by performing fault injection simulations, and verify the fault protection system's logical design via model checking. A representative example, using a subset of the fault protection design for the Soil Moisture Active-Passive (SMAP) system, was modeled with SysML State Machines and JavaScript as Action Language. The SysML model captures interactions between relevant system components and system behavior abstractions (mode managers, error monitors, fault protection engine, and devices/switches). Development of a method to implement verifiable and lightweight executable fault protection models enables future missions to have access to larger fault test domains and verifiable design patterns. A tool-chain to transform the SysML model to jpf-Statechart compliant Java code and then verify the generated code via model checking was established. Conclusions and lessons learned from this work are also described, as well as potential avenues for further research and development.

  8. Can machine learning complement traditional medical device surveillance? A case study of dual-chamber implantable cardioverter–defibrillators

    PubMed Central

    Ross, Joseph S; Bates, Jonathan; Parzynski, Craig S; Akar, Joseph G; Curtis, Jeptha P; Desai, Nihar R; Freeman, James V; Gamble, Ginger M; Kuntz, Richard; Li, Shu-Xia; Marinac-Dabic, Danica; Masoudi, Frederick A; Normand, Sharon-Lise T; Ranasinghe, Isuru; Shaw, Richard E; Krumholz, Harlan M

    2017-01-01

    Background Machine learning methods may complement traditional analytic methods for medical device surveillance. Methods and results Using data from the National Cardiovascular Data Registry for implantable cardioverter–defibrillators (ICDs) linked to Medicare administrative claims for longitudinal follow-up, we applied three statistical approaches to safety-signal detection for commonly used dual-chamber ICDs, drawing on two propensity score (PS) models: one specified by subject-matter experts (PS-SME) and the other by machine learning-based selection (PS-ML). The first approach used PS-SME and cumulative incidence (time-to-event), the second approach used PS-SME and cumulative risk (Data Extraction and Longitudinal Trend Analysis [DELTA]), and the third approach used PS-ML and cumulative risk (embedded feature selection). Safety-signal surveillance was conducted for eleven dual-chamber ICD models implanted at least 2,000 times over 3 years. Between 2006 and 2010, there were 71,948 Medicare fee-for-service beneficiaries who received dual-chamber ICDs. Cumulative device-specific unadjusted 3-year event rates varied for the three surveyed safety signals: death from any cause, 12.8%–20.9%; nonfatal ICD-related adverse events, 19.3%–26.3%; and death from any cause or nonfatal ICD-related adverse event, 27.1%–37.6%. Agreement among safety signals detected/not detected was 90.9% (360 of 396, κ=0.068) between the time-to-event and DELTA approaches, 91.7% (363 of 396, κ=−0.028) between the time-to-event and embedded feature-selection approaches, and 88.1% (349 of 396, κ=−0.042) between the DELTA and embedded feature-selection approaches. Conclusion Three statistical approaches, including one machine learning method, identified important safety signals, but without exact agreement. Ensemble methods may be needed to detect all safety signals for further evaluation during medical device surveillance. PMID:28860874

  9. Mathematizing Process of Junior High School Students to Improve Mathematics Literacy Refers PISA on RCP Learning

    NASA Astrophysics Data System (ADS)

    Wardono; Mariani, S.; Hendikawati, P.; Ikayani

    2017-04-01

    Mathematizing process (MP) is the process of modeling a phenomenon mathematically or establishing the concept of a phenomenon. There are two kinds of mathematizing: horizontal mathematizing (MH) and vertical mathematizing (MV). MH turns contextual problems into mathematical problems, while MV formulates the problem into a variety of mathematical solutions by using appropriate rules. Mathematics literacy (ML) is the ability to formulate, employ and interpret mathematics in various contexts, including the capacity to reason mathematically and to use concepts, procedures and facts to describe, explain or predict phenomena. If junior high school students are continuously conditioned to conduct mathematizing activities in RCP (RME-Card Problem) learning, their ML as defined by PISA can improve. The purposes of this research are to determine whether the MP of grade VIII students on the ML content of shape and space, for the topic of cubes and beams, is better with RCP learning than with scientific learning; whether the improvement of MP with RCP learning is better than with scientific learning in terms of reflective and impulsive cognitive styles; and to describe the MP of grade VIII students under RCP learning in terms of reflective and impulsive cognitive styles. This research uses the concurrent embedded mixed-methods model. The population was grade VIII students of SMPN 1 Batang, with a sample of two classes. Data were collected through observation, interviews and tests, and analyzed with a one-sided test of mean differences and descriptive qualitative methods. The results demonstrate that the MP of students with RCP learning is better than with scientific learning, and that the improvement of MP with RCP learning is better than with scientific learning in terms of reflective and impulsive cognitive styles.
    Reflective subjects in the upper, middle and lower groups could meet all the MH indicators; reflective subjects in the upper and middle groups could also meet all the MV indicators, while those in the lower group could only fulfill some MV indicators. Impulsive subjects in the upper and middle groups could meet all the MH indicators, while those in the lower group could only meet some MH indicators; impulsive subjects in the upper group could meet all the MV indicators, while those in the middle and lower groups could only fulfill some MV indicators.

  10. Predicting novel microRNA: a comprehensive comparison of machine learning approaches.

    PubMed

    Stegmayer, Georgina; Di Persia, Leandro E; Rubiolo, Mariano; Gerard, Matias; Pividori, Milton; Yones, Cristian; Bugnon, Leandro A; Rodriguez, Tadeo; Raad, Jonathan; Milone, Diego H

    2018-05-23

    The importance of microRNAs (miRNAs) is widely recognized in the community nowadays, because these short segments of RNA can play several roles in almost all biological processes. The computational prediction of novel miRNAs involves training a classifier to identify sequences having the highest chance of being precursors of miRNAs (pre-miRNAs). The big issue with this task is that well-known pre-miRNAs are usually few in comparison with the hundreds of thousands of candidate sequences in a genome, which results in high class imbalance. This imbalance has a strong influence on most standard classifiers, and if it is not properly addressed in the model and the experiments, not only can the reported performance be completely unrealistic, but the classifier may also fail to work properly for pre-miRNA prediction. Another important issue is that most of the machine learning (ML) approaches already used (supervised methods) require both positive and negative examples. The selection of positive examples is straightforward (well-known pre-miRNAs). However, it is difficult to build a representative set of negative examples, because they should be sequences with a hairpin structure that do not contain a pre-miRNA. This review provides a comprehensive study and comparative assessment of methods from these two ML approaches for dealing with the prediction of novel pre-miRNAs: supervised and unsupervised training. We present and analyze the ML proposals that have appeared in the literature during the past 10 years. They have been compared in several prediction tasks involving two model genomes and increasing imbalance levels. This work provides a review of existing ML approaches for pre-miRNA prediction and fair comparisons of the classifiers with the same features and data sets, instead of just a revision of published software tools.
The results and the discussion can help the community to select the most adequate bioinformatics approach according to the prediction task at hand. The comparative results obtained suggest that from low to mid-imbalance levels between classes, supervised methods can be the best. However, at very high imbalance levels, closer to real case scenarios, models including unsupervised and deep learning can provide better performance.
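
    The review's central warning, that unaddressed class imbalance makes reported performance unrealistic, is easy to demonstrate with invented numbers: a degenerate classifier that always predicts the majority class looks excellent by accuracy yet finds no pre-miRNA at all.

```python
# invented genome-scale imbalance: 990 negative candidate hairpins, 10 true pre-miRNAs
labels = [0] * 990 + [1] * 10
# degenerate classifier: always predict the majority (negative) class
predictions = [0] * 1000

accuracy = sum(p == l for p, l in zip(predictions, labels)) / len(labels)
recall = sum(p == 1 and l == 1 for p, l in zip(predictions, labels)) / sum(labels)
# accuracy is 0.99, but recall on the positive class is 0.0
```

    This is why imbalance-aware evaluation (recall, precision, class-wise metrics) matters more than raw accuracy in pre-miRNA prediction.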

  11. Examining Mobile Learning Trends 2003-2008: A Categorical Meta-Trend Analysis Using Text Mining Techniques

    ERIC Educational Resources Information Center

    Hung, Jui-Long; Zhang, Ke

    2012-01-01

    This study investigated the longitudinal trends of academic articles in Mobile Learning (ML) using text mining techniques. One hundred and nineteen (119) refereed journal articles and proceedings papers from the SCI/SSCI database were retrieved and analyzed. The taxonomies of ML publications were grouped into twelve clusters (topics) and four…

  12. Improving precision of glomerular filtration rate estimating model by ensemble learning.

    PubMed

    Liu, Xun; Li, Ningshan; Lv, Linsheng; Fu, Yongmei; Cheng, Cailian; Wang, Caixia; Ye, Yuqiu; Li, Shaomin; Lou, Tanqi

    2017-11-09

    Accurate assessment of kidney function is clinically important, but estimates of glomerular filtration rate (GFR) by regression are imprecise. We hypothesized that ensemble learning could improve precision. A total of 1419 participants were enrolled, with 1002 in the development dataset and 417 in the external validation dataset. GFR was independently estimated from age, sex and serum creatinine using an artificial neural network (ANN), support vector machine (SVM), regression, and ensemble learning. GFR was measured by 99mTc-DTPA renal dynamic imaging calibrated with dual plasma sample 99mTc-DTPA GFR. Mean measured GFRs were 70.0 ml/min/1.73 m² in the development and 53.4 ml/min/1.73 m² in the external validation cohorts. In the external validation cohort, precision was better in the ensemble model of the ANN, SVM and regression equation (IQR = 13.5 ml/min/1.73 m²) than in the new regression model (IQR = 14.0 ml/min/1.73 m², P < 0.001). The precision of ensemble learning was the best of the three models, but the models had similar bias and accuracy. The median difference ranged from 2.3 to 3.7 ml/min/1.73 m², 30% accuracy ranged from 73.1 to 76.0%, and P was > 0.05 for all comparisons of the new regression equation and the other new models. An ensemble learning model including three variables, the average ANN, SVM, and regression equation values, was more precise than the new regression model. A more complex ensemble learning strategy may further improve GFR estimates.
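
    The ensemble described above averages the three estimators' outputs. A toy sketch with invented GFR estimates (the real base models are an ANN, an SVM and a regression equation trained on age, sex and creatinine):

```python
# invented GFR estimates (ml/min/1.73 m²) for five patients from three base models
ann_est  = [72.0, 55.0, 88.0, 40.0, 61.0]
svm_est  = [70.0, 53.0, 86.0, 44.0, 63.0]
reg_est  = [65.0, 58.0, 90.0, 42.0, 59.0]
measured = [69.0, 55.0, 87.0, 42.0, 61.0]

# ensemble estimate: the average of the three models' outputs per patient
ensemble = [(a + s + r) / 3 for a, s, r in zip(ann_est, svm_est, reg_est)]

def median_abs_error(estimates):
    # a simple precision-style summary of estimation error
    errors = sorted(abs(e - m) for e, m in zip(estimates, measured))
    return errors[len(errors) // 2]
```

    Averaging tends to cancel the individual models' uncorrelated errors, which is the intuition behind the precision gain the study reports.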

  13. Relationship Between Non-invasive Brain Stimulation-induced Plasticity and Capacity for Motor Learning.

    PubMed

    López-Alonso, Virginia; Cheeran, Binith; Fernández-del-Olmo, Miguel

    2015-01-01

    Cortical plasticity plays a key role in motor learning (ML). Non-invasive brain stimulation (NIBS) paradigms have been used to modulate plasticity in the human motor cortex in order to facilitate ML. However, little is known about the relationship between NIBS-induced plasticity over M1 and ML capacity. We hypothesized that NIBS-induced motor evoked potential (MEP) changes are related to ML capacity. Fifty-six subjects participated in three NIBS sessions (paired associative stimulation, anodal transcranial direct current stimulation and intermittent theta-burst stimulation) and in three lab-based ML task sessions (serial reaction time, visuomotor adaptation and sequential visual isometric pinch task). After clustering the patterns of response to the different NIBS protocols, we compared the ML variables between the different patterns found. We used regression analysis to explore further the relationship between ML capacity and summary measures of the MEP changes, and ran correlations with the "responders" group only. We found no differences in ML variables between clusters. Greater response to NIBS protocols may be predictive of poor performance within certain blocks of the VAT. "Responders" to AtDCS and to iTBS showed significantly faster reaction times than "non-responders"; however, the physiological significance of these results is uncertain. MEP changes induced in M1 by PAS, AtDCS and iTBS appear to have little, if any, association with the ML capacity tested with the SRTT, the VAT and the SVIPT. However, cortical excitability changes induced in M1 by AtDCS and iTBS may be related to reaction time and retention of newly acquired skills in certain motor learning tasks. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Towards the Automatic Detection of Pre-Existing Termite Mounds through UAS and Hyperspectral Imagery.

    PubMed

    Sandino, Juan; Wooler, Adam; Gonzalez, Felipe

    2017-09-24

    The increased technological development of Unmanned Aerial Vehicles (UAVs), combined with artificial intelligence and machine learning (ML) approaches, has opened the possibility of remote sensing of extensive areas of arid lands. In this paper, a novel approach towards the detection of termite mounds using a UAV, hyperspectral imagery, ML and digital image processing is presented. A new pipeline process is proposed to detect termite mounds automatically and, consequently, to reduce detection times. For the classification stage, the outcomes of several ML classification algorithms were studied, with support vector machines selected as the best approach for image classification of pre-existing termite mounds. Various test conditions were applied to the proposed algorithm, obtaining an overall accuracy of 68%. Images with satisfactory mound detection proved that the method is resolution-dependent. Mounds were detected regardless of their rotation and position in the aerial image. However, image distortion reduced the number of detected mounds due to the inclusion of a shape analysis method in the object detection phase, and image resolution remains determinant for accurate results. Hyperspectral imagery demonstrated better capability to classify a large set of materials than implementing traditional segmentation methods on RGB images only.
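
    The classification stage assigns each pixel's spectral features to mound or background. As a compact stand-in for the support vector machine the authors selected, here is a perceptron (a simpler linear classifier) trained on invented two-band "pixels":

```python
import random
random.seed(1)

# invented two-band "pixels": mound spectra cluster near (2, 2), background near (0.5, 0.5)
pixels = ([((2.0 + random.uniform(-0.3, 0.3), 2.0 + random.uniform(-0.3, 0.3)), 1)
           for _ in range(20)] +
          [((0.5 + random.uniform(-0.3, 0.3), 0.5 + random.uniform(-0.3, 0.3)), -1)
           for _ in range(20)])

# perceptron training: update the linear boundary on every misclassified pixel
w1 = w2 = b = 0.0
for _ in range(200):                              # epochs over the labelled pixels
    for (x1, x2), y in pixels:
        if y * (w1 * x1 + w2 * x2 + b) <= 0:      # misclassified: nudge the boundary
            w1 += y * x1
            w2 += y * x2
            b += y

predict = lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else -1
accuracy = sum(predict(*x) == y for x, y in pixels) / len(pixels)
```

    A real SVM additionally maximizes the margin between the classes and handles many spectral bands, but the pixel-wise decision it produces has the same linear form.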

  15. Designing Contestability: Interaction Design, Machine Learning, and Mental Health

    PubMed Central

    Hirsch, Tad; Merced, Kritzia; Narayanan, Shrikanth; Imel, Zac E.; Atkins, David C.

    2017-01-01

    We describe the design of an automated assessment and training tool for psychotherapists to illustrate challenges with creating interactive machine learning (ML) systems, particularly in contexts where human life, livelihood, and wellbeing are at stake. We explore how existing theories of interaction design and machine learning apply to the psychotherapy context, and identify “contestability” as a new principle for designing systems that evaluate human behavior. Finally, we offer several strategies for making ML systems more accountable to human actors. PMID:28890949

  16. A hybrid genetic algorithm-extreme learning machine approach for accurate significant wave height reconstruction

    NASA Astrophysics Data System (ADS)

    Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.

    2015-08-01

    Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to gaps of missing data. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On the one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough to assist other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) in reconstructing Hs. The results show that all the ML methods explored achieve a good Hs reconstruction in the two different locations studied (Caribbean Sea and West Atlantic).
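
    A toy sketch of the wrapper idea described above: a genetic algorithm searches over feature subsets, scoring each by the reconstruction error of a regressor restricted to those features. For self-containedness, a 1-nearest-neighbour regressor stands in for the extreme learning machine, and the buoy data are invented:

```python
import random
random.seed(0)

# invented buoys: target-buoy Hs depends on neighbour parameters 0 and 2; parameter 1 is noise
data = []
for _ in range(30):
    f = [random.random(), random.random(), random.random()]
    data.append((f, 2.0 * f[0] + f[2]))          # (neighbour parameters, target Hs)

def fitness(mask):
    """Leave-one-out error of a 1-NN regressor restricted to the selected features."""
    if not any(mask):
        return float("inf")
    err = 0.0
    for i, (f, hs) in enumerate(data):
        others = data[:i] + data[i + 1:]
        _, nhs = min(others, key=lambda p: sum((a - b) ** 2 for a, b, m
                                               in zip(p[0], f, mask) if m))
        err += abs(nhs - hs)
    return err

# a tiny genetic algorithm over feature bitmasks
population = [[random.randint(0, 1) for _ in range(3)] for _ in range(8)]
for _ in range(15):
    population.sort(key=fitness)
    parents = population[:4]                      # elitist selection
    children = []
    for _ in range(4):
        a, b = random.sample(parents, 2)
        child = [random.choice(pair) for pair in zip(a, b)]   # uniform crossover
        if random.random() < 0.3:                             # mutation
            k = random.randrange(3)
            child[k] ^= 1
        children.append(child)
    population = parents + children
best = min(population, key=fitness)
```

    In the paper, the fitness function is instead the Hs reconstruction error of an extreme learning machine, but the selection loop is the same.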

  17. Machine Learning and Neurosurgical Outcome Prediction: A Systematic Review.

    PubMed

    Senders, Joeky T; Staples, Patrick C; Karhade, Aditya V; Zaki, Mark M; Gormley, William B; Broekman, Marike L D; Smith, Timothy R; Arnaout, Omar

    2018-01-01

    Accurate measurement of surgical outcomes is highly desirable to optimize surgical decision-making. An important element of surgical decision-making is identification of the patient cohort that will benefit from surgery before the intervention. Machine learning (ML) enables computers to learn from previous data to make accurate predictions on new data. In this systematic review, we evaluate the potential of ML for neurosurgical outcome prediction. A systematic search in the PubMed and Embase databases was performed to identify all potentially relevant studies up to January 1, 2017. Thirty studies were identified that evaluated ML algorithms used as prediction models for survival, recurrence, symptom improvement, and adverse events in patients undergoing surgery for epilepsy, brain tumor, spinal lesions, neurovascular disease, movement disorders, traumatic brain injury, and hydrocephalus. Depending on the specific prediction task evaluated and the type of input features included, ML models predicted outcomes after neurosurgery with a median accuracy and area under the receiver operating characteristic curve of 94.5% and 0.83, respectively. Compared with logistic regression, ML models performed significantly better and showed a median absolute improvement in accuracy and area under the receiver operating characteristic curve of 15% and 0.06, respectively. Some studies also demonstrated a better performance of ML models compared with established prognostic indices and clinical experts. In the research setting, ML has been studied extensively, demonstrating an excellent performance in outcome prediction for a wide range of neurosurgical conditions. However, future studies should investigate how ML can be implemented as a practical tool supporting neurosurgical care. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. [Effects of chrysalis oil on learning, memory and oxidative stress in D-galactose-induced ageing model of mice].

    PubMed

    Chen, Weiping; Yang, Qiongjie; Wei, Xing

    2013-11-01

    To investigate the effects of chrysalis oil on learning, memory and oxidative stress in a D-galactose-induced ageing model of mice. Mice were injected intraperitoneally with D-galactose daily and simultaneously received chrysalis oil intragastrically for 30 d. Mice then underwent a space navigation test and a spatial probe test, and superoxide dismutase (SOD) and glutathione peroxidase (GSH-PX) activities and malondialdehyde (MDA) content in mouse brain were measured. Compared to the model group, escape latency in mice treated with 6 ml/kg*d chrysalis oil was significantly shorter (P<0.05), and crossing times in the 12 ml/kg*d and 6 ml/kg*d chrysalis oil groups were significantly increased (P<0.05). Chrysalis oil treatment (12 ml/kg*d) significantly increased SOD and GSH-PX activity and reduced MDA content in the brain of D-galactose-induced aging mice. Chrysalis oil can improve the ability of learning and memory in D-galactose-induced aging mice, and inhibit peroxidation in brain tissue.

  19. Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants.

    PubMed

    Navarro, Pedro J; Pérez, Fernando; Weiss, Julia; Egea-Cortines, Marcos

    2016-05-05

    Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple. However, data handling and analysis are not as developed as the sampling capacities. We present a system based on machine learning (ML) algorithms and computer vision intended to solve the automatic phenotype data analysis in plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near-infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and Support Vector Machine (SVM). Each ML algorithm was executed with different kernel functions and trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% with kNN for RGB images and 99.34% with SVM for NIR. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully but may require different ML algorithms for segmentation.
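
The comparison described (kNN, NBC and SVM, each trained on raw and normalised data) can be sketched with scikit-learn pipelines. The pixel features and labels below are synthetic placeholders, not the study's plant images:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# synthetic per-pixel channel values with a background/plant label (illustrative)
X = np.vstack([rng.normal(0.3, 0.1, (300, 3)), rng.normal(0.7, 0.1, (300, 3))])
y = np.array([0] * 300 + [1] * 300)

for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("NBC", GaussianNB()),
                  ("SVM", SVC(kernel="rbf"))]:
    for norm in [None, StandardScaler(), MinMaxScaler()]:
        # build a pipeline with or without a normalisation step
        model = make_pipeline(*(s for s in [norm, clf] if s is not None))
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name:4s} {type(norm).__name__ if norm is not None else 'raw':14s} "
              f"acc={acc:.3f}")
```

Cross-validated accuracy per (classifier, normalisation) pair is the kind of metric grid the abstract reports.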

  20. Early identification of posttraumatic stress following military deployment: Application of machine learning methods to a prospective study of Danish soldiers.

    PubMed

    Karstoft, Karen-Inge; Statnikov, Alexander; Andersen, Søren B; Madsen, Trine; Galatzer-Levy, Isaac R

    2015-09-15

    Pre-deployment identification of soldiers at risk for long-term posttraumatic stress psychopathology after homecoming is important to guide decisions about deployment. Early post-deployment identification can direct early interventions to those in need and thereby prevent the development of chronic psychopathology. Both hold significant public health benefits given the large numbers of deployed soldiers, but neither has so far been achieved. Here, we aim to assess the potential for pre- and early post-deployment prediction of resilience or posttraumatic stress development in soldiers by application of machine learning (ML) methods. ML feature selection and prediction algorithms were applied to a prospective cohort of 561 Danish soldiers deployed to Afghanistan in 2009 to identify unique risk indicators and forecast long-term posttraumatic stress responses. Robust pre- and early post-deployment risk indicators were identified, and included individual PTSD symptoms as well as total level of PTSD symptoms, previous trauma and treatment, negative emotions, and thought suppression. The predictive performance of these risk indicators combined was assessed by cross-validation. Together, these indicators forecasted long-term posttraumatic stress responses with high accuracy (pre-deployment: AUC = 0.84 (95% CI = 0.81-0.87), post-deployment: AUC = 0.88 (95% CI = 0.85-0.91)). This study utilized a previously collected data set and was therefore not designed to exhaust the potential of ML methods. Further, the study relied solely on self-reported measures. Pre-deployment and early post-deployment identification of risk for long-term posttraumatic psychopathology is feasible and could greatly reduce the public health costs of war. Copyright © 2015 Elsevier B.V. All rights reserved.
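
The study's exact algorithms are not reproduced here; the general recipe (feature selection combined with a classifier, evaluated by cross-validated AUC, with selection nested inside the cross-validation so the estimate is not optimistically biased) might look like the following sketch on synthetic stand-ins for the questionnaire items:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 561                                  # cohort size reported in the abstract
X = rng.normal(size=(n, 40))             # hypothetical candidate risk indicators
logit = X[:, 0] + X[:, 1] - 1.5          # only a few items are truly predictive
y = (1 / (1 + np.exp(-logit)) > rng.random(n)).astype(int)

# selection lives inside the pipeline, so each CV fold reselects features
model = make_pipeline(SelectKBest(f_classif, k=5),
                      LogisticRegression(max_iter=1000))
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
```

Running selection outside the cross-validation loop would leak test information into the reported AUC.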

  1. Machine learning techniques in searches for $$t\\bar{t}h$$ in the h → $$b\\bar{b}$$ decay channel

    DOE PAGES

    Santos, Robert; Nguyen, M.; Webster, Jordan; ...

    2017-04-10

    Study of the production of pairs of top quarks in association with a Higgs boson is one of the primary goals of the Large Hadron Collider over the next decade, as measurements of this process may help us to understand whether the uniquely large mass of the top quark plays a special role in electroweak symmetry breaking. Higgs bosons decay predominantly to $$b\\bar{b}$$, yielding signatures for the signal that are similar to $$t\\bar{t}$$ + jets with heavy flavor. Though particularly challenging to study due to the similar kinematics between signal and background events, such final states ($$t\\bar{t}b\\bar{b}$$) are an important channel for studying the top quark Yukawa coupling. This paper presents a systematic study of machine learning (ML) methods for detecting $$t\\bar{t}h$$ in the h → $$b\\bar{b}$$ decay channel. Among the seven ML methods tested, we show that neural network models outperform alternative methods. In addition, two neural models used in this paper outperform NeuroBayes, one of the standard algorithms used in current particle physics experiments. We further study the effectiveness of ML algorithms by investigating the impact of feature set and data size, as well as the depth of the networks for neural models. We demonstrate that an extended feature set leads to improvement of performance over basic features. Furthermore, the availability of large samples for training is found to be important for improving the performance of the techniques. For the features and the data set studied here, neural networks of more layers deliver comparable performance to their simpler counterparts.
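
The kind of depth comparison the paper reports (shallow versus deeper networks on the same features) can be illustrated with scikit-learn; the "kinematic" features below are synthetic placeholders, not the paper's simulated collision data or architectures:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# synthetic stand-in for signal/background kinematic features
X, y = make_classification(n_samples=1000, n_features=15, n_informative=8,
                           class_sep=0.8, random_state=0)

aucs = {}
for depth in [1, 3]:
    # same width per layer, different number of hidden layers
    net = MLPClassifier(hidden_layer_sizes=(32,) * depth,
                        max_iter=500, random_state=0)
    aucs[depth] = cross_val_score(net, X, y, cv=3, scoring="roc_auc").mean()
```

Comparing `aucs[1]` and `aucs[3]` mirrors the abstract's observation that deeper networks need not outperform simpler ones on a fixed feature set and sample size.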

  3. Machine Learning in Radiation Oncology: Opportunities, Requirements, and Needs

    PubMed Central

    Feng, Mary; Valdes, Gilmer; Dixit, Nayha; Solberg, Timothy D.

    2018-01-01

    Machine learning (ML) has the potential to revolutionize the field of radiation oncology, but there is much work to be done. In this article, we approach the radiotherapy process from a workflow perspective, identifying specific areas where a data-centric approach using ML could improve the quality and efficiency of patient care. We highlight areas where ML has already been used, and identify areas where we should invest additional resources. We believe that this article can serve as a guide for both clinicians and researchers to start discussing issues that must be addressed in a timely manner. PMID:29719815

  4. On-the-Fly Machine Learning of Atomic Potential in Density Functional Theory Structure Optimization

    NASA Astrophysics Data System (ADS)

    Jacobsen, T. L.; Jørgensen, M. S.; Hammer, B.

    2018-01-01

    Machine learning (ML) is used to derive local stability information for density functional theory calculations of systems in relation to the recently discovered SnO2(110)-(4×1) reconstruction. The ML model is trained on (structure, total energy) relations collected during global minimum energy search runs with an evolutionary algorithm (EA). While being built, the ML model is used to guide the EA, thereby speeding up the rate at which the EA succeeds. Inspection of the local atomic potentials emerging from the model further shows chemically intuitive patterns.

  5. Machine learning and microsimulation techniques on the prognosis of dementia: A systematic literature review

    PubMed Central

    Mendes, Emilia; Berglund, Johan; Anderberg, Peter

    2017-01-01

    Background: Dementia is a complex disorder characterized by poor outcomes for the patients and high costs of care. After decades of research, little is known about its mechanisms. Having prognostic estimates about dementia can help researchers, patients and public entities in dealing with this disorder. Thus, health data, machine learning and microsimulation techniques could be employed in developing prognostic estimates for dementia. Objective: The goal of this paper is to present evidence on the state of the art of studies investigating the prognosis of dementia using machine learning and microsimulation techniques. Method: To achieve our goal we carried out a systematic literature review, in which three large databases (PubMed, Scopus, and Web of Science) were searched to select studies that employed machine learning or microsimulation techniques for the prognosis of dementia. A single round of backward snowballing was done to identify further studies. A quality checklist was also employed to assess the quality of the evidence presented by the selected studies, and low quality studies were removed. Finally, data from the final set of studies were extracted in summary tables. Results: In total 37 papers were included. The data summary results showed that the current research is focused on the investigation of patients with mild cognitive impairment that will evolve to Alzheimer's disease, using machine learning techniques. Microsimulation studies were concerned with cost estimation and had a populational focus. Neuroimaging was the most commonly used variable. Conclusions: Prediction of conversion from MCI to AD is the dominant theme in the selected studies. Most studies used ML techniques on neuroimaging data. Only a few data sources have been recruited by most studies and the ADNI database is the one most commonly used. Only two studies have investigated the prediction of epidemiological aspects of dementia using either ML or MS techniques. Finally, care should be taken when interpreting the reported accuracy of ML techniques, given studies' different contexts. PMID:28662070

  6. Comparison of machine learning techniques to predict all-cause mortality using fitness data: the Henry ford exercIse testing (FIT) project.

    PubMed

    Sakr, Sherif; Elshawi, Radwa; Ahmed, Amjad M; Qureshi, Waqas T; Brawner, Clinton A; Keteyian, Steven J; Blaha, Michael J; Al-Mallah, Mouaz H

    2017-12-19

    Prior studies have demonstrated that cardiorespiratory fitness (CRF) is a strong marker of cardiovascular health. Machine learning (ML) can enhance the prediction of outcomes through classification techniques that classify the data into predetermined categories. The aim of this study is to present an evaluation and comparison of how machine learning techniques can be applied on medical records of cardiorespiratory fitness and how the various techniques differ in their ability to predict medical outcomes (e.g. mortality). We use data of 34,212 patients free of known coronary artery disease or heart failure who underwent clinician-referred exercise treadmill stress testing at Henry Ford Health Systems between 1991 and 2009 and had a complete 10-year follow-up. Seven machine learning classification techniques were evaluated: Decision Tree (DT), Support Vector Machine (SVM), Artificial Neural Networks (ANN), Naïve Bayesian Classifier (BC), Bayesian Network (BN), K-Nearest Neighbor (KNN) and Random Forest (RF). To handle the imbalanced dataset, the Synthetic Minority Over-Sampling Technique (SMOTE) was used, and two sets of experiments were conducted with and without SMOTE. On average over the different evaluation metrics, the SVM classifier showed the lowest performance, while models such as BN, BC and DT performed better. The RF classifier showed the best performance (AUC = 0.97) among all models trained using SMOTE sampling. The results show that the various ML techniques can vary significantly in performance across the different evaluation metrics, and that a more complex ML model does not necessarily achieve higher prediction accuracy. The prediction performance of all models trained with SMOTE is much better than that of models trained without SMOTE. The study shows the potential of machine learning methods for predicting all-cause mortality using cardiorespiratory fitness data.
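
SMOTE balances a training set by interpolating synthetic minority samples between real minority samples and their nearest minority-class neighbours. The following is a minimal, self-contained sketch of the idea on simulated, imbalanced data (in practice one would use the imbalanced-learn library rather than this toy version):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

def smote(X_min, n_new, k=5):
    """Minimal SMOTE: interpolate between minority samples and their k neighbours."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    idx = nn.kneighbors(X_min, return_distance=False)[:, 1:]  # drop self-match
    rows = rng.integers(0, len(X_min), n_new)                 # base samples
    cols = rng.integers(0, k, n_new)                          # chosen neighbours
    gap = rng.random((n_new, 1))                              # interpolation factor
    neigh = X_min[idx[rows, cols]]
    return X_min[rows] + gap * (neigh - X_min[rows])

# imbalanced synthetic data (a few percent positives, like mortality outcomes)
X = rng.normal(size=(2000, 8))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 1.6).astype(int)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

# oversample the minority class in the TRAINING set only
X_min = Xtr[ytr == 1]
n_new = (ytr == 0).sum() - (ytr == 1).sum()
X_bal = np.vstack([Xtr, smote(X_min, n_new)])
y_bal = np.concatenate([ytr, np.ones(n_new, dtype=int)])

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
auc = roc_auc_score(yte, rf.predict_proba(Xte)[:, 1])
```

Oversampling is applied only to the training split; evaluating on a SMOTE-inflated test set would overstate performance.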

  7. Artificial intelligence, physiological genomics, and precision medicine.

    PubMed

    Williams, Anna Marie; Liu, Yong; Regner, Kevin R; Jotterand, Fabrice; Liu, Pengyuan; Liang, Mingyu

    2018-04-01

    Big data are a major driver in the development of precision medicine. Efficient analysis methods are needed to transform big data into clinically-actionable knowledge. To accomplish this, many researchers are turning toward machine learning (ML), an approach of artificial intelligence (AI) that utilizes modern algorithms to give computers the ability to learn. Much of the effort to advance ML for precision medicine has been focused on the development and implementation of algorithms and the generation of ever larger quantities of genomic sequence data and electronic health records. However, relevance and accuracy of the data are as important as quantity of data in the advancement of ML for precision medicine. For common diseases, physiological genomic readouts in disease-applicable tissues may be an effective surrogate to measure the effect of genetic and environmental factors and their interactions that underlie disease development and progression. Disease-applicable tissue may be difficult to obtain, but there are important exceptions such as kidney needle biopsy specimens. As AI continues to advance, new analytical approaches, including those that go beyond data correlation, need to be developed and ethical issues of AI need to be addressed. Physiological genomic readouts in disease-relevant tissues, combined with advanced AI, can be a powerful approach for precision medicine for common diseases.

  8. Out-of-Sample Extrapolation utilizing Semi-Supervised Manifold Learning (OSE-SSL): Content Based Image Retrieval for Histopathology Images

    PubMed Central

    Sparks, Rachel; Madabhushi, Anant

    2016-01-01

    Content-based image retrieval (CBIR) retrieves the database images most similar to a query image by (1) extracting quantitative image descriptors and (2) calculating similarity between database and query image descriptors. Recently, manifold learning (ML) has been used to perform CBIR in a low dimensional representation of the high dimensional image descriptor space to avoid the curse of dimensionality. ML schemes are computationally expensive, requiring an eigenvalue decomposition (EVD) for every new query image to learn its low dimensional representation. We present out-of-sample extrapolation utilizing semi-supervised ML (OSE-SSL) to learn the low dimensional representation without recomputing the EVD for each query image. OSE-SSL incorporates semantic information, in the form of partial class labels, into the ML scheme such that the low dimensional representation co-localizes semantically similar images. In the context of prostate histopathology, gland morphology is an integral component of the Gleason score, which enables discrimination between levels of prostate cancer aggressiveness. Images are represented by shape features extracted from the prostate gland. CBIR with OSE-SSL for prostate histology obtained from 58 patient studies yielded an area under the precision-recall curve (AUPRC) of 0.53 ± 0.03, compared with an AUPRC of 0.44 ± 0.01 for CBIR with Principal Component Analysis (PCA) used to learn the low dimensional space. PMID:27264985
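
The core out-of-sample idea (embed new query images without recomputing the decomposition) can be illustrated with scikit-learn's KernelPCA, whose `transform` projects unseen samples into the already-learned space. This sketch omits the semi-supervised part of OSE-SSL, and the descriptors below are random stand-ins for gland shape features:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.metrics.pairwise import euclidean_distances

rng = np.random.default_rng(4)
db = rng.normal(size=(58, 20))     # database image descriptors (illustrative)
query = rng.normal(size=(1, 20))   # new query image descriptor

# fit the non-linear embedding ONCE on the database
kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.05).fit(db)
db_low = kpca.transform(db)
q_low = kpca.transform(query)      # out-of-sample extension: no new EVD

# retrieval = nearest neighbours of the query in the low-dimensional space
d = euclidean_distances(q_low, db_low).ravel()
ranked = np.argsort(d)             # database indices, most similar first
```

Each query costs only a kernel evaluation against the database plus a projection, instead of a full eigenvalue decomposition.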

  9. Using information from historical high-throughput screens to predict active compounds.

    PubMed

    Riniker, Sereina; Wang, Yuan; Jenkins, Jeremy L; Landrum, Gregory A

    2014-07-28

    Modern high-throughput screening (HTS) is a well-established approach for hit finding in drug discovery that is routinely employed in the pharmaceutical industry to screen more than a million compounds within a few weeks. However, as the industry shifts to more disease-relevant but more complex phenotypic screens, the focus has moved to piloting smaller but smarter chemically/biologically diverse subsets followed by an expansion around hit compounds. One standard method for doing this is to train a machine-learning (ML) model with the chemical fingerprints of the tested subset of molecules and then select the next compounds based on the predictions of this model. An alternative approach would be to take advantage of the wealth of bioactivity information contained in older (full-deck) screens using so-called HTS fingerprints, where each element of the fingerprint corresponds to the outcome of a particular assay, as input to machine-learning algorithms. We constructed HTS fingerprints using two collections of data: 93 in-house assays and 95 publicly available assays from PubChem. For each source, an additional set of 51 and 46 assays, respectively, was collected for testing. Three different ML methods, random forest (RF), logistic regression (LR), and naïve Bayes (NB), were investigated for both the HTS fingerprint and a chemical fingerprint, Morgan2. RF was found to be best suited for learning from HTS fingerprints yielding area under the receiver operating characteristic curve (AUC) values >0.8 for 78% of the internal assays and enrichment factors at 5% (EF(5%)) >10 for 55% of the assays. The RF(HTS-fp) generally outperformed the LR trained with Morgan2, which was the best ML method for the chemical fingerprint, for the majority of assays. In addition, HTS fingerprints were found to retrieve more diverse chemotypes. 
Combining the two models through heterogeneous classifier fusion led to a similar or better performance than the best individual model for all assays. Further validation using a pair of in-house assays and data from a confirmatory screen--including a prospective set of around 2000 compounds selected based on our approach--confirmed the good performance. Thus, the combination of machine-learning with HTS fingerprints and chemical fingerprints utilizes information from both domains and presents a very promising approach for hit expansion, leading to more hits. The source code used with the public data is provided.
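
Training a classifier on HTS fingerprints, where each bit is a compound's outcome in one historical assay, reduces to standard supervised learning on a binary matrix. The sketch below simulates fingerprints and a hypothetical target assay correlated with two historical assays; it is not the in-house or PubChem data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_cmpd, n_assays = 1000, 93                # 93 assays, as in the in-house set
# sparse binary HTS fingerprint: 1 = active in that historical assay
fp = (rng.random((n_cmpd, n_assays)) < 0.1).astype(int)
# hypothetical new target assay, driven by two historical assays plus noise
y = ((fp[:, 0] | fp[:, 1]) & (rng.random(n_cmpd) < 0.9)).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(rf, fp, y, cv=5, scoring="roc_auc").mean()
```

The same pipeline accepts a chemical fingerprint matrix (e.g. Morgan2 bits) in place of `fp`, which is how the two input domains are compared in the study.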

  10. Model-Based Systems Engineering Pilot Program at NASA Langley

    NASA Technical Reports Server (NTRS)

    Vipavetz, Kevin G.; Murphy, Douglas G.; Infeld, Samatha I.

    2012-01-01

    NASA Langley Research Center conducted a pilot program to evaluate the benefits of using a Model-Based Systems Engineering (MBSE) approach during the early phase of the Materials International Space Station Experiment-X (MISSE-X) project. The goal of the pilot was to leverage MBSE tools and methods, including the Systems Modeling Language (SysML), to understand the net gain of utilizing this approach on a moderate size flight project. The System Requirements Review (SRR) success criteria were used to guide the work products desired from the pilot. This paper discusses the pilot project implementation, provides SysML model examples, identifies lessons learned, and describes plans for further use of MBSE on MISSE-X.

  11. Memory Boost from Spaced-Out Learning.

    ERIC Educational Resources Information Center

    Bower, B.

    1987-01-01

    Discusses what learning conditions promote memory stamina. Reviews study findings which suggested that spacing of practice during rote learning of a foreign language vocabulary can produce lasting memories. (ML)

  12. Machine Learning of Fault Friction

    NASA Astrophysics Data System (ADS)

    Johnson, P. A.; Rouet-Leduc, B.; Hulbert, C.; Marone, C.; Guyer, R. A.

    2017-12-01

    We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to this acoustic data (the AE) in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus comprised of fault blocks surrounding a fault gouge of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load) and shear velocity. The raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then, testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick slip and slow slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is to ultimately scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017), https://arxiv.org/abs/1702.05774; Rouet-LeDuc, B. et al., Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning (2017), AGU Fall Meeting Session S025: Earthquake source: from the laboratory to the field; Hulbert, C., Characterizing slow slip applying machine learning (2017), AGU Fall Meeting Session S019: Slow slip, Tectonic Tremor, and the Brittle-to-Ductile Transition Zone: What mechanisms control the diversity of slow and fast earthquakes?
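
A toy version of the workflow (statistical features of windowed AE signal fed to a tree ensemble that regresses the frictional state) can be sketched as follows; the signal, features and stress model are invented for illustration, not the laboratory data:

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(6)
t = np.linspace(0, 20, 20000)
stress = 1 + 0.5 * np.sin(0.5 * t)             # toy slowly varying shear stress
ae = rng.normal(size=t.size) * (0.1 + stress)  # AE amplitude tracks stress state

win = 200                                       # samples per statistical window
n_win = t.size // win
segs = ae.reshape(n_win, win)
# simple per-window statistics of the continuous AE signal
feats = np.column_stack([segs.std(axis=1),
                         kurtosis(segs, axis=1),
                         np.abs(segs).max(axis=1)])
target = stress.reshape(n_win, win).mean(axis=1)  # window-mean shear stress

rf = RandomForestRegressor(n_estimators=100, random_state=0)
r2 = cross_val_score(rf, feats, target,
                     cv=KFold(n_splits=5, shuffle=True, random_state=0)).mean()
```

Feature importances from the fitted forest are what lets such models point to which AE statistics carry the frictional information.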

  13. Retrieving the Quantitative Chemical Information at Nanoscale from Scanning Electron Microscope Energy Dispersive X-ray Measurements by Machine Learning

    NASA Astrophysics Data System (ADS)

    Jany, B. R.; Janas, A.; Krok, F.

    2017-11-01

    The quantitative composition of metal alloy nanowires on an InSb(001) semiconductor surface and of gold nanostructures on a germanium surface is determined by a blind source separation (BSS) machine learning (ML) method using non-negative matrix factorization (NMF), applied to energy dispersive X-ray spectroscopy (EDX) spectrum image maps measured in a scanning electron microscope (SEM). The BSS method blindly decomposes the collected EDX spectrum image into three source components, which correspond directly to the X-ray signals coming from the supported metal nanostructures, the bulk semiconductor signal and the carbon background. The recovered quantitative composition is validated by detailed Monte Carlo simulations and confirmed by separate cross-sectional TEM EDX measurements of the nanostructures. This shows that SEM EDX measurements, combined with machine learning blind source separation processing, can be used successfully to determine the quantitative chemical composition of nanostructures.
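
The BSS step can be sketched with scikit-learn's NMF: factor the pixel-by-channel spectrum image into non-negative per-pixel abundances and source spectra. Everything below (three Gaussian "spectra" standing in for the nanostructure, substrate and carbon signals, and the random mixing) is simulated for illustration:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(7)
n_pix, n_chan = 400, 128
chan = np.arange(n_chan)
# three non-negative source "spectra" (metal, substrate, carbon background)
sources = np.vstack([np.exp(-(chan - mu) ** 2 / 50) for mu in (30, 70, 110)])
abund = rng.random((n_pix, 3))                      # per-pixel mixing weights
spectra = abund @ sources + 0.01 * rng.random((n_pix, n_chan))

nmf = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(spectra)                      # recovered per-pixel abundances
H = nmf.components_                                 # recovered source spectra
recon_err = np.linalg.norm(spectra - W @ H) / np.linalg.norm(spectra)
```

Because both factors are constrained non-negative, the recovered components can be read directly as physical abundances and spectra, which is what makes NMF suited to EDX decomposition.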

  14. Predicting carbon dioxide and energy fluxes across global FLUXNET sites with regression algorithms

    DOE PAGES

    Tramontana, Gianluca; Jung, Martin; Schwalm, Christopher R.; ...

    2016-07-29

    Spatio-temporal fields of land–atmosphere fluxes derived from data-driven models can complement simulations by process-based land surface models. While a number of strategies for empirical models with eddy-covariance flux data have been applied, a systematic intercomparison of these methods has been missing so far. In this study, we performed a cross-validation experiment for predicting carbon dioxide, latent heat, sensible heat and net radiation fluxes across different ecosystem types with 11 machine learning (ML) methods from four different classes (kernel methods, neural networks, tree methods, and regression splines). We applied two complementary setups: (1) 8-day average fluxes based on remotely sensed data and (2) daily mean fluxes based on meteorological data and a mean seasonal cycle of remotely sensed variables. The patterns of predictions from different ML and experimental setups were highly consistent. There were systematic differences in performance among the fluxes, with the following ascending order: net ecosystem exchange (R² < 0.5), ecosystem respiration (R² > 0.6), gross primary production (R² > 0.7), latent heat (R² > 0.7), sensible heat (R² > 0.7), and net radiation (R² > 0.8). The ML methods predicted the across-site variability and the mean seasonal cycle of the observed fluxes very well (R² > 0.7), while the 8-day deviations from the mean seasonal cycle were not well predicted (R² < 0.5). Fluxes were better predicted at forested and temperate climate sites than at sites in extreme climates or less represented by training data (e.g., the tropics). Finally, the evaluated large ensemble of ML-based models will be the basis of new global flux products.

  16. Anomaly Detection and Modeling of Trajectories

    DTIC Science & Technology

    2012-08-01

    This thesis proposes several methods using statistics and machine learning (ML) that provide a deep understanding of trajectory datasets. In particular ...

  17. Ensemble Learning Method for Hidden Markov Models

    DTIC Science & Technology

    2014-12-01

    Ensemble HMM landmine detector: mine signatures vary according to the mine type, mine size, and burial depth. Similarly, clutter signatures vary with soil ... We propose using and optimizing various training approaches for the different K groups depending on their size and homogeneity. In particular, we investigate the maximum likelihood (ML) and the minimum ...

  18. Proceedings of the Workshop on Multivariable Control Systems Held at Wright-Patterson AFB, OH, on 3 December 1982.

    DTIC Science & Technology

    1983-09-01

    ... promising method of aircraft multivariable flight controller design. Like any new design technique, there is still more to learn about the ... (remainder of snippet is a fragmented symbol list: feedback gain matrix, random matrix, Z - number of outputs, L - roll moment, roll moment with inertia, m - number of ...)

  19. Cognitive domains in the dog: independence of working memory from object learning, selective attention, and motor learning.

    PubMed

    Zanghi, Brian M; Araujo, Joseph; Milgram, Norton W

    2015-05-01

    Cognition in dogs, like in humans, is not a unitary process. Some functions, such as simple discrimination learning, are relatively insensitive to age; others, such as visuospatial learning, can provide behavioral biomarkers of age. The present experiment sought to further establish the relationship between various cognitive domains, namely visuospatial memory, object discrimination learning (ODL), and selective attention (SA). In addition, we also set up a task to assess motor learning (ML). Thirty-six beagles (9-16 years) performed a variable delay non-matching to position (vDNMP) task using two objects with 20- and 90-s delays and were divided into three groups based on a combined score (HMP = 88-93 % accuracy [N = 12]; MMP = 79-86 % accuracy [N = 12]; LMP = 61-78 % accuracy [N = 12]). A variable object oddity task was used to measure ODL (correct or incorrect object) and SA (0-3 incorrect distractor objects with the same [SA-same] or a different [SA-diff] correct object as ODL). ML involved reaching various distances (0-15 cm). Age did not differ between memory groups (mean 11.6 years). Performance on ODL (ANOVA P = 0.43) and on SA-same and SA-diff (ANOVA P = 0.96) did not differ between the three vDNMP groups, although mean errors during ODL were numerically higher for LMP dogs. Errors increased (P < 0.001) for all dogs with increasing number of distractor objects during both SA tasks. vDNMP groups remained different (ANOVA P < 0.001) when re-tested with the vDNMP task 42 days later. Maximum ML distance did not differ between vDNMP groups (ANOVA P = 0.96). Impaired short-term memory performance in aged dogs does not appear to predict performance in cognitive domains associated with object learning, SA, or maximum ML distance.

  20. Deep learning for single-molecule science

    NASA Astrophysics Data System (ADS)

    Albrecht, Tim; Slabaugh, Gregory; Alonso, Eduardo; Al-Arif, SM Masudur R.

    2017-10-01

    Exploring and making predictions based on single-molecule data can be challenging, not only due to the sheer size of the datasets, but also because a priori knowledge about the signal characteristics is typically limited and the signal-to-noise ratio is poor. For example, hypothesis-driven data exploration, informed by an expectation of the signal characteristics, can lead to interpretation bias or loss of information. Equally, even when the different data categories are known, e.g., the four bases in DNA sequencing, it is often difficult to know how to make best use of the available information content. The latest developments in machine learning (ML), so-called deep learning (DL), offer interesting new avenues to address such challenges. In some applications, such as speech and image recognition, DL has been able to outperform conventional ML strategies and even human performance. However, to date DL has not been applied much in single-molecule science, presumably in part because relatively little is known about the ‘internal workings’ of such DL tools within single-molecule science as a field. In this Tutorial, we make an attempt to illustrate in a step-by-step guide how one of these, a convolutional neural network (CNN), may be used for base calling in DNA sequencing applications. We compare it with a support vector machine (SVM) as a more conventional ML method, and discuss some of the strengths and weaknesses of the approach. In particular, a ‘deep’ neural network has many features of a ‘black box’, which has important implications for how we look at and interpret data.
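    As a rough illustration of the convolutional step such a CNN applies to a raw single-molecule trace, the sketch below runs one hand-crafted 1-D filter, a ReLU, and max pooling over a toy blockade event. In a real CNN the filter weights are learned from data; the edge-detecting kernel and the toy trace here are purely illustrative.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as in CNN layers)."""
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel
                     for i in range(len(signal) - k + 1)])

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=4):
    """Downsample by keeping the maximum of each non-overlapping window."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

# A toy "current trace": flat baseline with a rectangular blockade event.
trace = np.zeros(64)
trace[20:36] = -1.0

edge_kernel = np.array([1.0, 0.0, -1.0])   # illustrative edge-detecting filter
features = max_pool(relu(conv1d(trace, edge_kernel)))
```

The pooled feature vector responds strongly near the event onset, which is the kind of localized, translation-tolerant feature a trained CNN stacks into deeper representations for base calling.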

  1. Machine learning based sample extraction for automatic speech recognition using dialectal Assamese speech.

    PubMed

    Agarwalla, Swapna; Sarma, Kandarpa Kumar

    2016-06-01

    Automatic Speaker Recognition (ASR) and related issues are continuously evolving as inseparable elements of Human Computer Interaction (HCI). With assimilation of emerging concepts like big data and Internet of Things (IoT) as extended elements of HCI, ASR techniques are found to be passing through a paradigm shift. Of late, learning based techniques have started to receive greater attention from research communities related to ASR, owing to the fact that the former possess a natural ability to mimic biological behavior and thereby aid ASR modeling and processing. The current learning based ASR techniques are found to be evolving further with incorporation of big data and IoT-like concepts. Here, in this paper, we report certain approaches based on machine learning (ML) used for extraction of relevant samples from big data space and apply them for ASR using certain soft computing techniques for Assamese speech with dialectal variations. A class of ML techniques comprising the basic Artificial Neural Network (ANN) in feedforward (FF) and Deep Neural Network (DNN) forms, using raw speech, extracted features and frequency domain forms, is considered. The Multi Layer Perceptron (MLP) is configured with inputs in several forms to learn class information obtained using clustering and manual labeling. DNNs are also used to extract specific sentence types. Initially, from a large storage, relevant samples are selected and assimilated. Next, a few conventional methods are used for feature extraction of a few selected types. The features comprise both spectral and prosodic types. These are applied to Recurrent Neural Network (RNN) and Fully Focused Time Delay Neural Network (FFTDNN) structures to evaluate their performance in recognizing mood, dialect, speaker and gender variations in dialectal Assamese speech. 
The system is tested under several background noise conditions by considering the recognition rates (obtained using confusion matrices and manually) and computation time. It is found that the proposed ML based sentence extraction techniques and the composite feature set used with RNN as classifier outperform all other approaches. By using ANN in FF form as feature extractor, the performance of the system is evaluated and a comparison is made. Experimental results show that the application of big data samples has enhanced the learning of the ASR system. Further, the ANN based sample and feature extraction techniques are found to be efficient enough to enable application of ML techniques in big data aspects as part of ASR systems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Efficacy of tension-free vaginal tape compared with transobturator tape in the treatment of stress urinary incontinence in women: analysis of learning curve, perioperative changes of voiding function

    PubMed Central

    2011-01-01

    Background In this study, by comparing TVT surgery and TOT surgery for stress urinary incontinence in women, the characteristics and learning curves of both operative methods were studied. Methods A total of 83 women with stress urinary incontinence treated with tension-free vaginal tape (TVT) (n = 38) or transobturator tape (TOT) (n = 45) at Saiseikai Central Hospital between April 2004 and September 2009 were included. We compared the outcomes and learning curves of TVT surgery and TOT surgery. Student's t test, Fisher's exact test, and the Mann-Whitney U test were used for statistical analysis. Results The surgical durations were 37.4 ± 15.7 minutes for TVT surgery and 31.0 ± 8.3 minutes for TOT surgery; TVT surgery required a longer time (p = 0.025). The residual urine at post-operative day 1 was higher in TVT surgery (25.9 ± 44.2 ml) than in TOT surgery (10.6 ± 19.2 ml) (p = 0.0452). The surgical duration of TVT surgery was shortened after the operator had performed 15 operations (p = 0.019). Conclusions In comparison of TVT surgery and TOT surgery, the surgical duration of TVT surgery was longer and the residual urine of TVT surgery was higher at post-operative day 1. Surgical experience could shorten the duration of TVT surgery. PMID:21726448
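    The reported duration difference can be checked in outline from the summary statistics alone. The sketch below uses Welch's t statistic, a variant of the Student's t test cited in the abstract that does not assume equal variances (an assumption of this sketch, not a claim about the paper's exact procedure); it gives t ≈ 2.26 with roughly 54 degrees of freedom, broadly consistent with the reported p = 0.025.

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    computed from per-group summary statistics."""
    se1, se2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    t = (mean1 - mean2) / math.sqrt(se1 + se2)
    df = (se1 + se2) ** 2 / (se1 ** 2 / (n1 - 1) + se2 ** 2 / (n2 - 1))
    return t, df

# Surgical duration (minutes) as reported:
# TVT 37.4 +/- 15.7 (n=38), TOT 31.0 +/- 8.3 (n=45).
t, df = welch_t(37.4, 15.7, 38, 31.0, 8.3, 45)
```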

  3. Surface EMG signals based motion intent recognition using multi-layer ELM

    NASA Astrophysics Data System (ADS)

    Wang, Jianhui; Qi, Lin; Wang, Xiao

    2017-11-01

    The upper-limb rehabilitation robot is regarded as a useful tool to help patients with hemiplegia do repetitive exercise. The surface electromyography (sEMG) signal contains motion information, as the electric signals are generated by and related to nerve-muscle activity. These sEMG signals, representing a human's intentions of active motion, are introduced into the rehabilitation robot system to recognize upper-limb movements. Traditionally, feature extraction is an indispensable part of drawing significant information from original signals, which is a tedious task requiring rich, related experience. This paper employs a deep learning scheme to extract the internal features of the sEMG signals using an Extreme Learning Machine based auto-encoder (ELM-AE). The mathematical information contained in the multi-layer structure of the ELM-AE is used as the high-level representation of the internal features of the sEMG signals, and a simple ELM can then post-process the extracted features, formulating the entire multi-layer ELM (ML-ELM) algorithm. The method is afterwards employed for sEMG based motion intent recognition. The case studies show the adopted deep learning algorithm (ELM-AE) is capable of yielding higher classification accuracy than the Principal Component Analysis (PCA) scheme on 5 different types of upper-limb motions. This indicates the effectiveness and the learning capability of the ML-ELM in such motion intent recognition applications.
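    A minimal sketch of the ELM-AE idea, assuming the usual construction: the hidden-layer input weights are random and fixed, and only the decoder weights are solved in closed form by least squares. Dimensions and data below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_autoencode(X, n_hidden=32):
    """ELM-style auto-encoder: random (untrained) hidden layer,
    decoder weights fitted by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden representation (the "features")
    beta, *_ = np.linalg.lstsq(H, X, rcond=None)  # closed-form decoder weights
    return H, H @ beta                            # features, reconstruction

X = rng.normal(size=(100, 8))                     # stand-in for sEMG feature windows
H, X_hat = elm_autoencode(X)
err = np.mean((X - X_hat) ** 2)
```

Because only `beta` is solved for, training is a single linear-algebra step rather than iterative backpropagation, which is the main speed argument for ELM-based schemes.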

  4. Forecasting Solar Flares Using Magnetogram-based Predictors and Machine Learning

    NASA Astrophysics Data System (ADS)

    Florios, Kostas; Kontogiannis, Ioannis; Park, Sung-Hong; Guerra, Jordan A.; Benvenuto, Federico; Bloomfield, D. Shaun; Georgoulis, Manolis K.

    2018-02-01

    We propose a forecasting approach for solar flares based on data from Solar Cycle 24, taken by the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) mission. In particular, we use the Space-weather HMI Active Region Patches (SHARP) product, which provides cut-out magnetograms of solar active regions (ARs) in near-realtime (NRT), taken over a five-year interval (2012-2016). Our approach utilizes a set of thirteen predictors, which are not included in the SHARP metadata, extracted from line-of-sight and vector photospheric magnetograms. We exploit several machine learning (ML) and conventional statistics techniques to predict flares of peak magnitude > M1 and > C1 within a 24 h forecast window. The ML methods used are multi-layer perceptrons (MLP), support vector machines (SVM), and random forests (RF). We conclude that random forests could be the prediction technique of choice for our sample, with the second-best method being multi-layer perceptrons, subject to an entropy objective function. A Monte Carlo simulation showed that the best-performing method gives accuracy ACC = 0.93(0.00), true skill statistic TSS = 0.74(0.02), and Heidke skill score HSS = 0.49(0.01) for > M1 flare prediction with probability threshold 15%, and ACC = 0.84(0.00), TSS = 0.60(0.01), and HSS = 0.59(0.01) for > C1 flare prediction with probability threshold 35%.
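    The skill scores quoted above are simple functions of the 2x2 forecast contingency table. The sketch below computes TSS and HSS from illustrative counts (not the paper's data), showing why raw accuracy can look flattering for rare events while the skill scores are more conservative.

```python
def skill_scores(tp, fn, fp, tn):
    """True Skill Statistic, Heidke Skill Score, and accuracy
    from a 2x2 forecast contingency table."""
    tss = tp / (tp + fn) - fp / (fp + tn)          # hit rate minus false-alarm rate
    hss = 2.0 * (tp * tn - fp * fn) / (
        (tp + fn) * (fn + tn) + (tp + fp) * (fp + tn))  # skill relative to chance
    acc = (tp + tn) / (tp + fn + fp + tn)
    return tss, hss, acc

# Illustrative counts for a rare-event forecast (flares are rare in any sample):
tss, hss, acc = skill_scores(tp=40, fn=10, fp=100, tn=850)
```

Here accuracy is 0.89 simply because non-events dominate, while TSS (≈0.69) and HSS (0.375) discount that imbalance, which is why flare-forecasting papers report them alongside ACC.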

  5. PredicT-ML: a tool for automating machine learning model building with big clinical data.

    PubMed

    Luo, Gang

    2016-01-01

    Predictive modeling is fundamental to transforming large clinical data sets, or "big clinical data," into actionable knowledge for various healthcare applications. Machine learning is a major predictive modeling approach, but two barriers make its use in healthcare challenging. First, a machine learning tool user must choose an algorithm and assign one or more model parameters called hyper-parameters before model training. The algorithm and hyper-parameter values used typically impact model accuracy by over 40 %, but their selection requires many labor-intensive manual iterations that can be difficult even for computer scientists. Second, many clinical attributes are repeatedly recorded over time, requiring temporal aggregation before predictive modeling can be performed. Many labor-intensive manual iterations are required to identify a good pair of aggregation period and operator for each clinical attribute. Both barriers result in time and human resource bottlenecks, and preclude healthcare administrators and researchers from asking a series of what-if questions when probing opportunities to use predictive models to improve outcomes and reduce costs. This paper describes our design of and vision for PredicT-ML (prediction tool using machine learning), a software system that aims to overcome these barriers and automate machine learning model building with big clinical data. The paper presents the detailed design of PredicT-ML. PredicT-ML will open the use of big clinical data to thousands of healthcare administrators and researchers and increase the ability to advance clinical research and improve healthcare.
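    The second barrier described above, temporal aggregation, amounts to choosing an aggregation period and an operator per clinical attribute. A minimal sketch with hypothetical readings and hypothetical period/operator choices (nothing here is PredicT-ML's actual interface):

```python
from datetime import date

def aggregate(records, start, period_days, op):
    """Collapse (date, value) pairs into one feature per period
    using the chosen aggregation operator."""
    buckets = {}
    for d, v in records:
        idx = (d - start).days // period_days
        buckets.setdefault(idx, []).append(v)
    return {idx: op(vals) for idx, vals in sorted(buckets.items())}

# Hypothetical repeated blood-pressure readings for one patient:
readings = [(date(2016, 1, 3), 150), (date(2016, 1, 20), 140),
            (date(2016, 2, 10), 135), (date(2016, 3, 1), 130)]

monthly_max = aggregate(readings, date(2016, 1, 1), 30, max)
monthly_mean = aggregate(readings, date(2016, 1, 1), 30, lambda v: sum(v) / len(v))
```

Automating the search over `period_days` and `op` per attribute is exactly the labor-intensive loop the tool aims to remove.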

  6. Comparison of Machine Learning methods for incipient motion in gravel bed rivers

    NASA Astrophysics Data System (ADS)

    Valyrakis, Manousos

    2013-04-01

    Soil erosion and sediment transport in natural gravel bed streams are important processes which affect both the morphology and the ecology of the earth's surface. For gravel bed rivers at near incipient flow conditions, particle entrainment dynamics are highly intermittent. This contribution reviews the use of modern Machine Learning (ML) methods implemented for short-term prediction of entrainment instances of individual grains exposed in fully developed near boundary turbulent flows. Results obtained by network architectures of variable complexity based on two different ML methods, namely the Artificial Neural Network (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), are compared in terms of different error and performance indices, computational efficiency and complexity, as well as predictive accuracy and forecast ability. Different model architectures are trained and tested with experimental time series obtained from mobile particle flume experiments. The experimental setup consists of a Laser Doppler Velocimeter (LDV) and a laser optics system, which synchronously acquire data for the instantaneous flow and the particle response, respectively. The former is used to record the flow velocity components directly upstream of the test particle, while the latter tracks the particle's displacements. The lengthy experimental data sets (millions of data points) are split into training and validation subsets used to perform the corresponding learning and testing of the models. It is demonstrated that the ANFIS hybrid model, which is based on neural learning and fuzzy inference principles, better predicts the critical flow conditions above which sediment transport is initiated. In addition, it is illustrated that empirical knowledge can be extracted, validating the theoretical assumption that particle ejections occur due to energetic turbulent flow events. 
Such a tool may find application in management and regulation of stream flows downstream of dams for stream restoration, implementation of sustainable practices in river and estuarine ecosystems and design of stable river bed and banks.

  7. A machine learning approach using EEG data to predict response to SSRI treatment for major depressive disorder.

    PubMed

    Khodayari-Rostamabad, Ahmad; Reilly, James P; Hasey, Gary M; de Bruin, Hubert; Maccrimmon, Duncan J

    2013-10-01

    The problem of identifying, in advance, the most effective treatment agent for various psychiatric conditions remains an elusive goal. To address this challenge, we investigate the performance of the proposed machine learning (ML) methodology (based on the pre-treatment electroencephalogram (EEG)) for prediction of response to treatment with a selective serotonin reuptake inhibitor (SSRI) medication in subjects suffering from major depressive disorder (MDD). A relatively small number of most discriminating features are selected from a large group of candidate features extracted from the subject's pre-treatment EEG, using a machine learning procedure for feature selection. The selected features are fed into a classifier, which was realized as a mixture of factor analysis (MFA) model, whose output is the predicted response in the form of a likelihood value. This likelihood indicates the extent to which the subject belongs to the responder vs. non-responder classes. The overall method was evaluated using a "leave-n-out" randomized permutation cross-validation procedure. A list of discriminating EEG biomarkers (features) was found. The specificity of the proposed method is 80.9% while sensitivity is 94.9%, for an overall prediction accuracy of 87.9%. There is a 98.76% confidence that the estimated prediction rate is within the interval [75%, 100%]. These results indicate that the proposed ML method holds considerable promise in predicting the efficacy of SSRI antidepressant therapy for MDD, based on a simple and cost-effective pre-treatment EEG. The proposed approach offers the potential to improve the treatment of major depression and to reduce health care costs. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
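    As a quick consistency check, the reported overall accuracy follows from the reported sensitivity and specificity once a class balance is assumed. The 87.9% figure corresponds to an (assumed, not stated in the abstract) roughly equal split of responders and non-responders:

```python
def overall_accuracy(sensitivity, specificity, positive_fraction):
    """Accuracy implied by sensitivity and specificity
    at a given positive-class fraction."""
    return sensitivity * positive_fraction + specificity * (1.0 - positive_fraction)

# Reported: sensitivity 94.9%, specificity 80.9%, overall accuracy 87.9%.
# A balanced responder fraction of 0.5 reproduces the overall figure:
acc = overall_accuracy(0.949, 0.809, 0.5)
```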

  8. Psoriasis image representation using patch-based dictionary learning for erythema severity scoring.

    PubMed

    George, Yasmeen; Aldeen, Mohammad; Garnavi, Rahil

    2018-06-01

    Psoriasis is a chronic skin disease which can be life-threatening. Accurate severity scoring helps dermatologists to decide on the treatment. In this paper, we present a semi-supervised computer-aided system for automatic erythema severity scoring in psoriasis images. Firstly, the unsupervised stage includes a novel image representation method. We construct a dictionary, which is then used in the sparse representation for local feature extraction. To acquire the final image representation vector, an aggregation method is exploited over the local features. Secondly, the supervised phase is where various multi-class machine learning (ML) classifiers are trained for erythema severity scoring. Finally, we compare the proposed system with two popular unsupervised feature extraction methods, namely the bag of visual words model (BoVWs) and the pretrained AlexNet model. Root mean square error (RMSE) and F1 score are used as performance measures for the learned dictionaries and the trained ML models, respectively. A psoriasis image set consisting of 676 images is used in this study. Experimental results demonstrate that the use of the proposed procedure can provide a setup where erythema scoring is accurate and consistent. Also, it is revealed that dictionaries with a large number of atoms and small patch sizes yield the most representative erythema severity features. Further, random forest (RF) outperforms other classifiers with an F1 score of 0.71, followed by support vector machine (SVM) and boosting with scores of 0.66 and 0.64, respectively. Furthermore, the conducted comparative studies confirm the effectiveness of the proposed approach, with improvements of 9% and 12% over BoVWs and AlexNet based features, respectively. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.
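    The unsupervised image-representation stage (patches, a dictionary, local codes, aggregation) can be sketched as follows. For brevity this uses hard nearest-atom assignment and a random dictionary in place of learned sparse coding; patch size, dictionary size, and data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def extract_patches(img, size=4, stride=4):
    """Non-overlapping square patches, flattened to vectors."""
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(0, h - size + 1, stride)
                     for j in range(0, w - size + 1, stride)])

def encode(patches, dictionary):
    """Hard-assignment stand-in for sparse coding:
    histogram of nearest dictionary atoms, normalized to sum to 1."""
    d2 = ((patches[:, None, :] - dictionary[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    hist = np.bincount(nearest, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()              # final image representation vector

img = rng.random((32, 32))                # stand-in for one skin-image channel
dictionary = rng.random((16, 16))         # 16 random atoms of 4x4 patches
rep = encode(extract_patches(img), dictionary)
```

The resulting fixed-length vector `rep` is what a downstream multi-class classifier (RF, SVM, boosting) would consume, one vector per image.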

  9. STAR-GALAXY CLASSIFICATION IN MULTI-BAND OPTICAL IMAGING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fadely, Ross; Willman, Beth; Hogg, David W.

    2012-11-20

    Ground-based optical surveys such as PanSTARRS, DES, and LSST will produce large catalogs to limiting magnitudes of r ≳ 24. Star-galaxy separation poses a major challenge to such surveys because galaxies, even very compact galaxies, outnumber halo stars at these depths. We investigate photometric classification techniques on stars and galaxies with intrinsic FWHM < 0.2 arcsec. We consider unsupervised spectral energy distribution template fitting and supervised, data-driven support vector machines (SVMs). For template fitting, we use a maximum likelihood (ML) method and a new hierarchical Bayesian (HB) method, which learns the prior distribution of template probabilities from the data. SVM requires training data to classify unknown sources; ML and HB do not. We consider (1) a best-case scenario (SVM_best) where the training data are (unrealistically) a random sampling of the data in both signal-to-noise and demographics and (2) a more realistic scenario where training is done on higher signal-to-noise data (SVM_real) at brighter apparent magnitudes. Testing with COSMOS ugriz data, we find that HB outperforms ML, delivering ≈80% completeness, with purity of ≈60%-90% for both stars and galaxies. We find that no algorithm delivers perfect performance and that studies of metal-poor main-sequence turnoff stars may be challenged by poor star-galaxy separation. Using the Receiver Operating Characteristic curve, we find a best-to-worst ranking of SVM_best, HB, ML, and SVM_real. We conclude, therefore, that a well-trained SVM will outperform template-fitting methods. However, a normally trained SVM performs worse. Thus, HB template fitting may prove to be the optimal classification method in future surveys.

  10. Deep Artificial Neural Networks and Neuromorphic Chips for Big Data Analysis: Pharmaceutical and Bioinformatics Applications.

    PubMed

    Pastur-Romay, Lucas Antón; Cedrón, Francisco; Pazos, Alejandro; Porto-Pazos, Ana Belén

    2016-08-11

    Over the past decade, Deep Artificial Neural Networks (DNNs) have become the state-of-the-art algorithms in Machine Learning (ML), speech recognition, computer vision, natural language processing and many other tasks. This was made possible by the advancement in Big Data, Deep Learning (DL) and drastically increased chip processing abilities, especially general-purpose graphical processing units (GPGPUs). All this has created a growing interest in making the most of the potential offered by DNNs in almost every field. An overview of the main architectures of DNNs, and their usefulness in Pharmacology and Bioinformatics are presented in this work. The featured applications are: drug design, virtual screening (VS), Quantitative Structure-Activity Relationship (QSAR) research, protein structure prediction and genomics (and other omics) data mining. The future need of neuromorphic hardware for DNNs is also discussed, and the two most advanced chips are reviewed: IBM TrueNorth and SpiNNaker. In addition, this review points out the importance of considering not only neurons, as DNNs and neuromorphic chips should also include glial cells, given the proven importance of astrocytes, a type of glial cell which contributes to information processing in the brain. The Deep Artificial Neuron-Astrocyte Networks (DANAN) could overcome the difficulties in architecture design, learning process and scalability of the current ML methods.

  11. Deep Artificial Neural Networks and Neuromorphic Chips for Big Data Analysis: Pharmaceutical and Bioinformatics Applications

    PubMed Central

    Pastur-Romay, Lucas Antón; Cedrón, Francisco; Pazos, Alejandro; Porto-Pazos, Ana Belén

    2016-01-01

    Over the past decade, Deep Artificial Neural Networks (DNNs) have become the state-of-the-art algorithms in Machine Learning (ML), speech recognition, computer vision, natural language processing and many other tasks. This was made possible by the advancement in Big Data, Deep Learning (DL) and drastically increased chip processing abilities, especially general-purpose graphical processing units (GPGPUs). All this has created a growing interest in making the most of the potential offered by DNNs in almost every field. An overview of the main architectures of DNNs, and their usefulness in Pharmacology and Bioinformatics are presented in this work. The featured applications are: drug design, virtual screening (VS), Quantitative Structure–Activity Relationship (QSAR) research, protein structure prediction and genomics (and other omics) data mining. The future need of neuromorphic hardware for DNNs is also discussed, and the two most advanced chips are reviewed: IBM TrueNorth and SpiNNaker. In addition, this review points out the importance of considering not only neurons, as DNNs and neuromorphic chips should also include glial cells, given the proven importance of astrocytes, a type of glial cell which contributes to information processing in the brain. The Deep Artificial Neuron–Astrocyte Networks (DANAN) could overcome the difficulties in architecture design, learning process and scalability of the current ML methods. PMID:27529225

  12. Learning atoms for materials discovery.

    PubMed

    Zhou, Quan; Tang, Peizhe; Liu, Shenxiu; Pan, Jinbo; Yan, Qimin; Zhang, Shou-Cheng

    2018-06-26

    Exciting advances have been made in artificial intelligence (AI) during recent decades. Among them, applications of machine learning (ML) and deep learning techniques have brought human-competitive performance in tasks across various fields, including image recognition, speech recognition, and natural language understanding. Even in Go, the ancient game of profound complexity, the AI player has already beaten human world champions convincingly, with and without learning from human play. In this work, we show that our unsupervised machines (Atom2Vec) can learn the basic properties of atoms by themselves from an extensive database of known compounds and materials. These learned properties are represented in terms of high-dimensional vectors, and clustering of atoms in vector space classifies them into meaningful groups consistent with human knowledge. We use the atom vectors as basic input units for neural networks and other ML models designed and trained to predict materials properties, which demonstrate significant accuracy. Copyright © 2018 the Author(s). Published by PNAS.
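    A toy version of the idea, assuming a simple co-occurrence-plus-SVD construction (Atom2Vec's actual environment encoding is more elaborate): atoms that appear in similar compounds end up with similar vectors, so in the hypothetical compound list below Na and K cluster together while Na and Cl do not.

```python
import numpy as np

# Hypothetical binary compounds, used only to build a co-occurrence matrix.
atoms = ["Na", "K", "Cl", "Br", "O"]
compounds = [("Na", "Cl"), ("K", "Cl"), ("Na", "Br"),
             ("K", "Br"), ("Na", "O"), ("K", "O")]

index = {a: i for i, a in enumerate(atoms)}
C = np.zeros((len(atoms), len(atoms)))
for a, b in compounds:
    C[index[a], index[b]] += 1        # symmetric co-occurrence counts
    C[index[b], index[a]] += 1

# A low-rank SVD of the co-occurrence matrix gives each atom a dense vector.
U, S, _ = np.linalg.svd(C)
vectors = U[:, :2] * S[:2]

def cos(a, b):
    """Cosine similarity between two atom vectors."""
    va, vb = vectors[index[a]], vectors[index[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))
```

Na and K share identical co-occurrence rows, so their vectors coincide, a miniature version of the chemically meaningful groups the paper recovers by clustering.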

  13. Natural and Artificial Intelligence in Neurosurgery: A Systematic Review.

    PubMed

    Senders, Joeky T; Arnaout, Omar; Karhade, Aditya V; Dasenbrock, Hormuzdiyar H; Gormley, William B; Broekman, Marike L; Smith, Timothy R

    2017-09-07

    Machine learning (ML) is a domain of artificial intelligence that allows computer algorithms to learn from experience without being explicitly programmed. To summarize neurosurgical applications of ML where it has been compared to clinical expertise, here referred to as "natural intelligence." A systematic search was performed in the PubMed and Embase databases as of August 2016 to review all studies comparing the performance of various ML approaches with that of clinical experts in neurosurgical literature. Twenty-three studies were identified that used ML algorithms for diagnosis, presurgical planning, or outcome prediction in neurosurgical patients. Compared to clinical experts, ML models demonstrated a median absolute improvement in accuracy and area under the receiver operating curve of 13% (interquartile range 4-21%) and 0.14 (interquartile range 0.07-0.21), respectively. In 29 (58%) of the 50 outcome measures for which a P -value was provided or calculated, ML models outperformed clinical experts ( P < .05). In 18 of 50 (36%), no difference was seen between ML and expert performance ( P > .05), while in 3 of 50 (6%) clinical experts outperformed ML models ( P < .05). All 4 studies that compared clinicians assisted by ML models vs clinicians alone demonstrated a better performance in the first group. We conclude that ML models have the potential to augment the decision-making capacity of clinicians in neurosurgical applications; however, significant hurdles remain associated with creating, validating, and deploying ML models in the clinical setting. Shifting from the preconceptions of a human-vs-machine to a human-and-machine paradigm could be essential to overcome these hurdles. Published by Oxford University Press on behalf of Congress of Neurological Surgeons 2017.

  14. An Application of the Geo-Semantic Micro-services in Seamless Data-Model Integration

    NASA Astrophysics Data System (ADS)

    Jiang, P.; Elag, M.; Kumar, P.; Liu, R.; Hu, Y.; Marini, L.; Peckham, S. D.; Hsu, L.

    2016-12-01

    We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to the AE data in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus comprising fault blocks surrounding fault gouge made of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load) and shear velocity. The raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then, testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick slip and slow slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is to ultimately scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017), https://arxiv.org/abs/1702.05774. 
    Rouet-Leduc, B. et al., Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning (2017), AGU Fall Meeting Session S025: Earthquake source: from the laboratory to the field. Hulbert, C., Characterizing slow slip applying machine learning (2017), AGU Fall Meeting Session S019: Slow slip, Tectonic Tremor, and the Brittle-to-Ductile Transition Zone: What mechanisms control the diversity of slow and fast earthquakes?
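    One way such methods consume a continuous AE signal is to reduce it to per-window statistical features that a tree-based regressor can use. The sketch below is illustrative only: the window width, the two features, and the toy signal are assumptions, and no tree model is fitted here.

```python
import math

def window_features(signal, width):
    """Per-window statistical features (mean, variance) of a continuous
    signal; these become the inputs of a tree-based regressor."""
    feats = []
    for start in range(0, len(signal) - width + 1, width):
        w = signal[start:start + width]
        m = sum(w) / width
        var = sum((x - m) ** 2 for x in w) / width
        feats.append((m, var))
    return feats

# Toy "acoustic emission": a quiet background followed by a high-variance burst.
signal = [0.01 * math.sin(i) for i in range(200)] + \
         [math.sin(7 * i) for i in range(200)]
feats = window_features(signal, 100)
```

The variance feature jumps by orders of magnitude in the burst windows, which is the kind of signal statistic the cited work found highly informative about fault friction.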

  15. Perspective: Machine learning potentials for atomistic simulations

    NASA Astrophysics Data System (ADS)

    Behler, Jörg

    2016-11-01

    Nowadays, computer simulations have become a standard tool in essentially all fields of chemistry, condensed matter physics, and materials science. In order to keep up with state-of-the-art experiments and the ever growing complexity of the investigated problems, there is a constantly increasing need for simulations of more realistic, i.e., larger, model systems with improved accuracy. In many cases, the availability of sufficiently efficient interatomic potentials providing reliable energies and forces has become a serious bottleneck for performing these simulations. To address this problem, currently a paradigm change is taking place in the development of interatomic potentials. Since the early days of computer simulations simplified potentials have been derived using physical approximations whenever the direct application of electronic structure methods has been too demanding. Recent advances in machine learning (ML) now offer an alternative approach for the representation of potential-energy surfaces by fitting large data sets from electronic structure calculations. In this perspective, the central ideas underlying these ML potentials, solved problems and remaining challenges are reviewed along with a discussion of their current applicability and limitations.
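The central idea above, fitting a flexible functional form to reference energies from electronic structure calculations, can be caricatured in a few lines. A real ML potential uses symmetry functions and neural networks; the inverse-power basis and synthetic Lennard-Jones "reference data" below are only an illustration.

```python
import numpy as np

def reference_energy(r):
    """Stand-in for ab initio data: a Lennard-Jones dimer curve."""
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

r = np.linspace(0.95, 2.5, 200)            # training geometries
E = reference_energy(r)

# linear least-squares fit in an inverse-power basis 1/r^k
basis = np.column_stack([r ** -k for k in range(1, 13)])
coef, *_ = np.linalg.lstsq(basis, E, rcond=None)

E_fit = basis @ coef
rmse = np.sqrt(np.mean((E_fit - E) ** 2))
print(f"RMSE of fitted potential: {rmse:.2e}")
```

Because the basis happens to contain the exact reference terms, the fit is essentially exact here; for real reference data the residual measures how well the chosen representation captures the potential-energy surface.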

  16. Using Machine-Learned Bayesian Belief Networks to Predict Perioperative Risk of Clostridium Difficile Infection Following Colon Surgery

    PubMed Central

    Bilchik, Anton; Eberhardt, John; Kalina, Philip; Nissan, Aviram; Johnson, Eric; Avital, Itzhak; Stojadinovic, Alexander

    2012-01-01

    Background Clostridium difficile (C-Diff) infection following colorectal resection is an increasing source of morbidity and mortality. Objective We sought to determine if machine-learned Bayesian belief networks (ml-BBNs) could preoperatively provide clinicians with postoperative estimates of C-Diff risk. Methods We performed a retrospective modeling of the Nationwide Inpatient Sample (NIS) national registry dataset with independent set validation. The NIS registries for 2005 and 2006 were used for initial model training, and the data from 2007 were used for testing and validation. International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) codes were used to identify subjects undergoing colon resection and postoperative C-Diff development. The ml-BBNs were trained using a stepwise process. Receiver operating characteristic (ROC) curve analysis was conducted and area under the curve (AUC), positive predictive value (PPV), and negative predictive value (NPV) were calculated. Results From over 24 million admissions, 170,363 undergoing colon resection met the inclusion criteria. Overall, 1.7% developed postoperative C-Diff. Using the ml-BBN to estimate C-Diff risk, model AUC is 0.75. Using only known a priori features, AUC is 0.74. The model has two configurations: a high sensitivity and a high specificity configuration. Sensitivity, specificity, PPV, and NPV are 81.0%, 50.1%, 2.6%, and 99.4% for high sensitivity and 55.4%, 81.3%, 3.5%, and 99.1% for high specificity. C-Diff has 4 first-degree associates that influence the probability of C-Diff development: weight loss, tumor metastases, inflammation/infections, and disease severity. Conclusions Machine-learned BBNs can produce robust estimates of postoperative C-Diff infection, allowing clinicians to identify high-risk patients and potentially implement measures to reduce its incidence or morbidity. PMID:23611947
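As a sanity check on the operating points above: with the reported 1.7% incidence, the high-sensitivity configuration's PPV and NPV follow from Bayes' rule. The short sketch below is an editorial illustration, not the authors' code, and reproduces the reported values to within rounding.

```python
def ppv_npv(prevalence, sensitivity, specificity):
    tp = prevalence * sensitivity
    fp = (1 - prevalence) * (1 - specificity)
    tn = (1 - prevalence) * specificity
    fn = prevalence * (1 - sensitivity)
    return tp / (tp + fp), tn / (tn + fn)

# high-sensitivity configuration from the abstract
ppv, npv = ppv_npv(prevalence=0.017, sensitivity=0.810, specificity=0.501)
print(f"PPV ~ {ppv:.1%}, NPV ~ {npv:.1%}")
```

The computed values (about 2.7% and 99.3%) match the reported 2.6% and 99.4% up to the rounding of the published sensitivity and specificity, illustrating why PPV is so low at 1.7% prevalence even with reasonable sensitivity.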

  17. Topological data analyses and machine learning for detection, classification and characterization of atmospheric rivers

    NASA Astrophysics Data System (ADS)

    Muszynski, G.; Kashinath, K.; Wehner, M. F.; Prabhat, M.; Kurlin, V.

    2017-12-01

    We investigate novel approaches to detecting, classifying and characterizing extreme weather events, such as atmospheric rivers (ARs), in large high-dimensional climate datasets. ARs are narrow filaments of concentrated water vapour in the atmosphere that deliver much of the precipitation in many mid-latitude regions. The precipitation associated with ARs is also responsible for major flooding events in many coastal regions of the world, including the west coast of the United States and western Europe. In this study we combine ideas from Topological Data Analysis (TDA) with Machine Learning (ML) for detecting, classifying and characterizing extreme weather events like ARs. TDA is a young field at the interface of topology and computer science that studies "shape" - hidden topological structure - in raw data. It has been applied successfully in many areas of applied science, including complex networks, signal processing and image recognition. Using TDA, we provide ARs with a shape characteristic as a new feature descriptor for the task of AR classification. In particular, we track the change in topology of precipitable water (integrated water vapour) fields using the Union-Find algorithm. We use the generated feature descriptors with ML classifiers to establish the reliability and classification performance of our approach. We utilize the parallel toolkit for extreme climate events analysis (TECA: Petascale Pattern Recognition for Climate Science, Prabhat et al., Computer Analysis of Images and Patterns, 2015) for comparison (events identified by TECA are treated as ground truth). Preliminary results indicate that our approach brings new insight into the study of ARs and provides quantitative information about the relevance of topological feature descriptors in analyses of large climate datasets. We illustrate this method on climate model output and NCEP reanalysis datasets. Further, our method outperforms existing methods on detection and classification of ARs. This work illustrates that TDA combined with ML may provide a uniquely powerful approach for detection, classification and characterization of extreme weather phenomena.
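The Union-Find step named in this record can be sketched concisely: track connected components of a thresholded field as candidate "filaments". The grid, threshold, and 4-connectivity below are illustrative assumptions, not the study's settings.

```python
import numpy as np

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path halving keeps trees shallow
        i = parent[i]
    return i

def count_components(field, thresh):
    """Number of 4-connected components where field > thresh."""
    h, w = field.shape
    parent = list(range(h * w))
    for r in range(h):
        for c in range(w):
            if field[r, c] <= thresh:
                continue
            for rr, cc in ((r + 1, c), (r, c + 1)):   # right/down neighbours
                if rr < h and cc < w and field[rr, cc] > thresh:
                    parent[find(parent, r * w + c)] = find(parent, rr * w + cc)
    roots = {find(parent, r * w + c)
             for r in range(h) for c in range(w) if field[r, c] > thresh}
    return len(roots)

field = np.array([[0., 5., 5., 0.],
                  [0., 5., 0., 0.],
                  [0., 0., 0., 5.],
                  [5., 0., 0., 5.]])
print(count_components(field, thresh=1.0))   # 3 separate "filaments"
```

Tracking how these component counts change as the threshold varies is one way topological feature descriptors for the classifier could be built.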

  18. Resolving Transition Metal Chemical Space: Feature Selection for Machine Learning and Structure-Property Relationships.

    PubMed

    Janet, Jon Paul; Kulik, Heather J

    2017-11-22

    Machine learning (ML) of quantum mechanical properties shows promise for accelerating chemical discovery. For transition metal chemistry where accurate calculations are computationally costly and available training data sets are small, the molecular representation becomes a critical ingredient in ML model predictive accuracy. We introduce a series of revised autocorrelation functions (RACs) that encode relationships of the heuristic atomic properties (e.g., size, connectivity, and electronegativity) on a molecular graph. We alter the starting point, scope, and nature of the quantities evaluated in standard ACs to make these RACs amenable to inorganic chemistry. On an organic molecule set, we first demonstrate superior standard AC performance to other presently available topological descriptors for ML model training, with mean unsigned errors (MUEs) for atomization energies on set-aside test molecules as low as 6 kcal/mol. For inorganic chemistry, our RACs yield 1 kcal/mol ML MUEs on set-aside test molecules in spin-state splitting in comparison to 15-20× higher errors for feature sets that encode whole-molecule structural information. Systematic feature selection methods including univariate filtering, recursive feature elimination, and direct optimization (e.g., random forest and LASSO) are compared. Random-forest- or LASSO-selected subsets 4-5× smaller than the full RAC set produce sub- to 1 kcal/mol spin-splitting MUEs, with good transferability to metal-ligand bond length prediction (0.004-5 Å MUE) and redox potential on a smaller data set (0.2-0.3 eV MUE). Evaluation of feature selection results across property sets reveals the relative importance of local, electronic descriptors (e.g., electronegativity, atomic number) in spin-splitting and distal, steric effects in redox potential and bond lengths.
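Of the selection methods compared in this record, univariate filtering is the simplest: rank each descriptor by its absolute Pearson correlation with the target property. The sketch below uses synthetic stand-ins for RAC descriptors, not the authors' data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 5))                  # 5 candidate descriptors
# target depends only on columns 2 and 0, plus a little noise
y = 3.0 * X[:, 2] - 1.5 * X[:, 0] + 0.1 * rng.normal(size=n)

scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
ranked = np.argsort(scores)[::-1]
print(ranked[:2])          # the two informative descriptors: columns 2 and 0
```

Recursive feature elimination and LASSO refine this idea by accounting for redundancy among descriptors, which simple univariate filtering ignores.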

  19. Could machine learning improve the prediction of pelvic nodal status of prostate cancer patients? Preliminary results of a pilot study.

    PubMed

    De Bari, B; Vallati, M; Gatta, R; Simeone, C; Girelli, G; Ricardi, U; Meattini, I; Gabriele, P; Bellavita, R; Krengli, M; Cafaro, I; Cagna, E; Bunkheila, F; Borghesi, S; Signor, M; Di Marco, A; Bertoni, F; Stefanacci, M; Pasinetti, N; Buglione, M; Magrini, S M

    2015-07-01

    We tested and compared performances of Roach formula, Partin tables and of three Machine Learning (ML) based algorithms based on decision trees in identifying N+ prostate cancer (PC). 1,555 cN0 and 50 cN+ PC were analyzed. Results were also verified on an independent population of 204 operated cN0 patients, with a known pN status (187 pN0, 17 pN1 patients). ML performed better, also when tested on the surgical population, with accuracy, specificity, and sensitivity ranging between 48-86%, 35-91%, and 17-79%, respectively. ML potentially allows better prediction of the nodal status of PC, potentially allowing a better tailoring of pelvic irradiation.

  20. Machine learning algorithms for the creation of clinical healthcare enterprise systems

    NASA Astrophysics Data System (ADS)

    Mandal, Indrajit

    2017-10-01

    Clinical recommender systems are increasingly becoming popular for improving modern healthcare systems. Enterprise systems are persuasively used for creating effective nurse care plans to provide nurse training, clinical recommendations and clinical quality control. A novel design of a reliable clinical recommender system based on a multiple classifier system (MCS) is implemented. A hybrid machine learning (ML) ensemble based on the random subspace method and random forest is presented. The performance accuracy and robustness of the proposed enterprise architecture are quantitatively estimated to be above 99% and 97%, respectively (above a 95% confidence interval). The study then extends to experimental analysis of the clinical recommender system with respect to noisy data environments. The ranking of items in the nurse care plan is demonstrated using machine learning algorithms (MLAs) to overcome the drawback of the traditional association rule method. The promising experimental results are compared against state-of-the-art approaches to highlight the advancement in recommendation technology. The proposed recommender system is experimentally validated using five benchmark clinical datasets to reinforce the research findings.
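The random subspace idea named in this record can be illustrated without any ML library: train many weak learners, each on a random subset of the features, and combine them by majority vote. Nearest-centroid stands in for the tree base learner here purely to keep the sketch dependency-free; everything below is an editorial assumption, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_centroids(cents, X):
    classes = sorted(cents)
    d = np.stack([np.linalg.norm(X - cents[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

# two Gaussian classes in 10-D; only the first 3 features are informative
X = rng.normal(size=(300, 10))
y = (rng.random(300) < 0.5).astype(int)
X[y == 1, :3] += 2.0

votes = []
for _ in range(25):                         # 25 random-subspace members
    feats = rng.choice(10, size=5, replace=False)
    cents = fit_centroids(X[:, feats], y)
    votes.append(predict_centroids(cents, X[:, feats]))
pred = (np.mean(votes, axis=0) > 0.5).astype(int)
print(f"training accuracy: {np.mean(pred == y):.2f}")
```

Members that happen to draw mostly uninformative features contribute near-random votes, but the majority vote across the ensemble recovers a strong classifier, which is the robustness property the record emphasizes.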

  1. Deep Learning for ECG Classification

    NASA Astrophysics Data System (ADS)

    Pyakillya, B.; Kazachenko, N.; Mikhailovsky, N.

    2017-10-01

    The importance of ECG classification is very high now due to the many current medical applications where this problem arises. Currently, there are many machine learning (ML) solutions which can be used for analyzing and classifying ECG data. However, the main disadvantage of these ML approaches is their use of heuristic hand-crafted or engineered features with shallow feature-learning architectures. The problem lies in the possibility of not finding the most appropriate features, which would give high classification accuracy for this ECG problem. One proposed solution is to use deep learning architectures, where the first layers of convolutional neurons behave as feature extractors and, at the end, some fully-connected (FCN) layers are used for making the final decision about ECG classes. In this work, a deep learning architecture with 1D convolutional layers and FCN layers for ECG classification is presented and some classification results are shown.
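The architecture described above can be sketched as a forward pass in plain numpy: a 1-D convolutional layer as feature extractor, global pooling, and a fully-connected layer producing class scores. Filter counts, widths, and the random weights are illustrative assumptions, not the paper's trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid 1-D convolution of signal x with each kernel, plus ReLU."""
    k = kernels.shape[1]
    out = np.array([[x[i:i + k] @ w for i in range(len(x) - k + 1)]
                    for w in kernels])
    return np.maximum(out, 0.0)

signal = rng.normal(size=256)            # stand-in for one ECG window
kernels = rng.normal(size=(8, 16))       # 8 "learned" filters of width 16
fc_w = rng.normal(size=(4, 8))           # fully-connected layer, 4 classes

features = conv1d(signal, kernels)       # (8, 241) feature maps
pooled = features.max(axis=1)            # global max pooling over time
logits = fc_w @ pooled
probs = np.exp(logits - logits.max())    # numerically stable softmax
probs /= probs.sum()
print(probs)                             # class probabilities, sum to 1
```

In a real system the kernels and weights would be learned end-to-end by backpropagation, which is precisely what replaces hand-crafted feature engineering.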

  2. Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time‐to‐Event Analysis

    PubMed Central

    Gong, Xiajing; Hu, Meng

    2018-01-01

    Abstract Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time‐to‐event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high‐dimensional data featured by a large number of predictor variables. Our results showed that ML‐based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high‐dimensional data. The prediction performances of ML‐based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML‐based methods provide a powerful tool for time‐to‐event analysis, with a built‐in capacity for high‐dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. PMID:29536640
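The concordance index used above to compare the ML methods with the Cox model has a simple pairwise definition: among comparable pairs of subjects, the fraction where the higher predicted risk belongs to the subject with the shorter observed time. The sketch below simplifies to fully observed (uncensored) times.

```python
def c_index(times, risks):
    """Concordance index for uncensored event times."""
    concordant = ties = comparable = 0
    for i in range(len(times)):
        for j in range(i + 1, len(times)):
            if times[i] == times[j]:
                continue                   # tied times: not a usable pair here
            comparable += 1
            lo, hi = (i, j) if times[i] < times[j] else (j, i)
            if risks[lo] > risks[hi]:
                concordant += 1
            elif risks[lo] == risks[hi]:
                ties += 1
    return (concordant + 0.5 * ties) / comparable

# perfect ranking: shorter survival time <-> higher predicted risk
print(c_index([2, 5, 1, 8], [0.7, 0.4, 0.9, 0.1]))   # 1.0
```

Handling censored observations requires restricting the comparable pairs further (a pair is usable only if the shorter time is an observed event), which is what production implementations add.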

  3. Machine learning reveals cyclic changes in seismic source spectra in Geysers geothermal field

    PubMed Central

    Paisley, John

    2018-01-01

    The earthquake rupture process comprises complex interactions of stress, fracture, and frictional properties. New machine learning methods demonstrate great potential to reveal patterns in time-dependent spectral properties of seismic signals and enable identification of changes in faulting processes. Clustering of 46,000 earthquakes of 0.3 < ML < 1.5 from the Geysers geothermal field (CA) yields groupings that have no reservoir-scale spatial patterns but clear temporal patterns. Events with similar spectral properties repeat on annual cycles within each cluster and track changes in the water injection rates into the Geysers reservoir, indicating that changes in acoustic properties and faulting processes accompany changes in thermomechanical state. The methods open new means to identify and characterize subtle changes in seismic source properties, with applications to tectonic and geothermal seismicity. PMID:29806015

  4. Predictive modeling of dynamic fracture growth in brittle materials with machine learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Bryan A.; Rougier, Esteban; O’Malley, Daniel

    We use simulation data from a high-fidelity Finite-Discrete Element Model to build an efficient Machine Learning (ML) approach to predict fracture growth and coalescence. Our goal is for the ML approach to be used as an emulator in place of the computationally intensive high-fidelity models in an uncertainty quantification framework where thousands of forward runs are required. The failure of materials with various fracture configurations (size, orientation and the number of initial cracks) is explored and used as data to train our ML model. This novel approach has shown promise in predicting spatial (path to failure) and temporal (time to failure) aspects of brittle material failure. Predictions of where dominant fracture paths formed within a material were ~85% accurate and the time of material failure deviated from the actual failure time by an average of ~16%. Additionally, the ML model achieves a reduction in computational cost by multiple orders of magnitude.

  5. Predictive modeling of dynamic fracture growth in brittle materials with machine learning

    DOE PAGES

    Moore, Bryan A.; Rougier, Esteban; O’Malley, Daniel; ...

    2018-02-22

    We use simulation data from a high-fidelity Finite-Discrete Element Model to build an efficient Machine Learning (ML) approach to predict fracture growth and coalescence. Our goal is for the ML approach to be used as an emulator in place of the computationally intensive high-fidelity models in an uncertainty quantification framework where thousands of forward runs are required. The failure of materials with various fracture configurations (size, orientation and the number of initial cracks) is explored and used as data to train our ML model. This novel approach has shown promise in predicting spatial (path to failure) and temporal (time to failure) aspects of brittle material failure. Predictions of where dominant fracture paths formed within a material were ~85% accurate and the time of material failure deviated from the actual failure time by an average of ~16%. Additionally, the ML model achieves a reduction in computational cost by multiple orders of magnitude.

  6. Invited commentary on comparison of robotics, functional electrical stimulation, and motor learning methods for treatment of persistent upper extremity dysfunction after stroke: a randomized controlled trial.

    PubMed

    Kwakkel, Gert; van Wegen, Erwin E; Meskers, Carel M

    2015-06-01

    In this issue of Archives of Physical Medicine and Rehabilitation, Jessica McCabe and colleagues report findings from their methodologically sound, dose-matched clinical trial in 39 patients beyond 6 months poststroke. In this phase II trial, the effects of 60 treatment sessions, each involving 3.5 hours of intensive practice plus either 1.5 hours of functional electrical stimulation (FES) or shoulder-arm robotic therapy, were compared with 5 hours of intensive daily practice alone. Although no significant between-group differences were found on the primary outcome measure of the Arm Motor Ability Test or the secondary outcome measure of the Fugl-Meyer Arm motor score, within-group therapeutic gains of 10% to 15% were observed on the Arm Motor Ability Test and Fugl-Meyer Arm. These gains are clinically meaningful for patients with stroke. However, the underlying mechanisms that drive these improvements remain poorly understood. The approximately $1000 cost reduction per patient calculated for the use of motor learning (ML) methods alone or combined with FES, compared with the combination of ML and shoulder-arm robotics, further emphasizes the need for cost considerations when making clinical decisions about selecting the most appropriate therapy for the upper paretic limb in patients with chronic stroke. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  7. The effects of deep network topology on mortality prediction.

    PubMed

    Hao Du; Ghassemi, Mohammad M; Mengling Feng

    2016-08-01

    Deep learning has achieved remarkable results in the areas of computer vision, speech recognition, natural language processing and most recently, even playing Go. The application of deep learning to problems in healthcare, however, has gained attention only in recent years, and its ultimate place at the bedside remains a topic of skeptical discussion. While there is a growing academic interest in the application of Machine Learning (ML) techniques to clinical problems, many in the clinical community see little incentive to upgrade from simpler methods, such as logistic regression, to deep learning. Logistic regression, after all, provides odds ratios, p-values and confidence intervals that allow for ease of interpretation, while deep nets are often seen as `black-boxes' which are difficult to understand and, as of yet, have not demonstrated performance levels far exceeding their simpler counterparts. If deep learning is to ever take a place at the bedside, it will require studies which (1) showcase the performance of deep-learning methods relative to other approaches and (2) interpret the relationships between network structure, model performance, features and outcomes. We have chosen these two requirements as the goal of this study. In our investigation, we utilized a publicly available EMR dataset of over 32,000 intensive care unit patients and trained a Deep Belief Network (DBN) to predict patient mortality at discharge. Utilizing an evolutionary algorithm, we demonstrate automated topology selection for DBNs. We demonstrate that with the correct topology selection, DBNs can achieve better prediction performance compared to several benchmarking methods.

  8. The Public Economics of Mastery Learning.

    ERIC Educational Resources Information Center

    Garner, William T.

    There is both less and more to mastery learning (ML) than meets the eye. Less because mastery learning is not based on a model of school learning, and more because it is the most optimistic statement we have about the power of education. The notions of setting achievement standards and letting time for completion vary, of using criterion…

  9. 78 FR 68774 - Onsite Emergency Response Capabilities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-15

    [email protected] . SUPPLEMENTARY INFORMATION: I. Background As a result of the events at the Fukushima Dai-ichi... Force Review of Insights from the Fukushima Dai-ichi Accident'' (ADAMS Accession No. ML111861807), the... in Response to Fukushima Lessons Learned'' (ADAMS Accession No. ML11269A204). The NRC staff...

  10. Comparing deep learning models for population screening using chest radiography

    NASA Astrophysics Data System (ADS)

    Sivaramakrishnan, R.; Antani, Sameer; Candemir, Sema; Xue, Zhiyun; Abuya, Joseph; Kohli, Marc; Alderson, Philip; Thoma, George

    2018-02-01

    According to the World Health Organization (WHO), tuberculosis (TB) remains the most deadly infectious disease in the world. In a 2015 global annual TB report, 1.5 million TB related deaths were reported. The conditions worsened in 2016 with 1.7 million reported deaths and more than 10 million people infected with the disease. Analysis of frontal chest X-rays (CXR) is one of the most popular methods for initial TB screening, however, the method is impacted by the lack of experts for screening chest radiographs. Computer-aided diagnosis (CADx) tools have gained significance because they reduce the human burden in screening and diagnosis, particularly in countries that lack substantial radiology services. State-of-the-art CADx software typically is based on machine learning (ML) approaches that use hand-engineered features, demanding expertise in analyzing the input variances and accounting for the changes in size, background, angle, and position of the region of interest (ROI) on the underlying medical imagery. More automatic Deep Learning (DL) tools have demonstrated promising results in a wide range of ML applications. Convolutional Neural Networks (CNN), a class of DL models, have gained research prominence in image classification, detection, and localization tasks because they are highly scalable and deliver superior results with end-to-end feature extraction and classification. In this study, we evaluated the performance of CNN based DL models for population screening using frontal CXRs. The results demonstrate that pre-trained CNNs are a promising feature extracting tool for medical imagery including the automated diagnosis of TB from chest radiographs but emphasize the importance of large data sets for the most accurate classification.

  11. Learning Machine Learning: A Case Study

    ERIC Educational Resources Information Center

    Lavesson, N.

    2010-01-01

    This correspondence reports on a case study conducted in the Master's-level Machine Learning (ML) course at Blekinge Institute of Technology, Sweden. The students participated in a self-assessment test and a diagnostic test of prerequisite subjects, and their results on these tests are correlated with their achievement of the course's learning…

  12. Modern Languages and Specific Learning Difficulties (SpLD): Implications of Teaching Adult Learners with Dyslexia in Distance Learning

    ERIC Educational Resources Information Center

    Gallardo, Matilde; Heiser, Sarah; Arias McLaughlin, Ximena

    2015-01-01

    In modern language (ML) distance learning programmes, teachers and students use online tools to facilitate, reinforce and support independent learning. This makes it essential for teachers to develop pedagogical expertise in using online communication tools to perform their role. Teachers frequently raise questions of how best to support the needs…

  13. Developing Pedagogical Expertise in Modern Language Learning and Specific Learning Difficulties through Collaborative and Open Educational Practices

    ERIC Educational Resources Information Center

    Gallardo, Matilde; Heiser, Sarah; Arias McLaughlin, Ximena

    2017-01-01

    This paper analyses teachers' engagement with collaborative and open educational practices to develop their pedagogical expertise in the field of modern language (ML) learning and specific learning difficulties (SpLD). The study analyses the findings of a staff development initiative at the Department of Languages, Open University, UK, in 2013,…

  14. Performance of Machine Learning Algorithms for Qualitative and Quantitative Prediction Drug Blockade of hERG1 channel.

    PubMed

    Wacker, Soren; Noskov, Sergei Yu

    2018-05-01

    Drug-induced abnormal heart rhythm known as Torsades de Pointes (TdP) is a potentially lethal ventricular tachycardia found in many patients. Even newly released anti-arrhythmic drugs, like ivabradine with the HCN channel as a primary target, block the hERG potassium current in an overlapping concentration interval. Promiscuous drug block of the hERG channel may potentially lead to perturbation of the action potential duration (APD) and TdP, especially when combined with polypharmacy and/or electrolyte disturbances. The example of the novel anti-arrhythmic ivabradine illustrates a clinically important and ongoing deficit in drug design and warrants better screening methods. There is an urgent need to develop new approaches for rapid and accurate assessment of how drugs with complex interactions and multiple subcellular targets can predispose or protect from drug-induced TdP. One of the unexpected outcomes of the compulsory hERG screening implemented in the USA and European Union is large datasets of IC50 values for various molecules entering the market. The abundant data now allow construction of predictive machine-learning (ML) models. Novel ML algorithms and techniques promise better accuracy in determining IC50 values of hERG blockade, comparable to or surpassing that of earlier QSAR or molecular modeling techniques. To test the performance of modern ML techniques, we have developed a computational platform integrating various workflows for quantitative structure activity relationship (QSAR) models using data from the ChEMBL database. To establish the predictive power of ML-based algorithms we computed IC50 values for a large dataset of molecules and compared them to automated patch-clamp measurements for a large dataset of hERG blocking and non-blocking drugs, an industry gold standard in studies of cardiotoxicity. The optimal protocol with high sensitivity and predictive power is based on the novel eXtreme gradient boosting (XGBoost) algorithm. The ML platform with XGBoost displays excellent performance, with a coefficient of determination of up to R2 ~ 0.8 for pIC50 values in evaluation datasets, surpassing other metrics and approaches available in the literature. Ultimately, the ML-based platform developed in our work is a scalable framework with automation potential to interact with other developing technologies in the cardiotoxicity field, including high-throughput electrophysiology measurements delivering large datasets of profiled drugs, and rapid synthesis and drug development via progress in synthetic biology.
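The evaluation metric quoted in this record, the coefficient of determination between predicted and measured pIC50 values, is easy to state precisely. The numbers below are synthetic stand-ins, not ChEMBL data.

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

y_true = np.array([4.2, 5.0, 5.8, 6.5, 7.1, 8.0])    # "measured" pIC50
y_pred = np.array([4.5, 4.9, 6.0, 6.3, 7.4, 7.8])    # "predicted" pIC50
print(round(r_squared(y_true, y_pred), 3))           # 0.968
```

Note that R^2 can be negative for a model worse than predicting the mean, so an R^2 of ~0.8 on held-out pIC50 values is a substantial result.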

  15. Predictions of new ABO3 perovskite compounds by combining machine learning and density functional theory

    NASA Astrophysics Data System (ADS)

    Balachandran, Prasanna V.; Emery, Antoine A.; Gubernatis, James E.; Lookman, Turab; Wolverton, Chris; Zunger, Alex

    2018-04-01

    We apply machine learning (ML) methods to a database of 390 experimentally reported ABO3 compounds to construct two statistical models that predict possible new perovskite materials and possible new cubic perovskites. The first ML model classified the 390 compounds into 254 perovskites and 136 that are not perovskites with a 90% average cross-validation (CV) accuracy; the second ML model further classified the perovskites into 22 known cubic perovskites and 232 known noncubic perovskites with a 94% average CV accuracy. We find that the most effective chemical descriptors affecting our classification include largely geometric constructs such as the A and B Shannon ionic radii, the tolerance and octahedral factors, the A-O and B-O bond lengths, and the A and B Villars' Mendeleev numbers. We then construct an additional list of 625 ABO3 compounds assembled from charge-conserving combinations of A and B atoms absent from our list of known compounds. Then, using the two ML models constructed on the known compounds, we predict that 235 of the 625 exist in a perovskite structure with a confidence greater than 50% and, among them, that 20 exist in the cubic structure (albeit, the latter with only ~50% confidence). We find that the new perovskites are most likely to occur when the A and B atoms are a lanthanide or actinide, when the A atom is an alkali, alkali earth, or late transition metal atom, or when the B atom is a p-block atom. We also compare the ML findings with the density functional theory calculations and convex hull analyses in the Open Quantum Materials Database (OQMD), which predicts the T = 0 K ground-state stability of all the ABO3 compounds. We find that OQMD predicts 186 of the 254 perovskites in the experimental database to be thermodynamically stable within 100 meV/atom of the convex hull, and predicts 87 of the 235 ML-predicted perovskite compounds to be thermodynamically stable within 100 meV/atom of the convex hull, including 6 in cubic structures. We suggest these 87 as the most promising candidates for future experimental synthesis of novel perovskites.
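Two of the geometric descriptors named in this record are classical: the Goldschmidt tolerance factor t = (r_A + r_O) / (sqrt(2) * (r_B + r_O)) and the octahedral factor r_B / r_O. The sketch below evaluates them for SrTiO3, a textbook cubic perovskite, using standard Shannon radii (in Angstrom); the choice of example compound is editorial.

```python
import math

def tolerance_factor(r_a, r_b, r_o=1.40):
    """Goldschmidt tolerance factor from ionic radii (Angstrom)."""
    return (r_a + r_o) / (math.sqrt(2) * (r_b + r_o))

def octahedral_factor(r_b, r_o=1.40):
    """Ratio of B-cation to oxygen radius."""
    return r_b / r_o

# SrTiO3: Shannon radii Sr2+ (XII-coordinate) = 1.44, Ti4+ (VI) = 0.605
t = tolerance_factor(1.44, 0.605)
print(f"t = {t:.3f}, octahedral factor = {octahedral_factor(0.605):.3f}")
```

A tolerance factor near 1 (here t ~ 1.00) is the classical indicator of a stable cubic perovskite, which is consistent with the geometric descriptors dominating the ML classification above.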

  16. Motor Learning Versus Standard Walking Exercise in Older Adults with Subclinical Gait Dysfunction: A Randomized Clinical Trial

    PubMed Central

    Brach, Jennifer S.; Van Swearingen, Jessie M.; Perera, Subashan; Wert, David M.; Studenski, Stephanie

    2013-01-01

    Background Current exercise recommendations focus on endurance and strength, but rarely incorporate principles of motor learning. Motor learning exercise is designed to address neurological aspects of movement. Motor learning exercise has not been evaluated in older adults with subclinical gait dysfunction. Objectives To compare motor learning versus standard exercise on measures of mobility and perceived function and disability. Design Single-blind randomized trial. Setting University research center. Participants Older adults (n=40, mean age 77.1±6.0 years) who had normal walking speed (≥1.0 m/s) and impaired motor skill (Figure of 8 walk time > 8 s). Interventions The motor learning program (ML) incorporated goal-oriented stepping and walking to promote timing and coordination within the phases of the gait cycle. The standard program (S) employed endurance training by treadmill walking. Both included strength training and were offered twice weekly for one hour for 12 weeks. Measurements Primary outcomes included mobility performance (gait efficiency, motor skill in walking, gait speed, and walking endurance) and secondary outcomes included perceived function and disability (Late Life Function and Disability Instrument). Results 38 of 40 participants completed the trial (ML, n=18; S, n=20). ML improved more than S in gait speed (0.13 vs. 0.05 m/s, p=0.008) and motor skill (−2.2 vs. −0.89 s, p<0.0001). Both groups improved in walking endurance (28.3 and 22.9 m) but did not differ significantly (p=0.14). Changes in gait efficiency and perceived function and disability were not different between the groups (p>0.10). Conclusion In older adults with subclinical gait dysfunction, motor learning exercise improved some parameters of mobility performance more than standard exercise. PMID:24219189

  17. Quantitative forecasting of PTSD from early trauma responses: a Machine Learning application.

    PubMed

    Galatzer-Levy, Isaac R; Karstoft, Karen-Inge; Statnikov, Alexander; Shalev, Arieh Y

    2014-12-01

    There is broad interest in predicting the clinical course of mental disorders from early, multimodal clinical and biological information. Current computational models, however, constitute a significant barrier to realizing this goal. The early identification of trauma survivors at risk of post-traumatic stress disorder (PTSD) is plausible given the disorder's salient onset and the abundance of putative biological and clinical risk indicators. This work evaluates the ability of Machine Learning (ML) forecasting approaches to identify and integrate a panel of unique predictive characteristics and determine their accuracy in forecasting non-remitting PTSD from information collected within 10 days of a traumatic event. Data on event characteristics, emergency department observations, and early symptoms were collected in 957 trauma survivors, followed for fifteen months. An ML feature selection algorithm identified a set of predictors that rendered all others redundant. Support Vector Machines (SVMs) as well as other ML classification algorithms were used to evaluate the forecasting accuracy of i) ML selected features, ii) all available features without selection, and iii) Acute Stress Disorder (ASD) symptoms alone. SVM also compared the prediction of a) PTSD diagnostic status at 15 months to b) posterior probability of membership in an empirically derived non-remitting PTSD symptom trajectory. Results are expressed as mean Area Under Receiver Operating Characteristics Curve (AUC). The feature selection algorithm identified 16 predictors, present in ≥ 95% of cross-validation trials. The accuracy of predicting non-remitting PTSD from that set (AUC = .77) did not differ from predicting from all available information (AUC = .78). Predicting from ASD symptoms was not better than chance (AUC = .60). The prediction of PTSD status was less accurate than that of membership in a non-remitting trajectory (AUC = .71). ML methods may fill a critical gap in forecasting PTSD.
The ability to identify and integrate unique risk indicators makes this a promising approach for developing algorithms that infer probabilistic risk of chronic posttraumatic stress psychopathology based on complex sources of biological, psychological, and social information. Copyright © 2014 Elsevier Ltd. All rights reserved.
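
    Results in this record are reported as area under the ROC curve (AUC). As a minimal illustration, AUC can be computed directly from classifier scores via the rank-sum (Mann-Whitney) identity: AUC is the probability that a random positive case scores above a random negative one, with ties counted as 0.5. The risk scores below are hypothetical, not the study's data.

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve from raw classifier scores,
    via the rank-sum identity (ties count as half a win)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical risk scores: non-remitting (positive) vs. remitting survivors.
positives = [0.9, 0.8, 0.7, 0.6]
negatives = [0.5, 0.4, 0.6, 0.2]
print(round(auc(positives, negatives), 3))  # → 0.969
```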

  18. Chemometric-assisted spectrophotometric methods and high performance liquid chromatography for simultaneous determination of seven β-blockers in their pharmaceutical products: A comparative study

    NASA Astrophysics Data System (ADS)

    Abdel Hameed, Eman A.; Abdel Salam, Randa A.; Hadad, Ghada M.

    2015-04-01

    Chemometric-assisted spectrophotometric methods and high performance liquid chromatography (HPLC) were developed for the simultaneous determination of the seven most commonly prescribed β-blockers (atenolol, sotalol, metoprolol, bisoprolol, propranolol, carvedilol and nebivolol). Principal component regression (PCR), partial least squares (PLS) and PLS with previous wavelength selection by genetic algorithm (GA-PLS) were used for chemometric analysis of spectral data of these drugs. The compositions of the mixtures used in the calibration set were varied to cover the linearity ranges 0.7-10 μg ml-1 for AT, 1-15 μg ml-1 for ST, 1-15 μg ml-1 for MT, 0.3-5 μg ml-1 for BS, 0.1-3 μg ml-1 for PR, 0.1-3 μg ml-1 for CV and 0.7-5 μg ml-1 for NB. The analytical performances of these chemometric methods were characterized by relative prediction errors and were compared with each other. GA-PLS showed superiority over the other applied multivariate methods due to the wavelength selection. A new gradient HPLC method was developed using statistical experimental design. Optimum conditions of separation were determined with the aid of central composite design. The developed HPLC method was found to be linear in the range of 0.2-20 μg ml-1 for AT, 0.2-20 μg ml-1 for ST, 0.1-15 μg ml-1 for MT, 0.1-15 μg ml-1 for BS, 0.1-13 μg ml-1 for PR, 0.1-13 μg ml-1 for CV and 0.4-20 μg ml-1 for NB. No significant difference was found between the results of the proposed GA-PLS and HPLC methods with respect to accuracy and precision. The proposed analytical methods did not show any interference of the excipients when applied to pharmaceutical products.
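
    The calibration sets above fit instrument response against known concentrations over a stated linearity range. A univariate ordinary least-squares sketch of that idea (absorbances are synthetic, loosely matching the 0.7-10 μg ml-1 atenolol range; the real PCR/PLS/GA-PLS models are multivariate over full spectra):

```python
def fit_line(x, y):
    """Return (slope, intercept) of the least-squares calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

conc = [0.7, 2.0, 4.0, 6.0, 8.0, 10.0]                    # μg ml-1, standards
absorbance = [0.051, 0.148, 0.295, 0.443, 0.590, 0.738]   # AU, synthetic
slope, intercept = fit_line(conc, absorbance)

# Invert the calibration line to quantify an unknown sample.
unknown = (0.37 - intercept) / slope
print(round(slope, 4), round(unknown, 2))
```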

  19. Distributed Learning Enhances Relational Memory Consolidation

    ERIC Educational Resources Information Center

    Litman, Leib; Davachi, Lila

    2008-01-01

    It has long been known that distributed learning (DL) provides a mnemonic advantage over massed learning (ML). However, the underlying mechanisms that drive this robust mnemonic effect remain largely unknown. In two experiments, we show that DL across a 24 hr interval does not enhance immediate memory performance but instead slows the rate of…

  20. The reluctant visitor: an alkaloid in toxic nectar can reduce olfactory learning and memory in Asian honey bees.

    PubMed

    Zhang, Junjun; Wang, Zhengwei; Wen, Ping; Qu, Yufeng; Tan, Ken; Nieh, James C

    2018-03-01

    The nectar of the thunder god vine, Tripterygium hypoglaucum, contains a terpenoid, triptolide (TRP), that may be toxic to the sympatric Asian honey bee, Apis cerana, because honey produced from this nectar is toxic to bees. However, these bees will forage on, recruit for, and pollinate this plant during a seasonal dearth of preferred food sources. Olfactory learning plays a key role in forager constancy and pollination, and we therefore tested the effects of acute and chronic TRP feeding on forager olfactory learning, using proboscis extension reflex conditioning. At concentrations of 0.5-10 µg TRP ml-1, there were no learning effects of acute exposure. However, memory retention (1 h after the last learning trial) significantly decreased by 56% following acute consumption of 0.5 µg TRP ml-1. Chronic exposure did not alter learning or memory, except at high concentrations (5 and 10 µg TRP ml-1). TRP concentrations in nectar may therefore not significantly harm plant pollination. Surprisingly, TRP slightly increased bee survival, and thus other components in T. hypoglaucum honey may be toxic. Long-term exposure to TRP could have colony effects, but these may be ameliorated by the bees' aversion to T. hypoglaucum nectar when other food sources are available and, perhaps, by detoxification mechanisms. The co-evolution of this plant and its reluctant visitor may therefore illustrate a classic compromise between the interests of both actors. © 2018. Published by The Company of Biologists Ltd.

  1. Nonlinear phase noise tolerance for coherent optical systems using soft-decision-aided ML carrier phase estimation enhanced with constellation partitioning

    NASA Astrophysics Data System (ADS)

    Li, Yan; Wu, Mingwei; Du, Xinwei; Xu, Zhuoran; Gurusamy, Mohan; Yu, Changyuan; Kam, Pooi-Yuen

    2018-02-01

    A novel soft-decision-aided maximum likelihood (SDA-ML) carrier phase estimation method and its simplified version, the decision-aided and soft-decision-aided maximum likelihood (DA-SDA-ML) methods are tested in a nonlinear phase noise-dominant channel. The numerical performance results show that both the SDA-ML and DA-SDA-ML methods outperform the conventional DA-ML in systems with constant-amplitude modulation formats. In addition, modified algorithms based on constellation partitioning are proposed. With partitioning, the modified SDA-ML and DA-SDA-ML are shown to be useful for compensating the nonlinear phase noise in multi-level modulation systems.

  2. Using Supervised Machine Learning to Classify Real Alerts and Artifact in Online Multi-signal Vital Sign Monitoring Data

    PubMed Central

    Chen, Lujie; Dubrawski, Artur; Wang, Donghan; Fiterau, Madalina; Guillame-Bert, Mathieu; Bose, Eliezer; Kaynar, Ata M.; Wallace, David J.; Guttendorf, Jane; Clermont, Gilles; Pinsky, Michael R.; Hravnak, Marilyn

    2015-01-01

    OBJECTIVE Use machine-learning (ML) algorithms to classify alerts as real or artifacts in online noninvasive vital sign (VS) data streams to reduce alarm fatigue and missed true instability. METHODS Using a 24-bed trauma step-down unit’s non-invasive VS monitoring data (heart rate [HR], respiratory rate [RR], peripheral oximetry [SpO2]) recorded at 1/20 Hz, and noninvasive oscillometric blood pressure [BP] less frequently, we partitioned data into training/validation (294 admissions; 22,980 monitoring hours) and test sets (2,057 admissions; 156,177 monitoring hours). Alerts were VS deviations beyond stability thresholds. A four-member expert committee annotated a subset of alerts (576 in training/validation set, 397 in test set) as real or artifact selected by active learning, upon which we trained ML algorithms. The best model was evaluated on alerts in the test set to enact online alert classification as signals evolve over time. MAIN RESULTS The Random Forest model discriminated between real and artifact as the alerts evolved online in the test set with area under the curve (AUC) performance of 0.79 (95% CI 0.67-0.93) for SpO2 at the instant the VS first crossed threshold and increased to 0.87 (95% CI 0.71-0.95) at 3 minutes into the alerting period. BP AUC started at 0.77 (95% CI 0.64-0.95) and increased to 0.87 (95% CI 0.71-0.98), while RR AUC started at 0.85 (95% CI 0.77-0.95) and increased to 0.97 (95% CI 0.94-1.00). HR alerts were too few for model development. CONCLUSIONS ML models can discern clinically relevant SpO2, BP and RR alerts from artifacts in an online monitoring dataset (AUC>0.87). PMID:26992068
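
    Alerts in this study were defined as vital-sign deviations beyond stability thresholds. A minimal sketch of that alerting step on a sampled SpO2 stream (the threshold values and samples here are hypothetical, not the unit's actual settings; the ML classifier would then label each flagged span as real or artifact):

```python
def detect_alerts(samples, low, high):
    """Return (start_index, end_index) spans where samples leave [low, high]."""
    spans, start = [], None
    for i, v in enumerate(samples):
        out = v < low or v > high
        if out and start is None:
            start = i                      # alert episode begins
        elif not out and start is not None:
            spans.append((start, i - 1))   # episode ends on return to range
            start = None
    if start is not None:                  # stream ends mid-alert
        spans.append((start, len(samples) - 1))
    return spans

spo2 = [97, 96, 89, 88, 90, 95, 96, 85, 97]  # toy stream, sampled at 1/20 Hz
print(detect_alerts(spo2, low=90, high=100))  # → [(2, 3), (7, 7)]
```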

  3. Parameterization of typhoon-induced ocean cooling using temperature equation and machine learning algorithms: an example of typhoon Soulik (2013)

    NASA Astrophysics Data System (ADS)

    Wei, Jun; Jiang, Guo-Qing; Liu, Xin

    2017-09-01

    This study proposed three algorithms that can potentially be used to provide sea surface temperature (SST) conditions for typhoon prediction models. Different from traditional data assimilation approaches, which provide prescribed initial/boundary conditions, our proposed algorithms aim to resolve a flow-dependent SST feedback between growing typhoons and the ocean at future times. Two of these algorithms are based on linear temperature equations (TE-based), and the other is based on an innovative technique involving machine learning (ML-based). The algorithms are then implemented into a Weather Research and Forecasting model for typhoon simulation to assess their effectiveness, and the results show significant improvement in simulated storm intensities when ocean cooling feedback is included. The TE-based algorithm I considers wind-induced ocean vertical mixing and upwelling processes only, and thus obtained a synoptic and relatively smooth sea surface temperature cooling. The TE-based algorithm II incorporates not only typhoon winds but also ocean information, and thus resolves more cooling features. The ML-based algorithm is based on a neural network, consisting of multiple layers of input variables and neurons, and produces the best estimate of the cooling structure, in terms of its amplitude and position. Sensitivity analysis indicated that the typhoon-induced ocean cooling is a nonlinear process involving interactions of multiple atmospheric and oceanic variables. Therefore, with an appropriate selection of input variables and neuron sizes, the ML-based algorithm appears to be more efficient in predicting the typhoon-induced ocean cooling and typhoon intensity than algorithms based on linear regression methods.
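
    The ML-based algorithm is a neural network mapping atmospheric and oceanic inputs to SST cooling. A minimal one-hidden-layer regression sketch on a toy nonlinear target (the data, layer sizes, and target function below are all hypothetical stand-ins for the study's inputs and cooling field):

```python
import math
import random

random.seed(0)

# One hidden layer of 4 tanh units mapping (wind, mixed-layer depth) -> cooling.
W1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(4)]
b1 = [0.0] * 4
W2 = [random.uniform(-0.5, 0.5) for _ in range(4)]
b2 = 0.0

def forward(x):
    """Return (network output, hidden activations) for input vector x."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return sum(w * hi for w, hi in zip(W2, h)) + b2, h

# Toy target: cooling grows nonlinearly with wind, shrinks with deeper mixing.
data = [((w, d), 0.01 * w * w / d) for w in (1.0, 2.0, 3.0) for d in (1.0, 2.0)]

lr = 0.05
for _ in range(2000):                      # plain stochastic gradient descent
    for x, y in data:
        yhat, h = forward(x)
        err = yhat - y                     # dLoss/dyhat for 0.5*err^2 loss
        for j in range(4):
            grad_h = err * W2[j] * (1 - h[j] ** 2)
            W2[j] -= lr * err * h[j]
            for i in range(2):
                W1[j][i] -= lr * grad_h * x[i]
            b1[j] -= lr * grad_h
        b2 -= lr * err

loss = sum((forward(x)[0] - y) ** 2 for x, y in data) / len(data)
print(f"mean squared error after training: {loss:.6f}")
```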

  4. Performance comparison of machine learning algorithms and number of independent components used in fMRI decoding of belief vs. disbelief.

    PubMed

    Douglas, P K; Harris, Sam; Yuille, Alan; Cohen, Mark S

    2011-05-15

    Machine learning (ML) has become a popular tool for mining functional neuroimaging data, and there are now hopes of performing such analyses efficiently in real-time. Towards this goal, we compared accuracy of six different ML algorithms applied to neuroimaging data of persons engaged in a bivariate task, asserting their belief or disbelief of a variety of propositional statements. We performed unsupervised dimension reduction and automated feature extraction using independent component (IC) analysis and extracted IC time courses. Optimization of classification hyperparameters across each classifier occurred prior to assessment. Maximum accuracy was achieved at 92% for Random Forest, followed by 91% for AdaBoost, 89% for Naïve Bayes, 87% for a J48 decision tree, 86% for K*, and 84% for support vector machine. For real-time decoding applications, finding a parsimonious subset of diagnostic ICs might be useful. We used a forward search technique to sequentially add ranked ICs to the feature subspace. For the current data set, we determined that approximately six ICs represented a meaningful basis set for classification. We then projected these six IC spatial maps forward onto a later scanning session within subject. We then applied the optimized ML algorithms to these new data instances, and found that classification accuracy results were reproducible. Additionally, we compared our classification method to our previously published general linear model results on this same data set. The highest ranked IC spatial maps show similarity to brain regions associated with contrasts for belief > disbelief, and disbelief < belief. Copyright © 2010 Elsevier Inc. All rights reserved.
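
    The forward search used above sequentially adds ranked ICs to the feature subspace. A generic sketch of that greedy procedure, with a stand-in scoring function (in the study the score would be cross-validated classification accuracy on the IC time courses; the gains below are invented):

```python
def forward_search(ranked_features, score_fn):
    """Greedily add ranked features; stop when the score stops improving."""
    subset, best = [], float("-inf")
    for f in ranked_features:
        s = score_fn(subset + [f])
        if s > best:
            subset.append(f)
            best = s
        else:
            break           # adding this feature did not help; stop here
    return subset, best

# Toy score: diminishing, then negative, returns after three components.
gains = {"IC1": 0.40, "IC2": 0.25, "IC3": 0.15, "IC4": -0.05, "IC5": -0.02}
score = lambda fs: sum(gains[f] for f in fs)

print(forward_search(["IC1", "IC2", "IC3", "IC4", "IC5"], score))
```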

  5. Enhancement of plant metabolite fingerprinting by machine learning.

    PubMed

    Scott, Ian M; Vermeer, Cornelia P; Liakata, Maria; Corol, Delia I; Ward, Jane L; Lin, Wanchang; Johnson, Helen E; Whitehead, Lynne; Kular, Baldeep; Baker, John M; Walsh, Sean; Dave, Anuja; Larson, Tony R; Graham, Ian A; Wang, Trevor L; King, Ross D; Draper, John; Beale, Michael H

    2010-08-01

    Metabolite fingerprinting of Arabidopsis (Arabidopsis thaliana) mutants with known or predicted metabolic lesions was performed by (1)H-nuclear magnetic resonance, Fourier transform infrared, and flow injection electrospray-mass spectrometry. Fingerprinting enabled processing of five times more plants than conventional chromatographic profiling and was competitive for discriminating mutants, other than those affected in only low-abundance metabolites. Despite their rapidity and complexity, fingerprints yielded metabolomic insights (e.g. that effects of single lesions were usually not confined to individual pathways). Among fingerprint techniques, (1)H-nuclear magnetic resonance discriminated the most mutant phenotypes from the wild type and Fourier transform infrared discriminated the fewest. To maximize information from fingerprints, data analysis was crucial. One-third of distinctive phenotypes might have been overlooked had data models been confined to principal component analysis score plots. Among several methods tested, machine learning (ML) algorithms, namely support vector machine or random forest (RF) classifiers, were unsurpassed for phenotype discrimination. Support vector machines were often the best performing classifiers, but RFs yielded some particularly informative measures. First, RFs estimated margins between mutant phenotypes, whose relations could then be visualized by Sammon mapping or hierarchical clustering. Second, RFs provided importance scores for the features within fingerprints that discriminated mutants. These scores correlated with analysis of variance F values (as did Kruskal-Wallis tests, true- and false-positive measures, mutual information, and the Relief feature selection algorithm). ML classifiers, as models trained on one data set to predict another, were ideal for focused metabolomic queries, such as the distinctiveness and consistency of mutant phenotypes. 
Accessible software for use of ML in plant physiology is highlighted.

  6. Prediction of revascularization after myocardial perfusion SPECT by machine learning in a large population.

    PubMed

    Arsanjani, Reza; Dey, Damini; Khachatryan, Tigran; Shalev, Aryeh; Hayes, Sean W; Fish, Mathews; Nakanishi, Rine; Germano, Guido; Berman, Daniel S; Slomka, Piotr

    2015-10-01

    We aimed to investigate if early revascularization in patients with suspected coronary artery disease can be effectively predicted by integrating clinical data and quantitative image features derived from perfusion SPECT (MPS) by a machine learning (ML) approach. 713 rest (201)Thallium/stress (99m)Technetium MPS studies with correlating invasive angiography and 372 revascularization events (275 PCI/97 CABG) within 90 days after MPS (91% within 30 days) were considered. Transient ischemic dilation, stress combined supine/prone total perfusion deficit (TPD), supine rest and stress TPD, exercise ejection fraction, and end-systolic volume, along with clinical parameters including patient gender, history of hypertension and diabetes mellitus, ST-depression on baseline ECG, ECG and clinical response during stress, and post-ECG probability, were used as inputs to a boosted ensemble ML algorithm (LogitBoost) to predict revascularization events. These features were selected using an automated feature selection algorithm from all available clinical and quantitative data (33 parameters). Tenfold cross-validation was utilized to train and test the prediction model. The prediction of revascularization by the ML algorithm was compared to standalone measures of perfusion and visual analysis by two experienced readers utilizing all imaging, quantitative, and clinical data. The sensitivity of machine learning (ML) (73.6% ± 4.3%) for prediction of revascularization was similar to one reader (73.9% ± 4.6%) and standalone measures of perfusion (75.5% ± 4.5%). The specificity of ML (74.7% ± 4.2%) was also better than both expert readers (67.2% ± 4.9% and 66.0% ± 5.0%, P < .05), but was similar to ischemic TPD (68.3% ± 4.9%, P < .05). The receiver operating characteristic area under the curve for ML (0.81 ± 0.02) was similar to reader 1 (0.81 ± 0.02) but superior to reader 2 (0.72 ± 0.02, P < .01) and the standalone measure of perfusion (0.77 ± 0.02, P < .01).
The ML approach is comparable to or better than experienced readers in prediction of early revascularization after MPS, and is significantly better than standalone measures of perfusion derived from MPS.

  7. The influence of negative training set size on machine learning-based virtual screening.

    PubMed

    Kurczab, Rafał; Smusz, Sabina; Bojarski, Andrzej J

    2014-01-01

    The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. The impact of this rather neglected aspect of machine learning methods application was examined for sets containing a fixed number of positive and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluating parameters of ML methods in simulated virtual screening experiments. In a majority of cases, substantial increases in precision and MCC were observed in conjunction with some decreases in hit recall. The analysis of dynamics of those variations let us recommend an optimal composition of training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, Ibk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with SMO or Random Forest algorithms. The Naïve Bayes models appeared to be hardly sensitive to changes in the number of negative instances in the training set. In conclusion, the ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it might significantly influence the performance of a particular classifier. What is more, the optimization of negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening.
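
    The evaluation above tracks precision, hit recall, and MCC as the negative set grows. All three follow directly from the confusion matrix; the screening counts below are hypothetical, chosen only to show the computation:

```python
import math

def precision(tp, fp):
    """Fraction of predicted actives that are truly active."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of true actives recovered (hit recall)."""
    return tp / (tp + fn)

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient; 0.0 when undefined."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical screen: 80 actives found, 20 missed, 30 false hits, 870 true rejects.
tp, tn, fp, fn = 80, 870, 30, 20
print(round(precision(tp, fp), 3),
      round(recall(tp, fn), 3),
      round(mcc(tp, tn, fp, fn), 3))
```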

  8. The influence of negative training set size on machine learning-based virtual screening

    PubMed Central

    2014-01-01

    Background The paper presents a thorough analysis of the influence of the number of negative training examples on the performance of machine learning methods. Results The impact of this rather neglected aspect of machine learning methods application was examined for sets containing a fixed number of positive and a varying number of negative examples randomly selected from the ZINC database. An increase in the ratio of positive to negative training instances was found to greatly influence most of the investigated evaluating parameters of ML methods in simulated virtual screening experiments. In a majority of cases, substantial increases in precision and MCC were observed in conjunction with some decreases in hit recall. The analysis of dynamics of those variations let us recommend an optimal composition of training data. The study was performed on several protein targets, 5 machine learning algorithms (SMO, Naïve Bayes, Ibk, J48 and Random Forest) and 2 types of molecular fingerprints (MACCS and CDK FP). The most effective classification was provided by the combination of CDK FP with SMO or Random Forest algorithms. The Naïve Bayes models appeared to be hardly sensitive to changes in the number of negative instances in the training set. Conclusions In conclusion, the ratio of positive to negative training instances should be taken into account during the preparation of machine learning experiments, as it might significantly influence the performance of a particular classifier. What is more, the optimization of negative training set size can be applied as a boosting-like approach in machine learning-based virtual screening. PMID:24976867

  9. Semi-local machine-learned kinetic energy density functional with third-order gradients of electron density

    NASA Astrophysics Data System (ADS)

    Seino, Junji; Kageyama, Ryo; Fujinami, Mikito; Ikabata, Yasuhiro; Nakai, Hiromi

    2018-06-01

    A semi-local kinetic energy density functional (KEDF) was constructed based on machine learning (ML). The present scheme adopts electron densities and their gradients up to third-order as the explanatory variables for ML and the Kohn-Sham (KS) kinetic energy density as the response variable in atoms and molecules. Numerical assessments of the present scheme were performed in atomic and molecular systems, including first- and second-period elements. The results of 37 conventional KEDFs with explicit formulae were also compared with those of the ML KEDF with an implicit formula. The inclusion of the higher order gradients reduces the deviation of the total kinetic energies from the KS calculations in a stepwise manner. Furthermore, our scheme with the third-order gradient resulted in the closest kinetic energies to the KS calculations out of the presented functionals.

  10. A Machine Learning Approach to Predicted Bathymetry

    NASA Astrophysics Data System (ADS)

    Wood, W. T.; Elmore, P. A.; Petry, F.

    2017-12-01

    Recent and on-going efforts have shown how machine learning (ML) techniques, incorporating more, and more disparate, data than can be interpreted manually, can predict seafloor properties, with uncertainty, where they have not been measured directly. We examine here a ML approach to predicted bathymetry. Our approach employs a paradigm of global bathymetry as an integral component of global geology. From a marine geology and geophysics perspective, the bathymetry is the thickness of one layer in an ensemble of layers that inter-relate to varying extents vertically and geospatially. The nature of the multidimensional relationships in these layers between bathymetry, gravity, magnetic field, age, and many other global measures is typically geospatially dependent and non-linear. The advantage of using ML is that these relationships need not be stated explicitly, nor do they need to be approximated with a transfer function - the machine learns them via the data. Fundamentally, ML operates by brute-force searching for multidimensional correlations between desired, but sparsely known, data values (in this case water depth) and a multitude of (geologic) predictors. Predictors include quantities known extensively, such as remotely sensed measurements (e.g. gravity and magnetics), distance from spreading ridge or trench, etc., and spatial statistics based on these quantities. Estimating bathymetry from an approximate transfer function is inherently model- as well as data-limited - complex relationships are explicitly ruled out. ML is a purely data-driven approach, so only the extent and quality of the available observations limit prediction accuracy. This allows for a system in which new data, of a wide variety of types, can be quickly and easily assimilated into updated bathymetry predictions with quantitative posterior uncertainties.

  11. Machine-learning scoring functions for identifying native poses of ligands docked to known and novel proteins.

    PubMed

    Ashtawy, Hossam M; Mahapatra, Nihar R

    2015-01-01

    Molecular docking is a widely-employed method in structure-based drug design. An essential component of molecular docking programs is a scoring function (SF) that can be used to identify the most stable binding pose of a ligand, when bound to a receptor protein, from among a large set of candidate poses. Despite intense efforts in developing conventional SFs, which are either force-field based, knowledge-based, or empirical, their limited docking power (or ability to successfully identify the correct pose) has been a major impediment to cost-effective drug discovery. Therefore, in this work, we explore a range of novel SFs employing different machine-learning (ML) approaches in conjunction with physicochemical and geometrical features characterizing protein-ligand complexes to predict the native or near-native pose of a ligand docked to a receptor protein's binding site. We assess the docking accuracies of these new ML SFs as well as those of conventional SFs in the context of the 2007 PDBbind benchmark dataset on both diverse and homogeneous (protein-family-specific) test sets. Further, we perform a systematic analysis of the performance of the proposed SFs in identifying native poses of ligands that are docked to novel protein targets. We find that the best performing ML SF has a success rate of 80% in identifying poses that are within 1 Å root-mean-square deviation from the native poses of 65 different protein families. This is in comparison to a success rate of only 70% achieved by the best conventional SF, ASP, employed in the commercial docking software GOLD. In addition, the proposed ML SFs perform better on novel proteins that they were never trained on before. We also observed steady gains in the performance of these scoring functions as the training set size and number of features were increased by considering more protein-ligand complexes and/or more computationally-generated poses for each complex.
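
    Docking power above is scored by whether a candidate pose falls within 1 Å root-mean-square deviation (RMSD) of the native pose. A minimal sketch of that success criterion on toy 3-D coordinates (real benchmarks compute RMSD over all heavy atoms of each ligand):

```python
import math

def rmsd(pose_a, pose_b):
    """Root-mean-square deviation between two equal-length coordinate lists."""
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(pose_a, pose_b))
    return math.sqrt(sq / len(pose_a))

native = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.0, 0.0)]
candidates = [
    [(0.1, 0.0, 0.0), (1.6, 0.1, 0.0), (3.0, 0.0, 0.1)],  # near-native pose
    [(2.0, 2.0, 0.0), (3.5, 2.0, 0.0), (5.0, 2.0, 0.0)],  # shifted ~2.8 Å
]

# A pose "succeeds" if its RMSD to the native pose is within 1 Å.
hits = sum(rmsd(c, native) <= 1.0 for c in candidates)
print(f"success rate: {hits}/{len(candidates)}")
```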

  12. Machine Learning Algorithms Utilizing Quantitative CT Features May Predict Eventual Onset of Bronchiolitis Obliterans Syndrome After Lung Transplantation.

    PubMed

    Barbosa, Eduardo J Mortani; Lanclus, Maarten; Vos, Wim; Van Holsbeke, Cedric; De Backer, William; De Backer, Jan; Lee, James

    2018-02-19

    Long-term survival after lung transplantation (LTx) is limited by bronchiolitis obliterans syndrome (BOS), defined as a sustained decline in forced expiratory volume in the first second (FEV1) not explained by other causes. We assessed whether machine learning (ML) utilizing quantitative computed tomography (qCT) metrics can predict eventual development of BOS. Paired inspiratory-expiratory CT scans of 71 patients who underwent LTx were analyzed retrospectively (BOS [n = 41] versus non-BOS [n = 30]), using at least two different time points. The BOS cohort experienced a reduction in FEV1 of >10% compared to baseline FEV1 post LTx. Multifactor analysis correlated declining FEV1 with qCT features linked to acute inflammation or BOS onset. Student t test and ML were applied on baseline qCT features to identify lung transplant patients at baseline who eventually developed BOS. The FEV1 decline in the BOS cohort correlated with an increase in the lung volume (P = .027) and in the central airway volume at functional residual capacity (P = .018), not observed in non-BOS patients, whereas the non-BOS cohort experienced a decrease in the central airway volume at total lung capacity with declining FEV1 (P = .039). Twenty-three baseline qCT parameters could significantly distinguish between non-BOS patients and eventual BOS developers (P < .05), whereas no pulmonary function testing parameters could. Using ML methods (support vector machine), we could identify BOS developers at baseline with an accuracy of 85%, using only three qCT parameters. ML utilizing qCT could discern distinct mechanisms driving FEV1 decline in BOS and non-BOS LTx patients and predict eventual onset of BOS. This approach may become useful to optimize management of LTx patients. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  13. Machine-learning scoring functions for identifying native poses of ligands docked to known and novel proteins

    PubMed Central

    2015-01-01

    Background Molecular docking is a widely-employed method in structure-based drug design. An essential component of molecular docking programs is a scoring function (SF) that can be used to identify the most stable binding pose of a ligand, when bound to a receptor protein, from among a large set of candidate poses. Despite intense efforts in developing conventional SFs, which are either force-field based, knowledge-based, or empirical, their limited docking power (or ability to successfully identify the correct pose) has been a major impediment to cost-effective drug discovery. Therefore, in this work, we explore a range of novel SFs employing different machine-learning (ML) approaches in conjunction with physicochemical and geometrical features characterizing protein-ligand complexes to predict the native or near-native pose of a ligand docked to a receptor protein's binding site. We assess the docking accuracies of these new ML SFs as well as those of conventional SFs in the context of the 2007 PDBbind benchmark dataset on both diverse and homogeneous (protein-family-specific) test sets. Further, we perform a systematic analysis of the performance of the proposed SFs in identifying native poses of ligands that are docked to novel protein targets. Results and conclusion We find that the best performing ML SF has a success rate of 80% in identifying poses that are within 1 Å root-mean-square deviation from the native poses of 65 different protein families. This is in comparison to a success rate of only 70% achieved by the best conventional SF, ASP, employed in the commercial docking software GOLD. In addition, the proposed ML SFs perform better on novel proteins that they were never trained on before. We also observed steady gains in the performance of these scoring functions as the training set size and number of features were increased by considering more protein-ligand complexes and/or more computationally-generated poses for each complex. PMID:25916860

  14. Exploration of Machine Learning Approaches to Predict Pavement Performance

    DOT National Transportation Integrated Search

    2018-03-23

    Machine learning (ML) techniques were used to model and predict pavement condition index (PCI) for various pavement types using a variety of input variables. The primary objective of this research was to develop and assess PCI predictive models for t...

  15. Integrating Natural Language Processing and Machine Learning Algorithms to Categorize Oncologic Response in Radiology Reports.

    PubMed

    Chen, Po-Hao; Zafar, Hanna; Galperin-Aizenberg, Maya; Cook, Tessa

    2018-04-01

    A significant volume of medical data remains unstructured. Natural language processing (NLP) and machine learning (ML) techniques have been shown to successfully extract insights from radiology reports. However, the codependent effects of NLP and ML in this context have not been well studied. Between April 1, 2015 and November 1, 2016, 9418 cross-sectional abdomen/pelvis CT and MR examinations containing our internal structured reporting element for cancer were separated into four categories: Progression, Stable Disease, Improvement, or No Cancer. We combined each of three NLP techniques with five ML algorithms to predict the assigned label using the unstructured report text and compared the performance of each combination. The three NLP algorithms included term frequency-inverse document frequency (TF-IDF), term frequency weighting (TF), and 16-bit feature hashing. The ML algorithms included logistic regression (LR), random decision forest (RDF), one-vs-all support vector machine (SVM), one-vs-all Bayes point machine (BPM), and fully connected neural network (NN). The best-performing NLP model consisted of tokenized unigrams and bigrams with TF-IDF. Increasing N-gram length yielded little to no added benefit for most ML algorithms. With all parameters optimized, SVM had the best performance on the test dataset, with 90.6% average accuracy and an F score of 0.813. The interplay between ML and NLP algorithms and their effect on interpretation accuracy is complex. The best accuracy is achieved when both algorithms are optimized concurrently.
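One of the NLP/ML combinations described above, TF-IDF over unigrams and bigrams feeding a one-vs-rest linear SVM, can be sketched with scikit-learn. The toy reports and labels below are invented placeholders, not data from the study.

```python
# Hedged sketch: TF-IDF (unigrams + bigrams) feeding a one-vs-rest linear SVM,
# analogous to the best-performing NLP/ML combination in the abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Invented stand-ins for radiology report text and oncologic-response labels.
reports = [
    "interval increase in hepatic metastases",
    "no significant change in target lesions",
    "decrease in size of pancreatic mass",
    "no evidence of malignancy",
]
labels = ["Progression", "Stable Disease", "Improvement", "No Cancer"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # tokenized unigrams and bigrams
    LinearSVC(),                          # one-vs-rest linear SVM
)
model.fit(reports, labels)
pred = model.predict(["stable appearance of target lesions"])[0]
```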

  16. Reliable Multi-Label Learning via Conformal Predictor and Random Forest for Syndrome Differentiation of Chronic Fatigue in Traditional Chinese Medicine

    PubMed Central

    Wang, Huazhen; Liu, Xin; Lv, Bing; Yang, Fan; Hong, Yanzhu

    2014-01-01

    Objective The etiology, pathophysiology, nomenclature and diagnostic criteria of Chronic Fatigue (CF) remain unclear in the medical community. Traditional Chinese medicine (TCM) adopts a unique diagnostic method, namely ‘bian zheng lun zhi’ or syndrome differentiation, to diagnose CF with a set of syndrome factors, which can be regarded as a Multi-Label Learning (MLL) problem in the machine learning literature. To obtain an effective and reliable diagnostic tool, we use Conformal Predictor (CP), Random Forest (RF) and the Problem Transformation method (PT) for the syndrome differentiation of CF. Methods and Materials In this work, using the PT method, CP-RF is extended to handle the MLL problem. CP-RF applies RF to measure the confidence level (p-value) of each label being the true label, and then selects multiple labels whose p-values are larger than the pre-defined significance level as the region prediction. In this paper, we compare the proposed CP-RF with typical CP-NBC (Naïve Bayes Classifier), CP-KNN (K-Nearest Neighbors) and ML-KNN on a CF dataset, which consists of 736 cases. Specifically, 95 symptoms are used to identify CF, and four syndrome factors are employed in the syndrome differentiation, including ‘spleen deficiency’, ‘heart deficiency’, ‘liver stagnation’ and ‘qi deficiency’. Results CP-RF demonstrates outstanding performance beyond CP-NBC, CP-KNN and ML-KNN under the general metrics of subset accuracy, hamming loss, one-error, coverage, ranking loss and average precision. Furthermore, the performance of CP-RF remains steady across confidence levels from 80% to 100%, which indicates its robustness to the threshold determination. In addition, the confidence evaluation provided by CP is valid and well-calibrated. Conclusion CP-RF not only offers outstanding performance but also provides valid confidence evaluation for CF syndrome differentiation. It should be readily applicable for TCM practitioners and facilitate the development of an objective, effective and reliable computer-based diagnostic tool. PMID:24918430
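The CP-RF idea, reduced to one binary label via problem transformation, can be sketched as an inductive conformal predictor with a random-forest nonconformity score: each label whose p-value exceeds the significance level enters the region prediction. The data, split sizes, and significance level below are placeholders; the study's CF symptoms and syndrome labels are not reproduced.

```python
# Hedged sketch of an inductive conformal predictor over a random forest (CP-RF
# style), for a single binary label as produced by problem transformation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))
y = (X[:, 0] > 0).astype(int)        # stand-in for one syndrome-factor label

X_train, y_train = X[:200], y[:200]  # proper training set
X_cal, y_cal = X[200:], y[200:]      # calibration set

rf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_train, y_train)

def p_value(x, label):
    """Conformal p-value: share of calibration examples whose nonconformity
    (1 - RF probability of the true class) is at least that of the test case."""
    proba_cal = rf.predict_proba(X_cal)
    alpha_cal = 1 - proba_cal[np.arange(len(y_cal)), y_cal]
    alpha_new = 1 - rf.predict_proba(x.reshape(1, -1))[0, label]
    return (np.sum(alpha_cal >= alpha_new) + 1) / (len(alpha_cal) + 1)

# Region prediction at 80% confidence: keep every label with p-value > 0.20.
x_new = rng.normal(size=10)
region = [lab for lab in (0, 1) if p_value(x_new, lab) > 0.20]
```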

  17. Learners' Approaches to Solving Mathematical Tasks: Does Specialisation Matter?

    ERIC Educational Resources Information Center

    Machaba, France; Mwakapenda, Willy

    2016-01-01

    This article emerged from an analysis of learners' responses to a task presented to learners studying Mathematics and Mathematical Literacy (ML) in Gauteng, South Africa. Officially, Mathematics and ML are two separate learning areas. Learners from Grade 10 onwards are supposed to take either one or the other, but not both. This means that by…

  18. Aqueous extracts from asparagus stems prevent memory impairments in scopolamine-treated mice.

    PubMed

    Sui, Zifang; Qi, Ce; Huang, Yunxiang; Ma, Shufeng; Wang, Xinguo; Le, Guowei; Sun, Jin

    2017-04-19

    Aqueous extracts from Asparagus officinalis L. stems (AEAS) are rich in polysaccharides, gamma-amino butyric acid (GABA), and steroidal saponin. This study was designed to investigate the effects of AEAS on learning, memory, and acetylcholinesterase-related activity in a scopolamine-induced model of amnesia. Sixty ICR mice were randomly divided into 6 groups (n = 10) including the control group (CT), scopolamine group (SC), donepezil group (DON), and low, medium, and high dose groups of AEAS (LS, MS, HS; 1.6, 8, and 16 mL kg⁻¹). The results showed that 8 mL kg⁻¹ of AEAS used in this study significantly reversed scopolamine-induced cognitive impairments in mice in the novel object recognition test (P < 0.05) and the Y-maze test (P < 0.05), and also improved the latency to escape in the Morris water maze test (P < 0.05). Moreover, it significantly increased acetylcholine and inhibited acetylcholinesterase activity in the hippocampus, which was directly related to the reduction in learning and memory impairments. It also reversed the scopolamine-induced reduction in hippocampal brain-derived neurotrophic factor (BDNF) and cAMP response element-binding protein (CREB) mRNA expression. AEAS protected against scopolamine-induced memory deficits. In conclusion, AEAS protected learning and memory function in mice by enhancing the activity of the cholinergic nervous system, and increasing BDNF and CREB expression. This suggests that AEAS has the potential to prevent cognitive impairments in age-related diseases, such as Alzheimer's disease.

  19. Comparative study of three modified numerical spectrophotometric methods: An application on pharmaceutical ternary mixture of aspirin, atorvastatin and clopedogrel

    NASA Astrophysics Data System (ADS)

    Issa, Mahmoud Mohamed; Nejem, R.'afat Mahmoud; Shanab, Alaa Abu; Hegazy, Nahed Diab; Stefan-van Staden, Raluca-Ioana

    2014-07-01

    Three novel numerical methods were developed for the spectrophotometric multi-component analysis of capsules and synthetic mixtures of aspirin, atorvastatin and clopedogrel without any chemical separation. The subtraction method is based on the relationship between the difference in absorbance at four wavelengths and the corresponding concentration of analyte. In this method, the linear determination ranges were 0.8-40 μg mL⁻¹ aspirin, 0.8-30 μg mL⁻¹ atorvastatin and 0.5-30 μg mL⁻¹ clopedogrel. In the quotient method, 0.8-40 μg mL⁻¹ aspirin, 0.8-30 μg mL⁻¹ atorvastatin and 1.0-30 μg mL⁻¹ clopedogrel were determined from spectral data at the wavelength pairs that show the same ratio of absorbance for the other two species. The standard addition method was used for resolving a ternary mixture of 1.0-40 μg mL⁻¹ aspirin, 0.8-30 μg mL⁻¹ atorvastatin and 2.0-30 μg mL⁻¹ clopedogrel. The proposed methods were validated. The reproducibility and repeatability were found to be satisfactory, as evidenced by low values of relative standard deviation (<2%). Recovery was found to be in the range 99.6-100.8%. By adopting these methods, the time taken for analysis was reduced, as they involve very limited steps. The developed methods were applied to the simultaneous analysis of aspirin, atorvastatin and clopedogrel in capsule dosage forms, and the results were in good concordance with an alternative liquid chromatography method.

  20. Distributed Stress Sensing and Non-Destructive Tests Using Mechanoluminescence Materials

    NASA Astrophysics Data System (ADS)

    Rahimi, Mohammad Reza

    Rapid aging of infrastructure systems is currently pervasive in the US, and the anticipated cost until 2020 for rehabilitation of aging lifelines will reach 3.6 trillion US dollars (ASCE 2013). Reliable condition or serviceability assessment is critically important in decision-making for economic and timely maintenance of infrastructure systems. Advanced sensors and nondestructive test (NDT) methods are the key technologies for structural health monitoring (SHM) applications that can provide information on the current state of structures. There are many traditional sensors and NDT methods, for example, strain gauges, ultrasound, radiography and other X-ray techniques, for detecting defects in infrastructure. Considering that civil infrastructure is typically large-scale and exhibits complex behavior, estimation of structural conditions by local sensing and NDT methods is a challenging task. Non-contact and distributed (or full-field) sensing and NDT methods are therefore desirable, as they can provide rich information on the civil infrastructure's state. Materials with the ability to emit light, especially in the visible range, are known as luminescent materials. The mechanoluminescence (ML) phenomenon is the light emission from luminescent materials in response to an induced mechanical stress. ML materials offer new opportunities for SHM in that they can directly visualize stress and crack distributions on the surface of structures through ML light emission. Although materials research on ML phenomena has advanced substantially, applications of ML sensors to full-field stress and crack visualization are still at an infant stage and have yet to become full-fledged. Moreover, practical applications of ML sensors for SHM of civil infrastructure face difficulties, since numerous challenging problems (e.g. environmental effects) arise in actual applications. 
    In order to realize a practical SHM system employing ML sensors, more research needs to be conducted on, for example, fundamental understanding of the physics of the ML phenomenon, methods for quantitative stress measurement, calibration methods for ML sensors, improvement of sensitivity, optimal manufacturing and design of ML sensors, environmental effects on the ML phenomenon (e.g. temperature), and image processing and analysis. In this research, the fundamental ML phenomena of the two most promising ML sensing materials were experimentally studied, and a methodology for full-field quantitative strain measurement was proposed, for the first time, along with a standardized calibration method. The characteristics and behavior of ML composites and thin films coated on structures have been studied under various material tests including compression, tension, pure shear, and bending. In addition, ML emission sensitivity to manufacturing parameters and experimental conditions was addressed in order to find the optimal design of the ML sensor. A phenomenological stress-optics transduction model for predicting the ML light intensity from a thin-film ML coating sensor subjected to in-plane stresses was proposed. A new full-field quantitative strain-measuring methodology using the ML thin-film sensor was developed, for the first time, in order to visualize and measure the strain field. The results from the ML sensor were compared with and verified against finite element simulation results. For NDT applications of ML sensors, experimental tests were conducted to visualize cracks on structural surfaces and detect damage in structural components. In summary, this research proposes and realizes a new distributed stress sensor and NDT method using ML sensing materials. The proposed method is experimentally validated to be effective for stress measurement and crack visualization. 
Successful completion of this research provides a leap toward a commercial light intensity-based optic sensor to be used as a new full-field stress measurement technology and NDT method.

  1. Distributed learning enhances relational memory consolidation.

    PubMed

    Litman, Leib; Davachi, Lila

    2008-09-01

    It has long been known that distributed learning (DL) provides a mnemonic advantage over massed learning (ML). However, the underlying mechanisms that drive this robust mnemonic effect remain largely unknown. In two experiments, we show that DL across a 24 hr interval does not enhance immediate memory performance but instead slows the rate of forgetting relative to ML. Furthermore, we demonstrate that this savings in forgetting is specific to relational, but not item, memory. In the context of extant theories and knowledge of memory consolidation, these results suggest that an important mechanism underlying the mnemonic benefit of DL is enhanced memory consolidation. We speculate that synaptic strengthening mechanisms supporting long-term memory consolidation may be differentially mediated by the spacing of memory reactivation. These findings have broad implications for the scientific study of episodic memory consolidation and, more generally, for educational curriculum development and policy.

  2. [Pancreatoduodenectomy: learning curve within single multi-field center].

    PubMed

    Kaprin, A D; Kostin, A A; Nikiforov, P V; Egorov, V I; Grishin, N A; Lozhkin, M V; Petrov, L O; Bykasov, S A; Sidorov, D V

    2018-01-01

    To analyze the learning curve using the immediate results of pancreatoduodenectomy at a multi-field oncology institute. Over the period 2010-2016, 120 pancreatoduodenal resections were consecutively performed at the Abdominal Oncology Department of the Herzen Moscow Oncology Research Institute. All patients were divided into two groups: the first 60 procedures (group A) and the subsequent 60 operations (group B). Notably, the first 60 operations were performed within the first 4.5 years of the study period, and the next 60 operations within the remaining 2.5 years. Learning curves showed higher intraoperative blood loss (1100 ml vs 725 ml), surgery time (589 min vs 513 min) and postoperative hospital stay (15 days vs 13 days) in group A, followed by gradual improvement of these values in group B. The incidence of negative resection margins (R0) also improved significantly in the last 60 operations (70% vs 92%). Although pancreatoduodenectomy is one of the most difficult surgical interventions in abdominal surgery, the learning curve will differ from one surgeon to another.

  3. Machine Learning Force Field Parameters from Ab Initio Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ying; Li, Hui; Pickard, Frank C.

    Machine learning (ML) techniques with a genetic algorithm (GA) have been applied to determine polarizable force field parameters using only ab initio data from quantum mechanics (QM) calculations of molecular clusters at the MP2/6-31G(d,p), DFMP2(fc)/jul-cc-pVDZ, and DFMP2(fc)/jul-cc-pVTZ levels to predict experimental condensed-phase properties (i.e., density and heat of vaporization). The performance of this ML/GA approach is demonstrated on 4943 dimer electrostatic potentials and 1250 cluster interaction energies for methanol. Excellent agreement between the training data set from QM calculations and the optimized force field model was achieved. The results were further improved by introducing an offset factor during the machine learning process to compensate for the discrepancy between the QM calculated energy and the energy reproduced by the optimized force field, while maintaining the local “shape” of the QM energy surface. Throughout the machine learning process, experimental observables were not involved in the objective function, but were only used for model validation. The best model, optimized from the QM data at the DFMP2(fc)/jul-cc-pVTZ level, appears to perform even better than the original AMOEBA force field (amoeba09.prm), which was optimized empirically to match liquid properties. The present effort shows the possibility of using machine learning techniques to develop a descriptive polarizable force field using only QM data. The ML/GA strategy to optimize force field parameters described here could easily be extended to other molecular systems.

  4. Comparison of four machine learning algorithms for their applicability in satellite-based optical rainfall retrievals

    NASA Astrophysics Data System (ADS)

    Meyer, Hanna; Kühnlein, Meike; Appelhans, Tim; Nauss, Thomas

    2016-03-01

    Machine learning (ML) algorithms have been successfully demonstrated to be valuable tools in satellite-based rainfall retrievals, showing the practicability of using ML algorithms when faced with high-dimensional and complex data. Moreover, recent developments in parallel computing with ML present new possibilities for training and prediction speed and therefore make their usage in real-time systems feasible. This study compares four ML algorithms - random forests (RF), neural networks (NNET), averaged neural networks (AVNNET) and support vector machines (SVM) - for rainfall area detection and rainfall rate assignment using MSG SEVIRI data over Germany. Satellite-based proxies for cloud top height, cloud top temperature, cloud phase and cloud water path serve as predictor variables. The results indicate an overestimation of rainfall area delineation regardless of the ML algorithm (averaged bias = 1.8) but a high probability of detection ranging from 81% (SVM) to 85% (NNET). On a 24-hour basis, the performance of the rainfall rate assignment yielded R2 values between 0.39 (SVM) and 0.44 (AVNNET). Though the differences in the algorithms' performance were rather small, NNET and AVNNET were identified as the most suitable algorithms. On average, they demonstrated the best performance in rainfall area delineation as well as in rainfall rate assignment. NNET's computational speed is an additional advantage in work with large datasets such as in remote sensing based rainfall retrievals. However, since no single algorithm performed considerably better than the others, we conclude that further research into providing suitable predictors for rainfall is of greater necessity than optimization through the choice of the ML algorithm.
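A comparison of this kind can be sketched by cross-validating regressors analogous to the study's RF, NNET, and SVM candidates on the same predictors and scoring them with R2. The synthetic predictors below merely stand in for the SEVIRI-derived cloud properties; none of the numbers correspond to the study's results.

```python
# Hedged sketch: cross-validated R2 comparison of regressors analogous to the
# RF / NNET / SVM candidates in the abstract, on synthetic placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
# Four synthetic predictors standing in for cloud top height/temperature,
# cloud phase, and cloud water path.
X = rng.normal(size=(400, 4))
y = X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.3, size=400)  # toy rainfall rate

models = {
    "RF": RandomForestRegressor(n_estimators=100, random_state=2),
    "NNET": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=2),
    "SVM": SVR(),
}
r2 = {name: cross_val_score(m, X, y, cv=5, scoring="r2").mean()
      for name, m in models.items()}
```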

  5. DISCOVERY OF BRIGHT GALACTIC R CORONAE BOREALIS AND DY PERSEI VARIABLES: RARE GEMS MINED FROM ACVS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, A. A.; Richards, J. W.; Bloom, J. S.

    2012-08-20

    We present the results of a machine-learning (ML)-based search for new R Coronae Borealis (RCB) stars and DY Persei-like stars (DYPers) in the Galaxy using cataloged light curves from the All-Sky Automated Survey (ASAS) Catalog of Variable Stars (ACVS). RCB stars, a rare class of hydrogen-deficient carbon-rich supergiants, are of great interest owing to the insights they can provide on the late stages of stellar evolution. DYPers are possibly the low-temperature, low-luminosity analogs to the RCB phenomenon, though additional examples are needed to fully establish this connection. While RCB stars and DYPers are traditionally identified by epochs of extreme dimming that occur without regularity, the ML search framework more fully captures the richness and diversity of their photometric behavior. We demonstrate that our ML method can use newly discovered RCB stars to identify additional candidates within the same data set. Our search yields 15 candidates that we consider likely RCB stars/DYPers: new spectroscopic observations confirm that four of these candidates are RCB stars and four are DYPers. Our discovery of four new DYPers increases the number of known Galactic DYPers from two to six; noteworthy is that one of the new DYPers has a measured parallax and is m ≈ 7 mag, making it the brightest known DYPer to date. Future observations of these new DYPers should prove instrumental in establishing the RCB connection. We consider these results, derived from a machine-learned probabilistic classification catalog, an important proof-of-concept for the efficient discovery of rare sources with time-domain surveys.

  6. Enhancement of Plant Metabolite Fingerprinting by Machine Learning

    PubMed Central

    Scott, Ian M.; Vermeer, Cornelia P.; Liakata, Maria; Corol, Delia I.; Ward, Jane L.; Lin, Wanchang; Johnson, Helen E.; Whitehead, Lynne; Kular, Baldeep; Baker, John M.; Walsh, Sean; Dave, Anuja; Larson, Tony R.; Graham, Ian A.; Wang, Trevor L.; King, Ross D.; Draper, John; Beale, Michael H.

    2010-01-01

    Metabolite fingerprinting of Arabidopsis (Arabidopsis thaliana) mutants with known or predicted metabolic lesions was performed by 1H-nuclear magnetic resonance, Fourier transform infrared, and flow injection electrospray-mass spectrometry. Fingerprinting enabled processing of five times more plants than conventional chromatographic profiling and was competitive for discriminating mutants, other than those affected in only low-abundance metabolites. Despite their rapidity and complexity, fingerprints yielded metabolomic insights (e.g. that effects of single lesions were usually not confined to individual pathways). Among fingerprint techniques, 1H-nuclear magnetic resonance discriminated the most mutant phenotypes from the wild type and Fourier transform infrared discriminated the fewest. To maximize information from fingerprints, data analysis was crucial. One-third of distinctive phenotypes might have been overlooked had data models been confined to principal component analysis score plots. Among several methods tested, machine learning (ML) algorithms, namely support vector machine or random forest (RF) classifiers, were unsurpassed for phenotype discrimination. Support vector machines were often the best performing classifiers, but RFs yielded some particularly informative measures. First, RFs estimated margins between mutant phenotypes, whose relations could then be visualized by Sammon mapping or hierarchical clustering. Second, RFs provided importance scores for the features within fingerprints that discriminated mutants. These scores correlated with analysis of variance F values (as did Kruskal-Wallis tests, true- and false-positive measures, mutual information, and the Relief feature selection algorithm). ML classifiers, as models trained on one data set to predict another, were ideal for focused metabolomic queries, such as the distinctiveness and consistency of mutant phenotypes. Accessible software for use of ML in plant physiology is highlighted. 
PMID:20566707

  7. Resonance scattering spectra of micrococcus lysodeikticus and its application to assay of lysozyme activity.

    PubMed

    Jiang, Zhi-Liang; Huang, Guo-Xia

    2007-02-01

    Several methods, including turbidimetric and colorimetric methods, have been reported for the detection of lysozyme activity. However, there is no report of a resonance scattering spectral (RSS) assay, which is based on the catalytic effect of lysozyme on the hydrolysis of Micrococcus lysodeikticus (ML) and its resonance scattering effect. ML has 5 resonance scattering peaks at 360, 400, 420, 470, and 520 nm, with the strongest one at 470 nm. The concentration of ML in the range of 2.0×10⁶-9.3×10⁸ cells/ml is proportional to the RS intensity at 470 nm (I(470 nm)). A new catalytic RSS method has been proposed for 0.24-40.0 U/ml (or 0.012-2.0 μg/ml) lysozyme activity, with a detection limit (3σ) of 0.014 U/ml (or 0.0007 μg/ml). Saliva samples were assayed by this method, and the results are in agreement with those of the turbidimetric method. The slope, intercept and correlation coefficient of the regression analysis of the 2 assays were 0.9665, -87.50, and 0.9973, respectively. The assay has high sensitivity and simplicity.
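The calibration step implied above, where intensity at 470 nm is proportional to concentration, amounts to fitting a straight line and inverting it to read off unknowns. The calibration points below are invented placeholders, not the paper's data (the slope and intercept the abstract reports describe the comparison of the two assays, not this curve).

```python
# Hedged sketch of a linear calibration curve and its inversion, as used in
# intensity-based assays. All numbers are invented placeholders.
import numpy as np

activity = np.array([0.24, 1.0, 5.0, 10.0, 20.0, 40.0])  # U/ml (placeholder)
i470 = 50.0 + 3.2 * activity                             # placeholder intensities

slope, intercept = np.polyfit(activity, i470, 1)         # least-squares line

def activity_from_intensity(i):
    """Invert the calibration line to estimate activity from intensity."""
    return (i - intercept) / slope

# Reading an unknown sample off the fitted line.
unknown = activity_from_intensity(50.0 + 3.2 * 12.0)     # recovers ~12 U/ml
```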

  8. Intelligent earthquake data processing for global adjoint tomography

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Hill, J.; Li, T.; Lei, W.; Ruan, Y.; Lefebvre, M. P.; Tromp, J.

    2016-12-01

    Due to the increased computational capability afforded by modern and future computing architectures, the seismology community is demanding a more comprehensive understanding of the full waveform information from the recorded earthquake seismograms. Global waveform tomography is a complex workflow that matches observed seismic data with synthesized seismograms by iteratively updating the earth model parameters based on the adjoint state method. This methodology allows us to compute a very accurate model of the earth's interior. The synthetic data is simulated by solving the wave equation in the entire globe using a spectral-element method. In order to ensure the inversion accuracy and stability, both the synthesized and observed seismograms must be carefully pre-processed. Because the scale of the inversion problem is extremely large and there is a very large volume of data to both be read and written, an efficient and reliable pre-processing workflow must be developed. We are investigating intelligent algorithms based on a machine-learning (ML) framework that will automatically tune parameters for the data processing chain. One straightforward application of ML in data processing is to classify all possible misfit calculation windows into usable and unusable ones, based on intelligent ML models such as neural networks, support vector machines or principal component analysis. The intelligent earthquake data processing framework will enable the seismology community to compute the global waveform tomography using seismic data from an arbitrarily large number of earthquake events in the fastest, most efficient way.

  9. Selected South African Grade 10 Learners' Perceptions of Two Learning Areas: Mathematical Literacy and Life Orientation

    ERIC Educational Resources Information Center

    Geldenhuys, J. L.; Kruger, C.; Moss, J.

    2013-01-01

    In 2006, Mathematical Literacy (ML) and Life Orientation (LO) were introduced into South Africa's Grade 10 national curriculum. The implementation of the ML programme in schools stemmed from a need to improve the level of numeracy of the general population of South Africa, while LO was introduced to equip learners to solve problems and to make…

  10. RuleML-Based Learning Object Interoperability on the Semantic Web

    ERIC Educational Resources Information Center

    Biletskiy, Yevgen; Boley, Harold; Ranganathan, Girish R.

    2008-01-01

    Purpose: The present paper aims to describe an approach for building the Semantic Web rules for interoperation between heterogeneous learning objects, namely course outlines from different universities, and one of the rule uses: identifying (in)compatibilities between course descriptions. Design/methodology/approach: As proof of concept, a rule…

  11. Bivariate versus multivariate smart spectrophotometric calibration methods for the simultaneous determination of a quaternary mixture of mosapride, pantoprazole and their degradation products.

    PubMed

    Hegazy, M A; Yehia, A M; Moustafa, A A

    2013-05-01

    The ability of bivariate and multivariate spectrophotometric methods was demonstrated in the resolution of a quaternary mixture of mosapride, pantoprazole and their degradation products. The bivariate calibrations include the bivariate spectrophotometric method (BSM) and the H-point standard addition method (HPSAM), which were able to determine the two drugs simultaneously, but not in the presence of their degradation products. The results showed that simultaneous determinations could be performed in the concentration ranges of 5.0-50.0 μg/ml for mosapride and 10.0-40.0 μg/ml for pantoprazole by the bivariate spectrophotometric method, and in the concentration ranges of 5.0-45.0 μg/ml for both drugs by the H-point standard addition method. Moreover, the applied multivariate calibration methods were able to determine mosapride, pantoprazole and their degradation products using concentration residuals augmented classical least squares (CRACLS) and partial least squares (PLS). The proposed multivariate methods were applied to 17 synthetic samples in the concentration ranges of 3.0-12.0 μg/ml mosapride, 8.0-32.0 μg/ml pantoprazole, 1.5-6.0 μg/ml mosapride degradation products and 2.0-8.0 μg/ml pantoprazole degradation products. The proposed bivariate and multivariate calibration methods were successfully applied to the determination of mosapride and pantoprazole in their pharmaceutical preparations.
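The classical-least-squares idea underlying such multivariate calibration can be sketched as solving mixture spectrum = concentrations × pure-component spectra by least squares. The spectra and concentrations below are synthetic placeholders, not the actual drug spectra, and plain CLS is shown rather than the CRACLS or PLS variants used in the study.

```python
# Hedged sketch of classical least squares (CLS) for multicomponent
# spectrophotometric resolution. All spectra/concentrations are synthetic.
import numpy as np

rng = np.random.default_rng(3)
# Pure-component spectra: 4 components (drugs + degradation products)
# measured at 60 wavelengths. Placeholder values.
K = rng.random((4, 60))
c_true = np.array([10.0, 20.0, 3.0, 5.0])  # placeholder concentrations, ug/ml

# Beer-Lambert additivity: mixture spectrum is a concentration-weighted sum
# of the pure spectra, plus small measurement noise.
mixture = c_true @ K + rng.normal(scale=1e-3, size=60)

# Recover the concentrations by least squares.
c_est, *_ = np.linalg.lstsq(K.T, mixture, rcond=None)
```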

  12. The potential for machine learning algorithms to improve and reduce the cost of 3-dimensional printing for surgical planning.

    PubMed

    Huff, Trevor J; Ludwig, Parker E; Zuniga, Jorge M

    2018-05-01

    3D-printed anatomical models play an important role in medical and research settings. The recent successes of 3D anatomical models in healthcare have led many institutions to adopt the technology. However, there remain several issues that must be addressed before it can become more widespread. Of importance are the problems of cost and time of manufacturing. Machine learning (ML) could be utilized to solve these issues by streamlining the 3D modeling process through rapid medical image segmentation and improved patient selection and image acquisition. The current challenges, potential solutions, and future directions for ML and 3D anatomical modeling in healthcare are discussed. Areas covered: This review covers research articles in the field of machine learning as related to 3D anatomical modeling. Topics discussed include automated image segmentation, cost reduction, and related time constraints. Expert commentary: ML-based segmentation of medical images could potentially improve the process of 3D anatomical modeling. However, until more research is done to validate these technologies in clinical practice, their impact on patient outcomes will remain unknown. We have the necessary computational tools to tackle the problems discussed. The difficulty now lies in our ability to collect sufficient data.

  13. Relational machine learning for electronic health record-driven phenotyping.

    PubMed

    Peissig, Peggy L; Santos Costa, Vitor; Caldwell, Michael D; Rottscheit, Carla; Berg, Richard L; Mendonca, Eneida A; Page, David

    2014-12-01

    Electronic health records (EHR) offer medical and pharmacogenomics research unprecedented opportunities to identify and classify patients at risk. EHRs are collections of highly inter-dependent records that include biological, anatomical, physiological, and behavioral observations. They comprise a patient's clinical phenome, where each patient has thousands of date-stamped records distributed across many relational tables. Development of EHR computer-based phenotyping algorithms requires time and medical insight from clinical experts, who most often can only review a small patient subset representative of the total EHR records, to identify phenotype features. In this research we evaluate whether relational machine learning (ML) using inductive logic programming (ILP) can contribute to addressing these issues as a viable approach for EHR-based phenotyping. Two relational learning ILP approaches and three well-known WEKA (Waikato Environment for Knowledge Analysis) implementations of non-relational approaches (PART, J48, and JRIP) were used to develop models for nine phenotypes. International Classification of Diseases, Ninth Revision (ICD-9) coded EHR data were used to select training cohorts for the development of each phenotypic model. Accuracy, precision, recall, F-Measure, and Area Under the Receiver Operating Characteristic (AUROC) curve statistics were measured for each phenotypic model based on independent manually verified test cohorts. A two-sided binomial distribution test (sign test) compared the five ML approaches across phenotypes for statistical significance. We developed an approach to automatically label training examples using ICD-9 diagnosis codes for the ML approaches being evaluated. Nine phenotypic models for each ML approach were evaluated, resulting in better overall model performance in AUROC using ILP when compared to PART (p=0.039), J48 (p=0.003) and JRIP (p=0.003). 
ILP has the potential to improve phenotyping by independently delivering clinically expert interpretable rules for phenotype definitions, or intuitive phenotypes to assist experts. Relational learning using ILP offers a viable approach to EHR-driven phenotyping. Copyright © 2014 Elsevier Inc. All rights reserved.
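The paper's two-sided sign test for comparing learners across phenotypes can be sketched as follows. The AUROC values below are hypothetical stand-ins, not the study's data; only the test itself is standard:

```python
from math import comb

def sign_test(x, y):
    """Two-sided sign test on paired scores (ties dropped)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    k = sum(d > 0 for d in diffs)
    pmf = [comb(n, i) * 0.5 ** n for i in range(n + 1)]
    # two-sided p-value: total probability of outcomes no likelier than k
    return sum(p for p in pmf if p <= pmf[k] + 1e-12)

# hypothetical per-phenotype AUROCs for ILP vs. one propositional learner
ilp  = [0.95, 0.91, 0.88, 0.93, 0.90, 0.89, 0.92, 0.87, 0.94]
part = [0.90, 0.89, 0.85, 0.94, 0.86, 0.84, 0.88, 0.83, 0.90]
p_value = sign_test(ilp, part)   # 8 wins out of 9 gives p = 20/512
```

With eight wins in nine paired comparisons, the test returns p ≈ 0.039, the same order as the ILP-versus-PART comparison reported above.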

  14. Simultaneous determination of morphine, codeine and 6-acetyl morphine in human urine and blood samples using direct aqueous derivatisation: validation and application to real cases.

    PubMed

    Chericoni, S; Stefanelli, F; Iannella, V; Giusiani, M

    2014-02-15

    Opiates play a relevant role in forensic toxicology, and their assay in urine or blood is routinely performed, for example, in workplace drug-testing or in toxicological investigation of drug-impaired driving. The present work describes two new methods for detecting morphine, codeine and 6-monoacetyl morphine in human urine or blood using a single-step derivatisation in aqueous phase. Propyl chloroformate is used as the derivatising agent, followed by liquid-liquid extraction and gas chromatography-mass spectrometry to detect the derivatives. The methods have been validated both for hydrolysed and unhydrolysed urine. For hydrolysed urine, the LOD and LOQ were 2.5ng/ml and 8.5ng/ml for codeine, and 5.2ng/ml and 15.1ng/ml for morphine, respectively. For unhydrolysed urine, the LOD and LOQ were 3.0ng/ml and 10.1ng/ml for codeine, 2.7ng/ml and 8.1ng/ml for morphine, and 0.8ng/ml and 1.5ng/ml for 6-monoacetyl morphine, respectively. In blood, the LOD and LOQ were 0.44ng/ml and 1.46ng/ml for codeine, 0.29ng/ml and 0.98ng/ml for morphine, and 0.15ng/ml and 0.51ng/ml for 6-monoacetyl morphine, respectively. The validated methods have been applied to 50 urine samples and 40 blood samples (both positive and negative) and can be used in routine analyses. Copyright © 2013 Elsevier B.V. All rights reserved.
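LOD and LOQ figures like those above are commonly estimated from calibration residuals using the ICH-style 3.3σ/S and 10σ/S rules. A minimal sketch with invented calibration data, not the paper's:

```python
import numpy as np

# hypothetical calibration series: spiked concentration (ng/mL) vs. response
conc = np.array([2.0, 5.0, 10.0, 25.0, 50.0, 100.0])
resp = np.array([0.041, 0.101, 0.198, 0.502, 1.004, 1.998])

slope, intercept = np.polyfit(conc, resp, 1)
residual_sd = np.std(resp - (slope * conc + intercept), ddof=2)

lod = 3.3 * residual_sd / slope    # ICH-style limit of detection
loq = 10.0 * residual_sd / slope   # ICH-style limit of quantification
```

Using the residual standard deviation of the regression (rather than a blank's standard deviation) is one of several accepted choices for σ.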

  15. Machine learning for prediction of all-cause mortality in patients with suspected coronary artery disease: a 5-year multicentre prospective registry analysis.

    PubMed

    Motwani, Manish; Dey, Damini; Berman, Daniel S; Germano, Guido; Achenbach, Stephan; Al-Mallah, Mouaz H; Andreini, Daniele; Budoff, Matthew J; Cademartiri, Filippo; Callister, Tracy Q; Chang, Hyuk-Jae; Chinnaiyan, Kavitha; Chow, Benjamin J W; Cury, Ricardo C; Delago, Augustin; Gomez, Millie; Gransar, Heidi; Hadamitzky, Martin; Hausleiter, Joerg; Hindoyan, Niree; Feuchtner, Gudrun; Kaufmann, Philipp A; Kim, Yong-Jin; Leipsic, Jonathon; Lin, Fay Y; Maffei, Erica; Marques, Hugo; Pontone, Gianluca; Raff, Gilbert; Rubinshtein, Ronen; Shaw, Leslee J; Stehli, Julia; Villines, Todd C; Dunning, Allison; Min, James K; Slomka, Piotr J

    2017-02-14

    Traditional prognostic risk assessment in patients undergoing non-invasive imaging is based upon a limited selection of clinical and imaging findings. Machine learning (ML) can consider a greater number and complexity of variables. Therefore, we investigated the feasibility and accuracy of ML to predict 5-year all-cause mortality (ACM) in patients undergoing coronary computed tomographic angiography (CCTA), and compared the performance to existing clinical or CCTA metrics. The analysis included 10 030 patients with suspected coronary artery disease and 5-year follow-up from the COronary CT Angiography EvaluatioN For Clinical Outcomes: An InteRnational Multicenter registry. All patients underwent CCTA as their standard of care. Twenty-five clinical and 44 CCTA parameters were evaluated, including segment stenosis score (SSS), segment involvement score (SIS), modified Duke index (DI), number of segments with non-calcified, mixed or calcified plaques, age, sex, standard cardiovascular risk factors, and Framingham risk score (FRS). Machine learning involved automated feature selection by information gain ranking, model building with a boosted ensemble algorithm, and 10-fold stratified cross-validation. Seven hundred and forty-five patients died during 5-year follow-up. Machine learning exhibited a higher area under the curve compared with the FRS or CCTA severity scores alone (SSS, SIS, DI) for predicting all-cause mortality (ML: 0.79 vs. FRS: 0.61, SSS: 0.64, SIS: 0.64, DI: 0.62; P < 0.001). Machine learning combining clinical and CCTA data predicted 5-year ACM significantly better than existing clinical or CCTA metrics alone. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2016. For permissions please email: journals.permissions@oup.com.
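The described pipeline (information-gain feature ranking, a boosted ensemble, 10-fold stratified cross-validation) can be sketched with scikit-learn. Everything below is synthetic and illustrative; note that, unlike a rigorous analysis, this sketch ranks features on the full dataset before cross-validation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import StratifiedKFold, cross_val_score

# synthetic stand-in for 25 clinical + 44 CCTA variables, ~7% event rate
X, y = make_classification(n_samples=1000, n_features=69, n_informative=10,
                           weights=[0.93, 0.07], random_state=0)

# rank features by information gain (mutual information); keep the top 20
mi = mutual_info_classif(X, y, random_state=0)
top = np.argsort(mi)[::-1][:20]

clf = GradientBoostingClassifier(random_state=0)       # boosted ensemble
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(clf, X[:, top], y, cv=cv, scoring="roc_auc").mean()
```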

  16. Use of machine learning to improve autism screening and diagnostic instruments: effectiveness, efficiency, and multi-instrument fusion.

    PubMed

    Bone, Daniel; Bishop, Somer L; Black, Matthew P; Goodwin, Matthew S; Lord, Catherine; Narayanan, Shrikanth S

    2016-08-01

    Machine learning (ML) provides novel opportunities for human behavior research and clinical translation, yet its application can have noted pitfalls (Bone et al., 2015). In this work, we fastidiously utilize ML to derive autism spectrum disorder (ASD) instrument algorithms in an attempt to improve upon widely used ASD screening and diagnostic tools. The data consisted of Autism Diagnostic Interview-Revised (ADI-R) and Social Responsiveness Scale (SRS) scores for 1,264 verbal individuals with ASD and 462 verbal individuals with non-ASD developmental or psychiatric disorders, split at age 10. Algorithms were created via a robust ML classifier, support vector machine, while targeting best-estimate clinical diagnosis of ASD versus non-ASD. Parameter settings were tuned in multiple levels of cross-validation. The created algorithms were more effective (higher performing) than the current algorithms, were tunable (sensitivity and specificity can be differentially weighted), and were more efficient (achieving near-peak performance with five or fewer codes). Results from ML-based fusion of ADI-R and SRS are reported. We present a screener algorithm for below (above) age 10 that reached 89.2% (86.7%) sensitivity and 59.0% (53.4%) specificity with only five behavioral codes. ML is useful for creating robust, customizable instrument algorithms. In a unique dataset comprised of controls with other difficulties, our findings highlight the limitations of current caregiver-report instruments and indicate possible avenues for improving ASD screening and diagnostic tools. © 2016 Association for Child and Adolescent Mental Health.
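A rough sketch of the approach described above: a support vector machine restricted to a handful of codes, with class weighting as one way to trade sensitivity against specificity. The data and the chosen "codes" are synthetic placeholders, not the ADI-R/SRS items:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# synthetic stand-in: 15 instrument codes, label = ASD vs. non-ASD
X, y = make_classification(n_samples=600, n_features=15, n_informative=5,
                           shuffle=False, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

five = [0, 1, 2, 3, 4]                   # restrict the screener to five codes
# up-weighting the positive class shifts the operating point toward sensitivity
clf = SVC(kernel="linear", class_weight={1: 2.0}).fit(Xtr[:, five], ytr)

tn, fp, fn, tp = confusion_matrix(yte, clf.predict(Xte[:, five])).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
```

Sweeping the class weight traces out sensitivity/specificity pairs analogous to the tunable algorithms reported above.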

  17. Spectrofluorimetric determination of fluoroquinolones in pharmaceutical preparations.

    PubMed

    Ulu, Sevgi Tatar

    2009-02-01

    A simple, rapid and highly sensitive spectrofluorimetric method is presented for the determination of four fluoroquinolone (FQ) drugs, ciprofloxacin, enoxacin, norfloxacin and moxifloxacin, in pharmaceutical preparations. The proposed method is based on the derivatization of the FQs with 4-chloro-7-nitrobenzofurazan (NBD-Cl) in borate buffer of pH 9.0 to yield a yellow product. The optimum experimental conditions have been studied carefully. Beer's law is obeyed over the concentration ranges of 23.5-500 ng mL(-1) for ciprofloxacin, 28.5-700 ng mL(-1) for enoxacin, 29.5-800 ng mL(-1) for norfloxacin and 33.5-1000 ng mL(-1) for moxifloxacin using the NBD-Cl reagent. The detection limits were found to be 7.0 ng mL(-1) for ciprofloxacin, 8.5 ng mL(-1) for enoxacin, 9.2 ng mL(-1) for norfloxacin and 9.98 ng mL(-1) for moxifloxacin. Intra-day and inter-day relative standard deviation and relative mean error values at three different concentrations were determined. The low relative standard deviation values indicate good precision, and the high recovery values indicate the accuracy of the proposed method. The method is highly sensitive and specific. The results obtained are in good agreement with those obtained by the official and reference methods. The results presented in this report show that the applied spectrofluorimetric method is acceptable for the determination of the four FQs in pharmaceutical preparations. Common excipients used as additives in pharmaceutical preparations do not interfere with the proposed method.
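A Beer's-law calibration and back-calculation of an unknown can be sketched as follows; the readings are invented for illustration, placed inside the linear range quoted for ciprofloxacin:

```python
import numpy as np

# hypothetical calibration inside the stated linear range for ciprofloxacin
conc = np.array([25.0, 50.0, 100.0, 200.0, 400.0, 500.0])   # ng/mL
intensity = np.array([0.05, 0.10, 0.21, 0.40, 0.81, 1.00])  # fluorescence

slope, intercept = np.polyfit(conc, intensity, 1)
r = np.corrcoef(conc, intensity)[0, 1]    # linearity check

unknown_reading = 0.30
unknown_conc = (unknown_reading - intercept) / slope   # back-calculated ng/mL
```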

  18. Real Alerts and Artifact Classification in Archived Multi-signal Vital Sign Monitoring Data—Implications for Mining Big Data

    PubMed Central

    Hravnak, Marilyn; Chen, Lujie; Dubrawski, Artur; Bose, Eliezer; Clermont, Gilles; Pinsky, Michael R.

    2015-01-01

    PURPOSE Huge hospital information system databases can be mined for knowledge discovery and decision support, but artifact in stored non-invasive vital sign (VS) high-frequency data streams limits their use. We used machine-learning (ML) algorithms trained on expert-labeled VS data streams to automatically classify VS alerts as real or artifact, thereby “cleaning” such data for future modeling. METHODS 634 admissions to a step-down unit had recorded continuous noninvasive VS monitoring data (heart rate [HR], respiratory rate [RR], peripheral arterial oxygen saturation [SpO2] at 1/20 Hz, and noninvasive oscillometric blood pressure [BP]). Time-series data crossing stability thresholds defined VS event epochs. Data were divided into Block 1 as the ML training/cross-validation set and Block 2 as the test set. Expert clinicians annotated Block 1 events as perceived real or artifact. After feature extraction, ML algorithms were trained to create and validate models automatically classifying events as real or artifact. The models were then tested on Block 2. RESULTS Block 1 yielded 812 VS events, with 214 (26%) judged by experts as artifact (RR 43%, SpO2 40%, BP 15%, HR 2%). ML algorithms applied to the Block 1 training/cross-validation set (10-fold cross-validation) gave area under the curve (AUC) scores of 0.97 for RR, 0.91 for BP and 0.76 for SpO2. Performance when applied to Block 2 test data was AUC 0.94 for RR, 0.84 for BP and 0.72 for SpO2. CONCLUSIONS ML-defined algorithms applied to archived multi-signal continuous VS monitoring data allowed accurate automated classification of VS alerts as real or artifact, and could support data mining for future model building. PMID:26438655
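One way to sketch the real-versus-artifact classification: summary features per event epoch, a classifier, cross-validated AUC. The signals below are simulated caricatures (smooth drift for real events, a momentary sensor dropout for artifacts), not the study's data or feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def features(window):
    """Summary features for one vital-sign event epoch."""
    jumps = np.abs(np.diff(window))
    return [window.mean(), window.std(), window.max() - window.min(), jumps.max()]

# simulated epochs: "real" = smooth drift, "artifact" = a sensor dropout
real = [features(100 + np.cumsum(rng.normal(0, 0.3, 60))) for _ in range(100)]
arti = []
for _ in range(100):
    w = 100 + rng.normal(0, 0.3, 60)
    w[rng.integers(5, 55)] = 0.0          # momentary dropout to zero
    arti.append(features(w))

X = np.array(real + arti)
y = np.array([1] * 100 + [0] * 100)       # 1 = real alert, 0 = artifact
auc = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                      cv=10, scoring="roc_auc").mean()
```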

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu Xiaoying; Ho, Shirley; Trac, Hy

    We investigate machine learning (ML) techniques for predicting the number of galaxies (N_gal) that occupy a halo, given the halo's properties. These types of mappings are crucial for constructing the mock galaxy catalogs necessary for analyses of large-scale structure. The ML techniques proposed here distinguish themselves from traditional halo occupation distribution (HOD) modeling as they do not assume a prescribed relationship between halo properties and N_gal. In addition, our ML approaches are only dependent on parent halo properties (like HOD methods), which are advantageous over subhalo-based approaches as identifying subhalos correctly is difficult. We test two algorithms: support vector machines (SVM) and k-nearest-neighbor (kNN) regression. We take galaxies and halos from the Millennium simulation and predict N_gal by training our algorithms on the following six halo properties: number of particles, M_200, σ_v, v_max, half-mass radius, and spin. For Millennium, our predicted N_gal values have a mean-squared error (MSE) of ≈0.16 for both SVM and kNN. Our predictions match the overall distribution of halos reasonably well and the galaxy correlation function at large scales to ≈5%-10%. In addition, we demonstrate a feature selection algorithm to isolate the halo parameters that are most predictive, a useful technique for understanding the mapping between halo properties and N_gal. Lastly, we investigate these ML-based approaches in making mock catalogs for different galaxy subpopulations (e.g., blue, red, high M_star, low M_star). Given its non-parametric nature as well as its powerful predictive and feature selection capabilities, ML offers an interesting alternative for creating mock catalogs.
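A kNN regression of N_gal on halo properties, in the spirit of the record above, might look like this. The toy catalog below is generated with made-up scaling relations, not taken from Millennium:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 2000
mass = rng.lognormal(0.0, 1.0, n)
# toy halo catalog: mass plus noisy correlated stand-ins for sigma_v, v_max,
# half-mass radius, and spin (five of the paper's six halo properties)
X = np.column_stack([
    mass,
    mass**0.33 * rng.normal(1, 0.05, n),   # velocity dispersion
    mass**0.30 * rng.normal(1, 0.05, n),   # v_max
    mass**0.30 * rng.normal(1, 0.10, n),   # half-mass radius
    rng.uniform(0, 0.2, n),                # spin
])
# assumed occupation: N_gal grows with log(mass), clipped at zero
n_gal = np.maximum(0, np.round(1 + 2 * np.log(mass) + rng.normal(0, 0.5, n)))

Xtr, Xte, ytr, yte = train_test_split(X, n_gal, random_state=0)
knn = KNeighborsRegressor(n_neighbors=10).fit(Xtr, ytr)
mse = mean_squared_error(yte, knn.predict(Xte))
```

Because kNN imposes no functional form, it is a simple analogue of the non-parametric property the abstract highlights over prescribed HOD relations.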

  20. Development and validation of spectrophotometric methods for estimating amisulpride in pharmaceutical preparations.

    PubMed

    Sharma, Sangita; Neog, Madhurjya; Prajapati, Vipul; Patel, Hiren; Dabhi, Dipti

    2010-01-01

    Five simple, sensitive, accurate and rapid visible spectrophotometric methods (A, B, C, D and E) have been developed for estimating Amisulpride in pharmaceutical preparations. These are based on the diazotization of Amisulpride with sodium nitrite and hydrochloric acid, followed by coupling with N-(1-naphthyl)ethylenediamine dihydrochloride (Method A), diphenylamine (Method B), beta-naphthol in an alkaline medium (Method C), resorcinol in an alkaline medium (Method D) and chromotropic acid in an alkaline medium (Method E) to form a colored chromogen. The absorption maxima, lambda(max), are at 523 nm for Method A, 382 and 490 nm for Method B, 527 nm for Method C, 521 nm for Method D and 486 nm for Method E. Beer's law was obeyed in the concentration range of 2.5-12.5 microg mL(-1) in Method A, 5-25 and 10-50 microg mL(-1) in Method B, 4-20 microg mL(-1) in Method C, 2.5-12.5 microg mL(-1) in Method D and 5-15 microg mL(-1) in Method E. The results obtained for the proposed methods are in good agreement with labeled amounts, when marketed pharmaceutical preparations were analyzed.

  1. [MK-801 or DNQX reduces electroconvulsive shock-induced impairment of learning-memory and hyperphosphorylation of Tau in rats].

    PubMed

    Liu, Chao; Min, Su; Wei, Ke; Liu, Dong; Dong, Jun; Luo, Jie; Liu, Xiao-Bin

    2012-08-25

    This study explored the effect of excitatory amino acid receptor antagonists on the impairment of learning-memory and the hyperphosphorylation of Tau protein induced by electroconvulsive shock (ECT) in depressed rats, in order to provide experimental evidence for studies on the neuropsychological mechanisms of learning and memory impairment and for clinical intervention. A factorial-design analysis of variance included two intervention factors: electroconvulsive shock (two levels: no treatment; a course of ECT) and excitatory amino acid receptor antagonist (three levels: iv saline; iv NMDA receptor antagonist MK-801; iv AMPA receptor antagonist DNQX). Forty-eight adult Wistar-Kyoto (WKY) rats (an animal model of depressive behavior) were randomly divided into six experimental groups (n = 8 in each group): saline (iv 2 mL saline through the tail vein); MK-801 (iv 2 mL 5 mg/kg MK-801 through the tail vein); DNQX (iv 2 mL 5 mg/kg DNQX through the tail vein); saline + ECT (iv 2 mL saline plus a course of ECT); MK-801 + ECT (iv 2 mL 5 mg/kg MK-801 plus a course of ECT); DNQX + ECT (iv 2 mL 5 mg/kg DNQX plus a course of ECT). The Morris water maze test started within 1 day after completion of the course of ECT to evaluate learning and memory. The hippocampus was removed within 1 day after completion of the Morris water maze test. The content of glutamate in the hippocampus was detected by high-performance liquid chromatography. The contents of Tau protein, including Tau5 (total Tau protein), p-PHF1 (Ser396/404), p-AT8 (Ser199/202) and p-12E8 (Ser262), in the hippocampus were detected by immunohistochemistry staining (SP) and Western blot. 
The results showed that ECT and the glutamate ionotropic receptor blockers (NMDA receptor antagonist MK-801 and AMPA receptor antagonist DNQX) induced the impairment of learning and memory in depressed rats, with extended escape latency time and shortened space exploration time, and the two factors presented a subtractive effect. ECT significantly up-regulated the content of glutamate in the hippocampus of depressed rats, which was not affected by the glutamate ionotropic receptor blockers. ECT and the glutamate ionotropic receptor blockers did not affect total Tau protein in the hippocampus. ECT up-regulated the hyperphosphorylation of Tau protein in the hippocampus of depressed rats, while the glutamate ionotropic receptor blockers down-regulated it, and the combination of the two factors presented a subtractive effect. Our results indicate that ECT up-regulates the content of glutamate in the hippocampus of depressed rats, which up-regulates the hyperphosphorylation of Tau protein, resulting in the impairment of learning and memory in depressed rats.

  2. Wall-based measurement features provides an improved IVUS coronary artery risk assessment when fused with plaque texture-based features during machine learning paradigm.

    PubMed

    Banchhor, Sumit K; Londhe, Narendra D; Araki, Tadashi; Saba, Luca; Radeva, Petia; Laird, John R; Suri, Jasjit S

    2017-12-01

    Planning of percutaneous interventional procedures involves pre-screening and risk stratification of coronary artery disease. Current screening tools use stand-alone plaque texture-based features and therefore lack the ability to stratify the risk. This IRB-approved study presents a novel strategy for coronary artery disease risk stratification using an amalgamation of IVUS plaque texture-based and wall-based measurement features. Due to common genetic plaque makeup, carotid plaque burden was chosen as a gold standard for risk labels during the training phase of the machine learning (ML) paradigm. A cross-validation protocol was adopted to compute the accuracy of the ML framework. A set of 59 plaque texture-based features was padded with six wall-based measurement features to show the improvement in stratification accuracy. The ML system was executed using a principal component analysis-based framework for dimensionality reduction and uses a support vector machine classifier for the training and testing phases. The ML system produced a stratification accuracy of 91.28%, demonstrating an improvement of 5.69% when wall-based measurement features were combined with plaque texture-based features. The fused system showed an improvement in mean sensitivity, specificity, positive predictive value, and area under the curve of 6.39%, 4.59%, 3.31% and 5.48%, respectively, when compared to the stand-alone system. While meeting the stability criterion of 5%, the ML system also showed a high average feature-retaining power and mean reliability of 89.32% and 98.24%, respectively. The ML system showed an improvement in risk stratification accuracy when the wall-based measurement features were fused with the plaque texture-based features. Copyright © 2017 Elsevier Ltd. All rights reserved.
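The PCA-based dimensionality reduction feeding a support vector machine can be sketched as a scikit-learn pipeline. Feature counts mirror the abstract (59 texture-based plus 6 wall-based), but the data, component count, and scores are synthetic assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# synthetic stand-ins: 59 texture-based + 6 wall-based measurement features
X, y = make_classification(n_samples=400, n_features=65, n_informative=12,
                           random_state=0)
texture_only, fused = X[:, :59], X

pipe = make_pipeline(StandardScaler(), PCA(n_components=10), SVC())
acc_texture = cross_val_score(pipe, texture_only, y, cv=5).mean()
acc_fused = cross_val_score(pipe, fused, y, cv=5).mean()
```

Keeping the scaler and PCA inside the pipeline ensures both are refit on each training fold, avoiding leakage into the test folds.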

  3. A comparison of the short- and long-term effects of corticosterone exposure on extinction in adolescence versus adulthood.

    PubMed

    Den, Miriam Liora; Altmann, Sarah R; Richardson, Rick

    2014-12-01

    Human and nonhuman adolescents have impaired retention of extinction of learned fear, relative to juveniles and adults. It is unknown whether exposure to stress affects extinction differently in adolescents versus adults. These experiments compared the short- and long-term effects of exposure to the stress-related hormone corticosterone (CORT) on the extinction of learned fear in adolescent and adult rats. Across all experiments, adolescent and adult rats were trained to exhibit good extinction retention by giving extinction training across 2 consecutive days. Despite this extra training, adolescents exposed to 1 week of CORT (200 μg/ml) in their drinking water showed impaired extinction retention when trained shortly after the CORT was removed (Experiment 1a). In contrast, adult rats exposed to CORT (200 μg/ml) for the same duration did not exhibit deficits in extinction retention (Experiment 1b). Exposing adolescents to half the amount of CORT (100 μg/ml; Experiment 1c) for 1 week similarly disrupted extinction retention. Extinction impairments in adult rats were only observed after 3 weeks, rather than 1 week, of CORT (200 μg/ml; Experiment 1d). Remarkably, however, adult rats showed impaired extinction retention if they had been exposed to 1 week of CORT (200 μg/ml) during adolescence (Experiment 2). Finally, exposure to 3 weeks of CORT (200 μg/ml) in adulthood led to long-lasting extinction deficits after a 6-week drug-free period (Experiment 3). These findings suggest that although CORT disrupts both short- and long-term extinction retention in adolescents and adults, adolescents may be more vulnerable to these effects because of the maturation of stress-sensitive brain regions. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  4. Method for determination of levoglucosan in snow and ice at trace concentration levels using ultra-performance liquid chromatography coupled with triple quadrupole mass spectrometry.

    PubMed

    You, Chao; Song, Lili; Xu, Baiqing; Gao, Shaopeng

    2016-02-01

    A method is developed for determination of levoglucosan at trace concentration levels in the complex matrices of snow and ice samples. This method uses an injection mixture comprising acetonitrile and melted sample at a ratio of 50/50 (v/v). Samples are analyzed using an ultra-performance liquid chromatography system coupled with triple quadrupole tandem mass spectrometry (UPLC-MS/MS). Levoglucosan is analyzed on a BEH Amide column (2.1 mm × 100 mm, 1.7 µm), and a Z-spray electrospray ionization source is used for levoglucosan ionization. A polyether sulfone filter is selected for filtering out insoluble particles because of its minimal impact on levoglucosan. The matrix effect is evaluated by using a standard addition method. During the method validation, limit of detection (LOD), linearity, recovery, repeatability and reproducibility were evaluated using the standard addition method. The LOD of this method is 0.11 ng mL(-1). Recoveries vary from 91.2% at 0.82 ng mL(-1) to 99.3% at 4.14 ng mL(-1). Repeatability ranges from 17.9% at a concentration of 0.82 ng mL(-1) to 2.8% at 4.14 ng mL(-1). Reproducibility ranges from 15.1% at a concentration of 0.82 ng mL(-1) to 1.9% at 4.14 ng mL(-1). This method can be implemented using less than 0.50 mL of sample, a volume attainable for snow and ice from low- and middle-latitude regions like the Tibetan Plateau. Copyright © 2015 Elsevier B.V. All rights reserved.
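The standard addition method used above to handle matrix effects recovers the native analyte concentration from the x-intercept of a spiked-sample calibration line. A sketch with invented numbers:

```python
import numpy as np

# hypothetical standard-addition series: added levoglucosan (ng/mL) vs. signal
added  = np.array([0.0, 1.0, 2.0, 4.0])
signal = np.array([0.52, 1.01, 1.53, 2.50])   # unspiked sample reads 0.52

slope, intercept = np.polyfit(added, signal, 1)
native_conc = intercept / slope   # magnitude of the x-intercept, in ng/mL
```

Because the calibration is built in the sample's own matrix, the slope already reflects any suppression or enhancement of the signal.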

  5. Computer vision and machine learning for robust phenotyping in genome-wide studies

    PubMed Central

    Zhang, Jiaoping; Naik, Hsiang Sing; Assefa, Teshale; Sarkar, Soumik; Reddy, R. V. Chowda; Singh, Arti; Ganapathysubramanian, Baskar; Singh, Asheesh K.

    2017-01-01

    Traditional evaluation of crop biotic and abiotic stresses is time-consuming and labor-intensive, limiting the ability to dissect the genetic basis of quantitative traits. A machine learning (ML)-enabled image-phenotyping pipeline for genetic studies of the abiotic stress iron deficiency chlorosis (IDC) in soybean is reported. IDC classification and severity for an association panel of 461 diverse plant-introduction accessions were evaluated using an end-to-end phenotyping workflow. The workflow consisted of a multi-stage procedure including: (1) optimized protocols for consistent image capture across plant canopies, (2) canopy identification and registration from cluttered backgrounds, (3) extraction of domain-expert-informed features from the processed images to accurately represent IDC expression, and (4) supervised ML-based classifiers that linked the automatically extracted features with expert-rating-equivalent IDC scores. ML-generated phenotypic data were subsequently utilized for a genome-wide association study and genomic prediction. The results illustrate the reliability and advantage of the ML-enabled image-phenotyping pipeline, which identified a previously reported locus and a novel locus harboring a gene homolog involved in iron acquisition. This study demonstrates a promising path for integrating the phenotyping pipeline into genomic prediction, and provides a systematic framework enabling robust and quicker phenotyping through ground-based systems. PMID:28272456
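Step (3) of the workflow, extracting expert-informed features from processed canopy images, might be sketched as a simple chlorosis-fraction feature. The thresholds and the toy image below are illustrative assumptions, not the study's feature set:

```python
import numpy as np

def chlorosis_fraction(rgb):
    """Fraction of canopy pixels that look chlorotic (yellow: high R, low B)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    canopy = g > 60                          # crude canopy mask
    yellow = canopy & (r > 120) & (b < 80)   # hypothetical thresholds
    return yellow.sum() / max(canopy.sum(), 1)

# toy canopy image: uniformly green with one chlorotic (yellow) patch
img = np.zeros((64, 64, 3), dtype=np.uint8)
img[..., 1] = 150                            # green channel everywhere
img[:16, :16, 0] = 200                       # red channel high in the patch
frac = chlorosis_fraction(img)               # patch is 256 of 4096 pixels
```

Features like this, computed per accession, would then feed the supervised classifiers in step (4).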

  6. Machine Learning-Augmented Propensity Score-Adjusted Multilevel Mixed Effects Panel Analysis of Hands-On Cooking and Nutrition Education versus Traditional Curriculum for Medical Students as Preventive Cardiology: Multisite Cohort Study of 3,248 Trainees over 5 Years

    PubMed Central

    Dart, Lyn; Vanbeber, Anne; Smith-Barbaro, Peggy; Costilla, Vanessa; Samuel, Charlotte; Terregino, Carol A.; Abali, Emine Ercikan; Dollinger, Beth; Baumgartner, Nicole; Kramer, Nicholas; Seelochan, Alex; Taher, Sabira; Deutchman, Mark; Evans, Meredith; Ellis, Robert B.; Oyola, Sonia; Maker-Clark, Geeta; Budnick, Isadore; Tran, David; DeValle, Nicole; Shepard, Rachel; Chow, Erika; Petrin, Christine; Razavi, Alexander; McGowan, Casey; Grant, Austin; Bird, Mackenzie; Carry, Connor; McGowan, Glynis; McCullough, Colleen; Berman, Casey M.; Dotson, Kerri; Sarris, Leah; Harlan, Timothy S.; Co-investigators, on behalf of the CHOP

    2018-01-01

    Background Cardiovascular disease (CVD) annually claims more lives and costs more dollars than any other disease globally, amid widening health disparities, despite the significant reductions in this burden known to be achievable through low-cost dietary changes. The world's first medical school-based teaching kitchen therefore launched CHOP-Medical Students as the largest known multisite cohort study of hands-on cooking and nutrition education versus traditional curriculum for medical students. Methods This analysis provides a novel integration of artificial intelligence-based machine learning (ML) with causal inference statistics. 43 automated ML algorithms were tested, with the top performer compared to a triply robust propensity score-adjusted multilevel mixed-effects regression panel analysis of longitudinal data. Inverse-variance-weighted fixed-effects meta-analysis pooled the individual estimates for competencies. Results 3,248 unique medical trainees met study criteria from 20 medical schools nationally from August 1, 2012, to June 26, 2017, generating 4,026 completed validated surveys. ML analysis produced results similar to the causal inference statistics based on root mean squared error and accuracy. Hands-on cooking and nutrition education compared to traditional medical school curriculum significantly improved student competencies (OR 2.14, 95% CI 2.00–2.28, p < 0.001) and MedDiet adherence (OR 1.40, 95% CI 1.07–1.84, p = 0.015), while reducing trainees' soft drink consumption (OR 0.56, 95% CI 0.37–0.85, p = 0.007). Overall improved competencies were demonstrated from the initial study site through the scale-up of the intervention to 10 sites nationally (p < 0.001). Discussion This study provides the first machine learning-augmented causal inference analysis of a multisite cohort showing hands-on cooking and nutrition education for medical trainees improves their competencies counseling patients on nutrition, while improving students' own diets. 
This study suggests that the public health and medical sectors can unite population health management and precision medicine for a sustainable model of next-generation health systems providing effective, equitable, accessible care beginning with reversing the CVD epidemic. PMID:29850526
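Propensity-score adjustment of the kind used above can be sketched with inverse-probability weighting: fit a propensity model for treatment assignment, weight outcomes by the inverse of that probability, and compare weighted group means. This is a generic illustration on simulated data (true effect fixed at 2.0), not the study's triply robust estimator:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 3))                        # baseline covariates
p_treat = 1 / (1 + np.exp(-(x[:, 0] - x[:, 1])))   # confounded assignment
t = rng.random(n) < p_treat                        # treatment indicator
y = 2.0 * t + x[:, 0] + rng.normal(size=n)         # outcome; true effect 2.0

ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
w = np.where(t, 1.0 / ps, 1.0 / (1.0 - ps))        # inverse-probability weights
ate = (np.average(y[t], weights=w[t])
       - np.average(y[~t], weights=w[~t]))         # estimate near 2.0
```

The unweighted difference in means would be biased upward here, because treated units tend to have higher values of the confounder that also raises the outcome.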

  7. Improving quantitative structure-activity relationship models using Artificial Neural Networks trained with dropout.

    PubMed

    Mendenhall, Jeffrey; Meiler, Jens

    2016-02-01

    Dropout is an Artificial Neural Network (ANN) training technique that has been shown to improve ANN performance across canonical machine learning (ML) datasets. Quantitative Structure Activity Relationship (QSAR) datasets used to relate chemical structure to biological activity in Ligand-Based Computer-Aided Drug Discovery pose unique challenges for ML techniques, such as heavily biased dataset composition, and relatively large number of descriptors relative to the number of actives. To test the hypothesis that dropout also improves QSAR ANNs, we conduct a benchmark on nine large QSAR datasets. Use of dropout improved both enrichment false positive rate and log-scaled area under the receiver-operating characteristic curve (logAUC) by 22-46 % over conventional ANN implementations. Optimal dropout rates are found to be a function of the signal-to-noise ratio of the descriptor set, and relatively independent of the dataset. Dropout ANNs with 2D and 3D autocorrelation descriptors outperform conventional ANNs as well as optimized fingerprint similarity search methods.
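Dropout itself is easy to sketch: during training, each unit is zeroed with probability `rate` and the survivors are rescaled by 1/(1-rate) (inverted dropout), so expected activations match inference time. A minimal NumPy illustration, not the authors' ANN implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(a, rate, train=True):
    """Inverted dropout: zero each unit with prob `rate`, rescale survivors
    by 1/(1-rate) so expected activations match inference time."""
    if not train or rate == 0.0:
        return a
    mask = rng.random(a.shape) >= rate
    return a * mask / (1.0 - rate)

x = rng.normal(size=(32, 200))        # a batch of descriptor vectors
W = rng.normal(size=(200, 64)) * 0.05
h = np.maximum(0.0, x @ W)            # ReLU hidden activations
h_drop = dropout(h, rate=0.25)        # applied only during training
```

At test time the function is called with `train=False`, returning the activations unchanged.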

  8. Improving Quantitative Structure-Activity Relationship Models using Artificial Neural Networks Trained with Dropout

    PubMed Central

    Mendenhall, Jeffrey; Meiler, Jens

    2016-01-01

    Dropout is an Artificial Neural Network (ANN) training technique that has been shown to improve ANN performance across canonical machine learning (ML) datasets. Quantitative Structure Activity Relationship (QSAR) datasets used to relate chemical structure to biological activity in Ligand-Based Computer-Aided Drug Discovery (LB-CADD) pose unique challenges for ML techniques, such as heavily biased dataset composition, and relatively large number of descriptors relative to the number of actives. To test the hypothesis that dropout also improves QSAR ANNs, we conduct a benchmark on nine large QSAR datasets. Use of dropout improved both Enrichment false positive rate (FPR) and log-scaled area under the receiver-operating characteristic curve (logAUC) by 22–46% over conventional ANN implementations. Optimal dropout rates are found to be a function of the signal-to-noise ratio of the descriptor set, and relatively independent of the dataset. Dropout ANNs with 2D and 3D autocorrelation descriptors outperform conventional ANNs as well as optimized fingerprint similarity search methods. PMID:26830599

  9. Bt Toxin Cry1Ie Causes No Negative Effects on Survival, Pollen Consumption, or Olfactory Learning in Worker Honey Bees (Hymenoptera: Apidae).

    PubMed

    Dai, Ping-Li; Jia, Hui-Ru; Geng, Li-Li; Diao, Qing-Yun

    2016-04-27

    The honey bee (Apis mellifera L.) is a key nontarget insect in environmental risk assessments of insect-resistant genetically modified crops. In controlled laboratory conditions, we evaluated the potential effects of Cry1Ie toxin on survival, pollen consumption, and olfactory learning of young adult honey bees. We exposed worker bees to syrup containing 20, 200, or 20,000 ng/ml Cry1Ie toxin, and also exposed some bees to 48 ng/ml imidacloprid as a positive control for exposure to a sublethal concentration of a toxic product. Results suggested that Cry1Ie toxin carries no risk to survival, pollen consumption, or learning capabilities of young adult honey bees. However, during oral exposure to the imidacloprid treatments, honey bee learning behavior was affected and bees consumed significantly less pollen than the control and Cry1Ie groups. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. A gas chromatographic method for the determination of bicarbonate and dissolved gases

    USDA-ARS?s Scientific Manuscript database

    A gas chromatographic method for the rapid determination of aqueous carbon dioxide and its speciation into solvated carbon dioxide and bicarbonate is presented. One-half mL samples are injected through a rubber septum into 20-mL vials that are filled with 9.5 mL of 0.1 N HCl. A one mL portion of the...

  11. Non-negative Tensor Factorization for Robust Exploratory Big-Data Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexandrov, Boian; Vesselinov, Velimir Valentinov; Djidjev, Hristo Nikolov

    Currently, large multidimensional datasets are being accumulated in almost every field. Data are: (1) collected by distributed sensor networks in real time all over the globe, (2) produced by large-scale experimental measurements or engineering activities, (3) generated by high-performance simulations, and (4) gathered by electronic communications and social-network activities, etc. Simultaneous analysis of these ultra-large heterogeneous multidimensional datasets is often critical for scientific discoveries, decision-making, emergency response, and national and global security. The importance of such analyses mandates the development of the next generation of robust machine learning (ML) methods and tools for big-data exploratory analysis.
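The two-way (matrix) analogue of non-negative tensor factorization is classic NMF with multiplicative updates, which keeps all factors non-negative by construction. A minimal NumPy sketch of that analogue, not the authors' tensor method:

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(V, k, iters=500):
    """Lee-Seung multiplicative updates: approximate V ~ W @ H with W, H >= 0."""
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update H, stays non-negative
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update W, stays non-negative
    return W, H

# synthetic non-negative data with 3 latent components
V = rng.random((50, 3)) @ rng.random((3, 40))
W, H = nmf(V, k=3)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

For a true tensor, the same multiplicative-update idea is applied to each factor matrix of a CP or Tucker decomposition in turn.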

  12. Leveraging knowledge engineering and machine learning for microbial bio-manufacturing.

    PubMed

    Oyetunde, Tolutola; Bao, Forrest Sheng; Chen, Jiung-Wen; Martin, Hector Garcia; Tang, Yinjie J

    2018-05-03

    Genome scale modeling (GSM) predicts the performance of microbial workhorses and helps identify beneficial gene targets. GSM integrated with intracellular flux dynamics, omics, and thermodynamics has shown remarkable progress in both elucidating complex cellular phenomena and computational strain design (CSD). Nonetheless, these models still show high uncertainty due to a poor understanding of innate pathway regulations, metabolic burdens, and other factors (such as stress tolerance and metabolite channeling). In addition, the engineered hosts may have genetic mutations or non-genetic variations in bioreactor conditions, and thus CSD rarely foresees fermentation rate and titer. Metabolic models play an important role in design-build-test-learn cycles for strain improvement, and machine learning (ML) may provide a viable complementary approach for driving strain design and deciphering cellular processes. In order to develop quality ML models, knowledge engineering leverages and standardizes the wealth of information in literature (e.g., genomic/phenomic data, synthetic biology strategies, and bioprocess variables). Data-driven frameworks can offer new constraints for mechanistic models to describe cellular regulations, to design pathways, to search gene targets, and to estimate fermentation titer/rate/yield under specified growth conditions (e.g., mixing, nutrients, and O2). This review highlights the scope of information collections, database constructions, and machine learning techniques (such as deep learning and transfer learning), which may facilitate "Learn and Design" for strain development. Copyright © 2018. Published by Elsevier Inc.

  13. Considerations for automated machine learning in clinical metabolic profiling: Altered homocysteine plasma concentration associated with metformin exposure.

    PubMed

    Orlenko, Alena; Moore, Jason H; Orzechowski, Patryk; Olson, Randal S; Cairns, Junmei; Caraballo, Pedro J; Weinshilboum, Richard M; Wang, Liewei; Breitenstein, Matthew K

    2018-01-01

    With the maturation of metabolomics science and proliferation of biobanks, clinical metabolic profiling is an increasingly opportunistic frontier for advancing translational clinical research. Automated Machine Learning (AutoML) approaches provide an exciting opportunity to guide feature selection in agnostic metabolic profiling endeavors, where potentially thousands of independent data points must be evaluated. In previous research, AutoML using high-dimensional data of varying types has been demonstrably robust, outperforming traditional approaches. However, considerations for application in clinical metabolic profiling remain to be evaluated, particularly regarding the robustness of AutoML in identifying and adjusting for common clinical confounders. In this study, we present a focused case study regarding AutoML considerations for using the Tree-Based Optimization Tool (TPOT) in metabolic profiling of exposure to metformin in a biobank cohort. First, we propose a tandem rank-accuracy measure to guide agnostic feature selection and corresponding threshold determination in clinical metabolic profiling endeavors. Second, while AutoML with default parameters can lack sensitivity to low-effect confounding clinical covariates, we demonstrated residual training and adjustment of metabolite features as an easily applicable approach to ensure AutoML adjustment for potential confounding characteristics. Finally, we present increased homocysteine with long-term exposure to metformin as a potentially novel, non-replicated metabolite association suggested by TPOT, an association not identified in parallel clinical metabolic profiling endeavors. While warranting independent replication, our tandem rank-accuracy measure suggests homocysteine to be the metabolite feature with the largest effect, and the corresponding priority for further translational clinical research.
Residual training and adjustment for a potential confounding effect by BMI only slightly modified the suggested association. Increased homocysteine is thought to be associated with vitamin B12 deficiency - evaluation for potential clinical relevance is suggested. While considerations for clinical metabolic profiling are recommended, including adjustment approaches for clinical confounders, AutoML presents an exciting tool to enhance clinical metabolic profiling and advance translational research endeavors.
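
    The residual training and adjustment step described above — removing a confounder's linear signal from a metabolite feature before the AutoML classifier sees it — reduces, under a linearity assumption, to keeping ordinary-least-squares residuals. A minimal sketch; the BMI and homocysteine values are invented for illustration:

```python
def residualize(feature, confounder):
    """Regress a metabolite feature on a clinical confounder (ordinary least
    squares) and return the residuals, stripping the confounder's linear signal."""
    n = len(feature)
    mx = sum(confounder) / n
    my = sum(feature) / n
    b = sum((x - mx) * (y - my) for x, y in zip(confounder, feature)) / \
        sum((x - mx) ** 2 for x in confounder)
    a = my - b * mx
    return [y - (a + b * x) for x, y in zip(confounder, feature)]

# Invented toy values: a homocysteine-like feature perfectly explained by BMI
# should residualize to (numerically) zero.
bmi = [20.0, 25.0, 30.0, 35.0]
hcy = [8.0, 9.0, 10.0, 11.0]
res = residualize(hcy, bmi)
```

    Downstream feature selection then operates on `res` instead of the raw feature, so any signal it finds cannot be a rediscovery of the confounder.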

  14. A Hybrid Semi-supervised Classification Scheme for Mining Multisource Geospatial Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vatsavai, Raju; Bhaduri, Budhendra L

    2011-01-01

    Supervised learning methods such as Maximum Likelihood (ML) are often used in land cover (thematic) classification of remote sensing imagery. The ML classifier relies exclusively on spectral characteristics of thematic classes, whose statistical distributions (class-conditional probability densities) are often overlapping. The spectral response distributions of thematic classes depend on many factors, including elevation, soil types, and ecological zones. A second problem with statistical classifiers is the requirement for a large number of accurate training samples (10 to 30 × |dimensions|), which are often costly and time-consuming to acquire over large geographic regions. With the increasing availability of geospatial databases, it is possible to exploit the knowledge derived from these ancillary datasets to improve classification accuracies even when the class distributions are highly overlapping. Likewise, newer semi-supervised techniques can be adopted to improve the parameter estimates of the statistical model by utilizing a large number of easily available unlabeled training samples. Unfortunately, there is no convenient multivariate statistical model that can be employed for multisource geospatial databases. In this paper we present a hybrid semi-supervised learning algorithm that effectively exploits freely available unlabeled training samples from multispectral remote sensing images and also incorporates ancillary geospatial databases. We have conducted several experiments on real datasets, and our new hybrid approach shows a 25 to 35% improvement in overall classification accuracy over conventional classification schemes.
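
    One way to see the semi-supervised idea — labeled samples initialize class-conditional densities, unlabeled samples refine them via EM — is a one-dimensional Gaussian sketch. This is a generic illustration under strong simplifying assumptions (single band, Gaussian classes), not the paper's multisource algorithm:

```python
import numpy as np

def semi_supervised_gaussians(Xl, yl, Xu, iters=20):
    """Refine per-class Gaussian parameters with EM, treating the class
    posteriors of unlabeled points as soft labels (labels stay fixed for Xl)."""
    classes = np.unique(yl)
    mu = np.array([Xl[yl == c].mean() for c in classes])
    # variance floor avoids numerical underflow when few labeled points exist
    var = np.array([Xl[yl == c].var() for c in classes]) + 0.1
    prior = np.array([(yl == c).mean() for c in classes])
    x_all = np.concatenate([Xl, Xu])
    for _ in range(iters):
        # E-step: class responsibilities for the unlabeled points
        ll = np.exp(-0.5 * (Xu[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = prior * ll
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weighted re-estimation from labeled + soft-labeled samples
        for k, c in enumerate(classes):
            w = np.concatenate([(yl == c).astype(float), resp[:, k]])
            mu[k] = (w * x_all).sum() / w.sum()
            var[k] = (w * (x_all - mu[k]) ** 2).sum() / w.sum() + 1e-6
            prior[k] = w.sum() / len(x_all)
    return mu, var, prior

# Three labeled points per class, many unlabeled draws from both classes.
rng = np.random.default_rng(1)
Xl = np.array([-0.1, 0.2, 0.1, 3.8, 4.1, 4.2])
yl = np.array([0, 0, 0, 1, 1, 1])
Xu = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(4.0, 1.0, 100)])
mu, var, prior = semi_supervised_gaussians(Xl, yl, Xu)
```

    The unlabeled pool pulls the parameter estimates toward the true class distributions, which is exactly the benefit the abstract describes for overlapping spectral classes.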

  15. Five methods of breast volume measurement: a comparative study of measurements of specimen volume in 30 mastectomy cases.

    PubMed

    Kayar, Ragip; Civelek, Serdar; Cobanoglu, Murat; Gungor, Osman; Catal, Hidayet; Emiroglu, Mustafa

    2011-03-27

    To compare breast volume measurement techniques in terms of accuracy, convenience, and cost. Breast volumes of 30 patients who were scheduled to undergo total mastectomy surgery were measured preoperatively by using five different methods (mammography, anatomic [anthropometric] measurement, thermoplastic casting, the Archimedes procedure, and the Grossman-Roudner device). Specimen volume after total mastectomy was measured in each patient with the water displacement method (Archimedes). The results were compared statistically with the values obtained by the five different methods. The mean mastectomy specimen volume was 623.5 (range 150-1490) mL. The breast volume values were established to be 615.7 mL (r = 0.997) with the mammographic method, 645.4 mL (r = 0.975) with the anthropometric method, 565.8 mL (r = 0.934) with the Grossman-Roudner device, 583.2 mL (r = 0.989) with the Archimedes procedure, and 544.7 mL (r = 0.94) with the casting technique. Examination of r values revealed that the most accurate method was mammography for all volume ranges, followed by the Archimedes method. The present study demonstrated that the most accurate method of breast volume measurement is mammography, followed by the Archimedes method. However, when patient comfort, ease of application, and cost were taken into consideration, the Grossman-Roudner device and anatomic measurement were less expensive and easier methods with an acceptable degree of accuracy.

  16. Development and Validation of HPLC Method for Determination of Crocetin, a constituent of Saffron, in Human Serum Samples

    PubMed Central

    Mohammadpour, Amir Hooshang; Ramezani, Mohammad; Tavakoli Anaraki, Nasim; Malaekeh-Nikouei, Bizhan; Amel Farzad, Sara; Hosseinzadeh, Hossein

    2013-01-01

    Objective(s): The present study reports the development and validation of a sensitive and rapid extraction method alongside a high-performance liquid chromatographic method for the determination of crocetin in human serum. Materials and Methods: The HPLC method was carried out by using a C18 reversed-phase column and a mobile phase composed of methanol/water/acetic acid (85:14.5:0.5 v/v/v) at the flow rate of 0.8 ml/min. The UV detector was set at 423 nm and 13-cis retinoic acid was used as the internal standard. Serum samples were pretreated with solid-phase extraction using Bond Elut C18 (200 mg) cartridges or with direct precipitation using acetonitrile. Results: The calibration curves were linear over the range of 0.05-1.25 µg/ml for the direct precipitation method and 0.5-5 µg/ml for solid-phase extraction. The mean recoveries of crocetin over a concentration range of 0.05-5 µg/ml serum for the direct precipitation method and 0.5-5 µg/ml for solid-phase extraction were above 70% and 60%, respectively. The intraday coefficients of variation were 0.37-2.6% for the direct precipitation method and 0.64-5.43% for solid-phase extraction. The interday coefficients of variation were 1.69-6.03% for the direct precipitation method and 5.13-12.74% for solid-phase extraction. The lower limit of quantification for crocetin was 0.05 µg/ml for the direct precipitation method and 0.5 µg/ml for solid-phase extraction. Conclusion: The validated direct precipitation method for HPLC satisfied all of the criteria that were necessary for a bioanalytical method and could reliably quantitate crocetin in human serum for future clinical pharmacokinetic studies. PMID:23638292

  17. NASA FDL: Accelerating Artificial Intelligence Applications in the Space Sciences.

    NASA Astrophysics Data System (ADS)

    Parr, J.; Navas-Moreno, M.; Dahlstrom, E. L.; Jennings, S. B.

    2017-12-01

    NASA has a long history of using Artificial Intelligence (AI) for exploration purposes; however, due to the recent explosion of the Machine Learning (ML) field within AI, there are great opportunities for NASA to find expanded benefit. For over two years now, the NASA Frontier Development Lab (FDL) has been at the nexus of bright academic researchers, private sector expertise in AI/ML, and NASA scientific problem solving. The FDL hypothesis of improving science results was predicated on three main ideas: faster results could be achieved through sprint methodologies; better results could be achieved through interdisciplinarity; and public-private partnerships could lower costs. We present select results obtained during two summer sessions in 2016 and 2017, where the research was focused on topics in planetary defense, space resources and space weather, and utilized variational autoencoders, Bayesian optimization, and deep learning techniques such as deep, recurrent and residual neural networks. The FDL results demonstrate the power of bridging research disciplines and the potential that AI/ML has for supporting research goals, improving on current methodologies, enabling new discovery and doing so in accelerated timeframes.

  18. Development and Validation of HPLC Method for Determination of Crocetin, a constituent of Saffron, in Human Serum Samples.

    PubMed

    Mohammadpour, Amir Hooshang; Ramezani, Mohammad; Tavakoli Anaraki, Nasim; Malaekeh-Nikouei, Bizhan; Amel Farzad, Sara; Hosseinzadeh, Hossein

    2013-01-01

    The present study reports the development and validation of a sensitive and rapid extraction method alongside a high-performance liquid chromatographic method for the determination of crocetin in human serum. The HPLC method was carried out by using a C18 reversed-phase column and a mobile phase composed of methanol/water/acetic acid (85:14.5:0.5 v/v/v) at the flow rate of 0.8 ml/min. The UV detector was set at 423 nm and 13-cis retinoic acid was used as the internal standard. Serum samples were pretreated with solid-phase extraction using Bond Elut C18 (200 mg) cartridges or with direct precipitation using acetonitrile. The calibration curves were linear over the range of 0.05-1.25 µg/ml for the direct precipitation method and 0.5-5 µg/ml for solid-phase extraction. The mean recoveries of crocetin over a concentration range of 0.05-5 µg/ml serum for the direct precipitation method and 0.5-5 µg/ml for solid-phase extraction were above 70% and 60%, respectively. The intraday coefficients of variation were 0.37-2.6% for the direct precipitation method and 0.64-5.43% for solid-phase extraction. The interday coefficients of variation were 1.69-6.03% for the direct precipitation method and 5.13-12.74% for solid-phase extraction. The lower limit of quantification for crocetin was 0.05 µg/ml for the direct precipitation method and 0.5 µg/ml for solid-phase extraction. The validated direct precipitation method for HPLC satisfied all of the criteria that were necessary for a bioanalytical method and could reliably quantitate crocetin in human serum for future clinical pharmacokinetic studies.

  19. Spectrophotometric Methods for the Determination of Sitagliptin and Vildagliptin in Bulk and Dosage Forms

    PubMed Central

    El-Bagary, Ramzia I.; Elkady, Ehab F.; Ayoub, Bassam M.

    2011-01-01

    Simple, accurate and precise spectrophotometric methods have been developed for the determination of sitagliptin and vildagliptin in bulk and dosage forms. The proposed methods are based on the charge transfer complexes of sitagliptin phosphate and vildagliptin with 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ), 7,7,8,8-tetracyanoquinodimethane (TCNQ) and tetrachloro-1,4-benzoquinone (p-chloranil). All the variables were studied to optimize the reactions conditions. For sitagliptin, Beer’s law was obeyed in the concentration ranges of 50-300 μg/ml, 20-120 μg/ml and 100-900 μg/ml with DDQ, TCNQ and p-chloranil, respectively. For vildagliptin, Beer’s law was obeyed in the concentration ranges of 50-300 μg/ml, 10-85 μg/ml and 50-350 μg/ml with DDQ, TCNQ and p-chloranil, respectively. The developed methods were validated and proved to be specific and accurate for the quality control of the cited drugs in pharmaceutical dosage forms. PMID:23675221

  20. Spectrophotometric and spectrofluorimetric determination of indacaterol maleate in pure form and pharmaceutical preparations: application to content uniformity.

    PubMed

    El-Ashry, S M; El-Wasseef, D R; El-Sherbiny, D T; Salem, Y A

    2015-09-01

    Two simple, rapid, sensitive and precise spectrophotometric and spectrofluorimetric methods were developed for the determination of indacaterol maleate in bulk powder and capsules. Both methods were based on the direct measurement of the drug in methanol. In the spectrophotometric method (Method I) the absorbance was measured at 259 nm. The absorbance-concentration plot was rectilinear over the range 1.0-10.0 µg mL(-1) with a lower detection limit (LOD) of 0.078 µg mL(-1) and lower quantification limit (LOQ) of 0.238 µg mL(-1). Meanwhile, in the spectrofluorimetric method (Method II) the native fluorescence was measured at 358 nm after excitation at 258 nm. The fluorescence-concentration plot was rectilinear over the range of 1.0-40.0 ng mL(-1) with an LOD of 0.075 ng mL(-1) and an LOQ of 0.226 ng mL(-1). The proposed methods were successfully applied to the determination of indacaterol maleate in capsules with average percent recoveries ± RSD% of 99.94 ± 0.96 for Method I and 99.97 ± 0.81 for Method II. In addition, the proposed methods were extended to a content uniformity test according to the United States Pharmacopoeia (USP) guidelines and proved accurate and precise for the capsules studied, with acceptance values of 3.98 for Method I and 2.616 for Method II. Copyright © 2015 John Wiley & Sons, Ltd.

  1. An Event-Triggered Machine Learning Approach for Accelerometer-Based Fall Detection.

    PubMed

    Putra, I Putu Edy Suardiyana; Brusey, James; Gaura, Elena; Vesilo, Rein

    2017-12-22

    The fixed-size non-overlapping sliding window (FNSW) and fixed-size overlapping sliding window (FOSW) approaches are the most commonly used data-segmentation techniques in machine learning-based fall detection using accelerometer sensors. However, these techniques do not segment by fall stages (pre-impact, impact, and post-impact) and thus useful information is lost, which may reduce the detection rate of the classifier. Aligning the segment with the fall stage is difficult, as the segment size varies. We propose an event-triggered machine learning (EvenT-ML) approach that aligns each fall stage so that the characteristic features of the fall stages are more easily recognized. To evaluate our approach, two publicly accessible datasets were used. Classification and regression tree (CART), k -nearest neighbor ( k -NN), logistic regression (LR), and the support vector machine (SVM) were used to train the classifiers. EvenT-ML gives classifier F-scores of 98% for a chest-worn sensor and 92% for a waist-worn sensor, and significantly reduces the computational cost compared with the FNSW- and FOSW-based approaches, with reductions of up to 8-fold and 78-fold, respectively. EvenT-ML achieves a significantly better F-score than existing fall detection approaches. These results indicate that aligning feature segments with fall stages significantly increases the detection rate and reduces the computational cost.
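
    The core of the event-triggered idea — anchoring the analysis windows on the detected impact rather than on a fixed sliding grid — can be sketched as follows (the signal values and window sizes are invented for illustration):

```python
def stage_segments(acc, pre=3, post=3):
    """Event-triggered segmentation sketch: anchor the windows on the sample
    with peak magnitude (the impact) instead of using a fixed sliding grid."""
    peak = max(range(len(acc)), key=lambda i: abs(acc[i]))
    return (acc[max(0, peak - pre):peak],   # pre-impact window
            acc[peak:peak + 1],             # impact sample
            acc[peak + 1:peak + 1 + post])  # post-impact window

# Invented acceleration magnitudes with an obvious impact spike.
signal = [0.1, 0.2, 0.1, 0.3, 9.8, 1.2, 0.4, 0.2]
pre, impact, post = stage_segments(signal)
```

    Features computed per stage (rather than per arbitrary window) are then what the CART, k-NN, LR and SVM classifiers are trained on.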

  2. Semiautomatic three-dimensional CT ventricular volumetry in patients with congenital heart disease: agreement between two methods with different user interaction.

    PubMed

    Goo, Hyun Woo; Park, Sang-Hyub

    2015-12-01

    To assess agreement between two semi-automatic, three-dimensional (3D) computed tomography (CT) ventricular volumetry methods with different user interactions in patients with congenital heart disease. In 30 patients with congenital heart disease (median age 8 years, range 5 days-33 years; 20 men), dual-source, multi-section, electrocardiography-synchronized cardiac CT was obtained at the end-systolic (n = 22) and/or end-diastolic (n = 28) phase. Nineteen left ventricle end-systolic (LV ESV), 28 left ventricle end-diastolic (LV EDV), 22 right ventricle end-systolic (RV ESV), and 28 right ventricle end-diastolic volumes (RV EDV) were successfully calculated using two semi-automatic, 3D segmentation methods with different user interactions (high in method 1, low in method 2). The calculated ventricular volumes of the two methods were compared and correlated. A P value <0.05 was considered statistically significant. LV ESV (35.95 ± 23.49 ml), LV EDV (88.76 ± 61.83 ml), and RV ESV (46.87 ± 47.39 ml) measured by method 2 were slightly but significantly smaller than those measured by method 1 (41.25 ± 26.94 ml, 92.20 ± 62.69 ml, 53.61 ± 50.08 ml for LV ESV, LV EDV, and RV ESV, respectively; P ≤ 0.02). In contrast, no statistically significant difference in RV EDV (122.57 ± 88.57 ml in method 1, 123.83 ± 89.89 ml in method 2; P = 0.36) was found between the two methods. All ventricular volumes showed very high correlation (R = 0.978, 0.993, 0.985, 0.997 for LV ESV, LV EDV, RV ESV, and RV EDV, respectively; P < 0.001) between the two methods. In patients with congenital heart disease, 3D CT ventricular volumetry shows good agreement and high correlation between the two methods, but method 2 tends to slightly underestimate LV ESV, LV EDV, and RV ESV.
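
    The agreement analysis above combines a paired test of the mean difference with a correlation of the two methods' measurements. A minimal sketch of both statistics; the four volume pairs are invented, not the study's data:

```python
import math

def paired_stats(a, b):
    """Mean paired difference with its t statistic, plus Pearson correlation,
    for comparing two measurement methods on the same subjects."""
    n = len(a)
    d = [x - y for x, y in zip(a, b)]
    md = sum(d) / n
    sd = math.sqrt(sum((x - md) ** 2 for x in d) / (n - 1))
    t = md / (sd / math.sqrt(n))
    ma, mb = sum(a) / n, sum(b) / n
    r = (sum((x - ma) * (y - mb) for x, y in zip(a, b))
         / math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)))
    return md, t, r

# Invented ventricular volumes (ml) from a high- and a low-interaction method.
method1 = [41.2, 92.2, 53.6, 30.0]
method2 = [36.0, 88.8, 46.9, 27.5]
md, t, r = paired_stats(method1, method2)
```

    A positive mean difference with a large t indicates a systematic bias (method 2 reading lower), while a high r shows the two methods still track each other closely — the same pattern the abstract reports.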

  3. Optimizing associated liver partition and portal vein ligation for staged hepatectomy outcomes: Surgical experience or appropriate patient selection?

    PubMed Central

    Al Hasan, Ibrahim; Tun-Abraham, Mauro Enrique; Wanis, Kerollos N.; Garcia-Ochoa, Carlos; Levstik, Mark A.; Al-Judaibi, Bandar; Hernandez-Alejandro, Roberto

    2017-01-01

    Background Early reports of associated liver partition and portal vein ligation for staged hepatectomy (ALPPS) outcomes have been suboptimal. The literature has confirmed that learning curves influence surgical outcomes. We have 54 months of continuous experience performing ALPPS with strict selection criteria. This study aimed to evaluate the impact of the learning curve on ALPPS outcomes. Methods We retrospectively compared patients who underwent ALPPS between April 2012 and March 2016. Patients were grouped into two 24-month periods (early and late). All candidates had a high tumour load requiring staged hepatectomy after chemotherapy response, a predicted future liver remnant (FLR) less than 30% and good performance status. Results Thirty-three patients underwent ALPPS during the study period: 16 in the early group (median age 65 yr, mean body mass index [BMI] 27) and 17 in the late group (median age 60 yr, mean BMI 25). Bilobar disease was comparable in both groups (94% v. 88%, p > 0.99). Duration of surgery was not statistically different. Intraoperative blood loss and need for transfusion were significantly lower in the late group (200 ± 109 mL v. 100 ± 43 mL, p < 0.05). The late group had a higher proportion of monosegment ALPPS (4:1). There were no deaths within 90 days in either cohort. Rates of postoperative complications did not differ significantly between groups. The R0 resection rate was similar. For the entire cohort, 1-year disease-free and overall survival were 52% and 84%, respectively. Conclusion Excellent results can be obtained in innovative complex surgery with careful patient selection and good technical skills. Additionally, the learning curve brought confidence to perform more complex procedures while maintaining good outcomes. PMID:29173259

  4. [Determination of aluminium in flour foods with photometric method].

    PubMed

    Ma, Lan; Zhao, Xin; Zhou, Shuang; Yang, Dajin

    2012-05-01

    To establish a photometric method for the determination of aluminium in flour foods. After samples were treated with microwave digestion and wet digestion, aluminium in staple flour foods was determined photometrically. The method showed good linearity over the range of 0.25-5.0 µg/ml aluminium (r = 0.9998), with a limit of detection (LOD) of 2.3 ng/ml and a limit of quantitation (LOQ) of 7 ng/ml. This method of determining aluminium in flour foods is simple, rapid and reliable.
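
    Reported LOD and LOQ values like these can be derived from a calibration line via the common ICH formulas LOD = 3.3·s/b and LOQ = 10·s/b, where b is the slope and s the residual standard deviation. A sketch with invented calibration points, not the study's data:

```python
# Invented calibration points: absorbance vs. aluminium concentration (ug/ml).
conc = [0.25, 0.5, 1.0, 2.0, 5.0]
absb = [0.051, 0.099, 0.202, 0.401, 0.998]

n = len(conc)
mx = sum(conc) / n
my = sum(absb) / n
b = (sum((x - mx) * (y - my) for x, y in zip(conc, absb))
     / sum((x - mx) ** 2 for x in conc))          # calibration slope
a = my - b * mx                                   # intercept
resid = [y - (a + b * x) for x, y in zip(conc, absb)]
s = (sum(r * r for r in resid) / (n - 2)) ** 0.5  # residual std. deviation
lod = 3.3 * s / b   # limit of detection
loq = 10.0 * s / b  # limit of quantitation
```

    The factor 10/3.3 fixes the LOQ at roughly three times the LOD, which matches the ratio between the 7 ng/ml and 2.3 ng/ml figures reported above.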

  5. Acceleration of saddle-point searches with machine learning.

    PubMed

    Peterson, Andrew A

    2016-08-21

    In atomistic simulations, the location of the saddle point on the potential-energy surface (PES) gives important information on transitions between local minima, for example, via transition-state theory. However, the search for saddle points often involves hundreds or thousands of ab initio force calls, which are typically all done at full accuracy. This results in the vast majority of the computational effort being spent calculating the electronic structure of states not important to the researcher, and very little time performing the calculation of the saddle point state itself. In this work, we describe how machine learning (ML) can reduce the number of intermediate ab initio calculations needed to locate saddle points. Since machine-learning models can learn from, and thus mimic, atomistic simulations, the saddle-point search can be conducted rapidly in the machine-learning representation. The saddle-point prediction can then be verified by an ab initio calculation; if it is incorrect, this strategically has identified regions of the PES where the machine-learning representation has insufficient training data. When these training data are used to improve the machine-learning model, the estimates greatly improve. This approach can be systematized, and in two simple example problems we demonstrate a dramatic reduction in the number of ab initio force calls. We expect that this approach and future refinements will greatly accelerate searches for saddle points, as well as other searches on the potential energy surface, as machine-learning methods see greater adoption by the atomistics community.
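
    The verify-and-retrain loop described here can be sketched in one dimension, where the "saddle" is simply the barrier top between two minima. The double-well function and polynomial surrogate below are invented stand-ins for an ab initio PES and a trained ML model:

```python
import numpy as np

def true_energy(x):
    """Stand-in for an expensive ab initio call: a double well with minima
    near x = -1 and x = +1 and its barrier top (the 1-D 'saddle') at x = 0."""
    return (x ** 2 - 1.0) ** 2

# Initial training data: points near the two minima.
X = [-1.5, -1.0, 0.5, 1.0, 1.5]
y = [true_energy(x) for x in X]
for _ in range(10):
    coeffs = np.polyfit(X, y, 4)                 # cheap surrogate of the PES
    grid = np.linspace(-1.0, 1.0, 2001)
    guess = grid[np.argmax(np.polyval(coeffs, grid))]  # barrier top on surrogate
    X.append(guess)                              # one expensive verification call
    y.append(true_energy(guess))
    if abs(guess) < 0.01:                        # converged to the true barrier
        break
```

    Each loop iteration spends exactly one expensive call, and a wrong surrogate prediction automatically becomes new training data where the model was weakest — the key mechanism the abstract describes.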

  6. Acceleration of saddle-point searches with machine learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, Andrew A., E-mail: andrew-peterson@brown.edu

    In atomistic simulations, the location of the saddle point on the potential-energy surface (PES) gives important information on transitions between local minima, for example, via transition-state theory. However, the search for saddle points often involves hundreds or thousands of ab initio force calls, which are typically all done at full accuracy. This results in the vast majority of the computational effort being spent calculating the electronic structure of states not important to the researcher, and very little time performing the calculation of the saddle point state itself. In this work, we describe how machine learning (ML) can reduce the number of intermediate ab initio calculations needed to locate saddle points. Since machine-learning models can learn from, and thus mimic, atomistic simulations, the saddle-point search can be conducted rapidly in the machine-learning representation. The saddle-point prediction can then be verified by an ab initio calculation; if it is incorrect, this strategically has identified regions of the PES where the machine-learning representation has insufficient training data. When these training data are used to improve the machine-learning model, the estimates greatly improve. This approach can be systematized, and in two simple example problems we demonstrate a dramatic reduction in the number of ab initio force calls. We expect that this approach and future refinements will greatly accelerate searches for saddle points, as well as other searches on the potential energy surface, as machine-learning methods see greater adoption by the atomistics community.

  7. Comparison of five methods for extraction of Legionella pneumophila from respiratory specimens.

    PubMed

    Wilson, Deborah; Yen-Lieberman, Belinda; Reischl, Udo; Warshawsky, Ilka; Procop, Gary W

    2004-12-01

    The efficiencies of five commercially available nucleic acid extraction methods were evaluated for the recovery of a standardized inoculum of Legionella pneumophila in respiratory specimens (sputum and bronchoalveolar lavage [BAL] specimens). The concentrations of Legionella DNA recovered from sputa with the automated MagNA Pure (526,200 CFU/ml) and NucliSens (171,800 CFU/ml) extractors were greater than those recovered with the manual methods (i.e., Roche High Pure kit [133,900 CFU/ml], QIAamp DNA Mini kit [46,380 CFU/ml], and ViralXpress kit [13,635 CFU/ml]). The rank order was the same for extracts from BAL specimens, except that for this specimen type the QIAamp DNA Mini kit recovered more than the Roche High Pure kit.

  8. Formal Verification of Complex Systems based on SysML Functional Requirements

    DTIC Science & Technology

    2014-12-23

    Formal Verification of Complex Systems based on SysML Functional Requirements. Hoda Mehrpouyan, Irem Y. Tumer, Chris Hoyle, Dimitra Giannakopoulou ... requirements for design of complex engineered systems. The proposed approach combines a SysML modeling approach to document and structure safety requirements ... methods and tools to support the integration of safety into the design solution. 2.1. SysML for Complex Engineered Systems. Traditional methods and tools

  9. EnzML: multi-label prediction of enzyme classes using InterPro signatures

    PubMed Central

    2012-01-01

    Background Manual annotation of enzymatic functions cannot keep up with automatic genome sequencing. In this work we explore the capacity of InterPro sequence signatures to automatically predict enzymatic function. Results We present EnzML, a multi-label classification method that can also efficiently account for proteins with multiple enzymatic functions (50,000 in UniProt). EnzML was evaluated using a standard set of 300,747 proteins for which the manually curated Swiss-Prot and KEGG databases have agreeing Enzyme Commission (EC) annotations. EnzML achieved more than 98% subset accuracy (exact match of all correct Enzyme Commission classes of a protein) for the entire dataset and between 87 and 97% subset accuracy in reannotating eight entire proteomes: human, mouse, rat, mouse-ear cress, fruit fly, the S. pombe yeast, the E. coli bacterium and the M. jannaschii archaebacterium. To understand the role played by dataset size, we compared the cross-evaluation results of smaller datasets, either constructed at random or from specific taxonomic domains such as archaea, bacteria, fungi, invertebrates, plants and vertebrates. The results were confirmed even when the redundancy in the dataset was reduced using UniRef100, UniRef90 or UniRef50 clusters. Conclusions InterPro signatures are a compact and powerful attribute space for the prediction of enzymatic function. This representation makes multi-label machine learning feasible in reasonable time (30 minutes to train on 300,747 instances with 10,852 attributes and 2,201 class values) using the Mulan Binary Relevance Nearest Neighbours algorithm implementation (BR-kNN). PMID:22533924
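
    The Binary Relevance k-NN idea named above — rank the training items by distance once, then decide every label independently by majority vote — can be sketched with invented signature vectors and EC-like labels (this is an illustration, not the Mulan implementation):

```python
def br_knn(train_X, train_Y, x, k=3):
    """Binary Relevance k-NN: rank training items by distance to x once, then
    decide each label independently by majority vote among the k nearest."""
    order = sorted(range(len(train_X)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    neigh = order[:k]
    all_labels = {lab for ys in train_Y for lab in ys}
    return {lab for lab in all_labels
            if sum(lab in train_Y[i] for i in neigh) * 2 > k}

# Toy binary "signature" vectors (4 hypothetical InterPro-like attributes)
# with one or more EC-like labels per protein.
X = [[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 1], [0, 1, 1, 1], [1, 1, 1, 0]]
Y = [{"EC1"}, {"EC1"}, {"EC2"}, {"EC2", "EC3"}, {"EC1", "EC3"}]
pred = br_knn(X, Y, [1, 1, 0, 1], k=3)
```

    Because the neighbour ranking is computed once and reused for every label, the per-label cost is a simple vote, which is what keeps multi-label training tractable at the scale reported above.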

  10. Courseware Development Model (CDM): The Effects of CDM on Primary School Pre-Service Teachers' Achievements and Attitudes

    ERIC Educational Resources Information Center

    Efendioglu, Akin

    2012-01-01

    The main purpose of this study is to design a "Courseware Development Model" (CDM) and investigate its effects on pre-service teachers' academic achievements in the field of geography and attitudes toward computer-based education (ATCBE). The CDM consisted of three components: content (C), learning theory, namely, meaningful learning (ML), and…

  11. Combining Human and Machine Learning to Map Cropland in the 21st Century's Major Agricultural Frontier

    NASA Astrophysics Data System (ADS)

    Estes, L. D.; Debats, S. R.; Caylor, K. K.; Evans, T. P.; Gower, D.; McRitchie, D.; Searchinger, T.; Thompson, D. R.; Wood, E. F.; Zeng, L.

    2016-12-01

    In the coming decades, large areas of new cropland will be created to meet the world's rapidly growing food demands. Much of this new cropland will be in sub-Saharan Africa, where food needs will increase most and the area of remaining potential farmland is greatest. If we are to understand the impacts of global change, it is critical to accurately identify Africa's existing croplands and how they are changing. Yet the continent's smallholder-dominated agricultural systems are unusually challenging for remote sensing analyses, making accurate area estimates difficult to obtain, let alone important details related to field size and geometry. Fortunately, the rapidly growing archives of moderate to high-resolution satellite imagery hosted on open servers now offer an unprecedented opportunity to improve landcover maps. We present a system that integrates two critical components needed to capitalize on this opportunity: 1) human image interpretation and 2) machine learning (ML). Human judgment is needed to accurately delineate training sites within noisy imagery and a highly variable cover type, while ML provides the ability to scale and to interpret large feature spaces that defy human comprehension. Because large amounts of training data are needed (a major impediment for analysts), we use a crowdsourcing platform that connects amazon.com's Mechanical Turk service to satellite imagery hosted on open image servers. Workers map visible fields at pre-assigned sites, and are paid according to their mapping accuracy. Initial tests show overall high map accuracy and mapping rates >1800 km2/hour. The ML classifier uses random forests and randomized quasi-exhaustive feature selection, and is highly effective in classifying diverse agricultural types in southern Africa (AUC > 0.9). We connect the ML and crowdsourcing components to make an interactive learning framework. 
The ML algorithm performs an initial classification using a first batch of crowd-sourced maps, using thresholds of posterior probabilities to segregate sub-images classified with high or low confidence. Workers are then directed to collect new training data in low confidence sub-images, after which classification is repeated and re-assessed, and the entire process iterated until maximum possible accuracy is realized.
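
The iterative loop described above can be sketched minimally as follows, with hypothetical feature arrays standing in for the real imagery; the `active_learning_round` helper, random forest settings, and the 0.8 posterior-probability threshold are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def active_learning_round(X_labeled, y_labeled, X_unlabeled, threshold=0.8):
    """Train on the current crowd-sourced labels, then flag low-confidence
    samples (sub-images) that should be routed back to workers."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_labeled, y_labeled)
    # Posterior probability of the most likely class for each unlabeled sample
    confidence = clf.predict_proba(X_unlabeled).max(axis=1)
    low_conf_idx = np.where(confidence < threshold)[0]
    return clf, low_conf_idx
```

Each round would retrain on the enlarged label set and repeat until accuracy stops improving, mirroring the iteration described in the abstract.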

  12. Improving Arterial Spin Labeling by Using Deep Learning.

    PubMed

    Kim, Ki Hwan; Choi, Seung Hong; Park, Sung-Hong

    2018-05-01

    Purpose To develop a deep learning algorithm that generates arterial spin labeling (ASL) perfusion images with higher accuracy and robustness by using a smaller number of subtraction images. Materials and Methods For ASL image generation from pair-wise subtraction, we used a convolutional neural network (CNN) as a deep learning algorithm. The ground truth perfusion images were generated by averaging six or seven pairwise subtraction images acquired with (a) conventional pseudocontinuous arterial spin labeling from seven healthy subjects or (b) Hadamard-encoded pseudocontinuous ASL from 114 patients with various diseases. CNNs were trained to generate perfusion images from a smaller number (two or three) of subtraction images and evaluated by means of cross-validation. CNNs from the patient data sets were also tested on 26 separate stroke data sets. CNNs were compared with the conventional averaging method in terms of mean square error and radiologic score by using a paired t test and/or Wilcoxon signed-rank test. Results Mean square errors were approximately 40% lower than those of the conventional averaging method for the cross-validation with the healthy subjects and patients and the separate test with the patients who had experienced a stroke (P < .001). Region-of-interest analysis in stroke regions showed that cerebral blood flow maps from CNN (mean ± standard deviation, 19.7 mL per 100 g/min ± 9.7) had smaller mean square errors than those determined with the conventional averaging method (43.2 ± 29.8) (P < .001). Radiologic scoring demonstrated that CNNs suppressed noise and motion and/or segmentation artifacts better than the conventional averaging method did (P < .001). Conclusion CNNs provided superior perfusion image quality and more accurate perfusion measurement compared with those of the conventional averaging method for generation of ASL images from pair-wise subtraction images. © RSNA, 2017.

  13. Development and validation of simple spectrophotometric and chemometric methods for simultaneous determination of empagliflozin and metformin: Applied to recently approved pharmaceutical formulation

    NASA Astrophysics Data System (ADS)

    Ayoub, Bassam M.

    2016-11-01

A new univariate spectrophotometric method and a multivariate chemometric approach were developed and compared for the simultaneous determination of empagliflozin and metformin, manipulating their zero-order absorption spectra, with application to their pharmaceutical preparation. A sample enrichment technique was used to increase the concentration of empagliflozin after extraction from tablets, allowing its simultaneous determination with metformin without prior separation. Validation parameters according to ICH guidelines were satisfactory over the concentration range of 2-12 μg mL(-1) for both drugs using the simultaneous equation method, with LOD values of 0.20 μg mL(-1) and 0.19 μg mL(-1) and LOQ values of 0.59 μg mL(-1) and 0.58 μg mL(-1) for empagliflozin and metformin, respectively. The optimum results for the chemometric approach, using the partial least squares method (PLS-2), were obtained over the concentration range of 2-10 μg mL(-1). The optimized, validated methods are suitable for quality control laboratories, enabling fast and economical determination of the recently approved pharmaceutical combination Synjardy® tablets.

  14. Machine learning approaches to diagnosis and laterality effects in semantic dementia discourse.

    PubMed

    Garrard, Peter; Rentoumi, Vassiliki; Gesierich, Benno; Miller, Bruce; Gorno-Tempini, Maria Luisa

    2014-06-01

Advances in automatic text classification have been necessitated by the rapid increase in the availability of digital documents. Machine learning (ML) algorithms can 'learn' from data: for instance, an ML system can be trained on a set of features derived from written texts belonging to known categories, and learn to distinguish between them. Such a trained system can then be used to classify unseen texts. In this paper, we explore the potential of the technique to classify transcribed speech samples along clinical dimensions, using vocabulary data alone. We report the accuracy with which two related ML algorithms [naive Bayes Gaussian (NBG) and naive Bayes multinomial (NBM)] categorized picture descriptions produced by: 32 semantic dementia (SD) patients versus 10 healthy, age-matched controls; and SD patients with left- (n = 21) versus right-predominant (n = 11) patterns of temporal lobe atrophy. We used information gain (IG) to identify the vocabulary features that were most informative to each of these two distinctions. In the SD versus control classification task, both algorithms achieved accuracies of greater than 90%. In the right- versus left-temporal lobe predominant classification, NBM achieved a high level of accuracy (88%), a level that both NBM and NBG reached when the features used in the training set were restricted to those with high IG values. The most informative features for the patient versus control task were low-frequency content words, generic terms and components of metanarrative statements. For the right versus left task the number of informative lexical features was too small to support any specific inferences. An enriched feature set, including values derived from Quantitative Production Analysis (QPA), may shed further light on this little-understood distinction. Copyright © 2013 Elsevier Ltd. All rights reserved.
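
The pipeline described above (bag-of-words vocabulary features, IG-based feature ranking, naive Bayes classification) can be sketched as follows; the toy texts, labels, and the use of scikit-learn's mutual information as a stand-in for information gain are assumptions for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["the dog sits by the tree", "a thing is near the thing",
         "the cat climbs the fence", "stuff is next to the stuff"]
labels = [0, 1, 0, 1]  # toy labels: 0 = control-like, 1 = patient-like

pipeline = make_pipeline(
    CountVectorizer(),                      # vocabulary (word count) features
    SelectKBest(mutual_info_classif, k=5),  # keep the most informative words
    MultinomialNB(),                        # the NBM classifier
)
pipeline.fit(texts, labels)
print(pipeline.predict(["the thing is by the stuff"]))
```

A trained pipeline of this shape can then classify unseen transcripts, as the abstract describes.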

  15. Machine learning and microsimulation techniques on the prognosis of dementia: A systematic literature review.

    PubMed

    Dallora, Ana Luiza; Eivazzadeh, Shahryar; Mendes, Emilia; Berglund, Johan; Anderberg, Peter

    2017-01-01

Dementia is a complex disorder characterized by poor outcomes for the patients and high costs of care. After decades of research, little is known about its mechanisms. Having prognostic estimates about dementia can help researchers, patients and public entities in dealing with this disorder. Thus, health data, machine learning and microsimulation techniques could be employed in developing prognostic estimates for dementia. The goal of this paper is to present evidence on the state of the art of studies investigating the prognosis of dementia using machine learning and microsimulation techniques. To achieve our goal we carried out a systematic literature review, in which three large databases (PubMed, Scopus and Web of Science) were searched to select studies that employed machine learning or microsimulation techniques for the prognosis of dementia. A single round of backward snowballing was done to identify further studies. A quality checklist was also employed to assess the quality of the evidence presented by the selected studies, and low-quality studies were removed. Finally, data from the final set of studies were extracted into summary tables. In total, 37 papers were included. The data summary results showed that the current research is focused on the investigation of patients with mild cognitive impairment (MCI) who will progress to Alzheimer's disease (AD), using machine learning techniques. Microsimulation studies were concerned with cost estimation and had a population-level focus. Neuroimaging was the most commonly used variable. Prediction of conversion from MCI to AD is the dominant theme in the selected studies. Most studies used ML techniques on neuroimaging data. Only a few data sources have been recruited by most studies, and the ADNI database is the one most commonly used. Only two studies have investigated the prediction of epidemiological aspects of dementia using either ML or microsimulation techniques.
Finally, care should be taken when interpreting the reported accuracy of ML techniques, given studies' different contexts.

  16. The learning curve of laparoscopic liver resection after the Louisville statement 2008: Will it be more effective and smooth?

    PubMed

    Lin, Chung-Wei; Tsai, Tzu-Jung; Cheng, Tsung-Yen; Wei, Hung-Kuang; Hung, Chen-Fang; Chen, Yin-Yin; Chen, Chii-Ming

    2016-07-01

    Laparoscopic liver resection (LLR) has been proven to be feasible and safe. However, it is a difficult and complex procedure with a steep learning curve. The aim of this study was to evaluate the learning curve of LLR at our institutions since 2008. One hundred and twenty-six consecutive LLRs were included from May 2008 to December 2014. Patient characteristics, operative data, and surgical outcomes were collected prospectively and analyzed. The median tumor size was 25 mm (range 5-90 mm), and 96 % of the resected tumors were malignant. 41.3 % (52/126) of patients had pathologically proven liver cirrhosis. The median operation time was 216 min (range 40-602 min) with a median blood loss of 100 ml (range 20-2300 ml). The median length of hospital stay was 4 days (range 2-10 days). Six major postoperative complications occurred in this series, and there was no 90-day postoperative mortality. Regarding the incidence of major operative events including operation time longer than 300 min, perioperative blood loss above 500 ml, and major postoperative complications, the learning curve [as evaluated by the cumulative sum (CUSUM) technique] showed its first reverse after 22 cases. The indication of laparoscopic resection in this series extended after 60 cases to include tumors located in difficult locations (segments 4a, 7, 8) and major hepatectomy. CUSUM showed that the incidence of major operative events proceeded to increase again, and the second reverse was noted after an additional 40 cases of experience. Location of the tumor in a difficult area emerged as a significant predictor of major operative events. In carefully selected patients, CUSUM analysis showed 22 cases were needed to overcome the learning curve for minor LLR.
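
The CUSUM technique used above can be illustrated with a toy calculation; the event series and the 0.3 baseline rate here are invented, not the study's data:

```python
def cusum(events, baseline_rate):
    """Cumulative sum of (event - baseline_rate): the curve rises while the
    observed event rate exceeds the baseline and reverses once it falls below."""
    total, curve = 0.0, []
    for e in events:
        total += e - baseline_rate
        curve.append(total)
    return curve

# Invented series: frequent major operative events early, fewer with experience
events = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
print(cusum(events, baseline_rate=0.3))
```

The "reverse" reported in the study corresponds to the case index at which a curve like this one turns downward.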

  17. Derivative spectrophotometric method for simultaneous determination of zofenopril and fluvastatin in mixtures and pharmaceutical dosage forms.

    PubMed

    Stolarczyk, Mariusz; Maślanka, Anna; Apola, Anna; Rybak, Wojciech; Krzek, Jan

    2015-09-05

A fast, accurate and precise method for the determination of zofenopril and fluvastatin was developed using spectrophotometry of the first (D1), second (D2), and third (D3) order derivatives in two-component mixtures and in pharmaceutical preparations. It was shown that the developed method allows for the determination of the tested components in a direct manner, despite the apparent interference of the absorption spectra in the UV range. For quantitative determinations, the "zero-crossing" method was chosen; the appropriate wavelengths for zofenopril were: D1 λ=270.85 nm, D2 λ=286.38 nm, D3 λ=253.90 nm. Fluvastatin was determined at the wavelengths D1 λ=339.03 nm, D2 λ=252.57 nm, and D3 λ=258.50 nm, respectively. The method was characterized by high sensitivity and accuracy: depending on the order of the derivative, the LOD was in the range of 0.19-0.87 μg mL(-1) for zofenopril and 0.51-1.18 μg mL(-1) for fluvastatin, and the LOQ was 0.57-2.64 μg mL(-1) and 1.56-3.57 μg mL(-1), respectively. The recovery of individual components was within the range of 100±5%. For zofenopril, the linearity range was estimated between 7.65 μg mL(-1) and 22.94 μg mL(-1), and for fluvastatin between 5.60 μg mL(-1) and 28.00 μg mL(-1). Copyright © 2015 Elsevier B.V. All rights reserved.

  18. The effect of machine learning regression algorithms and sample size on individualized behavioral prediction with functional connectivity features.

    PubMed

    Cui, Zaixu; Gong, Gaolang

    2018-06-02

    Individualized behavioral/cognitive prediction using machine learning (ML) regression approaches is becoming increasingly applied. The specific ML regression algorithm and sample size are two key factors that non-trivially influence prediction accuracies. However, the effects of the ML regression algorithm and sample size on individualized behavioral/cognitive prediction performance have not been comprehensively assessed. To address this issue, the present study included six commonly used ML regression algorithms: ordinary least squares (OLS) regression, least absolute shrinkage and selection operator (LASSO) regression, ridge regression, elastic-net regression, linear support vector regression (LSVR), and relevance vector regression (RVR), to perform specific behavioral/cognitive predictions based on different sample sizes. Specifically, the publicly available resting-state functional MRI (rs-fMRI) dataset from the Human Connectome Project (HCP) was used, and whole-brain resting-state functional connectivity (rsFC) or rsFC strength (rsFCS) were extracted as prediction features. Twenty-five sample sizes (ranged from 20 to 700) were studied by sub-sampling from the entire HCP cohort. The analyses showed that rsFC-based LASSO regression performed remarkably worse than the other algorithms, and rsFCS-based OLS regression performed markedly worse than the other algorithms. Regardless of the algorithm and feature type, both the prediction accuracy and its stability exponentially increased with increasing sample size. The specific patterns of the observed algorithm and sample size effects were well replicated in the prediction using re-testing fMRI data, data processed by different imaging preprocessing schemes, and different behavioral/cognitive scores, thus indicating excellent robustness/generalization of the effects. 
The current findings provide critical insight into how the selected ML regression algorithm and sample size influence individualized predictions of behavior/cognition and offer important guidance for choosing the ML regression algorithm or sample size in relevant investigations. Copyright © 2018 Elsevier Inc. All rights reserved.
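
This kind of algorithm-by-sample-size comparison can be roughly sketched as below; the synthetic features stand in for the rsFC data, and the regularization settings are arbitrary illustrative choices:

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso, LinearRegression, Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X_full = rng.randn(700, 50)                     # stand-in for rsFC features
y_full = X_full[:, :5].sum(axis=1) + rng.randn(700)  # synthetic behavioral score

def mean_cv_r2(model, n):
    """Cross-validated R^2 for one algorithm at sub-sample size n."""
    return cross_val_score(model, X_full[:n], y_full[:n],
                           cv=5, scoring="r2").mean()

models = {"OLS": LinearRegression(), "LASSO": Lasso(alpha=0.1),
          "ridge": Ridge(alpha=1.0), "elastic-net": ElasticNet(alpha=0.1)}
for n in (50, 200, 700):                        # sub-sample sizes
    for name, model in models.items():
        print(f"n={n:3d}  {name:11s} mean R^2 = {mean_cv_r2(model, n):.3f}")
```

Even on synthetic data, accuracy generally improves with sample size, echoing the study's main effect.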

  19. [Evaluation of in vitro antimicrobial activity of cefazolin alone and in combination with cefmetazole or flomoxef using agar dilution method and disk diffusion method].

    PubMed

    Matsuo, K; Uete, T

    1992-10-01

Antimicrobial activities of cefazolin (CEZ) against 251 strains of various clinical isolates obtained during 1989 and 1990 were determined using the Mueller-Hinton agar dilution method at an inoculum level of 10(6) CFU/ml. The reliability of the disk susceptibility test in estimating approximate MIC values was also studied using Mueller-Hinton agar and various disks at inoculum levels of 10(3-4) CFU/cm2. In addition, antimicrobial activities of CEZ and cefmetazole (CMZ) or flomoxef (FMOX) in combination were investigated against methicillin-sensitive and -resistant Staphylococcus aureus (MSSA and MRSA) using the checkerboard agar dilution MIC method and the disk diffusion test, either with disks containing CEZ, CMZ, and FMOX alone, or CEZ and CMZ or FMOX in combination. In this study, the MICs of CEZ against S. aureus were distributed with 3 peak values at 0.39 microgram/ml, 3.13 micrograms/ml and > 100 micrograms/ml. MICs against MSSA were 0.39 microgram/ml to 0.78 microgram/ml, whereas those against MRSA were greater than 0.78 microgram/ml. MICs against the majority of strains of Enterococcus faecalis were 25 micrograms/ml. Over 90% of strains of Escherichia coli and Klebsiella pneumoniae were inhibited at the level of 3.13 micrograms/ml. About 60% of isolates of indole-negative Proteus spp. were inhibited at levels of less than 3.13 micrograms/ml and 100% at 6.25 micrograms/ml, but MICs against indole-positive Proteus spp., Serratia spp. and Pseudomonas aeruginosa were over 100 micrograms/ml. The antimicrobial activities of CEZ against these clinical isolates were not significantly different from those reported about 15-20 years ago, except for S. aureus: strains of S. aureus highly resistant to CEZ were more prevalent in this study. The inhibitory zones obtained with the disk test were compared with MICs. The results of the CEZ disk susceptibility test with the 30 micrograms disk (Showa) or the 10 micrograms disk (prepared in this laboratory) were well correlated with MICs (r = -0.837 and -0.814, respectively), showing the reliability of the disk method in estimating approximate MIC values. In the 4-category classification system currently used in Japan, the proposed MIC break points are () MIC < or = 3 micrograms/ml, (++) > 3-15 micrograms/ml, (+) > 15-60 micrograms/ml, (-) > 60 micrograms/ml. The results obtained with 30 micrograms disks showed false positives in 7.7% and false negatives in 6.8% of the samples. The disk results with E. faecalis showed a higher ratio of false positive results. (ABSTRACT TRUNCATED AT 400 WORDS)

  20. Immunocytometric quantitation of foeto-maternal haemorrhage with the Abbott Cell-Dyn CD4000 haematology analyser.

    PubMed

    Little, B H; Robson, R; Roemer, B; Scott, C S

    2005-02-01

This study evaluated the extended use of a haematology analyser (Abbott Cell-Dyn CD4000) for the immunofluorescent enumeration of foeto-maternal haemorrhage (FMH) with fluorescein isothiocyanate-labelled monoclonal anti-RhD. Method performance was assessed with artificial FMH standards and a series of 44 clinical samples. Within-run precision was <15% (coefficient of variation, CV) for FMH volumes of 3 ml and above, 18.8% at an FMH volume of 2 ml and 31.7% at an FMH volume of 1 ml. Linearity analysis showed excellent agreement (observed FMH% = 0.98 × expected FMH% + 0.02), and a close relationship (R(2) = 0.99) between observed and expected FMH percentages. The lower limit of quantification of the CD4000 (SRP-Ret) method with a maximum CV of 15% was 1.6 ml, and the limit of detection was <1 ml. Parallel Kleihauer-Betke test (KBT) assessments of FMH standards showed an overall trend for higher KBT values (observed = 1.25 × expected - 0.38). At an FMH level of 4 ml, KBT observer estimates ranged from 0.57 to 11.94 ml with a mean inter-observer CV of 63%. For 44 clinical samples, there was decision point agreement between KBT and SRP-Ret results for 42 samples with an FMH of <2 ml. Analysis in the low FMH range (<1 ml) showed that small-volume foetal leaks could be detected with the SRP-Ret method in most of the 23 samples with negative KBT results. CD4000 SRP-Ret method performance for FMH determination was similar to that reported for flow cytometry.

  1. Uncertainty associated with assessing semen volume: are volumetric and gravimetric methods that different?

    PubMed

    Woodward, Bryan; Gossen, Nicole; Meadows, Jessica; Tomlinson, Mathew

    2016-12-01

The World Health Organization laboratory manual for the examination of human semen suggests that an indirect measurement of semen volume by weighing (gravimetric method) is more accurate than a direct measure using a serological pipette. A series of experiments was performed to determine the level of discrepancy between the two methods using pipettes and a balance which had been calibrated to a traceable standard. The median weights of 1.0 ml and 5.0 ml of semen were 1.03 g (range 1.02-1.05 g) and 5.11 g (range 4.95-5.16 g), respectively, suggesting a semen density between 1.03 and 1.04 g/ml. When the containers were re-weighed after the removal of 5.0 ml semen using a serological pipette, the mean residual loss was 0.12 ml (120 μl) or 0.12 g (median 100 μl, range 70-300 μl). Direct comparison of the volumetric and gravimetric methods in a total of 40 samples showed a mean difference of 0.25 ml (median 0.32 ± 0.67 ml), representing an error of 8.5%. Residual semen left in the container by weight was on average 0.11 g (median 0.10 g, range 0.05-0.19 g). Assuming a density of 1 g/ml, the average error between volumetric and gravimetric methods was approximately 8% (p < 0.001). If, however, the WHO value for density is assumed (1.04 g/ml), then the difference is reduced to 4.2%. At least 2.4-3.5% of this difference is also explained by the residual semen remaining in the container. This study suggests that by assuming the density of semen to be 1 g/ml, there is significant uncertainty associated with the average gravimetric measurement of semen volume. Laboratories may therefore prefer to provide in-house quality assurance data in order to be satisfied that 'estimating' semen volume is 'fit for purpose', as opposed to assuming the lower uncertainty associated with the WHO recommended method.
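
The effect of the assumed density can be checked with a one-line conversion; the 5.11 g figure is the median weight reported above:

```python
def weight_to_volume(weight_g, density_g_per_ml):
    """Gravimetric volume estimate: measured weight divided by assumed density."""
    return weight_g / density_g_per_ml

weight = 5.11                                  # median weight of a nominal 5.0 ml sample
v_unit_density = weight_to_volume(weight, 1.00)  # density assumed as 1 g/ml
v_who = weight_to_volume(weight, 1.04)           # WHO-recommended density
print(round(v_unit_density, 2), round(v_who, 2))  # prints 5.11 4.91
```

The roughly 0.2 ml gap between the two estimates is the density-dependent part of the discrepancy the study quantifies.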

  2. An IoT-Enabled Stroke Rehabilitation System Based on Smart Wearable Armband and Machine Learning.

    PubMed

    Yang, Geng; Deng, Jia; Pang, Gaoyang; Zhang, Hao; Li, Jiayi; Deng, Bin; Pang, Zhibo; Xu, Juan; Jiang, Mingzhe; Liljeberg, Pasi; Xie, Haibo; Yang, Huayong

    2018-01-01

Surface electromyography signals play an important role in hand function recovery training. In this paper, an IoT-enabled stroke rehabilitation system was introduced, based on a smart wearable armband (SWA), machine learning (ML) algorithms, and a 3-D printed dexterous robot hand. User comfort is one of the key issues which should be addressed for wearable devices. The SWA was developed by integrating a low-power and tiny-sized IoT sensing device with textile electrodes, which can measure, pre-process, and wirelessly transmit bio-potential signals. By evenly distributing surface electrodes over the user's forearm, the drawbacks of poor classification accuracy can be mitigated. A new method was put forward to find the optimal feature set. ML algorithms were leveraged to analyze and discriminate features of different hand movements, and their performances were appraised by classification complexity estimating algorithms and principal component analysis. According to the verification results, all nine gestures can be successfully identified with an average accuracy of up to 96.20%. In addition, a 3-D printed five-finger robot hand was implemented for hand rehabilitation training purposes. Correspondingly, the user's hand movement intentions were extracted and converted into a series of commands used to drive motors assembled inside the dexterous robot hand. As a result, the dexterous robot hand can mimic the user's gestures in a real-time manner, which shows that the proposed system can be used as a training tool to facilitate the rehabilitation process for patients after stroke.
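
A simplified sketch of the gesture-recognition stage: windowed multi-channel sEMG reduced to per-channel RMS amplitude features, then classified. The synthetic signals, channel gain patterns, and the LDA classifier are illustrative assumptions, not the paper's exact method:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.RandomState(1)

def rms_features(window):
    """Root-mean-square amplitude per electrode channel."""
    return np.sqrt((window ** 2).mean(axis=0))

def make_window(gesture):
    # Each synthetic gesture activates the 8 channels with a different gain mix
    gains = np.roll([3.0, 2.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0], gesture)
    return rng.randn(200, 8) * gains            # 200-sample, 8-channel window

X = np.array([rms_features(make_window(g)) for g in range(3) for _ in range(30)])
y = np.array([g for g in range(3) for _ in range(30)])

clf = LinearDiscriminantAnalysis().fit(X, y)
print("predicted gesture:", clf.predict([rms_features(make_window(1))])[0])
```

In a real system, each prediction would be mapped to a motor command for the robot hand, as the abstract describes.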

  3. Classifying bent radio galaxies from a mixture of point-like/extended images with Machine Learning.

    NASA Astrophysics Data System (ADS)

    Bastien, David; Oozeer, Nadeem; Somanah, Radhakrishna

    2017-05-01

The hypothesis that bent radio sources are found in rich, massive galaxy clusters, together with the availability of huge amounts of data from radio surveys, fueled our motivation to use Machine Learning (ML) to identify bent radio sources and use them as tracers for galaxy clusters. Shapelet analysis allowed us to decompose radio images into 256 features that could be fed into the ML algorithm. Additionally, ideas from the field of neuropsychology led us to train the machine to identify bent galaxies at different orientations. From our analysis, we found that the Random Forest algorithm was the most effective, with an accuracy of 92% for the classification of point versus extended sources and an accuracy of 80% for bent versus unbent classification.

  4. Perturbation Theory/Machine Learning Model of ChEMBL Data for Dopamine Targets: Docking, Synthesis, and Assay of New l-Prolyl-l-leucyl-glycinamide Peptidomimetics.

    PubMed

    Ferreira da Costa, Joana; Silva, David; Caamaño, Olga; Brea, José M; Loza, Maria Isabel; Munteanu, Cristian R; Pazos, Alejandro; García-Mera, Xerardo; González-Díaz, Humbert

    2018-06-25

    Predicting drug-protein interactions (DPIs) for target proteins involved in dopamine pathways is a very important goal in medicinal chemistry. We can tackle this problem using Molecular Docking or Machine Learning (ML) models for one specific protein. Unfortunately, these models fail to account for large and complex big data sets of preclinical assays reported in public databases. This includes multiple conditions of assays, such as different experimental parameters, biological assays, target proteins, cell lines, organism of the target, or organism of assay. On the other hand, perturbation theory (PT) models allow us to predict the properties of a query compound or molecular system in experimental assays with multiple boundary conditions based on a previously known case of reference. In this work, we report the first PTML (PT + ML) study of a large ChEMBL data set of preclinical assays of compounds targeting dopamine pathway proteins. The best PTML model found predicts 50000 cases with accuracy of 70-91% in training and external validation series. We also compared the linear PTML model with alternative PTML models trained with multiple nonlinear methods (artificial neural network (ANN), Random Forest, Deep Learning, etc.). Some of the nonlinear methods outperform the linear model but at the cost of a notable increment of the complexity of the model. We illustrated the practical use of the new model with a proof-of-concept theoretical-experimental study. We reported for the first time the organic synthesis, chemical characterization, and pharmacological assay of a new series of l-prolyl-l-leucyl-glycinamide (PLG) peptidomimetic compounds. In addition, we performed a molecular docking study for some of these compounds with the software Vina AutoDock. The work ends with a PTML model predictive study of the outcomes of the new compounds in a large number of assays. 
Therefore, this study offers a new computational methodology for predicting the outcome for any compound in new assays. This PTML method focuses on the prediction, with a simple linear model, of multiple pharmacological parameters (IC50, EC50, Ki, etc.) for compounds in assays involving different cell lines, organisms of the protein target, or organisms of assay for proteins in the dopamine pathway.

  5. BindML/BindML+: Detecting Protein-Protein Interaction Interface Propensity from Amino Acid Substitution Patterns.

    PubMed

    Wei, Qing; La, David; Kihara, Daisuke

    2017-01-01

Prediction of protein-protein interaction sites in a protein structure provides important information for elucidating the mechanism of protein function and can also be useful in guiding modeling or design procedures for protein complex structures. Since prediction methods essentially assess the propensity of amino acids that are likely to be part of a protein docking interface, they can help in designing protein-protein interactions. Here, we introduce the BindML and BindML+ protein-protein interaction site prediction methods. BindML predicts protein-protein interaction sites by identifying mutation patterns found in known protein-protein complexes using phylogenetic substitution models. BindML+ is an extension of BindML for distinguishing permanent and transient types of protein-protein interaction sites. We developed an interactive web-server that provides a convenient interface to assist in structural visualization of protein-protein interaction site predictions. The input to the web-server is a tertiary structure of interest. BindML and BindML+ are available at http://kiharalab.org/bindml/ and http://kiharalab.org/bindml/plus/.

  6. Humans and Autonomy: Implications of Shared Decision Making for Military Operations

    DTIC Science & Technology

    2017-01-01

    and machine learning transparency are identified as future research opportunities. 15. SUBJECT TERMS autonomy, human factors, intelligent agents...network as either the mission changes or an agent becomes disabled (DSB 2012). Fig. 2 Control structures for human agent teams. Robots without tools... learning (ML) algorithms monitor progress. However, operators have final executive authority; they are able to tweak the plan or choose an option

  7. Modeling an aquatic ecosystem: application of an evolutionary algorithm with genetic doping to reduce prediction uncertainty

    NASA Astrophysics Data System (ADS)

    Friedel, Michael; Buscema, Massimo

    2016-04-01

Aquatic ecosystem models can potentially be used to understand the influence of stresses on catchment resource quality. Given that catchment responses are functions of natural and anthropogenic stresses reflected in sparse and spatiotemporal biological, physical, and chemical measurements, an ecosystem is difficult to model using statistical or numerical methods. We propose an artificial adaptive systems approach to model ecosystems. First, an unsupervised machine-learning (ML) network is trained using the set of available sparse and disparate data variables. Second, an evolutionary algorithm with genetic doping is applied to reduce the number of ecosystem variables to an optimal set. Third, the optimal set of ecosystem variables is used to retrain the ML network. Fourth, a stochastic cross-validation approach is applied to quantify and compare the nonlinear uncertainty in selected predictions of the original and reduced models. Results are presented for aquatic ecosystems (tens of thousands of square kilometers) undergoing landscape change in the USA (Upper Illinois River Basin; Central Colorado Assessment Project Area) and in the Southland region of New Zealand.

  8. Machine Learning-based discovery of closures for reduced models of dynamical systems

    NASA Astrophysics Data System (ADS)

    Pan, Shaowu; Duraisamy, Karthik

    2017-11-01

Despite the successful application of machine learning (ML) in fields such as image processing and speech recognition, only a few attempts have been made toward employing ML to represent the dynamics of complex physical systems. Previous attempts mostly focused on parameter calibration or data-driven augmentation of existing models. In this work we present an ML framework to discover closure terms in reduced models of dynamical systems and provide insights into potential problems associated with data-driven modeling. Based on exact closure models for linear systems, we propose a general linear closure framework from the viewpoint of optimization. The framework is based on a trapezoidal approximation of the convolution term. Hyperparameters that need to be determined include the temporal length of the memory effect, the number of sampling points, and the dimensions of the hidden states. To circumvent the explicit specification of the memory effect, a general framework inspired by neural networks is also proposed. We conduct both a priori and a posteriori evaluations of the resulting model on a number of non-linear dynamical systems. This work was supported in part by AFOSR under the project ``LES Modeling of Non-local effects using Statistical Coarse-graining'' with Dr. Jean-Luc Cambier as the technical monitor.
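
The trapezoidal treatment of the convolution (memory) term can be sketched as follows; the exponential kernel and the constant history are hypothetical choices for illustration, not the paper's closure model:

```python
import numpy as np

def memory_term(kernel, history, dt):
    """Trapezoidal-rule approximation of the closure convolution:
    the integral of K(tau) * u(t - tau) over a finite memory window."""
    integrand = kernel * history[::-1]   # align u(t - tau) with K(tau)
    return dt * (integrand[0] / 2 + integrand[1:-1].sum() + integrand[-1] / 2)

dt, n = 0.01, 100                        # memory window of length (n - 1) * dt
tau = np.arange(n) * dt
kernel = np.exp(-5.0 * tau)              # assumed exponentially decaying kernel
history = np.ones(n)                     # constant resolved-variable history
approx = memory_term(kernel, history, dt)
exact = (1 - np.exp(-5.0 * tau[-1])) / 5.0   # closed form for this toy case
print(approx, exact)
```

The window length and number of sampling points here are exactly the kind of hyperparameters the abstract says must be determined.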

  9. Simultaneous Estimation of Amlodipine Besilate and Olmesartan Medoxomil in Pharmaceutical Dosage Form

    PubMed Central

    Wankhede, S. B.; Wadkar, S. B.; Raka, K. C.; Chitlange, S. S.

    2009-01-01

Two UV spectrophotometric methods and one reverse phase high performance liquid chromatography method have been developed for the simultaneous estimation of amlodipine besilate and olmesartan medoxomil in tablet dosage form. The first UV spectrophotometric method used the simultaneous equation method at 237.5 nm and 255.5 nm over the concentration range 10-50 μg/ml for both amlodipine besilate and olmesartan medoxomil, with accuracies of 100.09% and 100.22%, respectively. The second UV spectrophotometric method used the area under curve method at 242.5-232.5 nm and 260.5-250.5 nm over the concentration range 10-50 μg/ml for both drugs, with accuracies of 100.10% and 100.48%, respectively. The reverse phase high performance liquid chromatography analysis was carried out using 0.05 M potassium dihydrogen phosphate buffer:acetonitrile (50:50 v/v) as the mobile phase and a Kromasil C18 (4.6 mm i.d.×250 mm) column as the stationary phase, with a detection wavelength of 238 nm. The flow rate was 1.0 ml/min. Retention times for amlodipine besilate and olmesartan medoxomil were 3.69 and 5.36 min, respectively. Linearity was obtained in the concentration range of 4-20 μg/ml and 10-50 μg/ml for amlodipine besilate and olmesartan medoxomil, respectively. The proposed methods can be used for the estimation of amlodipine besilate and olmesartan medoxomil in tablet dosage form, provided all the validation parameters are met. PMID:20502580

  10. Enhancing the performance of regional land cover mapping

    NASA Astrophysics Data System (ADS)

    Wu, Weicheng; Zucca, Claudio; Karam, Fadi; Liu, Guangping

    2016-10-01

    Different pixel-based, object-based and subpixel-based methods such as time-series analysis, decision trees, and various supervised approaches have been proposed for land use/cover classification. However, despite their proven advantages in small dataset tests, their performance is variable and less satisfactory when dealing with large datasets, particularly for regional-scale mapping with high-resolution data, due to the complexity and diversity of landscapes and land cover patterns and the unacceptably long processing time. The objective of this paper is to demonstrate the comparatively high performance of an operational approach based on the integration of multisource information, ensuring high mapping accuracy in large areas with acceptable processing time. The information used includes phenologically contrasted multiseasonal and multispectral bands, vegetation index, land surface temperature, and topographic features. The performance of different conventional and machine learning classifiers, namely Mahalanobis Distance (MD), Maximum Likelihood (ML), Artificial Neural Networks (ANNs), Support Vector Machines (SVMs) and Random Forests (RFs), was compared using the same datasets in the same IDL (Interactive Data Language) environment. An Eastern Mediterranean area with complex landscape and steep climate gradients was selected to test and develop the operational approach. The results showed that the SVM and RF classifiers produced the most accurate mapping at local scale (up to 96.85% overall accuracy) but were very time-consuming in whole-scene classification (more than five days per scene), whereas ML fulfilled the task rapidly (about 10 min per scene) with satisfactory accuracy (94.2-96.4%). Thus, the approach composed of the integration of seasonally contrasted multisource data and sampling at subclass level, followed by an ML classification, is a suitable candidate to become an operational and effective regional land cover mapping method.
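    The accuracy comparison between a Gaussian maximum-likelihood classifier and the machine learning classifiers can be mimicked on synthetic data. This is a scikit-learn sketch, not the paper's IDL workflow; quadratic discriminant analysis is used as a stand-in for the Gaussian maximum-likelihood classifier, and the features are random stand-ins for multiseasonal/multispectral pixels:

    ```python
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Synthetic stand-in for multiseasonal/multispectral pixel features.
    X, y = make_classification(n_samples=2000, n_features=12, n_informative=8,
                               n_classes=4, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

    classifiers = {
        "ML": QuadraticDiscriminantAnalysis(),   # Gaussian maximum likelihood
        "SVM": SVC(kernel="rbf", gamma="scale"),
        "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    }
    scores = {name: accuracy_score(yte, clf.fit(Xtr, ytr).predict(Xte))
              for name, clf in classifiers.items()}
    for name, acc in scores.items():
        print(f"{name}: {acc:.3f}")
    ```

    On real scenes the trade-off the paper reports (ML fast, SVM/RF slower but often more accurate) comes from training and prediction cost at scene scale, which this toy comparison does not capture.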

  11. ML 3.0 smoothed aggregation user's guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sala, Marzio; Hu, Jonathan Joseph; Tuminaro, Raymond Stephen

    2004-05-01

    ML is a multigrid preconditioning package intended to solve linear systems of equations Ax = b, where A is a user-supplied n x n sparse matrix, b is a user-supplied vector of length n, and x is a vector of length n to be computed. ML should be used on large sparse linear systems arising from partial differential equation (PDE) discretizations. While technically any linear system can be considered, ML should be used on linear systems for which multigrid methods work well (e.g. elliptic PDEs). ML can be used as a stand-alone package or to generate preconditioners for a traditional iterative solver package (e.g. Krylov methods). We have supplied support for working with the AZTEC 2.1 and AZTECOO iterative packages [15]. However, other solvers can be used by supplying a few functions. This document describes one specific algebraic multigrid approach: smoothed aggregation. This approach is used within several specialized multigrid methods: one for the eddy current formulation of Maxwell's equations, and a multilevel and domain decomposition method for symmetric and non-symmetric systems of equations (such as elliptic equations, or compressible and incompressible fluid dynamics problems). Other methods exist within ML but are not described in this document. Examples are given illustrating the problem definition and exercising multigrid options.

  12. ML 3.1 smoothed aggregation user's guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sala, Marzio; Hu, Jonathan Joseph; Tuminaro, Raymond Stephen

    2004-10-01

    ML is a multigrid preconditioning package intended to solve linear systems of equations Ax = b, where A is a user-supplied n x n sparse matrix, b is a user-supplied vector of length n, and x is a vector of length n to be computed. ML should be used on large sparse linear systems arising from partial differential equation (PDE) discretizations. While technically any linear system can be considered, ML should be used on linear systems for which multigrid methods work well (e.g. elliptic PDEs). ML can be used as a stand-alone package or to generate preconditioners for a traditional iterative solver package (e.g. Krylov methods). We have supplied support for working with the Aztec 2.1 and AztecOO iterative packages [16]. However, other solvers can be used by supplying a few functions. This document describes one specific algebraic multigrid approach: smoothed aggregation. This approach is used within several specialized multigrid methods: one for the eddy current formulation of Maxwell's equations, and a multilevel and domain decomposition method for symmetric and nonsymmetric systems of equations (such as elliptic equations, or compressible and incompressible fluid dynamics problems). Other methods exist within ML but are not described in this document. Examples are given illustrating the problem definition and exercising multigrid options.
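    ML itself is a C++ library, but the smoothed-aggregation idea it implements can be sketched in a minimal two-level form: build a tentative prolongator from aggregates, smooth it with damped Jacobi, form the Galerkin coarse operator, and use the result to precondition a Krylov solver. Everything below (1D Poisson matrix, pairwise aggregates, additive two-level preconditioner) is an illustrative simplification, not ML's actual algorithm:

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import LinearOperator, cg, spsolve

    n = 64
    A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csr')  # 1D Poisson
    b = np.ones(n)

    # Tentative prolongator T: aggregate nodes in consecutive pairs.
    T = sp.csr_matrix((np.ones(n), (np.arange(n), np.arange(n) // 2)),
                      shape=(n, n // 2))
    Dinv = sp.diags(1.0 / A.diagonal())
    omega = 2.0 / 3.0
    P = (sp.identity(n) - omega * Dinv @ A) @ T      # smoothed prolongator
    Ac = (P.T @ A @ P).tocsc()                       # Galerkin coarse operator

    def precond(r):
        """Additive two-level preconditioner: damped Jacobi + coarse correction."""
        return omega * Dinv @ r + P @ spsolve(Ac, P.T @ r)

    M = LinearOperator((n, n), matvec=precond)
    x, info = cg(A, b, M=M, atol=1e-8, maxiter=200)
    print(info, np.linalg.norm(A @ x - b))
    ```

    A production package such as ML adds recursion to many levels, stronger aggregation heuristics, and pre/post smoothing within a multiplicative V-cycle.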

  13. Simultaneous measurement of chlorophyll and astaxanthin in Haematococcus pluvialis cells by first-order derivative ultraviolet-visible spectrophotometry.

    PubMed

    Lababpour, Abdolmajid; Lee, Choul-Gyun

    2006-02-01

    A first-order derivative spectrophotometric method has been developed for the simultaneous measurement of chlorophyll and astaxanthin concentrations in Haematococcus pluvialis cells. Acetone was selected for the extraction of pigments because of its good sensitivity and low toxicity compared with the other organic solvents tested; the tested solvents included acetone, methanol, hexane, chloroform, n-propanol, and acetonitrile. The first-order derivative spectra were used to eliminate the effects of the overlapping of the chlorophyll and astaxanthin peaks. The linear ranges in the first-derivative (1D) evaluation were from 0.50 to 20.0 microg x ml(-1) for chlorophyll and from 1.00 to 12.0 microg x ml(-1) for astaxanthin. The limits of detection of the analytical procedure were found to be 0.35 microg x ml(-1) for chlorophyll and 0.25 microg x ml(-1) for astaxanthin. The relative standard deviations for the determination of 7.0 microg x ml(-1) chlorophyll and 5.0 microg x ml(-1) astaxanthin were 1.2% and 1.1%, respectively. The procedure was found to be simple, rapid, and reliable, and was successfully applied to the determination of chlorophyll and astaxanthin concentrations in H. pluvialis cells. Good agreement was achieved between the results obtained by the proposed method and the HPLC method.
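    The zero-crossing idea behind first-derivative spectrophotometry can be illustrated numerically: at a wavelength where one component's derivative is zero, the mixture's derivative depends only on the other component. The band positions, widths, and concentrations below are invented and do not represent real chlorophyll or astaxanthin spectra:

    ```python
    import numpy as np

    wl = np.linspace(400, 700, 3001)               # wavelength grid, nm

    def band(center, width):
        """Toy Gaussian absorption band on the shared wavelength grid."""
        return np.exp(-0.5 * ((wl - center) / width) ** 2)

    a_spec = band(500, 25)                          # component A (unit absorptivity)
    b_spec = band(540, 25)                          # component B, overlapping A
    c_a, c_b = 2.0, 1.5                             # "unknown" concentrations
    mixture = c_a * a_spec + c_b * b_spec           # absorbances are additive

    d_mix = np.gradient(mixture, wl)                # first-derivative spectrum
    # At A's peak (500 nm) A's derivative is zero, so the mixture derivative
    # there is proportional to c_b alone: a one-point calibration for B.
    i = np.argmin(np.abs(wl - 500.0))
    db_at_500 = np.gradient(b_spec, wl)[i]
    estimated_c_b = d_mix[i] / db_at_500
    print(estimated_c_b)
    ```

    In practice the calibration constant `db_at_500` would come from standards of the pure component rather than from a known spectrum.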

  14. Determination of bisphenol A in human serum by high-performance liquid chromatography with multi-electrode electrochemical detection.

    PubMed

    Inoue, K; Kato, K; Yoshimura, Y; Makino, T; Nakazawa, H

    2000-11-10

    A simple and sensitive method using high-performance liquid chromatography with multi-electrode electrochemical detection (HPLC-ED), including a coulometric array of four electrochemical sensors, has been developed for the determination of bisphenol A in water and human serum. For good separation and detection of bisphenol A, a CAPCELL PAK UG 120 C18 reversed-phase column and a mobile phase consisting of 0.3% phosphoric acid-acetonitrile (60:40) were used. The detection limit of the HPLC-ED method was 0.01 ng/ml (0.5 pg), a sensitivity more than 3000 times that of the ultraviolet (UV) method and more than 200 times that of the fluorescence (FL) method. Bisphenol A in water and serum samples was pretreated by solid-phase extraction (SPE) after removing possible contamination derived from the plastic SPE cartridges and the water used for the pretreatment. A trace amount (ND to approximately 0.013 ng/ml) of bisphenol A was detected in parts of the cartridges (filtration column, sorbent bed and frits) by extraction with methanol, and it was completely removed by washing with at least 15 ml of methanol during processing. The concentrations of bisphenol A in tap water and Milli-Q-purified water were found to be 0.01 and 0.02 ng/ml, respectively; bisphenol A-free water was therefore prepared by trapping bisphenol A with an Empore disk. In every pretreatment, SPE using bisphenol A-free water and a 15-ml methanol wash was applied to both water and serum samples. The yields obtained from recovery tests using water spiked with 0.5 or 0.05 ng/ml of bisphenol A were 83.8 to 98.2%, with RSDs of 3.4 to 6.1%, respectively. The yields obtained from recovery tests by OASIS HLB using serum spiked with 1.0 ng/ml or 0.1 ng/ml of bisphenol A were 79.0% and 87.3%, with RSDs of 5.1% and 13.5%, respectively. The limits of quantification in water and serum samples were 0.01 ng/ml and 0.05 ng/ml, respectively. The method was applied to the determination of bisphenol A in a healthy human serum sample, in which 0.32 ng/ml was detected. From these results, the HPLC-ED method should be highly useful for the determination of bisphenol A at low concentration levels in water and biological samples.

  15. Highly polygenic architecture of antidepressant treatment response: Comparative analysis of SSRI and NRI treatment in an animal model of depression.

    PubMed

    Malki, Karim; Tosto, Maria Grazia; Mouriño-Talín, Héctor; Rodríguez-Lorenzo, Sabela; Pain, Oliver; Jumhaboy, Irfan; Liu, Tina; Parpas, Panos; Newman, Stuart; Malykh, Artem; Carboni, Lucia; Uher, Rudolf; McGuffin, Peter; Schalkwyk, Leonard C; Bryson, Kevin; Herbster, Mark

    2017-04-01

    Response to antidepressant (AD) treatment may be a more polygenic trait than previously hypothesized, with many genetic variants interacting in as yet unclear ways. In this study we used methods that can automatically learn to detect patterns of statistical regularity from a sparsely distributed signal across hippocampal transcriptome measurements in a large-scale animal pharmacogenomic study to uncover genomic variations associated with AD response. The study used four inbred mouse strains of both sexes, two drug treatments, and a control group (escitalopram, nortriptyline, and saline). Multi-class and binary classification using machine learning (ML) and regularization algorithms, combined with iterative and univariate feature selection methods including InfoGain, mRMR, ANOVA, and chi-square, were used to uncover genomic markers associated with AD response. Relevant genes were selected based on Jaccard distance and carried forward for gene-network analysis. Linear association methods uncovered only one gene associated with drug treatment response. The implementation of ML algorithms, together with feature reduction methods, revealed a set of 204 genes associated with SSRI response and 241 genes associated with NRI response. Although only 10% of genes overlapped across the two drugs, network analysis showed that both drugs modulated the CREB pathway through different molecular mechanisms. Through careful implementation and optimisation, the algorithms detected a weak signal used to predict whether an animal was treated with nortriptyline (77%) or escitalopram (67%) on an independent testing set. The results from this study indicate that the molecular signature of AD treatment may include a much broader range of genomic markers than previously hypothesized, suggesting that response to medication may be as complex as the pathology. The search for biomarkers of antidepressant treatment response could therefore consider a higher number of genetic markers and their interactions. Through predominantly different molecular targets and mechanisms of action, the two drugs modulate the same Creb1 pathway, which plays a key role in neurotrophic responses and in inflammatory processes. © 2016 The Authors. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics Published by Wiley Periodicals, Inc.
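    The filter-then-classify pattern described above (univariate feature selection followed by an ML classifier) can be sketched with scikit-learn on synthetic high-dimensional data; the study's transcriptome measurements and exact pipeline are not reproduced here, and the ANOVA filter stands in for the several filters the authors combined:

    ```python
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # High-dimensional, sparse-signal data: many probes, few informative.
    X, y = make_classification(n_samples=300, n_features=2000, n_informative=20,
                               random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

    selector = SelectKBest(f_classif, k=200).fit(Xtr, ytr)   # ANOVA filter
    clf = LogisticRegression(max_iter=1000).fit(selector.transform(Xtr), ytr)
    acc = accuracy_score(yte, clf.predict(selector.transform(Xte)))
    print(f"accuracy on held-out set: {acc:.2f}")
    ```

    Fitting the selector on the training split only, as here, is what makes the held-out accuracy an honest estimate; filtering before the split would leak information.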

  16. A novel method for blood volume estimation using trivalent chromium in rabbit models.

    PubMed

    Baby, Prathap Moothamadathil; Kumar, Pramod; Kumar, Rajesh; Jacob, Sanu S; Rawat, Dinesh; Binu, V S; Karun, Kalesh M

    2014-05-01

    Blood volume measurement, though important in the management of critically ill patients, is not routinely performed in clinical practice owing to the labour-intensive, intricate and time-consuming nature of existing methods. The aim was to compare blood volume estimations using trivalent chromium [(51)Cr(III)] and the standard Evans blue dye (EBD) method in New Zealand white rabbit models and to establish a correction factor (CF). Blood volume estimation in 33 rabbits was carried out using the EBD method, with concentration determined by spectrophotometric assay, followed by blood volume estimation using direct injection of (51)Cr(III). Twenty of the 33 rabbits were used to find the CF, obtained by dividing the blood volume estimated using EBD by that estimated using (51)Cr(III). The CF was validated in the remaining 13 rabbits by multiplying it by the blood volume values obtained using (51)Cr(III). The mean circulating blood volume of the 33 rabbits was 142.02 ± 22.77 ml (65.76 ± 9.31 ml/kg) using EBD and 195.66 ± 47.30 ml (89.81 ± 17.88 ml/kg) using (51)Cr(III). The CF was found to be 0.77. The mean blood volume of the 13 validation rabbits was 139.54 ± 27.19 ml (66.33 ± 8.26 ml/kg) using EBD and 152.73 ± 46.25 ml (71.87 ± 13.81 ml/kg) using (51)Cr(III) with the CF applied (P = 0.11). Blood volume estimation using (51)Cr(III) with the CF was thus comparable to the standard EBD method. With further research in this direction, we envisage that human blood volume estimation using (51)Cr(III) may find application in acute clinical settings.
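    The correction-factor arithmetic is simple to state in code; the numbers below are invented for illustration and are not the study's measurements:

    ```python
    import numpy as np

    # Paired estimates from a calibration cohort (invented values, ml).
    ebd = np.array([140.0, 150.0, 135.0])      # blood volume via EBD
    cr51 = np.array([182.0, 195.0, 175.0])     # blood volume via 51Cr(III)

    # Correction factor: mean ratio of EBD-based to Cr(III)-based estimates
    # (the study reported CF = 0.77 from 20 rabbits).
    cf = np.mean(ebd / cr51)

    # Calibrate a new Cr(III)-only measurement with the factor.
    corrected = cf * 200.0
    print(round(cf, 3), round(corrected, 1))
    ```

    With real data the validation step would then compare CF-corrected Cr(III) estimates against fresh EBD measurements, as the authors did in their 13-rabbit validation group.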

  17. Multiseed liposomal drug delivery system using micelle gradient as driving force to improve amphiphilic drug retention and its anti-tumor efficacy.

    PubMed

    Zhang, Wenli; Li, Caibin; Jin, Ya; Liu, Xinyue; Wang, Zhiyu; Shaw, John P; Baguley, Bruce C; Wu, Zimei; Liu, Jianping

    2018-11-01

    To improve drug retention in carriers of the amphiphilic drug asulacrine (ASL), a novel active-loading method using a micelle gradient was developed to fabricate ASL-loaded multiseed liposomes (ASL-ML). Empty ML were prepared by hydrating a thin film with empty micelles. The micelles in the liposomal compartment, acting as a 'micelle pool', then drove drug loading after the outer micelles were removed. Mechanistic studies, including critical micelle concentration (CMC) determination, tests of factors influencing entrapment efficiency (EE), structure visualization, and drug release, were carried out to explore the mechanism of active loading, the location of ASL, and the structure of ASL-ML. Comparisons were made between the pre-loading and active-loading methods. Finally, the extended drug-retention capacity of ML was evaluated through pharmacokinetic, tissue-irritancy, and in vivo anti-tumor activity studies. Comprehensive results from fluorescence and transmission electron microscopy (TEM) observation, EE comparison, and release studies demonstrated the formation of the micelles-in-liposome structure of ASL-ML without inter-carrier fusion. The drug was located mainly in the inner micelles, and post-loading proved superior to the pre-loading method, in which drug in the micelles shifted onto the bilayer membrane; this was an additional advantage of the delivery system. Drug amphiphilicity and the interaction of the micelles with the drug were observed to be the two prerequisites for this active-loading method. The extended retention capacity of ML was verified through a prolonged half-life, reduced paw-lick responses in rats, and enhanced tumor inhibition in model mice. In conclusion, ASL-ML prepared by the active-loading method can effectively load drug into micelles with the expected structure and improve drug retention.

  18. Improving Crotalidae polyvalent immune Fab reconstitution times.

    PubMed

    Quan, Asia N; Quan, Dan; Curry, Steven C

    2010-06-01

    Crotalidae polyvalent immune Fab (CroFab) is used to treat rattlesnake envenomations in the United States. Time to infusion may be a critical factor in the treatment of these bites. Per the manufacturer's instructions, 10 mL of sterile water for injection (SWI) and hand swirling are recommended for reconstitution. We wondered whether completely filling vials with 25 mL of SWI would result in shorter reconstitution times than using 10-mL volumes, and how hand mixing compared with mechanical agitation of vials or leaving vials undisturbed. Six sets of 5 vials were filled with either 10 mL or 25 mL. Three mixing techniques were used: leaving vials undisturbed; agitation with a mechanical agitator; and continuous hand rolling and inverting of vials. Dissolution was determined by observation, recording the time to complete dissolution for each vial. Nonparametric 2-tailed P values were calculated. Filling vials completely with 25 mL resulted in quicker dissolution than using 10-mL volumes, regardless of mixing method (2-tailed P = .024). Mixing by hand was faster than the other methods (P < .001). Reconstitution with 25 mL and hand mixing resulted in the shortest dissolution times (median, 1.1 minutes; range, 0.9-1.3 minutes). This appeared clinically important because dissolution times using 10 mL with mechanical rocking of vials (median, 26.4 minutes) or with vials left undisturbed (median, 33.6 minutes) were several-fold longer. Hand mixing after filling vials completely with 25 mL results in shorter dissolution times than using 10 mL or other methods of mixing and is recommended, especially when preparing initial doses of CroFab. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  19. Estimation of channel parameters and background irradiance for free-space optical link.

    PubMed

    Khatoon, Afsana; Cowley, William G; Letzepis, Nick; Giggenbach, Dirk

    2013-05-10

    Free-space optical communication can experience severe fading due to optical scintillation in long-range links, and channel estimation is further corrupted by background and electrical noise. Accurate estimation of channel parameters and the scintillation index (SI) depends on complete removal of the background irradiance. In this paper, we propose three different methods, the minimum-value (MV), mean-power (MP), and maximum-likelihood (ML) based methods, to remove the background irradiance from channel samples. The MV and MP methods do not require knowledge of the scintillation distribution. While the ML-based method assumes gamma-gamma scintillation, it can be easily modified to accommodate other distributions. Each estimator's performance is evaluated from low- to high-SI regimes using both simulation data and experimental measurements. The MV and MP methods have much lower complexity than the ML-based method; however, the ML-based method shows better SI and background-irradiance estimation performance.
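    One plausible reading of the minimum-value (MV) estimator, and of the scintillation index computed after background removal, can be sketched as follows; the gamma fading model, background level, and estimator details are illustrative assumptions, not the paper's exact formulation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Toy fading samples: gamma-distributed intensity with mean 1 (SI = 0.25).
    signal = rng.gamma(shape=4.0, scale=0.25, size=100_000)
    background = 0.3
    received = signal + background                  # observed channel samples

    b_hat = received.min()                          # MV estimate of background
    clean = received - b_hat

    # Scintillation index: normalized intensity variance <I^2>/<I>^2 - 1.
    si = clean.var() / clean.mean() ** 2
    si_raw = received.var() / received.mean() ** 2
    print(f"SI with background removed: {si:.3f}, without: {si_raw:.3f}")
    ```

    The comparison shows why background removal matters: a residual offset inflates the mean and biases the SI low, which is the failure mode the estimators in the paper are designed to avoid.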

  20. Hip and Wrist Accelerometer Algorithms for Free-Living Behavior Classification.

    PubMed

    Ellis, Katherine; Kerr, Jacqueline; Godbole, Suneeta; Staudenmayer, John; Lanckriet, Gert

    2016-05-01

    Accelerometers are a valuable tool for objective measurement of physical activity (PA). Wrist-worn devices may improve compliance over standard hip placement, but more research is needed to evaluate their validity for measuring PA in free-living settings. Traditional cut-point methods for accelerometers can be inaccurate and need testing in free-living settings with wrist-worn devices. In this study, we developed and tested the performance of machine learning (ML) algorithms for classifying PA types from both hip and wrist accelerometer data. Forty overweight or obese women (mean age = 55.2 ± 15.3 yr; BMI = 32.0 ± 3.7) wore two ActiGraph GT3X+ accelerometers (right hip, nondominant wrist; ActiGraph, Pensacola, FL) for seven free-living days. Wearable cameras captured ground-truth activity labels. A classifier consisting of a random forest and a hidden Markov model classified the accelerometer data into four activities (sitting, standing, walking/running, and riding in a vehicle). The free-living wrist and hip ML classifiers were compared with each other, with traditional accelerometer cut points, and with an algorithm developed in a laboratory setting. The ML classifier obtained average balanced accuracies of 89.4% and 84.6% over the four activities using the hip and wrist accelerometers, respectively. In our data set, with an average of 28.4 min of walking or running per day, the ML classifier predicted averages of 28.5 and 24.5 min of walking or running using the hip and wrist accelerometers, respectively. Intensity-based cut points and the laboratory algorithm significantly underestimated walking minutes. Our results demonstrate the superior performance of our PA-type classification algorithm, particularly in comparison with traditional cut points. Although the hip algorithm performed better, the additional compliance achieved with wrist devices might justify using a slightly lower-performing algorithm.
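    A simplified stand-in for the random-forest-plus-hidden-Markov-model pipeline: a forest produces per-window class probabilities, and a hand-rolled Viterbi pass with a sticky transition matrix smooths them into temporally coherent activity labels. The data are synthetic, the transition matrix is invented, and training and evaluation share the same windows here purely for brevity:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    n_states, T = 4, 300
    true = np.repeat(rng.integers(0, n_states, 30), 10)      # blocky activity labels
    X = true[:, None] + rng.normal(0.0, 1.2, size=(T, 3))    # noisy toy features

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, true)
    probs = np.full((T, n_states), 1e-9)                     # guard absent classes
    probs[:, clf.classes_] = clf.predict_proba(X)

    # Sticky transition matrix: staying in the same activity is far more likely.
    A = np.full((n_states, n_states), 0.02)
    np.fill_diagonal(A, 1.0 - 0.02 * (n_states - 1))

    # Viterbi decoding over log-probabilities.
    logp, logA = np.log(probs), np.log(A)
    delta = logp[0].copy()
    back = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA       # scores[i, j]: come from i, go to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logp[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    acc = (path == true).mean()
    print("smoothed accuracy:", acc)
    ```

    The sticky diagonal of `A` is what suppresses implausible one-window activity flips, the same role the HMM plays in the paper's pipeline.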

  1. ClimateNet: A Machine Learning dataset for Climate Science Research

    NASA Astrophysics Data System (ADS)

    Prabhat, M.; Biard, J.; Ganguly, S.; Ames, S.; Kashinath, K.; Kim, S. K.; Kahou, S.; Maharaj, T.; Beckham, C.; O'Brien, T. A.; Wehner, M. F.; Williams, D. N.; Kunkel, K.; Collins, W. D.

    2017-12-01

    Deep Learning techniques have revolutionized commercial applications in computer vision, speech recognition and control systems. Key to all of these developments was the creation of ImageNet, a curated, labeled dataset that enabled multiple research groups around the world to develop methods, benchmark performance and compete with one another. The success of Deep Learning can be largely attributed to the broad availability of this dataset. Our empirical investigations have revealed that Deep Learning is similarly poised to benefit the task of pattern detection in climate science. Unfortunately, labeled datasets, a key prerequisite for training, are hard to find. Individual research groups are typically interested in specialized weather patterns, making it hard to unify and share datasets across groups and institutions. In this work, we propose ClimateNet: a dataset that provides labeled instances of extreme weather patterns, as well as the associated raw fields, in model and observational output. We develop a schema in NetCDF to enumerate weather pattern classes/types and to store bounding boxes and pixel masks. We are also working on a TensorFlow implementation to natively import such NetCDF datasets, and are providing a reference convolutional architecture for binary classification tasks. Our hope is that researchers in climate science, as well as in ML/DL, will be able to use (and extend) ClimateNet to make rapid progress in applying Deep Learning to climate science research.

  2. Spectrophotometric and HPLC determinations of anti-diabetic drugs, rosiglitazone maleate and metformin hydrochloride, in pure form and in pharmaceutical preparations.

    PubMed

    Onal, Armağan

    2009-12-01

    In this study, three spectrophotometric methods and one HPLC method were developed for the analysis of anti-diabetic drugs in tablets. The first two spectrophotometric methods were based on the reaction of rosiglitazone (RSG) with 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ) and bromocresol green (BCG). A linear relationship between the absorbance at lambda(max) and the drug concentration was found in the ranges 6.0-50.0 and 1.5-12 microg ml(-1) for the DDQ and BCG methods, respectively. The third spectrophotometric method is a zero-crossing first-derivative method for the simultaneous analysis of RSG and metformin (MTF) in tablets; its calibration curves were linear within the concentration ranges of 5.0-50 microg ml(-1) for RSG and 1.0-10.0 microg ml(-1) for MTF. The fourth method is a rapid stability-indicating HPLC method developed for the determination of RSG, with a linear response within the concentration range of 0.25-2.5 microg ml(-1). The proposed methods have been successfully applied to tablet analysis.
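    The calibration-line arithmetic underlying all four methods is the same: fit absorbance against standard concentrations, then invert the line for an unknown. This is a generic sketch with invented numbers, not the paper's data:

    ```python
    import numpy as np

    # Standards across a linear range (microg/ml) and their absorbances,
    # here generated from an ideal Beer-Lambert line for illustration.
    conc = np.array([6.0, 15.0, 25.0, 35.0, 50.0])
    absorbance = 0.018 * conc + 0.02

    # Least-squares calibration line: A = slope * C + intercept.
    slope, intercept = np.polyfit(conc, absorbance, 1)

    # Invert the line for an unknown sample's absorbance.
    unknown_abs = 0.50
    unknown_conc = (unknown_abs - intercept) / slope
    print(round(unknown_conc, 2))
    ```

    Real calibrations would additionally report the correlation coefficient and check that the unknown falls inside the validated linear range before inverting.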

  3. Characterization of Chronic Aortic and Mitral Regurgitation Undergoing Valve Surgery Using Cardiovascular Magnetic Resonance.

    PubMed

    Polte, Christian L; Gao, Sinsia A; Johnsson, Åse A; Lagerstrand, Kerstin M; Bech-Hanssen, Odd

    2017-06-15

    Grading of chronic aortic regurgitation (AR) and mitral regurgitation (MR) by cardiovascular magnetic resonance (CMR) is currently based on thresholds that are neither modality specific nor quantification-method specific. Accordingly, this study sought to identify CMR-specific and quantification-method-specific thresholds for regurgitant volumes (RVols), RVol indexes, and regurgitant fractions (RFs) that denote severe chronic AR or MR with an indication for surgery. The study comprised patients with moderate and severe chronic AR (n = 38) and MR (n = 40). Echocardiography and CMR were performed at baseline and in all operated AR/MR patients (n = 23/25) 10 ± 1 months after surgery. CMR quantification used the direct (aortic flow) and indirect (left ventricular stroke volume [LVSV] - pulmonary stroke volume [PuSV]) methods for AR, and 2 indirect methods (LVSV - aortic forward flow [AoFF]; mitral inflow [MiIF] - AoFF) for MR. All operated patients had severe regurgitation and benefited from surgery, indicated by a significant postsurgical reduction in end-diastolic volume index and improvement or relief of symptoms. The discriminatory ability between moderate and severe AR was strong for RVol >40 ml, RVol index >20 ml/m², and RF >30% (direct method) and RVol >62 ml, RVol index >31 ml/m², and RF >36% (LVSV-PuSV), with a negative likelihood ratio ≤0.2. In MR, the discriminatory ability was very strong for RVol >64 ml, RVol index >32 ml/m², and RF >41% (LVSV-AoFF) and RVol >40 ml, RVol index >20 ml/m², and RF >30% (MiIF-AoFF), with a negative likelihood ratio <0.1. In conclusion, CMR grading of chronic AR and MR should be based on modality-specific and quantification-method-specific thresholds, as these differ substantially from recognized guideline criteria, to assure appropriate clinical decision-making and timing of surgery. Copyright © 2017 Elsevier Inc. All rights reserved.
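    The indirect quantification arithmetic reported above reduces to three lines of code; the example volumes are invented:

    ```python
    def regurgitant_metrics(lvsv_ml, forward_ml, bsa_m2):
        """RVol = LVSV - forward flow; index = RVol / BSA; RF% = 100 * RVol / LVSV."""
        rvol = lvsv_ml - forward_ml
        return rvol, rvol / bsa_m2, 100.0 * rvol / lvsv_ml

    # Invented example: LVSV 110 ml, aortic forward flow 45 ml, BSA 1.9 m².
    rvol, rvol_index, rf_percent = regurgitant_metrics(110.0, 45.0, 1.9)
    print(rvol, round(rvol_index, 1), round(rf_percent, 1))
    ```

    With these invented numbers the RVol of 65 ml and RF of about 59% would exceed the severe-MR thresholds (LVSV-AoFF) quoted above.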

  4. The Effects of Learning Strategies on Mathematical Literacy: A Comparison between Lower and Higher Achieving Countries

    ERIC Educational Resources Information Center

    Magen-Nagar, Noga

    2016-01-01

    The purpose of the current study is to explore the effects of learning strategies on Mathematical Literacy (ML) of students in higher and lower achieving countries. To address this issue, the study utilizes PISA2002 data to conduct a multi-level analysis (HLM) of Hong Kong and Israel students. In PISA2002, Israel was rated 31st in Mathematics,…

  5. Fluorimetric determination of some sulfur containing compounds through complex formation with terbium (Tb+3) and uranium (U+3).

    PubMed

    Taha, Elham Anwer; Hassan, Nagiba Yehya; Aal, Fahima Abdel; Fattah, Laila El-Sayed Abdel

    2007-05-01

    Two simple, sensitive and specific fluorimetric methods have been developed for the determination of some sulphur-containing compounds, namely acetylcysteine (Ac), carbocisteine (Cc) and thioctic acid (Th), using terbium (Tb+3) and uranium (U+3) ions as fluorescent probes. The proposed methods involve the formation of a ternary complex with Tb+3 in the presence of Tris buffer (method I) and a binary complex with aqueous uranyl acetate solution (method II). The fluorescence quenching of Tb+3 at 510, 488 and 540 nm (lambda(ex) 250, 241 and 268 nm) and of uranyl acetate at 512 nm (lambda(ex) 240 nm) due to complex formation was quantitatively measured for Ac, Cc and Th, respectively. The reaction conditions and the fluorescence spectral properties of the complexes have been investigated. Under the described conditions, the proposed methods were applicable over the concentration ranges (0.2-2.5 microg ml(-1)), (1-4 microg ml(-1)) and (0.5-3.5 microg ml(-1)) with mean percentage recoveries of 99.74+/-0.36, 99.70+/-0.52 and 99.43+/-0.23 for method (I), and (0.5-6 microg ml(-1)), (0.5-5 microg ml(-1)) and (1-6 microg ml(-1)) with mean percentage recoveries of 99.38+/-0.20, 99.82+/-0.28 and 99.93+/-0.32 for method (II), for the three cited drugs, respectively. The proposed methods were successfully applied to the determination of the studied compounds in bulk powders and in pharmaceutical formulations, as well as in the presence of their related substances. The results obtained were found to agree statistically with those obtained by official and reported methods. The two methods were validated according to USP guidelines and were also assessed by applying the standard addition technique.

  6. And What Did You Learn in Your PhD Program?

    ERIC Educational Resources Information Center

    Mohrig, Jerry R.

    1988-01-01

    Surveys the outlook presented by former and present chemistry and biochemistry doctoral students toward their graduate program. Poses questions to determine what aspects are deemed important. Suggests seminars and quality advisors are important factors. (ML)

  7. What type of drinker are you?

    MedlinePlus

    ... beer, one 5-ounce (148 mL) glass of wine, 1 wine cooler, 1 cocktail, or 1 shot of hard ...

  8. Family Life.

    ERIC Educational Resources Information Center

    Naturescope, 1986

    1986-01-01

    Focuses on various aspects of mammal family life ranging from ways different species are born to how different mammals are raised. Learning activities include making butter from cream, creating birth announcements for mammals, and playing a password game on family life. (ML)

  9. One-year monthly quantitative survey of noroviruses, enteroviruses, and adenoviruses in wastewater collected from six plants in Japan.

    PubMed

    Katayama, Hiroyuki; Haramoto, Eiji; Oguma, Kumiko; Yamashita, Hiromasa; Tajima, Atsushi; Nakajima, Hideichiro; Ohgaki, Shinichiro

    2008-03-01

    Sewerage systems are important nodes at which to monitor human enteric pathogens transmitted via water. A quantitative virus survey was performed once a month for a year to understand the seasonal profiles of noroviruses genotype 1 and genotype 2, enteroviruses, and adenoviruses in sewerage systems. A total of 72 samples of influent, secondary-treated wastewater before chlorination, and effluent were collected from six wastewater treatment plants in Japan. Viruses were successfully recovered from 100 ml of influent and 1000 ml of the secondary-treated wastewater and effluent using the acid rinse method, and were quantified by the RT-PCR or PCR method to obtain the most probable number for each sample. All the samples were also assayed for fecal coliforms (FCs) by a double-layer method. The seasonal profiles of noroviruses genotype 1 and genotype 2 in influent were very similar, i.e. they were abundant in winter (from November to March) at geometric mean values of 190 and 200 RT-PCR units/ml, respectively, and less frequent in summer (from June to September), at 4.9 and 9.1 RT-PCR units/ml, respectively. The concentrations of enteroviruses and adenoviruses were mostly constant all year round: 17 RT-PCR units/ml and 320 PCR units/ml in influent, and 0.044 RT-PCR units/ml and 7.0 PCR units/ml in effluent, respectively.

  10. Determination of N-methylsuccinimide and 2-hydroxy-N-methylsuccinimide in human urine and plasma.

    PubMed

    Jönsson, B A; Akesson, B

    1997-12-19

    A method for determination of N-methylsuccinimide (MSI) and 2-hydroxy-N-methylsuccinimide (2-HMSI) in human urine and of MSI in human plasma was developed. MSI and 2-HMSI are metabolites of the widely used organic solvent N-methyl-2-pyrrolidone (NMP). MSI and 2-HMSI were purified from urine and plasma by C8 solid-phase extraction and analysed by gas chromatography-mass spectrometry in the negative-ion chemical ionisation mode. The intra-day precisions in urine were 2-6% for MSI (50 and 400 ng/ml) and 3-5% for 2-HMSI (1000 and 8000 ng/ml). For MSI in plasma it was 2% (60 and 1200 ng/ml). The between-day precisions in urine were 3-4% for MSI (100 and 1000 ng/ml) and 2-4% for 2-HMSI (10,000 and 18,000 ng/ml) and 3-4% for MSI in plasma (100 and 900 ng/ml). The recoveries from urine were 109-117% for MSI (50 and 400 ng/ml) and 81-89% for 2-HMSI (1000 and 8000 ng/ml). The recovery of MSI from plasma was 91-101% (50 and 500 ng/ml). The detection limits for MSI were 3 ng/ml in urine and 1 ng/ml in plasma and that of 2-HMSI in urine was 200 ng/ml. The method is applicable for analysis of urine and plasma samples from workers exposed to NMP.

  11. Minimum effective volume of mepivacaine for ultrasound-guided supraclavicular block

    PubMed Central

    Song, Jae Gyok; Kang, Bong Jin; Park, Kee Keun

    2013-01-01

Background The aim of this study was to estimate the minimum effective volume (MEV) of 1.5% mepivacaine for ultrasound-guided supraclavicular block by placing the needle near the lower trunk of the brachial plexus and using multiple injections. Methods Thirty patients undergoing forearm and hand surgery received ultrasound-guided supraclavicular block with 1.5% mepivacaine. The initial volume of local anesthetic injected was 24 ml, and the local anesthetic volume for each subsequent patient was determined by the response of the previous patient: if the previous block failed, the next patient received a 3 ml higher volume; if it was successful, the next volume was 3 ml lower. MEV was estimated by the Dixon and Massey up-and-down method. MEVs in 95%, 90%, and 50% of patients (MEV95, MEV90, and MEV50) were calculated using probit transformation and logistic regression. Results MEV95 of 1.5% mepivacaine was 17 ml (95% confidence interval [CI], 13-42 ml), MEV90 was 15 ml (95% CI, 12-34 ml), and MEV50 was 9 ml (95% CI, 4-12 ml). Twelve patients had a failed block. Three patients received general anesthesia. Nine patients could undergo surgery with sedation only. Only one patient showed hemi-diaphragmatic paresis. Conclusions MEV95 was 17 ml, MEV90 was 15 ml, and MEV50 was 9 ml. However, needle location near the lower trunk of the brachial plexus and multiple injections should be performed. PMID:23904937
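
The volume-assignment rule described above (start at 24 ml, step down 3 ml after a successful block, step up 3 ml after a failure) is the core of the Dixon and Massey up-and-down design and can be sketched directly; the function names are illustrative:

```python
def next_volume(prev_volume_ml, prev_block_succeeded, step_ml=3):
    """Dixon-Massey up-and-down rule: after a successful block the next
    patient receives a smaller volume; after a failure, a larger one."""
    if prev_block_succeeded:
        return prev_volume_ml - step_ml
    return prev_volume_ml + step_ml

def run_sequence(outcomes, start_ml=24, step_ml=3):
    """Volumes assigned to a series of patients, given each block outcome."""
    volumes = [start_ml]
    for succeeded in outcomes:
        volumes.append(next_volume(volumes[-1], succeeded, step_ml))
    return volumes
```

For example, two successes followed by a failure yields the sequence 24, 21, 18, 21 ml; the MEV50 is then typically estimated from the midpoints of such up-down crossings or, as here, by probit/logistic regression on the pooled responses.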

  12. Determination of plasma volume in anaesthetized piglets using the carbon monoxide (CO) method.

    PubMed

    Heltne, J K; Farstad, M; Lund, T; Koller, M E; Matre, K; Rynning, S E; Husby, P

    2002-07-01

Based on measurements of the circulating red blood cell volume (V(RBC)) in seven anaesthetized piglets using carbon monoxide (CO) as a label, plasma volume (PV) was calculated for each animal. The increase in carboxyhaemoglobin (COHb) concentration following administration of a known amount of CO into a closed circuit re-breathing system was determined by diode-array spectrophotometry. Simultaneously measured haematocrit (HCT) and haemoglobin (Hb) values were used for PV calculation. The PV values were compared with simultaneously measured PVs determined using the Evans blue technique. Mean values (SD) for PV were 1708.6 (287.3) ml and 1738.7 (412.4) ml with the CO method and the Evans blue technique, respectively. Comparison of PVs determined with the two techniques demonstrated good correlation (r = 0.995). The mean difference between PV measurements was -29.9 ml and the limits of agreement (mean difference +/-2SD) were -289.1 ml and 229.3 ml. In conclusion, the CO method can be applied easily under general anaesthesia and controlled ventilation with a simple administration system. The agreement between the compared methods was satisfactory. Plasma volume determination with the CO method is safe and accurate, with no signs of major side effects.
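
Two of the computations in this abstract are standard and easy to sketch: plasma volume follows from the labeled red-cell volume and hematocrit (total blood volume = V(RBC) / HCT, plasma = total minus red cells), and the limits of agreement are the Bland-Altman mean difference plus or minus 2 SD of the paired differences. A minimal sketch that omits the CO-dilution details of the actual protocol:

```python
def plasma_volume_ml(v_rbc_ml, hematocrit):
    """Plasma volume from circulating red-cell volume and hematocrit:
    total blood volume = V_RBC / Hct; plasma = total - red cells."""
    return v_rbc_ml * (1.0 - hematocrit) / hematocrit

def limits_of_agreement(method_a, method_b):
    """Bland-Altman limits: mean difference +/- 2 SD of the differences."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = (sum((d - mean) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return mean - 2 * sd, mean + 2 * sd
```

For example, a red-cell volume of 400 ml at a hematocrit of 0.40 implies a plasma volume of 600 ml.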

  13. Spectro-photometric determinations of Mn, Fe and Cu in aluminum master alloys

    NASA Astrophysics Data System (ADS)

    Rehan; Naveed, A.; Shan, A.; Afzal, M.; Saleem, J.; Noshad, M. A.

    2016-08-01

Highly reliable, fast and cost-effective spectrophotometric methods have been developed for the determination of Mn, Fe and Cu in aluminum master alloys, based on calibration curves prepared from laboratory standards. The calibration curves are designed for maximum sensitivity and minimum instrumental error (Mn 1 mg/100 ml-2 mg/100 ml, Fe 0.01 mg/100 ml-0.2 mg/100 ml and Cu 2 mg/100 ml-10 mg/100 ml). The developed spectrophotometric methods produce accurate results when analyzing Mn, Fe and Cu in certified reference materials. In particular, these methods are suitable for all types of Al-Mn, Al-Fe and Al-Cu master alloys (5%, 10%, 50%, etc.). Moreover, the sampling practices suggested herein include a reasonable amount of analytical sample, which truly represents the whole lot of a particular master alloy. A successive dilution technique was used to stay within the calibration curve range. Furthermore, the methods were also found suitable for the analysis of these elements in ordinary aluminum alloys. However, it was observed that Cu showed considerable interference with Fe; the latter may not be accurately measured in the presence of Cu greater than 0.01%.

  14. Comparison of a 50 mL pycnometer and a 500 mL flask, EURAMET.M.FF.S8 (EURAMET 1297)

    NASA Astrophysics Data System (ADS)

    Mićić, Ljiljana; Batista, Elsa

    2018-01-01

The purpose of this comparison was to compare the results of the participating laboratories in the calibration of a 50 mL pycnometer and a 500 mL volumetric flask using the gravimetric method. Laboratories were asked to determine the 'contained' volume of the 50 mL pycnometer and of the 500 mL flask at a reference temperature of 20 °C. The gravimetric method was used for both instruments by all laboratories. Main text: To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  15. Catalytic spectrophotometric determination of iodide in pharmaceutical preparations and edible salt.

    PubMed

    El-Ries, M A; Khaled, Elmorsy; Zidane, F I; Ibrahim, S A; Abd-Elmonem, M S

    2012-02-01

    The catalytic effect of iodide on the oxidation of four dyes: viz. variamine blue (VB), methylene blue (MB), rhodamine B (RB), and malachite green (MG) with different oxidizing agents was investigated for the kinetic spectrophotometric determination of iodide. The above catalyzed reactions were monitored spectrophotometrically by following the change in dye absorbances at 544, 558, 660, or 617 nm for the VB, RB, MB, or MG catalyzed reactions, respectively. Under optimum conditions, iodide can be determined within the concentration levels 0.064-1.27 µg mL(-1) for VB method, 3.20-9.54 µg mL(-1) for RB method, 5.00-19.00 µg mL(-1) for the MB method, and 6.4-19.0 µg mL(-1) for the MG one, with detection limit reaching 0.004 µg mL(-1) iodide. The reported methods were highly sensitive, selective, and free from most interference. Applying the proposed procedures, trace amounts of iodide in pharmaceutical and edible salt samples were successfully determined without separation or pretreatment steps. Copyright © 2011 John Wiley & Sons, Ltd.
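
Kinetic determinations like those above ultimately read concentrations off a calibration line fitted to standards. A generic least-squares sketch; the LOD formula shown (3.3 sigma/slope) is one common ICH-style convention, not necessarily the one used in this paper:

```python
def fit_line(x, y):
    """Least-squares calibration line: signal = slope * conc + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def concentration(signal, slope, intercept):
    """Invert the calibration line to read a concentration."""
    return (signal - intercept) / slope

def detection_limit(sigma_blank, slope):
    """ICH-style LOD = 3.3 * (SD of the blank signal) / slope."""
    return 3.3 * sigma_blank / slope
```

A sample signal of 0.5 on a line with slope 0.2 and intercept 0.1 reads back as a concentration of 2.0 in the calibration units.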

  16. Comparison of ZetaPlus 60S and nitrocellulose membrane filters for the simultaneous concentration of F-RNA coliphages, porcine teschovirus and porcine adenovirus from river water.

    PubMed

    Jones, T H; Muehlhauser, V; Thériault, G

    2014-09-01

    Increasing attention is being paid to the impact of agricultural activities on water quality to understand the impact on public health. F-RNA coliphages have been proposed as viral indicators of fecal contamination while porcine teschovirus (PTV) and porcine adenovirus (PAdV) are proposed indicators of fecal contamination of swine origin. Viruses and coliphages are present in water in very low concentrations and must be concentrated to permit their detection. There is little information comparing the effectiveness of the methods for concentrating F-RNA coliphages with concentration methods for other viruses and vice versa. The objective of this study was to compare 5 current published methods for recovering F-RNA coliphages, PTV and PAdV from river water samples concentrated by electronegative nitrocellulose membrane filters (methods A and B) or electropositive Zeta Plus 60S filters (methods C-E). Method A is used routinely for the detection of coliphages (Méndez et al., 2004) and method C (Brassard et al., 2005) is the official method in Health Canada's compendium for the detection of viruses in bottled mineral or spring water. When river water was inoculated with stocks of F-RNA MS2, PAdV, and PTV to final concentrations of 1×10(6) PFU/100 mL, 1×10(5) gc/100 mL and 3×10(5) gc/100 mL, respectively, a significantly higher recovery for each virus was consistently obtained for method A with recoveries of 52% for MS2, 95% for PAdV, and 1.5% for PTV. When method A was compared with method C for the detection of F-coliphages, PAdV and PTV in river water samples, viruses were detected with higher frequencies and at higher mean numbers with method A than with method C. With method A, F-coliphages were detected in 11/12 samples (5-154 PFU/100 mL), PTV in 12/12 samples (397-10,951 gc/100 mL), PAdV in 1/12 samples (15 gc/100 mL), and F-RNA GIII in 1/12 samples (750 gc/100 mL) while F-RNA genotypes I, II, and IV were not detected by qRT-PCR. Crown Copyright © 2014. 
Published by Elsevier B.V. All rights reserved.

  17. Validation of different spectrophotometric methods for determination of vildagliptin and metformin in binary mixture

    NASA Astrophysics Data System (ADS)

    Abdel-Ghany, Maha F.; Abdel-Aziz, Omar; Ayad, Miriam F.; Tadros, Mariam M.

New, simple, specific, accurate, precise and reproducible spectrophotometric methods have been developed and subsequently validated for the determination of vildagliptin (VLG) and metformin (MET) in a binary mixture. A zero-order spectrophotometric method was the first method used, for determination of MET in the range of 2-12 μg mL-1 by measuring the absorbance at 237.6 nm. The second method was a derivative spectrophotometric technique, utilized for determination of MET at 247.4 nm in the range of 1-12 μg mL-1. A derivative ratio spectrophotometric method was the third technique, used for determination of VLG in the range of 4-24 μg mL-1 at 265.8 nm. The fourth and fifth methods, adopted for determination of VLG in the range of 4-24 μg mL-1, were ratio subtraction and mean centering spectrophotometric methods, respectively. All the results were statistically compared with those of the reported methods using one-way analysis of variance (ANOVA). The developed methods were satisfactorily applied to the analysis of the investigated drugs and proved to be specific and accurate for their quality control in pharmaceutical dosage forms.
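
Derivative spectrophotometry, used here for MET at 247.4 nm, amounts to numerically differentiating the absorbance spectrum so that broad overlapping bands are suppressed. A minimal central-difference sketch (instrument software typically adds smoothing, e.g. Savitzky-Golay, which is omitted here):

```python
def first_derivative(wavelengths, absorbances):
    """Central-difference dA/dlambda; one-sided at the endpoints."""
    n = len(wavelengths)
    deriv = []
    for i in range(n):
        lo = max(i - 1, 0)
        hi = min(i + 1, n - 1)
        deriv.append((absorbances[hi] - absorbances[lo]) /
                     (wavelengths[hi] - wavelengths[lo]))
    return deriv
```

Quantitation then uses the derivative amplitude at the analytical wavelength (here 247.4 nm) in place of the raw absorbance when building the calibration curve.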

  18. Simultaneous analysis of eight bioactive compounds in Danning tablet by HPLC-ESI-MS and HPLC-UV.

    PubMed

    Liu, Runhui; Zhang, Jiye; Liang, Mingjin; Zhang, Weidong; Yan, Shikai; Lin, Min

    2007-02-19

A high performance liquid chromatography (HPLC) method coupled with electrospray tandem mass spectrometry (ESI-MS) and an ultraviolet detector (UV) has been developed for the simultaneous analysis of eight bioactive compounds in Danning tablet (including hyperin, hesperidin, resveratrol, nobiletin, curcumine, emodin, chrysophanol, and physcion), a widely used prescription of traditional Chinese medicine (TCM). The chromatographic separation was performed on a ZORBAX Extend C(18) analytical column by gradient elution with acetonitrile and formate buffer (containing 0.05% formic acid, adjusted with triethylamine to pH 5.0) at a flow rate of 0.8 ml/min. The eight compounds in Danning tablet were identified and their MS(n) fragmentations were elucidated by using HPLC-ESI-MS, and the contents of these compounds were determined by using the HPLC-UV method. The standard calibration curves were linear between 5.0 and 100 microg/ml for hyperin, 10-200 microg/ml for hesperidin, 1.0-150 microg/ml for resveratrol, 2.0-120 microg/ml for nobiletin, 2.0-225 microg/ml for curcumine, 20-300 microg/ml for emodin, 2.0-200 microg/ml for chrysophanol, and 20-250 microg/ml for physcion with regression coefficient r(2)>0.9995. The intra-day and inter-day precisions of this method were evaluated with R.S.D. values less than 0.7% and 1.3%, respectively. The recoveries of the eight investigated compounds ranged from 99.3% to 100.2% with R.S.D. values less than 1.5%. This method was successfully used to determine the eight target compounds in 10 batches of Danning tablet.

  19. Retention of Basic Life Support in Laypeople: Mastery Learning vs. Time-based Education.

    PubMed

    Boet, Sylvain; Bould, M Dylan; Pigford, Ashlee-Ann; Rössler, Bernhard; Nambyiah, Pratheeban; Li, Qi; Bunting, Alexandra; Schebesta, Karl

    2017-01-01

To compare the effectiveness of a mastery learning (ML) versus a time-based (TB) BLS course for the acquisition and retention of BLS knowledge and skills in laypeople. After ethics approval, laypeople were randomized to a ML or TB BLS course based on the American Heart Association (AHA) Heartsaver course. In the ML group, subjects practiced and received feedback at six BLS stations until they reached a pre-determined level of performance. The TB group received a standard AHA six-station BLS course. All participants took the standard in-course BLS skills test at the end of their course. BLS skills and knowledge were tested using a high-fidelity scenario and knowledge questionnaire upon course completion (immediate post-test) and after four months (retention test). Video-recorded scenarios were assessed by two blinded, independent raters using the AHA skills checklist. Forty-three subjects were included in the analysis (23 ML; 20 TB). For the primary outcome, subjects' performance did not change after four months, regardless of the teaching modality (TB from (median[IQR]) 8.0[6.125;8.375] to 8.5[5.625;9.0] vs. ML from 8.0[7.0;9.0] to 7.0[6.0;8.0], p = 0.12 for test phase, p = 0.21 for interaction between effect of teaching modality and test phase). For secondary outcomes, subjects acquired knowledge between pre- and immediate post-tests (p < 0.005), and partially retained the acquired knowledge up to four months (p < 0.005) despite a decrease between immediate post-test and retention test (p = 0.009), irrespective of the group (p = 0.59) (TB from 63.3[48.3;73.3] to 93.3[81.7;100.0] and then 93.3[81.7;93.3] vs. ML from 60.0[46.7;66.7] to 93.3[80.0;100.0] and then 80.0[73.3;93.3]). Regardless of the group, after four months chest compression depth improved (TB from 39.0[35.0;46.0] to 48.5[40.25;58.0] vs. ML from 40.0[37.0;47.0] to 45.0[37.0;52.0]; p = 0.012), but not the rate (TB from 118.0[114.0;125.0] to 120.5[113.0;129.5] vs. 
ML from 119.0[113.0;130.0] to 123.0[102.0;132.0]; p = 0.70). All subjects passed the in-course BLS skills test. Pass/fail rates were poor in both groups at both the simulated immediate post-test (ML = 1/22;TB = 0/20; p = 0.35) and retention test (ML pass/fail = 1/22, TB pass/fail = 0/20; p = 0.35). The ML course was slightly longer than the TB course (108[94;117] min vs. 95[89;102] min; p = 0.003). There was no major benefit of a ML compared to a TB BLS course for the acquisition and four-month retention of knowledge or skills among laypeople.

  20. Development and validation of a method for the determination of low-ppb levels of macrocyclic lactones in butter, using HPLC-fluorescence.

    PubMed

    Macedo, Fabio; Marsico, Eliane Teixeira; Conte-Júnior, Carlos Adam; de Resende, Michele Fabri; Brasil, Taila Figueiredo; Pereira Netto, Annibal Duarte

    2015-07-15

    An analytical method was developed and validated for the simultaneous determination of four macrocyclic lactones (ML) (abamectin, doramectin, ivermectin and moxidectin) in butter, using liquid chromatography with fluorescence detection. The method employed heated liquid-liquid extraction and a mixture of acetonitrile, ethyl acetate and water, with preconcentration and derivatization, to produce stable fluorescent derivatives. The chromatographic run time was <12.5 min, with excellent separation. The method validation followed international guidelines and employed fortified butter samples. The figures of merit obtained, e.g. recovery (72.4-106.5%), repeatability (8.8%), within-laboratory reproducibility (15.7%) and limits of quantification (0.09-0.16 μg kg(-1)) were satisfactory for the desired application. The application of the method to real samples showed that ML residues were present in six of the ten samples evaluated. The method proved to be simple, easy and appropriate for simultaneous determination of ML residues in butter. To our knowledge, this is the first method described for the evaluation of ML in butter. Copyright © 2015. Published by Elsevier Ltd.

  1. Malignancy Detection on Mammography Using Dual Deep Convolutional Neural Networks and Genetically Discovered False Color Input Enhancement.

    PubMed

    Teare, Philip; Fishman, Michael; Benzaquen, Oshra; Toledano, Eyal; Elnekave, Eldad

    2017-08-01

Breast cancer is the most prevalent malignancy in the US and the third highest cause of cancer-related mortality worldwide. Regular mammography screening has been credited with doubling the rate of early cancer detection over the past three decades, yet estimates of mammographic accuracy in the hands of experienced radiologists remain suboptimal with sensitivity ranging from 62 to 87% and specificity from 75 to 91%. Advances in machine learning (ML) in recent years have demonstrated capabilities of image analysis which often surpass those of human observers. Here we present two novel techniques to address inherent challenges in the application of ML to the domain of mammography. We describe the use of genetic search of image enhancement methods, leading us to the use of a novel form of false color enhancement through contrast limited adaptive histogram equalization (CLAHE), as a method to optimize mammographic feature representation. We also utilize dual deep convolutional neural networks at different scales, for classification of full mammogram images and derivative patches combined with a random forest gating network as a novel architectural solution capable of discerning malignancy with a specificity of 0.91 and a sensitivity of 0.80. To our knowledge, this represents the first automatic stand-alone mammography malignancy detection algorithm with sensitivity and specificity performance similar to that of expert radiologists.
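
The false-color input described above stacks differently enhanced renderings of the same mammogram into the network's three input channels. As an illustration only, here is plain global histogram equalization (CLAHE adds tiling and a contrast clip limit on top of this idea) and a toy three-channel stack; the function names are hypothetical, and a real pipeline would use an image library's CLAHE implementation:

```python
def equalize(pixels, levels=256):
    """Global histogram equalization of integer gray levels in [0, levels)."""
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    n = len(pixels)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:          # flat image: nothing to stretch
        return list(pixels)
    scale = (levels - 1) / (n - cdf_min)
    return [round((cdf[v] - cdf_min) * scale) for v in pixels]

def false_color(pixels):
    """Toy three-channel stack: raw, equalized, and inverted copies."""
    eq = equalize(pixels)
    return list(zip(pixels, eq, [255 - v for v in pixels]))
```

The point of such stacking is that a network pretrained on RGB images receives three complementary views of the same tissue rather than one gray channel repeated.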

  2. Many-Body Descriptors for Predicting Molecular Properties with Machine Learning: Analysis of Pairwise and Three-Body Interactions in Molecules.

    PubMed

    Pronobis, Wiktor; Tkatchenko, Alexandre; Müller, Klaus-Robert

    2018-06-12

Machine learning (ML) based prediction of molecular properties across chemical compound space is an important and alternative approach to efficiently estimate the solutions of highly complex many-electron problems in chemistry and physics. Statistical methods represent molecules as descriptors that should encode molecular symmetries and interactions between atoms. Many such descriptors have been proposed; all of them have advantages and limitations. Here, we propose a set of general two-body and three-body interaction descriptors which are invariant to translation, rotation, and atomic indexing. By adapting the successfully used kernel ridge regression methods of machine learning, we evaluate our descriptors on predicting several properties of small organic molecules calculated using density-functional theory. We use two data sets. The GDB-7 set contains 6868 molecules with up to 7 heavy atoms of type CNO. The GDB-9 set is composed of 131722 molecules with up to 9 heavy atoms containing CNO. When trained on 5000 random molecules, our best model achieves an accuracy of 0.8 kcal/mol (on the remaining 1868 molecules of GDB-7) and 1.5 kcal/mol (on the remaining 126722 molecules of GDB-9), respectively. Applying a linear regression model on our novel many-body descriptors performs almost as well as a nonlinear kernelized model. Linear models are readily interpretable: a feature importance ranking measure helps to obtain qualitative and quantitative insights on the importance of two- and three-body molecular interactions for predicting molecular properties computed with quantum-mechanical methods.
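
Kernel ridge regression, the learning method the authors adapt, fits in a few lines once descriptors are given as fixed-length vectors; this sketch uses a Gaussian kernel and deliberately omits the paper's actual two- and three-body descriptor construction:

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """K[i, j] = exp(-||A_i - B_j||^2 / (2 sigma^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def krr_fit(X, y, sigma=1.0, lam=1e-8):
    """Solve (K + lam * I) alpha = y for the dual weights alpha."""
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_new, sigma=1.0):
    """Prediction is a kernel-weighted sum over the training set."""
    return gaussian_kernel(X_new, X_train, sigma) @ alpha
```

The regularizer lam trades training-set fit against smoothness; in property-prediction work, sigma and lam are typically chosen by cross-validation on the 5000-molecule training split.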

  3. Enhancing the Biological Relevance of Machine Learning Classifiers for Reverse Vaccinology.

    PubMed

    Heinson, Ashley I; Gunawardana, Yawwani; Moesker, Bastiaan; Hume, Carmen C Denman; Vataga, Elena; Hall, Yper; Stylianou, Elena; McShane, Helen; Williams, Ann; Niranjan, Mahesan; Woelk, Christopher H

    2017-02-01

    Reverse vaccinology (RV) is a bioinformatics approach that can predict antigens with protective potential from the protein coding genomes of bacterial pathogens for subunit vaccine design. RV has become firmly established following the development of the BEXSERO® vaccine against Neisseria meningitidis serogroup B. RV studies have begun to incorporate machine learning (ML) techniques to distinguish bacterial protective antigens (BPAs) from non-BPAs. This research contributes significantly to the RV field by using permutation analysis to demonstrate that a signal for protective antigens can be curated from published data. Furthermore, the effects of the following on an ML approach to RV were also assessed: nested cross-validation, balancing selection of non-BPAs for subcellular localization, increasing the training data, and incorporating greater numbers of protein annotation tools for feature generation. These enhancements yielded a support vector machine (SVM) classifier that could discriminate BPAs (n = 200) from non-BPAs (n = 200) with an area under the curve (AUC) of 0.787. In addition, hierarchical clustering of BPAs revealed that intracellular BPAs clustered separately from extracellular BPAs. However, no immediate benefit was derived when training SVM classifiers on data sets exclusively containing intra- or extracellular BPAs. In conclusion, this work demonstrates that ML classifiers have great utility in RV approaches and will lead to new subunit vaccines in the future.
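
The AUC of 0.787 quoted above has a direct pairwise interpretation: it is the probability that the classifier scores a randomly chosen BPA above a randomly chosen non-BPA. A minimal sketch of that Mann-Whitney computation:

```python
def auc(pos_scores, neg_scores):
    """Mann-Whitney estimate of the area under the ROC curve:
    fraction of (positive, negative) score pairs ranked correctly,
    counting ties as half a win."""
    wins = 0.0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

In a nested cross-validation setup like the one described, this statistic would be computed on the held-out outer folds only, so that hyperparameter tuning in the inner folds does not inflate it.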

  4. Structural brain changes versus self-report: machine-learning classification of chronic fatigue syndrome patients.

    PubMed

    Sevel, Landrew S; Boissoneault, Jeff; Letzen, Janelle E; Robinson, Michael E; Staud, Roland

    2018-05-30

Chronic fatigue syndrome (CFS) is a disorder associated with fatigue, pain, and structural/functional abnormalities seen during magnetic resonance brain imaging (MRI). Therefore, we evaluated the performance of structural MRI (sMRI) abnormalities in the classification of CFS patients versus healthy controls and compared it to machine learning (ML) classification based upon self-report (SR). Participants included 18 CFS patients and 15 healthy controls (HC). All subjects underwent T1-weighted sMRI and provided visual analogue-scale ratings of fatigue, pain intensity, anxiety, depression, anger, and sleep quality. sMRI data were segmented using FreeSurfer, and 61 regions were selected based on functional and structural abnormalities previously reported in patients with CFS. Classification was performed in RapidMiner using a linear support vector machine and bootstrap optimism correction. We compared ML classifiers based on (1) 61 a priori sMRI regional estimates and (2) SR ratings. The sMRI model achieved 79.58% classification accuracy. The SR model (accuracy = 95.95%) outperformed the sMRI model. Estimates from multiple brain areas related to cognition, emotion, and memory contributed strongly to group classification. This is the first ML-based group classification of CFS. Our findings suggest that sMRI abnormalities are useful for discriminating CFS patients from HC, but SR ratings remain most effective in classification tasks.

  5. An IoT-Enabled Stroke Rehabilitation System Based on Smart Wearable Armband and Machine Learning

    PubMed Central

    Yang, Geng; Pang, Gaoyang; Zhang, Hao; Li, Jiayi; Deng, Bin; Pang, Zhibo; Xu, Juan; Jiang, Mingzhe; Liljeberg, Pasi; Xie, Haibo; Yang, Huayong

    2018-01-01

Surface electromyography signal plays an important role in hand function recovery training. In this paper, an IoT-enabled stroke rehabilitation system was introduced which was based on a smart wearable armband (SWA), machine learning (ML) algorithms, and a 3-D printed dexterous robot hand. User comfort is one of the key issues which should be addressed for wearable devices. The SWA was developed by integrating a low-power and tiny-sized IoT sensing device with textile electrodes, which can measure, pre-process, and wirelessly transmit bio-potential signals. By evenly distributing surface electrodes over the user’s forearm, the drawbacks of poor classification accuracy can be mitigated. A new method was put forward to find the optimal feature set. ML algorithms were leveraged to analyze and discriminate features of different hand movements, and their performances were appraised by classification complexity estimating algorithms and principal components analysis. According to the verification results, all nine gestures can be successfully identified with an average accuracy up to 96.20%. In addition, a 3-D printed five-finger robot hand was implemented for hand rehabilitation training purposes. Correspondingly, the user’s hand movement intentions were extracted and converted into a series of commands which were used to drive motors assembled inside the dexterous robot hand. As a result, the dexterous robot hand can mimic the user’s gesture in a real-time manner, which shows that the proposed system can be used as a training tool to facilitate the rehabilitation process for patients after stroke. PMID:29805919

  6. "What is relevant in a text document?": An interpretable machine learning approach

    PubMed Central

    Arras, Leila; Horn, Franziska; Montavon, Grégoire; Müller, Klaus-Robert

    2017-01-01

    Text documents can be described by a number of abstract concepts such as semantic category, writing style, or sentiment. Machine learning (ML) models have been trained to automatically map documents to these abstract concepts, allowing to annotate very large text collections, more than could be processed by a human in a lifetime. Besides predicting the text’s category very accurately, it is also highly desirable to understand how and why the categorization process takes place. In this paper, we demonstrate that such understanding can be achieved by tracing the classification decision back to individual words using layer-wise relevance propagation (LRP), a recently developed technique for explaining predictions of complex non-linear classifiers. We train two word-based ML models, a convolutional neural network (CNN) and a bag-of-words SVM classifier, on a topic categorization task and adapt the LRP method to decompose the predictions of these models onto words. Resulting scores indicate how much individual words contribute to the overall classification decision. This enables one to distill relevant information from text documents without an explicit semantic information extraction step. We further use the word-wise relevance scores for generating novel vector-based document representations which capture semantic information. Based on these document vectors, we introduce a measure of model explanatory power and show that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits a higher level of explainability which makes it more comprehensible for humans and potentially more useful for other applications. PMID:28800619
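
For the linear bag-of-words SVM, the LRP decomposition reduces to attributing to each word its weight times its count, so the per-word relevances sum exactly to the pre-bias decision score (the CNN case requires the full layer-wise propagation rules, which are not reproduced here). A sketch with hypothetical words and weights:

```python
def word_relevances(counts, weights):
    """Per-word relevance w_i * x_i for a linear text classifier;
    the relevances sum to the decision score minus the bias term."""
    return {w: weights[w] * c for w, c in counts.items() if w in weights}

def top_words(counts, weights, k=3):
    """Words ranked by how much they push the score toward the class."""
    rel = word_relevances(counts, weights)
    return sorted(rel, key=rel.get, reverse=True)[:k]
```

Words with large positive relevance support the predicted category; large negative relevance marks words speaking against it.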

  7. Comparison of in-house biotin-avidin tetanus IgG enzyme-linked-immunosorbent assay (ELISA) with gold standard in vivo mouse neutralization test for the detection of low level antibodies.

    PubMed

    Sonmez, Cemile; Coplu, Nilay; Gozalan, Aysegul; Akin, Lutfu; Esen, Berrin

    2017-06-01

Detection of anti-tetanus antibody levels is necessary both for determination of the immune status of individuals and for planning preventive measures. ELISA is the preferred in vitro test; however, it can be affected by cross-reacting antibodies. A previously developed in-house ELISA test was found unreliable for antibody levels ≤1.0 IU/ml. A new method was developed to detect low antibody levels correctly. The aim of the present study was to compare the results of the newly developed in-house biotin-avidin tetanus IgG ELISA test with the in vivo mouse neutralization test for antibody levels ≤1.0 IU/ml. A total of 54 serum samples at three different antibody levels (≤0.01 IU/ml, 0.01-0.1 IU/ml, and 0.1-1 IU/ml), as determined by the in vivo mouse neutralization test, were studied by the newly developed in-house biotin-avidin tetanus IgG ELISA test. The test was validated using five different concentrations (0.01 IU/ml, 0.06 IU/ml, 0.2 IU/ml, 0.5 IU/ml, 1.0 IU/ml). A statistically significant correlation (r² = 0.9967, p = 0.001) between the in vivo mouse neutralization test and the in-house biotin-avidin tetanus IgG ELISA test was observed. For the tested concentrations, the intra-assay, inter-assay, accuracy, sensitivity, specificity and coefficients of variation were determined as ≤15%. The in-house biotin-avidin tetanus IgG ELISA test can be an alternative to the in vivo mouse neutralization method for the detection of levels ≤1.0 IU/ml. By using the in-house biotin-avidin tetanus IgG ELISA test, individuals with non-protective levels will be reliably detected. Copyright © 2017. Published by Elsevier B.V.

  8. Activities of E1210 and comparator agents tested by CLSI and EUCAST broth microdilution methods against Fusarium and Scedosporium species identified using molecular methods.

    PubMed

    Castanheira, Mariana; Duncanson, Frederick P; Diekema, Daniel J; Guarro, Josep; Jones, Ronald N; Pfaller, Michael A

    2012-01-01

    Fusarium (n = 67) and Scedosporium (n = 63) clinical isolates were tested by two reference broth microdilution (BMD) methods against a novel broad-spectrum (active against both yeasts and molds) antifungal, E1210, and comparator agents. E1210 inhibits the inositol acylation step in glycophosphatidylinositol (GPI) biosynthesis, resulting in defects in fungal cell wall biosynthesis. Five species complexes/species of Fusarium (4 isolates unspeciated) and 28 Scedosporium apiospermum, 7 Scedosporium aurantiacum, and 28 Scedosporium prolificans isolates were identified by molecular techniques. Comparator antifungal agents included anidulafungin, caspofungin, itraconazole, posaconazole, voriconazole, and amphotericin B. E1210 was highly active against all of the tested isolates, with minimum effective concentration (MEC)/MIC90 values for E1210, anidulafungin, caspofungin, itraconazole, posaconazole, voriconazole, and amphotericin B, respectively, for Fusarium of 0.12, >16, >16, >8, >8, 8, and 4 μg/ml. E1210 was very potent against the Scedosporium spp. tested. The E1210 MEC90 was 0.12 μg/ml for S. apiospermum, versus 1 to >8 μg/ml for the other tested agents. Against S. aurantiacum, the MEC50 for E1210 was 0.06 μg/ml versus 0.5 to >8 μg/ml for the comparators. Against S. prolificans, the MEC90 for E1210 was only 0.12 μg/ml, compared to >4 μg/ml for amphotericin B and >8 μg/ml for itraconazole, posaconazole, and voriconazole. The CLSI and EUCAST methods were highly concordant for E1210 and all comparator agents. The essential agreement (EA; ±2 doubling dilutions) was >93% for all comparisons, with the exception of posaconazole and the F. oxysporum species complex (SC) (60%), posaconazole and S. aurantiacum (85.7%), and voriconazole and S. aurantiacum (85.7%). In conclusion, E1210 exhibited very potent and broad-spectrum antifungal activity against azole- and amphotericin B-resistant strains of Fusarium spp. and Scedosporium spp. Furthermore, in vitro susceptibility testing of E1210 against isolates of Fusarium and Scedosporium may be accomplished using either the CLSI or the EUCAST BMD method, which produce very similar results.

  9. Activities of E1210 and Comparator Agents Tested by CLSI and EUCAST Broth Microdilution Methods against Fusarium and Scedosporium Species Identified Using Molecular Methods

    PubMed Central

    Duncanson, Frederick P.; Diekema, Daniel J.; Guarro, Josep; Jones, Ronald N.; Pfaller, Michael A.

    2012-01-01

    Fusarium (n = 67) and Scedosporium (n = 63) clinical isolates were tested by two reference broth microdilution (BMD) methods against a novel broad-spectrum (active against both yeasts and molds) antifungal, E1210, and comparator agents. E1210 inhibits the inositol acylation step in glycophosphatidylinositol (GPI) biosynthesis, resulting in defects in fungal cell wall biosynthesis. Five species complexes/species of Fusarium (4 isolates unspeciated) and 28 Scedosporium apiospermum, 7 Scedosporium aurantiacum, and 28 Scedosporium prolificans isolates were identified by molecular techniques. Comparator antifungal agents included anidulafungin, caspofungin, itraconazole, posaconazole, voriconazole, and amphotericin B. E1210 was highly active against all of the tested isolates, with minimum effective concentration (MEC)/MIC90 values for E1210, anidulafungin, caspofungin, itraconazole, posaconazole, voriconazole, and amphotericin B, respectively, for Fusarium of 0.12, >16, >16, >8, >8, 8, and 4 μg/ml. E1210 was very potent against the Scedosporium spp. tested. The E1210 MEC90 was 0.12 μg/ml for S. apiospermum, versus 1 to >8 μg/ml for the other tested agents. Against S. aurantiacum, the MEC50 for E1210 was 0.06 μg/ml versus 0.5 to >8 μg/ml for the comparators. Against S. prolificans, the MEC90 for E1210 was only 0.12 μg/ml, compared to >4 μg/ml for amphotericin B and >8 μg/ml for itraconazole, posaconazole, and voriconazole. The CLSI and EUCAST methods were highly concordant for E1210 and all comparator agents. The essential agreement (EA; ±2 doubling dilutions) was >93% for all comparisons, with the exception of posaconazole and the F. oxysporum species complex (SC) (60%), posaconazole and S. aurantiacum (85.7%), and voriconazole and S. aurantiacum (85.7%). In conclusion, E1210 exhibited very potent and broad-spectrum antifungal activity against azole- and amphotericin B-resistant strains of Fusarium spp. and Scedosporium spp.
Furthermore, in vitro susceptibility testing of E1210 against isolates of Fusarium and Scedosporium may be accomplished using either the CLSI or the EUCAST BMD method, which produce very similar results. PMID:22083469
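The essential agreement metric used above — the percentage of isolate pairs whose results from the two methods fall within ±2 doubling (log2) dilutions — can be sketched as follows (the MIC values in the example are hypothetical, not the study's data):

```python
import math

def essential_agreement(clsi, eucast, dilutions=2):
    """Percent of isolate pairs whose MICs agree within +/- `dilutions`
    doubling (log2) dilution steps."""
    agree = sum(
        1 for a, b in zip(clsi, eucast)
        if abs(math.log2(a) - math.log2(b)) <= dilutions
    )
    return 100.0 * agree / len(clsi)

# Hypothetical paired MICs (ug/ml) for five isolates, illustration only
clsi = [0.125, 0.25, 0.0625, 0.5, 0.125]
eucast = [0.125, 0.0625, 0.25, 4.0, 0.125]
print(essential_agreement(clsi, eucast))  # 4 of 5 pairs within 2 dilutions -> 80.0
```

Working on the log2 scale makes "within two doubling dilutions" a simple absolute difference.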

  10. Performance Characteristics of the QUANTIPLEX HIV-1 RNA 3.0 Assay for Detection and Quantitation of Human Immunodeficiency Virus Type 1 RNA in Plasma

    PubMed Central

    Erice, Alejo; Brambilla, Donald; Bremer, James; Jackson, J. Brooks; Kokka, Robert; Yen-Lieberman, Belinda; Coombs, Robert W.

    2000-01-01

    The QUANTIPLEX HIV-1 RNA assay, version 3.0 (a branched DNA, version 3.0, assay [bDNA 3.0 assay]), was evaluated by analyzing spiked and clinical plasma samples and was compared with the AMPLICOR HIV-1 MONITOR Ultrasensitive (ultrasensitive reverse transcription-PCR [US-RT-PCR]) method. A panel of spiked plasma samples that contained 0 to 750,000 copies of human immunodeficiency virus type 1 (HIV-1) RNA per ml was tested four times in each of four laboratories (1,344 assays). Negative results (<50 copies/ml) were obtained in 30 of 32 (94%) assays with seronegative samples, 66 of 128 (52%) assays with HIV-1 RNA at 50 copies/ml, and 5 of 128 (4%) assays with HIV-1 RNA at 100 copies/ml. The assay was linear from 100 to 500,000 copies/ml. The within-run standard deviation (SD) of the log10 estimated HIV-1 RNA concentration was 0.08 at 1,000 to 500,000 copies/ml, increased below 1,000 copies/ml, and was 0.17 at 100 copies/ml. Between-run reproducibility at 100 to 500 copies/ml was <0.10 log10 in most comparisons. Interlaboratory differences across runs were ≤0.10 log10 at all concentrations examined. A subset of the panel (25 to 500 copies/ml) was also analyzed by the US-RT-PCR assay. The within-run SD varied inversely with the log10 HIV-1 RNA concentration but was higher than the SD for the bDNA 3.0 assay at all concentrations. Log-log regression analysis indicated that the two methods produced very similar estimates at 100 to 500 copies/ml. In parallel testing of clinical specimens with low HIV-1 RNA levels, 80 plasma samples with <50 copies/ml by the US-RT-PCR assay had <50 copies/ml when they were retested by the bDNA 3.0 assay. In contrast, 11 of 78 (14%) plasma samples with <50 copies/ml by the bDNA 3.0 assay had ≥50 copies/ml when they were retested by the US-RT-PCR assay (median, 86 copies/ml; range, 50 to 217 copies/ml). 
Estimation of bDNA 3.0 values of <50 copies/ml by extending the standard curve of the assay showed that these samples with discrepant results had higher HIV-1 RNA levels than the samples with concordant results (median, 34 versus 17 copies/ml; P = 0.0051 by the Wilcoxon two-sample test). The excellent reproducibility, broad linear range, and good sensitivity of the bDNA 3.0 assay make it a very attractive method for quantitation of HIV-1 RNA levels in plasma. PMID:10921936
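Within-run variability above is summarized as the SD of log10-transformed replicate estimates; a minimal sketch (the replicate values are made up):

```python
import math
from statistics import stdev

def within_run_sd_log10(replicates):
    """SD of log10-transformed replicate measurements (copies/ml)."""
    logs = [math.log10(v) for v in replicates]
    return stdev(logs)

# Hypothetical replicate results at a nominal 1,000 copies/ml
reps = [950.0, 1020.0, 1100.0, 980.0]
sd = within_run_sd_log10(reps)
```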

  11. [Determination of residual solvents in 7-amino-3-chloro cephalosporanic acid by gas chromatography].

    PubMed

    Ma, Li; Yao, Tong-wei

    2011-01-01

    To develop a gas chromatography method for the determination of residual solvents in 7-amino-3-chloro cephalosporanic acid (7-ACCA). The residual levels of acetone, methanol, dichloromethane, ethyl acetate, isobutanol, pyridine and toluene in 7-ACCA were measured by gas chromatography using an Agilent INNOWAX capillary column (30 m × 0.32 mm, 0.5 μm). The initial column temperature of 70°C was maintained for 6 min and then raised at 10°C/min to 160°C, where it was held for 1 min. Nitrogen was used as the carrier gas at a flow rate of 1.0 ml/min, with flame ionization detection (FID); the injection port and detector temperatures were 200°C and 250°C, respectively. The limits of detection for acetone, methanol, dichloromethane, ethyl acetate, isobutanol, pyridine and toluene in 7-ACCA were 2.5 μg/ml, 1.5 μg/ml, 15 μg/ml, 2.5 μg/ml, 2.5 μg/ml, 2.5 μg/ml and 11 μg/ml, respectively. Only acetone was detected in the sample, at a level below the limit set by the Chinese Pharmacopoeia (Ch.P). The method can effectively detect the residual solvents in 7-ACCA.

  12. Assessment of quality outcomes for robotic pancreaticoduodenectomy: identification of the learning curve.

    PubMed

    Boone, Brian A; Zenati, Mazen; Hogg, Melissa E; Steve, Jennifer; Moser, Arthur James; Bartlett, David L; Zeh, Herbert J; Zureikat, Amer H

    2015-05-01

    Quality assessment is an important instrument to ensure optimal surgical outcomes, particularly during the adoption of new surgical technology. The use of the robotic platform for complex pancreatic resections, such as the pancreaticoduodenectomy, requires close monitoring of outcomes during its implementation phase to ensure patient safety is maintained and the learning curve identified. To report the results of a quality analysis and learning curve during the implementation of robotic pancreaticoduodenectomy (RPD). A retrospective review of a prospectively maintained database of 200 consecutive patients who underwent RPD in a large academic center from October 3, 2008, through March 1, 2014, was evaluated for important metrics of quality. Patients were analyzed in groups of 20 to minimize demographic differences and optimize the ability to detect statistically meaningful changes in performance. Robotic pancreaticoduodenectomy. Optimization of perioperative outcome parameters. No statistical differences in mortality rates or major morbidity were noted during the study. Statistical improvements in estimated blood loss and conversions to open surgery occurred after 20 cases (600 mL vs 250 mL [P = .002] and 35.0% vs 3.3% [P < .001], respectively), incidence of pancreatic fistula after 40 cases (27.5% vs 14.4%; P = .04), and operative time after 80 cases (581 minutes vs 417 minutes [P < .001]). Complication rates, lengths of stay, and readmission rates showed continuous improvement that did not reach statistical significance. Outcomes for the last 120 cases (representing optimized metrics beyond the learning curve) included a mean operative time of 417 minutes, median estimated blood loss of 250 mL, a conversion rate of 3.3%, 90-day mortality of 3.3%, a clinically significant (grade B/C) pancreatic fistula rate of 6.9%, and a median length of stay of 9 days. Continuous assessment of quality metrics allows for safe implementation of RPD. 
We identified several inflection points corresponding to optimization of performance metrics for RPD that can be used as benchmarks for surgeons adopting this technology.
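Grouping consecutive cases into fixed-size blocks, as the study does with blocks of 20, reduces to slicing the case series and summarizing each slice; a sketch with a synthetic estimated-blood-loss series (not the study's data):

```python
from statistics import median

def block_medians(values, block_size=20):
    """Median of an outcome metric within consecutive case blocks."""
    return [median(values[i:i + block_size])
            for i in range(0, len(values), block_size)]

# Synthetic estimated-blood-loss series (ml): early cases higher, later lower
ebl = [600] * 20 + [250] * 40
print(block_medians(ebl))  # medians for blocks 1-3
```

Per-block summaries like these are what get compared statistically to locate the learning-curve inflection points.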

  13. Comparison of macronutrient contents in human milk measured using mid-infrared human milk analyser in a field study vs. chemical reference methods.

    PubMed

    Zhu, Mei; Yang, Zhenyu; Ren, Yiping; Duan, Yifan; Gao, Huiyu; Liu, Biao; Ye, Wenhui; Wang, Jie; Yin, Shian

    2017-01-01

    Macronutrient contents in human milk are the common basis for estimating these nutrient requirements for both infants and lactating women. A mid-infrared human milk analyser (HMA, Miris, Sweden) was recently developed for determining macronutrient levels. The purpose of the study was to compare the accuracy and precision of the HMA method, applied to fresh milk samples in the field, with chemical reference methods applied to frozen samples in the lab. Full breast milk was collected using electric pumps, and fresh milk was analyzed in the field using the HMA. All human milk samples were then thawed and analyzed with chemical reference methods in the lab. The protein, fat and total solid levels were significantly correlated between the two methods, with correlation coefficients of 0.88, 0.93 and 0.78, respectively (p < 0.001). The mean protein content was significantly lower and the mean fat level significantly greater when measured using the HMA method (1.0 g/100 mL vs 1.2 g/100 mL and 3.7 g/100 mL vs 3.2 g/100 mL, respectively, p < 0.001). Thus, linear recalibration could be used to improve mean estimation for both protein and fat. There was no significant correlation for lactose between the two methods (p > 0.05). There was no statistically significant difference in the mean total solid concentration (12.2 g/100 mL vs 12.3 g/100 mL, p > 0.05). Overall, the HMA might be used to analyze macronutrients in fresh human milk with acceptable accuracy and precision after recalibrating fat and protein levels of field samples. © 2016 John Wiley & Sons Ltd.
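The linear recalibration suggested above amounts to fitting an ordinary least-squares line mapping HMA readings onto the chemical reference values; a sketch with hypothetical paired protein readings:

```python
def fit_linear(x, y):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical paired protein readings (g/100 mL): HMA vs chemical reference
hma = [0.9, 1.0, 1.1, 1.2]
ref = [1.1, 1.2, 1.3, 1.4]
a, b = fit_linear(hma, ref)
recalibrated = [a * v + b for v in hma]  # HMA values mapped to reference scale
```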

  14. Examining Changes to Center of Pressure During the First Trials of Wii Gameplay.

    PubMed

    Reed-Jones, Rebecca; Carvalho, Laura; Sanderson, Chelsey; Montelpare, William; Murray, Nicholas; Powell, Douglas

    2017-02-01

    Use of the Nintendo Wii™ as a balance assessment and rehabilitation tool continues to grow. One advantage of the Wii is that games can serve as a virtual reality training tool; however, a disadvantage of the Wii is the human-machine interface and the learning effect over multiple trials. The purpose of this study was to assess changes in postural control during Wii gameplay over a series of trials in novice players. Thirty-one university athletes (aged 18-25 years) completed four trials of the Nintendo Wii Fit™ soccer heading (SH) balance game. Center of pressure (COP) was calculated in the anterior-posterior (AP) and medial-lateral (ML) directions for each 70-second trial, sampled at 1000 Hz. COP was assessed using six linear and two nonlinear measures. Repeated-measures analyses of variance compared COP measures over the four trials. Significant differences in COP magnitude and velocity were found between trials 1 and 2 in the ML direction. No significant effects of trial were found in the AP direction. In contrast, a measure of the overall area of COP using an ellipse method revealed a significant reduction in COP area between trials 3 and 4. No significant differences between trials were observed in the nonlinear measures. These results demonstrate that magnitude and velocity measures of COP control stabilize after the first trial of Wii SH gameplay in novice young adults. As Wii rehabilitation focuses on individuals with balance difficulties, an important consideration when using the game as an assessment tool is that more than four trials may be required to capture learning in these populations. In addition, the contrasting results from the ellipse measurement method point to the use of multiple measures for a robust description of COP behavior. This work provides an understanding of normative postural control responses; further research in clinical populations is needed.
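The ellipse method is not specified in detail in the abstract; one common choice is a 95% prediction ellipse whose area is pi * chi2(0.95, 2 df) * sqrt(det(Sigma)), where Sigma is the sample covariance of the AP and ML series. A sketch with synthetic COP data (this particular formula is an assumption, not necessarily the authors'):

```python
import math
import random

CHI2_95_2DF = 5.991  # 95% quantile of the chi-square distribution, 2 df

def ellipse_area_95(ap, ml):
    """95% prediction-ellipse area from paired AP/ML center-of-pressure series."""
    n = len(ap)
    ma, mm = sum(ap) / n, sum(ml) / n
    saa = sum((a - ma) ** 2 for a in ap) / (n - 1)          # AP variance
    smm = sum((m - mm) ** 2 for m in ml) / (n - 1)          # ML variance
    sam = sum((a - ma) * (m - mm)
              for a, m in zip(ap, ml)) / (n - 1)            # AP-ML covariance
    det = saa * smm - sam ** 2                              # det of covariance
    return math.pi * CHI2_95_2DF * math.sqrt(det)

# Synthetic 70-s COP traces (mm), independent AP and ML sway
random.seed(1)
ap = [random.gauss(0, 5) for _ in range(1000)]
ml = [random.gauss(0, 3) for _ in range(1000)]
area = ellipse_area_95(ap, ml)
```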

  15. BACLAB: A Computer Simulation of a Medical Bacteriology Laboratory--An Aid for Teaching Tertiary Level Microbiology.

    ERIC Educational Resources Information Center

    Lewington, J.; And Others

    1985-01-01

    Describes a computer simulation program which helps students learn the main biochemical tests and profiles for identifying medically important bacteria. Also discusses the advantages and applications of this type of approach. (ML)

  16. High FSH decreases the developmental potential of mouse oocytes and resulting fertilized embryos, but does not influence offspring physiology and behavior in vitro or in vivo.

    PubMed

    Li, Min; Zhao, Yue; Zhao, Cui H; Yan, Jie; Yan, Ying L; Rong, Li; Liu, Ping; Feng, Huai-Liang; Yu, Yang; Qiao, Jie

    2013-05-01

    Do different concentrations of FSH in the assisted reproductive technology (ART) procedure in vitro or in vivo affect the developmental competence of oocytes, the embryos and the offspring conceived from these embryos? Improper FSH treatment (200 IU/l in vitro, 10 IU/ml in vivo and 200 IU/ml in vivo) impairs the developmental competence of oocytes and embryos, but does not influence offspring physiology and behavior. Exogenous FSH has been widely used in the field of ART. However, the effects of different concentrations of FSH on the developmental competence of oocytes, embryos and the offspring conceived from these embryos are still unknown. In a prospective study, a total of 45 mice at 8-10 weeks of age were primed in vivo with different dosages of FSH (9 mice in the 10 IU/ml, 10 mice in the 50 IU/ml, 10 mice in the 100 IU/ml and 16 mice in the 200 IU/ml groups). Fresh MII oocytes were retrieved from the ovaries: this was designated the in vivo group. Thirty-six mice at 8-10 weeks of age were sacrificed by cervical dislocation to obtain ovaries without FSH treatment (9 mice in the 0 IU/l, 9 mice in the 50 IU/l, 8 mice in the 100 IU/l and 10 mice in the 200 IU/l groups); the immature oocytes were then collected from these ovaries and cultured in in vitro maturation medium supplemented with 0, 50, 100 and 200 IU/l FSH: this was designated the in vitro group. Spindle assembly of matured MII oocytes was stained via an immunofluorescence method and the proportion of oocytes with normal spindles was analyzed. The developmental competence of the resulting fertilized embryos in the pre- and post-implantation stages was examined in the in vitro and in vivo groups. Furthermore, physiological indices, including reproductive potential and body weight, of the offspring were investigated by mating experiments, and behavior indices, including learning, memory, probing and intelligence, were tested by Morris water maze in the in vitro and in vivo groups.
In the in vitro groups, the oocyte maturation competence, normal spindle assembly, blastocyst formation and implantation, as well as viable pup production, were all impaired in the group treated with 200 IU/l FSH (P < 0.05). No differences were observed among the other three groups (P > 0.05). In the in vivo groups, 10 IU/ml FSH but not 200 IU/ml treatment influenced blastocyst formation and viable pup production (P < 0.05), although a high proportion of spindle assembly abnormalities was observed only in the 200 IU/ml FSH treatment group (P < 0.05). Furthermore, there were no significant differences in the physiological indices (reproductive potential and body weight) or behavior indices (learning, memory, probing and intelligence) of offspring from the in vitro and in vivo groups (P > 0.05). The mouse model was used in this study. The results for mouse follicle growth and oocyte development in response to different concentrations of FSH are not 100% transferable to humans, because of the physiological differences between mouse and human. The findings indicate that FSH application in the field of ART is safe for the resulting offspring, but it should be used more carefully for each woman in ART cycles because an inappropriate FSH concentration would decrease oocyte developmental competence. This work was partially supported by the Ministry of Science and Technology of China Grants (973 program; 2011CB944504), the Program for Changjiang Scholars and Innovative Research Team in University of Ministry of Education of China (30825038), the National Natural Science Funds for Young Scholars (31000661) and the Joint Research Fund for Overseas, Hong Kong and Macao Scholars (31128013/C120205). None of the authors has any conflicts of interest.

  17. Comparison of two immunoradiometric assays for serum thyrotropin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheinin, B.; Drew, H.; La France, N.

    1985-05-01

    An ultra-sensitive TSH assay capable of detecting subnormal TSH levels would be useful in confirming suppressed pituitary function as seen in hyperthyroidism. Two sensitive immunoradiometric TSH assays (IRMAs) were studied to determine how well they distinguished thyrotoxic patients from normal subjects. Serono Diagnostics' method employs three monoclonal antibodies specific for different regions of the TSH molecule, with a minimum detectable dose (MDD) limit of 0.1 μIU/ml. Precision studies using a low TSH control in the 1.8 μIU/ml range gave CVs of 15.0%. Boots-Celltech Diagnostics' method is a two-site IRMA using two monoclonal antibodies. The MDD limit is 0.05 μIU/ml, with precision CVs of 29.3% at a TSH control range of 0.62 μIU/ml. In 24 chemically thyrotoxic patients, the mean serum TSH concentration was significantly lower than in the normal control subjects: for Serono, 0.19 μIU/ml vs. 2.34 μIU/ml, and for Boots-Celltech, 0.18 μIU/ml vs. 2.06 μIU/ml. The range of TSH was 0 to 0.5 μIU/ml in thyrotoxic patients using Serono, with the exception of one patient having a TSH value of 0.8 μIU/ml. The normal range was 0.6 to 6.0 μIU/ml. For Boots-Celltech, the thyrotoxic range was 0 to 0.2 μIU/ml, with that same thyrotoxic patient giving a TSH value of 0.7 μIU/ml, and a normal range of 0.6 to 5.0 μIU/ml. Serum TSH measurements using both procedures are highly sensitive for distinguishing thyrotoxic patients from normal subjects and are useful to confirm suppressed pituitary function.
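The precision figures above are coefficients of variation, CV(%) = 100 * SD / mean over replicate control measurements; trivially:

```python
from statistics import mean, stdev

def cv_percent(replicates):
    """Coefficient of variation (%) of replicate control measurements."""
    return 100.0 * stdev(replicates) / mean(replicates)

# Hypothetical replicate TSH control results (uIU/ml), illustration only
controls = [1.6, 1.8, 2.0, 1.8]
print(round(cv_percent(controls), 1))  # -> 9.1
```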

  18. Spectrofluorometric and spectrophotometric methods for the determination of sitagliptin in binary mixture with metformin and ternary mixture with metformin and sitagliptin alkaline degradation product.

    PubMed

    El-Bagary, Ramzia I; Elkady, Ehab F; Ayoub, Bassam M

    2011-03-01

    Simple, accurate and precise spectrofluorometric and spectrophotometric methods have been developed and validated for the determination of sitagliptin phosphate monohydrate (STG) and metformin HCl (MET). Zero-order, first-derivative and ratio-derivative spectrophotometric methods, as well as a fluorometric method, have been developed. The zero-order spectrophotometric method was used for the determination of STG in the range of 50-300 μg/mL. The first-derivative spectrophotometric method was used for the determination of MET in the range of 2-12 μg/mL and STG in the range of 50-300 μg/mL by measuring the peak amplitude at 246.5 nm and 275 nm, respectively. The first derivative of ratio spectra spectrophotometric method used the peak amplitudes at 232 nm and 239 nm for the determination of MET in the range of 2-12 μg/mL. The fluorometric method was used for the determination of STG in the range of 0.25-110 μg/mL. The proposed methods were used to determine each drug in binary mixture with metformin and in ternary mixture with metformin and the sitagliptin alkaline degradation product obtained after alkaline hydrolysis of sitagliptin. The results were statistically compared using one-way analysis of variance (ANOVA). The methods developed were satisfactorily applied to the analysis of the pharmaceutical formulations and proved to be specific and accurate for the quality control of the cited drugs in pharmaceutical dosage forms.
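The ratio-derivative method divides the mixture spectrum by the spectrum of one component and then differentiates, so that component's contribution becomes a constant and drops out. A schematic sketch with synthetic spectra (not real absorbance data):

```python
def ratio_first_derivative(mixture, divisor, step_nm=1.0):
    """First derivative (central differences) of the ratio spectrum."""
    ratio = [m / d for m, d in zip(mixture, divisor)]
    return [(ratio[i + 1] - ratio[i - 1]) / (2 * step_nm)
            for i in range(1, len(ratio) - 1)]

# Synthetic absorbances at 5 wavelengths: mixture = analyte + interferent;
# dividing by the interferent spectrum turns its share into a constant.
interferent = [0.50, 0.60, 0.70, 0.80, 0.90]
analyte = [0.10, 0.30, 0.50, 0.30, 0.10]
mixture = [a + b for a, b in zip(analyte, interferent)]
deriv = ratio_first_derivative(mixture, interferent)
```

Dividing a spectrum by itself gives a flat ratio, whose derivative is identically zero, which is exactly why the divisor component vanishes.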

  19. Spectrofluorometric and Spectrophotometric Methods for the Determination of Sitagliptin in Binary Mixture with Metformin and Ternary Mixture with Metformin and Sitagliptin Alkaline Degradation Product

    PubMed Central

    El-Bagary, Ramzia I.; Elkady, Ehab F.; Ayoub, Bassam M.

    2011-01-01

    Simple, accurate and precise spectrofluorometric and spectrophotometric methods have been developed and validated for the determination of sitagliptin phosphate monohydrate (STG) and metformin HCl (MET). Zero-order, first-derivative and ratio-derivative spectrophotometric methods, as well as a fluorometric method, have been developed. The zero-order spectrophotometric method was used for the determination of STG in the range of 50-300 μg/mL. The first-derivative spectrophotometric method was used for the determination of MET in the range of 2-12 μg/mL and STG in the range of 50-300 μg/mL by measuring the peak amplitude at 246.5 nm and 275 nm, respectively. The first derivative of ratio spectra spectrophotometric method used the peak amplitudes at 232 nm and 239 nm for the determination of MET in the range of 2-12 μg/mL. The fluorometric method was used for the determination of STG in the range of 0.25-110 μg/mL. The proposed methods were used to determine each drug in binary mixture with metformin and in ternary mixture with metformin and the sitagliptin alkaline degradation product obtained after alkaline hydrolysis of sitagliptin. The results were statistically compared using one-way analysis of variance (ANOVA). The methods developed were satisfactorily applied to the analysis of the pharmaceutical formulations and proved to be specific and accurate for the quality control of the cited drugs in pharmaceutical dosage forms. PMID:23675222

  20. jTraML: an open source Java API for TraML, the PSI standard for sharing SRM transitions.

    PubMed

    Helsens, Kenny; Brusniak, Mi-Youn; Deutsch, Eric; Moritz, Robert L; Martens, Lennart

    2011-11-04

    Here we present jTraML, a Java API for the Proteomics Standards Initiative TraML data standard. The library provides fully functional classes for all elements specified in the TraML XSD document, as well as convenient methods to construct controlled vocabulary-based instances required to define SRM transitions. The use of jTraML is demonstrated via a two-way conversion tool between TraML documents and vendor-specific files, facilitating the adoption process of this new community standard. The library is released as open source under the permissive Apache2 license and can be downloaded from http://jtraml.googlecode.com. TraML files can also be converted online at http://iomics.ugent.be/jtraml.

  1. Into the Bowels of Depression: Unravelling Medical Symptoms Associated with Depression by Applying Machine-Learning Techniques to a Community Based Population Sample.

    PubMed

    Dipnall, Joanna F; Pasco, Julie A; Berk, Michael; Williams, Lana J; Dodd, Seetal; Jacka, Felice N; Meyer, Denny

    2016-01-01

    Depression is commonly comorbid with many other somatic diseases and symptoms. Identification of individuals in clusters with comorbid symptoms may reveal new pathophysiological mechanisms and treatment targets. The aim of this research was to combine machine-learning (ML) algorithms with traditional regression techniques, utilising self-reported medical symptoms to identify and describe clusters of individuals with increased rates of depression in a large cross-sectional community based population epidemiological study. A multi-staged methodology utilising ML and traditional statistical techniques was performed using the community based population National Health and Nutrition Examination Survey (2009-2010) (N = 3,922). A self-organizing map (SOM) ML algorithm, combined with hierarchical clustering, was used to create participant clusters based on 68 medical symptoms. Binary logistic regression, controlling for sociodemographic confounders, was then used to identify the key clusters of participants with higher levels of depression (PHQ-9 ≥ 10, n = 377). Finally, a Multiple Additive Regression Tree boosted ML algorithm was run to identify the important medical symptoms for each key cluster within 17 broad categories: heart, liver, thyroid, respiratory, diabetes, arthritis, fractures and osteoporosis, skeletal pain, blood pressure, blood transfusion, cholesterol, vision, hearing, psoriasis, weight, bowels and urinary. Five clusters of participants, based on medical symptoms, were identified as having significantly increased rates of depression compared to the cluster with the lowest rate: odds ratios ranged from 2.24 (95% CI 1.56, 3.24) to 6.33 (95% CI 1.67, 24.02). The ML boosted regression algorithm identified three key medical condition categories as being significantly more common in these clusters: bowel, pain and urinary symptoms. Bowel-related symptoms were found to dominate the relative importance of symptoms within the five key clusters.
This methodology shows promise for the identification of conditions in general populations and supports the current focus on the potential importance of bowel symptoms and the gut in mental health research.
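Cluster-level odds ratios with Wald 95% confidence intervals, like those reported above, come from a 2x2 table of depression status by cluster membership; a sketch with made-up counts:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = depressed in cluster,   b = not depressed in cluster,
    c = depressed in reference, d = not depressed in reference."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, illustration only (not the study's tables)
or_, lo, hi = odds_ratio_ci(40, 160, 50, 450)
```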

  2. Measurement of limb volume: laser scanning versus volume displacement.

    PubMed

    McKinnon, John Gregory; Wong, Vanessa; Temple, Walley J; Galbraith, Callum; Ferry, Paul; Clynch, George S; Clynch, Colin

    2007-10-01

    Determining the prevalence and treatment success of surgical lymphedema requires accurate and reproducible measurement. A new method of measurement of limb volume is described. A series of inanimate objects of known and unknown volume was measured using digital laser scanning and water displacement. A similar comparison was made with 10 human volunteers. Digital scanning was evaluated by comparison to the established method of water displacement, then to itself to determine reproducibility of measurement. (1) Objects of known volume: Laser scanning accurately measured the calculated volume but water displacement became less accurate as the size of the object increased. (2) Objects of unknown volume: As average volume increased, there was an increasing bias of underestimation of volume by the water displacement method. The coefficient of reproducibility of water displacement was 83.44 ml. In contrast, the reproducibility of the digital scanning method was 19.0 ml. (3) Human data: The mean difference between water displacement volume and laser scanning volume was 151.7 ml (SD +/- 189.5). The coefficient of reproducibility of water displacement was 450.8 ml whereas for laser scanning it was 174 ml. Laser scanning is an innovative method of measuring tissue volume that combines precision and reproducibility and may have clinical utility for measuring lymphedema. 2007 Wiley-Liss, Inc
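The coefficient of reproducibility is not defined in the abstract; a standard Bland-Altman-style choice is 1.96 x the SD of test-retest differences, sketched here with hypothetical repeated measurements:

```python
from statistics import stdev

def reproducibility_coefficient(first, second):
    """1.96 * SD of paired test-retest differences (Bland-Altman style)."""
    diffs = [a - b for a, b in zip(first, second)]
    return 1.96 * stdev(diffs)

# Hypothetical repeated limb-volume measurements (ml) on four subjects
scan1 = [2500.0, 2610.0, 2480.0, 2555.0]
scan2 = [2505.0, 2600.0, 2490.0, 2550.0]
coeff = reproducibility_coefficient(scan1, scan2)
```

A smaller coefficient means repeat measurements agree more closely, which is the sense in which laser scanning (19.0 ml) outperformed water displacement (83.44 ml) above.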

  3. Face mask ventilation in edentulous patients: a comparison of mandibular groove and lower lip placement.

    PubMed

    Racine, Stéphane X; Solis, Audrey; Hamou, Nora Ait; Letoumelin, Philippe; Hepner, David L; Beloucif, Sadek; Baillard, Christophe

    2010-05-01

    In edentulous patients, it may be difficult to perform face mask ventilation because of inadequate seal with air leaks. Our aim was to ascertain whether the "lower lip" face mask placement, as a new face mask ventilation method, is more effective at reducing air leaks than the standard face mask placement. Forty-nine edentulous patients with inadequate seal and air leak during two-hand positive-pressure ventilation using the ventilator circle system were prospectively evaluated. In the presence of air leaks, defined as a difference of at least 33% between inspired and expired tidal volumes, the mask was placed in a lower lip position by repositioning the caudal end of the mask above the lower lip while maintaining the head in extension. The results are expressed as mean +/- SD or median (25th-75th percentiles). Patient characteristics included age (71 +/- 11 yr) and body mass index (24 +/- 4 kg/m2). By using the standard method, the median inspired and expired tidal volumes were 450 ml (400-500 ml) and 0 ml (0-50 ml), respectively, and the median air leak was 400 ml (365-485 ml). After placing the mask in the lower lip position, the median expired tidal volume increased to 400 ml (380-490), and the median air leak decreased to 10 ml (0-20 ml) (P < 0.001 vs. standard method). The lower lip face mask placement with two hands reduced the air leak by 95% (80-100%). In edentulous patients with inadequate face mask ventilation, the lower lip face mask placement with two hands markedly reduced the air leak and improved ventilation.
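The leak criterion above — an inspired-minus-expired difference of at least 33% of the inspired tidal volume — is straightforward to encode (function name assumed):

```python
def air_leak(inspired_ml, expired_ml, threshold=1/3):
    """Return (leak_ml, significant), where `significant` means the leak
    is at least `threshold` of the inspired tidal volume."""
    leak = inspired_ml - expired_ml
    return leak, leak >= threshold * inspired_ml

print(air_leak(450, 0))    # (450, True): total leak, as in the standard position
print(air_leak(450, 440))  # (10, False): near-complete seal
```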

  4. The use of genetic programming to develop a predictor of swash excursion on sandy beaches

    NASA Astrophysics Data System (ADS)

    Passarella, Marinella; Goldstein, Evan B.; De Muro, Sandro; Coco, Giovanni

    2018-02-01

    We use genetic programming (GP), a type of machine learning (ML) approach, to predict the total and infragravity swash excursion using previously published data sets that have been used extensively in swash prediction studies. Three previously published works with a range of new conditions are added to this data set to extend the range of measured swash conditions. Using this newly compiled data set we demonstrate that a ML approach can reduce the prediction errors compared to well-established parameterizations and therefore it may improve coastal hazards assessment (e.g. coastal inundation). Predictors obtained using GP can also be physically sound and replicate the functionality and dependencies of previous published formulas. Overall, we show that ML techniques are capable of both improving predictability (compared to classical regression approaches) and providing physical insight into coastal processes.
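The GP idea — evolving a population of candidate expressions against measured data — can be illustrated with a deliberately tiny, stdlib-only sketch. This is not the authors' setup: the primitive set, the synthetic target S = 0.046*H*L, the hand-seeded starting guess, and every parameter below are assumptions for illustration only.

```python
import random

random.seed(0)

OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def rand_tree(depth):
    """Grow a random expression tree over variables H, L and constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["H", "L", round(random.uniform(0.01, 1.0), 3)])
    return (random.choice(list(OPS)), rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(tree, env):
    if isinstance(tree, tuple):
        op, a, b = tree
        return OPS[op](evaluate(a, env), evaluate(b, env))
    return env.get(tree, tree)  # variable lookup, or the constant itself

def mutate(tree):
    """Subtree mutation: replace a random node with a fresh subtree."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return rand_tree(2)
    op, a, b = tree
    return (op, mutate(a), b) if random.random() < 0.5 else (op, a, mutate(b))

def mse(tree, data):
    err = 0.0
    for env, y in data:
        d = evaluate(tree, env) - y
        err += d * d
    return err / len(data) if err == err else float("inf")  # guard NaN

# Synthetic (wave height H, wavelength L) -> swash S training samples
data = []
for _ in range(50):
    H, L = random.uniform(0.5, 3.0), random.uniform(20.0, 120.0)
    data.append(({"H": H, "L": L}, 0.046 * H * L))

pop = [rand_tree(3) for _ in range(60)]
pop[0] = ("*", 0.05, ("*", "H", "L"))  # hand-seeded guess to speed the toy search
for _ in range(40):  # truncation selection + subtree mutation
    pop.sort(key=lambda t: mse(t, data))
    pop = pop[:15] + [mutate(random.choice(pop[:15])) for _ in range(45)]

best = min(pop, key=lambda t: mse(t, data))
mean_s = sum(y for _, y in data) / len(data)
baseline = sum((y - mean_s) ** 2 for _, y in data) / len(data)  # mean predictor
```

Real GP systems add crossover, parsimony pressure and much larger populations; the point here is only that evolved expression trees can beat a naive baseline while remaining readable, which is what makes the resulting predictors physically interpretable.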

  5. In vitro activities of dalbavancin and nine comparator agents against anaerobic gram-positive species and corynebacteria.

    PubMed

    Goldstein, Ellie J C; Citron, Diane M; Merriam, C Vreni; Warren, Yumi; Tyrrell, Kerin; Fernandez, Helen T

    2003-06-01

    Dalbavancin is a novel semisynthetic glycopeptide with enhanced activity against gram-positive species. Its in vitro activity and that of nine comparator agents, including daptomycin, vancomycin, linezolid, and quinupristin-dalfopristin, against 290 recent gram-positive clinical isolates, as determined by the NCCLS agar dilution method, were studied. The MICs of dalbavancin at which 90% of the various isolates tested were inhibited were as follows: Actinomyces spp., 0.5 μg/ml; Clostridium clostridioforme, 8 μg/ml; C. difficile, 0.25 μg/ml; C. innocuum, 0.25 μg/ml; C. perfringens, 0.125 μg/ml; C. ramosum, 1 μg/ml; Eubacterium spp., 1 μg/ml; Lactobacillus spp., >32 μg/ml; Propionibacterium spp., 0.5 μg/ml; and Peptostreptococcus spp., 0.25 μg/ml. Dalbavancin was 1 to 3 dilutions more active than vancomycin against most strains. Dalbavancin exhibited excellent activity against the gram-positive strains tested and warrants clinical evaluation.
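MIC90 (the MIC at which 90% of isolates are inhibited) is conventionally read off the sorted MIC distribution at rank ceil(0.9*n); a sketch with a hypothetical distribution:

```python
import math

def mic_percentile(mics, pct=90):
    """MIC at which >= pct% of isolates are inhibited (rank method)."""
    ordered = sorted(mics)
    rank = math.ceil(pct / 100 * len(ordered))  # 1-based rank
    return ordered[rank - 1]

# Hypothetical MIC distribution (ug/ml) for 10 isolates, illustration only
mics = [0.06, 0.06, 0.12, 0.12, 0.12, 0.25, 0.25, 0.5, 0.5, 1.0]
print(mic_percentile(mics, 90))  # MIC90 -> 0.5
print(mic_percentile(mics, 50))  # MIC50 -> 0.12
```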

  6. Ultrasensitive prostate specific antigen assay following laparoscopic radical prostatectomy--an outcome measure for defining the learning curve.

    PubMed

    Viney, R; Gommersall, L; Zeif, J; Hayne, D; Shah, Z H; Doherty, A

    2009-07-01

    Radical retropubic prostatectomy (RRP) performed laparoscopically is a popular treatment with curative intent for organ-confined prostate cancer. After surgery, prostate specific antigen (PSA) levels drop to low levels which can be measured with ultrasensitive assays. This has been described in the literature for open RRP but not for laparoscopic RRP. This paper describes PSA changes in the first 300 consecutive patients undergoing non-robotic laparoscopic RRP by a single surgeon. The aim was to use ultrasensitive PSA (uPSA) assays to measure a PSA nadir below the levels recorded by standard assays in patients having laparoscopic radical prostatectomy, and to use the uPSA nadir at 3 months post-prostatectomy as an early surrogate end-point of oncological outcome. In so doing, laparoscopic oncological outcomes could then be compared with published results from other open radical prostatectomy series with similar end-points. Furthermore, this end-point could be used in the assessment of the surgeon's learning curve. Prospective, comprehensive, demographic, clinical, biochemical and operative data were collected from all patients undergoing non-robotic laparoscopic RRP. We present data from the first 300 consecutive patients undergoing laparoscopic RRP by a single surgeon. uPSA was measured every 3 months post surgery. Median follow-up was 29 months (minimum 3 months). The likelihood of reaching a uPSA of < or = 0.01 ng/ml at 3 months was 73% for the first 100 patients, statistically lower than the 83% for the second 100 patients and the 80% for the third 100 patients (P < 0.05 for both comparisons). Overall, 84% of patients with pT2 disease and 66% of patients with pT3 disease had a uPSA of < or = 0.01 ng/ml at 3 months. Pre-operative PSA, PSA density and Gleason score were not correlated with outcome as determined by a uPSA of < or = 0.01 ng/ml at 3 months. Positive margins correlated with outcome as determined by a uPSA of < or = 0.01 ng/ml at 3 months (P < 0.05), whereas operative time and tumour volume did not. Attempted nerve sparing had no adverse effect on achieving a uPSA of < or = 0.01 ng/ml at 3 months. uPSA can be used as an early end-point in the analysis of oncological outcomes after radical prostatectomy. It is one of many measures that can be used in calculating a surgeon's learning curve for laparoscopic radical prostatectomy and in benchmarking performance. With experience, a surgeon can achieve in excess of an 80% chance of obtaining a uPSA nadir of < or = 0.01 ng/ml at 3 months after laparoscopic RRP for a British population. This is equivalent to most published open series.

  7. Classification techniques on computerized systems to predict and/or to detect Apnea: A systematic review.

    PubMed

    Pombo, Nuno; Garcia, Nuno; Bousson, Kouamana

    2017-03-01

    Sleep apnea syndrome (SAS), which can significantly decrease quality of life, is associated with major health risks such as increased cardiovascular disease, sudden death, depression, irritability, hypertension, and learning difficulties. It is therefore relevant and timely to present a systematic review of significant applications of computational intelligence to SAS, including performance, beneficial and challenging effects, and modeling for decision-making across multiple scenarios. This study systematically reviews the literature on systems for the detection and/or prediction of apnea events using a classification model. The forty-five included studies revealed a combination of classification techniques for the diagnosis of apnea: threshold-based models (14.75%) and machine learning (ML) models (85.25%). The ML models, clustered in a mind map, include neural networks (44.26%), regression (4.91%), instance-based learning (11.47%), Bayesian algorithms (1.63%), reinforcement learning (4.91%), dimensionality reduction (8.19%), ensemble learning (6.55%), and decision trees (3.27%). A classification model should be auto-adaptive and independent of external human action. In addition, the accuracy of a classification model is related to effective feature selection. New high-quality studies based on randomized controlled trials, and validation of models on large and diverse samples of data, are recommended. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  8. Comparative activity of several beta-lactam antibiotics against anaerobes determined by two methods.

    PubMed

    Zabransky, R J; Birk, R J

    1987-01-01

    The susceptibility of 120 strains of several species of anaerobes to a number of second- and third-generation beta-lactam antibiotics was determined by the National Committee for Clinical Laboratory Standards reference agar dilution and microdilution methods. The antibiotics tested were cefoperazone, cefotaxime, cefotetan, ceftizoxime, cefoxitin, and imipenem. The MIC50s ranged from 0.125 to 16 micrograms/ml. The MIC90s were lowest with imipenem at 0.5 micrograms/ml, followed by cefoxitin at 32 micrograms/ml; they were highest with cefotetan at 128 micrograms/ml and were 64 micrograms/ml with the other agents. In vitro drug activity varied with the antibiotic, the organism, the method used, and the breakpoint selected. Rates of resistance varied considerably between the taxonomic groups of organisms tested and also among species within a group. Overall, reproducibility with the agar dilution method ranged from 44% to 85%; testing with ceftizoxime was the least reproducible. Microdilution results agreed within +/- 1 dilution of the agar dilution mode 79% to 95% of the time, with some variation between drugs and organisms tested. Because there were distinct differences in the activity of some drugs against certain species, no antibiotic can substitute for the others in in vitro testing.

  9. Validated enantiospecific LC method for determination of (R)-enantiomer impurity in (S)-efavirenz.

    PubMed

    Seshachalam, U; Narasimha Rao, D V L; Chandrasekhar, K B

    2008-02-01

    A high-performance liquid chromatographic method was developed for separation of the enantiomers of efavirenz and applied to the determination of the (R)-enantiomer in (S)-efavirenz, with satisfactory results. Baseline separation with a resolution of more than 4.0 was achieved on a Chiralcel OD (250 mm x 4.6 mm, 10 microm) column containing tris-(3,5-dimethylphenylcarbamate) as the stationary phase. The mobile phase consisted of n-hexane:isopropyl alcohol (80:20 v/v) with 0.1% (v/v) formic acid as an additive. The flow rate was kept at 1.0 ml/min and UV detection was monitored at 254 nm. The response for the (R)-enantiomer was linear over the range 0.1-6 microg/ml. The limit of detection (LOD) was 0.03 microg/ml and the limit of quantification (LOQ) was 0.1 microg/ml (n=3). The precision for the (R)-enantiomer at the LOQ level was evaluated through six replicate injections, and the RSD of the peak response was 1.34%. The results demonstrated that the developed LC method is simple, precise, robust and applicable to the purity determination of efavirenz.

  10. Is Mistletoe Treatment Beneficial in Invasive Breast Cancer? A New Approach to an Unresolved Problem.

    PubMed

    Fritz, Peter; Dippon, Jürgen; Müller, Simon; Goletz, Sven; Trautmann, Christian; Pappas, Xenophon; Ott, German; Brauch, Hiltrud; Schwab, Matthias; Winter, Stefan; Mürdter, Thomas; Brinkmann, Friedhelm; Faisst, Simone; Rössle, Susanne; Gerteis, Andreas; Friedel, Godehard

    2018-03-01

    In this retrospective study, we compared breast cancer patients treated with and without mistletoe lectin I (ML-I) in addition to standard breast cancer treatment, in order to determine a possible effect of this complementary treatment. This study included 18,528 patients with invasive breast cancer. Data on additional ML-I treatment were reported for 164 patients. We developed a "similar case" method, with a distance measure derived from the beta variable in Cox regression, to compare these patients, after stage adjustment, with their non-ML-I-treated counterparts, in order to address three hypotheses concerning overall survival, recurrence-free survival and quality of life. Raw data analysis of additional ML-I treatment yielded a worse outcome (p=0.02) for patients with ML-I treatment, possibly due to a bias inherent in the ML-I-treated patients. Using the "similar case" method (a case-based reasoning approach) we could not confirm this harm for patients using ML-I. Analysis of quality-of-life data did not demonstrate reliable differences between patients with proven ML-I treatment and those without. Based on the "similar case" model, we did not observe any differences in overall survival (OS), recurrence-free survival (RFS), or quality-of-life data between breast cancer patients given standard treatment and those who received ML-I treatment in addition to standard treatment. Copyright© 2018, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.

  11. A relational learning approach to Structure-Activity Relationships in drug design toxicity studies.

    PubMed

    Camacho, Rui; Pereira, Max; Costa, Vítor Santos; Fonseca, Nuno A; Adriano, Carlos; Simões, Carlos J V; Brito, Rui M M

    2011-09-16

    It has been recognized that the development of new therapeutic drugs is a complex and expensive process. A large number of factors affect the in vivo activity of putative candidate molecules, and the propensity for causing adverse and toxic effects is recognized as one of the major hurdles behind the current "target-rich, lead-poor" scenario. Structure-Activity Relationship (SAR) studies, using relational Machine Learning (ML) algorithms, have already been shown to be very useful in the complex process of rational drug design. Despite the successes of ML, human expertise is still of the utmost importance in the drug development process. An iterative process with tight integration between the models developed by ML algorithms and the know-how of medicinal chemistry experts would be a very useful symbiotic approach. In this paper we describe iLogCHEM, a software tool that achieves this goal. The tool allows the use of relational learners in the task of identifying molecules or molecular fragments with the potential to produce toxic effects, and thus helps to streamline drug design in silico. It also allows the expert to guide the search for useful molecules without needing to know the details of the algorithms used. The models produced by the algorithms may be visualized using a graphical interface of the kind in common use among researchers in structural biology and medicinal chemistry. The graphical interface enables the expert to provide feedback to the learning system. The tool also has facilities to handle the similarity bias typical of large chemical databases; for that purpose the user can filter out similar compounds when assembling a data set. Additionally, we propose ways of providing background knowledge for relational learners using the results of graph mining algorithms. Copyright 2011 The Author(s). Published by Journal of Integrative Bioinformatics.

  12. Machine learning for prediction of 30-day mortality after ST elevation myocardial infarction: An Acute Coronary Syndrome Israeli Survey data mining study.

    PubMed

    Shouval, Roni; Hadanny, Amir; Shlomo, Nir; Iakobishvili, Zaza; Unger, Ron; Zahger, Doron; Alcalai, Ronny; Atar, Shaul; Gottlieb, Shmuel; Matetzky, Shlomi; Goldenberg, Ilan; Beigel, Roy

    2017-11-01

    Risk scores for prediction of mortality 30 days following a ST-segment elevation myocardial infarction (STEMI) have been developed using conventional statistical approaches. Our aim was to evaluate an array of machine learning (ML) algorithms for prediction of mortality at 30 days in STEMI patients and to compare these to the conventional validated risk scores. This was a retrospective, supervised learning, data mining study. Out of a cohort of 13,422 patients from the Acute Coronary Syndrome Israeli Survey (ACSIS) registry, 2782 patients fulfilled inclusion criteria and 54 variables were considered. Prediction models for overall mortality 30 days after STEMI were developed using 6 ML algorithms. Models were compared to each other and to the Global Registry of Acute Coronary Events (GRACE) and Thrombolysis In Myocardial Infarction (TIMI) scores. Depending on the algorithm, using all available variables, prediction models' performance measured as area under the receiver operating characteristic curve (AUC) ranged from 0.64 to 0.91. The best models performed similarly to the GRACE score (0.87 SD 0.06) and outperformed the TIMI score (0.82 SD 0.06, p<0.05). Performance of most algorithms plateaued when introduced with 15 variables. Among the top predictors were creatinine, Killip class on admission, blood pressure, glucose level, and age. We present a data mining approach for prediction of mortality after ST-segment elevation myocardial infarction. The algorithms selected showed competence in prediction across an increasing number of variables. ML may be used for outcome prediction in complex cardiology settings. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
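    The AUC values quoted in this record have a simple rank interpretation: the probability that a randomly chosen patient who died is assigned a higher risk score than a randomly chosen survivor. A minimal sketch of that pairwise definition on invented scores (none of the registry data is reproduced here):

    ```python
    def auc(labels, scores):
        """Rank-based AUC: P(random positive scores above random negative)."""
        pos = [s for l, s in zip(labels, scores) if l == 1]
        neg = [s for l, s in zip(labels, scores) if l == 0]
        wins = 0.0
        for p in pos:
            for n in neg:
                wins += 1.0 if p > n else 0.5 if p == n else 0.0
        return wins / (len(pos) * len(neg))

    # invented risk scores; label 1 = died within 30 days
    labels = [1, 1, 0, 0, 0]
    scores = [0.9, 0.4, 0.5, 0.2, 0.1]
    print(auc(labels, scores))
    ```

    An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is the scale on which the GRACE and TIMI comparisons above are made.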

  13. [Analysis of antibiotic diffusion from agarose gel by spectrophotometry and laser interferometry methods].

    PubMed

    Arabski, Michał; Wasik, Sławomir; Piskulak, Patrycja; Góźdź, Natalia; Slezak, Andrzej; Kaca, Wiesław

    2011-01-01

    The aim of this study was to analyze the release of antibiotics (ampicillin, streptomycin, ciprofloxacin or colistin) from agarose gel by spectrophotometry and laser interferometry. The interferometric system consisted of a Mach-Zehnder interferometer with a He-Ne laser, a TV-CCD camera, a computerized data acquisition system and a gel system. The gel system under study consists of two cuvettes. The lower cuvette was filled with an aqueous 1% agarose solution containing the antibiotics at initial concentrations of 0.12-2 mg/ml for spectrophotometric analysis or 0.05-0.5 mg/ml for laser interferometry, while the upper cuvette contained pure water. Diffusion was analysed from 120 to 2400 s at time intervals of deltat = 120 s by both methods. We observed that 0.25-1 mg/ml and 0.05 mg/ml are the minimal initial concentrations detectable by the spectrophotometric and laser interferometry methods, respectively. Additionally, we observed differences in the kinetics of antibiotic diffusion from the gel as measured by the two methods. In conclusion, the laser interferometric method is a useful tool for studies of antibiotic release from agarose gel, especially for substances that are not fully soluble in water, for example colistin.

  14. Utilizing Machine Learning and Automated Performance Metrics to Evaluate Robot-Assisted Radical Prostatectomy Performance and Predict Outcomes.

    PubMed

    Hung, Andrew J; Chen, Jian; Che, Zhengping; Nilanon, Tanachat; Jarc, Anthony; Titus, Micha; Oh, Paul J; Gill, Inderbir S; Liu, Yan

    2018-05-01

    Surgical performance is critical for clinical outcomes. We present a novel machine learning (ML) method of processing automated performance metrics (APMs) to evaluate surgical performance and predict clinical outcomes after robot-assisted radical prostatectomy (RARP). We trained three ML algorithms utilizing APMs directly from robot system data (training input) and hospital length of stay (LOS; training label) (≤2 days and >2 days) from 78 RARP cases, and selected the algorithm with the best performance. The selected algorithm categorized the cases as "Predicted as expected LOS (pExp-LOS)" and "Predicted as extended LOS (pExt-LOS)." We compared postoperative outcomes of the two groups (Kruskal-Wallis/Fisher's exact tests). The algorithm then predicted individual clinical outcomes, which we compared with actual outcomes (Spearman's correlation/Fisher's exact tests). Finally, we identified the five most relevant APMs adopted by the algorithm during prediction. The "Random Forest-50" (RF-50) algorithm had the best performance, reaching 87.2% accuracy in predicting LOS (73 cases as "pExp-LOS" and 5 cases as "pExt-LOS"). The "pExp-LOS" cases outperformed the "pExt-LOS" cases in surgery time (3.7 hours vs 4.6 hours, p = 0.007), LOS (2 days vs 4 days, p = 0.02), and Foley duration (9 days vs 14 days, p = 0.02). Patient outcomes predicted by the algorithm had significant association with the "ground truth" in surgery time (p < 0.001, r = 0.73), LOS (p = 0.05, r = 0.52), and Foley duration (p < 0.001, r = 0.45). The five most relevant APMs adopted by the RF-50 algorithm in prediction were largely related to camera manipulation. To our knowledge, ours is the first study to show that APMs and ML algorithms may help assess surgical RARP performance and predict clinical outcomes. With further accrual of clinical data (oncologic and functional data), this process will become increasingly relevant and valuable in surgical assessment and training.
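    The predicted-versus-actual comparisons in this record use Spearman's rank correlation. A minimal sketch of the rank formula rho = 1 - 6*sum(d^2)/(n(n^2-1)), valid in the absence of ties, applied to invented surgery-time values (not the study's data):

    ```python
    def rank(xs):
        """1-based ranks of the values in xs (assumes no ties)."""
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for rk, i in enumerate(order, 1):
            r[i] = rk
        return r

    def spearman(xs, ys):
        n = len(xs)
        rx, ry = rank(xs), rank(ys)
        d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
        return 1 - 6 * d2 / (n * (n * n - 1))

    pred = [3.5, 4.0, 4.6, 3.9, 5.1]    # predicted surgery time, h (hypothetical)
    actual = [3.7, 4.2, 4.4, 4.5, 5.0]  # actual surgery time, h (hypothetical)
    print(spearman(pred, actual))
    ```

    Because only ranks enter the formula, the statistic is insensitive to the scale of the predictions, which is why it suits comparisons between a model's scores and observed clinical quantities.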

  15. Measurement of phospholipids by hydrophilic interaction liquid chromatography coupled to tandem mass spectrometry: the determination of choline containing compounds in foods.

    PubMed

    Zhao, Yuan-Yuan; Xiong, Yeping; Curtis, Jonathan M

    2011-08-12

    A hydrophilic interaction liquid chromatography-tandem mass spectrometry (HILIC LC-MS/MS) method using multiple scan modes was developed to separate and quantify 11 compounds and lipid classes including acetylcholine (AcCho), betaine (Bet), choline (Cho), glycerophosphocholine (GPC), lysophosphatidylcholine (LPC), lysophosphatidylethanolamine (LPE), phosphatidylcholine (PC), phosphatidylethanolamine (PE), phosphatidylinositol (PI), phosphocholine (PCho) and sphingomyelin (SM). This includes all of the major choline-containing compounds found in foods. The method offers advantages over other LC methods, since HILIC chromatography is readily compatible with electrospray ionization and results in higher sensitivity and improved peak shapes. The LC-MS/MS method allows quantification of all choline-containing compounds in a single run. Tests of method suitability indicated linear ranges of approximately 0.25-25 μg/ml for PI and PE, 0.5-50 μg/ml for PC, 0.05-5 μg/ml for SM and LPC, 0.5-25 μg/ml for LPE, 0.02-5 μg/ml for Cho, and 0.08-8 μg/ml for Bet. Accuracies of 83-105% with precisions of 1.6-13.2% RSD were achieved for standards over a wide range of concentrations, demonstrating that this method is suitable for food analysis. Eight polar lipid classes were found in a lipid extract of egg yolk, and different species of the same class were differentiated based on their molecular weights and fragment-ion information. PC and PE were the most abundant lipid classes, constituting 71% and 18% of the total phospholipids in egg yolk, respectively. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Development and validation of multivariate calibration methods for simultaneous estimation of Paracetamol, Enalapril maleate and hydrochlorothiazide in pharmaceutical dosage form

    NASA Astrophysics Data System (ADS)

    Singh, Veena D.; Daharwal, Sanjay J.

    2017-01-01

    Three multivariate calibration spectrophotometric methods were developed for simultaneous estimation of Paracetamol (PARA), Enalapril maleate (ENM) and Hydrochlorothiazide (HCTZ) in tablet dosage form, namely multi-linear regression calibration (MLRC), trilinear regression calibration (TLRC) and classical least squares (CLS). The selectivity of the proposed methods was studied by analyzing laboratory-prepared ternary mixtures, and the methods were successfully applied to the combined dosage form. The proposed methods were validated as per ICH guidelines, and good accuracy, precision and specificity were confirmed within the concentration ranges of 5-35 μg mL-1 for PARA and 5-40 μg mL-1 for both HCTZ and ENM. The results were statistically compared with a reported HPLC method. Thus, the proposed methods can be effective for routine quality control analysis of these drugs in commercial tablet dosage form.

  17. Active learning-based information structure analysis of full scientific articles and two applications for biomedical literature review.

    PubMed

    Guo, Yufan; Silins, Ilona; Stenius, Ulla; Korhonen, Anna

    2013-06-01

    Techniques that are capable of automatically analyzing the information structure of scientific articles could be highly useful for improving information access to biomedical literature. However, most existing approaches rely on supervised machine learning (ML) and substantial labeled data that are expensive to develop and apply to different sub-fields of biomedicine. Recent research shows that minimal supervision is sufficient for fairly accurate information structure analysis of biomedical abstracts, but is it realistic for full articles, given their high linguistic and informational complexity? We introduce and release a novel corpus of 50 biomedical articles annotated according to the Argumentative Zoning (AZ) scheme, and investigate active learning with one of the most widely used ML models, Support Vector Machines (SVM), on this corpus. Additionally, we introduce two novel applications that use AZ to support real-life literature review in biomedicine via question answering and summarization. We show that active learning with an SVM trained on 500 labeled sentences (6% of the corpus) performs surprisingly well, with an accuracy of 82%, just 2% lower than fully supervised learning. In our question answering task, biomedical researchers find relevant information significantly faster from AZ-annotated than from unannotated articles. In the summarization task, sentences extracted from particular zones are significantly more similar to gold standard summaries than those extracted from particular sections of full articles. These results demonstrate that active learning of full articles' information structure is indeed realistic, and the accuracy is high enough to support real-life literature review in biomedicine. The annotated corpus, our AZ classifier and the two novel applications are available at http://www.cl.cam.ac.uk/yg244/12bioinfo.html
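    Pool-based active learning of the kind studied in this record follows a simple loop: query the unlabeled example the current model is least certain about, obtain its label, retrain, repeat. The toy sketch below is not the paper's system: it uses invented one-dimensional "sentence features", a nearest-centroid classifier standing in for the SVM, and a hard-coded oracle:

    ```python
    import math

    def centroid(points):
        # component-wise mean of a list of feature tuples
        return tuple(sum(c) / len(c) for c in zip(*points))

    # toy 1-D "sentence features"; classes 0 and 1 are two zones
    labeled = {(-2.0,): 0, (2.0,): 1}
    pool = [(-1.5,), (-0.2,), (0.1,), (1.8,)]

    for _ in range(2):  # two acquisition rounds
        c0 = centroid([x for x, y in labeled.items() if y == 0])
        c1 = centroid([x for x, y in labeled.items() if y == 1])
        # least-certain example: distances to the two centroids nearly equal
        query = min(pool, key=lambda x: abs(math.dist(x, c0) - math.dist(x, c1)))
        pool.remove(query)
        labeled[query] = 0 if query[0] < 0 else 1  # stand-in oracle
    print(sorted(labeled.items()))
    ```

    The practical payoff reported above (82% accuracy from 6% of the corpus) comes from exactly this selection pressure: borderline examples carry more information per label than randomly chosen ones.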

  18. Oxyrase, a method which avoids CO2 in the incubation atmosphere for anaerobic susceptibility testing of antibiotics affected by CO2.

    PubMed

    Spangler, S K; Appelbaum, P C

    1993-02-01

    The Oxyrase agar dilution method, with exclusion of CO2 from the environment, was compared with the reference agar dilution method recommended by the National Committee for Clinical Laboratory Standards (anaerobic chamber with 10% CO2) to test the susceptibility of 51 gram-negative and 43 gram-positive anaerobes to azithromycin and erythromycin. With the Oxyrase method, anaerobiosis was achieved by incorporation of the O2-binding enzyme Oxyrase in addition to susceptibility test medium, antibiotic, and enzyme substrates into the upper level of a biplate. Plates were covered with a Brewer lid and incubated in ambient air. With azithromycin, Oxyrase yielded an MIC for 50% of strains tested (MIC50) and MIC90 of 2.0 and 8.0 micrograms/ml, compared to 8.0 and > 32.0 micrograms/ml in standard anaerobic conditions. At a breakpoint of 8.0 micrograms/ml, 90.4% of strains were susceptible to azithromycin with Oxyrase, compared to 53.2% in the chamber. The corresponding erythromycin MIC50 and MIC90 were 1.0 and 8.0 micrograms/ml with Oxyrase, compared to 4.0 and > 32.0 micrograms/ml by the reference method, with 89.3% of strains susceptible at a breakpoint of 4 micrograms/ml with Oxyrase, compared to 60.6% in CO2. Exclusion of CO2 from the anaerobic atmosphere when testing for susceptibility to azalides and macrolides yielded lower MICs, which may lead to a reconsideration of the role played by these compounds in treatment of infections caused by these strains.

  19. Comparative analysis of a modified ecolite method, the colicomplete method, and a most-probable-number method for detecting Escherichia coli in orange juice.

    PubMed

    Durbin, Gregory W; Salter, Robert

    2006-01-01

    The Ecolite High Volume Juice (HVJ) presence-absence method for a 10-ml juice sample was compared with the U.S. Food and Drug Administration Bacteriological Analytical Manual most-probable-number (MPN) method for analysis of artificially contaminated orange juices. Samples were added to Ecolite-HVJ medium and incubated at 35 degrees C for 24 to 48 h. Fluorescent blue results were positive for glucuronidase- and galactosidase-producing microorganisms, specifically indicative of about 94% of Escherichia coli strains. Four strains of E. coli were added to juices at concentrations of 0.21 to 6.8 CFU/ml. Mixtures of enteric bacteria (Enterobacter plus Klebsiella, Citrobacter plus Proteus, or Hafnia plus Citrobacter plus Enterobacter) were added to simulate background flora. Three orange juice types were evaluated (n = 10) with and without the addition of the E. coli strains. Ecolite-HVJ produced 90 of 90 (10 of 10 samples of three juice types, each inoculated with three different E. coli strains) positive (blue-fluorescent) results for samples artificially contaminated with E. coli at MPN concentrations of <0.3 to 9.3 CFU/ml. Ten of 30 E. coli ATCC 11229 samples with MPN concentrations of <0.3 CFU/ml were identified as positive with Ecolite-HVJ. Isolated colonies recovered from positive Ecolite-HVJ samples were confirmed biochemically as E. coli. Thirty (10 samples each of three juice types) negative (not fluorescent) results were obtained for samples contaminated with only enteric bacteria and for uninoculated control samples. A juice manufacturer evaluated citrus juice production with both the Ecolite-HVJ and Colicomplete methods and recorded identical negative results for 95 20-ml samples and identical positive results for 5 20-ml samples artificially contaminated with E. coli. The Ecolite-HVJ method requires no preenrichment and subsequent transfer steps, which makes it a simple and easy method for use by juice producers.

  20. Flow-injection chemiluminescence determination of ofloxacin and levofloxacin in pharmaceutical preparations and biological fluids.

    PubMed

    Sun, Hanwen; Li, Liqing; Chen, Xueyan

    2006-08-01

    A novel, rapid and sensitive analytical method is described for the determination of ofloxacin and levofloxacin by enhanced chemiluminescence (CL) with flow-injection sampling. The method is based on the CL reaction of the Ce(IV)-Na2S2O4-ofloxacin/levofloxacin-H2SO2 system. A mechanism for the enhanced CL was proposed and the optimum conditions for CL emission were investigated. The CL intensity was correlated linearly (r = 0.9988) with the concentration of ofloxacin (or levofloxacin) in the ranges of 1.0 x 10(-8) - 1.0 x 10(-7) g ml(-1) and 1.0 x 10(-7) - 6.0 x 10(-6) g ml(-1). The detection limit (S/N = 3) is 7 x 10(-9) g ml(-1). The relative standard deviation (RSD, n = 11) is 2.0% for ofloxacin at 4 x 10(-7) g ml(-1) and for levofloxacin at 6 x 10(-7) g ml(-1). This method has been successfully applied to the determination of ofloxacin and levofloxacin in pharmaceutical preparations and biological fluids with satisfactory results.
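    The linearity and detection-limit figures quoted above come from a standard calibration workflow: fit intensity against concentration by least squares, then take LOD = 3·s_blank/slope (one common reading of the S/N = 3 convention). A sketch with invented standards and an invented blank deviation, not the paper's data:

    ```python
    def linfit(xs, ys):
        """Ordinary least-squares slope and intercept."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return slope, my - slope * mx

    conc = [1e-8, 2e-8, 5e-8, 1e-7]     # g/ml standards (hypothetical)
    signal = [12.0, 22.0, 52.0, 102.0]  # CL intensity (hypothetical)
    slope, intercept = linfit(conc, signal)

    s_blank = 0.7                       # sd of 11 blank readings (hypothetical)
    lod = 3 * s_blank / slope           # concentration at S/N = 3
    print(slope, intercept, lod)
    ```

    Unknown samples are then quantified by inverting the fitted line, concentration = (signal - intercept)/slope, which is only valid inside the stated linear range.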

  1. Parents' Workplace Experiences and Family Communication Patterns.

    ERIC Educational Resources Information Center

    Ritchie, L. David

    1997-01-01

    Gathers data from 178 parents of adolescents to elucidate observed relationships between social class and family communication patterns. Finds parents generalize from their own experiences--particularly in the workplace--consistent with M.L. Kohn's theory of learning generalization. Finds conversation orientation to be positively associated and…

  2. 77 FR 71019 - Japan Lessons-Learned Project Directorate Interim Staff Guidance JLD-ISG-2012-04; Guidance on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-28

    ... Insights from the Fukushima Dai-ichi Accident,'' dated March 12, 2012 (ADAMS Accession No. ML12053A340... resulting nuclear accident, at the Fukushima Dai-ichi nuclear power plant in March 2011. Enclosure 1 to the...

  3. MPS and ML

    MedlinePlus


  4. A colorimetric method for the determination of carboxyhaemoglobin over a wide range of concentrations

    PubMed Central

    Trinder, P.; Harper, F. E.

    1962-01-01

    A colorimetric technique for the determination of carboxyhaemoglobin in blood is described. Carbon monoxide released from blood in a standard Conway unit reacts with palladous chloride/arsenomolybdate solution to produce a blue colour. Using 0·5 to 2 ml. of blood, the method will estimate carboxyhaemoglobin accurately at levels from 0·1% to 100% of total haemoglobin and in the presence of other abnormal pigments. A number of methods are available for the determination of carboxyhaemoglobin; none is accurate below a concentration of 1·5 g. carboxyhaemoglobin per 100 ml. but for most clinical purposes this is not important. For forensic purposes and occasionally in clinical use, an accurate determination of carboxyhaemoglobin below 750 mg. per 100 ml. may be required and no really satisfactory method is at present available. Some time ago when it was important to know whether a person who was found dead in a burning house had died before or after the fire had started, we became interested in developing a method which would determine accurately carboxyhaemoglobin at levels of 750 mg. per 100 ml. PMID:13922505

  5. Extraction and Determination of Cyproheptadine in Human Urine by DLLME-HPLC Method.

    PubMed

    Maham, Mehdi; Kiarostami, Vahid; Waqif-Husain, Syed; Abroomand-Azar, Parviz; Tehrani, Mohammad Saber; Khoeini Sharifabadi, Malihe; Afrouzi, Hossein; Shapouri, Mahmoudreza; Karami-Osboo, Rouhollah

    2013-01-01

    A novel dispersive liquid-liquid microextraction (DLLME) method coupled with high-performance liquid chromatography with photodiode array detection (HPLC-DAD) has been applied to the extraction and determination of cyproheptadine (CPH), an antihistamine, in human urine samples. In this method, 0.6 mL of acetonitrile (disperser solvent) containing 30 μL of carbon tetrachloride (extraction solvent) was rapidly injected by syringe into a 5 mL urine sample. After centrifugation, the sedimented phase containing the enriched analyte was dissolved in acetonitrile and an aliquot of this solution was injected into the HPLC system for analysis. Development of the DLLME procedure included optimization of important parameters such as the type and volume of the extraction and disperser solvents, pH and salt addition. The proposed method has good linearity in the range of 0.02-4.5 μg mL(-1) and a low detection limit (13.1 ng mL(-1)). The repeatability of the method, expressed as relative standard deviation, was 4.9% (n = 3). The method has also been applied to the analysis of real urine samples, with satisfactory relative recoveries in the range of 91.6-101.0%.

  6. The Value of Successful MBSE Adoption

    NASA Technical Reports Server (NTRS)

    Parrott, Edith

    2016-01-01

    The value of successful adoption of Model-Based Systems Engineering (MBSE) practices is hard to quantify. Most engineers and project managers gauge success in terms of cost, but there are other ways to quantify the value of MBSE and the steps necessary to achieve adoption. The Glenn Research Center (GRC) has been doing model-based engineering (design, structural, etc.) for years, but the systems engineering side has not. Since 2010, GRC has been moving from a document-centric approach to MBSE/SysML. Project adoption of MBSE has been slow but is steadily increasing in both MBSE usage and the complexity of generated products. Sharing lessons learned in the implementation of MBSE/SysML is key for others who want to be successful. Alongside GRC's implementation, NASA is working hard to increase the successful implementation of MBSE across all the other centers by developing guidelines, templates and libraries for projects to utilize. This presentation will provide insight into recent GRC and NASA adoption efforts, lessons learned and best practices.

  7. High-Level Prediction Signals in a Low-Level Area of the Macaque Face-Processing Hierarchy.

    PubMed

    Schwiedrzik, Caspar M; Freiwald, Winrich A

    2017-09-27

    Theories like predictive coding propose that lower-order brain areas compare their inputs to predictions derived from higher-order representations and signal their deviation as a prediction error. Here, we investigate whether the macaque face-processing system, a three-level hierarchy in the ventral stream, employs such a coding strategy. We show that after statistical learning of specific face sequences, the lower-level face area ML computes the deviation of actual from predicted stimuli. But these signals do not reflect the tuning characteristic of ML. Rather, they exhibit identity specificity and view invariance, the tuning properties of higher-level face areas AL and AM. Thus, learning appears to endow lower-level areas with the capability to test predictions at a higher level of abstraction than what is afforded by the feedforward sweep. These results provide evidence for computational architectures like predictive coding and suggest a new quality of functional organization of information-processing hierarchies beyond pure feedforward schemes. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Communication: Understanding molecular representations in machine learning: The role of uniqueness and target similarity

    NASA Astrophysics Data System (ADS)

    Huang, Bing; von Lilienfeld, O. Anatole

    2016-10-01

    The predictive accuracy of Machine Learning (ML) models of molecular properties depends on the choice of the molecular representation. Inspired by the postulates of quantum mechanics, we introduce a hierarchy of representations which meet uniqueness and target similarity criteria. To systematically control target similarity, we rely on interatomic many-body expansions, as implemented in universal force fields, including Bonding, Angular (BA), and higher-order terms. Addition of higher-order contributions systematically increases similarity to the true potential energy and the predictive accuracy of the resulting ML models. We report numerical evidence for the performance of BAML models trained on molecular properties pre-calculated at electron-correlated and density functional levels of theory for thousands of small organic molecules. Properties studied include enthalpies and free energies of atomization, heat capacity, zero-point vibrational energies, dipole moment, polarizability, HOMO/LUMO energies and gap, ionization potential, electron affinity, and electronic excitations. After training, BAML predicts energies or electronic properties of out-of-sample molecules with unprecedented accuracy and speed.
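
    Property prediction of the kind described above is commonly implemented with kernel ridge regression on a fixed-length representation. The sketch below is an illustration only, not the authors' BAML model: it fits a Gaussian-kernel ridge regressor to a toy one-dimensional "representation" with a smooth synthetic target.

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    # Pairwise Gaussian kernel between rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def krr_fit(X, y, sigma, lam):
    # Kernel ridge regression: solve (K + lam*I) alpha = y.
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_new, sigma):
    return gaussian_kernel(X_new, X_train, sigma) @ alpha

# Toy 1-D "representation" and a smooth synthetic target standing in
# for a molecular property surface (illustration only).
X = np.linspace(-2.0, 2.0, 20).reshape(-1, 1)
y = np.sin(X[:, 0])
alpha = krr_fit(X, y, sigma=0.5, lam=1e-6)
pred = krr_predict(X, alpha, np.array([[0.1]]), sigma=0.5)
```

    In practice the representation would be a vector of many-body terms per molecule; the kernel and ridge machinery stay the same.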

  9. New High-performance Liquid Chromatography-DAD Method for Analytical Determination of Arbutin and Hydroquinone in Rat Plasma.

    PubMed

    Gallo, F R; Pagliuca, G; Multari, G; Panzini, G; D'amore, E; Altieri, I

    2015-01-01

    Natural substances present in herbal preparations should be used carefully, because they can exert toxic or therapeutic effects depending on their amount and route of administration. The safety of products of vegetable origin must be assessed before commercialisation by monitoring the active ingredients and their metabolites. This study was therefore designed to identify and quantify, in rat plasma, arbutin and its metabolite hydroquinone, which occur naturally in the Arctostaphylos uva-ursi (L.) Spreng plant, after acute and subacute administration of aqueous arbutin solution to Wistar rats. For this purpose, a reversed-phase high-performance liquid chromatography method coupled with photodiode array detection was developed to assess the pharmacokinetics of arbutin and hydroquinone in the plasma of female rats treated with aqueous arbutin solutions. The detection (arbutin: 0.0617 µg/ml; hydroquinone: 0.0120 µg/ml) and quantification (arbutin: 0.2060 µg/ml; hydroquinone: 0.0400 µg/ml) limits were determined. At the arbutin concentration level of 10.7 µg/ml, repeatability was 13.33% and recovery 93.4±6.93%, while at the hydroquinone concentration level of 10.6 µg/ml, repeatability was 11.66% and recovery 92.9±7.75%. Furthermore, the method was fully validated, and the data obtained indicate that it performs well.

  10. Painting galaxies into dark matter halos using machine learning

    NASA Astrophysics Data System (ADS)

    Agarwal, Shankar; Davé, Romeel; Bassett, Bruce A.

    2018-05-01

    We develop a machine learning (ML) framework to populate large dark matter-only simulations with baryonic galaxies. Our ML framework takes input halo properties including halo mass, environment, spin, and recent growth history, and outputs central galaxy and halo baryonic properties including stellar mass (M*), star formation rate (SFR), metallicity (Z), neutral (H I) and molecular (H_2) hydrogen mass. We apply this to the MUFASA cosmological hydrodynamic simulation, and show that it recovers the mean trends of output quantities with halo mass highly accurately, including following the sharp drop in SFR and gas in quenched massive galaxies. However, the scatter around the mean relations is under-predicted. Examining galaxies individually, at z = 0 the stellar mass and metallicity are accurately recovered (σ ≲ 0.2 dex), but SFR and H I show larger scatter (σ ≳ 0.3 dex); these values improve somewhat at z = 1, 2. Remarkably, ML quantitatively recovers second parameter trends in galaxy properties, e.g. that galaxies with higher gas content and lower metallicity have higher SFR at a given M*. Testing various ML algorithms, we find that none perform significantly better than the others, nor does ensembling improve performance, likely because none of the algorithms reproduce the large observed scatter around the mean properties. For the random forest algorithm, we find that halo mass and nearby (˜200 kpc) environment are the most important predictive variables followed by growth history, while halo spin and ˜Mpc scale environment are not important. Finally we study the impact of additionally inputting key baryonic properties M*, SFR, and Z, as would be available e.g. from an equilibrium model, and show that particularly providing the SFR enables H I to be recovered substantially more accurately.
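
    A mapping from halo properties to galaxy properties of the kind described above can be sketched with a random forest regressor. The data below are synthetic stand-ins (not the MUFASA catalog), constructed so that halo mass dominates the target; the feature importances then recover that dominance, mirroring the importance ranking discussed in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for a halo catalog: mass, environment, spin, growth.
rng = np.random.default_rng(42)
n = 2000
halo_mass = rng.uniform(11.0, 14.0, n)   # log10(M_halo), assumed range
environment = rng.normal(size=n)
spin = rng.normal(size=n)
growth = rng.normal(size=n)
X = np.column_stack([halo_mass, environment, spin, growth])

# Toy target: stellar mass dominated by halo mass, weak environment term.
log_mstar = 0.8 * halo_mass + 0.1 * environment \
            + rng.normal(scale=0.05, size=n)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, log_mstar)
importance = dict(zip(["halo_mass", "environment", "spin", "growth"],
                      model.feature_importances_))
```

    With real simulation catalogs one would also hold out halos for validation and compare the scatter of predictions around the mean relations, as the study does.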

  11. Sorting Through the Safety Data Haystack: Using Machine Learning to Identify Individual Case Safety Reports in Social-Digital Media.

    PubMed

    Comfort, Shaun; Perera, Sujan; Hudson, Zoe; Dorrell, Darren; Meireis, Shawman; Nagarajan, Meenakshi; Ramakrishnan, Cartic; Fine, Jennifer

    2018-06-01

    There is increasing interest in social digital media (SDM) as a data source for pharmacovigilance activities; however, SDM is considered a low information content data source for safety data. Given that pharmacovigilance itself operates in a high-noise, lower-validity environment without objective 'gold standards' beyond process definitions, the introduction of large volumes of SDM into the pharmacovigilance workflow has the potential to exacerbate issues with limited manual resources to perform adverse event identification and processing. Recent advances in medical informatics have resulted in methods for developing programs which can assist human experts in the detection of valid individual case safety reports (ICSRs) within SDM. In this study, we developed rule-based and machine learning (ML) models for classifying ICSRs from SDM and compared their performance with that of human pharmacovigilance experts. We used a random sampling from a collection of 311,189 SDM posts that mentioned Roche products and brands in combination with common medical and scientific terms sourced from Twitter, Tumblr, Facebook, and a spectrum of news media blogs to develop and evaluate three iterations of an automated ICSR classifier. The ICSR classifier models consisted of sub-components to annotate the relevant ICSR elements and a component to make the final decision on the validity of the ICSR. Agreement with human pharmacovigilance experts was chosen as the preferred performance metric and was evaluated by calculating the Gwet AC1 statistic (gKappa). The best performing model was tested against the Roche global pharmacovigilance expert using a blind dataset and put through a time test of the full 311,189-post dataset. During this effort, the initial strict rule-based approach to ICSR classification resulted in a model with an accuracy of 65% and a gKappa of 46%. Adding an ML-based adverse event annotator improved the accuracy to 74% and gKappa to 60%. 
This was further improved by the addition of an additional ML ICSR detector. On a blind test set of 2500 posts, the final model demonstrated a gKappa of 78% and an accuracy of 83%. In the time test, it took the final model 48 h to complete a task that would have taken an estimated 44,000 h for human experts to perform. The results of this study indicate that an effective and scalable solution to the challenge of ICSR detection in SDM includes a workflow using an automated ML classifier to identify likely ICSRs for further human SME review.
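
    The Gwet AC1 agreement statistic (gKappa) used as the performance metric above can be computed directly from paired annotations. A minimal sketch for two raters and binary ICSR labels, with hypothetical data:

```python
def gwet_ac1(r1, r2):
    # Gwet's AC1 for two raters and binary labels (0/1).
    # pa: observed agreement; pe: chance agreement 2*pi*(1-pi),
    # where pi is the average prevalence of label 1 across raters.
    n = len(r1)
    pa = sum(a == b for a, b in zip(r1, r2)) / n
    pi = (sum(r1) / n + sum(r2) / n) / 2
    pe = 2 * pi * (1 - pi)
    return (pa - pe) / (1 - pe)

# Hypothetical annotations: 1 = valid ICSR, 0 = not a valid ICSR.
expert = [1, 1, 1, 0, 0, 1, 1, 0]
model  = [1, 1, 0, 0, 1, 1, 1, 0]
score = gwet_ac1(expert, model)
```

    Unlike Cohen's kappa, AC1 stays stable when one label strongly dominates, which is why it suits low-prevalence safety data.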

  12. Solid-phase extraction in combination with dispersive liquid-liquid microextraction and ultra-high performance liquid chromatography-tandem mass spectrometry analysis: the ultra-trace determination of 10 antibiotics in water samples.

    PubMed

    Liang, Ning; Huang, Peiting; Hou, Xiaohong; Li, Zhen; Tao, Lei; Zhao, Longshan

    2016-02-01

    A novel method, solid-phase extraction combined with dispersive liquid-liquid microextraction (SPE-DLLME), was developed for the ultra-preconcentration of 10 antibiotics in different environmental water samples prior to ultra-high performance liquid chromatography-tandem mass spectrometry detection. The optimized procedure was as follows: after being adjusted to pH 4.0, the water sample was first passed through a PEP-2 column at 10 mL min(-1), and methanol was then used to elute the target analytes for the following steps. Dichloromethane was selected as the extraction solvent, and methanol/acetonitrile (1:1, v/v) as the dispersive solvent. Under optimal conditions, the calibration curves were linear in the ranges of 1-1000 ng mL(-1) (sulfamethoxazole, cefuroxime axetil), 5-1000 ng mL(-1) (tinidazole), 10-1000 ng mL(-1) (chloramphenicol), 2-1000 ng mL(-1) (levofloxacin, oxytetracycline, doxycycline, tetracycline, and ciprofloxacin) and 1-400 ng mL(-1) (sulfadiazine), with good precision. The LOD and LOQ of the method were at very low levels, below 1.67 and 5.57 ng mL(-1), respectively. The relative recoveries of the target analytes ranged from 64.16% to 99.80%, with relative standard deviations between 0.7 and 8.4%. The matrix effect was greatly reduced compared with solid-phase extraction alone, and the enrichment factor (EF) was significantly higher than with dispersive liquid-liquid microextraction alone. The developed method was successfully applied to the extraction and analysis of antibiotics in different water samples with satisfactory results.

  13. Maternal and Fetal Effect of Misgav Ladach Cesarean Section in Nigerian Women: A Randomized Control Study

    PubMed Central

    Ezechi, OC; Ezeobi, PM; Gab-Okafor, CV; Edet, A; Nwokoro, CA; Akinlade, A

    2013-01-01

    Background: The poor utilisation of the Misgav-Ladach (ML) caesarean section method in our environment, despite its proven advantages, has been attributed to several factors including its non-evaluation. A well designed and conducted trial is needed to provide evidence to convince clinicians of its advantage over Pfannenstiel-based methods. Aim: To evaluate the outcome of ML-based caesarean section among Nigerian women. Subjects and Methods: Randomised controlled open-label study of 323 women undergoing primary caesarean section in Lagos, Nigeria. The women were randomised to either the ML method or the Pfannenstiel-based (PB) caesarean section technique using computer-generated random numbers. Results: The mean duration of surgery (P < 0.001), time to first bowel motion (P = 0.01) and ambulation (P < 0.001) were significantly shorter in the ML group compared to the PB group. Postoperative anaemia (P < 0.01), analgesic needs (P = 0.02), extra suture use, estimated blood loss (P < 0.01) and post-operative complications (P = 0.001) were significantly lower in the ML group compared to the PB group. Though the mean hospital stay was shorter in the ML group (5.8 days as against 6.0 days), the difference was not statistically significant (P = 0.17). Of the fetal outcome measures compared, only the fetal extraction time differed significantly between the two groups (P = 0.001): a mean of 162 sec in the ML group compared to 273 sec in the PB group. Conclusions: This study confirmed the established benefits of the ML technique in Nigerian women as regards postoperative outcomes, duration of surgery, and fetal extraction time. The technique is recommended to clinicians, as its superior maternal and fetal outcomes and cost-saving advantages make it appropriate for use in poor-resource settings. PMID:24380012

  14. Revisiting the susceptibility testing of Mycobacterium tuberculosis to ethionamide in solid culture medium.

    PubMed

    Lakshmi, Rajagopalan; Ramachandran, Ranjani; Kumar, D Ravi; Sundar, A Syam; Radhika, G; Rahman, Fathima; Selvakumar, N; Kumar, Vanaja

    2015-11-01

    The increase in the isolation of drug-resistant phenotypes of Mycobacterium tuberculosis necessitates accuracy in the testing methodology. The critical concentration defining resistance for ethionamide (ETO) needs re-evaluation in accordance with the current scenario. Thus, re-evaluation of the conventional minimum inhibitory concentration (MIC) and proportion sensitivity testing (PST) methods for ETO was done to identify the ideal breakpoint concentration defining resistance. Isolates of M. tuberculosis (n=235) from new and treated patients were subjected to conventional MIC and PST methods for ETO following standard operating procedures. With the breakpoint concentration set at 114 and 156 µg/ml, an increase in specificity was observed, whereas sensitivity was high with 80 µg/ml as the breakpoint concentration. Errors due to false resistant and false susceptible isolates were least at the 80 µg/ml concentration. Performance parameters at the 80 µg/ml breakpoint concentration indicated significant association between the PST and MIC methods.
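
    The sensitivity/specificity trade-off across candidate breakpoint concentrations described above can be computed from paired MIC values and reference resistance calls. A minimal sketch with hypothetical data (not the study's isolates):

```python
def evaluate_breakpoint(mics, resistant_ref, breakpoint):
    # An isolate is called resistant if its MIC exceeds the breakpoint.
    calls = [mic > breakpoint for mic in mics]
    tp = sum(c and r for c, r in zip(calls, resistant_ref))
    tn = sum((not c) and (not r) for c, r in zip(calls, resistant_ref))
    fp = sum(c and (not r) for c, r in zip(calls, resistant_ref))
    fn = sum((not c) and r for c, r in zip(calls, resistant_ref))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical MICs (µg/ml) with reference resistance status (1 = resistant).
mics          = [20, 40, 80, 100, 120, 160, 320, 640]
resistant_ref = [0,  0,  0,  0,   1,   1,   1,   1]
sens80, spec80 = evaluate_breakpoint(mics, resistant_ref, 80)
sens156, spec156 = evaluate_breakpoint(mics, resistant_ref, 156)
```

    With this toy data the lower breakpoint maximizes sensitivity while the higher one maximizes specificity, the same trade-off the study reports.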

  15. Syringe-cartridge solid-phase extraction method for patulin in apple juice.

    PubMed

    Eisele, Thomas A; Gibson, Midori Z

    2003-01-01

    A syringe-cartridge solid-phase extraction (SPE) method was developed for determination of patulin in apple juice. A 2.5 mL portion of test sample was passed through a conditioned macroporous SPE cartridge and washed with 2 mL 1% sodium bicarbonate followed by 2 mL 1% acetic acid. Patulin was eluted with 1 mL 10% ethyl acetate in ethyl ether and determined by reversed-phase liquid chromatography using a mobile phase consisting of 81% acetonitrile, 9% water, and 10% 0.05M potassium phosphate buffer, pH 2.4. Recoveries averaged 92% and the relative standard deviation was 8.0% in test samples spiked with 50 ng/mL patulin. The method appears to be applicable for monitoring apple juice samples to meet the U.S. Food and Drug Administration compliance action level of 50 microg/kg in an industrial quality assurance laboratory environment.

  16. Analysis Study of Stevioside and Rebaudioside A from Stevia rebaudiana Bertoni by Normal Phase SPE and RP-HPLC

    NASA Astrophysics Data System (ADS)

    Martono, Y.; Rohman, A.; Riyanto, S.; Martono, S.

    2018-04-01

    A Solid Phase Extraction (SPE) method using silica as sorbent for stevioside and rebaudioside A analysis in Stevia rebaudiana Bertoni leaf has not previously been reported. The aim of this study is to develop an SPE method using silica as sorbent for Reverse Phase-High Performance Liquid Chromatography (RP-HPLC) analysis of stevioside and rebaudioside A in S. rebaudiana leaf. The results of this study indicate that the optimal conditions for normal phase SPE (silica) are conditioning with 3.0 mL of hexane. The sample loading volume is 0.1 mL. The cartridge is eluted with 1.0 mL acetonitrile:water (80:20, v/v) to separate both analytes. The cartridge is washed with 0.3 mL each of chloroform and water. The developed SPE sample preparation method meets the accuracy and precision tests and can be used for the analysis of stevioside and rebaudioside A by RP-HPLC.

  17. Assessment of volume measurement of breast cancer-related lymphedema by three methods: circumference measurement, water displacement, and dual energy X-ray absorptiometry.

    PubMed

    Gjorup, Caroline; Zerahn, Bo; Hendel, Helle W

    2010-06-01

    Following treatment for breast cancer, 12%-60% of patients develop breast cancer-related lymphedema (BCRL). There are several ways of assessing BCRL. Circumference measurement (CM) and water displacement (WD) for volume measurement (VM) are frequently used methods in practice and research, respectively. The aim of this study was to evaluate CM and WD for VM of the BCRL arm and the contralateral arm, comparing the results with regional dual energy X-ray absorptiometry (DXA). Twenty-four women with unilateral BCRL were included in the study. Blinded duplicate VM were obtained from both arms using the three methods mentioned above. CM and DXA were performed by two observers; WD was performed by a group of observers. Mean differences (d) in duplicated volumes, limits of agreement (LOA), and 95% confidence intervals (CI) were calculated for each method. The repeatability, expressed as d (95% CI) between the duplicated VM of the BCRL arm and the contralateral arm, was 3 ml (-6 to 11) and 3 ml (1 to 7), respectively, for DXA. For CM and WD, the d (95% CI) of the BCRL arm was 107 ml (86 to 127) and 26 ml (-26 to 79), respectively, and of the contralateral arm 100 ml (78 to 122) and -6 ml (-29 to 17), respectively. DXA is superior in repeatability to CM and WD for VM, especially for the BCRL arm but also for the contralateral arm.
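
    The repeatability quantities reported above (mean difference d, 95% limits of agreement, and the 95% CI of d) follow the standard Bland-Altman calculation. A sketch with hypothetical duplicate volumes:

```python
import numpy as np

def repeatability(v1, v2):
    # Bland-Altman style summary of duplicate measurements:
    # mean difference d, 95% limits of agreement d +/- 1.96*SD,
    # and 95% CI of d, d +/- 1.96*SD/sqrt(n).
    diff = np.asarray(v1, float) - np.asarray(v2, float)
    n = len(diff)
    d = diff.mean()
    sd = diff.std(ddof=1)
    loa = (d - 1.96 * sd, d + 1.96 * sd)
    ci = (d - 1.96 * sd / np.sqrt(n), d + 1.96 * sd / np.sqrt(n))
    return d, loa, ci

# Hypothetical duplicate arm volumes (ml) from one method.
first  = [2510, 2630, 2480, 2555, 2600]
second = [2500, 2620, 2490, 2550, 2610]
d, loa, ci = repeatability(first, second)
```

    A narrower LOA band indicates better repeatability, which is how the three methods above are ranked.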

  18. A simple high-performance liquid chromatographic method for the determination of acyclovir in human plasma and application to a pharmacokinetic study.

    PubMed

    Yu, Liyan; Xiang, Bingren; Zhan, Ying

    2008-01-01

    A rapid, simple and sensitive reversed-phase high-performance liquid chromatographic (HPLC) method has been developed for the measurement of acyclovir (CAS 59277-89-3) concentrations in human plasma, and its use in bioavailability studies is evaluated. The method was linear in the concentration range of 0.05-4.0 microg/ml. The lower limit of quantification (LLOQ) was 0.05 microg/ml in a 0.5 ml plasma sample. The intra- and inter-day relative standard deviations across three validation runs over the entire concentration range were less than 8.2%. The method was successfully applied to the evaluation of the pharmacokinetic profiles of acyclovir capsules in 19 healthy volunteers. The main pharmacokinetic parameters obtained were: AUC(0-t) 6.50 +/- 1.47 and 7.13 +/- 1.44 microg x h/ml, AUC(0-infinity) 6.77 +/- 1.48 and 7.41 +/- 1.49 microg x h/ml, C(max) 2.27 +/- 0.57 and 2.27 +/- 0.62 microg/ml, t(1/2) 2.96 +/- 0.41 and 2.88 +/- 0.33 h, t(max) 0.8 +/- 0.3 and 1.0 +/- 0.5 h for the test and reference formulations, respectively. No statistical differences were observed for C(max) and the area under the plasma concentration-time curve for acyclovir. The 90% confidence limits calculated for C(max) and AUC from zero to infinity (AUC(0-infinity)) of acyclovir were included in the bioequivalence range (0.8-1.25 for AUC).

  19. [Clinical studies on regulatory system of thyroid hormone secretion and serum triiodothyronine. Part I. Solid-state radioimmunoassay for human serum TSH and its clinical application (author's transl)].

    PubMed

    Takeda, Y

    1975-01-20

    A solid-state RIA method using a plastic microtiter plate for human TSH was developed: 1) The choice of carrier protein for standard TSH was critical in this method, and pooled sera from untreated Graves' patients were found to be suitable for this purpose. The mean lowest detectable TSH level was 0.2 µU/assay, which was almost equal to those reported by other methods. This method is superior in its simple assay procedure, especially in the separation of bound and free TSH, and in requiring a shorter incubation time than the double antibody method. 2) Serum TSH concentration in 22 normal subjects, 17 patients with Graves' disease, 35 with Hashimoto's thyroiditis, 18 with primary hypothyroidism, 16 with simple goiter, 4 with nodular goiter and 7 with secondary hypothyroidism was estimated as 4.7 +/- 2.0 µU/ml (mean +/- s.d.), 2.1 +/- 0.2 µU/ml, 14.1 +/- 26.5 µU/ml, 211 +/- 177 µU/ml, 3.6 +/- 2.4 µU/ml, 3.2 +/- 2.4 µU/ml and 2.6 +/- 1.0 µU/ml, respectively. 3) A statistically significant, hyperbolic inverse correlation (r = -0.37, N = 90) was found between TSH and T4 levels. Some cases with normal T4 levels were found to have high TSH levels. It was also noted that 36 of 65 euthyroid cases (55.4%) who had been treated with 131I for Graves' disease showed elevated TSH levels. 4) After intravenous injection of 500 µg TRH, the TSH level reached its peak value of 8 to 32 µU/ml at 15 to 45 minutes in normal subjects. Low to no response was found in patients with Graves' disease. An exaggerated response to TRH was observed in patients with primary hypothyroidism, and an inhibitory process in TSH production at the pituitary level was suggested in patients with Cushing syndrome. Hypothyroid patients with pituitary lesions showed low or no response; on the other hand, some hypothyroid patients with lesions around the pituitary and hypothalamus showed high basal TSH and an exaggerated response to TRH.

  20. Development and Validation of a Sensitive LC-MS/MS Method for the Determination of Fenoterol in Human Plasma and Urine Samples

    PubMed Central

    Sanghvi, M.; Ramamoorthy, A.; Strait, J.; Wainer, I. W.; Moaddel, R.

    2013-01-01

    Due to the lack of sensitivity of current methods for the determination of fenoterol (Fen), a rapid LC-MS/MS method was developed for the determination of (R,R′)-Fen and (R,R′;S,S′)-Fen in plasma and urine. The method was fully validated and was linear from 50 pg/ml to 2000 pg/ml for plasma and from 2.500 ng/ml to 160 ng/ml for urine, with a lower limit of quantitation of 52.8 pg/ml in plasma. The coefficient of variation was <15% for the high QC standards and <10% for the low QC standards in plasma, and was <15% for the high and low QC standards in urine. The relative concentrations of (R,R′)-Fen and (S,S′)-Fen were determined using a Chirobiotic T chiral stationary phase. The method was used to determine the concentration of (R,R′)-Fen in plasma and urine samples obtained in an oral cross-over study of (R,R′)-Fen and (R,R′;S,S′)-Fen formulations. The results demonstrated a potential pre-systemic enantioselective interaction in which (S,S′)-Fen reduces the sulfation of the active (R,R′)-Fen. The data suggest that a non-racemic mixture of the Fen enantiomers may provide better bioavailability of the active (R,R′)-Fen for use in the treatment of cardiovascular disease. PMID:23872161

  1. Optimization and validation of high-performance liquid chromatography method for analyzing 25-desacetyl rifampicin in human urine

    NASA Astrophysics Data System (ADS)

    Lily; Laila, L.; Prasetyo, B. E.

    2018-03-01

    A selective, reproducible, effective, sensitive, simple and fast High-Performance Liquid Chromatography (HPLC) method was developed, optimized and validated to analyze 25-Desacetyl Rifampicin (25-DR) in human urine from tuberculosis patients. The separation was performed on an HPLC Agilent Technologies system with an Agilent Eclipse XDB-C18 column and a mobile phase of 65:35 v/v methanol:0.01 M sodium phosphate buffer pH 5.2, at 254 nm and a flow rate of 0.8 ml/min. The mean retention time was 3.016 minutes. The method was linear from 2-10 μg/ml 25-DR with a correlation coefficient of 0.9978. The standard deviation, relative standard deviation and coefficient of variation for 2, 6 and 10 μg/ml 25-DR were 0-0.0829, 0-3.1752 and 0-0.0317%, respectively. The recovery of 5, 7 and 9 μg/ml 25-DR was 80.8661, 91.3480 and 111.1457%, respectively. Limits of detection (LoD) and quantification (LoQ) were 0.51 and 1.7 μg/ml, respectively. The method fulfils the validity guidelines of the International Conference on Harmonisation (ICH) for bioanalytical methods, which include the parameters of specificity, linearity, precision, accuracy, LoD, and LoQ. The developed method is suitable for pharmacokinetic analysis of various concentrations of 25-DR in human urine.
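
    Limits of detection and quantification of the kind reported above are commonly estimated from the calibration curve following the ICH convention, LoD = 3.3σ/S and LoQ = 10σ/S, where σ is the residual standard deviation of the regression and S its slope. A sketch with made-up calibration data (not the study's measurements):

```python
import numpy as np

def lod_loq(conc, response):
    # Least-squares calibration line; ICH-style estimates:
    # LOD = 3.3*sigma/S, LOQ = 10*sigma/S
    # (sigma: residual SD of the fit, S: slope).
    conc = np.asarray(conc, float)
    response = np.asarray(response, float)
    S, intercept = np.polyfit(conc, response, 1)
    resid = response - (S * conc + intercept)
    sigma = resid.std(ddof=2)  # two fitted parameters
    return 3.3 * sigma / S, 10 * sigma / S

# Hypothetical peak areas for a 2-10 µg/ml calibration series.
conc = [2, 4, 6, 8, 10]
area = [41, 79, 122, 159, 201]
lod, loq = lod_loq(conc, area)
```

    By construction LoQ is roughly three times LoD, consistent with the 0.51 and 1.7 μg/ml values quoted above.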

  2. Development and Validation of Stability-Indicating Derivative Spectrophotometric Methods for Determination of Dronedarone Hydrochloride

    NASA Astrophysics Data System (ADS)

    Chadha, R.; Bali, A.

    2016-05-01

    Rapid, sensitive, cost-effective and reproducible stability-indicating derivative spectrophotometric methods have been developed for the estimation of dronedarone HCl employing peak-zero (P-0) and peak-peak (P-P) techniques, and their stability-indicating potential was assessed in forced-degraded solutions of the drug. The methods were validated with respect to linearity, accuracy, precision and robustness. Excellent linearity was observed at concentrations of 2-40 μg/ml (r2 = 0.9986). LOD and LOQ values for the proposed methods ranged from 0.42-0.46 μg/ml and 1.21-1.27 μg/ml, respectively, and excellent recovery of the drug was obtained in the tablet samples (99.70 ± 0.84%).

  3. Detection of shigella in lettuce by the use of a rapid molecular assay with increased sensitivity

    PubMed Central

    Jiménez, Kenia Barrantes; McCoy, Clyde B.; Achí, Rosario

    2010-01-01

    A Multiplex Polymerase Chain Reaction (PCR) assay was developed to be used as an alternative to the conventional culture method in detecting the Shigella and enteroinvasive Escherichia coli (EIEC) virulence genes ipaH and ial in lettuce. The efficacy and rapidity of the molecular method were determined as compared to conventional culture. Lettuce samples were inoculated with different Shigella flexneri concentrations (from 10 CFU/ml to 10(7) CFU/ml). DNA was extracted directly from lettuce after inoculation (direct PCR) and after an enrichment step (enrichment PCR). The Multiplex PCR detection limit was 10(4) CFU/ml, and diagnostic sensitivity and specificity were both 100%. An internal amplification control (IAC) of 100 bp was used in order to avoid false negative results. This method produced results in 1 to 2 days, while the conventional culture method required 5 to 6 days. The culture method detection limit was 10(6) CFU/ml, diagnostic sensitivity was 53% and diagnostic specificity was 100%. In this study a Multiplex PCR method for detection of virulence genes in Shigella and EIEC was shown to be effective in terms of diagnostic sensitivity, detection limit and turnaround time as compared to conventional Shigella culture. PMID:24031579

  4. In vitro susceptibility of Sporothrix schenckii to six antifungal agents determined using three different methods.

    PubMed

    Alvarado-Ramírez, Eidi; Torres-Rodríguez, Josep M

    2007-07-01

    The in vitro susceptibility of Sporothrix schenckii to antifungal drugs has been determined with three different methods. Nineteen Peruvian clinical isolates of S. schenckii were tested against amphotericin B (AB), flucytosine (FC), fluconazole (FZ), itraconazole (IZ), voriconazole (VZ), and ketoconazole (KZ). Modified NCCLS M38-A, Sensititre YeastOne (SYO), and ATB Fungus 2 (ATBF2) methods were used to determine the MICs. ATCC isolates of Candida parapsilosis, Candida krusei, and Aspergillus flavus were used for quality control. Sporothrix inocula were prepared with the mycelial form growing on potato dextrose agar at 28 +/- 2 degrees C. MICs of AB, FC, FZ, and IZ were determined with all three methods, VZ with M38-A and SYO, and KZ with only SYO. The three methods showed high MICs of FZ and FC (MIC(90) of 0.5 microg/ml), being homogeneously lower than those of IZ and KZ. The M38-A method showed a variable MIC range of VZ (4.0 to 16 microg/ml); the geometric mean (GM) was 9.3 microg/ml. The MIC range of AB was wide (0.06 to 16 microg/ml), but the GM was 1.2 microg/ml, suggesting that the MIC is strain dependent. Agreement (two log(2) dilutions) between the commercial techniques and the modified M38-A method was very high with FZ, IZ, and FC. For AB and VZ, the agreement was lower, being related to the antifungal concentrations of each method. The highest activity against S. schenckii was found with IZ and KZ. Lack of activity was observed with FZ, VZ, and FC. When AB is indicated for sporotrichosis, the susceptibility of the strain must be analyzed. Commercial quantitative antifungal methods have limited usefulness for S. schenckii.

  5. [In vitro susceptibilities of causative organisms isolated from patients with primary respiratory tract infections to BRL 25000 (clavulanic acid/amoxicillin)].

    PubMed

    Deguchi, K; Fukayama, S; Nishimura, Y; Yokota, N; Tanaka, S; Oda, S; Matsumoto, Y; Ikegami, R; Sato, K; Fukumoto, T

    1985-10-01

    The in vitro susceptibilities of various causative organisms recently isolated from patients with primary respiratory tract infections to BRL 25000 (a formulation of amoxicillin, 2 parts, and potassium clavulanate, 1 part), amoxicillin (AMPC), cefaclor (CCL), cephalexin (CEX), cefadroxil (CDX) and cefroxadine (CXD) were determined. beta-Lactamase-producing strains were detected by the nitrocefin chromogenic method and the PCG acidometric method. The frequency of beta-lactamase production in strains of S. aureus, H. influenzae, B. catarrhalis and K. pneumoniae was 92%, 18%, 36% and 98%, respectively. Against S. aureus strains with MIC values for AMPC of less than or equal to 100 micrograms/ml and for CEX of less than or equal to 25 micrograms/ml, BRL 25000 showed MIC values in the range 0.39-6.25 micrograms/ml with inocula of 10(6) CFU/ml, while BRL 25000 required concentrations of 12.5-100 micrograms/ml for inhibition of the strains with MIC values for AMPC of greater than 100 micrograms/ml and for CEX of greater than or equal to 25 micrograms/ml. Against S. pyogenes and S. pneumoniae, BRL 25000 showed MIC values in the range less than 0.024-0.10 micrograms/ml with inocula of 10(6) CFU/ml, which is much more active than CCL, CEX, CDX and CXD and slightly less active than AMPC. Against H. influenzae and B. catarrhalis, BRL 25000 showed MIC values in the range 0.20-6.25 micrograms/ml with inocula of 10(6) CFU/ml, the most potent activity among the agents tested. The activity of BRL 25000 against K. pneumoniae was approximately equal to that of CCL and superior to that of AMPC, CEX, CDX and CXD.

  6. Comprehensive assessment and performance improvement of effector protein predictors for bacterial secretion systems III, IV and VI.

    PubMed

    An, Yi; Wang, Jiawei; Li, Chen; Leier, André; Marquez-Lago, Tatiana; Wilksch, Jonathan; Zhang, Yang; Webb, Geoffrey I; Song, Jiangning; Lithgow, Trevor

    2018-01-01

    Bacterial effector proteins secreted by various protein secretion systems play crucial roles in host-pathogen interactions. In this context, computational tools capable of accurately predicting effector proteins of the various types of bacterial secretion systems are highly desirable. Existing computational approaches use different machine learning (ML) techniques and heterogeneous features derived from protein sequences and/or structural information. These predictors differ not only in the ML methods used but also in the curated data sets used, the feature selection and their prediction performance. Here, we provide a comprehensive survey and benchmarking of currently available tools for the prediction of effector proteins of bacterial type III, IV and VI secretion systems (T3SS, T4SS and T6SS, respectively). We review core algorithms, feature selection techniques, tool availability and applicability, and evaluate the prediction performance based on carefully curated independent test data sets. In an effort to improve predictive performance, we constructed three ensemble models based on ML algorithms by integrating the output of all the individual predictors reviewed. Our benchmarks demonstrate that these ensemble models outperform all the reviewed tools for the prediction of effector proteins of T3SS and T4SS. The webserver of the proposed ensemble methods for T3SS and T4SS effector protein prediction is freely available at http://tbooster.erc.monash.edu/index.jsp. We anticipate that this survey will serve as a useful guide for interested users and that the new ensemble predictors will stimulate research into host-pathogen relationships and inspire the development of new bioinformatics tools for predicting effector proteins of T3SS, T4SS and T6SS. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
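
    One simple way to integrate the output of several individual predictors, as the ensemble models above do, is majority voting over per-protein labels. A minimal sketch with hypothetical tool outputs (the study's actual ensembles may weight or stack predictors differently):

```python
from collections import Counter

def majority_vote(predictions):
    # predictions: list of per-tool label lists, one label per protein.
    # Returns the most common label at each position.
    ensemble = []
    for labels in zip(*predictions):
        ensemble.append(Counter(labels).most_common(1)[0][0])
    return ensemble

# Hypothetical outputs of three effector predictors (1 = effector).
tool_a = [1, 0, 1, 1, 0]
tool_b = [1, 0, 0, 1, 0]
tool_c = [0, 0, 1, 1, 1]
consensus = majority_vote([tool_a, tool_b, tool_c])
```

    With an odd number of binary predictors there are no ties; weighted voting or a meta-classifier trained on the tool outputs (stacking) are common refinements.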

  7. A method of preserving and testing the acceptability of gac fruit oil, a good source of beta-carotene and essential fatty acids.

    PubMed

    Vuong, L T; King, J C

    2003-06-01

    Gac fruit (Momordica cochinchinensis Spreng) is indigenous to Vietnam and other countries in Southeast Asia. Its seed pulp contains high concentrations of carotenoids, especially the provitamin A, beta-carotene. In northern Vietnam, gac fruits are seasonal and are mainly used in making a rice dish called xoi gac. The purpose of this study was to develop a method to collect and preserve gac fruit oil, to evaluate the nutritional composition of the oil, and to assess the acceptability of the gac oil by typical Vietnamese homemakers. One hundred women participated in training to learn how to prepare the fruits and operate the oil press. The women also participated in a survey of gac fruit use and their habitual use of animal fat and vegetable oil. Among all the participants in the training and surveys, 35 women actually produced oil from gac fruits grown in the village, using manual oil presses and locally available materials. The total carotene concentration in gac fruit oil was 5,700 micrograms/ml. The concentration of beta-carotene was 2,710 micrograms/ml. Sixty-nine percent of total fat was unsaturated, and 35% of that was polyunsaturated. The average daily consumption of gac fruit oil was estimated at 2 ml per person. The daily beta-carotene intake (from gac fruit oil) averaged approximately 5 mg per person. It was found that gac oil can be produced locally by village women using manual presses and locally available materials. The oil is a rich source of beta-carotene, vitamin E, and essential fatty acids. Although the beta-carotene concentration declines with time without a preservative or proper storage, it was still high after three months. The oil was readily accepted by the women and their children, and consumption of the oil increased the intake of beta-carotene and reduced the intake of lard.

  8. Biomedical visual data analysis to build an intelligent diagnostic decision support system in medical genetics.

    PubMed

    Kuru, Kaya; Niranjan, Mahesan; Tunca, Yusuf; Osvank, Erhan; Azim, Tayyaba

    2014-10-01

    In general, medical geneticists aim to pre-diagnose underlying syndromes based on facial features before performing cytological or molecular analyses where a genotype-phenotype interrelation is possible. However, determining correct genotype-phenotype interrelationships among many syndromes is tedious and labor-intensive, especially for extremely rare syndromes. Thus, a computer-aided system for pre-diagnosis can facilitate effective and efficient decision support, particularly when few similar cases are available, or in remote rural districts where diagnostic knowledge of syndromes is not readily available. The proposed methodology, a visual diagnostic decision support system (visual diagnostic DSS), employs machine learning (ML) algorithms and digital image processing techniques in a hybrid approach for automated diagnosis in medical genetics. This approach uses facial features in reference images of disorders to identify visual genotype-phenotype interrelationships. Our statistical method describes facial image data as principal component features and diagnoses syndromes using these features. The proposed system was trained using a real dataset of previously published face images of subjects with syndromes, which provided accurate diagnostic information. The method was tested using a leave-one-out cross-validation scheme with 15 different syndromes, each comprising 5-9 cases (92 cases in total). An accuracy rate of 83% was achieved using this automated diagnosis technique, which was statistically significant (p<0.01). Furthermore, the sensitivity and specificity values were 0.857 and 0.870, respectively. Our results show that accurate classification of syndromes is feasible using ML techniques. Thus, a large number of syndromes with characteristic facial anomaly patterns could be diagnosed with diagnostic DSSs similar to the one described in the present study, thereby demonstrating the benefits of using hybrid image processing and ML-based computer-aided diagnostics for identifying facial phenotypes. Copyright © 2014. Published by Elsevier B.V.
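    The pipeline of principal-component features plus leave-one-out evaluation can be sketched with NumPy alone; a nearest-neighbour rule stands in for the paper's classifier (which the abstract does not specify), and the four toy vectors below stand in for face-image data from two "syndromes".

```python
import numpy as np

def pca_features(X, k):
    """Project mean-centred sample vectors onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def loocv_accuracy(X, y):
    """Leave-one-out cross-validation with a nearest-neighbour classifier."""
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i           # hold out case i
        d = np.linalg.norm(X[mask] - X[i], axis=1)
        correct += int(y[mask][np.argmin(d)] == y[i])
    return correct / len(X)

# Toy stand-ins for face-image vectors from two syndromes
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]])
y = np.array([0, 0, 1, 1])
print(loocv_accuracy(pca_features(X, 2), y))
```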

  9. SU-E-I-96: A Study About the Influence of ROI Variation On Tumor Segmentation in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, L; Tan, S; Lu, W

    2014-06-01

    Purpose: To study the influence of different regions of interest (ROI) on tumor segmentation in PET. Methods: The experiments were conducted on a cylindrical phantom. Six spheres with different volumes (0.5 ml, 1 ml, 6 ml, 12 ml, 16 ml and 20 ml) were placed inside a cylindrical container to mimic tumors of different sizes. The spheres were filled with 11C solution as sources and the cylindrical container was filled with 18F-FDG solution as the background. The phantom was continuously scanned in a Biograph-40 True Point/True View PET/CT scanner, and 42 images were reconstructed with source-to-background ratio (SBR) ranging from 16:1 to 1.8:1. We took a large and a small ROI for each sphere, each of which contained the whole sphere and no other spheres. Six other ROIs of different sizes were then taken between the large and the small ROI. For each ROI, all images were segmented by eight thresholding methods and eight advanced methods, respectively. The segmentation results were evaluated by the Dice similarity index (DSI), classification error (CE) and volume error (VE). The robustness of different methods to ROI variation was quantified using the interrun variation and a generalized Cohen's kappa. Results: As the ROI changed, the segmentation results of all tested methods changed to some degree. Compared with the advanced methods, thresholding methods were less affected by the ROI change. In addition, most of the thresholding methods produced more accurate segmentation results for all sphere sizes. Conclusion: The results showed that the segmentation performance of all tested methods was affected by the change of ROI. Thresholding methods were more robust to this change and segmented the PET images more accurately. This work was supported in part by the National Natural Science Foundation of China (NNSFC), under Grant Nos. 60971112 and 61375018, and the Fundamental Research Funds for the Central Universities, under Grant No. 2012QN086. Wei Lu was supported in part by the National Institutes of Health (NIH) Grant No. R01 CA172638.
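    Two of the evaluation metrics named above, the Dice similarity index and the volume error, are straightforward on binary masks; the 10x10 masks below are a hypothetical miniature of a PET segmentation against its reference.

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity index: 2|A∩B| / (|A|+|B|) for binary masks."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())

def volume_error(seg, ref):
    """Relative volume error of the segmentation vs. the reference."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    return abs(int(seg.sum()) - int(ref.sum())) / ref.sum()

ref = np.zeros((10, 10), bool); ref[2:8, 2:8] = True   # 36-voxel "tumor"
seg = np.zeros((10, 10), bool); seg[3:8, 2:8] = True   # one row under-segmented
print(dice(seg, ref), volume_error(seg, ref))
```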

  10. Quantitation of mandibular symphysis volume as a source of bone grafting.

    PubMed

    Verdugo, Fernando; Simonian, Krikor; Smith McDonald, Roberto; Nowzari, Hessam

    2010-06-01

    Autogenous intramembranous bone grafts present several advantages, such as minimal resorption and a high concentration of bone morphogenetic proteins. A method for measuring the amount of bone that can be harvested from the symphysis area has not been reported in real patients. The aim of the present study was to intrasurgically quantitate the volume of the symphysis bone graft that can be safely harvested in live patients and compare it with AutoCAD (version 16.0, Autodesk, Inc., San Rafael, CA, USA) tomographic calculations. The AutoCAD software program quantitated the symphysis bone graft in 40 patients using computerized tomographies. Direct intrasurgical measurements were recorded thereafter and compared with the AutoCAD data. The bone volume was measured at the recipient sites of a subgroup of 10 patients, 6 months post sinus augmentation. The volume of bone graft measured by AutoCAD averaged 1.4 mL (SD 0.6 mL, range: 0.5-2.7 mL). The volume of bone graft measured intrasurgically averaged 2.3 mL (SD 0.4 mL, range 1.7-2.8 mL). The difference between the two measurement methods was statistically significant. The bone volume measured at the recipient sites 6 months post sinus augmentation averaged 1.9 mL (SD 0.3 mL, range 1.3-2.6 mL), with a mean loss of 0.4 mL. AutoCAD did not overestimate the volume of bone that can be safely harvested from the mandibular symphysis. The use of the design software program may improve surgical treatment planning prior to sinus augmentation.
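    A paired comparison of the two measurement methods on the same patients can be checked with a paired t statistic; the abstract does not name the test used, and the four patient volumes below are invented for illustration.

```python
import numpy as np

def paired_t(a, b):
    """Paired t statistic (and degrees of freedom) for two methods
    measured on the same subjects."""
    d = np.asarray(a, float) - np.asarray(b, float)
    n = d.size
    return d.mean() / (d.std(ddof=1) / np.sqrt(n)), n - 1

# Hypothetical graft volumes (mL): intrasurgical vs. AutoCAD, same 4 patients
intra = [2.3, 2.1, 2.5, 2.0]
cad = [1.4, 1.2, 1.6, 1.5]
t, df = paired_t(intra, cad)
print(t, df)
```

The statistic would then be compared against the t distribution with `df` degrees of freedom to obtain a p value.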

  11. Modeling a Spatio-Temporal Individual Travel Behavior Using Geotagged Social Network Data: a Case Study of Greater Cincinnati

    NASA Astrophysics Data System (ADS)

    Saeedimoghaddam, M.; Kim, C.

    2017-10-01

    Understanding individual travel behavior is vital in travel demand management as well as in urban and transportation planning. New data sources, including mobile phone data and location-based social media (LBSM) data, allow us to understand mobility behavior at an unprecedented level of detail. Recent studies of trip purpose prediction tend to use machine learning (ML) methods, since they generally produce high levels of predictive accuracy. Few studies have used LBSM as a large data source to extend its potential in predicting individual travel destinations using ML techniques. In the presented research, we created a spatio-temporal probabilistic model based on an ensemble ML framework named "Random Forests", utilizing travels extracted from geotagged Tweets in 419 census tracts of the Greater Cincinnati area, to predict the tract ID of an individual's travel destination at any time from the information of its origin. We evaluated the model accuracy using the travels extracted from the Tweets themselves as well as travels from a household travel survey. Tweet- and survey-based travels that start from the same tract in the south-western part of the study area are more likely to share the same destination than those in other parts. Also, both Tweet- and survey-based travels were affected by the attraction points in downtown Cincinnati and the tracts in the north-eastern part of the area. Finally, both evaluations show that the model predictions are acceptable, but the model cannot predict destinations using inputs from other data sources as precisely as with the Tweet-based data.
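    The origin-plus-time-to-destination setup can be sketched as a Random Forest classification problem, assuming scikit-learn is available; the trips and the simple commuting rule below are synthetic, not from the Cincinnati data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic trips: features (origin_tract, hour_of_day) -> destination_tract
rng = np.random.default_rng(0)
origins = rng.integers(0, 5, 400)
hours = rng.integers(0, 24, 400)
# Toy rule: trips from tract t head to tract (t + 1) % 5 in the morning,
# back to tract t otherwise
dest = np.where(hours < 12, (origins + 1) % 5, origins)

X = np.column_stack([origins, hours])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, dest)
print(clf.predict([[2, 9], [2, 20]]))  # morning vs. evening trip from tract 2
```

In the real study each tract would contribute richer spatio-temporal features, but the fit/predict protocol is the same.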

  12. A SEDIMENT TOXICITY METHOD USING LEMNA MINOR, DUCKWEED

    EPA Science Inventory

    We developed a Lemna minor sediment toxicity test method to assess sediment contaminants which may affect plants. This 96-hour test used 15 ml of sediment and 2 ml of overlying water which was renewed after 48 hours. Sand was used as the control sediment and also to dilute test ...

  13. INTERLABORATORY COMPARISON OF A REDUCED VOLUME MARINE SEDIMENT TOXICITY TEST METHOD USING AMPHIPOD AMPELISCA ABDITA

    EPA Science Inventory

    The U.S. Environmental Protection Agency has standardized methods for performing acute marine amphipod sediment toxicity tests. A test design reducing sediment volume from 200 to 50 ml and overlying water from 600 to 150 ml was recently proposed. An interlaboratory comparison wa...

  14. Maternal and fetal effect of misgav ladach cesarean section in nigerian women: a randomized control study.

    PubMed

    Ezechi, Oc; Ezeobi, Pm; Gab-Okafor, Cv; Edet, A; Nwokoro, Ca; Akinlade, A

    2013-10-01

    The poor utilisation of the Misgav-Ladach (ML) caesarean section method in our environment, despite its proven advantages, has been attributed to several factors including its non-evaluation. A well designed and conducted trial is needed to provide evidence to convince clinicians of its advantages over Pfannenstiel-based methods. To evaluate the outcome of ML-based caesarean section among Nigerian women. Randomised controlled open-label study of 323 women undergoing primary caesarean section in Lagos, Nigeria. The women were randomised to either the ML method or the Pfannenstiel-based (PB) caesarean section technique using computer-generated random numbers. The mean duration of surgery (P < 0.001), time to first bowel motion (P = 0.01) and time to ambulation (P < 0.001) were significantly shorter in the ML group compared to the PB group. Postoperative anaemia (P < 0.01), analgesic needs (P = 0.02), extra suture use, estimated blood loss (P < 0.01) and post-operative complications (P = 0.001) were significantly lower in the ML group compared to the PB group. Though the mean hospital stay was shorter in the ML group (5.8 days as against 6.0 days), the difference was not statistically significant (P = 0.17). Of the fetal outcome measures compared, only fetal extraction time differed significantly between the two groups (P = 0.001). The mean fetal extraction time was 162 sec in the ML group compared to 273 sec in the PB group. This study confirmed the established benefits of the ML technique in Nigerian women as they relate to postoperative outcomes, duration of surgery, and fetal extraction time. The technique is recommended to clinicians, as its superior maternal and fetal outcomes and cost-saving advantages make it appropriate for use in poor-resource settings.

  15. A simple, rapid and sensitive RP-HPLC-UV method for the simultaneous determination of sorafenib & paclitaxel in plasma and pharmaceutical dosage forms: Application to pharmacokinetic study.

    PubMed

    Khan, Ismail; Iqbal, Zafar; Khan, Abad; Hassan, Muhammad; Nasir, Fazle; Raza, Abida; Ahmad, Lateef; Khan, Amjad; Akhlaq Mughal, Muhammad

    2016-10-15

    A simple, economical, fast, and sensitive RP-HPLC-UV method has been developed for the simultaneous quantification of sorafenib and paclitaxel in biological samples and formulations using piroxicam as an internal standard. The experimental conditions were optimized and the method was validated according to standard guidelines. The separation of both analytes and the internal standard was achieved on a Discovery HS C18 column (250 mm × 4.6 mm, 5 μm) using acetonitrile and TFA (0.025%) in a ratio of 65:35 (v/v) as the mobile phase in isocratic mode at a flow rate of 1 ml/min, with detection at a wavelength of 245 nm and a column oven temperature of 25°C, in a short run time of 12 min. The limits of detection (LOD) were 5 and 10 ng/ml while the lower limits of quantification (LLOQ) were 10 and 15 ng/ml for sorafenib and paclitaxel, respectively. Sorafenib, paclitaxel and piroxicam (IS) were extracted from biological samples using acetonitrile as a precipitating and extraction solvent. The method is linear in the range of 15-20,000 ng/ml for paclitaxel and 10-5,000 ng/ml for sorafenib. The method is sensitive and reliable with respect to both its intra-day and inter-day coefficients of variation. The method was successfully applied to the quantification of the above-mentioned drugs in plasma. The developed method will be applied to sorafenib and paclitaxel pharmacokinetic studies in animal models. Copyright © 2016 Elsevier B.V. All rights reserved.
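    Detection and quantification limits of the kind reported here are commonly estimated from the calibration line via the ICH-style formulas LOD = 3.3σ/S and LOQ = 10σ/S, with σ the residual standard deviation and S the slope; the calibration data below are hypothetical, not from this study.

```python
import numpy as np

def lod_loq(conc, signal):
    """ICH-style limits from a linear calibration: LOD = 3.3*sigma/S,
    LOQ = 10*sigma/S (sigma = residual SD of the fit, S = slope)."""
    slope, intercept = np.polyfit(conc, signal, 1)
    resid = np.asarray(signal) - (slope * np.asarray(conc) + intercept)
    sigma = np.sqrt(np.sum(resid**2) / (len(conc) - 2))
    return 3.3 * sigma / slope, 10 * sigma / slope

# Hypothetical peak areas for a calibration series (ng/ml)
conc = np.array([10, 50, 100, 500, 1000, 5000], float)
area = 0.12 * conc + np.array([0.3, -0.4, 0.5, -0.2, 0.4, -0.6])
print(lod_loq(conc, area))
```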

  16. 40 CFR Appendix A to Subpart Ddd... - Free Formaldehyde Analysis of Insulation Resins by the Hydroxylamine Hydrochloride Method

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... buffer. 3.3 50-mL burette for 1.0 N sodium hydroxide. 3.4 Magnetic stirrer and stir bars. 3.5 250-mL beaker ...L beaker. Record sample weight. 5.3 Add 100 mL of the methanol/water mixture and stir on a magnetic...

  17. Maximization of the usage of coronary CTA derived plaque information using a machine learning based algorithm to improve risk stratification; insights from the CONFIRM registry.

    PubMed

    van Rosendael, Alexander R; Maliakal, Gabriel; Kolli, Kranthi K; Beecy, Ashley; Al'Aref, Subhi J; Dwivedi, Aeshita; Singh, Gurpreet; Panday, Mohit; Kumar, Amit; Ma, Xiaoyue; Achenbach, Stephan; Al-Mallah, Mouaz H; Andreini, Daniele; Bax, Jeroen J; Berman, Daniel S; Budoff, Matthew J; Cademartiri, Filippo; Callister, Tracy Q; Chang, Hyuk-Jae; Chinnaiyan, Kavitha; Chow, Benjamin J W; Cury, Ricardo C; DeLago, Augustin; Feuchtner, Gudrun; Hadamitzky, Martin; Hausleiter, Joerg; Kaufmann, Philipp A; Kim, Yong-Jin; Leipsic, Jonathon A; Maffei, Erica; Marques, Hugo; Pontone, Gianluca; Raff, Gilbert L; Rubinshtein, Ronen; Shaw, Leslee J; Villines, Todd C; Gransar, Heidi; Lu, Yao; Jones, Erica C; Peña, Jessica M; Lin, Fay Y; Min, James K

    Machine learning (ML) is a field in computer science that has been demonstrated to effectively integrate clinical and imaging data for the creation of prognostic scores. The current study investigated whether a ML score, incorporating only the 16-segment coronary tree information derived from coronary computed tomography angiography (CCTA), provides enhanced risk stratification compared with current CCTA-based risk scores. From the multi-center CONFIRM registry, patients were included with complete CCTA risk score information and ≥3 year follow-up for myocardial infarction and death (primary endpoint). Patients with prior coronary artery disease were excluded. Conventional CCTA risk scores (conventional CCTA approach, segment involvement score, Duke prognostic index, segment stenosis score, and the Leaman risk score) and a score created using ML were compared for the area under the receiver operating characteristic curve (AUC). Only 16-segment based coronary stenosis (0%, 1-24%, 25-49%, 50-69%, 70-99% and 100%) and composition (calcified, mixed and non-calcified plaque) were provided to the ML model. A boosted ensemble algorithm (extreme gradient boosting; XGBoost) was used and the entire data set was randomly split into a training set (80%) and a testing set (20%). First, tuned hyperparameters were used to generate a trained model from the training set (80% of the data). Second, the performance of this trained model was independently tested on the unseen test set (20% of the data). In total, 8844 patients (mean age 58.0 ± 11.5 years, 57.7% male) were included. During a mean follow-up of 4.6 ± 1.5 years, 609 events occurred (6.9%). No CAD was observed in 48.7% (3.5% event rate), non-obstructive CAD in 31.8% (6.8% event rate), and obstructive CAD in 19.5% (15.6% event rate). Discrimination of events as expressed by AUC was significantly better for the ML-based approach (0.771) vs the other scores (ranging from 0.685 to 0.701), P < 0.001. Net reclassification improvement analysis showed that the improved risk stratification was the result of down-classification of risk among patients who did not experience events (non-events). A risk score created by a ML-based algorithm that utilizes standard 16-coronary-segment stenosis and composition information derived from detailed CCTA reading has greater prognostic accuracy than current CCTA-integrated risk scores. These findings indicate that a ML-based algorithm can improve the integration of CCTA-derived plaque information to improve risk stratification. Published by Elsevier Inc.
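    The AUC statistic used to compare the scores has a rank interpretation, the probability that a randomly chosen event outranks a randomly chosen non-event, and can be computed without any ML library; the labels and risk scores below are made up.

```python
import numpy as np

def auc(y_true, y_score):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    y_true = np.asarray(y_true, bool)
    pos = np.asarray(y_score)[y_true]    # scores of events
    neg = np.asarray(y_score)[~y_true]   # scores of non-events
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical event labels and model risk scores for five patients
print(auc([1, 0, 1, 0, 0], [0.9, 0.3, 0.6, 0.7, 0.2]))
```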

  18. Formulation and Evaluation of Irinotecan Suppository for Rectal Administration

    PubMed Central

    Feng, Haiyang; Zhu, Yuping; Li, Dechuan

    2014-01-01

    Irinotecan suppository was prepared using the moulding method with a homogeneous blend. A sensitive and specific fluorescence method was developed and validated for the determination of irinotecan in plasma using HPLC. The pharmacokinetics of intravenously and rectally administered irinotecan in rabbits were investigated. Following a single intravenous dose of irinotecan (50 mg/kg), the plasma irinotecan concentration demonstrated a bi-exponential decay, with a rapid decline over 15 min. Cmax, t1/2, AUC0–30h and AUC0-∞ were 16.1 ± 2.7 μg/ml, 7.6 ± 1.2 h, 71.3 ± 8.8 μg·h/ml and 82.3 ± 9.5 μg·h/ml, respectively. Following rectal administration of 100 mg/kg irinotecan, the plasma irinotecan concentration reached a peak of 5.3 ± 2.5 μg/ml at 4 h. The AUC0–30h and AUC0-∞ were 32.2 ± 6.2 μg·h/ml and 41.6 ± 7.2 μg·h/ml, respectively, representing ∼50.6% of the absolute bioavailability. PMID:24596626
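    AUC values like those above come from trapezoidal integration of the concentration-time profile; the sampled concentrations below are hypothetical, while the final ratio uses the AUC0-∞ values reported in the abstract (~50.6%).

```python
import numpy as np

# Hypothetical plasma concentration-time points (h, ug/ml) after dosing
t = np.array([0, 1, 2, 4, 8, 24, 30], float)
c = np.array([0.0, 2.1, 3.8, 5.3, 3.9, 1.1, 0.4])

# Trapezoidal-rule AUC over the sampling window (ug*h/ml)
auc_0_30 = float(np.sum((c[1:] + c[:-1]) / 2 * np.diff(t)))
print(auc_0_30)

# Ratio of the reported rectal and intravenous AUC0-inf values
print(41.6 / 82.3)
```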

  20. Left Ventricular Stroke Volume Quantification by Contrast Echocardiography – Comparison of Linear and Flow-Based Methods to Cardiac Magnetic Resonance

    PubMed Central

    Dele-Michael, Abiola O.; Fujikura, Kana; Devereux, Richard B; Islam, Fahmida; Hriljac, Ingrid; Wilson, Sean R.; Lin, Fay; Weinsaft, Jonathan W.

    2014-01-01

    Background Echocardiography (echo) quantified LV stroke volume (SV) is widely used to assess systolic performance after acute myocardial infarction (AMI). This study compared two common echo approaches – predicated on flow (Doppler) and linear chamber dimensions (Teichholz) – to volumetric SV and global infarct parameters quantified by cardiac magnetic resonance (CMR). Methods Multimodality imaging was performed as part of a post-AMI registry. For echo, SV was measured by Doppler and Teichholz methods. Cine-CMR was used for volumetric SV and LVEF quantification, and delayed-enhancement CMR for infarct size. Results 142 patients underwent same-day echo and CMR. On echo, mean SV by Teichholz (78±17ml) was slightly higher than Doppler (75±16ml; Δ=3±13ml, p=0.02). Compared to SV on CMR (78±18ml), the mean difference by Teichholz (Δ=−0.2±14ml; p=0.89) was slightly smaller than by Doppler (Δ=−3±14ml; p=0.02), but limits of agreement were similar between CMR and both echo methods (Teichholz: −28, 27 ml; Doppler: −31, 24 ml). For Teichholz, differences with CMR SV were greatest among patients with anteroseptal or lateral wall hypokinesis (p<0.05). For Doppler, differences were associated with aortic valve abnormalities or root dilation (p=0.01). SV by both echo methods decreased stepwise in relation to global LV injury as assessed by CMR-quantified LVEF and infarct size (p<0.01). Conclusions Teichholz- and Doppler-calculated SV yield a similar magnitude of agreement with CMR. Teichholz differences with CMR increase with septal or lateral wall contractile dysfunction, whereas Doppler yields increased offsets in patients with aortic remodeling. PMID:23488864
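    Both echo approaches rest on standard formulas: the Teichholz volume V = 7D³/(2.4 + D) applied to end-diastolic and end-systolic dimensions, and Doppler SV = LVOT cross-sectional area × velocity-time integral. The dimensions below are hypothetical but chosen in a physiological range.

```python
import math

def teichholz_volume(d_cm):
    """Teichholz LV volume (ml) from a single linear dimension (cm)."""
    return 7.0 / (2.4 + d_cm) * d_cm**3

def teichholz_sv(lvedd_cm, lvesd_cm):
    """Stroke volume = end-diastolic minus end-systolic Teichholz volume."""
    return teichholz_volume(lvedd_cm) - teichholz_volume(lvesd_cm)

def doppler_sv(lvot_diam_cm, vti_cm):
    """Doppler SV = LVOT cross-sectional area x velocity-time integral."""
    return math.pi * (lvot_diam_cm / 2) ** 2 * vti_cm

print(teichholz_sv(5.0, 3.2))  # ml, hypothetical LV dimensions
print(doppler_sv(2.1, 22.0))   # ml, hypothetical LVOT diameter and VTI
```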

  1. Simultaneous determination of rabeprazole sodium and itopride hydrochloride in capsule dosage form by spectrophotometry.

    PubMed

    Sabnis, Shweta S; Gandhi, Santosh V; Madgulkar, A R; Bothara, K G

    Three methods viz. Absorbance Ratio Method (I), Dual Wavelength Method (II) and First Order Derivative Spectroscopic Method (III) for simultaneous estimation of Rabeprazole sodium and Itopride hydrochloride have been developed. The drugs obey Beer's law in the concentration range 2-20 microg/ml for RAB and 5-75 microg/ml for ITO. The results of analysis of drugs have been validated statistically and by recovery studies.
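    Simultaneous two-component spectrophotometric estimation of this kind reduces to solving two Beer's-law equations at two wavelengths (the classical Vierordt approach); the absorptivities and mixture absorbances below are hypothetical, not measured values for RAB and ITO.

```python
import numpy as np

# Hypothetical absorptivities (AU per ug/ml) of RAB and ITO at two
# wavelengths, and the measured absorbances of a capsule extract
E = np.array([[0.045, 0.012],   # wavelength 1: [RAB, ITO]
              [0.008, 0.030]])  # wavelength 2: [RAB, ITO]
A = np.array([1.05, 1.58])

c_rab, c_ito = np.linalg.solve(E, A)  # concentrations in ug/ml
print(c_rab, c_ito)
```

The same linear system generalizes to first-derivative spectra by replacing absorbances with derivative amplitudes.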

  2. Performance characteristics of two bioassays and high-performance liquid chromatography for determination of flucytosine in serum.

    PubMed Central

    St-Germain, G; Lapierre, S; Tessier, D

    1989-01-01

    We compared the accuracy and precision of two microbiological methods and one high-pressure liquid chromatography (HPLC) procedure used to measure the concentrations of flucytosine in serum. On the basis of an analysis of six standards, all methods were judged reliable within acceptable limits for clinical use. With the biological methods, a slight loss of linearity was observed in the 75- to 100-micrograms/ml range. Compared with the bioassays, the HPLC method did not present linearity problems and was more precise and accurate in the critical zone of 100 micrograms/ml. On average, results obtained with patient sera containing 50 to 100 micrograms of flucytosine per ml were 10.6% higher with the HPLC method than with the bioassays. Standards for the biological assays may be prepared in serum or water. PMID:2802566

  3. Determination of coumarin, vanillin, and ethyl vanillin in vanilla extract products: liquid chromatography mass spectrometry method development and validation studies.

    PubMed

    de Jager, Lowri S; Perfetti, Gracia A; Diachenko, Gregory W

    2007-03-23

    An LC-MS method was developed for the determination of coumarin, vanillin, and ethyl vanillin in vanilla products. Samples were analyzed using LC-electrospray ionization (ESI)-MS in the positive ionization mode. Limits of detection for the method ranged from 0.051 to 0.073 microg mL(-1). Using the optimized method, 24 vanilla products were analyzed. All samples tested negative for coumarin. Concentrations ranged from 0.38 to 8.59 mg mL(-1) (mean 3.73) for vanillin and 0.33 to 2.27 mg mL(-1) (mean 1.03) for ethyl vanillin. The measured concentrations are compared to values calculated using UV monitoring and to results reported in a similar survey in 1988. Analytical results, method precision, and accuracy data are presented.

  4. Isoflurane waste anesthetic gas concentrations associated with the open-drop method.

    PubMed

    Taylor, Douglas K; Mook, Deborah M

    2009-01-01

    The open-drop technique is used frequently for anesthetic delivery to small rodents. Operator exposure to waste anesthetic gas (WAG) is a potential occupational hazard if this method is used without WAG scavenging. This study was conducted to determine whether administration of isoflurane by the open-drop technique without exposure controls generates significant WAG concentrations. We placed 0.1, 0.2, or 0.3 ml of liquid isoflurane into screw-top 500 or 1000 ml glass jars. WAG concentration was measured at the opening of the container and at 20 and 40 cm from the opening, distances at which users likely would operate, at 1, 2, or 3 min. WAG was measured by using a portable infrared gas analyzer. Mean WAG concentrations at the vessel opening were as high as 662 +/- 168 ppm with a 500 ml jar and 122 +/- 87 ppm with a 1000 ml jar. At operator levels, WAG concentrations were always at or near 0 ppm. For measurements made at the vessel opening, time was the only factor that significantly affected WAG concentration when using the 500 ml jar. Neither time nor liquid volume was a significant factor when using the 1000 ml jar. At all liquid volumes and time points, the WAG concentration associated with using the 500 ml container was marginally to significantly greater than that for the 1000 ml jar.

  5. PeakML/mzMatch: a file format, Java library, R library, and tool-chain for mass spectrometry data analysis.

    PubMed

    Scheltema, Richard A; Jankevics, Andris; Jansen, Ritsert C; Swertz, Morris A; Breitling, Rainer

    2011-04-01

    The recent proliferation of high-resolution mass spectrometers has generated a wealth of new data analysis methods. However, flexible integration of these methods into configurations best suited to the research question is hampered by heterogeneous file formats and monolithic software development. The mzXML, mzData, and mzML file formats have enabled uniform access to unprocessed raw data. In this paper we present our efforts to produce an equally simple and powerful format, PeakML, to uniformly exchange processed intermediary and result data. To demonstrate the versatility of PeakML, we have developed an open source Java toolkit for processing, filtering, and annotating mass spectra in a customizable pipeline (mzMatch), as well as a user-friendly data visualization environment (PeakML Viewer). The PeakML format in particular enables the flexible exchange of processed data between software created by different groups or companies, as we illustrate by providing a PeakML-based integration of the widely used XCMS package with mzMatch data processing tools. As an added advantage, downstream analysis can benefit from direct access to the full mass trace information underlying summarized mass spectrometry results, providing the user with the means to rapidly verify results. The PeakML/mzMatch software is freely available at http://mzmatch.sourceforge.net, with documentation, tutorials, and a community forum.

  6. Fully automated analytical procedure for propofol determination by sequential injection technique with spectrophotometric and fluorimetric detections.

    PubMed

    Šrámková, Ivana; Amorim, Célia G; Sklenářová, Hana; Montenegro, Maria C B M; Horstkotte, Burkhard; Araújo, Alberto N; Solich, Petr

    2014-01-01

    In this work, an application of an enzymatic reaction for the determination of the highly hydrophobic drug propofol in an emulsion dosage form is presented. Emulsions represent a complex and therefore challenging matrix for analysis. Ethanol was used for breakage of the lipid emulsion, which enabled optical detection. A fully automated method based on Sequential Injection Analysis was developed, allowing propofol determination without the requirement of tedious sample pre-treatment. The method was based on spectrophotometric detection after enzymatic oxidation catalysed by horseradish peroxidase and subsequent coupling with 4-aminoantipyrine, leading to a coloured product with an absorbance maximum at 485 nm. This procedure was compared with a simple fluorimetric method based on the direct selective fluorescence emission of propofol in ethanol at 347 nm. Both methods provide comparable validation parameters, with linear working ranges of 0.005-0.100 mg mL(-1) and 0.004-0.243 mg mL(-1) for the spectrophotometric and fluorimetric methods, respectively. The detection and quantitation limits achieved with the spectrophotometric method were 0.0016 and 0.0053 mg mL(-1), respectively. The fluorimetric method provided a detection limit of 0.0013 mg mL(-1) and a limit of quantitation of 0.0043 mg mL(-1). The RSD did not exceed 5% and 2% (n=10), respectively. A sample throughput of approx. 14 h(-1) for the spectrophotometric and 68 h(-1) for the fluorimetric detection was achieved. Both methods proved to be suitable for the determination of propofol in a pharmaceutical formulation, with average recovery values of 98.1 and 98.5%. © 2013 Elsevier B.V. All rights reserved.

  7. Optimal effect-site concentration of remifentanil when combined with dexmedetomidine in patients undergoing cystoscopy

    PubMed Central

    Heo, Bongha; Kim, Minsun; Lee, Hyunjung; Park, Sanghee

    2014-01-01

    Background Cystoscopic procedures are very common practice in the field of urology due to their ability to survey the bladder for a variety of indications. However, patients who undergo cystoscopy feel intense pain and discomfort. This study investigated the half maximal effective concentration (EC50) of remifentanil in preventing cystoscope insertion pain under sedation using dexmedetomidine. Methods The study was prospectively conducted on 18 male patients, aged 18 to 65. Remifentanil infusion was initiated together with dexmedetomidine, starting at a dose of 2.4 ng/ml in the first patient. The effect-site concentration (Ce) of remifentanil for each subsequent patient was determined by the previous patient's response using Dixon's up-and-down method with an interval of 0.3 ng/ml. Patients received a loading dose of 1.0 µg/kg dexmedetomidine over 10 minutes, followed by a maintenance dose of 0.6 µg/kg/hr. After the patient's OAA/S score (Observer's Assessment of Alertness/Sedation scale) reached 3-4 and the Ce of remifentanil reached the target concentration, the urologist was allowed to insert the cystoscope and the pain responses were observed. Results The effect-site concentration of remifentanil required to prevent cystoscope insertion pain in 50% of patients under sedation using dexmedetomidine was 1.30 ± 0.12 ng/ml by Dixon's up-and-down method. The logistic regression curve of the probability of response showed that the EC50 and EC95 values (95% confidence limits) of remifentanil were 1.33 ng/ml (1.12-1.52 ng/ml) and 1.58 ng/ml (1.44-2.48 ng/ml), respectively. Conclusions The cystoscopic procedure can be carried out successfully without pain or adverse effects at an optimal remifentanil effect-site concentration (EC50, 1.33 ng/ml; EC95, 1.58 ng/ml) combined with sedation using dexmedetomidine. PMID:24567812
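    Dixon's up-and-down allocation steps the next patient's concentration down after a success and up after a failure; the sketch below uses the study's starting dose (2.4 ng/ml) and step (0.3 ng/ml) but entirely invented responses, and its crossover-midpoint estimator is a simplification of Dixon's formal EC50 estimator.

```python
def up_and_down(start, step, responses):
    """Dixon's up-and-down sequence: after a success (True = no pain)
    the next concentration goes down one step, after a failure it goes
    up one step. Returns the concentration given to each patient."""
    conc = [start]
    for ok in responses:
        conc.append(conc[-1] - step if ok else conc[-1] + step)
    return conc

def ec50_from_crossovers(conc, responses):
    """Rough EC50: mean of midpoints of success/failure crossover pairs."""
    mids = [(conc[i] + conc[i + 1]) / 2
            for i in range(len(responses) - 1)
            if responses[i] != responses[i + 1]]
    return sum(mids) / len(mids)

# Hypothetical outcomes: True = no pain on cystoscope insertion
resp = [False, False, True, False, True, True, False, True]
seq = up_and_down(2.4, 0.3, resp)
print(seq)
print(ec50_from_crossovers(seq, resp))
```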

  8. Determination of indole-3-acetic acid and indole-3-butyric acid in mung bean sprouts using high performance liquid chromatography with immobilized Ru(bpy)3(2+)-KMnO4 chemiluminescence detection.

    PubMed

    Xi, Zhijun; Zhang, Zhujun; Sun, Yonghua; Shi, Zuolong; Tian, Wei

    2009-07-15

    A novel method for determination of indole-3-acetic acid (IAA) and indole-3-butyric acid (IBA) in an extract from mung bean sprouts using high performance liquid chromatography (HPLC) with chemiluminescence (CL) detection is described. The method is based on the CL reaction of auxin (indole-3-acetic acid and indole-3-butyric acid) with acidic potassium permanganate (KMnO(4)) and tris(2,2'-bipyridyl)ruthenium(II), which was immobilized on the cationic ion-exchange resin. The chromatographic separation was performed on a Nucleosil RP-C18 column (250 mm x 4.6 mm i.d., particle size: 5 microm, pore size: 100) with an isocratic mobile phase consisting of methanol-water-acetic acid (45:55:1, v/v/v). At a flow rate of 1.0 mL min(-1), the total run time was 20 min. Under the optimal conditions, the linear ranges were 5.0x10(-8) to 5.0x10(-6) g mL(-1) and 5.0x10(-7) to 1.0x10(-5) g mL(-1) for IAA and IBA, respectively. The detection limits were 2.0x10(-8) g mL(-1) and 2.0x10(-7) g mL(-1) for IAA and IBA, respectively. The relative standard deviations (RSD) of intra-day precision were 3.1% and 2.3% (n=11) for 2x10(-6) g mL(-1) IAA and 2x10(-6) g mL(-1) IBA. The relative standard deviations of inter-day precision were 6.9% and 4.9% for 2x10(-6) g mL(-1) IAA and 2x10(-6) g mL(-1) IBA. The proposed method was successfully applied to the determination of auxin in mung bean sprouts.
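
Calibration workflows like the one above reduce to a least-squares line plus a 3-sigma detection limit. A minimal sketch, with hypothetical concentration/peak-area data and an assumed blank standard deviation (not the paper's values):

```python
# Minimal calibration sketch: ordinary least-squares line plus an IUPAC-style
# 3-sigma detection limit. Concentrations, areas and the blank SD below are
# hypothetical, not the paper's data.

def linfit(x, y):
    """Slope and intercept of an ordinary least-squares line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

conc = [0.5, 1.0, 2.0, 5.0, 10.0]        # concentration, arbitrary units
area = [12.1, 24.3, 47.8, 121.0, 239.5]  # CL peak area (hypothetical)
slope, intercept = linfit(conc, area)

sigma_blank = 0.16               # assumed SD of the blank signal
lod = 3 * sigma_blank / slope    # 3-sigma detection limit
```

The same fit also supplies the working linear range: points that deviate systematically from the line mark where the calibration stops being usable.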

  9. Simultaneous liquid chromatographic determination of metals and organic compounds in pharmaceutical and food-supplement formulations using evaporative light scattering detection.

    PubMed

    Spacil, Zdenek; Folbrova, Jana; Megoulas, Nikolaos; Solich, Petr; Koupparis, Michael

    2007-02-05

    A novel method for the non-derivatization liquid chromatographic determination of metals (potassium, aluminium, calcium and magnesium) and organic compounds (ascorbate and aspartate) was developed and validated based on evaporative light scattering detection (ELSD). Separation of calcium, magnesium and aluminium was achieved by the cation exchange column Dionex CS-14 and an aqueous TFA mobile phase according to the following time program: 0-6 min TFA 0.96 mL L(-1), 6-7 min linear gradient from TFA 0.96-6.4 mL L(-1). Separation of potassium, magnesium and aspartate was achieved by the lipophilic C18 Waters Spherisorb column and an isocratic aqueous 0.2 mL L(-1) TFA mobile phase. Separation of sodium, magnesium, ascorbate and citrate was also achieved by the C18 analytical column, according to the following elution program: 0-2.5 min aqueous nonafluoropentanoic acid (NFPA) 0.5 mL L(-1); 2.5-3.5 min linear gradient from 0.5 mL L(-1) NFPA to 1.0 mL L(-1) TFA. In all cases, the evaporation temperature was 70 degrees C, the pressure of the nebulizing gas (nitrogen) 3.5 bar, the gain 11 and the flow rate 1.0 mL min(-1). Resolution between calcium and magnesium was 1.8, while for all other separations it was > or = 3.2. Double logarithmic calibration curves were obtained within various ranges from 3-24 to 34-132 microg mL(-1), and with good correlation (r>0.996). The asymmetry factor ranged from 0.9 to 1.9 and the limit of detection from 1.3 (magnesium) to 17 microg mL(-1) (ascorbate). The developed method was applied for the assay of potassium, magnesium, calcium, aluminium, aspartate and ascorbate in pharmaceuticals and food-supplements. The accuracy of the method was evaluated using spiked samples (%recovery 95-105%, %R.S.D. < 2) and the absence of constant or proportional errors was confirmed by dilution experiments.
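
ELSD response is inherently nonlinear (approximately A = a·m^b), which is why the abstract reports double logarithmic calibration curves: the fit is linear in log-log coordinates. A sketch of fitting in log space and inverting the fit to quantify an unknown, using hypothetical data:

```python
import math

# ELSD response follows roughly A = a * m**b, so calibration is linear in
# log-log coordinates: log A = log a + b * log m. Data below are hypothetical.
mass = [3, 6, 12, 24]        # analyte amount, microg/mL
area = [40, 150, 560, 2100]  # ELSD peak area (hypothetical)

lx = [math.log10(m) for m in mass]
ly = [math.log10(a) for a in area]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
     / sum((x - mx) ** 2 for x in lx))
log_a = my - b * mx

def mass_from_area(A):
    """Invert the log-log calibration line to quantify an unknown peak."""
    return 10 ** ((math.log10(A) - log_a) / b)
```

An exponent b near 2, as in this synthetic data, is typical of ELSD; a correlation coefficient is then computed on the log-transformed points, matching the r > 0.996 quoted above.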

  10. Spectrophotometric study of the reaction mechanism between DDQ as π-acceptor and potassium iodate and flucloxacillin and dicloxacillin drugs and their determination in pure and in dosage forms

    NASA Astrophysics Data System (ADS)

    Mohamed, Gehad G.; Nour El-Dien, F. A.; Farag, Eman U.

    2006-09-01

    Two simple and accurate spectrophotometric methods are presented for the determination of β-lactam drugs, flucloxacillin (Fluclox) and dicloxacillin (Diclox), in pure form and in different pharmaceutical preparations. The charge transfer (CT) reactions between Fluclox and Diclox as electron donors and 2,3-dichloro-5,6-dicyano-p-benzoquinone (DDQ) as π-acceptor, and the oxidation-reduction reaction with potassium iodate, in which highly coloured complex species or liberated iodine are formed, have been studied spectrophotometrically. The optimum experimental conditions have been studied carefully. Beer's law is obeyed over the concentration ranges of 2-450 μg ml-1 for Fluclox and 10-450 μg ml-1 for Diclox using the DDQ reagent, and 50-550 μg ml-1 for Fluclox and 50-560 μg ml-1 for Diclox using the iodate method, respectively. For more accurate results, the Ringbom optimum concentration ranges were calculated and found to be 6-450 and 15-450 μg ml-1 for Fluclox and Diclox using DDQ, respectively, and 65-550 and 63-560 μg ml-1 for Fluclox and Diclox using iodine, respectively. The Sandell sensitivity is found to be 0.018 and 0.011 μg cm-2 for the DDQ method and 0.013 and 0.011 μg cm-2 for the iodate method for Fluclox and Diclox, respectively, which indicates the high sensitivity of both methods. Standard deviations (S.D. = 0.01-0.80 and 0.07-0.98) and relative standard deviations (R.S.D. = 0.13-0.44 and 0.11-0.82%) (n = 5) for the DDQ and iodate methods, respectively, reflect the high accuracy and precision of the proposed methods. These results are also confirmed by between-day precision of percent recovery of 99.87-100.2 and 99.90-100% for Fluclox and Diclox by the DDQ method and 99.88-100.1 and 99.30-100.2% for Fluclox and Diclox by the iodate method, respectively. These data are comparable to those obtained by the British and American pharmacopoeia assays for the determination of Fluclox and Diclox in raw materials and in pharmaceutical preparations.

  12. Intellicount: High-Throughput Quantification of Fluorescent Synaptic Protein Puncta by Machine Learning

    PubMed Central

    Fantuzzo, J. A.; Mirabella, V. R.; Zahn, J. D.

    2017-01-01

    Abstract Synapse formation analyses can be performed by imaging and quantifying fluorescent signals of synaptic markers. Traditionally, these analyses are done using simple or multiple thresholding and segmentation approaches or by labor-intensive manual analysis by a human observer. Here, we describe Intellicount, a high-throughput, fully-automated synapse quantification program which applies a novel machine learning (ML)-based image processing algorithm to systematically improve region of interest (ROI) identification over simple thresholding techniques. Through processing large datasets from both human and mouse neurons, we demonstrate that this approach allows image processing to proceed independently of carefully set thresholds, thus reducing the need for human intervention. As a result, this method can efficiently and accurately process large image datasets with minimal interaction by the experimenter, making it less prone to bias and less liable to human error. Furthermore, Intellicount is integrated into an intuitive graphical user interface (GUI) that provides a set of valuable features, including automated and multifunctional figure generation, routine statistical analyses, and the ability to run full datasets through nested folders, greatly expediting the data analysis process. PMID:29218324

  13. Developing Modular and Adaptable Courseware Using TeachML.

    ERIC Educational Resources Information Center

    Wehner, Frank; Lorz, Alexander

    This paper presents the use of an XML grammar for two complementary projects--CHAMELEON (Cooperative Hypermedia Adaptive MultimEdia Learning Objects) and EIT (Enabling Informal Teamwork). Areas of applications are modular courseware documents and the collaborative authoring process of didactical units. A number of requirements for a suitable…

  14. Integration Framework for Heterogeneous Analysis Components: Building a Context Aware Virtual Analyst

    DTIC Science & Technology

    2014-11-01

    understands commands) modes are supported. By default, Julius comes with Japanese language support. English acoustic and language models are…

  15. Determination of thorium (IV) using isophthalaldehyde-tetrapyrrole as probe by resonance light scattering, second-order scattering and frequency-doubling scattering spectra

    NASA Astrophysics Data System (ADS)

    Wang, Jiao; Xue, Jinhua; Xiao, Xilin; Xu, Li; Jiang, Min; Peng, Pengcheng; Liao, Lifu

    2017-12-01

    The coordination reaction of thorium (IV) with a ditopic bidentate ligand to form supramolecular polymer was studied by resonance light scattering (RLS) spectra, second-order scattering (SOS) spectra and frequency-doubling scattering (FDS) spectra, respectively. The ditopic bidentate ligand is isophthalaldehyde-tetrapyrrole (IPTP). It was synthesized through a condensation reaction of isophthalaldehyde with pyrrole. The formation of supramolecular polymer results in remarkable intensity enhancements of the three light scattering signals. The maximum scattering wavelengths of RLS, FDS and SOS were 290, 568 and 340 nm, respectively. The reaction was used to establish new light scattering methods for the determination of thorium (IV) by using IPTP as probe. Under optimum conditions, the intensity enhancements of RLS, SOS and FDS were directly proportional to the concentration of thorium (IV) in the ranges of 0.01 to 1.2 μg mL-1, 0.05 to 1.2 μg mL-1 and 0.05 to 1.2 μg mL-1, respectively. The detection limits were 0.003 μg mL-1, 0.012 μg mL-1 and 0.021 μg mL-1, respectively. The methods were suitable for analyzing thorium (IV) in actual samples. The results show acceptable recoveries and precision compared with a reference method.

  16. [Determination of tungsten and cobalt in the air of workplace by ICP-OES].

    PubMed

    Zhang, J; Ding, C G; Li, H B; Song, S; Yan, H F

    2017-08-20

    Objective: To establish an inductively coupled plasma optical emission spectrometry (ICP-OES) method for the determination of cobalt and tungsten in the air of the workplace. Methods: Cobalt and tungsten were collected on a filter membrane and then digested with nitric acid; ICP-OES was used for the detection of cobalt and tungsten. Results: The linearity of tungsten was good over the range of 0.01-1000 μg/ml with a correlation coefficient of 0.9999; the LOD and LOQ were 0.0067 μg/ml and 0.022 μg/ml, respectively. The recovery ranged from 98% to 101%, and the RSDs of intra- and inter-batch precision were 1.1%-3.0% and 2.1%-3.8%, respectively. The linearity of cobalt was good over the range of 0.01-100 μg/ml with a correlation coefficient of 0.9999; the LOD and LOQ were 0.0012 μg/ml and 0.044 μg/ml, respectively. The recovery ranged from 95% to 97%, and the RSDs of intra- and inter-batch precision were 1.1%-2.4% and 1.1%-2.9%, respectively. The sampling efficiencies of tungsten and cobalt were higher than 94%. Conclusion: The linear range, sensitivity and precision of the method make it suitable for the detection of tungsten and cobalt in the air of the workplace.

  17. Rapid determination of chromium(VI) in electroplating waste water by use of a spectrophotometric flow injection system.

    PubMed

    Yuan, Dong; Fu, Dayou; Wang, Rong; Yuan, Jigang

    2008-11-01

    A new rapid and sensitive FI method is reported for spectrophotometric determination of trace chromium(VI) in electroplating waste water. The method is based on the reaction of Cr(VI) with sodium diphenylamine sulfonate (DPH) in acidic medium to form a purple complex (lambda(max) = 550 nm). Under the optimized conditions, the calibration curve is linear in the range 0.04-3.8 microg ml(-1) at a sampling rate of 30 h(-1). The detection limit of the method is 0.0217 microg ml(-1), and the relative standard deviation is 1.1% for eight determinations of 2 microg ml(-1) Cr(VI). The proposed method was applied to the determination of chromium in electroplating waste water with satisfactory results.

  18. Virtual reality simulator for training on photoselective vaporization of the prostate with 980 nm diode laser and learning curve of the technique.

    PubMed

    Angulo, J C; Arance, I; García-Tello, A; Las Heras, M M; Andrés, G; Gimbernat, H; Lista, F; Ramón de Fata, F

    2014-09-01

    The utility of a virtual reality simulator for training of the photoselective vaporization of the prostate with diode laser was studied. Two experiments were performed with a simulator (VirtaMed AG, Zürich, Switzerland) with software for specific training in prostate vaporization in contact mode with Twister fiber (Biolitec AG, Jena, Germany). Eighteen surgeons performed ablation of the prostate (55 cc) twice and compared the score obtained (190 points efficacy and 80 safety) in the second one of them by experience groups (medical students, residents, specialists). They also performed a spatial orientation test with scores of 0 to 6. Afterwards, six of these surgeons repeated 15 ablations of the prostate (55 and 70 ml). Improvement of the parameters obtained was evaluated to define the learning curve and how experience, spatial orientation skills and type of sequences performed affect them. Global efficacy and safety score was different according to the grade of experience (P=.005). When compared by pairs, specialist-student differences were detected (P=.004), but not specialist-resident (P=.12) or resident-student (P=.2). Regarding efficacy of the procedure, specialist-student (P=.0026) and resident-student (P=.08) differences were detected. The different partial indicators in terms of efficacy were rate of ablation (P=.01), procedure time (P=.03) and amount of unexposed capsule (P=.03). Differences were not observed between groups in safety (P=.5). Regarding the learning curve, the median percentage of the total score exceeded 90% after performing 4 procedures for prostates of 55 ml and 10 procedures for prostate glands of 70 ml. This course was not modified by previous experience (resident-specialist; P=.6). However, it was modified according to the repetition sequence (progressive-random; P=.007). Surgeons whose spatial orientation was less than the median of the group (value 2.5) did not surpass 90% of the score in spite of repetition of the procedure.
Simulation for ablation of the prostate with contact diode laser is a good learning model with discriminative validity, as it correlates the metric results with levels of experience and skills. The sequential repetition of the procedure on growing levels of difficulty favors learning. Copyright © 2014 AEU. Published by Elsevier Espana. All rights reserved.

  19. RAPID AND PRECISE METHOD FOR MEASURING STABLE CARBON ISOTOPE RATIOS OF DISSOLVED INORGANIC CARBON

    EPA Science Inventory

    We describe a method for rapid preparation, concentration and stable isotopic analysis of dissolved inorganic carbon (δ13C-DIC). Liberation of CO2 was accomplished by placing 100 µl phosphoric acid and 0.9 ml water in an evacuated 1.7-ml gas chromatography (GC) injection vial. Fo...

  20. Determination of Lead in Urine by Atomic Absorption Spectrophotometry

    PubMed Central

    Selander, Stig; Cramé, Kim

    1968-01-01

    A method for the determination of lead in urine by means of atomic absorption spectrophotometry (AAS) is described. A combination of wet ashing and extraction with ammonium pyrrolidine dithiocarbamate into isobutylmethylketone was used. The sensitivity was about 0·02 μg./ml. for 1% absorption, and the detection limit was about 0·02 μg./ml. with an instrumental setting convenient for routine analyses of urines. Using the scale expansion technique, the detection limit was below 0·01 μg./ml., but it was found easier to determine urinary lead concentrations below 0·05 μg./ml. by concentrating the lead in the organic solvent by increasing the volume of urine or decreasing that of the solvent. The method was applied to fresh urines, stored urines, and to urines, obtained during treatment with chelating agents, of patients with lead poisoning. Urines with added inorganic lead were not used. The results agreed well with those obtained with a colorimetric dithizone extraction method (r = 0·989). The AAS method is somewhat simpler and allows the determination of smaller lead concentrations. PMID:5647975

  1. Novel Spectrophotometric Method for the Determination of Pindolol in Pharmaceutical Samples

    NASA Astrophysics Data System (ADS)

    Nagaraja, P.; Kumar, H. R. Arun; Bhaskara, B. L.; Kumar, S. Anil

    2011-10-01

    A new facile and sensitive spectrophotometric determination of pindolol (PDL), a beta-blocker drug, has been developed and validated. The method was based on the reaction between pindolol and K3[Fe(CN)6] in the presence of FeCl3 to form Prussian blue. The absorbance values were recorded at 700 nm and a calibration graph was constructed. Beer's law was obeyed over the range 0.125-2.5 μg mL-1 with a detection limit of 0.03 μg mL-1 and a quantitation limit of 0.08 μg mL-1. Various experimental parameters such as the effect of solvents, stability, and interference from excipients were studied. The reproducibility of the method was checked by six replicate determinations at 1.0 μg ml-1 PDL, and the standard deviation was found to be between 0.20 and 0.42%. The results were statistically compared with those of the reference/literature method by applying Student's t-test and F-test. The sensitivity, simplicity, temperature independence and stability of the colored product are the advantages of the proposed method, which is also free from extraction steps and the use of carcinogenic solvents.
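
The Student's t-test and F-test used above to compare a proposed method against a reference reduce to a variance ratio (precision) and a pooled-variance t statistic (accuracy). A sketch with hypothetical recovery data, not the paper's measurements:

```python
import math

# Method-comparison statistics: F-test on variances (precision) and a
# pooled two-sample t-test on means (accuracy). Recovery data are hypothetical.

def f_and_t(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    F = max(va, vb) / min(va, vb)                   # variance ratio
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    t = abs(ma - mb) / (sp * math.sqrt(1 / na + 1 / nb))
    return F, t

proposed  = [99.8, 100.2, 99.6, 100.1, 99.9, 100.3]   # % recovery
reference = [100.0, 99.7, 100.4, 99.8, 100.2, 100.1]
F, t = f_and_t(proposed, reference)
# Compare against tabulated critical values, e.g. F(5,5) = 5.05 and
# t(10, 0.05) = 2.228; smaller computed values mean no significant
# difference in precision or accuracy between the two methods.
```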

  2. Spectrophotometric method for simultaneous determination of valsartan and substances from the group of statins in binary mixtures.

    PubMed

    Stolarczyk, Mariusz; Apola, Anna; Maślanka, Anna; Kwiecień, Anna; Opoka, Włodzimierz

    2017-12-20

    Applicability of derivative spectrophotometry for the determination of valsartan in the presence of a substance from the group of statins was checked. The obtained results indicate that the proposed method may be effective using the appropriate derivatives: for valsartan and fluvastatin - D1, D2 and D3; for valsartan and pravastatin - D1 and D3; for valsartan and atorvastatin - D2 and D3. The method was characterized by high sensitivity and accuracy. Linearity was maintained in the following ranges: 9.28-32.48 mg mL-1 for valsartan, 8.16-28.56 mg mL-1 for fluvastatin, 14.40-39.90 mg mL-1 for atorvastatin and 9.60-48.00 mg mL-1 for pravastatin. Determination coefficients were in the range of 0.989-0.999 depending on the analyte and the order of derivative. The precision of the method was high, with RSD from 0.1 to 2.5% and recovery of individual components within the range of 100 ± 5%. The developed method was successfully applied to the determination of valsartan combined with fluvastatin, atorvastatin and pravastatin in laboratory-prepared mixtures and in pharmaceutical preparations.
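
Derivative spectrophotometry (the D1-D3 spectra above) differentiates the absorbance spectrum with respect to wavelength, which sharpens overlapping bands; a zero-crossing of one component's derivative marks a wavelength where the other component can be read free of interference. A finite-difference sketch on a synthetic Gaussian band (hypothetical spectrum, not the paper's data):

```python
import math

# Finite-difference sketch of derivative spectrophotometry on a synthetic
# Gaussian absorption band centred at 250 nm (hypothetical spectrum).

def derivative(values, step):
    """Central-difference derivative of equally spaced data; the result
    is shorter by two points (one lost at each end)."""
    return [(values[i + 1] - values[i - 1]) / (2 * step)
            for i in range(1, len(values) - 1)]

wl = [200 + i for i in range(101)]                       # 200-300 nm, 1 nm step
absorb = [math.exp(-((w - 250) / 15) ** 2) for w in wl]  # synthetic band
d1 = derivative(absorb, 1)   # D1 spectrum
d2 = derivative(d1, 1)       # D2 spectrum, and so on for D3
# The D1 spectrum crosses zero exactly at the band maximum; such
# zero-crossing wavelengths are where an overlapping second component
# can be quantified without interference from the first.
```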

  3. Extraction and Determination of Cyproheptadine in Human Urine by DLLME-HPLC Method

    PubMed Central

    Maham, Mehdi; Kiarostami, Vahid; Waqif-Husain, Syed; Abroomand-Azar, Parviz; Tehrani, Mohammad Saber; Khoeini Sharifabadi, Malihe; Afrouzi, Hossein; Shapouri, MahmoudReza; Karami-Osboo, Rouhollah

    2013-01-01

    A novel dispersive liquid-liquid microextraction (DLLME) method, coupled with high performance liquid chromatography with photodiode array detection (HPLC-DAD), has been applied for the extraction and determination of cyproheptadine (CPH), an antihistamine, in human urine samples. In this method, 0.6 mL of acetonitrile (disperser solvent) containing 30 μL of carbon tetrachloride (extraction solvent) was rapidly injected by a syringe into a 5 mL urine sample. After centrifugation, the sedimented phase containing the enriched analyte was dissolved in acetonitrile and an aliquot of this solution was injected into the HPLC system for analysis. Development of the DLLME procedure included optimization of important parameters such as the kind and volume of the extraction and disperser solvents, pH and salt addition. The proposed method has good linearity in the range of 0.02-4.5 μg mL-1 and a low detection limit (13.1 ng mL-1). The repeatability of the method, expressed as relative standard deviation, was 4.9% (n = 3). This method has also been applied to the analysis of real urine samples with satisfactory relative recoveries in the range of 91.6-101.0%. PMID:24250605
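
A key figure of merit in DLLME is the enrichment factor: the analyte is transferred from a large sample volume into a few tens of microlitres of sedimented solvent. A back-of-the-envelope sketch using the volumes quoted above (5 mL sample, 30 µL extraction solvent) and an assumed recovery:

```python
# Enrichment achievable in DLLME: analyte collected from a large aqueous
# sample into a small sedimented organic phase. Volumes follow the abstract
# (5 mL urine, 30 microlitres CCl4); the recovery value is an assumption.

sample_volume_ml = 5.0
sediment_volume_ml = 0.030     # 30 microlitres of extraction solvent
extraction_recovery = 0.92     # hypothetical fractional recovery

# ratio of analyte concentration in the sediment to that in the sample
enrichment_factor = extraction_recovery * sample_volume_ml / sediment_volume_ml
```

This volume ratio is what lets a DLLME-HPLC method reach ng/mL detection limits from a simple UV-region detector.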

  4. Effect of aqueous and ethanolic extracts of Lippia citriodora on Candida albicans

    PubMed Central

    Ghasempour, Maryam; Omran, Saeid Mahdavi; Moghadamnia, Ali Akbar; Shafiee, Faranak

    2016-01-01

    Introduction: Because of resistance to and side effects of common antifungal drugs, research on herbal substances with antifungal activity is frequent. Lemon verbena (Lippia citriodora) is a member of the Verbenaceae family. The aim of this study was to determine the anti-Candida activities of the ethanolic and aqueous extracts of lemon verbena leaves and compare them with nystatin and fluconazole. Methods: In this 2015 study, 15 clinical isolates and a standard strain of Candida albicans PTCC 5027 were used, and the inhibitory effects of the ethanolic and aqueous extracts, nystatin and fluconazole were evaluated using disk and well diffusion methods. Also, the minimal inhibitory concentration (MIC) was determined. Five concentrations of aqueous and ethanolic extracts (156–2500 μg/ml), nystatin (8–128 μg/ml) and fluconazole (4–64 μg/ml) were used in the disk and well diffusion methods, and nine concentrations of aqueous and ethanolic extracts (19–5000 μg/ml), nystatin (0.5–128 μg/ml), and fluconazole (0.25–64 μg/ml) were applied for MIC. Data were analyzed using Tukey's post-hoc and one-way ANOVA tests. The significance level was set at p < 0.05. Results: In the well and disk diffusion techniques, limited growth inhibition halos were produced around some clinical isolates at different concentrations of the ethanolic extract; however, no growth inhibition halo was observed with any concentration of the aqueous extract. The MIC values of the ethanolic extract, aqueous extract, nystatin and fluconazole for the clinical isolates and the standard strain were 833 ± 78.5 and 625 μg/ml; 4156 ± 67.4 and 2500 μg/ml; 10.13 ± 1.91 and 4 μg/ml; and 1.97 ± 0.25 and 1 μg/ml, respectively. Conclusion: The results showed that the ethanolic extract was stronger than the aqueous extract of this plant and may serve as an alternative to conventional antifungal drugs.
It is recommended that the ethanolic extract of this plant be investigated in vivo for better evaluation of its efficacy and properties. PMID:27757185

  5. [Quantitative determination of biogenic amine from Biomphalaria glabrata nervous system by UPLC MS/MS].

    PubMed

    Tao, Huang; Yun-Hai, Guo; He-Xiang, Liu; Yi, Zhang

    2018-04-19

    To establish a method for the quantitative determination of serotonin and dopamine in the nervous system of Biomphalaria glabrata using ultra high performance liquid chromatography-tandem quadrupole mass spectrometry (UPLC MS/MS), the B. glabrata nervous system was dissected under a microscope and homogenized in pure methanol. The supernatant containing the target analytes was obtained after two rounds of high-speed centrifugation. The extract was separated on an ACQUITY UPLC BEH Amide column and detected with a Waters TQ-XS mass spectrometer using an ESI source in positive ionization mode. The detection limit of serotonin was 0.03 ng/ml and the limit of quantification was 0.1 ng/ml. The detection limit of dopamine was 0.05 ng/ml and the limit of quantification was 0.15 ng/ml. The recoveries of serotonin ranged from 90.68% to 94.72% over the range of 1 to 40 ng/ml. The recoveries of dopamine ranged from 91.68% to 96.12% over the range of 1.0 to 40 ng/ml. The established UPLC MS/MS method is simple, stable and reproducible. It can be used for the quantitative analysis of serotonin and dopamine in the nervous system of B. glabrata snails.

  6. In vitro anti-mycobacterial activities of three species of Cola plant extracts (Sterculiaceae).

    PubMed

    Adeniyi, B A; Groves, M J; Gangadharam, P R J

    2004-05-01

    Extracts obtained from three Nigerian Sterculiaceae plants, Cola accuminata, C. nitida and C. milleni, were screened for anti-mycobacterial properties using a slow growing Mycobacterium bovis ATCC 35738 (designated BCG Mexican and known to have some virulence in mouse and guinea pig) at 1000 microg/ml using the radiometric (BACTEC) method. The extracts were also tested against six fast growing ATCC strains of M. vaccae using the broth microdilution method. The methanol extracts from the leaves, stem bark and root bark of Cola accuminata and from the leaves and stem bark of C. nitida and C. milleni were not active at the highest concentration of 1000 microg/ml. Only the methanol extracts of the root bark of both C. nitida and C. milleni were found to be potent against both M. bovis and strains of M. vaccae. The minimum inhibitory concentration (MIC) of C. nitida against M. bovis was 125 microg/ml, while the MIC of C. milleni against M. bovis was 62.5 microg/ml after at least 6 days of inhibition, with growth index (GI) units less than or equal to the change in GI units of a control vial inoculated with 1/100 of the BACTEC inoculum. The minimum inhibitory concentration of C. milleni against the six ATCC strains of M. vaccae ranged from 62.5 microg/ml to 250 microg/ml, while that of C. nitida ranged from 500 microg/ml to above 1000 microg/ml. Evidently, C. milleni has the highest inhibitory activity against both M. bovis and the strains of M. vaccae used. Rifampicin, the positive control, had strong activity against M. bovis at the tested concentrations of 5 and 10 microg/ml, and 4 to 8 microg/ml against the six strains of M. vaccae. Copyright 2004 John Wiley & Sons, Ltd.
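
Reading an MIC off a broth microdilution series is a simple scan: the MIC is the lowest concentration with no growth, provided all higher concentrations are also growth-free. A sketch with a two-fold dilution series and hypothetical growth readings:

```python
# Reading an MIC from a two-fold broth microdilution series: the MIC is the
# lowest concentration showing no growth, with all higher concentrations
# also growth-free. Growth readings below are hypothetical.

def mic(concentrations, growth):
    """Return the MIC, or None if even the highest concentration failed
    to inhibit growth."""
    m = None
    for c, grew in sorted(zip(concentrations, growth), reverse=True):
        if grew:          # growth at this level: stop scanning downwards
            break
        m = c
    return m

series = [62.5, 125, 250, 500, 1000]          # microg/ml, two-fold steps
grew   = [True, True, False, False, False]    # hypothetical readings
observed_mic = mic(series, grew)
```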

  7. Development and validation of a sensitive LC-MS/MS method for the determination of fenoterol in human plasma and urine samples.

    PubMed

    Sanghvi, M; Ramamoorthy, A; Strait, J; Wainer, I W; Moaddel, R

    2013-08-15

    Due to the lack of sensitivity in current methods for the determination of fenoterol (Fen), a rapid LC-MS/MS method was developed for the determination of (R,R')-Fen and (R,R';S,S')-Fen in plasma and urine. The method was fully validated and was linear from 50 pg/ml to 2000 pg/ml for plasma and from 2.500 ng/ml to 160 ng/ml for urine, with a lower limit of quantitation of 52.8 pg/ml in plasma. The coefficient of variation was <15% for the high QC standards and <10% for the low QC standards in plasma, and was <15% for the high and low QC standards in urine. The relative concentrations of (R,R')-Fen and (S,S')-Fen were determined using a Chirobiotic T chiral stationary phase. The method was used to determine the concentration of (R,R')-Fen in plasma and urine samples obtained in an oral cross-over study of (R,R')-Fen and (R,R';S,S')-Fen formulations. The results demonstrated a potential pre-systemic enantioselective interaction in which the (S,S')-Fen reduces the sulfation of the active (R,R')-Fen. The data suggest that a non-racemic mixture of the Fen enantiomers may provide better bioavailability of the active (R,R')-Fen for use in the treatment of cardiovascular disease. Published by Elsevier B.V.

  8. Detection of bacteriuria and pyuria by URISCREEN a rapid enzymatic screening test.

    PubMed Central

    Pezzlo, M T; Amsterdam, D; Anhalt, J P; Lawrence, T; Stratton, N J; Vetter, E A; Peterson, E M; de la Maza, L M

    1992-01-01

    A multicenter study was performed to evaluate the ability of the URISCREEN (Analytab Products, Plainview, N.Y.), a 2-min catalase tube test, to detect bacteriuria and pyuria. This test was compared with the Chemstrip LN (BioDynamics, Division of Boehringer Mannheim Diagnostics, Indianapolis, Ind.), a 2-min enzyme dipstick test; a semiquantitative plate culture method was used as the reference test for bacteriuria, and the Gram stain or a quantitative chamber count method was used as the reference test for pyuria. Each test was evaluated for its ability to detect probable pathogens at greater than or equal to 10(2) CFU/ml and/or greater than or equal to 1 leukocyte per oil immersion field, as determined by the Gram stain method, or greater than 10 leukocytes per microliter, as determined by the quantitative count method. A total of 1,500 urine specimens were included in this evaluation. There were 298 specimens with greater than or equal to 10(2) CFU/ml and 451 specimens with pyuria. Of the 298 specimens with probable pathogens isolated at various colony counts, 219 specimens had colony counts of greater than or equal to 10(5) CFU/ml, 51 specimens had between 10(4) and 10(5) CFU/ml, and 28 specimens had between 10(2) and less than 10(4) CFU/ml. Both the URISCREEN and the Chemstrip LN detected 93% (204 of 219) of the specimens with probable pathogens at greater than or equal to 10(5) CFU/ml. For the specimens with probable pathogens at greater than or equal to 10(2) CFU/ml, the sensitivities of the URISCREEN and the Chemstrip LN were 86% (256 of 298) and 81% (241 of 298), respectively. Of the 451 specimens with pyuria, the URISCREEN detected 88% (398 of 451) and the Chemstrip LN detected 78% (350 of 451). There were 204 specimens with both greater than or equal to 10(2) CFU/ml and pyuria; the sensitivities of both methods were 95% (193 of 204) for these specimens. 
Overall, there were 545 specimens with probable pathogens at greater than or equal to 10(2) CFU/ml and/or pyuria. The URISCREEN detected 85% (461 of 545), and the Chemstrip LN detected 73% (398 of 545). A majority (76%) of the false-negative results obtained with either method were for specimens without leukocytes in the urine. There were 955 specimens with no probable pathogens or leukocytes. Of these, 28% (270 of 955) were found positive by the URISCREEN and 13% (122 of 955) were found positive by the Chemstrip LN. A majority of the false-positive results were probably due, in part, to the detection by each test system of enzymes present in both bacterial and somatic cells. Overall, the URISCREEN is a rapid, manual, easy-to-perform enzymatic test that yields findings similar to those yielded by the Chemstrip LN for specimens with both greater than or equal to 10(2) CFU/ml and pyuria, or for specimens with greater than or equal to 10(5) CFU/ml with or without pyuria. However, when the data were analyzed for either probable pathogens at less than 10(5) CFU/ml or pyuria, the sensitivity of the URISCREEN was higher (P less than 0.05). PMID:1551986
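The sensitivities quoted above are simple proportions: true positives over all specimens that actually have the condition. A minimal sketch, using counts taken directly from the abstract:

```python
# Sensitivity = detected condition-positive specimens / all condition-positive specimens.
def sensitivity(detected, total):
    """Per cent of condition-positive specimens flagged by the test."""
    return 100.0 * detected / total

# URISCREEN vs. Chemstrip LN at >= 10(2) CFU/ml (n = 298), counts from the abstract
uriscreen = sensitivity(256, 298)  # ~86%
chemstrip = sensitivity(241, 298)  # ~81%
print(round(uriscreen), round(chemstrip))
```

The same function reproduces the pyuria figures: 398 of 451 gives 88% and 350 of 451 gives 78%.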

  9. Sample preparation and UHPLC-FD analysis of pteridines in human urine.

    PubMed

    Tomšíková, H; Solich, P; Nováková, L

    2014-07-01

Elevated levels of pteridines can indicate activation of the cellular immune system in certain diseases. No work dealing with the simultaneous determination of urinary neopterin, biopterin and their reduced forms had been published. Therefore, a new SPE-UHPLC-FD method for the analysis of these compounds was developed. The main emphasis was put on the stability of the dihydroforms during sample processing and storage. Dithiothreitol, at various concentrations, was tested as a stabilizing agent, along with various pH values (3.8-9.8) of the working solutions. Chromatographic separation was performed under HILIC isocratic conditions on a BEH Amide column. The method was linear for the calibration standard solutions in the range of 10-10,000 ng/ml (dihydroforms) and 0.5-1000 ng/ml (oxidized forms), and for real samples in the range of 25-1000 ng/ml (dihydroforms) and 1-100 ng/ml (oxidized forms). The development of a new SPE sample preparation method was carried out on different types of sorbents (based on mixed-mode cation exchange, porous graphitic carbon and a polymer comprising hydrophilic and hydrophobic components). Final validation was performed on a MCAX SPE column. Method accuracy ranged from 76.9 to 121.9%. The intra- and inter-day precision did not exceed 10.7%. The method provided sensitivity high enough for use in routine clinical measurements of urine (LLOQ 1 ng/ml for oxidized forms and 25 ng/ml for dihydroforms). Average concentrations of biopterin, neopterin, and dihydrobiopterin found in the urine of healthy persons, expressed relative to creatinine (66.8, 142.3, and 257.3 μmol/mol of creatinine, respectively), corresponded to literature data. The concentration of dihydroneopterin obtained using our method was 98.8 μmol/mol of creatinine. Copyright © 2014 Elsevier B.V. All rights reserved.
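The creatinine-normalized units reported above (μmol of pteridine per mol of creatinine) can be reproduced from raw urine concentrations. A hedged sketch of that conversion: the molar masses are standard textbook values and the example concentrations are invented, not taken from the paper.

```python
# Convert an analyte concentration (ng/ml) into umol per mol of creatinine,
# the normalization used in the abstract. Molar masses are textbook values.
NEOPTERIN_MW = 253.21    # g/mol (assumed, not from the paper)
CREATININE_MW = 113.12   # g/mol (assumed, not from the paper)

def umol_per_mol_creatinine(analyte_ng_ml, analyte_mw, creatinine_mg_ml):
    analyte_umol_l = analyte_ng_ml / analyte_mw          # ng/ml equals ug/l
    creatinine_mol_l = creatinine_mg_ml / CREATININE_MW  # mg/ml equals g/l
    return analyte_umol_l / creatinine_mol_l             # umol per mol

# e.g. 40 ng/ml neopterin in urine containing 1.2 mg/ml creatinine
print(round(umol_per_mol_creatinine(40, NEOPTERIN_MW, 1.2), 1))
```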

  10. Using VS30 to Estimate Station ML Adjustments (dML)

    NASA Astrophysics Data System (ADS)

    Yong, A.; Herrick, J.; Cochran, E. S.; Andrews, J. R.; Yu, E.

    2017-12-01

Currently, new seismic stations added to a regional seismic network cannot be used to calculate local or Richter magnitude (ML) until a revised region-wide amplitude decay function is developed. The new station must record a minimum number of local and regional events that meet specific amplitude requirements prior to re-calibration of the amplitude decay function. Therefore, there can be a significant delay between when a new station starts contributing real-time waveform packets and when its data can be included in magnitude estimation. The station component adjustments (dML; Uhrhammer et al., 2011) are calculated after first inverting for a new regional amplitude decay function, constrained by the sum of dML for long-running stations. Here, we propose a method to calculate an initial dML using known or proxy values of seismic site conditions. For site conditions, we use the time-averaged shear-wave velocity (VS) of the upper 30 m (VS30). We solve for dML as described in Equation (1) of Uhrhammer et al. (2011): ML = log(A) - log A0(r) + dML, where A is the maximum Wood and Anderson (1925) trace amplitude (mm), r is the distance (km), and dML is the station adjustment. The measured VS30 and estimated dML data comprise records from 887 horizontal components (east-west and north-south orientations) from 93 seismic monitoring stations in the California Integrated Seismic Network. VS30 values range from 202 m/s to 1464 m/s and dML ranges from -1.10 to 0.39. VS30 and dML exhibit a positive correlation coefficient (R = 0.72), indicating that as VS30 increases, dML increases. This implies that greater site amplification (i.e., lower VS30) results in smaller ML. When we restrict VS30 < 760 m/s to focus on dML at soft soil to soft rock sites, R increases to 0.80. In locations where measured VS30 data are unavailable, we evaluate the use of proxy-based VS30 estimates based on geology, topographic slope and terrain classification, as well as other hybridized methods. 
Measured VS30 data or proxy-based VS30 estimates can be used for initial dML estimates that allow new stations to contribute to regional network ML estimates immediately without the need to wait until a minimum set of earthquake data has been recorded.
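Equation (1) can be rearranged to give the station adjustment directly once ML, the Wood-Anderson amplitude and the regional attenuation term are known. A minimal sketch; the numeric attenuation value in the example is illustrative, not from the study:

```python
import math

# From ML = log10(A) - log10(A0(r)) + dML, the station adjustment is
#   dML = ML - log10(A) - (-log10 A0(r)).
def station_adjustment(ml, amplitude_mm, minus_log_a0):
    """dML for one component; minus_log_a0 is the attenuation term -log10 A0(r)."""
    return ml - math.log10(amplitude_mm) - minus_log_a0

# Hypothetical M3.0 event recorded with A = 10 mm where -log10 A0(r) = 2.0,
# giving dML = 3.0 - 1.0 - 2.0 = 0.0 (all numbers invented for illustration).
print(station_adjustment(3.0, 10.0, 2.0))
```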

  11. [Polymerase chain reaction (PCR) for the identification of toxigenic Vibrio cholerae O1 in oysters].

    PubMed

    Rodríguez-Angeles, M G; Giono-Cerezo, S; Moreno-Escobar, A; Valdespino-Gómez, J L

    1994-01-01

PCR was performed with primers ctx2 (CGG GCA GAT TCT AGA CCT CCT G) and ctx3 (CGA TGA TCT TGG AGC ATT CCC AC), targeting subunit A of cholera toxin, for 30 temperature cycles on samples of 50 g of oysters added to 450 ml of alkaline peptone water and inoculated with 15 x 10(6), 0.75 x 10(6) and 0.15 x 10(6) CFU/ml of the toxigenic V. cholerae O1 reference strain 6707. The samples were tested by three microbiological methods: INDRE's method uses a 1 x 10(-1) dilution of the sample with two passages into alkaline peptone water (pH 9) incubated for 18 h and 6 h at 37 degrees C; the Food and Drug Administration (FDA) method uses 10(-1) to 10(-6) dilutions of the sample, with 6 h incubation and reincubation for 18 h at 37 and 42 degrees C; and the Mexican laboratories (LMD) method uses 10(-4) to 10(-3) dilutions, with samples incubated for 6 h and then reincubated for 18 h at two temperatures, 37 and 42 degrees C. The PCR by INDRE's method was positive at 3 x 10(2) CFU/ml/g of oyster. With the FDA method, the PCR detected DNA in the 10(-4) dilution at 3 x 10(1) CFU/ml/g of oyster, and with the LMD method the PCR was positive in the 10(-3) dilution at 3 CFU/ml/g of oyster. The PCR results were obtained within 5-6 h, whereas V. cholerae O1 was isolated later by the three microbiological methods. PCR reproducibility was better on DNA samples diluted 1:4, and using 10 microliters of sample increased the sensitivity of the PCR from 1:1000 to 1:10000.

  12. Spectrophotometric study of the reaction mechanism between DDQ Pi- and iodine sigma-acceptors and chloroquine and pyrimethamine drugs and their determination in pure and in dosage forms.

    PubMed

    Zayed, M A; Khalil, Shaban M; El-Qudaby, Hoda M

    2005-11-01

Two simple and accurate spectrophotometric methods are presented for the determination of the anti-malarial drugs chloroquine phosphate (CQP) and pyrimethamine (PYM) in pure form and in different pharmaceutical preparations. The charge transfer (CT) reactions between CQP and PYM as electron donors and the 2,3-dichloro-5,6-dicyano-p-benzoquinone (DDQ) pi-acceptor and iodine sigma-acceptor reagents, which give highly coloured complex species, have been studied spectrophotometrically. The optimum experimental conditions have been studied carefully. Beer's law is obeyed over the concentration range of 1.0-15 microg ml(-1) for CQP and 1.0-40 microg ml(-1) for PYM using I(2), and 5.0-53 microg ml(-1) for CQP and 1.0-46 microg ml(-1) for PYM using DDQ, respectively. For more accurate results, the Ringbom optimum concentration ranges were calculated and found to be 10-53 and 8-46 microg ml(-1) for CQP and PYM using DDQ, respectively, and 5-15 and 8-40 microg ml(-1) for CQP and PYM using iodine, respectively. The Sandell sensitivity is found to be 0.038 and 0.046 g cm(-2) for the DDQ method and 0.0078 and 0.056 g cm(-2) for the I(2) method for CQP and PYM, respectively, which indicates the high sensitivity of both methods. Standard deviations (S.D. = 0.012-0.014 and 0.013-0.015) and relative standard deviations (R.S.D. = 0.09-1.4 and 1.3-1.5%) (n = 5) for the DDQ and I(2) methods, respectively, reflect the high accuracy and precision of the proposed methods. These results are also confirmed by between-day precision, with percent recoveries of 99-100.6% and 98-101% for CQP and PYM by the DDQ method and 99-102% and 99.2-101.4% for CQP and PYM by the I(2) method, respectively. These data are comparable to those obtained by the British and American pharmacopoeia assays for the determination of CQP and PYM in raw materials and in pharmaceutical preparations.
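The assays rest on Beer's law (A = εbc) holding over the stated concentration ranges, so an unknown concentration is read back from a linear calibration. A minimal sketch with invented calibration parameters:

```python
# Invert a linear Beer's-law calibration A = slope*c + intercept to recover
# concentration from a measured absorbance. Slope and intercept here are
# illustrative values, not from the paper.
def concentration(absorbance, slope, intercept=0.0):
    """c = (A - intercept) / slope, with slope in AU per (ug/ml)."""
    return (absorbance - intercept) / slope

# Hypothetical calibration: slope 0.05 AU per ug/ml, zero intercept.
print(concentration(0.5, 0.05))  # 10.0 ug/ml
```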

  13. RobotReviewer: evaluation of a system for automatically assessing bias in clinical trials.

    PubMed

    Marshall, Iain J; Kuiper, Joël; Wallace, Byron C

    2016-01-01

    To develop and evaluate RobotReviewer, a machine learning (ML) system that automatically assesses bias in clinical trials. From a (PDF-formatted) trial report, the system should determine risks of bias for the domains defined by the Cochrane Risk of Bias (RoB) tool, and extract supporting text for these judgments. We algorithmically annotated 12,808 trial PDFs using data from the Cochrane Database of Systematic Reviews (CDSR). Trials were labeled as being at low or high/unclear risk of bias for each domain, and sentences were labeled as being informative or not. This dataset was used to train a multi-task ML model. We estimated the accuracy of ML judgments versus humans by comparing trials with two or more independent RoB assessments in the CDSR. Twenty blinded experienced reviewers rated the relevance of supporting text, comparing ML output with equivalent (human-extracted) text from the CDSR. By retrieving the top 3 candidate sentences per document (top3 recall), the best ML text was rated more relevant than text from the CDSR, but not significantly (60.4% ML text rated 'highly relevant' v 56.5% of text from reviews; difference +3.9%, [-3.2% to +10.9%]). Model RoB judgments were less accurate than those from published reviews, though the difference was <10% (overall accuracy 71.0% with ML v 78.3% with CDSR). Risk of bias assessment may be automated with reasonable accuracy. Automatically identified text supporting bias assessment is of equal quality to the manually identified text in the CDSR. This technology could substantially reduce reviewer workload and expedite evidence syntheses. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
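The "top3 recall" strategy described above, keeping the three sentences the model scores as most informative per document, can be sketched as follows. The example sentences and scores are invented, and the scoring model itself is not reproduced:

```python
# Keep the 3 highest-scoring candidate sentences for a risk-of-bias domain.
def top3(sentences_with_scores):
    ranked = sorted(sentences_with_scores, key=lambda s: s[1], reverse=True)
    return [text for text, _ in ranked[:3]]

# Invented sentences with invented model scores.
doc = [("Allocation was concealed using opaque envelopes.", 0.91),
       ("Patients were recruited from three centres.", 0.12),
       ("Randomisation used a computer-generated list.", 0.88),
       ("The trial ran from 2004 to 2007.", 0.05),
       ("Outcome assessors were blinded.", 0.77)]
print(top3(doc))
```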

  14. Persistent effects of prior chronic exposure to corticosterone on reward-related learning and motivation in rodents.

    PubMed

    Olausson, Peter; Kiraly, Drew D; Gourley, Shannon L; Taylor, Jane R

    2013-02-01

Repeated or prolonged exposure to stress has profound effects on a wide spectrum of behavioral and neurobiological processes and has been associated with the pathophysiology of depression. The multifaceted nature of this disorder includes despair, anhedonia, diminished motivation, and disrupted cognition, and it has been proposed that depression is also associated with reduced reward-motivated learning. We have previously reported that prior chronic corticosterone exposure in mice produces a lasting depressive-like state that can be reversed by chronic antidepressant treatment. In the present study, we tested the effects of prior chronic exposure to corticosterone (50 μg/ml) administered to rats or mice in drinking water for 14 days followed by dose-tapering over 9 days. The exposure to corticosterone produced lasting deficits in the acquisition of reward-related learning tested on a food-motivated instrumental task conducted 10-20 days after the last day of full-dose corticosterone exposure. Rats exposed to corticosterone also displayed reduced responding on a progressive ratio schedule of reinforcement when tested on day 21 after exposure. Amitriptyline exposure (200 mg/ml in drinking water) for 14 days in mice produced the opposite effect, enhancing food-motivated instrumental acquisition and performance. Repeated treatment with amitriptyline (5 mg/kg, intraperitoneally; bid) subsequent to corticosterone exposure also prevented the corticosterone-induced deficits in rats. These results are consistent with aberrant reward-related learning and motivational processes in depressive states and provide new evidence that stress-induced neuroadaptive alterations in cortico-limbic-striatal brain circuits involved in learning and motivation may play a critical role in aspects of mood disorders.

  15. Evaluation of the learning curve for thulium laser enucleation of the prostate with the aid of a simulator tool but without tutoring: comparison of two surgeons with different levels of endoscopic experience.

    PubMed

    Saredi, Giovanni; Pirola, Giacomo Maria; Pacchetti, Andrea; Lovisolo, Jon Alexander; Borroni, Giacomo; Sembenini, Federico; Marconi, Alberto Mario

    2015-06-09

The aim of this study was to determine the learning curve for thulium laser enucleation of the prostate (ThuLEP) for two surgeons with different levels of urological endoscopic experience. From June 2012 to August 2013, ThuLEP was performed on 100 patients in our institution. We present the results of a prospective evaluation during which we analyzed data related to the learning curves for two surgeons of different levels of experience. The prostatic adenoma volumes ranged from 30 to 130 mL (average 61.2 mL). Surgeons A and B performed 48 and 52 operations, respectively. Six months after surgery, all patients were evaluated with the International Prostate Symptom Score questionnaire, uroflowmetry, and prostate-specific antigen test. Introduced in 2010, ThuLEP consists of blunt enucleation of the prostatic apex and lobes using the sheath of the resectoscope. This maneuver allows clearer visualization of the enucleation plane and precise identification of the prostatic capsule. These conditions permit total resection of the prostatic adenoma and coagulation of small penetrating vessels, thereby reducing the laser emission time. Most of the complications in this series were encountered during morcellation, which in some cases was performed under poor vision because of venous bleeding due to surgical perforation of the capsule during enucleation. Based on this analysis, we concluded that it is feasible for laser-naive urologists with endoscopic experience to learn to perform ThuLEP without tutoring. These findings require further validation in larger, multicenter study cohorts with several surgeons. The main novelty during the learning process was the use of a simulator that faithfully reproduced all of the surgical steps in prostates of various shapes and volumes.

  16. Dissociation between learning and memory impairment and other sickness behaviours during simulated Mycoplasma infection in rats.

    PubMed

    Swanepoel, Tanya; Harvey, Brian H; Harden, Lois M; Laburn, Helen P; Mitchell, Duncan

    2011-11-01

To investigate potential consequences for learning and memory, we have simulated the effects of Mycoplasma infection in rats by administering fibroblast-stimulating lipopeptide-1 (FSL-1), a pyrogenic moiety of Mycoplasma salivarium. We measured the effects on body temperature, cage activity, food intake, and on spatial learning and memory in a Morris Water Maze. Male Sprague-Dawley rats had radio transponders implanted to measure abdominal temperature and cage activity. After recovery, rats were assigned randomly to receive intraperitoneal (I.P.) injections of FSL-1 (500 or 1000 μg kg(-1) in 1 ml kg(-1) phosphate-buffered saline; PBS) or vehicle (PBS, 1 ml kg(-1)). Body mass and food intake were measured daily. Training in the Maze commenced 18 h after injections and continued daily for four days. Spatial memory was assessed on the fifth day. In other rats, we measured concentrations of the brain pro-inflammatory cytokines interleukin (IL)-1β and IL-6 at 3 and 18 h after injections. FSL-1 administration induced a dose-dependent fever (∼1°C) for two days, lethargy (∼78%) for four days, anorexia (∼65%) for three days and body mass stunting (∼6%) for at least four days. Eighteen hours after FSL-1 administration, when concentrations of IL-1β, but not of IL-6, were elevated in both the hypothalamus and the hippocampus, and when rats were febrile, lethargic and anorexic, learning in the Maze was unaffected. There was also no memory impairment. Our results support emerging evidence that impaired learning and memory is not inevitable during simulated infection. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. Validated spectrofluorometric method for determination of gemfibrozil in self nanoemulsifying drug delivery systems (SNEDDS)

    NASA Astrophysics Data System (ADS)

    Sierra Villar, Ana M.; Calpena Campmany, Ana C.; Bellowa, Lyda Halbaut; Trenchs, Monserrat Aróztegui; Naveros, Beatriz Clares

    2013-09-01

A spectrofluorometric method has been developed and validated for the determination of gemfibrozil. The method is based on the excitation and emission capacities of gemfibrozil, with excitation and emission wavelengths of 276 and 304 nm respectively. This method allows the determination of the drug in a self-nanoemulsifying drug delivery system (SNEDDS) developed to improve its intestinal absorption. Results obtained showed linear relationships with good correlation coefficients (r2 > 0.999) and low limits of detection and quantification (LOD of 0.075 μg mL-1 and LOQ of 0.226 μg mL-1) in the range of 0.2-5 μg mL-1; the method also showed good robustness and stability. Thus the amounts of gemfibrozil released from SNEDDS contained in gastro-resistant hard gelatine capsules were analysed, and release studies could be performed satisfactorily.

  18. Validated spectrofluorometric method for determination of gemfibrozil in self nanoemulsifying drug delivery systems (SNEDDS).

    PubMed

    Sierra Villar, Ana M; Calpena Campmany, Ana C; Bellowa, Lyda Halbaut; Trenchs, Monserrat Aróztegui; Naveros, Beatriz Clares

    2013-09-01

A spectrofluorometric method has been developed and validated for the determination of gemfibrozil. The method is based on the excitation and emission capacities of gemfibrozil, with excitation and emission wavelengths of 276 and 304 nm respectively. This method allows the determination of the drug in a self-nanoemulsifying drug delivery system (SNEDDS) developed to improve its intestinal absorption. Results obtained showed linear relationships with good correlation coefficients (r(2) > 0.999) and low limits of detection and quantification (LOD of 0.075 μg mL(-1) and LOQ of 0.226 μg mL(-1)) in the range of 0.2-5 μg mL(-1); the method also showed good robustness and stability. Thus the amounts of gemfibrozil released from SNEDDS contained in gastro-resistant hard gelatine capsules were analysed, and release studies could be performed satisfactorily. Copyright © 2013 Elsevier B.V. All rights reserved.
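The reported LOQ/LOD ratio (0.226/0.075, about 3.0) matches the common ICH convention LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation of the calibration and S its slope. A hedged sketch of that convention; the abstract does not state the exact procedure used, and the σ and S values below are illustrative only:

```python
# ICH-style detection limits from a calibration line: sigma is the residual
# standard deviation, slope is the calibration slope. Values are invented.
def detection_limits(sigma, slope):
    lod = 3.3 * sigma / slope
    loq = 10.0 * sigma / slope
    return lod, loq

lod, loq = detection_limits(sigma=0.0075, slope=0.33)  # illustrative numbers
print(round(lod, 3), round(loq, 3))
```

Note that by construction LOQ/LOD = 10/3.3, which is approximately 3.0, consistent with the reported pair of values.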

  19. Thin layer chromatography of p-aminophenol in urine after mixed exposure to aniline and toluene.

    PubMed Central

    Bieniek, G; Karmańska, K; Wilczok, T

    1984-01-01

A simple method for evaluating p-aminophenol in the urine of people exposed simultaneously to aniline and toluene relies on separating p-aminophenol from hippuric acid and other physiological components of the urine by thin layer chromatography. The adsorbents and developing system were chosen to make possible the separation of p-aminophenol from hippuric acid, urea, and creatinine and their quantitative determination. This method also makes possible the determination of p-aminophenol in urine in the presence of hippuric acid. Hippuric acid is a physiological component of urine and also a metabolite of toluene, so the determination of p-aminophenol is possible even after simultaneous exposure to both compounds, aniline and toluene. At the same time, the concentrations of urea and creatinine may be determined as additional factors. The limits of detection of the method are 5 micrograms/ml for p-aminophenol, 9 micrograms/ml for hippuric acid, 8 micrograms/ml for urea, and 6 micrograms/ml for creatinine. PMID:6722055

  20. A technique for fast and accurate measurement of hand volumes using Archimedes' principle.

    PubMed

    Hughes, S; Lau, J

    2008-03-01

A new technique for measuring hand volumes using Archimedes' principle is described. The technique involves the immersion of a hand in a water container placed on an electronic balance. The volume is given by the change in weight divided by the density of water. This technique was compared with the more conventional technique of immersing an object in a container with an overflow spout and collecting and weighing the volume of overflow water. The hand volume of two subjects was measured. Hand volumes were 494 +/- 6 ml and 312 +/- 7 ml for the immersion method and 476 +/- 14 ml and 302 +/- 8 ml for the overflow method for the two subjects, respectively. Using plastic test objects, the mean difference between the actual and measured volume was -0.3% and 2.0% for the immersion and overflow techniques, respectively. This study shows that hand volumes can be obtained more quickly with the immersion method than with the overflow method. The technique could find an application in clinics where frequent hand volume measurements are required.
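The immersion measurement reduces to a single division: the balance reading rises by the weight of the displaced water, so volume is the mass change over the water density. A minimal sketch; the density and example mass change are illustrative values, not from the paper:

```python
# Immersion method: volume = mass change on the balance / water density.
# Water density ~0.998 g/ml near 20 C (assumed; the paper does not state it).
def volume_ml(mass_change_g, water_density_g_per_ml=0.998):
    return mass_change_g / water_density_g_per_ml

# A 493 g increase on the balance corresponds to roughly 494 ml displaced.
print(round(volume_ml(493.0)))
```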

  1. Effect of solvent on the extraction of phenolic compounds and antioxidant capacity of hazelnut kernel.

    PubMed

    Fanali, Chiara; Tripodo, Giusy; Russo, Marina; Della Posta, Susanna; Pasqualetti, Valentina; De Gara, Laura

    2018-03-22

Hazelnut kernel phenolic compounds were recovered using two different extraction approaches, namely ultrasound-assisted solid/liquid extraction (UA-SLE) and solid-phase extraction (SPE). Different solvents were tested, evaluating total phenolic compound and total flavonoid contents together with antioxidant activity. The optimum extraction conditions, in terms of the highest total phenolic content extracted together with other parameters such as simplicity and cost, were selected for method validation and analysis of individual phenolic compounds. The UA-SLE protocol performed using 0.1 g of defatted sample and 15 mL of extraction solvent (1 mL methanol/1 mL water/8 mL methanol 0.1% formic acid/5 mL acetonitrile) was selected. The analysis of individual hazelnut kernel phenolic compounds was carried out by HPLC coupled with DAD and MS detection. Quantitative analysis was performed using a mixture of six phenolic compounds representative of the phenolic classes in hazelnut. The method was then fully validated, and the resulting RSD% values for retention time repeatability were below 1%. Good linearity was obtained, with R2 values no lower than 0.997. The accuracy of the extraction method was also assessed. Finally, the method was applied to the analysis of phenolic compounds in three different hazelnut kernel varieties, revealing a similar qualitative profile with differences in the quantity of detected compounds. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Dynamic species classification of microorganisms across time, abiotic and biotic environments—A sliding window approach

    PubMed Central

    Griffiths, Jason I.; Fronhofer, Emanuel A.; Garnier, Aurélie; Seymour, Mathew; Altermatt, Florian; Petchey, Owen L.

    2017-01-01

The development of video-based monitoring methods allows for rapid, dynamic and accurate monitoring of individuals or communities, compared to slower traditional methods, with far-reaching ecological and evolutionary applications. Large amounts of data are generated using video-based methods, which can be effectively processed using machine learning (ML) algorithms into meaningful ecological information. ML uses user-defined classes (e.g. species), derived from a subset (i.e. training data) of video-observed quantitative features (e.g. phenotypic variation), to infer classes in subsequent observations. However, phenotypic variation often changes due to environmental conditions, which may lead to poor classification if environmentally induced variation in phenotypes is not accounted for. Here we describe a framework for classifying species under changing environmental conditions based on random forest classification. A sliding window approach was developed that restricts the temporal and environmental range of the training data to improve the classification. We tested our approach by applying the classification framework to experimental data. The experiment used a set of six ciliate species to monitor changes in community structure and behavior over hundreds of generations, in dozens of species combinations and across a temperature gradient. Differences in biotic and abiotic conditions caused simplistic classification approaches to be unsuccessful. In contrast, the sliding window approach allowed classification to be highly successful, as phenotypic differences driven by environmental change could be captured by the classifier. Importantly, classification using the random forest algorithm showed comparable success when validated against traditional, slower, manual identification. Our framework allows for reliable classification in dynamic environments, and may help to improve strategies for long-term monitoring of species in changing environments. 
Our classification pipeline can be applied in fields assessing species community dynamics, such as eco-toxicology, ecology and evolutionary ecology. PMID:28472193
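The sliding-window idea, training only on observations from a nearby time window so that environment-driven drift in phenotype is absorbed, can be sketched as follows. A simple nearest-centroid rule stands in for the paper's random forest, and the toy data and window width are invented:

```python
from collections import defaultdict

# Classify one observation using only training examples whose timestamp lies
# within +/- half_window of it; a nearest-centroid rule stands in for the
# random forest used in the paper (illustrative substitution).
def classify(obs_time, obs_feature, training, half_window=1.0):
    window = [(species, feat) for t, species, feat in training
              if abs(t - obs_time) <= half_window]
    # centroid (mean feature value) of each species inside the window
    sums = defaultdict(lambda: [0.0, 0])
    for species, feat in window:
        sums[species][0] += feat
        sums[species][1] += 1
    centroids = {sp: total / n for sp, (total, n) in sums.items()}
    # assign the species whose windowed centroid is nearest
    return min(centroids, key=lambda sp: abs(centroids[sp] - obs_feature))

# Toy data (time, species, feature): species "A" shrinks over time, "B" stays large.
training = [(0.0, "A", 10.0), (0.5, "A", 9.0), (0.5, "B", 20.0),
            (5.0, "A", 4.0), (5.5, "A", 3.5), (5.5, "B", 19.0)]
print(classify(5.2, 4.0, training))
```

Restricting the window means the late, smaller "A" individuals are matched against late training data rather than the early, larger phenotype of the same species.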

  3. Student Support to WL/ML and WL/AA

    DTIC Science & Technology

    1993-01-01

    the program, so please be candid. I. How did you learn about the Student Support Program? Check one. 14 a) Advertisement (flyer, brochure, campus paper...scientific research 25 27 3 1 3. I was satisfied with the way I spent my time 41 13 2 0 4. I learned a lot 32 21 3 0 5. I feel I contributed to the research...are being sought out and tested daily with the hope that from deep within these crystalline fretworks a signal may appear leading to further study and

  4. Digital Preservation and Deep Infrastructure; Dublin Core Metadata Initiative Progress Report and Workplan for 2002; Video Gaming, Education and Digital Learning Technologies: Relevance and Opportunities; Digital Collections of Real World Objects; The MusArt Music-Retrieval System: An Overview; eML: Taking Mississippi Libraries into the 21st Century.

    ERIC Educational Resources Information Center

    Granger, Stewart; Dekkers, Makx; Weibel, Stuart L.; Kirriemuir, John; Lensch, Hendrik P. A.; Goesele, Michael; Seidel, Hans-Peter; Birmingham, William; Pardo, Bryan; Meek, Colin; Shifrin, Jonah; Goodvin, Renee; Lippy, Brooke

    2002-01-01

    One opinion piece and five articles in this issue discuss: digital preservation infrastructure; accomplishments and changes in the Dublin Core Metadata Initiative in 2001 and plans for 2002; video gaming and how it relates to digital libraries and learning technologies; overview of a music retrieval system; and the online version of the…

  5. Transarterial Coil-Augmented Onyx Embolization for Brain Arteriovenous Malformation

    PubMed Central

    Gao, Xu; Liang, Guobiao; Li, Zhiqing; Wang, Xiaogang; Yu, Chunyong; Cao, Peng; Chen, Jun; Li, Jingyuan

    2014-01-01

Summary Onyx has been widely adopted for the treatment of arteriovenous malformations (AVMs). However, controlling it demands that operators surmount a considerable learning curve. We describe our initial experience using a novel injection method for the embolization of AVMs. We retrospectively reviewed the data of all 22 patients with brain AVMs (12 men, 10 women; age range, 12-68 years; mean age, 43.2 years) treated by the transarterial coil-augmented Onyx injection technique. The size of the AVMs ranged from 25 mm to 70 mm (average 35.6 mm). The technical feasibility of the procedure, procedure-related complications, angiographic results, and clinical outcome were evaluated. In every case, endovascular treatment (EVT) was completed. A total of 31 sessions were performed, with a mean injection volume of 6.1 mL (range, 1.5-16.0 mL). An average estimated size reduction of 96.7% (range 85%-100%) was achieved, and 18 AVMs could be completely excluded by EVT alone. The results remained stable on follow-up angiograms. A procedural complication occurred in one patient, with permanent mild neurologic deficit. Our preliminary series demonstrated that the coil-augmented Onyx injection technique is a valuable adjunct that achieves excellent nidal penetration and improves the safety of the procedure. PMID:24556304

  6. Implementation of a Goal-Based Systems Engineering Process Using the Systems Modeling Language (SysML)

    NASA Technical Reports Server (NTRS)

    Patterson, Jonathan D.; Breckenridge, Jonathan T.; Johnson, Stephen B.

    2013-01-01

Building upon the purpose, theoretical approach, and use of a Goal-Function Tree (GFT) being presented by Dr. Stephen B. Johnson, described in a related Infotech 2013 ISHM abstract titled "Goal-Function Tree Modeling for Systems Engineering and Fault Management", this paper will describe the core framework used to implement the GFT-based systems engineering process using the Systems Modeling Language (SysML). These two papers are ideally accepted and presented together in the same Infotech session. Statement of problem: SysML, as a tool, is currently not capable of implementing the theoretical approach described within the "Goal-Function Tree Modeling for Systems Engineering and Fault Management" paper cited above. More generally, SysML's current capabilities to model functional decompositions in the rigorous manner required in the GFT approach are limited. The GFT is a new Model-Based Systems Engineering (MBSE) approach to the development of goals and requirements, functions, and its linkage to design. As a growing standard for systems engineering, it is important to develop methods to implement GFT in SysML. Proposed Method of Solution: Many of the central concepts of the SysML language are needed to implement a GFT for large complex systems. In the implementation of those central concepts, the following will be described in detail: changes to the nominal SysML process, model view definitions and examples, diagram definitions and examples, and detailed SysML construct and stereotype definitions.

  7. Phenotypic Characterization of pncA Mutants of Mycobacterium tuberculosis

    PubMed Central

    Morlock, Glenn P.; Crawford, Jack T.; Butler, W. Ray; Brim, Suzanne E.; Sikes, David; Mazurek, Gerald H.; Woodley, Charles L.; Cooksey, Robert C.

    2000-01-01

    We examined the correlation of mutations in the pyrazinamidase (PZase) gene (pncA) with the pyrazinamide (PZA) resistance phenotype with 60 Mycobacterium tuberculosis isolates. PZase activity was determined by the method of Wayne (L. G. Wayne, Am. Rev. Respir. Dis. 109:147–151, 1974), and the entire pncA nucleotide sequence, including the 74 bp upstream of the start codon, was determined. PZA susceptibility testing was performed by the method of proportions on modified Middlebrook and Cohn 7H10 medium. The PZA MICs were ≥100 μg/ml for 37 isolates, 34 of which had alterations in the pncA gene. These mutations included missense substitutions for 24 isolates, nonsense substitutions for 3 isolates, frameshifts by deletion for 4 isolates, a three-codon insertion for 1 isolate, and putative regulatory mutations for 2 isolates. Among 21 isolates for which PZA MICs were <100 μg/ml, 3 had the same mutation (Thr47→Ala) and 18 had the wild-type sequence. For the three Thr47→Ala mutants PZA MICs were 12.5 μg/ml by the method of proportions on 7H10 agar; two of these were resistant to 100 μg of PZA per ml and the third was resistant to 800 μg of PZA per ml by the BACTEC method. In all, 30 different pncA mutations were found among the 37 pncA mutants. No PZase activity was detected in 35 of 37 strains that were resistant to ≥100 μg of PZA per ml or in 34 of 37 pncA mutants. Reduced PZase activity was found in the three mutants with the Thr47→Ala mutation. This study demonstrates that mutations in the pncA gene may serve as a reliable indicator of resistance to ≥100 μg of PZA per ml. PMID:10952570

  8. Analytical methodology using ion-pair liquid chromatography-tandem mass spectrometry for the determination of four di-ester metabolites of organophosphate flame retardants in California human urine.

    PubMed

    Petropoulou, Syrago-Styliani E; Petreas, Myrto; Park, June-Soo

    2016-02-19

    Alkyl- and aryl-esters of phosphoric acid (both halogenated and non-halogenated) are mainly used as flame retardants (FRs), among other applications, in furniture and consumer products, and they are collectively known as organophosphate flame retardants (OPFRs). The absorption, biotransformation or elimination of many of these chemicals in humans and their possible health effects are not yet well known. A major reason for the limited information is the nature of these compounds, which causes several technical difficulties in their isolation and sensitive determination. A novel analytical liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was developed for the accurate and sensitive determination of four urinary OPFR metabolites: bis(1,3-dichloro-2-propyl) phosphate (BDCIPP), bis(2-chloroethyl) phosphate (BCEP), bis(1-chloro-2-propyl) phosphate (BCIPP), and diphenyl phosphate (DPhP), using mixed-mode solid-phase extraction and isotope dilution. For the first time, all four analytes can be identified in one chromatographic run. An extensive investigation of method development parameters (enzymatic hydrolysis, matrix effects, process efficiency, sources of background interferences, linearity, accuracy, precision, stabilities, and limits of detection and quantification) was performed in order to address previously reported method inconsistencies and select a process with the highest accuracy and sensitivity. Chromatographic separation was achieved on a Luna C18 (2) column (2.00 mm × 150 mm, 3 μm) with mobile phases of 80:20 v/v water:MeOH and 95:5 v/v MeOH:water, both containing 1 mM tributylamine and 1 mM acetic acid. Limits of detection were 0.025 ng mL(-1) for BDCIPP and BCIPP and 0.1 ng mL(-1) for DPhP and BCEP. Absolute recoveries of all four analytes and their labeled compounds were in the range of 88-107%. The method was tested on 13 adult California urine samples.
BCEP was detected at 0.4-15 ng mL(-1) with a geometric mean (GM): 1.9 ng mL(-1); BDCIPP at 0.5-7.3 ng mL(-1), (GM: 2.5 ng mL(-1)) and DPhP at

  9. Reliable and More Powerful Methods for Power Analysis in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Zhang, Zhiyong; Zhao, Yanyun

    2017-01-01

    The normal-distribution-based likelihood ratio statistic T[subscript ml] = nF[subscript ml] is widely used for power analysis in structural equation modeling (SEM). In such an analysis, power and sample size are computed by assuming that T[subscript ml] follows a central chi-square distribution under H[subscript 0] and a noncentral chi-square…
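The standard computation the abstract describes, where power is the probability that a noncentral chi-square variate exceeds the central chi-square critical value, can be sketched in plain Python. This is a generic illustration, not the authors' procedure; the degrees of freedom, significance level, and noncentrality below are invented example values.

```python
import math

def chi2_cdf(x, k):
    # CDF of a central chi-square with k df via the regularized
    # lower incomplete gamma series P(k/2, x/2)
    a, z = k / 2.0, x / 2.0
    if z <= 0:
        return 0.0
    term = math.exp(-z + a * math.log(z) - math.lgamma(a + 1))
    total = term
    for j in range(1, 2000):
        term *= z / (a + j)
        total += term
        if term < 1e-16 * total:
            break
    return total

def chi2_ppf(p, k):
    # critical value by bisection on the CDF
    lo, hi = 0.0, 1000.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if chi2_cdf(mid, k) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def ncx2_sf(x, k, lam):
    # survival function of a noncentral chi-square (lam > 0):
    # a Poisson(lam/2)-weighted mixture of central chi-squares
    sf = 0.0
    for j in range(200):
        w = math.exp(-lam / 2.0 + j * math.log(lam / 2.0) - math.lgamma(j + 1))
        sf += w * (1.0 - chi2_cdf(x, k + 2 * j))
    return sf

# illustrative numbers: df = 10, alpha = 0.05, noncentrality n*F0 = 10
df, alpha, lam = 10, 0.05, 10.0
crit = chi2_ppf(1.0 - alpha, df)   # rejection threshold under H0
power = ncx2_sf(crit, df, lam)     # P(reject H0 | alternative)
```

Sample size determination inverts the same calculation: increase n (and hence the noncentrality lam = n*F0) until `power` reaches the target.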

  10. Identification of multiple leaks in pipeline: Linearized model, maximum likelihood, and super-resolution localization

    NASA Astrophysics Data System (ADS)

    Wang, Xun; Ghidaoui, Mohamed S.

    2018-07-01

    This paper considers the problem of identifying multiple leaks in a water-filled pipeline based on inverse transient wave theory. The analytical solution to this problem involves nonlinear interaction terms between the various leaks. This paper shows analytically and numerically that these nonlinear terms are of the order of the leak sizes to the power two and are thus negligible. As a result of this simplification, a maximum likelihood (ML) scheme that identifies leak locations and leak sizes separately is formulated and tested. It is found that the ML estimation scheme is highly efficient and robust with respect to noise. In addition, the ML method is a super-resolution leak localization scheme because its resolvable leak distance (approximately 0.15λmin, where λmin is the minimum wavelength) is below the Nyquist-Shannon sampling theorem limit (0.5λmin). Moreover, the Cramér-Rao lower bound (CRLB) is derived and used to show the efficiency of the ML scheme estimates. The variance of the ML estimator approaches the CRLB, proving that the ML scheme belongs to the class of best unbiased estimators for leak localization.
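The core ML-plus-CRLB logic is generic: under additive Gaussian noise, maximum likelihood reduces to least squares over candidate locations, and the CRLB follows from the sensitivity of the forward model. The sketch below uses an invented one-dimensional exponential-attenuation forward model, not the paper's transient-wave model, purely to show the pattern.

```python
import math
import random

random.seed(1)

# toy forward model (assumption): signal amplitude seen at sensor x
# for a single leak of given size located at theta
def signal(x, theta, size=1.0):
    return size * math.exp(-abs(x - theta))

sensors = [i / 10.0 for i in range(11)]     # 11 sensors on [0, 1]
true_theta, sigma = 0.37, 0.02              # true location, noise std
data = [signal(x, true_theta) + random.gauss(0.0, sigma) for x in sensors]

# Gaussian noise => ML estimate minimizes the sum of squared residuals
def sse(theta):
    return sum((d - signal(x, theta)) ** 2 for x, d in zip(sensors, data))

grid = [i / 1000.0 for i in range(1001)]
theta_hat = min(grid, key=sse)

# CRLB at the true location: 1 / Fisher information, where the
# sensitivity d(signal)/d(theta) has magnitude exp(-|x - theta|)
fisher = sum(math.exp(-2.0 * abs(x - true_theta)) for x in sensors) / sigma ** 2
crlb = 1.0 / fisher
```

An efficient estimator, as the abstract claims for the ML scheme, is one whose variance over repeated noise realizations approaches `crlb`.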

  11. Supramolecular interaction of methotrexate with cucurbit[7]uril and analytical application

    NASA Astrophysics Data System (ADS)

    Chang, Yin-Xia; Zhang, Xiang-Mei; Duan, Xue-Chao; Liu, Fan; Du, Li-Ming

    2017-08-01

    The supramolecular interaction between cucurbit[7]uril (CB[7]) as the host and the anti-cancer drug methotrexate (MTX) as the guest was studied using fluorescence spectroscopy, UV-visible absorption spectroscopy, 1H NMR, 2D NOESY, and theoretical calculations. The experimental results confirmed the formation of a 1:2 inclusion complex with CB[7] and indicated a simple and sensitive competitive method for the fluorescence detection of MTX. It was found that the fluorescence intensities of CB[7]-palmatine, CB[7]-berberine and CB[7]-coptisine were quenched linearly upon the addition of MTX. The linear ranges obtained in the detection of MTX were 0.1-15 μg mL-1, 0.2-15 μg mL-1, and 0.4-15 μg mL-1, with detection limits of 0.03 μg mL-1, 0.06 μg mL-1, and 0.13 μg mL-1, respectively. This method can be used for the determination of MTX in biological fluids. These results suggested that cucurbit[7]uril is a promising drug carrier for targeted MTX delivery and monitoring, with improved efficacy and reduced toxicity in normal tissues.

  12. Recent advances in environmental data mining

    NASA Astrophysics Data System (ADS)

    Leuenberger, Michael; Kanevski, Mikhail

    2016-04-01

    Due to the large amount and complexity of data available nowadays in geo- and environmental sciences, we face the need to develop and incorporate more robust and efficient methods for their analysis, modelling and visualization. An important part of these developments deals with the elaboration and application of a contemporary and coherent methodology following the process from data collection to the justification and communication of the results. Recent fundamental progress in machine learning (ML) can considerably contribute to the development of the emerging field of environmental data science. The present research highlights and investigates the different issues that can occur when dealing with environmental data mining using cutting-edge machine learning algorithms. In particular, the main attention is paid to the description of the self-consistent methodology and two efficient algorithms - Random Forest (RF, Breiman, 2001) and Extreme Learning Machines (ELM, Huang et al., 2006) - which have recently gained great popularity. Despite the fact that they are based on two different concepts, i.e. decision trees vs artificial neural networks, they both produce promising results for complex, high-dimensional and non-linear data modelling. In addition, the study discusses several important issues of data-driven modelling, including feature selection and uncertainties. The approach considered is accompanied by simulated and real data case studies from renewable resources assessment and natural hazards tasks. In conclusion, the current challenges and future developments in statistical environmental data learning are discussed. References - Breiman, L., 2001. Random Forests. Machine Learning 45 (1), 5-32. - Huang, G.-B., Zhu, Q.-Y., Siew, C.-K., 2006. Extreme learning machine: theory and applications. Neurocomputing 70 (1-3), 489-501. - Kanevski, M., Pozdnoukhov, A., Timonin, V., 2009. Machine Learning for Spatial Environmental Data.
EPFL Press; Lausanne, Switzerland, p.392. - Leuenberger, M., Kanevski, M., 2015. Extreme Learning Machines for spatial environmental data. Computers and Geosciences 85, 64-73.
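Of the two algorithms highlighted, the Extreme Learning Machine is compact enough to sketch from its definition in Huang et al. (2006): hidden-layer weights are drawn at random and frozen, and only the linear readout is fit, by least squares. The toy regression data below are invented for illustration and stand in for an environmental response surface.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=100):
    # Extreme Learning Machine: random (frozen) input weights,
    # analytic least-squares solution for the output weights
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                          # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # linear readout
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# toy data: a smooth nonlinear response of two "environmental" covariates
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2

model = elm_fit(X, y, n_hidden=100)
err = float(np.mean((elm_predict(model, X) - y) ** 2))
```

Because no gradient descent is involved, training cost is a single least-squares solve, which is the source of the speed the ELM literature emphasizes.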

  13. Antibacterial and antifungal activity of Flindersine isolated from the traditional medicinal plant, Toddalia asiatica (L.) Lam.

    PubMed

    Duraipandiyan, V; Ignacimuthu, S

    2009-06-25

    The leaves and root of Toddalia asiatica (L.) Lam. (Rutaceae) are widely used as a folk medicine in India. Hexane, chloroform, ethyl acetate, methanol and water extracts of Toddalia asiatica leaves and the isolated compound Flindersine were tested against bacteria and fungi. Antibacterial and antifungal activities were assessed using the disc-diffusion method and minimum inhibitory concentrations (MICs). The structure of the compound was confirmed using X-ray crystallography. Antibacterial and antifungal activities were observed in the ethyl acetate extract. One active principle, Flindersine (2,6-dihydro-2,2-dimethyl-5H-pyrano[3,2-c]quinoline-5-one, 9CI), was isolated from the ethyl acetate extract. The MIC values of the compound against the bacteria Bacillus subtilis (31.25 microg/ml), Staphylococcus aureus (62.5 microg/ml), Staphylococcus epidermidis (62.5 microg/ml), Enterococcus faecalis (31.25 microg/ml), Pseudomonas aeruginosa (250 microg/ml) and Acinetobacter baumannii (125 microg/ml) and the fungi Trichophyton rubrum 57 (62.5 microg/ml), Trichophyton mentagrophytes (62.5 microg/ml), Trichophyton simii (62.5 microg/ml), Epidermophyton floccosum (62.5 microg/ml), Magnaporthe grisea (250 microg/ml) and Candida albicans (250 microg/ml) were determined. The ethyl acetate extract showed promising antibacterial and antifungal activity, and the isolated compound Flindersine showed moderate activity against bacteria and fungi.

  14. Ultrasound assessment of cranial spread during caudal blockade in children: Effect of different volumes of local anesthetic

    PubMed Central

    Sinha, Chandni; Kumar, Amarjeet; Sharma, Shalini; Singh, Akhilesh Kumar; Majumdar, Somak; Kumar, Ajeet; Sahay, Nishant; Kumar, Bindey; Bhadani, UK

    2017-01-01

    Background: Ultrasound-guided caudal block injection is a simple, safe, and effective method of anesthesia/analgesia in pediatric patients. The volume of caudal drug required has always been a matter of debate. Materials and Methods: This prospective, randomized, double-blinded study aimed to measure the extent of the cranial spread of caudally administered levobupivacaine in Indian children by means of real-time ultrasonography. Ninety American Society of Anesthesiologists I/II children scheduled for urogenital surgeries were enrolled in this trial. Anesthesia and caudal analgesia were administered in a standardized manner. The patients received 0.5 ml/kg, 1 ml/kg or 1.25 ml/kg of 0.125% levobupivacaine according to the group allocated. Cranial spread of the local anesthetic was noted using ultrasound. Results: There was no difference in the spread when related to age, sex, weight, or body mass index. A significant difference in ultrasound-assessed cranial spread of the local anesthetic was found between Group 1 (0.5 ml/kg) and both Group 2 (1 ml/kg) (P = 0.001) and Group 3 (1.25 ml/kg) (P < 0.001), but no significant difference was found between Group 2 and Group 3 (P = 0.451), revealing that spinal-level spread differed only between 0.5 ml/kg and 1 ml/kg of local anesthetic. Conclusion: The ultrasound assessment of local anesthetic spread after a caudal block showed that cranial spread of the block is dependent on the volume injected into the caudal space. Since there was no difference between 1 ml/kg and 1.25 ml/kg, to achieve a dermatomal blockade up to the thoracic level, we might have to increase the dose beyond 1.25 ml/kg, keeping the toxic dose in mind. PMID:29033727

  15. Behavioral Functions of the Mesolimbic Dopaminergic System: an Affective Neuroethological Perspective

    PubMed Central

    Alcaro, Antonio; Huber, Robert; Panksepp, Jaak

    2008-01-01

    The mesolimbic dopaminergic (ML-DA) system has been recognized for its central role in motivated behaviors, various types of reward, and, more recently, in cognitive processes. Functional theories have emphasized DA's involvement in the orchestration of goal-directed behaviors, and in the promotion and reinforcement of learning. The affective neuroethological perspective presented here views the ML-DA system in terms of its ability to activate an instinctual emotional appetitive state (SEEKING) evolved to induce organisms to search for all varieties of life-supporting stimuli and to avoid harms. A description of the anatomical framework in which the ML system is embedded is followed by the argument that the SEEKING disposition emerges through functional integration of ventral basal ganglia (BG) into thalamocortical activities. Filtering cortical and limbic input that spreads into BG, DA transmission promotes the “release” of neural activity patterns that induce active SEEKING behaviors when expressed at the motor level. Reverberation of these patterns constitutes a neurodynamic process for the inclusion of cognitive and perceptual representations within the extended networks of the SEEKING urge. In this way, the SEEKING disposition influences attention, incentive salience, associative learning, and anticipatory predictions. In our view, the rewarding properties of drugs of abuse are, in part, caused by the activation of the SEEKING disposition, ranging from appetitive drive to persistent craving depending on the intensity of the affect. The implications of such a view for understanding addiction are considered, with particular emphasis on factors predisposing individuals to develop compulsive drug seeking behaviors. PMID:17905440

  16. Behavioral functions of the mesolimbic dopaminergic system: an affective neuroethological perspective.

    PubMed

    Alcaro, Antonio; Huber, Robert; Panksepp, Jaak

    2007-12-01

    The mesolimbic dopaminergic (ML-DA) system has been recognized for its central role in motivated behaviors, various types of reward, and, more recently, in cognitive processes. Functional theories have emphasized DA's involvement in the orchestration of goal-directed behaviors and in the promotion and reinforcement of learning. The affective neuroethological perspective presented here views the ML-DA system in terms of its ability to activate an instinctual emotional appetitive state (SEEKING) evolved to induce organisms to search for all varieties of life-supporting stimuli and to avoid harms. A description of the anatomical framework in which the ML system is embedded is followed by the argument that the SEEKING disposition emerges through functional integration of ventral basal ganglia (BG) into thalamocortical activities. Filtering cortical and limbic input that spreads into BG, DA transmission promotes the "release" of neural activity patterns that induce active SEEKING behaviors when expressed at the motor level. Reverberation of these patterns constitutes a neurodynamic process for the inclusion of cognitive and perceptual representations within the extended networks of the SEEKING urge. In this way, the SEEKING disposition influences attention, incentive salience, associative learning, and anticipatory predictions. In our view, the rewarding properties of drugs of abuse are, in part, caused by the activation of the SEEKING disposition, ranging from appetitive drive to persistent craving depending on the intensity of the affect. The implications of such a view for understanding addiction are considered, with particular emphasis on factors predisposing individuals to develop compulsive drug seeking behaviors.

  17. Development and validation of a rapid turboflow LC-MS/MS method for the quantification of LSD and 2-oxo-3-hydroxy LSD in serum and urine samples of emergency toxicological cases.

    PubMed

    Dolder, Patrick C; Liechti, Matthias E; Rentsch, Katharina M

    2015-02-01

    Lysergic acid diethylamide (LSD) is a widely used recreational drug. The aim of the present study was to develop a quantitative turboflow LC-MS/MS method that can be used for rapid quantification of LSD and its main metabolite 2-oxo-3-hydroxy LSD (O-H-LSD) in serum and urine in emergency toxicological cases without time-consuming extraction steps. The method was developed on an ion-trap LC-MS/MS instrument coupled to a turbulent-flow extraction system. The validation data showed no significant matrix effects, and no ion suppression was observed in serum and urine. Mean intraday accuracy and precision for LSD were 101 and 6.84% in urine samples and 97.40 and 5.89% in serum, respectively. For O-H-LSD, the respective values were 97.50 and 4.99% in urine and 107 and 4.70% in serum. Mean interday accuracy and precision for LSD were 100 and 8.26% in urine and 101 and 6.56% in serum, respectively. For O-H-LSD, the respective values were 101 and 8.11% in urine and 99.8 and 8.35% in serum. The lower limit of quantification for LSD was determined to be 0.1 ng/ml. LSD concentrations in serum were expected to be up to 8 ng/ml, and 2-oxo-3-hydroxy LSD concentrations in urine up to 250 ng/ml. The new method was accurate and precise in the range of expected serum and urine concentrations in patients with a suspected LSD intoxication. Until now, the method has been applied in five cases with suspected LSD intoxication, where the intake of the drug was verified four times with LSD concentrations in serum in the range of 1.80-14.70 ng/ml and once with an LSD concentration of 1.25 ng/ml in urine. In serum of two patients, the O-H-LSD concentration was determined to be 0.99 and 0.45 ng/ml. In the urine of a third patient, the O-H-LSD concentration was 9.70 ng/ml.

  18. Adaptations of the Saker-Solomons test: simple, reliable colorimetric field assays for chloroquine and its metabolites in urine.

    PubMed

    Mount, D L; Nahlen, B L; Patchen, L C; Churchill, F C

    1989-01-01

    Two field-adapted colorimetric methods for measuring the antimalarial drug chloroquine in urine are described. Both are modifications of the method of Saker and Solomons for screening urine for phencyclidine and other drugs of abuse, using the colour reagent tetrabromophenolphthalein ethyl ester. One method is semiquantitative, detecting the presence of chloroquine (Cq) and its metabolites in urine with a 1 microgram/ml detection limit; it is more sensitive and reliable than the commonly used Dill-Glazko method and is as easy to apply in the field. The second method uses a hand-held, battery-operated filter photometer to quantify Cq and its metabolites with a 2 microgram/ml detection limit and a linear range up to 8 micrograms/ml. The first method was validated in the field using a published quantitative colorimetric method and samples from a malaria study in Nigeria. The second method was validated in the laboratory against high-performance liquid chromatographic results on paired samples from the Nigerian study. Both methods may be used in remote locations where malaria is endemic and no electricity is available.

  19. Simultaneous determination of nikethamide and lidocaine in human blood and cerebrospinal fluid by high performance liquid chromatography.

    PubMed

    Chen, Lili; Liao, Linchuan; Zuo, Zhong; Yan, Youyi; Yang, Lin; Fu, Qiang; Chen, Yu; Hou, Junhong

    2007-04-11

    Simultaneous quantification of nikethamide and lidocaine is often requested in forensic toxicological analysis. A simple reversed-phase high performance liquid chromatography (RP-HPLC) method has been developed for their simultaneous determination in human blood and cerebrospinal fluid. The method involves simple protein-precipitation sample treatment followed by quantification of the analytes using HPLC at 263 nm. Analytes were separated on a 5 microm Zorbax Dikema C18 column (150 mm x 4.60 mm i.d.) with a mobile phase consisting of a 22:78 (v/v) mixture of methanol and a diethylamine-acetic acid buffer, pH 4.0. The mean recoveries were between 69.8 and 94.4% for nikethamide and between 78.9 and 97.2% for lidocaine. Limits of detection (LODs) for nikethamide and lidocaine were 0.008 and 0.16 microg/ml in plasma and 0.007 and 0.14 microg/ml in cerebrospinal fluid, respectively. The mean intra-assay and inter-assay coefficients of variation (CVs) for both analytes were less than 9.2 and 10.8%, respectively. The developed method was applied to blood sample analyses in eight forensic cases, where blood concentrations of lidocaine ranged from 0.68 to 34.4 microg/ml and nikethamide from 1.25 to 106.8 microg/ml. In six cases cerebrospinal fluid analysis was requested; the values ranged from 20.3 to 185.6 microg/ml for lidocaine and 8.0 to 72.4 microg/ml for nikethamide. The method is simple and sensitive enough to be used in toxicological analysis for the simultaneous determination of nikethamide and lidocaine in blood and cerebrospinal fluid.

  20. New algorithms and methods to estimate maximum-likelihood phylogenies: assessing the performance of PhyML 3.0.

    PubMed

    Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier

    2010-05-01

    PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.
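PhyML's likelihood machinery (pruning over tree topologies, substitution-model parameters) is far more elaborate, but the maximum-likelihood principle it rests on can be shown with the simplest case: the ML evolutionary distance between two aligned sequences under the Jukes-Cantor (JC69) model, which has the closed form d = -(3/4) ln(1 - 4p/3) for an observed proportion p of differing sites. The counts below are invented example data.

```python
import math

def jc69_loglik(n_same, n_diff, d):
    # JC69: at distance d, a site is identical with probability
    # 1/4 + 3/4*exp(-4d/3) and differs with probability 3/4 - 3/4*exp(-4d/3)
    e = math.exp(-4.0 * d / 3.0)
    p_same = 0.25 + 0.75 * e
    p_diff = 0.75 - 0.75 * e
    return n_same * math.log(p_same) + n_diff * math.log(p_diff)

def jc69_mle(n_same, n_diff):
    # closed-form maximizer of the likelihood above
    p = n_diff / (n_same + n_diff)
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

# example: 100 aligned sites, 10 of which differ
d_hat = jc69_mle(90, 10)
```

Tree-level ML search, as in PhyML, optimizes the same kind of likelihood jointly over all branch lengths and the topology rather than site-difference counts for one pair.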

  1. Metacognitive Strategies: A Foundation for Early Word Spelling and Reading in Kindergartners with SLI

    ERIC Educational Resources Information Center

    Schiff, Rachel; Nuri Ben-Shushan, Yohi; Ben-Artzi, Elisheva

    2017-01-01

    This study assessed the effect of metacognitive instruction on the spelling and word reading of Hebrew-speaking children with specific language impairment (SLI). Participants were 67 kindergarteners with SLI in a supported learning context. Children were classified into three spelling instruction groups: (a) metalinguistic instruction (ML), (b) ML…

  2. Comparison of Radio Frequency Distinct Native Attribute and Matched Filtering Techniques for Device Discrimination and Operation Identification

    DTIC Science & Technology

    identification. URE from ten MSP430F5529 16-bit microcontrollers were analyzed using: 1) RF distinct native attributes (RF-DNA) fingerprints paired with multiple...discriminant analysis/maximum likelihood (MDA/ML) classification, 2) RF-DNA fingerprints paired with generalized relevance learning vector quantized

  3. Salting-out assisted liquid-liquid extraction with the aid of experimental design for determination of benzimidazole fungicides in high salinity samples by high-performance liquid chromatography.

    PubMed

    Wen, Yingying; Li, Jinhua; Yang, Fangfang; Zhang, Weiwei; Li, Weiran; Liao, Chunyang; Chen, Lingxin

    2013-03-15

    A novel method for the simultaneous separation and determination of four benzimidazole fungicides (i.e., carbendazim, fuberidazole, thiophanate-methyl and thiophanate) in high salinity samples was developed by using salting-out assisted liquid-liquid extraction (SALLE) with water-miscible acetonitrile as the extractant, coupled with high-performance liquid chromatography. A Box-Behnken design and response surfaces were employed to assist the optimization of the SALLE conditions, with the volume of salting-out solvent and the pH of the sample solution and salting-out solvent as variable factors. The optimal salting-out parameters were obtained as follows: 2 mL of acetonitrile was added to 2 mL of sample solution at pH 4, and then 2 mL of salting-out solvent containing 5 mol L(-1) sodium chloride at pH 7 was added to the solution for extraction. The procedure afforded a convenient and cost-saving operation with good cleanup ability for the benzimidazole fungicides, yielding good linear relationships (R>0.996) between peak area and concentration from 2.5 ng mL(-1) to 500 ng mL(-1), low limits of detection between 0.14 ng mL(-1) and 0.38 ng mL(-1), and intra-day precisions of retention time below 1.0%. The method recoveries obtained at three fortified concentrations for three seawater samples ranged from 60.4% to 99.1%. The simple, rapid and eco-benign SALLE-based method proved potentially applicable for trace benzimidazole fungicide analysis in high salinity samples. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Phytochemical analysis and antibacterial activities extracts of mangrove leaf against the growth of some pathogenic bacteria.

    PubMed

    Alizadeh Behbahani, Behrooz; Tabatabaei Yazdi, Farideh; Shahidi, Fakhri; Noorbakhsh, Hamid; Vasiee, Alireza; Alghooneh, Ali

    2018-01-01

    In this study, the effects of water, ethanol, methanol and glycerin at five levels (0, 31.25, 83.33, 125 and 250 ml) on the efficiency of mangrove leaf extraction were investigated using a mixture optimal design. The antimicrobial effect of the extracts on Streptococcus pneumoniae, Enterococcus faecium and Klebsiella pneumoniae was evaluated using disk diffusion, minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) methods. The components of the mangrove leaf extract were identified through gas chromatography/mass spectrometry (GC/MS). Phytochemical analyses (alkaloids, tannins, saponins, flavone and glycosides) were performed using qualitative methods. Antioxidant activity of the extracts was measured using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and ferric reducing antioxidant potential (FRAP) methods. The maximum antimicrobial effect and the highest resistance against the mangrove leaf extract were observed in Enterococcus faecium and Klebsiella pneumoniae, respectively. Increasing the concentration of the mangrove extracts had a significant effect (p ≤ 0.05) on inhibition zone diameter. The MICs of the mangrove leaf extract varied from 4 mg/ml to 16 mg/ml. The optimum formulation was found to contain glycerin (0 ml), water (28.22 ml), methanol (59.83 ml) and ethanol (161.95 ml). The results showed that the highest antioxidant activity was obtained with the optimum extract of mangrove leaf and the ethanolic extract, respectively. The results of phytochemical screening of Avicennia marina leaf extract showed the existence of alkaloids, tannins, saponins, flavone and glycosides. 2-Propenoic acid, 3-phenyl- was the major compound of Avicennia marina. The non-significant lack-of-fit tests and the F value (14.62) indicated that the model was sufficiently accurate. In addition, the coefficient of variation (16.8%) showed acceptable reproducibility. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. A facile physical approach to make chitosan soluble in acid-free water.

    PubMed

    Fu, Yinghao; Xiao, Congming

    2017-10-01

    Through mild physical treatment, we changed the situation that chitosan dissolves only in dilute acid. In view of the fact that the usual methods to modify chitosan are chemical ones, we established the approach by using a water-soluble chitosan derivative as the model polymer. Its water-solubility was modulated by changing the concentration of the solution and varying the precipitants. This physical method was then adopted to treat chitosan. One gram of chitosan was dissolved in a mixture of 100 mL of 10% acetic acid and 50 mL of methanol, and then precipitated with a precipitant consisting of 10 mL of ethanol and 90 mL of acetate ester. The treated chitosan became completely soluble in acid-free water, and its solubility was 8.02 mg/mL. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Potentiometric titration for determining the composition and stability of metal(II) alginates and pectinates in aqueous solutions

    NASA Astrophysics Data System (ADS)

    Kaisheva, N. Sh.; Kaishev, A. Sh.

    2015-07-01

    The compositions and stabilities of Cu2+, Mn2+, Pb2+, Ca2+, Zn2+, Cd2+, Co2+, and Ni2+ alginates and pectinates are determined in aqueous solutions via titrimetry and potentiometry with calculations performed using Bjerrum's method, the curve intersection technique, and the equilibrium shift method. It is found that the interaction between Cu2+ and polyuronides is a stepwise process and, depending on the ligand concentration and the method of determination, Cu2+ alginate can be characterized by its ML, ML2, and ML3 compositions (where M is the metal ion and L is the structural unit of polyuronide) and stability constants logβ = 2.65, 5.00-5.70, and 7.18-7.80, respectively. The compositions of Cu2+ pectinates are ML and ML2 with logβ = 3.00 and 7.64-7.94, respectively. It is concluded that Pb2+, Ca2+, Mn2+, Zn2+, Cd2+, Co2+, and Ni2+ ions form only alginates and pectinates of ML2 composition with logβ values of 3.45 (Pb2+ alginate), 2.20 (Ca2+ alginate), 1.06 (Mn2+ alginate), 3.51 (Pb2+ pectinate), 2.35 (Ca2+ pectinate), and 1.24 (Mn2+ pectinate). The pectinates are shown to be more stable than the alginates, the most stable compounds being those formed by polyuronides and Cu2+. The least stable are those with Mn2+.
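The reported cumulative stability constants translate directly into species distributions: for constants βi, the fraction of metal present as MLi at free-ligand concentration [L] is βi[L]^i / (1 + Σj βj[L]^j). A minimal sketch using the Cu2+ pectinate values from the abstract (logβ1 = 3.00, logβ2 = 7.94); the free-ligand concentration chosen is an illustrative assumption, not a value from the study.

```python
def speciation(log_betas, L):
    # fractions of M, ML, ML2, ... at free ligand concentration L (mol/L),
    # given cumulative stability constants as log10(beta_i)
    betas = [10.0 ** lb for lb in log_betas]
    terms = [1.0] + [b * L ** (i + 1) for i, b in enumerate(betas)]
    total = sum(terms)
    return [t / total for t in terms]

# Cu2+ pectinate: logβ = 3.00 (ML) and 7.94 (ML2), per the abstract;
# assumed free-ligand concentration of 1e-3 mol/L for illustration
fracs = speciation([3.00, 7.94], 1e-3)  # [frac M, frac ML, frac ML2]
```

At millimolar free ligand the ML2 species dominates, consistent with the abstract's finding that most of these metals form complexes of ML2 composition.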

  7. On the evaluation of the fidelity of supervised classifiers in the prediction of chimeric RNAs.

    PubMed

    Beaumeunier, Sacha; Audoux, Jérôme; Boureux, Anthony; Ruffle, Florence; Commes, Thérèse; Philippe, Nicolas; Alves, Ronnie

    2016-01-01

    High-throughput sequencing technology and bioinformatics have identified chimeric RNAs (chRNAs), raising the possibility that chRNAs expressed specifically in diseases could be used as potential biomarkers in both diagnosis and prognosis. The task of discriminating true chRNAs from false ones poses an interesting Machine Learning (ML) challenge. First of all, the sequencing data may contain false reads due to technical artifacts, and during the analysis process bioinformatics tools may generate false positives due to methodological biases. Moreover, even if we succeed in obtaining a proper set of observations (enough sequencing data) about true chRNAs, chances are that the devised model will not generalize beyond it. Like any other machine learning problem, the first big issue is finding good data to build models on. To our knowledge, there is no common benchmark data available for chRNA detection, and the definition of a classification baseline is lacking in the related literature too. In this work we move towards benchmark data and an evaluation of the fidelity of supervised classifiers in the prediction of chRNAs. We proposed a modeling strategy, based on a simulated data generator, that can be used to increase tool performance in the context of chRNA classification and permits the continuous integration of new complex chimeric events. The pipeline incorporated a genome mutation process and simulated RNA-seq data. Reads at distinct depths were aligned and analysed by CRAC, which integrates genomic location and local coverage, allowing biological predictions at the read scale. Additionally, these reads were functionally annotated and aggregated to form chRNA events, making it possible to evaluate the performance of ML methods (classifiers) at both the read and event levels. Ensemble learning strategies proved more robust for this classification problem, providing an average AUC performance of 95% (ACC=94%, Kappa=0.87).
The resulting classification models were also tested on real RNA-seq data from a set of twenty-seven patients with acute myeloid leukemia (AML).
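
The three metrics reported above (ROC AUC, accuracy, and Cohen's kappa) can be computed from predictions in a few lines of plain Python. This is an illustrative sketch of the evaluation metrics only, not the authors' pipeline; the example labels are hypothetical.

```python
def auc(y_true, scores):
    # ROC AUC via the Mann-Whitney U statistic: the fraction of
    # positive/negative pairs ranked correctly (ties count half).
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def cohens_kappa(y_true, y_pred):
    # Agreement corrected for chance: (observed - expected) / (1 - expected)
    n = len(y_true)
    po = accuracy(y_true, y_pred)
    p1t, p1p = sum(y_true) / n, sum(y_pred) / n
    pe = p1t * p1p + (1 - p1t) * (1 - p1p)
    return (po - pe) / (1 - pe)
```

Note that kappa is a dimensionless agreement score in [-1, 1], which is why the value 0.87 above carries no percent sign.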

  8. A Generic Metallographic Preparation Method for Magnesium Alloys

    DTIC Science & Technology

    2013-05-01

    treated castings or wrought alloys. Stains solid solution, leaves compound white. 9: 100-ml water 0.2–2-g oxalic acid For pure Mg and most alloys. Swab...water 2-g oxalic acid Pure Mg Mg-Mn Mg-Al, Mg-Al-Zn (Al+Zn≤5%) Mg-Al, Mg-Al-Zn (Al+Zn>5%) Mg-Zn-Zr Mg-Th-Zr Swab...using a 100-ml ethanol, 10-ml distilled water, 10-ml acetic acid, and 5-g picric acid etchant. Immersed and using gentle agitation 5–20 s. Though not

  9. Mercuric 5-Nitrotetrazole, a Possible Replacement for Lead Azide in Australian Ordnance. Part 1. An Assessment of Preparation Methods

    DTIC Science & Technology

    1983-08-01

    nitrotetrazole) (Cuen2(NT)2) [7] A solution of sodium nitrite (26 g) and cupric sulfate pentahydrate (13.75 g) in water (75 ml) was placed in the 600 ml...pan and cooled to 5°C. To this stirred solution was added a solution of 5-aminotetrazole monohydrate (12.9 g), cupric sulfate pentahydrate (1.0 g) and...stirring, then a solution of cupric sulfate pentahydrate (5.25 g) and ethylenediamine (11.25 ml) in water (20 ml) was added. Stirring and heating were

  10. 40 CFR Appendix A to Subpart Hhhh... - Method for Determining Free-Formaldehyde in Urea-Formaldehyde Resins by Sodium Sulfite (Iced...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (class A). 3.2.5 One 10-mL pipette (class A). 3.2.6 One 50-mL graduated cylinder (class A). 3.2.7 A pH meter, standardized using pH 7 and pH 10 buffers. 3.2.8 Magnetic stirrer. 3.2.9 Magnetic stirring bars. 3.2.10 Several 5... mL of 1 M sodium sulfite into a stirred 250-mL beaker. 3.5.1.2 Using a standardized pH meter, measure...

  11. Simultaneous determination of vitamins A and D3 in dairy products by liquid chromatography-tandem mass spectrometry (LC-MS/MS)

    NASA Astrophysics Data System (ADS)

    Barakat, I. S. A.; Hammouri, M. K.; Habib, I.

    2015-10-01

    A potential method for the simultaneous determination of vitamin A and vitamin D3 (25-hydroxyvitamin D3) in fresh milk samples is addressed. The method is based on a combination of high performance liquid chromatography and mass spectrometry during the course of analysis. The method was applied to the determination of vitamins A and D3 in eighteen (18) different fresh milk samples using liquid chromatography along with tandem mass spectrometry. The work describes the suitability of the proposed method for the simultaneous determination of both vitamins using LC-MS/MS as a specific and quantitative technique. The milk vitamins were separated on a C18 Thermo gold column (100 mm × 4.6 mm × 5 μm) at a flow rate of 1 ml/min (using an isocratic mobile phase). The method was validated using duplicate analyses, a relative recovery experiment, and comparative analysis with control samples. Liquid-liquid extraction was employed as a pre-concentration step with an n-hexane-dichloromethane mixture (90:10) as the extraction solvent. The molecular ions appeared near m/z 286 and 385, and the base peaks near m/z 255 and 355, for vitamins A and D3, respectively. Good correlation coefficients were obtained: 0.9999 for vitamin D3 and 0.9994 for vitamin A. The limit of detection and the limit of quantification were found to be 0.09 ng/ml and 0.54 ng/ml for vitamin D3, and 0.32 ng/ml and 1.8 ng/ml for vitamin A. The proposed method showed excellent recoveries, about 98% for both vitamins A and D3.

  12. On the uncertainty in single molecule fluorescent lifetime and energy emission measurements

    NASA Technical Reports Server (NTRS)

    Brown, Emery N.; Zhang, Zhenhua; Mccollom, Alex D.

    1995-01-01

    Time-correlated single photon counting has recently been combined with mode-locked picosecond pulsed excitation to measure the fluorescent lifetimes and energy emissions of single molecules in a flow stream. Maximum likelihood (ML) and least squares methods agree and are optimal when the number of detected photons is large; however, in single molecule fluorescence experiments the number of detected photons can be less than 20, 67% of those can be noise, and the detection time is restricted to 10 nanoseconds. Under the assumption that the photon signal and background noise are two independent inhomogeneous Poisson processes, we derive the exact joint arrival time probability density of the photons collected in a single counting experiment performed in the presence of background noise. The model obviates the need to bin experimental data for analysis, and makes it possible to analyze formally the effect of background noise on the photon detection experiment using both ML and Bayesian methods. For both methods we derive the joint and marginal probability densities of the fluorescent lifetime and fluorescent emission. The ML and Bayesian methods are compared in an analysis of simulated single molecule fluorescence experiments of Rhodamine 110 using different combinations of expected background noise and expected fluorescence emission. While both the ML and Bayesian procedures perform well for analyzing fluorescence emissions, the Bayesian methods provide more realistic measures of uncertainty in the fluorescent lifetimes. The Bayesian methods would be especially useful for measuring uncertainty in fluorescent lifetime estimates in current single molecule flow stream experiments where the expected fluorescence emission is low. Both the ML and Bayesian algorithms can be automated for applications in molecular biology.

  13. On the Uncertainty in Single Molecule Fluorescent Lifetime and Energy Emission Measurements

    NASA Technical Reports Server (NTRS)

    Brown, Emery N.; Zhang, Zhenhua; McCollom, Alex D.

    1996-01-01

    Time-correlated single photon counting has recently been combined with mode-locked picosecond pulsed excitation to measure the fluorescent lifetimes and energy emissions of single molecules in a flow stream. Maximum likelihood (ML) and least squares methods agree and are optimal when the number of detected photons is large, however, in single molecule fluorescence experiments the number of detected photons can be less than 20, 67 percent of those can be noise, and the detection time is restricted to 10 nanoseconds. Under the assumption that the photon signal and background noise are two independent inhomogeneous Poisson processes, we derive the exact joint arrival time probability density of the photons collected in a single counting experiment performed in the presence of background noise. The model obviates the need to bin experimental data for analysis, and makes it possible to analyze formally the effect of background noise on the photon detection experiment using both ML and Bayesian methods. For both methods we derive the joint and marginal probability densities of the fluorescent lifetime and fluorescent emission. The ML and Bayesian methods are compared in an analysis of simulated single molecule fluorescence experiments of Rhodamine 110 using different combinations of expected background noise and expected fluorescence emission. While both the ML and Bayesian procedures perform well for analyzing fluorescence emissions, the Bayesian methods provide more realistic measures of uncertainty in the fluorescent lifetimes. The Bayesian methods would be especially useful for measuring uncertainty in fluorescent lifetime estimates in current single molecule flow stream experiments where the expected fluorescence emission is low. Both the ML and Bayesian algorithms can be automated for applications in molecular biology.
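
The simplest case of the ML estimation described above can be sketched directly: with no background noise and an unbounded detection window, the exponential-decay log-likelihood is maximized by the sample mean of the photon arrival times. This is a hedged illustration of that textbook special case only; the paper's full model additionally handles the Poisson background and the 10 ns window, and the simulated values below are hypothetical.

```python
import random

def ml_lifetime(arrival_times):
    # For an untruncated exponential decay with no background, the
    # log-likelihood sum(-t_i/tau - log(tau)) is maximized at the sample mean.
    return sum(arrival_times) / len(arrival_times)

# Illustrative simulation of a low-photon-count experiment (values hypothetical)
random.seed(0)
true_tau = 4.0  # lifetime in nanoseconds, assumed for the demo
photons = [random.expovariate(1.0 / true_tau) for _ in range(20)]
tau_hat = ml_lifetime(photons)
```

With only ~20 photons, tau_hat scatters widely around the true lifetime, which is precisely why the abstract argues for formal uncertainty measures.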

  14. Practical utility of on-line clearance and blood temperature monitors as noninvasive techniques to measure hemodialysis blood access flow.

    PubMed

    Fontseré, Néstor; Blasco, Miquel; Maduell, Francisco; Vera, Manel; Arias-Guillen, Marta; Herranz, Sandra; Blanco, Teresa; Barrufet, Marta; Burrel, Marta; Montaña, Javier; Real, Maria Isabel; Mestres, Gaspar; Riambau, Vicenç; Campistol, Josep M

    2011-01-01

    Access blood flow (Qa) measurements are recommended by the current guidelines as one of the most important components of vascular access maintenance programs. This study evaluates the efficiency of Qa measurement with on-line conductivity (OLC-Qa) and blood temperature monitoring (BTM-Qa) in comparison with the gold standard saline dilution method (SDM-Qa). Fifty long-term hemodialysis patients (42 arteriovenous fistulas/8 arteriovenous grafts) were studied. Bland-Altman analysis and Lin's coefficient (ρ(c)) were used to study accuracy and precision. Mean values were 1,021.7 ± 502.4 ml/min with SDM-Qa, 832.8 ± 574.3 ml/min with OLC-Qa (p = 0.007) and 1,094.9 ± 491.9 ml/min with BTM-Qa (p = NS). The biases and ρ(c) obtained were -188.8 ml/min (ρ(c) 0.58) for OLC-Qa and 73.2 ml/min (ρ(c) 0.89) for BTM-Qa. The limits of agreement (bias ± 1.96 SD) obtained were from -1,119 to 741.3 ml/min (OLC-Qa) and from -350.6 to 497.2 ml/min (BTM-Qa). BTM-Qa and OLC-Qa are valid noninvasive and practical methods to estimate Qa, although BTM-Qa was more accurate and had better concordance with SDM-Qa than OLC-Qa. Copyright © 2010 S. Karger AG, Basel.
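
The Bland-Altman quantities reported above (bias and limits of agreement, bias ± 1.96 SD of the paired differences) can be sketched as follows. The flow values in the example are hypothetical, not the study data.

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    # Bias is the mean of the per-patient differences between the two
    # methods; the 95% limits of agreement are bias +/- 1.96 SD of those
    # differences.
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical Qa readings (ml/min) from two methods on three patients
bias, limits = bland_altman([1000.0, 900.0, 1100.0], [990.0, 910.0, 1080.0])
```
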

  15. [Application of bilateral direct anterior approach total hip arthroplasty: a report of 22 cases].

    PubMed

    Tang, J; Lv, M; Zhou, Y X; Zhang, J

    2017-04-18

    To analyze the operative technique and the methods for avoiding early complications on the learning curve for bilateral direct anterior approach (DAA) total hip arthroplasty (THA), we retrospectively studied a continuous series of cases with bilateral avascular necrosis of the femoral head (AVN), degenerative dysplastic hip, or rheumatoid arthritis treated by DAA THA in Beijing Jishuitan Hospital. A total of 22 patients with 44 hips were analyzed from June 2014 to August 2016. There were 17 males and 5 females, and the median age was 48 years (range: 34-67 years). All surgeries were performed via the DAA by two senior surgeons. The clinical characteristics, early surgical outcomes, and complications were analyzed. We used cementless stems in all cases. The average operating time was (167±23) min; the average blood loss was (775±300) mL; the average blood transfusion volume was (327±341) mL; the average wound drainage was (111±73) mL. Most of the patients could get out of bed by themselves on the first day after operation, 5 patients could walk without crutches on the first postoperative day, and 13 patients could squat on the third day after operation. The patients were discharged, on average, 4 days after operation. We followed up all the patients for an average of 16 months (range: 8-24 months). There was no loosening or failure at the latest follow-up. In the study, 2 patients had greater trochanter fractures, 2 patients had thigh pain, 4 patients had lateral femoral cutaneous nerve palsy, and 3 patients had muscle damage. The Harris scores improved from 29±8 preoperatively to 90±3 postoperatively (P<0.01). DAA THA can achieve faster recovery and a flexible hip joint after operation; however, it is technically demanding, and a relatively high complication rate is associated with the learning curve for bilateral DAA THA. Careful patient selection and skilled technique can help the surgeon avoid early complications.

  16. Accuracy of water displacement hand volumetry using an ethanol and water mixture.

    PubMed

    Hargens, Alan R; Kim, Jong-Moon; Cao, Peihong

    2014-02-01

    The traditional water displacement method for measuring limb volume is improved by adding ethanol to water. Four solutions were tested (pure water, 0.5% ethanol, 3% ethanol, and 6% ethanol) to determine the most accurate method when measuring the volume of a known object. The 3% and 6% ethanol solutions significantly reduced (P < 0.001) the mean standard deviation of 10 measurements of a known sphere (390.1 +/- 0.25 ml) from 2.27 ml with pure water to 0.9 ml using the 3% ethanol solution and to 0.6 ml using the 6% ethanol solution (the mean coefficients of variation were reduced from 0.59% for water to 0.22% for 3% ethanol and 0.16% for 6% ethanol). The sphere's volume measured with pure water, 0.5% ethanol solution, 3% ethanol solution, and 6% ethanol solution was 383.2 +/- 2.27 ml, 384.4 +/- 1.9 ml, 389.4 +/- 0.9 ml, and 390.2 +/- 0.6 ml, respectively. Using the 3% and 6% ethanol solutions to measure hand volume blindly in 10 volunteers significantly reduced the mean coefficient of variation for hand volumetry from 0.91% for water to 0.52% for the 3% ethanol solution (P < 0.05) and to 0.46% for the 6% ethanol solution (P < 0.05). The mean standard deviation for all 10 subjects decreased from 4.2 ml for water to 2.3 ml for the 3% ethanol solution and 2.1 ml for the 6% solution. These findings document that the accuracy and reproducibility of hand volume measurements are improved by small additions of ethanol, most likely by reducing the surface tension of water.
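
The precision measure quoted throughout this record, the coefficient of variation (CV), is simply the standard deviation expressed as a percentage of the mean. A minimal sketch, using hypothetical repeated volume readings rather than the study data:

```python
from statistics import mean, stdev

def coefficient_of_variation(volumes_ml):
    # CV (%) = 100 * SD / mean of repeated measurements; lower CV means
    # more reproducible volumetry.
    return 100.0 * stdev(volumes_ml) / mean(volumes_ml)
```
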

  17. How to Improve Fault Tolerance in Disaster Predictions: A Case Study about Flash Floods Using IoT, ML and Real Data.

    PubMed

    Furquim, Gustavo; Filho, Geraldo P R; Jalali, Roozbeh; Pessin, Gustavo; Pazzi, Richard W; Ueyama, Jó

    2018-03-19

    The rise in the number and intensity of natural disasters is a serious problem that affects the whole world. The consequences of these disasters are significantly worse when they occur in urban districts because of the casualties and extent of the damage to goods and property that is caused. Until now, feasible methods of dealing with this have included the use of wireless sensor networks (WSNs) for data collection and machine-learning (ML) techniques for forecasting natural disasters. However, there have recently been some promising new innovations in technology which have supplemented the task of monitoring the environment and carrying out the forecasting. One of these schemes involves adopting IP-based (Internet Protocol) sensor networks, by using emerging patterns for IoT. In light of this, in this study, an attempt has been made to set out and describe the results achieved by SENDI (System for dEtecting and forecasting Natural Disasters based on IoT). SENDI is a fault-tolerant system based on IoT, ML and WSN for the detection and forecasting of natural disasters and the issuing of alerts. The system was modeled by means of ns-3 and data collected by a real-world WSN installed in the town of São Carlos, Brazil, which carries out the data collection from rivers in the region. The fault-tolerance is embedded in the system by anticipating the risk of communication breakdowns and the destruction of the nodes during disasters. It operates by adding intelligence to the nodes to carry out the data distribution and forecasting, even in extreme situations. A case study is also included for flash flood forecasting and this makes use of the ns-3 SENDI model and data collected by WSN.

  18. Ultrasound-guided thoracentesis: the V-point as a site for optimal drainage positioning.

    PubMed

    Zanforlin, A; Gavelli, G; Oboldi, D; Galletti, S

    2013-01-01

    In recent years the use of lung ultrasound has been increasing in the evaluation of pleural effusions, because it makes follow-up easier and drainage more efficient by providing guidance on the most appropriate sampling site. However, no standardized approach for ultrasound-guided thoracentesis is currently available. To evaluate our usual ultrasonographic landmark as a possible standard site for performing thoracentesis by assessing its value in terms of safety and efficiency (success at the first attempt, drainage as complete as possible), hospitalized patients with non-organized pleural effusion underwent thoracentesis after ultrasound evaluation. The point showing the maximum thickness of the effusion on ultrasound (the "V-point") was chosen for drainage. Forty-five ultrasound-guided thoracenteses were performed in 12 months. In 22 cases there were no complications; there were 16 cases of cough, 2 cases of mild dyspnea without desaturation, and 4 cases of mild pain; 2 complications requiring medical intervention occurred. No case of pneumothorax related to the procedure was detected. In all cases drainage was successful on the first attempt. The collected values of maximum thickness at the V-point (min 3.4 cm - max 15.3 cm) and drained fluid volume (min 70 ml - max 2000 ml) showed a significant correlation (p < 0.0001). When the thickness was greater than or equal to 9.9 cm, the drained volume was always more than 1000 ml. Measuring the maximum thickness at the V-point makes ultrasound-guided thoracentesis highly efficient and allows the amount of fluid in the pleural cavity to be estimated. It is also an easy parameter that makes the proposed method quick to learn and apply.

  19. Automated clinical trial eligibility prescreening: increasing the efficiency of patient identification for clinical trials in the emergency department

    PubMed Central

    Ni, Yizhao; Kennebeck, Stephanie; Dexheimer, Judith W; McAneney, Constance M; Tang, Huaxiu; Lingren, Todd; Li, Qi; Zhai, Haijun; Solti, Imre

    2015-01-01

    Objectives (1) To develop an automated eligibility screening (ES) approach for clinical trials in an urban tertiary care pediatric emergency department (ED); (2) to assess the effectiveness of natural language processing (NLP), information extraction (IE), and machine learning (ML) techniques on real-world clinical data and trials. Data and methods We collected eligibility criteria for 13 randomly selected, disease-specific clinical trials actively enrolling patients between January 1, 2010 and August 31, 2012. In parallel, we retrospectively selected data fields including demographics, laboratory data, and clinical notes from the electronic health record (EHR) to represent profiles of all 202795 patients visiting the ED during the same period. Leveraging NLP, IE, and ML technologies, the automated ES algorithms identified patients whose profiles matched the trial criteria to reduce the pool of candidates for staff screening. The performance was validated on both a physician-generated gold standard of trial–patient matches and a reference standard of historical trial–patient enrollment decisions, where workload, mean average precision (MAP), and recall were assessed. Results Compared with the case without automation, the workload with automated ES was reduced by 92% on the gold standard set, with a MAP of 62.9%. The automated ES achieved a 450% increase in trial screening efficiency. The findings on the gold standard set were confirmed by large-scale evaluation on the reference set of trial–patient matches. Discussion and conclusion By exploiting the text of trial criteria and the content of EHRs, we demonstrated that NLP-, IE-, and ML-based automated ES could successfully identify patients for clinical trials. PMID:25030032
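
The mean average precision (MAP) metric used above to validate the eligibility-screening algorithms can be sketched in a few lines: for each trial, average the precision at every rank where a truly eligible patient appears, then average across trials. The ranked lists and relevant sets below are hypothetical, not drawn from the study.

```python
def average_precision(ranked_candidates, relevant):
    # Precision is accumulated at each rank where a relevant candidate
    # (e.g. a truly eligible patient) appears, then averaged.
    hits, score = 0, 0.0
    for rank, candidate in enumerate(ranked_candidates, start=1):
        if candidate in relevant:
            hits += 1
            score += hits / rank
    return score / len(relevant) if relevant else 0.0

def mean_average_precision(per_trial):
    # per_trial: list of (ranked_candidates, relevant_set) pairs, one per trial
    return sum(average_precision(r, rel) for r, rel in per_trial) / len(per_trial)
```
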

  20. How to Improve Fault Tolerance in Disaster Predictions: A Case Study about Flash Floods Using IoT, ML and Real Data

    PubMed Central

    Furquim, Gustavo; Filho, Geraldo P. R.; Pessin, Gustavo; Pazzi, Richard W.

    2018-01-01

    The rise in the number and intensity of natural disasters is a serious problem that affects the whole world. The consequences of these disasters are significantly worse when they occur in urban districts because of the casualties and extent of the damage to goods and property that is caused. Until now, feasible methods of dealing with this have included the use of wireless sensor networks (WSNs) for data collection and machine-learning (ML) techniques for forecasting natural disasters. However, there have recently been some promising new innovations in technology which have supplemented the task of monitoring the environment and carrying out the forecasting. One of these schemes involves adopting IP-based (Internet Protocol) sensor networks, by using emerging patterns for IoT. In light of this, in this study, an attempt has been made to set out and describe the results achieved by SENDI (System for dEtecting and forecasting Natural Disasters based on IoT). SENDI is a fault-tolerant system based on IoT, ML and WSN for the detection and forecasting of natural disasters and the issuing of alerts. The system was modeled by means of ns-3 and data collected by a real-world WSN installed in the town of São Carlos, Brazil, which carries out the data collection from rivers in the region. The fault-tolerance is embedded in the system by anticipating the risk of communication breakdowns and the destruction of the nodes during disasters. It operates by adding intelligence to the nodes to carry out the data distribution and forecasting, even in extreme situations. A case study is also included for flash flood forecasting and this makes use of the ns-3 SENDI model and data collected by WSN. PMID:29562657

  1. Foot volume estimates based on a geometric algorithm in comparison to water displacement.

    PubMed

    Mayrovitz, H N; Sims, N; Litwin, B; Pfister, S

    2005-03-01

    Assessing lower extremity limb volume and its change during and after lymphedema therapy is important for determining treatment efficacy and documenting outcomes. Although leg volumes may be determined by tape measure and other methods, there is no metric method to routinely assess foot volumes. Exclusion of foot volumes can under- or overestimate therapeutic progress. Our aim was to develop and test a metric measurement procedure and algorithm for practicing therapists to use to estimate foot volumes. The method uses a caliper and ruler to measure foot dimensions at standardized locations and calculates foot volume (VM) by a mathematical algorithm. VM was compared to volumes measured by water displacement (Vw) in 30 subjects (60 feet) using regression analysis and limits of agreement (LOA). Vw and VM (mean +/- sd) were similar, 857 +/- 150 ml vs. 859 +/- 154 ml, and were highly correlated: VM = 1.00Vw + 1.67 ml, r = 0.965, p < 0.001. The LOA for absolute volume differences and percentages were +/- 79.6 ml and +/- 9.28%, respectively. These results indicate that this metric method can be a useful alternative when foot volumes are needed but the water displacement method is contraindicated, impractical to implement, too time consuming, or not available.

  2. A validated stability indicating RP-HPLC method for estimation of Armodafinil in pharmaceutical dosage forms and characterization of its base hydrolytic product.

    PubMed

    Venkateswarlu, Kambham; Rangareddy, Ardhgeri; Narasimhaiah, Kanaka; Sharma, Hemraj; Bandi, Naga Mallikarjuna Raja

    2017-01-01

    The main objective of the present study was to develop an RP-HPLC method for the estimation of Armodafinil in pharmaceutical dosage forms and the characterization of its base hydrolytic product. The separation was carried out on a C18 column using a mobile phase of water and methanol (45:55, v/v). Eluents were detected at 220 nm at a flow rate of 1 ml/min. Stress studies were performed with milder conditions followed by stronger conditions so as to achieve sufficient degradation, around 20%. A total of five degradation products were detected and separated from the analyte. The linearity of the proposed method was investigated in the range of 20-120 µg/ml for Armodafinil. The detection limit and quantification limit were found to be 0.01183 μg/ml and 0.035 µg/ml, respectively. The precision (% RSD) was found to be less than 2%, and the recovery was between 98-102%. Armodafinil was found to be more sensitive to base hydrolysis and yielded its carboxylic acid as a degradant. The developed method is a stability-indicating assay, suitable for quantifying Armodafinil in the presence of possible degradants. The drug was sensitive to acid, base, and photolytic stress, and resistant to thermal and oxidative stress.

  3. METHOD 200.5 - DETERMINATION OF TRACE ELEMENTS IN DRINKING WATER BY AXIALLY VIEWED INDUCTIVELY COUPLED PLASMA-ATOMIC EMISSION SPECTROMETRY

    EPA Science Inventory

    2.0 SUMMARY OF METHOD
    2.1. A 50 mL aliquot of a well-mixed, non-filtered, acid preserved aqueous sample is accurately transferred to a clean 50-mL plastic disposable digestion tube containing a mixture of nitric and hydrochloric acids. The aliquot is heated to 95 degrees C (+ o...

  4. Accuracy and reproducibility of a new contrast clearance method for the determination of glomerular filtration rate.

    PubMed Central

    O'Reilly, P H; Brooman, P J; Martin, P J; Pollard, A J; Farah, N B; Mason, G C

    1986-01-01

    A new method for determining the glomerular filtration rate was analysed prospectively. The method uses an x-ray fluorescence technique to measure the disappearance from plasma of injected non-ionic iodinated contrast media. Eighty-seven patients were studied. Fifty-four had an intravenous dose of 100 ml iohexol (Omnipaque) and 33 had 50 ml iohexol. Clearances of chromium-51 labelled edetic acid (51Cr-EDTA) were measured simultaneously. In the patients given 100 ml iohexol there was excellent correlation with 51Cr-EDTA clearance (r = 0.90). The correlation using 50 ml iohexol was also good (r = 0.85). The correlation between creatinine clearance and clearance of 51Cr-EDTA in 33 patients was less satisfactory (r = 0.69). There were no adverse reactions to the contrast media. The equipment used for measuring contrast clearance was robust and simple to operate. Freezing plasma samples in 10 studies and re-examining them weekly for six weeks showed no significant variation in results; hence reproducibility was good. This new and accurate method for determining the glomerular filtration rate merits further study and might find a useful place in routine clinical practice. PMID:3089467

  5. Utility of Experimental Design in Pre-Column Derivatization for the Analysis of Tobramycin by HPLC-Fluorescence Detection: Application to Ophthalmic Solution and Human Plasma.

    PubMed

    El-Zaher, Asmaa A; Mahrouse, Marianne A

    2013-01-01

    A novel, selective, and sensitive reversed phase high-performance liquid chromatography (HPLC) method coupled with fluorescence detection has been developed for the determination of tobramycin (TOB) in pure form, in ophthalmic solution and in spiked human plasma. Since TOB lacks UV absorbing chromophores and native fluorescence, pre-column derivatization of TOB was carried out using fluorescamine reagent (0.01%, 1.5 mL) and borate buffer (pH 8.5, 2 mL). Experimental design was applied for optimization of the derivatization step. The resulting highly fluorescent stable derivative was chromatographed on C18 column and eluted using methanol:water (60:40, v/v) at a flow rate of 1 mL min(-1). A fluorescence detector (λex 390 and λem 480 nm) was used. The method was linear over the concentration range 20-200 ng mL(-1). The structure of the fluorescent product was proposed, the method was then validated and applied for the determination of TOB in human plasma. The results were statistically compared with the reference method, revealing no significant difference.

  6. Rapid detection of Salmonella in milk by electrochemical magneto-immunosensing.

    PubMed

    Liébana, Susana; Lermo, Anabel; Campoy, Susana; Cortés, María Pilar; Alegret, Salvador; Pividori, María Isabel

    2009-10-15

    A very simple and rapid method for the detection of Salmonella in milk is reported. In this approach, the bacteria are captured and preconcentrated from milk samples with magnetic beads through an immunological reaction. A second polyclonal antibody labeled with peroxidase is used as serological confirmation, with electrochemical detection based on a magneto-electrode. The 'IMS/m-GEC electrochemical immunosensing' approach shows a limit of detection of 5 x 10(3) and 7.5 x 10(3) CFU mL(-1) in LB and in milk diluted 1/10 in LB broth, respectively, in 50 min without any pretreatment. If the skimmed milk is preenriched for 6 h, the method is able to detect as low as 1.4 CFU mL(-1), while if it is preenriched for 8 h, as low as 0.108 CFU mL(-1) (2.7 CFU in 25 g of milk, in 5 samples of 5 mL) is detected, in accordance with the legislation. Moreover, the method is able to clearly distinguish between food pathogenic bacteria such as Salmonella and Escherichia coli. The features of this approach are discussed and compared with classical culture methods.

  7. Assessing the cost effectiveness of robotics in urological surgery - a systematic review.

    PubMed

    Ahmed, Kamran; Ibrahim, Amel; Wang, Tim T; Khan, Nuzhath; Challacombe, Ben; Khan, Muhammed Shamim; Dasgupta, Prokar

    2012-11-01

    Although robotic technology is becoming increasingly popular for urological procedures, barriers to its widespread dissemination include cost and the lack of long term outcomes. This systematic review analyzed studies comparing the use of robotic with laparoscopic and open urological surgery. These three procedures were assessed for cost efficiency in the form of direct as well as indirect costs that could arise from length of surgery, hospital stay, complications, learning curve and postoperative outcomes. A systematic review was performed searching Medline, Embase and Web of Science databases. Two reviewers identified abstracts using online databases and independently reviewed full length papers suitable for inclusion in the study. Laparoscopic and robot assisted radical prostatectomy are superior with respect to reduced hospital stay (range 1-1.76 days and 1-5.5 days, respectively) and blood loss (range 482-780 mL and 227-234 mL, respectively) when compared with the open approach (range 2-8 days and 1015 mL). Robot assisted radical prostatectomy remains more expensive (total cost ranging from US $2000-$39,215) than both laparoscopic (range US $740-$29,771) and open radical prostatectomy (range US $1870-$31,518). This difference is due to the cost of robot purchase, maintenance and instruments. The reduced length of stay in hospital (range 1-1.5 days) and length of surgery (range 102-360 min) are unable to compensate for the excess costs. Robotic surgery may require a smaller learning curve (20-40 cases) although the evidence is inconclusive. Robotic surgery provides similar postoperative outcomes to laparoscopic surgery but a reduced learning curve. Although costs are currently high, increased competition from manufacturers and wider dissemination of the technology could drive down costs. Further trials are needed to evaluate long term outcomes in order to evaluate fully the value of all three procedures in urological surgery. © 2012 BJU INTERNATIONAL.

  8. Using Machine Learning for Advanced Anomaly Detection and Classification

    NASA Astrophysics Data System (ADS)

    Lane, B.; Poole, M.; Camp, M.; Murray-Krezan, J.

    2016-09-01

    Machine Learning (ML) techniques have successfully been used in a wide variety of applications to automatically detect and potentially classify changes in activity, or a series of activities, by utilizing large amounts of data, sometimes even seemingly unrelated data. The amount of data being collected, processed, and stored in the Space Situational Awareness (SSA) domain has grown at an exponential rate and is now better suited for ML. This paper describes the development of advanced algorithms to deliver significant improvements in the characterization of deep space objects and indication and warning (I&W) using a global network of telescopes that are collecting photometric data on a multitude of space-based objects. The Phase II Air Force Research Laboratory (AFRL) Small Business Innovation Research (SBIR) project, Autonomous Characterization Algorithms for Change Detection and Characterization (ACDC), contracted to ExoAnalytic Solutions Inc., is providing the ability to detect and identify photometric signature changes due to potential space object changes (e.g. stability, tumble rate, aspect ratio), and to correlate observed changes with potential behavioral changes using a variety of techniques, including supervised learning. Furthermore, these algorithms run in real-time on data being collected and processed by the ExoAnalytic Space Operations Center (EspOC), providing timely alerts and warnings while dynamically creating collection requirements for the EspOC for the algorithms that generate higher fidelity I&W. This paper discusses the recently implemented ACDC algorithms, including the general design approach and results to date. The usage of supervised algorithms, such as Support Vector Machines, Neural Networks, and k-Nearest Neighbors, and unsupervised algorithms, for example k-means, Principal Component Analysis, and Hierarchical Clustering, and the implementations of these algorithms are explored.
Results of applying these algorithms to EspOC data both in an off-line "pattern of life" analysis as well as using the algorithms on-line in real-time, meaning as data is collected, will be presented. Finally, future work in applying ML for SSA will be discussed.
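The abstract does not disclose the ACDC implementation, but the unsupervised side can be illustrated with a minimal sketch. Everything below is assumed for illustration only: the two photometric features per track (mean brightness and brightness variance), the cluster count, and the deterministic initialisation are invented, not taken from the paper.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal k-means with a deterministic, spread-out initialisation."""
    centroids = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        # Assign each sample to its nearest centroid.
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # Move each centroid to the mean of its assigned samples.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Hypothetical per-track features: [mean brightness, brightness variance].
rng = np.random.default_rng(1)
stable = rng.normal([10.0, 0.1], 0.05, size=(20, 2))    # steady signature
tumbling = rng.normal([10.0, 2.0], 0.05, size=(20, 2))  # flashing signature
X = np.vstack([stable, tumbling])
_, labels = kmeans(X, k=2)
```

A real system would use the algorithms named in the abstract (SVMs, neural networks, PCA, hierarchical clustering) on far richer photometric features; this sketch only shows the clustering mechanic.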

  9. Fast liquid chromatographic-tandem mass spectrometric method using mixed-mode phase chromatography and solid phase extraction for the determination of 12 mono-hydroxylated brominated diphenyl ethers in human serum.

    PubMed

    Petropoulou, Syrago-Styliani E; Duong, Wendy; Petreas, Myrto; Park, June-Soo

    2014-08-22

Hydroxylated polybrominated diphenyl ethers (OH-PBDEs) are formed from the oxidative metabolism of polybrominated diphenyl ethers (PBDEs) in humans, rats and mice, but their quantitation in human blood and other matrices with liquid chromatography-mass spectrometric techniques has been a challenge. In this study, a novel analytical method was developed and validated using only 250 μL of human serum for the quantitation of twelve OH-PBDEs, fully chromatographically separated in a 15 min analytical run. This method includes two novel approaches: an enzymatic hydrolysis procedure and a chromatographic separation using a mixed-mode chromatography column. The enzymatic hydrolysis (EH) was found critical for 4'-OH-BDE17, which was not detectable without it. For the sample clean-up, a solid phase extraction protocol was developed and validated for the extraction of the 12 congeners from human serum. In addition, for the first time, baseline resolution was achieved for two components that correspond to a single peak previously identified as 6'-OH-BDE99. The method was validated for linearity, accuracy, precision, matrix effects, limit of quantification, limit of detection, sample stability and overall efficiency. Recoveries (absolute and relative) ranged from 66 to 130% with relative standard deviations <21% for all analytes. Limits of detection and quantitation ranged from 4 to 90 pg mL(-1) and 6 to 120 pg mL(-1), respectively, with no carry-over effects. This method was applied to ten commercially available human serum samples from the general US population. The mean values of the congeners detected in all samples were 4'-OH-BDE17 (34.2 pg mL(-1)), 4-OH-BDE42 (33.9 pg mL(-1)), 5-OH-BDE47 (17.5 pg mL(-1)) and 4'-OH-BDE49 (12.4 pg mL(-1)). Copyright © 2014 Elsevier B.V. All rights reserved.

  10. PepArML: A Meta-Search Peptide Identification Platform

    PubMed Central

    Edwards, Nathan J.

    2014-01-01

    The PepArML meta-search peptide identification platform provides a unified search interface to seven search engines; a robust cluster, grid, and cloud computing scheduler for large-scale searches; and an unsupervised, model-free, machine-learning-based result combiner, which selects the best peptide identification for each spectrum, estimates false-discovery rates, and outputs pepXML format identifications. The meta-search platform supports Mascot; Tandem with native, k-score, and s-score scoring; OMSSA; MyriMatch; and InsPecT with MS-GF spectral probability scores — reformatting spectral data and constructing search configurations for each search engine on the fly. The combiner selects the best peptide identification for each spectrum based on search engine results and features that model enzymatic digestion, retention time, precursor isotope clusters, mass accuracy, and proteotypic peptide properties, requiring no prior knowledge of feature utility or weighting. The PepArML meta-search peptide identification platform often identifies 2–3 times more spectra than individual search engines at 10% FDR. PMID:25663956

  11. Multi-parameter comparison of injection laryngoplasty, medialization laryngoplasty, and arytenoid adduction in an excised larynx model

    PubMed Central

    Hoffman, Matthew R.; Witt, Rachel E.; Chapin, William J.; McCulloch, Timothy M.; Jiang, Jack J.

    2010-01-01

    Objective Evaluate the effect of injection laryngoplasty (IL), medialization laryngoplasty (ML), and ML combined with arytenoid adduction (ML-AA) on acoustic, aerodynamic, and mucosal wave measurements in an excised larynx setup. Methods Measurements were recorded for eight excised canine larynges with simulated unilateral vocal fold paralysis (UVFP) before and after vocal fold injection with Cymetra. A second set of eight larynges was used to evaluate medialization laryngoplasty using a Silastic implant without and with arytenoid adduction. Results IL and ML led to comparable decreases in phonation threshold flow (PTF), phonation threshold pressure (PTP), and phonation threshold power (PTW). ML-AA led to significant decreases in PTF (p=0.008), PTP (p=0.008), and PTW (p=0.008). IL and ML led to approximately equal decreases in percent jitter and percent shimmer. ML-AA caused the greatest increase in signal to noise ratio (SNR). ML-AA discernibly decreased frequency (p=0.059); a clear trend was not observed for IL or ML. IL significantly reduced mucosal wave amplitude (p=0.002), while both ML and ML-AA increased it. All procedures significantly decreased glottal gap, with the most dramatic effects observed after ML-AA (p=0.004). Conclusions ML-AA led to the greatest improvements in phonatory parameters. IL was comparable to ML aerodynamically and acoustically, but caused detrimental changes to the mucosal wave. Incremental improvements in parameters recorded from the same larynx were observed after ML and ML-AA. To ensure optimal acoustic outcome, the arytenoid must be correctly rotated. This study provides objective support for the combined ML-AA procedure in tolerant patients. Evidence based medicine level Not applicable – animal study. PMID:20213797

  12. An Investigation on the Influence of Hyaluronic Acid on Polidocanol Foam Stability.

    PubMed

    Chen, An-Wei; Liu, Yi-Ran; Li, Kai; Liu, Shao-Hua

    2016-01-01

    Foam sclerotherapy is an effective treatment strategy for varicose veins and venous malformations. Foam stability varies according to foam composition, volume, and injection technique. To evaluate the stability of polidocanol (POL) foam with the addition of hyaluronic acid (HA). Group A: 2 mL of 1% POL + 0 mL of 1% HA + 8 mL of air; Group B: 2 mL of 1% POL + 0.05 mL of 1% HA + 8 mL of air; Group C: 2 mL of 1% POL + 0.1 mL of 1% HA + 8 mL of air. Tessari's method was used for foam generation. The half-life, or the time for a volume of foam to be reduced to half of its original volume, was used to evaluate foam stability. Five recordings were made for each group. The half-life was 142.8 (±4.32) seconds for 1% POL without the addition of HA, 310.6 (±7.53) seconds with the addition of 0.05 mL of 1% HA, and 390.4 (±13.06) seconds with the addition of 0.1 mL of 1% HA. The stability of POL foam was highly increased by the addition of small amounts of HA.
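The half-life measure described above lends itself to a simple computation. The sketch below interpolates linearly between timed volume readings to find when the foam first reaches half its initial volume; the example times and volumes are invented, not the study's data.

```python
def half_life(times_s, volumes_ml):
    """Return the time at which foam volume first falls to half of its
    initial value, interpolating linearly between successive readings."""
    target = volumes_ml[0] / 2.0
    for i in range(1, len(times_s)):
        v0, v1 = volumes_ml[i - 1], volumes_ml[i]
        if v0 >= target >= v1:
            if v0 == v1:  # flat segment exactly at the target volume
                return times_s[i - 1]
            t0, t1 = times_s[i - 1], times_s[i]
            # Linear interpolation inside the bracketing interval.
            return t0 + (v0 - target) * (t1 - t0) / (v0 - v1)
    raise ValueError("volume never fell to half of its initial value")

# Invented readings for a 10 ml foam column (time in s, volume in ml):
print(half_life([0, 60, 120, 180], [10.0, 8.0, 6.0, 4.0]))  # → 150.0
```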

  13. Development and validation of chemometrics-assisted spectrophotometric and liquid chromatographic methods for the simultaneous determination of two multicomponent mixtures containing bronchodilator drugs.

    PubMed

    El-Gindy, Alaa; Emara, Samy; Shaaban, Heba

    2007-02-19

Three methods were developed for the determination of two multicomponent mixtures containing guaiphenesine (GU) with salbutamol sulfate (SL), methylparaben (MP) and propylparaben (PP) [mixture 1]; and acephylline piperazine (AC) with bromhexine hydrochloride (BX), methylparaben (MP) and propylparaben (PP) [mixture 2]. The resolution of the two multicomponent mixtures was accomplished by using numerical spectrophotometric methods such as partial least squares (PLS-1) and principal component regression (PCR) applied to the UV absorption spectra of the two mixtures. In addition, an HPLC method was developed using an RP-18 column at ambient temperature with a mobile phase consisting of acetonitrile-0.05 M potassium dihydrogen phosphate, pH 4.3 (60:40, v/v), with UV detection at 243 nm for mixture 1, and a mobile phase consisting of acetonitrile-0.05 M potassium dihydrogen phosphate, pH 3 (50:50, v/v), with UV detection at 245 nm for mixture 2. The methods were validated in terms of accuracy, specificity, precision and linearity in the ranges of 20-60 microg ml(-1) for GU, 1-3 microg ml(-1) for SL, 20-80 microg ml(-1) for AC, 0.2-1.8 microg ml(-1) for PP and 1-5 microg ml(-1) for BX and MP. The proposed methods were successfully applied to the determination of the two multicomponent combinations in laboratory-prepared mixtures and commercial syrups.

  14. Sensitive method for the quantitative determination of bromocriptine in human plasma by liquid chromatography-tandem mass spectrometry.

    PubMed

    Salvador, Arnaud; Dubreuil, Didier; Denouel, Jannick; Millerioux, L

    2005-06-25

A sensitive LC-MS-MS assay for the quantitative determination of bromocriptine has been developed and validated and is described in this work. The assay involved extraction of the analyte from 1 ml of human plasma using solid phase extraction on Oasis MCX cartridges. Chromatography was performed on a Symmetry C18 (2.1 mm x 100 mm, 3.5 microm) column using a mobile phase consisting of acetonitrile-water-formic acid (25:75:0.1, v/v/v) at a flow rate of 250 microl/min. The linearity was within the concentration range of 2-500 pg/ml. The lower limit of quantification was 2 pg/ml. This method has been demonstrated to be an improvement over existing methods due to its greater sensitivity and specificity.

  15. LC-ESI-MS/MS on an ion trap for the determination of LSD, iso-LSD, nor-LSD and 2-oxo-3-hydroxy-LSD in blood, urine and vitreous humor.

    PubMed

    Favretto, Donata; Frison, Giampietro; Maietti, Sergio; Ferrara, Santo Davide

    2007-07-01

A method has been developed for the simultaneous determination of lysergic acid diethylamide (LSD), its epimer iso-LSD, and its main metabolites nor-LSD and 2-oxo-3-hydroxy-LSD in blood, urine, and, for the first time, vitreous humor samples. The method is based on liquid/liquid extraction and liquid chromatography-multiple mass spectrometry detection in an ion trap mass spectrometer under positive ion electrospray ionization conditions. Five microliters of sample are injected and the analysis time is 12 min. The method is specific, selective and sensitive, and achieves limits of quantification of 20 pg/ml for both LSD and nor-LSD in blood, urine, and vitreous humor. No significant interfering substance or ion suppression was identified for LSD, iso-LSD, and nor-LSD. The interassay reproducibilities for LSD at 20 pg/ml and 2 ng/ml in urine were 8.3 and 5.6%, respectively. Within-run precision using control samples at 20 pg/ml and 2 ng/ml was 6.9 and 3.9%, respectively. Mean recoveries at two concentrations spiked into drug-free samples were in the range 60-107% in blood, 50-105% in urine, and 65-105% in vitreous humor. The method was successfully applied to the forensic determination of postmortem LSD levels in the biological fluids of a multi-drug abuser; for the first time, LSD could be detected in vitreous humor.

  16. [Activity of doripenem against anaerobic bacteria].

    PubMed

    Dubreuil, L; Neut, C; Mahieux, S; Muller-Serieys, C; Jean-Pierre, H; Marchandin, H; Soussy, C J; Miara, A

    2011-04-01

This study examines the activity of doripenem, a new carbapenem compound, compared with amoxicillin-clavulanic acid, piperacillin+tazobactam, imipenem, clindamycin and metronidazole against 316 anaerobes. Inoculum preparation and the agar dilution method were performed according to the CLSI method for anaerobes (M11A7). At a concentration of 4 μg/ml, doripenem and imipenem (IMP) inhibited 122 (96%) and 126 (99%) strains of the Bacteroides fragilis group, respectively. In contrast, doripenem appeared more potent than IMP against Gram-positive anaerobes, inhibiting 145/145 strains (100%) at the same concentration of 4 μg/ml versus 115/145 (79.3%) for IMP. Against the 316 anaerobic strains, doripenem had an MIC(50) of 0.25 μg/ml and an MIC(90) of 2 μg/ml. Results were similar to those for imipenem (MIC(50) of 0.125 μg/ml and MIC(90) of 4 μg/ml). If we consider the resistance breakpoints of the two carbapenems as defined by EUCAST, the resistance rate for doripenem (MIC >4 μg/ml), 1.6%, is similar to that for imipenem (MIC >8 μg/ml), 1.3%. Thus, independently of the PK/PD parameters, the two carbapenems demonstrated very close activity; doripenem was more potent against Gram-positive anaerobes and slightly less potent against Gram-negative anaerobes, mainly the B. fragilis group. Further clinical studies are needed to assess its usefulness in patients. Copyright © 2010 Elsevier Masson SAS. All rights reserved.

  17. Effect-site concentration of remifentanil required to blunt haemodynamic responses during tracheal intubation: A randomized comparison between single- and double-lumen tubes.

    PubMed

    Kim, Tae Kyong; Hong, Deok Man; Lee, Seo Hee; Paik, Hyesun; Min, Se Hee; Seo, Jeong-Hwa; Jung, Chul-Woo; Bahk, Jae-Hyon

    2018-01-01

Objective To investigate the effect-site concentration of remifentanil required to blunt haemodynamic responses during tracheal intubation with a single-lumen tube (SLT) or a double-lumen tube (DLT). Methods Patients scheduled for thoracic surgery requiring one-lung ventilation were randomly allocated to either the SLT or the DLT group. All patients received a target-controlled infusion of propofol and a predetermined concentration of remifentanil. Haemodynamic parameters during intubation were recorded. The effect-site concentration of remifentanil was determined using a delayed up-and-down sequential allocation method. Results A total of 92 patients were enrolled in the study. The effective effect-site concentration of remifentanil required to blunt haemodynamic responses in 50% of patients (EC50), estimated by isotonic regression with bootstrapping, was higher in the DLT group than in the SLT group (8.5 ng/ml [95% confidence interval (CI) 8.0-9.5 ng/ml] versus 6.5 ng/ml [95% CI 5.6-6.7 ng/ml], respectively). Similarly, the effective effect-site concentration of remifentanil in 95% of patients was higher in the DLT group than in the SLT group (9.9 ng/ml [95% CI 9.8-10.0 ng/ml] versus 7.0 ng/ml [95% CI 6.9-7.0 ng/ml], respectively). Conclusions This study demonstrated that a DLT requires a 30% higher EC50 of remifentanil than does an SLT to blunt haemodynamic responses during tracheal intubation when combined with a target-controlled infusion of propofol. Trial registration Clinicaltrials.gov identifier: NCT01542099.

  18. Machine learning assisted first-principles calculation of multicomponent solid solutions: estimation of interface energy in Ni-based superalloys

    NASA Astrophysics Data System (ADS)

    Chandran, Mahesh; Lee, S. C.; Shim, Jae-Hyeok

    2018-02-01

A disordered configuration of atoms in a multicomponent solid solution presents a computational challenge for first-principles calculations using density functional theory (DFT). The challenge is in identifying the few probable (low-energy) configurations from a large configurational space before DFT calculation can be performed. The search for these probable configurations is possible if the configurational energy E(σ) can be calculated accurately and rapidly (with a negligibly small computational cost). In this paper, we demonstrate such a possibility by constructing a machine learning (ML) model for E(σ) trained with DFT-calculated energies. The feature vector for the ML model is formed by concatenating histograms of pair and triplet (only equilateral triangle) correlation functions, g(2)(r) and g(3)(r,r,r), respectively. These functions are a quantitative 'fingerprint' of the spatial arrangement of atoms, familiar in the field of amorphous materials and liquids. The ML model is used to generate an accurate distribution P(E(σ)) by rapidly spanning a large number of configurations. The P(E) contains full configurational information of the solid solution and can be selectively sampled to choose a few configurations for targeted DFT calculations. This new framework is employed to estimate the (100) interface energy σ_IE between γ and γ′ at 700 °C in Alloy 617, a Ni-based superalloy, with composition reduced to five components. The estimated σ_IE ≈ 25.95 mJ m⁻² is in good agreement with the value inferred by the precipitation model fit to experimental data. The proposed new ML-based ab initio framework can be applied to calculate the parameters and properties of alloys with any number of components, thus widening the reach of first-principles calculation to realistic compositions of industrially relevant materials and alloys.
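The idea of a cheap surrogate for E(σ) can be sketched in a few lines. This is a toy analogue, not the authors' code: the descriptor here is a single pair-distance histogram rather than the paper's g(2)/g(3) pair, the "energy" is an invented smooth function of mean pair distance, and the model is plain least squares.

```python
import numpy as np

def pair_histogram(positions, bins=8, rmax=6.0):
    """Histogram of all pairwise distances: a crude stand-in for the
    paper's correlation-function 'fingerprint' of a configuration."""
    n = len(positions)
    d = [np.linalg.norm(positions[i] - positions[j])
         for i in range(n) for j in range(i + 1, n)]
    h, _ = np.histogram(d, bins=bins, range=(0.0, rmax))
    return h / len(d)  # normalise so the feature scale is size-independent

# Toy data: random 10-atom configurations with an invented 'energy'.
rng = np.random.default_rng(0)
configs = [rng.uniform(0.0, 3.0, size=(10, 3)) for _ in range(60)]
X = np.array([pair_histogram(c) for c in configs])
y = np.array([np.mean([np.linalg.norm(a - b) for a in c for b in c])
              for c in configs])

# Linear surrogate E(sigma) ≈ w·features + b, fitted by least squares.
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)
rmse = float(np.sqrt(np.mean((A @ w - y) ** 2)))
# The surrogate should at least match a mean-only predictor on its training set.
assert rmse <= float(np.std(y))
```

Once fitted, such a surrogate can be evaluated over many configurations at negligible cost to build a distribution P(E), which is the role the ML model plays in the framework above.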

  19. Improvement of the energy conversion efficiency of Chlorella pyrenoidosa biomass by a three-stage process comprising dark fermentation, photofermentation, and methanogenesis.

    PubMed

    Xia, Ao; Cheng, Jun; Ding, Lingkan; Lin, Richen; Huang, Rui; Zhou, Junhu; Cen, Kefa

    2013-10-01

    The effects of pre-treatment methods on saccharification and hydrogen fermentation of Chlorella pyrenoidosa biomass were investigated. When raw biomass and biomass pre-treated by steam heating, by microwave heating, and by ultrasonication were used as feedstock, the hydrogen yields were only 8.8-12.7 ml/g total volatile solids (TVS) during dark fermentation. When biomass was pre-treated by steam heating with diluted acid and by microwave heating with diluted acid, the dark hydrogen yields significantly increased to 75.6 ml/g TVS and 83.3 ml/g TVS, respectively. Steam heating with diluted acid is the preferred pre-treatment method of C. pyrenoidosa biomass to improve hydrogen yield during dark fermentation and photofermentation, which is followed by methanogenesis to increase energy conversion efficiency (ECE). A total hydrogen yield of 198.3 ml/g TVS and a methane yield of 186.2 ml/g TVS corresponding to an overall ECE of 34.0% were obtained through the three-stage process (dark fermentation, photofermentation, and methanogenesis). Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Comparison between amperometric and true potentiometric end-point detection in the determination of water by the Karl Fischer method.

    PubMed

    Cedergren, A

    1974-06-01

    A rapid and sensitive method using true potentiometric end-point detection has been developed and compared with the conventional amperometric method for Karl Fischer determination of water. The effect of the sulphur dioxide concentration on the shape of the titration curve is shown. By using kinetic data it was possible to calculate the course of titrations and make comparisons with those found experimentally. The results prove that the main reaction is the slow step, both in the amperometric and the potentiometric method. Results obtained in the standardization of the Karl Fischer reagent showed that the potentiometric method, including titration to a preselected potential, gave a standard deviation of 0.001(1) mg of water per ml, the amperometric method using extrapolation 0.002(4) mg of water per ml and the amperometric titration to a pre-selected diffusion current 0.004(7) mg of water per ml. Theories and results dealing with dilution effects are presented. The time of analysis was 1-1.5 min for the potentiometric and 4-5 min for the amperometric method using extrapolation.

  1. Optimization and validation of FePro cell labeling method.

    PubMed

    Janic, Branislava; Rad, Ali M; Jordan, Elaine K; Iskander, A S M; Ali, Md M; Varma, N Ravi S; Frank, Joseph A; Arbab, Ali S

    2009-06-11

The current method to magnetically label cells using ferumoxides (Fe)-protamine (Pro) sulfate (FePro) is based on generating FePro complexes in serum-free media that are then incubated overnight with cells for efficient labeling. However, this labeling technique requires a long (>12-16 hour) incubation time and uses a relatively high dose of Pro (5-6 microg/ml), which forms large extracellular FePro complexes. These complexes can be difficult to remove with simple cell washes and may create undesirable low signal intensity on T2*-weighted MRI. The purpose of this study was to revise the current labeling method by using a low dose of Pro and adding Fe and Pro directly to the cells before generating any FePro complexes. Human glioma (U251) and human monocytic leukemia (THP-1) cell lines were used as model systems for attached and suspension cell types, respectively, and dose-dependent (Fe 25 to 100 microg/ml and Pro 0.75 to 3 microg/ml) and time-dependent (2 to 48 h) labeling experiments were performed. Labeling efficiency and cell viability of these cells were assessed. Prussian blue staining revealed that more than 95% of cells were labeled. Intracellular iron concentration in U251 cells reached approximately 30-35 pg-iron/cell at 24 h when labeled with 100 microg/ml of Fe and 3 microg/ml of Pro; however, comparable labeling was observed after 4 h across the described FePro concentrations. Similarly, THP-1 cells achieved approximately 10 pg-iron/cell at 48 h when labeled with 100 microg/ml of Fe and 3 microg/ml of Pro; again, comparable labeling was observed after 4 h for the described FePro concentrations. FePro labeling did not significantly affect cell viability, and almost no extracellular FePro complexes were observed after simple cell washes.
To validate and to determine the effectiveness of the revised technique, human T-cells, human hematopoietic stem cells (hHSC), human bone marrow stromal cells (hMSC) and mouse neuronal stem cells (mNSC C17.2) were labeled. Labeling for 4 hours using 100 microg/ml of Fe and 3 microg/ml of Pro resulted in very efficient labeling of these cells, without impairing their viability and functional capability. The new technique with short incubation time using 100 microg/ml of Fe and 3 microg/ml of Pro is effective in labeling cells for cellular MRI.

  2. Testing Group Mean Differences of Latent Variables in Multilevel Data Using Multiple-Group Multilevel CFA and Multilevel MIMIC Modeling.

    PubMed

    Kim, Eun Sook; Cao, Chunhua

    2015-01-01

Considering that group comparisons are common in social science, we examined two latent group mean testing methods when the groups of interest were at either the between or the within level of multilevel data: multiple-group multilevel confirmatory factor analysis (MG ML CFA) and multilevel multiple-indicators multiple-causes modeling (ML MIMIC). The performance of these methods was investigated through three Monte Carlo studies. In Studies 1 and 2, either factor variances or residual variances were manipulated to be heterogeneous between groups. In Study 3, which focused on within-level multiple-group analysis, six different model specifications were considered depending on how the intra-class group correlation (i.e., the correlation between random effect factors for groups within a cluster) was modeled. The results of the simulations generally supported the adequacy of MG ML CFA and ML MIMIC for multiple-group analysis with multilevel data. The two methods did not show any notable difference in latent group mean testing across the three studies. Finally, a demonstration with real data and guidelines for selecting an appropriate approach to multilevel multiple-group analysis are provided.

  3. Spectrofluorimetric and spectrophotometric stability-indicating methods for determination of some oxicams using 7-chloro-4-nitrobenz-2-oxa-1,3-diazole (NBD-Cl).

    PubMed

    Taha, Elham Anwer; Salama, Nahla Nour; Fattah, Laila El-Sayed Abdel

    2006-05-01

    Two sensitive and selective spectrofluorimetric and spectrophotometric stability-indicating methods have been developed for the determination of some non-steroidal anti-inflammatory oxicam derivatives namely lornoxicam (Lx), tenoxicam (Tx) and meloxicam (Mx) after their complete alkaline hydrolysis. The methods are based on derivatization of alkaline hydrolytic products with 7-chloro-4-nitrobenz-2-oxa-1,3-diazole (NBD-Cl). The products showed an absorption maximum at 460 nm for the three studied drugs and fluorescence emission peak at 535 nm in methanol. The color was stable for at least 48 h. The optimum conditions of the reaction were investigated and it was found that the reaction proceeds quantitatively at pH 8, after heating in a boiling water bath for 30 min. The methods were found to be linear in the ranges of 1-10 microg ml(-1) for Lx and Tx and 0.5-4.0 microg ml(-1) for Mx for spectrophotometric method, while 0.05-1.0 microg ml(-1) for Lx and Tx and 0.025-0.4 microg ml(-1) for Mx for the spectrofluorimetric method. The validity of the methods was assessed according to USP guidelines. Statistical analysis of the results revealed high accuracy and good precision. The suggested procedures could be used for the determination of the above mentioned drugs in pure and dosage forms as well as in the presence of their degradation products.

  4. Spectrophotometric and fluorimetric determination of diazepam, bromazepam and clonazepam in pharmaceutical and urine samples

    NASA Astrophysics Data System (ADS)

    Salem, A. A.; Barsoum, B. N.; Izake, E. L.

    2004-03-01

New spectrophotometric and fluorimetric methods have been developed to determine diazepam, bromazepam and clonazepam (1,4-benzodiazepines) in pure forms, pharmaceutical preparations and biological fluids. The new methods are based on measuring absorption or emission spectra in methanolic potassium hydroxide solution. The fluorimetric methods proved selective with low detection limits, whereas the photometric methods showed relatively high detection limits. The developed methods were successfully applied to the determination of the drugs in pharmaceutical preparations and urine samples. The photometric methods gave linear calibration graphs in the ranges of 2.85-28.5, 0.316-3.16, and 0.316-3.16 μg ml⁻¹ with detection limits of 1.27, 0.08 and 0.13 μg ml⁻¹ for diazepam, bromazepam and clonazepam, respectively. Corresponding average errors of 2.60, 5.26 and 3.93 and relative standard deviations (R.S.D.s) of 2.79, 2.12 and 2.83, respectively, were obtained. The fluorimetric methods gave linear calibration graphs in the ranges of 0.03-0.34, 0.03-0.32 and 0.03-0.38 μg ml⁻¹ with detection limits of 7.13, 5.67 and 16.47 ng ml⁻¹ for diazepam, bromazepam and clonazepam, respectively. Corresponding average errors of 0.29, 4.33 and 5.42 and R.S.D.s of 1.27, 1.96 and 1.14, respectively, were obtained. Student's t-test and the F-test were applied, and satisfactory results were obtained.

  5. Preparation of fatty acid methyl esters for gas-liquid chromatography

    PubMed Central

    Ichihara, Ken'ichi; Fukubayashi, Yumeto

    2010-01-01

    A convenient method using commercial aqueous concentrated HCl (conc. HCl; 35%, w/w) as an acid catalyst was developed for preparation of fatty acid methyl esters (FAMEs) from sterol esters, triacylglycerols, phospholipids, and FFAs for gas-liquid chromatography (GC). An 8% (w/v) solution of HCl in methanol/water (85:15, v/v) was prepared by diluting 9.7 ml of conc. HCl with 41.5 ml of methanol. Toluene (0.2 ml), methanol (1.5 ml), and the 8% HCl solution (0.3 ml) were added sequentially to the lipid sample. The final HCl concentration was 1.2% (w/v). This solution (2 ml) was incubated at 45°C overnight or heated at 100°C for 1–1.5 h. The amount of FFA formed in the presence of water derived from conc. HCl was estimated to be <1.4%. The yields of FAMEs were >96% for the above lipid classes and were the same as or better than those obtained by saponification/methylation or by acid-catalyzed methanolysis/methylation using commercial anhydrous HCl/methanol. The method developed here could be successfully applied to fatty acid analysis of various lipid samples, including fish oils, vegetable oils, and blood lipids by GC. PMID:19759389
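The concentrations quoted above can be sanity-checked with simple w/v arithmetic. The density of 35% (w/w) concentrated HCl (~1.18 g/ml) is an assumed value, not stated in the abstract; the other numbers come from the text.

```python
# Stock solution: 9.7 ml of conc. HCl diluted with 41.5 ml of methanol.
conc_hcl_ml = 9.7        # ml of concentrated HCl (from the abstract)
methanol_ml = 41.5       # ml of methanol used for dilution (from the abstract)
density_g_per_ml = 1.18  # assumed density of 35% w/w HCl
hcl_g = conc_hcl_ml * density_g_per_ml * 0.35         # grams of pure HCl
stock_pct = 100.0 * hcl_g / (conc_hcl_ml + methanol_ml)  # % w/v of the stock
print(round(stock_pct, 1))  # → 7.8, i.e. about the stated 8% (w/v)

# 0.3 ml of the ~8% stock in a 2 ml reaction volume:
final_pct = 0.3 * 8.0 / 2.0
print(final_pct)  # → 1.2, matching the stated final 1.2% (w/v)
```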

  6. Cytotoxicity of Sargassum angustifolium Partitions against Breast and Cervical Cancer Cell Lines

    PubMed Central

    Vaseghi, Golnaz; Sharifi, Mohsen; Dana, Nasim; Ghasemi, Ahmad; Yegdaneh, Afsaneh

    2018-01-01

    Background: Marine organisms produce a variety of compounds with pharmacological activities, including anticancer effects. This study attempts to determine the cytotoxicity of the hexane (HEX), dichloromethane (DCM), and butanol (BUTOH) partitions of Sargassum angustifolium. Materials and Methods: S. angustifolium was collected from Bushehr, on the southwest coastline of the Persian Gulf. The plant was extracted by maceration with methanol-ethyl acetate. The extract was evaporated under vacuum and partitioned by the Kupchan method to yield the HEX, DCM, and BUTOH partitions. The cytotoxic activity of the extract (150, 450, and 900 μg/ml) was investigated against MCF-7 (breast cancer), HeLa (cervical cancer), and human umbilical vein endothelial cell lines by the mitochondrial tetrazolium test assay after 72 h. Results: The survival of HeLa and MCF-7 cells decreased as the concentration of the extracts increased from 150 μg/ml to 900 μg/ml. The median growth inhibitory concentration of the HEX partition was 71 and 77 μg/ml against HeLa and MCF-7, respectively; that of the DCM partition was 36 and 88 μg/ml against HeLa and MCF-7, respectively; and that of the BUTOH partition was 25 μg/ml against MCF-7. Conclusion: This study reveals that different partitions of S. angustifolium have cytotoxic activity against cancer cell lines. PMID:29657928

  7. Nira acidity and antioxidant activity of Palm sugar in Sumowono Village

    NASA Astrophysics Data System (ADS)

    Winarni, Sri; Arifan, Fahmi; Wisnu Broto, RTD.; Fuadi, Ariza; Alviche, Lola

    2018-05-01

    Palm sugar not only has potential as a natural sweetener but also has antioxidant activity. The purpose of this study was to analyze the antioxidant activity and pH of the nira in palm sugar. The samples in this study were palm sugar from 6 different production sites. Antioxidant activity was tested using the DPPH (1,1-diphenyl-2-picrylhydrazyl) method at a wavelength of 517 nm. The absorbance of the solution was measured using spectrophotometry and the effective concentration (IC50) value was calculated. The pH was measured using a pH meter. Pearson's correlation test revealed r = -0.045 with a significance value of 0.932 (>0.05). There was no correlation between pH value and antioxidant activity of palm sugar. The IC50 values of palm sugar in Sumowono village revealed strong antioxidant activity (IC50 of 50-100 μg/ml): 74.73, 83.94, 82.31, 83.94, 86.10, 82.13, 89.17, 89.71, 89.17, and 84.84 μg/ml. Lower IC50 values indicate higher antioxidant activity. The palm sugar with the best antioxidant activity came from the production site with an IC50 value of 74.73 μg/ml. The potential antioxidants can be optimized by making improvements to the processing system.

  8. Using an EM Covariance Matrix to Estimate Structural Equation Models with Missing Data: Choosing an Adjusted Sample Size to Improve the Accuracy of Inferences

    ERIC Educational Resources Information Center

    Enders, Craig K.; Peugh, James L.

    2004-01-01

    Two methods, direct maximum likelihood (ML) and the expectation maximization (EM) algorithm, can be used to obtain ML parameter estimates for structural equation models with missing data (MD). Although the 2 methods frequently produce identical parameter estimates, it may be easier to satisfy missing at random assumptions using EM. However, no…

  9. Training in cortical control of neuroprosthetic devices improves signal extraction from small neuronal ensembles.

    PubMed

    Helms Tillery, S I; Taylor, D M; Schwartz, A B

    2003-01-01

    We have recently developed a closed-loop environment in which we can test the ability of primates to control the motion of a virtual device using ensembles of simultaneously recorded neurons /29/. Here we use a maximum likelihood method to assess the information about task performance contained in the neuronal ensemble. We trained two animals to control the motion of a computer cursor in three dimensions. Initially the animals controlled cursor motion using arm movements, but eventually they learned to drive the cursor directly from cortical activity. Using a population vector (PV) based upon the relation between cortical activity and arm motion, the animals were able to control the cursor directly from the brain in a closed-loop environment, but with difficulty. We added a supervised learning method that modified the parameters of the PV according to task performance (adaptive PV), and found that animals were able to exert much finer control over the cursor motion from brain signals. Here we describe a maximum likelihood (ML) method to assess the information about target contained in neuronal ensemble activity. Using this method, we compared the information about target contained in the ensemble during arm control, during brain control early in the adaptive PV, and during brain control after the adaptive PV had settled and the animal could drive the cursor reliably and with fine gradations. During the arm-control task, the ML method was able to determine the target of the movement in as few as 10% of the trials and as many as 75% of the trials, with an average of 65%. This average dropped when the animals used a population vector to control motion of the cursor: on average we could determine the target in around 35% of the trials. This low percentage was also reflected in poor control of the cursor, so that the animal was unable to reach the target in a large percentage of trials. Supervised adjustment of the population vector parameters produced new weighting coefficients and directional tuning parameters for many neurons. This produced much better performance of the brain-controlled cursor motion. It was also reflected in the maximum likelihood measure of cell activity, which identified the correct target based only on neuronal activity in over 80% of the trials on average. The changes in maximum likelihood estimates of target location based on ensemble firing show that an animal's ability to regulate the motion of a cortically controlled device is not crucially dependent on the experimenter's ability to estimate intention from neuronal activity.
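
    The maximum-likelihood target assessment described above can be sketched under an independent-Poisson firing model: each target is associated with a vector of mean firing rates, and the decoder picks the target maximizing the log-likelihood of the observed spike counts. Everything below (rates, ensemble size, number of targets) is invented for illustration; the paper's actual decoder and data differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mean firing rates (spikes per trial window): 3 targets x 5 neurons.
rates = np.array([
    [8.0, 2.0, 5.0, 1.0, 6.0],  # target 0
    [2.0, 9.0, 4.0, 7.0, 1.0],  # target 1
    [5.0, 5.0, 9.0, 2.0, 3.0],  # target 2
])

def ml_target(counts, rates):
    """Return the target index maximizing the Poisson log-likelihood of the
    observed spike counts (the count-factorial term is constant across
    targets, so it is dropped)."""
    ll = counts @ np.log(rates).T - rates.sum(axis=1)
    return int(np.argmax(ll))

# Simulate trials and score the decoder, mirroring the percent-correct measure.
n_trials, correct = 300, 0
for _ in range(n_trials):
    t = int(rng.integers(3))
    correct += ml_target(rng.poisson(rates[t]), rates) == t
print(f"ML decoder percent correct: {100 * correct / n_trials:.0f}%")
```

    With well-separated tuning like this toy example the decoder scores highly; heavily overlapping rate vectors would drive percent correct down toward the lower figures the abstract reports for early brain control.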

  10. Simultaneous Quantification of Free and Glucuronidated Cannabinoids in Human Urine by Liquid Chromatography-Tandem Mass Spectrometry

    PubMed Central

    Scheidweiler, Karl B.; Desrosiers, Nathalie A.; Huestis, Marilyn A.

    2012-01-01

    Background Cannabis is the most commonly abused illicit drug and is routinely quantified during urine drug testing. We conducted controlled drug administration studies investigating the efficacy of urinary cannabinoid glucuronide metabolites for documenting recency of cannabis intake and for determining the stability of urinary cannabinoids. Methods A liquid chromatography-tandem mass spectrometry method was developed and validated for quantifying Δ9-tetrahydrocannabinol (THC), 11-hydroxy-THC (11-OH-THC), 11-nor-9-carboxy-THC (THCCOOH), cannabidiol, cannabinol, THC-glucuronide, and THCCOOH-glucuronide in 0.5 ml human urine via supported-liquid extraction. Chromatography was performed on an Ultra Biphenyl column with a gradient of 10 mmol/l ammonium acetate (pH 6.15) and 15% methanol in acetonitrile at 0.4 ml/min. Analytes were monitored by positive- and negative-mode electrospray ionization and multiple reaction monitoring mass spectrometry. Results Linear ranges were 0.5–50 ng/ml for THC-glucuronide; 1–100 ng/ml for THCCOOH, 11-OH-THC, and cannabidiol; 2–100 ng/ml for THC and cannabinol; and 5–500 ng/ml for THCCOOH-glucuronide (R2 > 0.99). Mean extraction efficiencies were 34–73%, with analytical recovery (bias) of 80.5–118.0% and total imprecision of 3.0–10.2% coefficient of variation. Conclusion This method simultaneously quantifies urinary cannabinoids and their phase II glucuronide metabolites, enabling evaluation of urinary cannabinoid glucuronides for documenting recency of cannabis intake and cannabinoid stability. The assay is applicable to routine urine cannabinoid testing. PMID:22771478

  11. Applicability of multisyringe chromatography coupled to on-line solid-phase extraction to the simultaneous determination of dicamba, 2,4-D, and atrazine.

    PubMed

    Chávez-Moreno, C A; Guzmán-Mar, J L; Hinojosa-Reyes, L; Hernández-Ramírez, A; Ferrer, L; Cerdà, V

    2012-07-01

    Simultaneous determination of three herbicides (dicamba, 2,4-D, and atrazine) has been achieved by on-line solid-phase extraction (SPE) coupled to multisyringe chromatography (MSC) with UV detection. The preconcentration conditions were optimized; a preconcentration flow rate of 0.5 mL min(-1) and elution at 0.8 mL min(-1) were the optimum conditions. A C(18) (8 mm i.d.) membrane extraction disk conditioned with 0.3 mol L(-1) HCl in 0.5% MeOH was used. A 3-mL sample was preconcentrated, then eluted with 0.43 mL 40:60 water-MeOH. A C(18) monolithic column (25 mm × 4.6 mm) was used for chromatographic separation. Separation of the three compounds was achieved in 10 min by use of 0.01% aqueous acetic acid-MeOH (60:40) as mobile phase at a flow rate of 0.8 mL min(-1). The limits of detection (LOD) were 13, 57, and 22 μg L(-1) for dicamba, 2,4-D, and atrazine, respectively. The sampling frequency was three analyses per hour, and each analysis consumed only 7.3 mL solvent. The method was applied to spiked water samples, and recovery between 85 and 112% was obtained. Recovery was significantly better than in the conventional HPLC-UV method. These results indicated the reliability and accuracy of this flow-based method. This is the first time this family of herbicides has been simultaneously analyzed by on-line SPE-MSC using a monolithic column.

  12. Better Contract Oversight Could Have Prevented Deficiencies in the Detention Facility in Parwan, Afghanistan

    DTIC Science & Technology

    2012-05-17

    a visitation center, a water treatment plant, and vocational buildings where detainees can learn carpentry and culinary skills. The facility also... [deficiency-table excerpt] ...room not connected (MHU, PPI); 69: DCID HVAC unit is inoperable (DAB; PPI has ordered parts and will fix when they arrive); 70: access broken to VCD

  13. Evaluating DLAB as a Predictor of Foreign Language Learning

    DTIC Science & Technology

    2012-05-01

    JT - Italian KP - Korean ML - Malay NE - Nepalese NR - Norwegian PF - Persian-Iranian PG - Persian-Afghan PJ - Punjabi PL - Polish PQ - Portuguese-Brazilian PT - Portuguese-European

  14. Transactions of the Army Conference on Applied Mathematics and Computing (8th) Held in Ithaca, New York on 19-22 June 1990

    DTIC Science & Technology

    1991-02-01

    Shamos, M I, "Computational Geometry", Ph.D. Thesis, Department of Computer Science, Yale University, New Haven, CT, 1978. [53] Steiglitz, K., An... (431) whose real and imaginary parts are given by [Eqs. (431)-(432), garbled in extraction; the terms involve m_J cos θ_MJ, m_L cos θ_ML, and m_S cos θ_MS] ... Aequationes Math. 14, 1976, 271-291. 5. Greenwell, C.E., Finite element methods for partial integro-differential equations, Ph.D. Thesis, University of

  15. Can You Ride a Bicycle? The Ability to Ride a Bicycle Prevents Reduced Social Function in Older Adults With Mobility Limitation

    PubMed Central

    Sakurai, Ryota; Kawai, Hisashi; Yoshida, Hideyo; Fukaya, Taro; Suzuki, Hiroyuki; Kim, Hunkyung; Hirano, Hirohiko; Ihara, Kazushige; Obuchi, Shuichi; Fujiwara, Yoshinori

    2016-01-01

    Background The health benefits of bicycling in older adults with mobility limitation (ML) are unclear. We investigated ML and functional capacity of older cyclists by evaluating their instrumental activities of daily living (IADL), intellectual activity, and social function. Methods On the basis of interviews, 614 community-dwelling older adults (after excluding 63 participants who never cycled) were classified as cyclists with ML, cyclists without ML, non-cyclists with ML (who ceased bicycling due to physical difficulties), or non-cyclists without ML (who ceased bicycling for other reasons). A cyclist was defined as a person who cycled at least a few times per month, and ML was defined as difficulty walking 1 km or climbing stairs without using a handrail. Functional capacity and physical ability were evaluated by standardized tests. Results Regular cycling was documented in 399 participants, and 74 of them (18.5%) had ML; among non-cyclists, 49 had ML, and 166 did not. Logistic regression analysis for evaluating the relationship between bicycling and functional capacity revealed that non-cyclists with ML were more likely to have reduced IADL and social function compared to cyclists with ML. However, logistic regression analysis also revealed that the risk of bicycle-related falls was significantly associated with ML among older cyclists. Conclusions The ability and opportunity to bicycle may prevent reduced IADL and social function in older adults with ML, although older adults with ML have a higher risk of falls during bicycling. It is important to develop a safe environment for bicycling for older adults. PMID:26902165
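
    The logistic-regression analyses above can be sketched on synthetic data; the outcome here stands for "reduced IADL", and every coefficient, covariate choice, and sample size below is an assumption for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cohort: binary cycling status plus a standardized age covariate.
n = 500
cyclist = rng.integers(0, 2, n).astype(float)
age = rng.normal(0.0, 1.0, n)
true_logit = -1.0 - 1.2 * cyclist + 0.8 * age   # assumed "protective" cycling effect
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

X = np.column_stack([np.ones(n), cyclist, age])  # intercept, cycling, age

def fit_logistic(X, y, iters=3000, lr=0.1):
    """Fit logistic regression by plain gradient ascent on the log-likelihood."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

w = fit_logistic(X, y)
print("odds ratio for cycling:", np.exp(w[1]))  # below 1: lower odds of reduced IADL
```

    An odds ratio below 1 for the cycling coefficient corresponds to the study's finding that cyclists with ML were less likely to show reduced IADL and social function than non-cyclists with ML.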

  16. Stability Indicating Reverse Phase HPLC Method for Estimation of Rifampicin and Piperine in Pharmaceutical Dosage Form.

    PubMed

    Shah, Umang; Patel, Shraddha; Raval, Manan

    2018-01-01

    High performance liquid chromatography is an integral analytical tool in assessing drug product stability. HPLC methods should be able to separate, detect, and quantify the various drug-related degradants that can form on storage or manufacturing, plus detect any drug-related impurities that may be introduced during synthesis. A simple, economic, selective, precise, and stability-indicating HPLC method has been developed and validated for analysis of Rifampicin (RIFA) and Piperine (PIPE) in bulk drug and in the formulation. Reversed-phase chromatography was performed on a C18 column with buffer (potassium dihydrogen orthophosphate, pH 6.5) and acetonitrile (30:70, v/v) as mobile phase at a flow rate of 1 mL min-1. Detection was performed at 341 nm, and sharp peaks were obtained for RIFA and PIPE at retention times of 3.3 ± 0.01 min and 5.9 ± 0.01 min, respectively. The detection limits were found to be 2.385 ng/ml and 0.107 ng/ml, and the quantification limits 7.228 ng/ml and 0.325 ng/ml, for RIFA and PIPE, respectively. The method was validated for accuracy, precision, reproducibility, specificity, robustness, and detection and quantification limits, in accordance with ICH guidelines. A stress study was performed on RIFA and PIPE, and both degraded appreciably under all applied chemical and physical conditions. Thus, the developed RP-HPLC method was found suitable for the determination of both drugs in bulk as well as in stability samples of capsules containing various excipients.
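
    The reported limits are consistent with the ICH Q2(R1) definitions LOD = 3.3σ/S and LOQ = 10σ/S (note that 7.228/2.385 ≈ 10/3.3 for RIFA). A sketch of the computation, with the residual standard deviation σ and slope S back-calculated here as hypothetical values, not taken from the paper:

```python
def lod_loq(sigma, slope):
    """ICH Q2(R1) detection and quantification limits from the residual
    standard deviation (sigma) of the calibration response and its slope (S)."""
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# Hypothetical sigma and slope chosen to roughly reproduce the reported RIFA limits.
lod, loq = lod_loq(sigma=0.7228, slope=1.0)
print(f"LOD = {lod:.3f} ng/ml, LOQ = {loq:.3f} ng/ml")
```

    Whatever σ and S actually were, the two limits always sit in the fixed ratio 10/3.3 ≈ 3.03 under these formulas, which is a quick consistency check on published LOD/LOQ pairs.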

  17. Ultra-high performance liquid chromatography tandem mass spectrometric method for the determination of tamoxifen, N-desmethyltamoxifen, 4-hydroxytamoxifen and endoxifen in dried blood spots--development, validation and clinical application during breast cancer adjuvant therapy.

    PubMed

    Antunes, Marina Venzon; Raymundo, Suziane; de Oliveira, Vanessa; Staudt, Dilana Elisabeth; Gössling, Gustavo; Peteffi, Giovana Piva; Biazús, Jorge Villanova; Cavalheiro, José Antônio; Tre-Hardy, Marie; Capron, Arnaud; Haufroid, Vincent; Wallemacq, Pierre; Schwartsmann, Gilberto; Linden, Rafael

    2015-01-01

    An LC-MS/MS method for the simultaneous determination of tamoxifen, N-desmethyltamoxifen, 4-hydroxytamoxifen, and endoxifen in dried blood spot (DBS) samples was developed and validated. The method employs ultrasound-assisted liquid extraction and reversed-phase separation on an Acquity(®) C18 column (150×2.1 mm, 1.7 µm). The mobile phase was a mixture of 0.1% (v/v) formic acid (pH 2.7) and acetonitrile (gradient from 60:40 to 50:50, v/v). Total analytical run time was 8 min. Precision assays showed CV% lower than 10.75%, and accuracy was in the range of 94.5 to 110.3%. Mean analyte recoveries from DBS ranged from 40% to 92%. The method was successfully applied to 91 paired clinical DBS and plasma samples. Dried blood spot concentrations were highly correlated with plasma, with rs > 0.83 (P < 0.01). Median estimated plasma concentrations after hematocrit and partition factor adjustment were: TAM 123.3 ng mL(-1), NDT 267.9 ng mL(-1), EDF 10.0 ng mL(-1), and HTF 1.3 ng mL(-1), representing on average 98 to 104% of the actually measured concentrations. The DBS method was able to identify 96% of patients with plasma EDF concentrations below the clinical threshold related to better prognosis (5.9 ng mL(-1)). The procedure has adequate analytical performance and can be an efficient tool to optimize adjuvant breast cancer treatment, especially in resource-limited settings.
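
    The hematocrit and partition-factor adjustment mentioned above can be illustrated with a simple mass-balance model. The paper's exact correction may differ, and all numbers below are hypothetical, not the study's:

```python
def dbs_to_plasma(c_dbs, hct, f_rbc):
    """Estimate a plasma concentration from a dried-blood-spot (whole-blood)
    concentration. Mass balance over the blood sample:
        C_blood = C_plasma * (1 - Hct) + (f_rbc * C_plasma) * Hct
    where f_rbc is the red-cell-to-plasma partition factor, so
        C_plasma = C_blood / ((1 - Hct) + f_rbc * Hct).
    """
    return c_dbs / ((1.0 - hct) + f_rbc * hct)

# Hypothetical example: 100 ng/ml measured in the DBS, hematocrit 0.40,
# red-cell partition factor 0.3.
print(round(dbs_to_plasma(100.0, 0.40, 0.3), 1))  # -> 138.9
```

    For an analyte excluded from red cells (f_rbc = 0) this reduces to dividing by the plasma fraction (1 - Hct), which is why hematocrit is the dominant correction term in DBS-to-plasma conversion.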

  18. The effect of nisin from Lactococcus lactis subsp. lactis on refrigerated patin fillet quality

    NASA Astrophysics Data System (ADS)

    Adilla, S. N.; Utami, R.; Nursiwi, A.; Nurhartadi, E.

    2017-04-01

    The effect of nisin from Lactococcus lactis subsp. lactis, applied by spraying, on the quality of patin fillet during refrigerated storage (4±1°C) was investigated. Fillet quality was assessed by total plate count (TPC), pH, TVB-N, and TBA values over 16 days at 4±1°C. A Completely Randomized Design (CRD) with one factor (nisin activity) was used, at 0 IU/ml, 500 IU/ml, 1000 IU/ml, and 2000 IU/ml. Observations were made on days 0, 4, 8, 12, and 16 of storage. The results showed that the level of nisin activity significantly affected fillet quality as measured by TPC, pH, and TVB-N values, whereas no significant difference was observed in TBA values. Nisin at 500 IU/ml, 1000 IU/ml, and 2000 IU/ml extended the shelf life of the fillet to days 4, 8, and 12, respectively, based on the standard for all parameters.

  19. Total arsenic, lead, cadmium, copper, and zinc in some salt rivers in the northern Andes of Antofagasta, Chile.

    PubMed

    Queirolo, F; Stegen, S; Mondaca, J; Cortés, R; Rojas, R; Contreras, C; Munoz, L; Schwuger, M J; Ostapczuk, P

    2000-06-08

    The pre-Andes water in the region of Antofagasta is the main drinking and irrigation water source for approximately 3000 Atacameña (indigenous) people. The concentrations of soluble elements (filtered in the field through a 0.45-microm filter) were: Cd < 0.1 ng/ml; Pb < 0.5 ng/ml; and Zn and Cu between 1 and 10 ng/ml. In particulate material the concentrations were: Cd < 0.1 ng/ml; Pb < 0.3 ng/ml; and Zn and Cu less than 1 ng/ml. The total content of these elements is far below the international recommendations (WHO) and the national standards (N. Ch. 1333 mod. 1987 and 409-1 of 1984). On the other hand, in some rivers a very high arsenic concentration was found (up to 3000 ng/ml), exceeding the national standard by more than 50-fold. To verify the analytical results, inter-laboratory comparisons and comparisons between different determination methods were carried out.

  20. A PET reconstruction formulation that enforces non-negativity in projection space for bias reduction in Y-90 imaging

    NASA Astrophysics Data System (ADS)

    Lim, Hongki; Dewaraja, Yuni K.; Fessler, Jeffrey A.

    2018-02-01

    Most existing PET image reconstruction methods impose a nonnegativity constraint in the image domain that is physically natural but can lead to biased reconstructions. This bias is particularly problematic for Y-90 PET because of the low probability of positron production and the high random coincidence fraction. This paper investigates a new PET reconstruction formulation that enforces nonnegativity of the projections instead of the voxel values. This formulation allows some negative voxel values, thereby potentially reducing bias. Unlike the previously reported NEG-ML approach, which modifies the Poisson log-likelihood to allow negative values, the new formulation retains the classical Poisson statistical model. To relax the nonnegativity constraint embedded in standard PET reconstruction methods, we used an alternating direction method of multipliers (ADMM). Because the choice of ADMM parameters can greatly influence the convergence rate, we applied an automatic parameter selection method to improve convergence speed. We investigated the methods using lung-to-liver slices of the XCAT phantom. We simulated low true coincidence count-rates with high random fractions corresponding to typical values from patient imaging in Y-90 microsphere radioembolization. We compared our new method with standard reconstruction algorithms, NEG-ML, and a regularized version thereof. Both our new method and NEG-ML allow more accurate quantification in all volumes of interest while yielding lower noise than the standard method. The performance of NEG-ML can degrade when its user-defined parameter is tuned poorly, whereas the proposed algorithm is robust at any count level without requiring parameter tuning.
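
    The projection-space nonnegativity idea can be sketched with a toy ADMM. To keep the sketch short, a Gaussian data-fit term stands in for the paper's Poisson log-likelihood (a deliberate simplification), and the splitting variable z = Ax carries the nonnegativity constraint so the "image" x itself is free to go negative; the operator sizes and ρ below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "system matrix" and noisy "sinogram" (some entries negative).
m, n = 40, 10
A = rng.normal(size=(m, n))
b = rng.normal(size=m)

rho = 1.0
x = np.zeros(n)   # image variable: unconstrained, may go negative
z = np.zeros(m)   # projection-space split variable, kept nonnegative
u = np.zeros(m)   # scaled dual variable for the constraint Ax = z

AtA = A.T @ A
for _ in range(500):
    # x-update: minimize 0.5*||Ax-b||^2 + (rho/2)*||Ax - z + u||^2 in closed form.
    x = np.linalg.solve((1.0 + rho) * AtA, A.T @ (b + rho * (z - u)))
    # z-update: project the current projections onto the nonnegative orthant.
    z = np.maximum(A @ x + u, 0.0)
    # dual update on the residual Ax - z.
    u += A @ x - z

print("min projection value:", (A @ x).min())  # approximately >= 0 at convergence
print("min image value:", x.min())             # can be negative
```

    The design point is that the max(·, 0) projection acts on Ax rather than on x, which is what lets voxel values dip below zero while the modeled projections stay physical.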
