Sample records for learning ML techniques

  1. Examining Mobile Learning Trends 2003-2008: A Categorical Meta-Trend Analysis Using Text Mining Techniques

    ERIC Educational Resources Information Center

    Hung, Jui-Long; Zhang, Ke

    2012-01-01

    This study investigated the longitudinal trends of academic articles in Mobile Learning (ML) using text mining techniques. One hundred and nineteen (119) refereed journal articles and proceedings papers from the SCI/SSCI database were retrieved and analyzed. The taxonomies of ML publications were grouped into twelve clusters (topics) and four…

  2. Surgical robotics beyond enhanced dexterity instrumentation: a survey of machine learning techniques and their role in intelligent and autonomous surgical actions.

    PubMed

    Kassahun, Yohannes; Yu, Bingbin; Tibebu, Abraham Temesgen; Stoyanov, Danail; Giannarou, Stamatia; Metzen, Jan Hendrik; Vander Poorten, Emmanuel

    2016-04-01

    Advances in technology and computing play an increasingly important role in the evolution of modern surgical techniques and paradigms. This article reviews the current role of machine learning (ML) techniques in the context of surgery, with a focus on surgical robotics (SR). We also provide a perspective on future possibilities for enhancing the effectiveness of procedures by integrating ML in the operating room. The review focuses on ML techniques directly applied to surgery, surgical robotics, and surgical training and assessment. The widespread use of ML methods in diagnosis and medical image computing is beyond the scope of the review. Searches were performed on PubMed and IEEE Xplore using combinations of the keywords: ML, surgery, robotics, surgical and medical robotics, skill learning, skill analysis and learning to perceive. Studies making use of ML methods in the context of surgery are increasingly being reported. In particular, there is growing interest in using ML to develop tools for understanding and modelling surgical skill and competence, or for extracting surgical workflow. Many researchers are beginning to integrate this understanding into the control of recent surgical robots and devices. ML is an expanding field. It is popular because it allows efficient processing of vast amounts of data for interpretation and real-time decision making. Already widely used in imaging and diagnosis, ML is expected to play an important role in surgery and interventional treatments as well. In particular, ML could become a game changer in the conception of cognitive surgical robots. Such robots, endowed with cognitive skills, would assist the surgical team on a cognitive level as well, for example by lowering the mental load of the team. ML could, for instance, help extract surgical skill learned through demonstration by human experts and transfer it to robotic skills. Such intelligent surgical assistance would significantly surpass the state of the art in surgical robotics: current devices possess no intelligence whatsoever and are merely advanced and expensive instruments.

  3. Machine Learning Based Evaluation of Reading and Writing Difficulties.

    PubMed

    Iwabuchi, Mamoru; Hirabayashi, Rumi; Nakamura, Kenryu; Dim, Nem Khan

    2017-01-01

    The possibility of automatic evaluation of reading and writing difficulties was investigated using a non-parametric machine learning (ML) regression technique on URAWSS (Understanding Reading and Writing Skills of Schoolchildren) [1] test data from 168 children in grades 1-9. The results showed that the ML model made better predictions than the ordinary rule-based decision.
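
    To make the contrast concrete, here is a hedged, stdlib-only sketch (not the study's actual model) of a non-parametric regressor, a minimal k-nearest-neighbours regression, set against a fixed rule-based cutoff; all scores, ratings, and the threshold are invented for illustration.

```python
def knn_regress(train, query, k=3):
    """Predict a value for `query` as the mean target of its k nearest
    training points (1-D features, absolute distance)."""
    nearest = sorted(train, key=lambda xy: abs(xy[0] - query))[:k]
    return sum(y for _, y in nearest) / k

def rule_based(score, threshold=50):
    """A hard rule: flag difficulty (1.0) when the score falls below the threshold."""
    return 1.0 if score < threshold else 0.0

# toy (test score, difficulty rating) pairs
train = [(30, 1.0), (35, 0.9), (45, 0.6), (55, 0.3), (70, 0.1), (80, 0.0)]
print(knn_regress(train, 40))  # graded estimate borrowed from similar cases
print(rule_based(40))          # all-or-nothing cutoff decision: 1.0
```

    The regressor pools information from similar cases, which is one plausible reason a non-parametric ML model can outperform a single fixed cutoff.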

  4. Unintended consequences of machine learning in medicine?

    PubMed

    McDonald, Laura; Ramagopalan, Sreeram V; Cox, Andrew P; Oguz, Mustafa

    2017-01-01

    Machine learning (ML) has the potential to significantly aid medical practice. However, a recent article highlighted some negative consequences that may arise from using ML decision support in medicine. We argue here that whilst the concerns raised by the authors may be appropriate, they are not specific to ML, and thus the article may lead to an adverse perception of this technique in particular. Whilst ML, like any methodology, is not without its limitations, a balanced view is needed so as not to hamper its use in potentially enabling better patient care.

  5. Quantum machine learning: a classical perspective

    NASA Astrophysics Data System (ADS)

    Ciliberto, Carlo; Herbster, Mark; Ialongo, Alessandro Davide; Pontil, Massimiliano; Rocchetto, Andrea; Severini, Simone; Wossnig, Leonard

    2018-01-01

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed.

  6. Quantum machine learning: a classical perspective

    PubMed Central

    Ciliberto, Carlo; Herbster, Mark; Ialongo, Alessandro Davide; Pontil, Massimiliano; Severini, Simone; Wossnig, Leonard

    2018-01-01

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed. PMID:29434508

  7. Quantum machine learning: a classical perspective.

    PubMed

    Ciliberto, Carlo; Herbster, Mark; Ialongo, Alessandro Davide; Pontil, Massimiliano; Rocchetto, Andrea; Severini, Simone; Wossnig, Leonard

    2018-01-01

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed.

  8. Less is more: Sampling chemical space with active learning

    NASA Astrophysics Data System (ADS)

    Smith, Justin S.; Nebgen, Ben; Lubbers, Nicholas; Isayev, Olexandr; Roitberg, Adrian E.

    2018-06-01

    The development of accurate and transferable machine learning (ML) potentials for predicting molecular energetics is a challenging task. The process of data generation to train such ML potentials is a task neither well understood nor researched in detail. In this work, we present a fully automated approach for the generation of datasets with the intent of training universal ML potentials. It is based on the concept of active learning (AL) via Query by Committee (QBC), which uses the disagreement between an ensemble of ML potentials to infer the reliability of the ensemble's prediction. QBC allows the presented AL algorithm to automatically sample regions of chemical space where the ML potential fails to accurately predict the potential energy. AL improves the overall fitness of ANAKIN-ME (ANI) deep learning potentials in rigorous test cases by mitigating human biases in deciding what new training data to use. AL also reduces the training set size to a fraction of the data required when using naive random sampling techniques. To provide validation of our AL approach, we develop the COmprehensive Machine-learning Potential (COMP6) benchmark (publicly available on GitHub) which contains a diverse set of organic molecules. Active learning-based ANI potentials outperform the original random sampled ANI-1 potential with only 10% of the data, while the final active learning-based model vastly outperforms ANI-1 on the COMP6 benchmark after training to only 25% of the data. Finally, we show that our proposed AL technique develops a universal ANI potential (ANI-1x) that provides accurate energy and force predictions on the entire COMP6 benchmark. This universal ML potential achieves a level of accuracy on par with the best ML potentials for single molecules or materials, while remaining applicable to the general class of organic molecules composed of the elements CHNO.
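
    The core QBC selection step described above can be sketched in a few lines; the committee members below are toy stand-in functions, not ANI potentials, and the disagreement measure is simply the spread of their predictions.

```python
import statistics

def committee_disagreement(committee, x):
    """Standard deviation of the committee's predictions at input x."""
    return statistics.pstdev(m(x) for m in committee)

def select_queries(committee, pool, n=2):
    """Pick the n candidate inputs where the ensemble disagrees the most."""
    return sorted(pool, key=lambda x: -committee_disagreement(committee, x))[:n]

# toy committee: the members agree near x = 0 and diverge for large |x|
committee = [lambda x: x,
             lambda x: x + 0.1 * x * x,
             lambda x: x - 0.1 * x * x]
pool = [0.0, 1.0, 5.0, 10.0]
print(select_queries(committee, pool))  # the large-|x| inputs are queried
```

    In the paper's setting the selected inputs would be molecular conformations whose reference energies are then computed and added to the training set.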

  9. Promises of Machine Learning Approaches in Prediction of Absorption of Compounds.

    PubMed

    Kumar, Rajnish; Sharma, Anju; Siddiqui, Mohammed Haris; Tiwari, Rajesh Kumar

    2018-01-01

    Machine Learning (ML) is one of the fastest-developing techniques for the prediction and evaluation of important pharmacokinetic properties such as absorption, distribution, metabolism and excretion. The availability of a large number of robust validation techniques for prediction models devoted to pharmacokinetics has significantly enhanced the trust and authenticity in ML approaches. In the last decade, a series of prediction models has been generated and used for rapid screening of compounds on the basis of absorption. Prediction of absorption of compounds using ML models has great potential across the pharmaceutical industry as a non-animal alternative to predict absorption. However, these prediction models still have far to go before they earn confidence comparable to that in conventional experimental methods for estimating drug absorption. Some of the general concerns are the selection of appropriate ML methods and validation techniques, in addition to the selection of relevant descriptors and authentic data sets for the generation of prediction models. The current review explores published ML models for the prediction of absorption using physicochemical properties as descriptors, along with their important conclusions. In addition, some critical challenges in the acceptance of ML models for absorption are discussed.
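
    One of the most common of the "robust validation techniques" the review refers to is k-fold cross-validation; the fold construction can be sketched with the standard library alone (a generic illustration, not a method from the review).

```python
def kfold_splits(n_samples, k):
    """Yield (train_indices, test_indices) pairs for k contiguous folds."""
    idx = list(range(n_samples))
    fold = n_samples // k
    for i in range(k):
        start = i * fold
        end = start + fold if i < k - 1 else n_samples
        # hold out one fold for testing, train on the rest
        yield idx[:start] + idx[end:], idx[start:end]

for train_idx, test_idx in kfold_splits(6, 3):
    print(test_idx)  # [0, 1] then [2, 3] then [4, 5]
```

    Each compound is held out exactly once, so the averaged test error estimates how the absorption model would behave on unseen compounds.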

  10. Machine learning and microsimulation techniques on the prognosis of dementia: A systematic literature review.

    PubMed

    Dallora, Ana Luiza; Eivazzadeh, Shahryar; Mendes, Emilia; Berglund, Johan; Anderberg, Peter

    2017-01-01

    Dementia is a complex disorder characterized by poor outcomes for the patients and high costs of care. After decades of research, little is known about its mechanisms. Having prognostic estimates about dementia can help researchers, patients and public entities in dealing with this disorder. Thus, health data, machine learning and microsimulation techniques could be employed in developing prognostic estimates for dementia. The goal of this paper is to present evidence on the state of the art of studies investigating the prognosis of dementia using machine learning and microsimulation techniques. To achieve our goal we carried out a systematic literature review, in which three large databases—Pubmed, Scopus and Web of Science—were searched to select studies that employed machine learning or microsimulation techniques for the prognosis of dementia. A single round of backward snowballing was done to identify further studies. A quality checklist was also employed to assess the quality of the evidence presented by the selected studies, and low-quality studies were removed. Finally, data from the final set of studies were extracted into summary tables. In total, 37 papers were included. The data summary results showed that current research is focused on the investigation of patients with mild cognitive impairment (MCI) who will progress to Alzheimer's disease (AD), using machine learning techniques. Microsimulation studies were concerned with cost estimation and had a populational focus. Neuroimaging was the most commonly used variable. Prediction of conversion from MCI to AD is the dominant theme in the selected studies. Most studies used ML techniques on neuroimaging data. Only a few data sources have been recruited by most studies, and the ADNI database is the one most commonly used. Only two studies have investigated the prediction of epidemiological aspects of dementia using either ML or microsimulation techniques. Finally, care should be taken when interpreting the reported accuracy of ML techniques, given the studies' different contexts.

  11. Machine learning and microsimulation techniques on the prognosis of dementia: A systematic literature review

    PubMed Central

    Mendes, Emilia; Berglund, Johan; Anderberg, Peter

    2017-01-01

    Background Dementia is a complex disorder characterized by poor outcomes for the patients and high costs of care. After decades of research, little is known about its mechanisms. Having prognostic estimates about dementia can help researchers, patients and public entities in dealing with this disorder. Thus, health data, machine learning and microsimulation techniques could be employed in developing prognostic estimates for dementia. Objective The goal of this paper is to present evidence on the state of the art of studies investigating the prognosis of dementia using machine learning and microsimulation techniques. Method To achieve our goal we carried out a systematic literature review, in which three large databases—Pubmed, Scopus and Web of Science—were searched to select studies that employed machine learning or microsimulation techniques for the prognosis of dementia. A single round of backward snowballing was done to identify further studies. A quality checklist was also employed to assess the quality of the evidence presented by the selected studies, and low-quality studies were removed. Finally, data from the final set of studies were extracted into summary tables. Results In total, 37 papers were included. The data summary results showed that current research is focused on the investigation of patients with mild cognitive impairment (MCI) who will progress to Alzheimer’s disease (AD), using machine learning techniques. Microsimulation studies were concerned with cost estimation and had a populational focus. Neuroimaging was the most commonly used variable. Conclusions Prediction of conversion from MCI to AD is the dominant theme in the selected studies. Most studies used ML techniques on neuroimaging data. Only a few data sources have been recruited by most studies, and the ADNI database is the one most commonly used. Only two studies have investigated the prediction of epidemiological aspects of dementia using either ML or microsimulation techniques. Finally, care should be taken when interpreting the reported accuracy of ML techniques, given the studies’ different contexts. PMID:28662070

  12. Comparison of machine learning techniques to predict all-cause mortality using fitness data: the Henry Ford ExercIse Testing (FIT) project.

    PubMed

    Sakr, Sherif; Elshawi, Radwa; Ahmed, Amjad M; Qureshi, Waqas T; Brawner, Clinton A; Keteyian, Steven J; Blaha, Michael J; Al-Mallah, Mouaz H

    2017-12-19

    Prior studies have demonstrated that cardiorespiratory fitness (CRF) is a strong marker of cardiovascular health. Machine learning (ML) can enhance the prediction of outcomes through classification techniques that classify the data into predetermined categories. The aim of this study is to present an evaluation and comparison of how machine learning techniques can be applied to medical records of cardiorespiratory fitness, and how the various techniques differ in their ability to predict medical outcomes (e.g. mortality). We use data from 34,212 patients free of known coronary artery disease or heart failure who underwent clinician-referred exercise treadmill stress testing at Henry Ford Health Systems between 1991 and 2009 and had a complete 10-year follow-up. Seven machine learning classification techniques were evaluated: Decision Tree (DT), Support Vector Machine (SVM), Artificial Neural Networks (ANN), Naïve Bayesian Classifier (BC), Bayesian Network (BN), K-Nearest Neighbor (KNN) and Random Forest (RF). To handle the imbalanced dataset, the Synthetic Minority Over-Sampling Technique (SMOTE) was applied, and two sets of experiments were conducted, with and without SMOTE. On average over different evaluation metrics, the SVM classifier showed the lowest performance, while other models such as BN, BC and DT performed better. The RF classifier showed the best performance (AUC = 0.97) among all models trained using SMOTE sampling. The results show that various ML techniques can vary significantly in their performance across different evaluation metrics. It is also not necessarily the case that a more complex ML model achieves higher prediction accuracy. The prediction performance of all models trained with SMOTE is much better than that of models trained without SMOTE. The study shows the potential of machine learning methods for predicting all-cause mortality using cardiorespiratory fitness data.
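
    The SMOTE idea the study relies on, creating synthetic minority samples by interpolating between a minority point and one of its minority-class neighbours, can be sketched with the standard library; this is a bare-bones stand-in, not the implementation used in the study.

```python
import random

def smote(minority, n_new, k=2, seed=0):
    """Generate n_new synthetic points from `minority` (lists of floats)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # k nearest minority neighbours of a (excluding a itself)
        neighbours = sorted(
            (p for p in minority if p is not a),
            key=lambda p: sum((ai - pi) ** 2 for ai, pi in zip(a, p)))[:k]
        b = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([ai + t * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic

minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
new_points = smote(minority, n_new=4)
print(new_points)  # four points lying between existing minority samples
```

    Because the synthetic points lie on segments between real minority samples, the classifier sees a denser minority region instead of exact duplicates.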

  13. Machine learning based sample extraction for automatic speech recognition using dialectal Assamese speech.

    PubMed

    Agarwalla, Swapna; Sarma, Kandarpa Kumar

    2016-06-01

    Automatic Speech Recognition (ASR) and related issues are continuously evolving as inseparable elements of Human Computer Interaction (HCI). With the assimilation of emerging concepts like big data and the Internet of Things (IoT) as extended elements of HCI, ASR techniques are passing through a paradigm shift. Of late, learning-based techniques have started to receive greater attention from research communities related to ASR, owing to the fact that the former possess a natural ability to mimic biological behavior and thereby aid ASR modeling and processing. Current learning-based ASR techniques are evolving further with the incorporation of big data and IoT-like concepts. In this paper, we report certain approaches based on machine learning (ML) used for the extraction of relevant samples from a big data space, and apply them to ASR using certain soft computing techniques for Assamese speech with dialectal variations. A class of ML techniques comprising the basic Artificial Neural Network (ANN) in feedforward (FF) and Deep Neural Network (DNN) forms, using raw speech, extracted features and frequency-domain forms, is considered. The Multi Layer Perceptron (MLP) is configured with inputs in several forms to learn class information obtained using clustering and manual labeling. DNNs are also used to extract specific sentence types. Initially, relevant samples are selected and assimilated from a large storage. Next, a few conventional methods are used for feature extraction of a few selected types. The features comprise both spectral and prosodic types. These are applied to Recurrent Neural Network (RNN) and Fully Focused Time Delay Neural Network (FFTDNN) structures to evaluate their performance in recognizing mood, dialect, speaker and gender variations in dialectal Assamese speech. The system is tested under several background noise conditions by considering the recognition rates (obtained using confusion matrices and manually) and computation time. It is found that the proposed ML-based sentence extraction techniques, and the composite feature set used with an RNN as classifier, outperform all other approaches. Using the ANN in FF form as a feature extractor, the performance of the system is evaluated and a comparison is made. Experimental results show that the application of big data samples has enhanced the learning of the ASR system. Further, the ANN-based sample and feature extraction techniques are found to be efficient enough to enable the application of ML techniques to big data aspects of ASR systems.

  14. Machine learning & artificial intelligence in the quantum domain: a review of recent progress

    NASA Astrophysics Data System (ADS)

    Dunjko, Vedran; Briegel, Hans J.

    2018-07-01

    Quantum information technologies, on the one hand, and intelligent learning systems, on the other, are both emergent technologies that are likely to have a transformative impact on our society in the future. The respective underlying fields of basic research—quantum information versus machine learning (ML) and artificial intelligence (AI)—have their own specific questions and challenges, which have hitherto been investigated largely independently. However, in a growing body of recent work, researchers have been probing the question of the extent to which these fields can indeed learn and benefit from each other. Quantum ML explores the interaction between quantum computing and ML, investigating how results and techniques from one field can be used to solve the problems of the other. Recently we have witnessed significant breakthroughs in both directions of influence. For instance, quantum computing is finding a vital application in providing speed-ups for ML problems, critical in our ‘big data’ world. Conversely, ML already permeates many cutting-edge technologies and may become instrumental in advanced quantum technologies. Aside from quantum speed-up in data analysis, or classical ML optimization used in quantum experiments, quantum enhancements have also been (theoretically) demonstrated for interactive learning tasks, highlighting the potential of quantum-enhanced learning agents. Finally, works exploring the use of AI for the very design of quantum experiments, and for performing parts of genuine research autonomously, have reported their first successes. Beyond the topics of mutual enhancement—exploring what ML/AI can do for quantum physics and vice versa—researchers have also broached the fundamental issue of quantum generalizations of learning and AI concepts. This deals with questions of the very meaning of learning and intelligence in a world that is fully described by quantum mechanics. 
In this review, we describe the main ideas, recent developments and progress in a broad spectrum of research investigating ML and AI in the quantum domain.

  15. Machine learning & artificial intelligence in the quantum domain: a review of recent progress.

    PubMed

    Dunjko, Vedran; Briegel, Hans J

    2018-07-01

    Quantum information technologies, on the one hand, and intelligent learning systems, on the other, are both emergent technologies that are likely to have a transformative impact on our society in the future. The respective underlying fields of basic research—quantum information versus machine learning (ML) and artificial intelligence (AI)—have their own specific questions and challenges, which have hitherto been investigated largely independently. However, in a growing body of recent work, researchers have been probing the question of the extent to which these fields can indeed learn and benefit from each other. Quantum ML explores the interaction between quantum computing and ML, investigating how results and techniques from one field can be used to solve the problems of the other. Recently we have witnessed significant breakthroughs in both directions of influence. For instance, quantum computing is finding a vital application in providing speed-ups for ML problems, critical in our 'big data' world. Conversely, ML already permeates many cutting-edge technologies and may become instrumental in advanced quantum technologies. Aside from quantum speed-up in data analysis, or classical ML optimization used in quantum experiments, quantum enhancements have also been (theoretically) demonstrated for interactive learning tasks, highlighting the potential of quantum-enhanced learning agents. Finally, works exploring the use of AI for the very design of quantum experiments, and for performing parts of genuine research autonomously, have reported their first successes. Beyond the topics of mutual enhancement—exploring what ML/AI can do for quantum physics and vice versa—researchers have also broached the fundamental issue of quantum generalizations of learning and AI concepts. This deals with questions of the very meaning of learning and intelligence in a world that is fully described by quantum mechanics. 
In this review, we describe the main ideas, recent developments and progress in a broad spectrum of research investigating ML and AI in the quantum domain.

  16. Machine Learning-based Virtual Screening and Its Applications to Alzheimer's Drug Discovery: A Review.

    PubMed

    Carpenter, Kristy A; Huang, Xudong

    2018-06-07

    Virtual Screening (VS) has emerged as an important tool in the drug development process, as it conducts efficient in silico searches over millions of compounds, ultimately increasing yields of potential drug leads. As a subset of Artificial Intelligence (AI), Machine Learning (ML) is a powerful way of conducting VS for drug leads. ML for VS generally involves assembling a filtered training set of compounds, comprised of known actives and inactives. After training, the model is validated and, if sufficiently accurate, used on previously unseen databases to screen for novel compounds with the desired drug-target binding activity. This study aims to review ML-based methods used for VS and their applications to Alzheimer's disease (AD) drug discovery. To update the current knowledge on ML for VS, we review thorough backgrounds, explanations, and VS applications of the following ML techniques: Naïve Bayes (NB), k-Nearest Neighbors (kNN), Support Vector Machines (SVM), Random Forests (RF), and Artificial Neural Networks (ANN). All techniques have found success in VS, but the future of VS is likely to lean more heavily toward the use of neural networks, and more specifically Convolutional Neural Networks (CNN), a subset of ANN that utilize convolution. We additionally conceptualize a workflow for conducting ML-based VS for potential therapeutics for AD, a complex neurodegenerative disease with no known cure or prevention. This serves both as an example of how to apply the concepts introduced earlier in the review and as a potential workflow for future implementation. Different ML techniques are powerful tools for VS, albeit each with its own advantages and disadvantages. ML-based VS can be applied to AD drug development.
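
    The kNN screening step the review covers can be illustrated in a few lines: compounds become 0/1 fingerprint vectors, an unseen compound is labelled by majority vote of its most similar training compounds, and similarity is measured with the Tanimoto coefficient common in cheminformatics. The fingerprints and labels below are invented toy data.

```python
def tanimoto(a, b):
    """Tanimoto similarity of two equal-length 0/1 fingerprints."""
    both = sum(x & y for x, y in zip(a, b))
    either = sum(x | y for x, y in zip(a, b))
    return both / either if either else 0.0

def knn_screen(train, query, k=3):
    """Majority vote over the k training fingerprints most similar to query."""
    top = sorted(train, key=lambda fl: -tanimoto(fl[0], query))[:k]
    votes = sum(label for _, label in top)
    return 1 if votes * 2 > k else 0  # 1 = predicted active

train = [([1, 1, 0, 0], 1), ([1, 1, 1, 0], 1),
         ([0, 0, 1, 1], 0), ([0, 1, 1, 1], 0)]
print(knn_screen(train, [1, 1, 0, 1]))  # toy query voted active
```

    In a real pipeline the training set would be curated actives/inactives for the target, and the query loop would run over an unseen compound database.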

  17. Exploration of Machine Learning Approaches to Predict Pavement Performance

    DOT National Transportation Integrated Search

    2018-03-23

    Machine learning (ML) techniques were used to model and predict pavement condition index (PCI) for various pavement types using a variety of input variables. The primary objective of this research was to develop and assess PCI predictive models for t...

  18. Impact of pixel-based machine-learning techniques on automated frameworks for delineation of gross tumor volume regions for stereotactic body radiation therapy.

    PubMed

    Kawata, Yasuo; Arimura, Hidetaka; Ikushima, Koujirou; Jin, Ze; Morita, Kento; Tokunaga, Chiaki; Yabu-Uchi, Hidetake; Shioyama, Yoshiyuki; Sasaki, Tomonari; Honda, Hiroshi; Sasaki, Masayuki

    2017-10-01

    The aim of this study was to investigate the impact of pixel-based machine learning (ML) techniques, i.e., the fuzzy c-means clustering method (FCM), the artificial neural network (ANN), and the support vector machine (SVM), on an automated framework for delineation of gross tumor volume (GTV) regions of lung cancer for stereotactic body radiation therapy. The morphological and metabolic features for GTV regions, which were determined based on the knowledge of radiation oncologists, were fed on a pixel-by-pixel basis into the respective FCM, ANN, and SVM ML techniques. The ML techniques were then incorporated into the automated delineation framework for GTVs, followed by an optimum contour selection (OCS) method, which we proposed in a previous study. The three ML-based frameworks were evaluated for 16 lung cancer cases (six solid, four ground glass opacity (GGO), six part-solid GGO) with datasets of planning computed tomography (CT) and 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET)/CT images, using the three-dimensional Dice similarity coefficient (DSC). The DSC denotes the degree of region similarity between the GTVs contoured by radiation oncologists and those estimated using the automated framework. The FCM-based framework achieved the highest DSC of 0.79±0.06, whereas the DSCs of the ANN-based and SVM-based frameworks were 0.76±0.14 and 0.73±0.14, respectively. The FCM-based framework provided the highest segmentation accuracy and precision without a learning process (lowest calculation cost), and can therefore be useful for delineation of tumor regions in practical treatment planning.
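
    The FCM update rules applied per pixel in the study can be sketched on 1-D toy values (rather than image features); this is an illustrative stand-in for the standard fuzzy c-means algorithm, not the authors' framework.

```python
def fcm(xs, centers, m=2.0, iters=20):
    """Alternate FCM membership and centre updates on 1-D data."""
    for _ in range(iters):
        # membership of point x in cluster i, from relative distances
        u = []
        for x in xs:
            d = [abs(x - c) + 1e-9 for c in centers]  # avoid zero division
            u.append([1.0 / sum((d[i] / d[k]) ** (2.0 / (m - 1.0))
                                for k in range(len(centers)))
                      for i in range(len(centers))])
        # centre update: fuzzily weighted mean of all points
        centers = [sum(u[j][i] ** m * xs[j] for j in range(len(xs))) /
                   sum(u[j][i] ** m for j in range(len(xs)))
                   for i in range(len(centers))]
    return centers

xs = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]  # two obvious 1-D groups
print(fcm(xs, centers=[1.0, 4.0]))   # centres settle near 0.1 and 5.1
```

    Unlike the ANN and SVM, no labelled training pass is needed, which matches the abstract's point about FCM's low calculation cost.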

  19. Machine learning in computational docking.

    PubMed

    Khamis, Mohamed A; Gomaa, Walid; Ahmed, Walaa F

    2015-03-01

    The objective of this paper is to highlight the state-of-the-art machine learning (ML) techniques in computational docking. The use of smart computational methods in the life cycle of drug design is relatively a recent development that has gained much popularity and interest over the last few years. Central to this methodology is the notion of computational docking which is the process of predicting the best pose (orientation + conformation) of a small molecule (drug candidate) when bound to a target larger receptor molecule (protein) in order to form a stable complex molecule. In computational docking, a large number of binding poses are evaluated and ranked using a scoring function. The scoring function is a mathematical predictive model that produces a score that represents the binding free energy, and hence the stability, of the resulting complex molecule. Generally, such a function should produce a set of plausible ligands ranked according to their binding stability along with their binding poses. In more practical terms, an effective scoring function should produce promising drug candidates which can then be synthesized and physically screened using high throughput screening process. Therefore, the key to computer-aided drug design is the design of an efficient highly accurate scoring function (using ML techniques). The methods presented in this paper are specifically based on ML techniques. Despite many traditional techniques have been proposed, the performance was generally poor. Only in the last few years started the application of the ML technology in the design of scoring functions; and the results have been very promising. The ML-based techniques are based on various molecular features extracted from the abundance of protein-ligand information in the public molecular databases, e.g., protein data bank bind (PDBbind). 
In this paper, we present this paradigm shift, elaborating on the main constituent elements of the ML approach to molecular docking along with the state-of-the-art research in this area. For instance, the best random forest (RF)-based scoring function on PDBbind v2007 achieves a Pearson correlation coefficient between the predicted and experimentally determined binding affinities of 0.803, while the best conventional scoring function achieves 0.644. The best RF-based ranking power ranks the ligands correctly based on their experimentally determined binding affinities with an accuracy of 62.5% and identifies the top binding ligand with an accuracy of 78.1%. We conclude with open questions and potential future research directions that can be pursued in smart computational docking: using molecular features of different nature (geometrical, energy terms, pharmacophore), advanced ML techniques (e.g., deep learning), and combining multiple ML models. Copyright © 2015 Elsevier B.V. All rights reserved.
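
    The scoring-power figures quoted above are Pearson correlation coefficients between predicted and experimentally measured binding affinities; a minimal sketch of that metric (variable names are illustrative):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences,
    e.g. predicted vs. experimental binding affinities."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

    A value near 1.0 indicates the scoring function ranks complexes almost exactly as experiment does; 0.0 indicates no linear relationship.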

  20. An introduction and overview of machine learning in neurosurgical care.

    PubMed

    Senders, Joeky T; Zaki, Mark M; Karhade, Aditya V; Chang, Bliss; Gormley, William B; Broekman, Marike L; Smith, Timothy R; Arnaout, Omar

    2018-01-01

    Machine learning (ML) is a branch of artificial intelligence that allows computers to learn from large complex datasets without being explicitly programmed. Although ML is already widely manifest in our daily lives in various forms, the considerable potential of ML has yet to find its way into mainstream medical research and day-to-day clinical care. The complex diagnostic and therapeutic modalities used in neurosurgery provide a vast amount of data that is ideally suited for ML models. This systematic review explores ML's potential to assist and improve neurosurgical care. A systematic literature search was performed in the PubMed and Embase databases to identify all potentially relevant studies up to January 1, 2017. All studies were included that evaluated ML models assisting neurosurgical treatment. Of the 6,402 citations identified, 221 studies were selected after subsequent title/abstract and full-text screening. In these studies, ML was used to assist surgical treatment of patients with epilepsy, brain tumors, spinal lesions, neurovascular pathology, Parkinson's disease, traumatic brain injury, and hydrocephalus. Across multiple paradigms, ML was found to be a valuable tool for presurgical planning, intraoperative guidance, neurophysiological monitoring, and neurosurgical outcome prediction. ML has started to find applications aimed at improving neurosurgical care by increasing the efficiency and precision of perioperative decision-making. A thorough validation of specific ML models is essential before implementation in clinical neurosurgical care. To bridge the gap between research and clinical care, practical and ethical issues should be considered parallel to the development of these techniques.

  1. Machine Learning–Based Differential Network Analysis: A Study of Stress-Responsive Transcriptomes in Arabidopsis[W

    PubMed Central

    Ma, Chuang; Xin, Mingming; Feldmann, Kenneth A.; Wang, Xiangfeng

    2014-01-01

Machine learning (ML) is an intelligent data mining technique that builds a prediction model based on the learning of prior knowledge to recognize patterns in large-scale data sets. We present an ML-based methodology for transcriptome analysis via comparison of gene coexpression networks, implemented as an R package called machine learning–based differential network analysis (mlDNA), and apply this method to reanalyze a set of abiotic stress expression data in Arabidopsis thaliana. mlDNA first used an ML-based filtering process to remove nonexpressed, constitutively expressed, or non-stress-responsive “noninformative” genes prior to network construction, through learning the patterns of 32 expression characteristics of known stress-related genes. The retained “informative” genes were subsequently analyzed by ML-based network comparison to predict candidate stress-related genes showing expression and network differences between control and stress networks, based on 33 network topological characteristics. Comparative evaluation of the network-centric and gene-centric analytic methods showed that mlDNA substantially outperformed traditional statistical testing–based differential expression analysis at identifying stress-related genes, with markedly improved prediction accuracy. To experimentally validate the mlDNA predictions, we selected 89 candidates out of the 1784 predicted salt stress–related genes with available SALK T-DNA mutagenesis lines for phenotypic screening and identified two previously unreported genes, mutants of which showed salt-sensitive phenotypes. PMID:24520154
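
    The network-comparison idea above can be illustrated with a toy coexpression-network builder: genes whose expression profiles are strongly correlated are linked, and the edge sets obtained under control and stress conditions are then compared. The gene names, threshold, and data below are illustrative and are not the mlDNA package API:

```python
from math import sqrt

def correlation(xs, ys):
    """Pearson correlation between two expression profiles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def coexpression_edges(expr, threshold=0.8):
    """expr: dict gene -> expression profile.
    Link every gene pair whose |correlation| meets the threshold."""
    genes = sorted(expr)
    return {(g, h)
            for i, g in enumerate(genes) for h in genes[i + 1:]
            if abs(correlation(expr[g], expr[h])) >= threshold}
```

    Edges present under stress but absent in the control network (the set difference of the two edge sets) point at candidate stress-responsive rewiring.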

  2. A machine learning approach as a surrogate of finite element analysis-based inverse method to estimate the zero-pressure geometry of human thoracic aorta.

    PubMed

    Liang, Liang; Liu, Minliang; Martin, Caitlin; Sun, Wei

    2018-05-09

Advances in structural finite element analysis (FEA) and medical imaging have made it possible to investigate the in vivo biomechanics of human organs such as blood vessels, for which organ geometries at the zero-pressure level need to be recovered. Although FEA-based inverse methods are available for zero-pressure geometry estimation, they typically require iterative computation, which is time-consuming and may not be suitable for time-sensitive clinical applications. In this study, using machine learning (ML) techniques, we developed an ML model to estimate the zero-pressure geometry of the human thoracic aorta given 2 pressurized geometries of the same patient at 2 different blood pressure levels. For the ML model development, an FEA-based method was used to generate a dataset of aorta geometries of 3125 virtual patients. The ML model, which was trained and tested on the dataset, is capable of recovering zero-pressure geometries consistent with those generated by the FEA-based method. Thus, this study demonstrates the feasibility and great potential of using ML techniques as a fast surrogate of FEA-based inverse methods to recover zero-pressure geometries of human organs. Copyright © 2018 John Wiley & Sons, Ltd.
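
    The surrogate idea, training a cheap regression on simulator-generated pairs so the expensive inverse computation can be skipped at prediction time, can be sketched with a one-parameter toy forward model. The linear pressurization model and all names below are illustrative, not the paper's FEA pipeline:

```python
def forward(r0, pressure):
    """Toy 'simulator': pressurized radius of a vessel whose
    zero-pressure radius is r0 (stand-in for an FEA forward solve)."""
    return r0 * (1.0 + 0.01 * pressure)

def fit_inverse_surrogate(samples):
    """Least-squares fit r0 ≈ a * r_pressurized + b
    from (r_pressurized, r0) training pairs."""
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    a = (sum((x - mx) * (y - my) for x, y in samples)
         / sum((x - mx) ** 2 for x, _ in samples))
    return a, my - a * mx

# Generate "virtual patients" with the forward model, then learn the inverse map.
training = [(forward(r0, 80.0), r0) for r0 in (10.0, 11.0, 12.0, 13.0)]
a, b = fit_inverse_surrogate(training)
predicted_r0 = a * forward(12.5, 80.0) + b  # recover zero-pressure radius
```

    Once fitted, the surrogate answers in microseconds, whereas each FEA-based inverse solve would require many iterative forward simulations.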

  3. Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants.

    PubMed

    Navarro, Pedro J; Pérez, Fernando; Weiss, Julia; Egea-Cortines, Marcos

    2016-05-05

Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple. However, data handling and analysis are not as developed as the sampling capacities. We present a system based on machine learning (ML) algorithms and computer vision intended to solve the automatic phenotype data analysis of plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near-infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and support vector machine (SVM). Each ML algorithm was executed with different kernel functions and trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% with kNN for RGB images and 99.34% with SVM for NIR images. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully but may require different ML algorithms for segmentation.
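
    As a minimal illustration of the kNN classifier tested above, each pixel (or any feature vector) is labelled by a majority vote of its k closest training examples; the feature values and labels below are illustrative:

```python
from collections import Counter
from math import dist  # Euclidean distance, Python 3.8+

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs.
    Returns the majority label among the k nearest neighbours of query."""
    nearest = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

    In a segmentation setting, `train` would hold per-pixel colour/intensity features labelled by class, and `knn_predict` would be applied to every pixel of a new image.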

  4. Learning atoms for materials discovery.

    PubMed

    Zhou, Quan; Tang, Peizhe; Liu, Shenxiu; Pan, Jinbo; Yan, Qimin; Zhang, Shou-Cheng

    2018-06-26

Exciting advances have been made in artificial intelligence (AI) during recent decades. Among them, applications of machine learning (ML) and deep learning techniques have brought human-competitive performance to tasks in various fields, including image recognition, speech recognition, and natural language understanding. Even in Go, the ancient game of profound complexity, the AI player has already beaten human world champions convincingly, with and without learning from humans. In this work, we show that our unsupervised machines (Atom2Vec) can learn the basic properties of atoms by themselves from an extensive database of known compounds and materials. These learned properties are represented as high-dimensional vectors, and clustering of atoms in vector space classifies them into meaningful groups consistent with human knowledge. We use the atom vectors as basic input units for neural networks and other ML models designed and trained to predict materials properties, which demonstrate significant accuracy. Copyright © 2018 the Author(s). Published by PNAS.
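
    Once atoms are represented as vectors, chemical similarity can be probed with simple vector arithmetic; a toy sketch using cosine similarity (the three-dimensional vectors below are made up for illustration and are not actual Atom2Vec embeddings):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    num = sum(a * b for a, b in zip(u, v))
    return num / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def most_similar(vectors, query):
    """vectors: dict element -> embedding.
    Returns the other element whose embedding is closest by cosine similarity."""
    return max((name for name in vectors if name != query),
               key=lambda name: cosine(vectors[name], vectors[query]))
```

    With learned embeddings, chemically related elements (e.g. the alkali metals) end up as mutual nearest neighbours, which is the clustering behaviour the paper reports.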

  5. Integrating Natural Language Processing and Machine Learning Algorithms to Categorize Oncologic Response in Radiology Reports.

    PubMed

    Chen, Po-Hao; Zafar, Hanna; Galperin-Aizenberg, Maya; Cook, Tessa

    2018-04-01

A significant volume of medical data remains unstructured. Natural language processing (NLP) and machine learning (ML) techniques have been shown to successfully extract insights from radiology reports. However, the codependent effects of NLP and ML in this context have not been well studied. Between April 1, 2015 and November 1, 2016, 9418 cross-sectional abdomen/pelvis CT and MR examinations containing our internal structured reporting element for cancer were separated into four categories: Progression, Stable Disease, Improvement, or No Cancer. We combined each of three NLP techniques with five ML algorithms to predict the assigned label using the unstructured report text and compared the performance of each combination. The three NLP algorithms included term frequency-inverse document frequency (TF-IDF), term frequency weighting (TF), and 16-bit feature hashing. The ML algorithms included logistic regression (LR), random decision forest (RDF), one-vs-all support vector machine (SVM), one-vs-all Bayes point machine (BPM), and fully connected neural network (NN). The best-performing NLP model consisted of tokenized unigrams and bigrams with TF-IDF. Increasing N-gram length yielded little to no added benefit for most ML algorithms. With all parameters optimized, SVM had the best performance on the test dataset, with an average accuracy of 90.6% and an F score of 0.813. The interplay between ML and NLP algorithms and their effect on interpretation accuracy is complex. The best accuracy is achieved when both algorithms are optimized concurrently.
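
    The TF-IDF weighting named above can be sketched in a few lines; here tf is the within-document term frequency and idf = log(N/df). Tokenization and smoothing choices vary between implementations, so this is one common variant, not necessarily the paper's:

```python
from math import log
from collections import Counter

def tfidf(docs):
    """docs: list of token lists.
    Returns one dict per document mapping term -> tf-idf weight,
    with tf = count / doc length and idf = log(N / document frequency)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))           # each doc counts a term at most once
    out = []
    for doc in docs:
        counts = Counter(doc)
        out.append({t: (c / len(doc)) * log(n / df[t])
                    for t, c in counts.items()})
    return out
```

    Terms appearing in every report (like "disease" below) get weight zero, while report-specific terms are up-weighted, which is exactly what makes the representation useful as classifier input.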

  6. Machine Learning Force Field Parameters from Ab Initio Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ying; Li, Hui; Pickard, Frank C.

Machine learning (ML) techniques combined with a genetic algorithm (GA) have been applied to determine polarizable force field parameters using only ab initio data from quantum mechanics (QM) calculations of molecular clusters at the MP2/6-31G(d,p), DFMP2(fc)/jul-cc-pVDZ, and DFMP2(fc)/jul-cc-pVTZ levels to predict experimental condensed phase properties (i.e., density and heat of vaporization). The performance of this ML/GA approach is demonstrated on 4943 dimer electrostatic potentials and 1250 cluster interaction energies for methanol. Excellent agreement between the training data set from QM calculations and the optimized force field model was achieved. The results were further improved by introducing an offset factor during the machine learning process to compensate for the discrepancy between the QM calculated energy and the energy reproduced by the optimized force field, while maintaining the local “shape” of the QM energy surface. Throughout the machine learning process, experimental observables were not involved in the objective function, but were only used for model validation. The best model, optimized from the QM data at the DFMP2(fc)/jul-cc-pVTZ level, appears to perform even better than the original AMOEBA force field (amoeba09.prm), which was optimized empirically to match liquid properties. The present effort shows the possibility of using machine learning techniques to develop a descriptive polarizable force field using only QM data. The ML/GA strategy to optimize force field parameters described here could easily be extended to other molecular systems.
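
    A genetic algorithm of the kind used above evolves a population of candidate parameters toward those that minimize an objective, here a toy least-squares misfit against reference energies. The objective, bounds, and genetic operators below are an illustrative sketch, not the study's actual GA:

```python
import random

def ga_fit(objective, bounds, pop_size=40, generations=150, sigma=0.2, seed=0):
    """Minimal real-valued genetic algorithm (toy sketch).

    objective: scalar function of one parameter, to be minimized.
    bounds: (lo, hi) search interval for the parameter.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        parents = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2.0 + rng.gauss(0.0, sigma)  # crossover + mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children               # elitism: parents survive
    return min(pop, key=objective)
```

    In the force field setting, the "parameter" would be a vector of polarizabilities and multipoles, and the objective would be the misfit against QM electrostatic potentials and cluster energies; the loop structure stays the same.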

  7. Stable Atlas-based Mapped Prior (STAMP) machine-learning segmentation for multicenter large-scale MRI data.

    PubMed

    Kim, Eun Young; Magnotta, Vincent A; Liu, Dawei; Johnson, Hans J

    2014-09-01

Machine learning (ML)-based segmentation methods are a common technique in the medical image processing field. Although numerous research groups have investigated ML-based segmentation frameworks, unanswered questions remain about the performance variability arising from the choice of two key components: the ML algorithm and the intensity normalization. This investigation reveals that the choice of those elements plays a major part in determining segmentation accuracy and generalizability. The approach we have used in this study aims to evaluate the relative benefits of the two elements within a subcortical MRI segmentation framework. Experiments were conducted to contrast eight machine-learning algorithm configurations and 11 normalization strategies for our brain MR segmentation framework. For the intensity normalization, a Stable Atlas-based Mapped Prior (STAMP) was utilized to take better account of contrast along boundaries of structures. Comparing eight machine learning algorithms on down-sampled segmentation MR data, a significant improvement was obtained using ensemble-based ML algorithms (i.e., random forest) or ANN algorithms. Further investigation of these two algorithms revealed that the random forest results provided exceptionally good agreement with manual delineations by experts. Additional experiments showed that STAMP-based intensity normalization also improved the robustness of segmentation for multicenter data sets. The constructed framework obtained good multicenter reliability and was successfully applied to a large multicenter MR data set (n>3000). Less than 10% of automated segmentations were recommended for minimal expert intervention. These results demonstrate the feasibility of using ML-based segmentation tools for processing large amounts of multicenter MR images. We demonstrated dramatically different result profiles in segmentation accuracy according to the choice of ML algorithm and intensity normalization. Copyright © 2014 Elsevier Inc. All rights reserved.
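
    Intensity normalization, one of the two components studied above, can be as simple as mapping each scan's intensities to zero mean and unit variance before classification; a minimal z-score sketch (STAMP itself is atlas-based and considerably more involved):

```python
from math import sqrt

def zscore_normalize(intensities):
    """Map a scan's intensity values to zero mean and unit variance
    (population standard deviation), so scanners with different
    intensity scales become comparable."""
    n = len(intensities)
    mu = sum(intensities) / n
    sd = sqrt(sum((v - mu) ** 2 for v in intensities) / n)
    return [(v - mu) / sd for v in intensities]
```

    Normalizing per scan is what lets a classifier trained at one center generalize to images from another, which is the multicenter robustness the study measures.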

  8. An Event-Triggered Machine Learning Approach for Accelerometer-Based Fall Detection.

    PubMed

    Putra, I Putu Edy Suardiyana; Brusey, James; Gaura, Elena; Vesilo, Rein

    2017-12-22

The fixed-size non-overlapping sliding window (FNSW) and fixed-size overlapping sliding window (FOSW) approaches are the most commonly used data-segmentation techniques in machine learning-based fall detection using accelerometer sensors. However, these techniques do not segment by fall stages (pre-impact, impact, and post-impact) and thus useful information is lost, which may reduce the detection rate of the classifier. Aligning the segment with the fall stage is difficult, as the segment size varies. We propose an event-triggered machine learning (EvenT-ML) approach that aligns each fall stage so that the characteristic features of the fall stages are more easily recognized. To evaluate our approach, two publicly accessible datasets were used. Classification and regression tree (CART), k-nearest neighbor (k-NN), logistic regression (LR), and the support vector machine (SVM) were used to train the classifiers. EvenT-ML gives classifier F-scores of 98% for a chest-worn sensor and 92% for a waist-worn sensor, and significantly reduces the computational cost compared with the FNSW- and FOSW-based approaches, with reductions of up to 8-fold and 78-fold, respectively. EvenT-ML achieves a significantly better F-score than existing fall detection approaches. These results indicate that aligning feature segments with fall stages significantly increases the detection rate and reduces the computational cost.
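
    The FNSW and FOSW baselines discussed above differ only in the step between windows; a minimal sketch of fixed-size windowing over an accelerometer stream (parameter names are illustrative):

```python
def sliding_windows(samples, size, overlap=0.0):
    """Segment a 1-D signal into fixed-size windows.

    overlap=0.0 gives non-overlapping windows (FNSW);
    0 < overlap < 1 gives overlapping windows (FOSW).
    """
    step = max(1, int(size * (1.0 - overlap)))
    return [samples[i:i + size]
            for i in range(0, len(samples) - size + 1, step)]
```

    Because these windows are placed blindly, a fall's pre-impact, impact, and post-impact phases can be split across window boundaries; the event-triggered approach instead starts a segment at a detected event so each phase stays intact.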

  9. Using a Guided Machine Learning Ensemble Model to Predict Discharge Disposition following Meningioma Resection.

    PubMed

    Muhlestein, Whitney E; Akagi, Dallin S; Kallos, Justiss A; Morone, Peter J; Weaver, Kyle D; Thompson, Reid C; Chambless, Lola B

    2018-04-01

Objective  Machine learning (ML) algorithms are powerful tools for predicting patient outcomes. This study pilots a novel approach to algorithm selection and model creation, using prediction of discharge disposition following meningioma resection as a proof of concept. Materials and Methods  A diverse set of ML algorithms was trained on a single-institution database of meningioma patients to predict discharge disposition. Algorithms were ranked by predictive power, and top performers were combined to create an ensemble model. The final ensemble was internally validated on never-before-seen data to demonstrate generalizability. The predictive power of the ensemble was compared with a logistic regression. Further analyses were performed to identify how important variables impact the ensemble. Results  Our ensemble model predicted disposition significantly better than a logistic regression (area under the curve of 0.78 and 0.71, respectively, p = 0.01). Tumor size, presentation at the emergency department, body mass index, convexity location, and preoperative motor deficit most strongly influence the model, though the independent impact of individual variables is nuanced. Conclusion  Using a novel ML technique, we built a guided ML ensemble model that predicts discharge destination following meningioma resection with greater predictive power than a logistic regression and that provides greater clinical insight than a univariate analysis. These techniques can be extended to predict many other patient outcomes of interest.
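
    The rank-then-combine step described above can be sketched as follows: score each candidate model on validation data, keep the top performers, and average their predicted probabilities. The model stand-ins and scores below are illustrative, not the study's actual algorithms:

```python
def rank_and_ensemble(models, scores, top_k=2):
    """models: callables mapping an input to a predicted probability.
    scores: one validation score (e.g. AUC) per model.
    Returns an ensemble that averages the top_k ranked models."""
    ranked = [m for _, m in sorted(zip(scores, models),
                                   key=lambda pair: pair[0], reverse=True)]
    top = ranked[:top_k]
    return lambda x: sum(m(x) for m in top) / len(top)
```

    Averaging several well-ranked but differently-biased models typically reduces variance relative to any single member, which is the usual rationale for this kind of guided ensemble.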

  10. Comparing deep neural network and other machine learning algorithms for stroke prediction in a large-scale population-based electronic medical claims database.

    PubMed

    Chen-Ying Hung; Wei-Chen Chen; Po-Tsun Lai; Ching-Heng Lin; Chi-Chun Lee

    2017-07-01

Electronic medical claims (EMCs) can be used to accurately predict the occurrence of a variety of diseases, which can contribute to precise medical interventions. While there is a growing interest in the application of machine learning (ML) techniques to clinical problems, the use of deep learning in healthcare has gained attention only recently. Deep learning, such as the deep neural network (DNN), has achieved impressive results in the areas of speech recognition, computer vision, and natural language processing in recent years. However, deep learning is often difficult to comprehend due to the complexities of its framework. Furthermore, this method has not yet been demonstrated to achieve better performance than other conventional ML algorithms in disease prediction tasks using EMCs. In this study, we utilize a large population-based EMC database of around 800,000 patients to compare DNN with three other ML approaches for predicting 5-year stroke occurrence. The results show that DNN and the gradient boosting decision tree (GBDT) achieve similarly high prediction accuracies that are better than those of the logistic regression (LR) and support vector machine (SVM) approaches. Meanwhile, DNN achieves optimal results using less patient data than the GBDT method.

  11. Machine Learning of Parameters for Accurate Semiempirical Quantum Chemical Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter

    2015-05-12

We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.
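
    The accuracy gain reported above is stated as a mean absolute error (MAE) against reference atomization enthalpies; a minimal sketch of that metric (the energy values are illustrative):

```python
def mean_absolute_error(predicted, reference):
    """MAE between predicted and reference values,
    e.g. atomization enthalpies in kcal/mol."""
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(reference)
```

    The 6.3 → 1.7 kcal/mol drop quoted in the abstract is exactly this quantity evaluated over the isomer set, once with standard OM2 predictions and once with ML-corrected ones.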

  13. Automated diagnosis of myositis from muscle ultrasound: Exploring the use of machine learning and deep learning methods

    PubMed Central

    Burlina, Philippe; Billings, Seth; Joshi, Neil

    2017-01-01

Objective To evaluate the use of ultrasound coupled with machine learning (ML) and deep learning (DL) techniques for automated or semi-automated classification of myositis. Methods Eighty subjects, comprising 19 with inclusion body myositis (IBM), 14 with polymyositis (PM), 14 with dermatomyositis (DM), and 33 normal (N) subjects, were included in this study, where 3214 muscle ultrasound images of 7 muscles (observed bilaterally) were acquired. We considered three problems of classification including (A) normal vs. affected (DM, PM, IBM); (B) normal vs. IBM patients; and (C) IBM vs. other types of myositis (DM or PM). We studied the use of an automated DL method using deep convolutional neural networks (DL-DCNNs) for diagnostic classification and compared it with a semi-automated conventional ML method based on random forests (ML-RF) and “engineered” features. We used the known clinical diagnosis as the gold standard for evaluating performance of muscle classification. Results The performance of the DL-DCNN method resulted in accuracies ± standard deviation of 76.2% ± 3.1% for problem (A), 86.6% ± 2.4% for (B) and 74.8% ± 3.9% for (C), while the ML-RF method led to accuracies of 72.3% ± 3.3% for problem (A), 84.3% ± 2.3% for (B) and 68.9% ± 2.5% for (C). Conclusions This study demonstrates the application of machine learning methods for automatically or semi-automatically classifying inflammatory muscle disease using muscle ultrasound. Compared to the conventional random forest machine learning method used here, which has the drawback of requiring manual delineation of muscle/fat boundaries, DCNN-based classification by and large improved the accuracies in all classification problems while providing a fully automated approach to classification. PMID:28854220

  15. Comment on 'Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study'.

    PubMed

    Valdes, Gilmer; Interian, Yannet

    2018-03-15

The application of machine learning (ML) presents tremendous opportunities for the field of oncology, thus we read 'Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study' with great interest. In this article, the authors used state-of-the-art techniques: a pre-trained convolutional neural network (VGG-16 CNN), transfer learning, data augmentation, dropout and early stopping, all of which are directly responsible for the success and the excitement that these algorithms have created in other fields. We believe that the use of these techniques can offer tremendous opportunities in the field of Medical Physics, and as such we would like to praise the authors for their pioneering application to the field of Radiation Oncology. That being said, given that the field of Medical Physics has unique characteristics that differentiate us from those fields where these techniques have been applied successfully, we would like to raise some points for future discussion and follow-up studies that could help the community understand the limitations and nuances of deep learning techniques.

  16. Into the Bowels of Depression: Unravelling Medical Symptoms Associated with Depression by Applying Machine-Learning Techniques to a Community Based Population Sample.

    PubMed

    Dipnall, Joanna F; Pasco, Julie A; Berk, Michael; Williams, Lana J; Dodd, Seetal; Jacka, Felice N; Meyer, Denny

    2016-01-01

Depression is commonly comorbid with many other somatic diseases and symptoms. Identification of individuals in clusters with comorbid symptoms may reveal new pathophysiological mechanisms and treatment targets. The aim of this research was to combine machine-learning (ML) algorithms with traditional regression techniques, utilising self-reported medical symptoms to identify and describe clusters of individuals with increased rates of depression from a large cross-sectional community-based population epidemiological study. A multi-staged methodology utilising ML and traditional statistical techniques was performed using the community-based population National Health and Nutrition Examination Study (2009-2010) (N = 3,922). A self-organizing map (SOM) ML algorithm, combined with hierarchical clustering, was used to create participant clusters based on 68 medical symptoms. Binary logistic regression, controlling for sociodemographic confounders, was then used to identify the key clusters of participants with higher levels of depression (PHQ-9≥10, n = 377). Finally, a Multiple Additive Regression Tree boosted ML algorithm was run to identify the important medical symptoms for each key cluster within 17 broad categories: heart, liver, thyroid, respiratory, diabetes, arthritis, fractures and osteoporosis, skeletal pain, blood pressure, blood transfusion, cholesterol, vision, hearing, psoriasis, weight, bowels and urinary. Five clusters of participants, based on medical symptoms, were identified to have significantly increased rates of depression compared to the cluster with the lowest rate: odds ratios ranged from 2.24 (95% CI 1.56, 3.24) to 6.33 (95% CI 1.67, 24.02). The ML boosted regression algorithm identified three key medical condition categories as being significantly more common in these clusters: bowel, pain and urinary symptoms. Bowel-related symptoms were found to dominate the relative importance of symptoms within the five key clusters. This methodology shows promise for the identification of conditions in general populations and supports the current focus on the potential importance of bowel symptoms and the gut in mental health research.
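
    The SOM step above maps each participant's symptom vector onto a small grid of prototype units, which are then clustered; a minimal one-dimensional SOM sketch (the grid size, learning schedule, and two-dimensional toy data are illustrative, far smaller than the 68-symptom study):

```python
import random

def train_som(data, n_units=4, epochs=30, lr=0.5, seed=0):
    """Minimal 1-D self-organizing map (toy sketch, bubble neighbourhood).
    data: list of equal-length feature vectors.
    Returns the learned unit weight vectors."""
    rng = random.Random(seed)
    dim = len(data[0])
    units = [[rng.random() for _ in range(dim)] for _ in range(n_units)]

    def bmu(x):  # index of the best-matching (closest) unit
        return min(range(n_units),
                   key=lambda u: sum((w - v) ** 2 for w, v in zip(units[u], x)))

    for epoch in range(epochs):
        rate = lr * (1.0 - epoch / epochs)       # decaying learning rate
        for x in data:
            b = bmu(x)
            for u in (b - 1, b, b + 1):          # update BMU and grid neighbours
                if 0 <= u < n_units:
                    r = rate if u == b else rate * 0.5
                    units[u] = [w + r * (v - w) for w, v in zip(units[u], x)]
    return units
```

    After training, participants mapping to the same (or adjacent) units form the clusters that are subsequently fed into the regression stage.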

  17. A Machine Learning Ensemble Classifier for Early Prediction of Diabetic Retinopathy.

    PubMed

    S K, Somasundaram; P, Alli

    2017-11-09

    The main complication of diabetes is Diabetic retinopathy (DR), retinal vascular disease and it leads to the blindness. Regular screening for early DR disease detection is considered as an intensive labor and resource oriented task. Therefore, automatic detection of DR diseases is performed only by using the computational technique is the great solution. An automatic method is more reliable to determine the presence of an abnormality in Fundus images (FI) but, the classification process is poorly performed. Recently, few research works have been designed for analyzing texture discrimination capacity in FI to distinguish the healthy images. However, the feature extraction (FE) process was not performed well, due to the high dimensionality. Therefore, to identify retinal features for DR disease diagnosis and early detection using Machine Learning and Ensemble Classification method, called, Machine Learning Bagging Ensemble Classifier (ML-BEC) is designed. The ML-BEC method comprises of two stages. The first stage in ML-BEC method comprises extraction of the candidate objects from Retinal Images (RI). The candidate objects or the features for DR disease diagnosis include blood vessels, optic nerve, neural tissue, neuroretinal rim, optic disc size, thickness and variance. These features are initially extracted by applying Machine Learning technique called, t-distributed Stochastic Neighbor Embedding (t-SNE). Besides, t-SNE generates a probability distribution across high-dimensional images where the images are separated into similar and dissimilar pairs. Then, t-SNE describes a similar probability distribution across the points in the low-dimensional map. This lessens the Kullback-Leibler divergence among two distributions regarding the locations of the points on the map. The second stage comprises of application of ensemble classifiers to the extracted features for providing accurate analysis of digital FI using machine learning. 
In this stage, an automatic DR screening system using a Bagging Ensemble Classifier (BEC) is investigated. Through its voting process, bagging in ML-BEC minimizes the error due to the variance of the base classifier. Using publicly available retinal image databases, the classifier is trained with 25% of the RI. Results show that the ensemble classifier achieves better classification accuracy (CA) than single classification models. Empirical experiments suggest that the machine learning-based ensemble classifier is also efficient at reducing DR classification time (CT).
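A minimal sketch of the two-stage ML-BEC idea using scikit-learn stand-ins (the paper's implementation is not available here; the synthetic data, the TSNE/BaggingClassifier choices, and all parameter values below are illustrative assumptions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.manifold import TSNE
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for retinal-image feature vectors (no real fundus data)
X, y = make_classification(n_samples=300, n_features=30, n_informative=10,
                           class_sep=2.0, random_state=0)

# Stage 1: embed the high-dimensional features into a low-dimensional map
# (t-SNE has no out-of-sample transform, so the whole set is embedded here)
X_low = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

# Stage 2: a bagging ensemble of base classifiers, combined by voting,
# trained on 25% of the samples as in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(X_low, y, train_size=0.25,
                                          random_state=0)
bec = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                        random_state=0).fit(X_tr, y_tr)
acc = bec.score(X_te, y_te)
print("held-out classification accuracy: %.2f" % acc)
```

Note that because t-SNE offers no transform for unseen data, a deployed screening system would need a parametric embedding; the whole-set embedding above is purely for illustration.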

  18. Overview of deep learning in medical imaging.

    PubMed

    Suzuki, Kenji

    2017-09-01

    The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. 
In our experience, MTANNs were substantially more efficient to develop, achieved higher performance, and required fewer training cases than CNNs. "Deep learning", or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades.

  19. Machine Learning for Discriminating Quantum Measurement Trajectories and Improving Readout.

    PubMed

    Magesan, Easwar; Gambetta, Jay M; Córcoles, A D; Chow, Jerry M

    2015-05-22

    Current methods for classifying measurement trajectories in superconducting qubit systems produce fidelities systematically lower than those predicted by experimental parameters. Here, we place current classification methods within the framework of machine learning (ML) algorithms and improve on them by investigating more sophisticated ML approaches. We find that nonlinear algorithms and clustering methods produce significantly higher assignment fidelities that help close the gap to the fidelity possible under ideal noise conditions. Clustering methods group trajectories into natural subsets within the data, which allows for the diagnosis of systematic errors. We find large clusters in the data associated with T1 processes and show these are the main source of discrepancy between our experimental and ideal fidelities. These error diagnosis techniques help provide a path forward to improve qubit measurements.
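The classification-plus-clustering idea can be sketched on simulated integrated I/Q readout shots (the data, the SVC/KMeans choices, and all values below are illustrative assumptions, not the paper's experiment):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 500
ground = rng.normal([0.0, 0.0], 0.3, size=(n, 2))        # |0> shots
excited = rng.normal([2.0, 2.0], 0.3, size=(n, 2))       # |1> shots
decayed = rng.normal([1.0, 1.0], 0.3, size=(n // 5, 2))  # shots caught mid T1 decay
X = np.vstack([ground, excited, decayed])
y = np.array([0] * n + [1] * n + [1] * (n // 5))         # prepared state labels

# Supervised assignment with a nonlinear (RBF) SVM
clf = SVC(kernel="rbf").fit(X, y)
fidelity = clf.score(X, y)
print("assignment fidelity: %.3f" % fidelity)

# Unsupervised diagnosis: k-means isolates the decay shots in their own
# cluster, flagging T1 events as a systematic error source
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels))
```

The third, intermediate cluster is the toy analogue of the T1-associated clusters described in the abstract.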

  20. Comment on ‘Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study’

    NASA Astrophysics Data System (ADS)

    Valdes, Gilmer; Interian, Yannet

    2018-03-01

The application of machine learning (ML) presents tremendous opportunities for the field of oncology, thus we read ‘Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study’ with great interest. In this article, the authors used state-of-the-art techniques: a pre-trained convolutional neural network (VGG-16 CNN), transfer learning, data augmentation, dropout and early stopping, all of which are directly responsible for the success and the excitement that these algorithms have created in other fields. We believe that the use of these techniques can offer tremendous opportunities in the field of Medical Physics, and as such we would like to praise the authors for their pioneering application to the field of Radiation Oncology. That being said, given that the field of Medical Physics has unique characteristics that differentiate it from those fields where these techniques have been applied successfully, we would like to raise some points for future discussion and follow-up studies that could help the community understand the limitations and nuances of deep learning techniques.

  1. Classification techniques on computerized systems to predict and/or to detect Apnea: A systematic review.

    PubMed

    Pombo, Nuno; Garcia, Nuno; Bousson, Kouamana

    2017-03-01

Sleep apnea syndrome (SAS), which can significantly decrease quality of life, is associated with major health risks such as increased cardiovascular disease, sudden death, depression, irritability, hypertension, and learning difficulties. Thus, it is relevant and timely to present a systematic review describing significant applications of computational intelligence to SAS, including performance, beneficial and challenging effects, and modeling for decision-making across multiple scenarios. This study aims to systematically review the literature on systems for the detection and/or prediction of apnea events using a classification model. The forty-five included studies revealed a combination of classification techniques for the diagnosis of apnea: threshold-based models (14.75%) and machine learning (ML) models (85.25%). The ML models, clustered in a mind map, include neural networks (44.26%), regression (4.91%), instance-based methods (11.47%), Bayesian algorithms (1.63%), reinforcement learning (4.91%), dimensionality reduction (8.19%), ensemble learning (6.55%), and decision trees (3.27%). A classification model should be auto-adaptive and free of dependency on external human action. In addition, the accuracy of the classification models is related to effective feature selection. New high-quality studies based on randomized controlled trials, with models validated on large and multiple samples of data, are recommended. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  2. Comparison of Radio Frequency Distinct Native Attribute and Matched Filtering Techniques for Device Discrimination and Operation Identification

    DTIC Science & Technology

    identification. URE from ten MSP430F5529 16-bit microcontrollers were analyzed using: 1) RF distinct native attributes (RF-DNA) fingerprints paired with multiple...discriminant analysis/maximum likelihood (MDA/ML) classification, 2) RF-DNA fingerprints paired with generalized relevance learning vector quantized

  3. Interactive machine learning for health informatics: when do we need the human-in-the-loop?

    PubMed

    Holzinger, Andreas

    2016-06-01

Machine learning (ML) is the fastest growing field in computer science, and health informatics is among its greatest challenges. The goal of ML is to develop algorithms which can learn and improve over time and can be used for predictions. Most ML researchers concentrate on automatic machine learning (aML), where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from big data with many training sets. However, in the health domain we are sometimes confronted with a small number of data sets or rare events, where aML approaches suffer from insufficient training samples. Here interactive machine learning (iML) may be of help, having its roots in reinforcement learning, preference learning, and active learning. The term iML is not yet in wide use, so we define it as "algorithms that can interact with agents and can optimize their learning behavior through these interactions, where the agents can also be human." This "human-in-the-loop" approach can be beneficial in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization of health data, where human expertise can help to reduce an exponential search space through heuristic selection of samples. Therefore, what would otherwise be an NP-hard problem reduces greatly in complexity through the input and assistance of a human agent involved in the learning phase.

  4. Rainfall Prediction of Indian Peninsula: Comparison of Time Series Based Approach and Predictor Based Approach using Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Dash, Y.; Mishra, S. K.; Panigrahi, B. K.

    2017-12-01

Prediction of the northeast/post-monsoon rainfall which occurs during October, November and December (OND) over the Indian peninsula is a challenging task due to the dynamic nature of the uncertain, chaotic climate. It is imperative to elucidate this issue by examining the performance of different machine learning (ML) approaches. The prime objective of this research is to compare (a) statistical prediction using historical rainfall observations and global atmosphere-ocean predictors like Sea Surface Temperature (SST) and Sea Level Pressure (SLP), and (b) empirical prediction based on a time series analysis of past rainfall data without using any other predictors. Initially, ML techniques were applied to SST and SLP data (1948-2014) obtained from the NCEP/NCAR reanalysis monthly means provided by the NOAA ESRL PSD. Later, this study investigated the applicability of ML methods using the OND rainfall time series for 1948-2014, with forecasts up to 2018. The predicted values of the aforementioned methods were verified using observed time series data collected from the Indian Institute of Tropical Meteorology, and the results revealed good performance of the ML algorithms, with minimal error scores. Thus, both statistical and empirical methods are found to be useful for long-range climatic projections.
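The two compared strategies can be sketched on a synthetic series (the toy SST/SLP indices, the rainfall relation, and the model choices below are assumptions for illustration, not the study's data or methods):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
t = np.arange(300)
sst = np.sin(2 * np.pi * t / 60) + 0.1 * rng.normal(size=300)  # toy SST index
slp = np.cos(2 * np.pi * t / 60) + 0.1 * rng.normal(size=300)  # toy SLP index
rain = 2.0 * sst - 1.5 * slp + 0.2 * rng.normal(size=300)      # toy OND rainfall

# (a) Statistical prediction from climate predictors
Xp = np.column_stack([sst, slp])
rf = RandomForestRegressor(random_state=0).fit(Xp[:250], rain[:250])
mse_a = mean_squared_error(rain[250:], rf.predict(Xp[250:]))

# (b) Empirical prediction from the series' own past (3 lagged values)
lags = np.column_stack([rain[i:300 - 3 + i] for i in range(3)])
target = rain[3:]
ar = LinearRegression().fit(lags[:247], target[:247])
mse_b = mean_squared_error(target[247:], ar.predict(lags[247:]))

print("predictor-based MSE:", mse_a)
print("lag-based MSE:      ", mse_b)
```

Both toy models beat a climatological baseline here, mirroring the abstract's conclusion that both routes can be skillful.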

  5. Dynamical analysis of contrastive divergence learning: Restricted Boltzmann machines with Gaussian visible units.

    PubMed

    Karakida, Ryo; Okada, Masato; Amari, Shun-Ichi

    2016-07-01

The restricted Boltzmann machine (RBM) is an essential constituent of deep learning, but it is hard to train by using maximum likelihood (ML) learning, which minimizes the Kullback-Leibler (KL) divergence. Instead, contrastive divergence (CD) learning has been developed as an approximation of ML learning and widely used in practice. To clarify the performance of CD learning, in this paper, we analytically derive the fixed points where ML and CDn learning rules converge in two types of RBMs: one with Gaussian visible and Gaussian hidden units and the other with Gaussian visible and Bernoulli hidden units. In addition, we analyze the stability of the fixed points. As a result, we find that the stable points of the CDn learning rule coincide with those of the ML learning rule in a Gaussian-Gaussian RBM. We also reveal that larger principal components of the input data are extracted at the stable points. Moreover, in a Gaussian-Bernoulli RBM, we find that both ML and CDn learning can extract independent components at one of the stable points. Our analysis demonstrates that the same feature components as those extracted by ML learning are extracted simply by performing CD1 learning. Expanding this study should elucidate the specific solutions obtained by CD learning in other types of RBMs or in deep networks. Copyright © 2016 Elsevier Ltd. All rights reserved.
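The CD-1 update analyzed above can be sketched in a few lines of NumPy for the simplest case: a Gaussian-Gaussian RBM with one hidden unit, zero biases, and identity variances. The per-step weight renormalization is an added stabilizer of ours (the unconstrained Gaussian-Gaussian energy is unbounded); under that constraint the CD-1 fixed point is the leading principal component of the data, consistent with the paper's finding that stable points extract principal components:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 2)) * np.array([3.0, 1.0])  # PC1 along the x-axis

w = rng.normal(size=2)
w /= np.linalg.norm(w)
lr = 0.05
for _ in range(200):
    h0 = X @ w                     # positive phase: mean of h | v is w.v
    v1 = np.outer(h0, w)           # one Gibbs step back to the visible units
    h1 = v1 @ w                    # negative-phase hidden activity
    grad = (X.T @ h0 - v1.T @ h1) / len(X)  # CD-1 gradient estimate
    w += lr * grad
    w /= np.linalg.norm(w)         # keep the weight on the unit sphere

print("learned direction:", w)     # aligns with the first principal axis
```

With the unit-norm constraint, the CD-1 update reduces exactly to Oja's rule, grad = Cw - (w'Cw)w, so the iteration behaves like power iteration toward the top eigenvector of the data covariance.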

  6. Needs, Pains, and Motivations in Autonomous Agents.

    PubMed

    Starzyk, Janusz A; Graham, James; Puzio, Leszek

This paper presents the development of a motivated learning (ML) agent with symbolic I/O. Our earlier work on the ML agent was enhanced, giving it autonomy for interaction with other agents. Specifically, we equipped the agent with drives and pains that establish its motivations to learn how to respond to desired and undesired events and create related abstract goals. The purpose of this paper is to explore the autonomous development of motivations and memory in agents within a simulated environment. The ML agent has been implemented in a virtual environment created within the NeoAxis game engine. Additionally, to illustrate the benefits of an ML-based agent, we compared the performance of our algorithm against various reinforcement learning (RL) algorithms in a dynamic test scenario, and demonstrated that our ML agent learns better than any of the tested RL agents.

  7. Performance of Machine Learning Algorithms for Qualitative and Quantitative Prediction Drug Blockade of hERG1 channel.

    PubMed

    Wacker, Soren; Noskov, Sergei Yu

    2018-05-01

Drug-induced abnormal heart rhythm known as Torsades de Pointes (TdP) is a potentially lethal ventricular tachycardia found in many patients. Even newly released anti-arrhythmic drugs, like ivabradine with the HCN channel as a primary target, block the hERG potassium current in an overlapping concentration interval. Promiscuous drug block of the hERG channel may potentially lead to perturbation of the action potential duration (APD) and TdP, especially when combined with polypharmacy and/or electrolyte disturbances. The example of the novel anti-arrhythmic ivabradine illustrates a clinically important and ongoing deficit in drug design and warrants better screening methods. There is an urgent need to develop new approaches for rapid and accurate assessment of how drugs with complex interactions and multiple subcellular targets can predispose to or protect from drug-induced TdP. One unexpected outcome of the compulsory hERG screening implemented in the USA and the European Union is large datasets of IC50 values for the various molecules entering the market. These abundant data now allow the construction of predictive machine-learning (ML) models. Novel ML algorithms and techniques promise accuracy in determining IC50 values of hERG blockade that is comparable to or surpasses that of earlier QSAR or molecular modeling techniques. To test the performance of modern ML techniques, we have developed a computational platform integrating various workflows for quantitative structure-activity relationship (QSAR) models using data from the ChEMBL database. To establish the predictive power of ML-based algorithms, we computed IC50 values for a large dataset of molecules and compared them to automated patch-clamp measurements for a large dataset of hERG-blocking and non-blocking drugs, an industry gold standard in studies of cardiotoxicity. The optimal protocol with high sensitivity and predictive power is based on the novel eXtreme gradient boosting (XGBoost) algorithm. 
The ML platform with XGBoost displays excellent performance, with a coefficient of determination of up to R² ~0.8 for pIC50 values in evaluation datasets, surpassing other metrics and approaches available in the literature. Ultimately, the ML-based platform developed in our work is a scalable framework with automation potential, able to interact with other developing technologies in the cardiotoxicity field, including high-throughput electrophysiology measurements delivering large datasets of profiled drugs, and rapid synthesis and drug development via progress in synthetic biology.
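A schematic version of such a QSAR pipeline, with scikit-learn's GradientBoostingRegressor standing in for XGBoost and synthetic descriptors in place of ChEMBL data (everything below is an illustrative assumption, not the authors' platform):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
X = rng.normal(size=(600, 20))  # toy molecular descriptor vectors
# Toy activity: pIC50 depends nonlinearly on two descriptors plus noise
pic50 = 5 + X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.3 * rng.normal(size=600)

X_tr, X_te, y_tr, y_te = train_test_split(X, pic50, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
r2 = r2_score(y_te, model.predict(X_te))
print("R^2 on held-out molecules: %.2f" % r2)
```

Swapping in `xgboost.XGBRegressor` (if installed) requires no other change, since it follows the same fit/predict interface.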

  8. Separation of pulsar signals from noise using supervised machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Bethapudi, S.; Desai, S.

    2018-04-01

We evaluate the performance of four different machine learning (ML) algorithms: an Artificial Neural Network Multi-Layer Perceptron (ANN MLP), Adaboost, Gradient Boosting Classifier (GBC), and XGBoost, for the separation of pulsars from radio frequency interference (RFI) and other sources of noise, using a dataset obtained from the post-processing of a pulsar search pipeline. This dataset was previously used for the cross-validation of the SPINN-based machine learning engine, obtained from the reprocessing of the HTRU-S survey data (Morello et al., 2014). We have used the Synthetic Minority Over-sampling Technique (SMOTE) to deal with the high class imbalance in the dataset. We report a variety of quality scores from all four of these algorithms on both the non-SMOTE and SMOTE datasets. For all the above ML methods, we report high accuracy and G-mean for both the non-SMOTE and SMOTE cases. We study the feature importances using Adaboost, GBC, and XGBoost, and also use the minimum Redundancy Maximum Relevance approach, to report an algorithm-agnostic feature ranking. From these methods, we find the signal-to-noise ratio of the folded profile to be the best feature. We find that all the ML algorithms report FPRs about an order of magnitude lower than the corresponding FPRs obtained in Morello et al. (2014), for the same recall value.
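The imbalance-handling step can be sketched with a hand-rolled SMOTE-style oversampler, interpolating between minority samples and their nearest minority neighbors, as a stand-in for the imbalanced-learn implementation (the candidate features, class sizes, and classifier choice below are synthetic assumptions):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.ensemble import GradientBoostingClassifier

def smote_like(X_min, n_new, rng, k=5):
    """Generate n_new synthetic minority samples by interpolation."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)
    picks = rng.integers(0, len(X_min), size=n_new)
    neigh = idx[picks, rng.integers(1, k + 1, size=n_new)]  # column 0 is self
    gap = rng.random((n_new, 1))
    return X_min[picks] + gap * (X_min[neigh] - X_min[picks])

rng = np.random.default_rng(3)
X_noise = rng.normal(0, 1, size=(1000, 8))   # abundant RFI/noise candidates
X_pulsar = rng.normal(2, 1, size=(50, 8))    # rare pulsar candidates
X_new = smote_like(X_pulsar, 950, rng)       # rebalance to 1000 per class

X = np.vstack([X_noise, X_pulsar, X_new])
y = np.array([0] * 1000 + [1] * 1000)
clf = GradientBoostingClassifier(random_state=0).fit(X, y)
print("balanced classes:", np.bincount(y))
```

In practice the oversampling must be applied only inside the training folds, never to the evaluation data, or the reported G-mean will be optimistic.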

  9. A Machine Learning Approach to Predicted Bathymetry

    NASA Astrophysics Data System (ADS)

    Wood, W. T.; Elmore, P. A.; Petry, F.

    2017-12-01

Recent and ongoing efforts have shown how machine learning (ML) techniques, incorporating more data, and more disparate data, than can be interpreted manually, can predict seafloor properties, with uncertainty, where they have not been measured directly. We examine here a ML approach to predicted bathymetry. Our approach employs a paradigm of global bathymetry as an integral component of global geology. From a marine geology and geophysics perspective, the bathymetry is the thickness of one layer in an ensemble of layers that inter-relate to varying extents vertically and geospatially. The nature of the multidimensional relationships in these layers between bathymetry, gravity, magnetic field, age, and many other global measures is typically geospatially dependent and non-linear. The advantage of using ML is that these relationships need not be stated explicitly, nor do they need to be approximated with a transfer function: the machine learns them via the data. Fundamentally, ML operates by brute-force searching for multidimensional correlations between desired, but sparsely known, data values (in this case water depth) and a multitude of (geologic) predictors. Predictors include quantities known extensively, such as remotely sensed measurements (e.g., gravity and magnetics), distance from spreading ridge, trench, etc., and spatial statistics based on these quantities. Estimating bathymetry from an approximate transfer function is inherently model-limited as well as data-limited: complex relationships are explicitly ruled out. ML is a purely data-driven approach, so only the extent and quality of the available observations limit prediction accuracy. This allows for a system in which new data, of a wide variety of types, can be quickly and easily assimilated into updated bathymetry predictions with quantitative posterior uncertainties.
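The transfer-function-free mapping can be sketched with a random forest on synthetic predictor grids (the toy depth relation and all values below are assumptions for illustration, not a geophysical model):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)
n = 2000
gravity = rng.normal(size=n)               # toy gravity anomaly
magnetics = rng.normal(size=n)             # toy magnetic anomaly
ridge_dist = rng.uniform(0, 1000, size=n)  # toy distance from spreading ridge
# Nonlinear toy relation standing in for the unknown geologic mapping
depth = (2500 + 400 * np.tanh(gravity) + 0.8 * ridge_dist
         + 50 * magnetics ** 2 + 30 * rng.normal(size=n))

X = np.column_stack([gravity, magnetics, ridge_dist])
X_tr, X_te, y_tr, y_te = train_test_split(X, depth, random_state=0)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
r2 = rf.score(X_te, y_te)
print("held-out R^2: %.2f" % r2)
print("feature importances:", rf.feature_importances_.round(2))
```

No functional form is supplied anywhere; the forest recovers the nonlinear relation from the co-located samples alone, and the feature importances give a crude analogue of the predictor-screening described above.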

  10. NASA FDL: Accelerating Artificial Intelligence Applications in the Space Sciences.

    NASA Astrophysics Data System (ADS)

    Parr, J.; Navas-Moreno, M.; Dahlstrom, E. L.; Jennings, S. B.

    2017-12-01

NASA has a long history of using Artificial Intelligence (AI) for exploration purposes; however, due to the recent explosion of the Machine Learning (ML) field within AI, there are great opportunities for NASA to find expanded benefit. For over two years now, the NASA Frontier Development Lab (FDL) has been at the nexus of bright academic researchers, private-sector expertise in AI/ML, and NASA scientific problem solving. The FDL hypothesis of improving science results was predicated on three main ideas: faster results could be achieved through sprint methodologies, better results could be achieved through interdisciplinarity, and public-private partnerships could lower costs. We present select results obtained during two summer sessions in 2016 and 2017, where the research focused on topics in planetary defense, space resources, and space weather, and utilized variational autoencoders, Bayesian optimization, and deep learning techniques such as deep, recurrent, and residual neural networks. The FDL results demonstrate the power of bridging research disciplines and the potential that AI/ML has for supporting research goals, improving on current methodologies, and enabling new discovery, all in accelerated timeframes.

  11. Leveraging knowledge engineering and machine learning for microbial bio-manufacturing.

    PubMed

    Oyetunde, Tolutola; Bao, Forrest Sheng; Chen, Jiung-Wen; Martin, Hector Garcia; Tang, Yinjie J

    2018-05-03

Genome scale modeling (GSM) predicts the performance of microbial workhorses and helps identify beneficial gene targets. GSM integrated with intracellular flux dynamics, omics, and thermodynamics has shown remarkable progress in both elucidating complex cellular phenomena and computational strain design (CSD). Nonetheless, these models still show high uncertainty due to a poor understanding of innate pathway regulations, metabolic burdens, and other factors (such as stress tolerance and metabolite channeling). Moreover, the engineered hosts may have genetic mutations or non-genetic variations in bioreactor conditions, and thus CSD rarely foresees fermentation rate and titer. Metabolic models play an important role in design-build-test-learn cycles for strain improvement, and machine learning (ML) may provide a viable complementary approach for driving strain design and deciphering cellular processes. In order to develop quality ML models, knowledge engineering leverages and standardizes the wealth of information in the literature (e.g., genomic/phenomic data, synthetic biology strategies, and bioprocess variables). Data-driven frameworks can offer new constraints for mechanistic models to describe cellular regulations, design pathways, search for gene targets, and estimate fermentation titer/rate/yield under specified growth conditions (e.g., mixing, nutrients, and O2). This review highlights the scope of information collection, database construction, and machine learning techniques (such as deep learning and transfer learning), which may facilitate "Learn and Design" for strain development. Copyright © 2018. Published by Elsevier Inc.

  12. Data mining in bioinformatics using Weka.

    PubMed

    Frank, Eibe; Hall, Mark; Trigg, Len; Holmes, Geoffrey; Witten, Ian H

    2004-10-12

The Weka machine learning workbench provides a general-purpose environment for automatic classification, regression, clustering, and feature selection: common data mining problems in bioinformatics research. It contains an extensive collection of machine learning algorithms and data pre-processing methods, complemented by graphical user interfaces for data exploration and for the experimental comparison of different machine learning techniques on the same problem. Weka can process data given in the form of a single relational table. Its main objectives are to (a) assist users in extracting useful information from data and (b) enable them to easily identify a suitable algorithm for generating an accurate predictive model from it. http://www.cs.waikato.ac.nz/ml/weka.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu Xiaoying; Ho, Shirley; Trac, Hy

We investigate machine learning (ML) techniques for predicting the number of galaxies (N_gal) that occupy a halo, given the halo's properties. These types of mappings are crucial for constructing the mock galaxy catalogs necessary for analyses of large-scale structure. The ML techniques proposed here distinguish themselves from traditional halo occupation distribution (HOD) modeling as they do not assume a prescribed relationship between halo properties and N_gal. In addition, our ML approaches are only dependent on parent halo properties (like HOD methods), which are advantageous over subhalo-based approaches as identifying subhalos correctly is difficult. We test two algorithms: support vector machines (SVM) and k-nearest-neighbor (kNN) regression. We take galaxies and halos from the Millennium simulation and predict N_gal by training our algorithms on the following six halo properties: number of particles, M_200, sigma_v, v_max, half-mass radius, and spin. For Millennium, our predicted N_gal values have a mean-squared error (MSE) of ~0.16 for both SVM and kNN. Our predictions match the overall distribution of halos reasonably well and the galaxy correlation function at large scales to ~5%-10%. In addition, we demonstrate a feature selection algorithm to isolate the halo parameters that are most predictive, a useful technique for understanding the mapping between halo properties and N_gal. Lastly, we investigate these ML-based approaches in making mock catalogs for different galaxy subpopulations (e.g., blue, red, high M_star, low M_star). Given its non-parametric nature as well as its powerful predictive and feature selection capabilities, ML offers an interesting alternative for creating mock catalogs.
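The kNN branch of this approach can be sketched on a synthetic halo catalog (the toy occupation relation and halo property values below are assumptions for illustration, not Millennium data):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 3000
log_mass = rng.uniform(11, 15, size=n)         # toy log10(M_200)
sigma_v = 100 * 10 ** (0.3 * (log_mass - 13))  # toy velocity dispersion
spin = rng.uniform(0, 0.1, size=n)             # toy spin parameter
# Toy occupation: N_gal rises steeply with mass, with small integer scatter
n_gal = np.maximum(0, (10 ** (0.8 * (log_mass - 12))).astype(int)
                   + rng.integers(-1, 2, size=n))

X = np.column_stack([log_mass, sigma_v, spin])
X_tr, X_te, y_tr, y_te = train_test_split(X, n_gal, random_state=0)
# Standardize before the distance computation, then average over 10 neighbors
knn = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=10))
knn.fit(X_tr, y_tr)
r2 = knn.score(X_te, y_te)
print("held-out R^2: %.2f" % r2)
```

As in the abstract, no parametric N_gal(halo) form is assumed; the prediction is a local average over the most similar training halos.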

  14. The use of genetic programming to develop a predictor of swash excursion on sandy beaches

    NASA Astrophysics Data System (ADS)

    Passarella, Marinella; Goldstein, Evan B.; De Muro, Sandro; Coco, Giovanni

    2018-02-01

    We use genetic programming (GP), a type of machine learning (ML) approach, to predict the total and infragravity swash excursion using previously published data sets that have been used extensively in swash prediction studies. Three previously published works with a range of new conditions are added to this data set to extend the range of measured swash conditions. Using this newly compiled data set we demonstrate that a ML approach can reduce the prediction errors compared to well-established parameterizations and therefore it may improve coastal hazards assessment (e.g. coastal inundation). Predictors obtained using GP can also be physically sound and replicate the functionality and dependencies of previous published formulas. Overall, we show that ML techniques are capable of both improving predictability (compared to classical regression approaches) and providing physical insight into coastal processes.

  15. Proceedings of the Workshop on Multivariable Control Systems Held at Wright-Patterson AFB, OH, on 3 December 1982.

    DTIC Science & Technology

    1983-09-01

promising method of aircraft multivariable flight controller design. Like any new design technique, there is still more to learn about the method. [The remainder of this record is an OCR-garbled symbol list from the original report: feedback gain matrix, random matrix, number of outputs, roll moment, roll moment with inertia, ...]

  16. AstroML: "better, faster, cheaper" towards state-of-the-art data mining and machine learning

    NASA Astrophysics Data System (ADS)

    Ivezic, Zeljko; Connolly, Andrew J.; Vanderplas, Jacob

    2015-01-01

    We present AstroML, a Python module for machine learning and data mining built on numpy, scipy, scikit-learn, matplotlib, and astropy, and distributed under an open license. AstroML contains a growing library of statistical and machine learning routines for analyzing astronomical data in Python, loaders for several open astronomical datasets (such as SDSS and other recent major surveys), and a large suite of examples of analyzing and visualizing astronomical datasets. AstroML is especially suitable for introducing undergraduate students to numerical research projects and for graduate students to rapidly undertake cutting-edge research. The long-term goal of astroML is to provide a community repository for fast Python implementations of common tools and routines used for statistical data analysis in astronomy and astrophysics (see http://www.astroml.org).

  17. Toward Intelligent Machine Learning Algorithms

    DTIC Science & Technology

    1988-05-01

Machine learning is recognized as a tool for improving the performance of many kinds of systems, yet most machine learning systems themselves are not...directed systems, and with the addition of a knowledge store for organizing and maintaining knowledge to assist learning, a learning machine learning (L-ML) algorithm is possible. The necessary components of L-ML systems are presented along with several case descriptions of existing machine learning systems

  18. AstroML: Python-powered Machine Learning for Astronomy

    NASA Astrophysics Data System (ADS)

    Vander Plas, Jake; Connolly, A. J.; Ivezic, Z.

    2014-01-01

    As astronomical data sets grow in size and complexity, automated machine learning and data mining methods are becoming an increasingly fundamental component of research in the field. The astroML project (http://astroML.org) provides a common repository for practical examples of the data mining and machine learning tools used and developed by astronomical researchers, written in Python. The astroML module contains a host of general-purpose data analysis and machine learning routines, loaders for openly-available astronomical datasets, and fast implementations of specific computational methods often used in astronomy and astrophysics. The associated website features hundreds of examples of these routines being used for analysis of real astronomical datasets, while the associated textbook provides a curriculum resource for graduate-level courses focusing on practical statistics, machine learning, and data mining approaches within Astronomical research. This poster will highlight several of the more powerful and unique examples of analysis performed with astroML, all of which can be reproduced in their entirety on any computer with the proper packages installed.

  19. Using Machine Learning for Advanced Anomaly Detection and Classification

    NASA Astrophysics Data System (ADS)

    Lane, B.; Poole, M.; Camp, M.; Murray-Krezan, J.

    2016-09-01

Machine Learning (ML) techniques have successfully been used in a wide variety of applications to automatically detect, and potentially classify, changes in an activity or a series of activities by utilizing large amounts of data, sometimes even seemingly unrelated data. The amount of data being collected, processed, and stored in the Space Situational Awareness (SSA) domain has grown at an exponential rate and is now better suited to ML. This paper describes the development of advanced algorithms that deliver significant improvements in the characterization of deep space objects and in indication and warning (I&W), using a global network of telescopes that collect photometric data on a multitude of space-based objects. The Phase II Air Force Research Laboratory (AFRL) Small Business Innovative Research (SBIR) project Autonomous Characterization Algorithms for Change Detection and Characterization (ACDC), contracted to ExoAnalytic Solutions Inc., is providing the ability to detect and identify photometric signature changes due to potential space object changes (e.g., stability, tumble rate, aspect ratio), and to correlate observed changes with potential behavioral changes using a variety of techniques, including supervised learning. Furthermore, these algorithms run in real time on data being collected and processed by the ExoAnalytic Space Operations Center (EspOC), providing timely alerts and warnings while dynamically creating collection requirements for the EspOC for the algorithms that generate higher-fidelity I&W. This paper discusses the recently implemented ACDC algorithms, including the general design approach and results to date. The usage of supervised algorithms, such as Support Vector Machines, Neural Networks, and k-Nearest Neighbors, and of unsupervised algorithms, such as k-means, Principal Component Analysis, and Hierarchical Clustering, together with the implementations of these algorithms, is explored. 
Results of applying these algorithms to EspOC data both in an off-line "pattern of life" analysis as well as using the algorithms on-line in real-time, meaning as data is collected, will be presented. Finally, future work in applying ML for SSA will be discussed.
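
    A minimal sketch of the unsupervised side of the toolbox named above: a from-scratch k-means separating two invented photometric regimes (a steady signature versus a highly variable one). The features, values, and clustering below are purely illustrative and are not the ACDC algorithms.

```python
import numpy as np

# Invented photometric features per object: [mean brightness, variability].
# Two behavioral regimes stand in for "stable" vs "tumbling" signatures.
rng = np.random.default_rng(0)
stable = rng.normal([10.0, 0.1], 0.05, size=(50, 2))    # low variability
tumbling = rng.normal([10.0, 1.0], 0.05, size=(50, 2))  # high variability
X = np.vstack([stable, tumbling])

def kmeans(X, k, iters=20):
    centers = X[[0, -1]].copy() if k == 2 else X[:k].copy()  # simple init
    for _ in range(iters):
        # assign each point to its nearest center, then recompute centers
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(X, k=2)  # the two regimes separate cleanly
```

A supervised counterpart (e.g. an SVM or k-NN, as the abstract lists) would instead be trained on labeled examples of known behavioral changes.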

  20. Application of machine learning algorithms for clinical predictive modeling: a data-mining approach in SCT.

    PubMed

    Shouval, R; Bondi, O; Mishan, H; Shimoni, A; Unger, R; Nagler, A

    2014-03-01

    Data collected from hematopoietic SCT (HSCT) centers are becoming more abundant and complex owing to the formation of organized registries and incorporation of biological data. Typically, conventional statistical methods are used for the development of outcome prediction models and risk scores. However, these analyses have inherent limitations in coping with large data sets containing many variables and samples. Machine learning (ML), a field stemming from artificial intelligence, is part of a wider approach for data analysis termed data mining (DM). It enables prediction in complex data scenarios. Technological and commercial applications are all around us and are gradually entering clinical research. In the following review, we would like to expose hematologists and stem cell transplanters to the concepts, clinical applications, strengths and limitations of such methods and discuss current research in HSCT. The aim of this review is to encourage utilization of ML and DM techniques in the field of HSCT, including prediction of transplantation outcome and donor selection.

  1. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data

    NASA Astrophysics Data System (ADS)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-01

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to minimize the least-squares difference between measured and calculated values over time, which may encounter problems such as overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals with noisy data directly rather than trying to smooth the noise in the image. Thanks to the database, the presented method is also capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, finding a balance between fitting the historical data and the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  2. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data.

    PubMed

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-07

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to minimize the least-squares difference between measured and calculated values over time, which may encounter problems such as overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals with noisy data directly rather than trying to smooth the noise in the image. Thanks to the database, the presented method is also capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, finding a balance between fitting the historical data and the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.
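
    The contrast between iterative least-squares fitting and the grid parameter search mentioned above can be sketched on a toy uptake model. The model C(t) = K1·(1 − exp(−k2·t)), the true parameter values, and the noise level below are invented stand-ins, not the paper's compartment models or reference database.

```python
import numpy as np

# Synthetic "measured" time-activity curve from a toy uptake model.
rng = np.random.default_rng(7)
t = np.linspace(0, 60, 121)                    # minutes
true_K1, true_k2 = 0.5, 0.1
measured = true_K1 * (1 - np.exp(-true_k2 * t)) + rng.normal(0, 0.005, t.size)

# Exhaustive grid search over the two kinetic parameters, minimizing the
# same least-squares criterion an iterative fitter would use. A grid is
# trivially parallelizable (cf. the paper's multi-thread grid search).
best, best_sse = None, np.inf
for K1 in np.arange(0.1, 1.01, 0.1):
    for k2 in np.arange(0.02, 0.21, 0.02):
        model = K1 * (1 - np.exp(-k2 * t))
        sse = np.sum((measured - model) ** 2)
        if sse < best_sse:
            best, best_sse = (K1, k2), sse

best_K1, best_k2 = best                        # lands near (0.5, 0.1)
```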

  3. Machine learning for epigenetics and future medical applications.

    PubMed

    Holder, Lawrence B; Haque, M Muksitul; Skinner, Michael K

    2017-07-03

    Understanding epigenetic processes holds immense promise for medical applications. Advances in Machine Learning (ML) are critical to realize this promise. Previous studies used epigenetic data sets associated with the germline transmission of epigenetic transgenerational inheritance of disease and novel ML approaches to predict genome-wide locations of critical epimutations. A combination of Active Learning (ACL) and Imbalanced Class Learning (ICL) was used to address past problems with ML to develop a more efficient feature selection process and address the imbalance problem in all genomic data sets. The power of this novel ML approach and our ability to predict epigenetic phenomena and associated disease is suggested. The current approach requires extensive computation of features over the genome. A promising new approach is to introduce Deep Learning (DL) for the generation and simultaneous computation of novel genomic features tuned to the classification task. This approach can be used with any genomic or biological data set applied to medicine. The application of molecular epigenetic data in advanced machine learning analysis to medicine is the focus of this review.

  4. Knowledge will Propel Machine Understanding of Content: Extrapolating from Current Examples

    PubMed Central

    Sheth, Amit; Perera, Sujan; Wijeratne, Sanjaya; Thirunarayan, Krishnaprasad

    2018-01-01

    Machine Learning has been a big success story during the AI resurgence. One particular stand out success relates to learning from a massive amount of data. In spite of early assertions of the unreasonable effectiveness of data, there is increasing recognition for utilizing knowledge whenever it is available or can be created purposefully. In this paper, we discuss the indispensable role of knowledge for deeper understanding of content where (i) large amounts of training data are unavailable, (ii) the objects to be recognized are complex, (e.g., implicit entities and highly subjective content), and (iii) applications need to use complementary or related data in multiple modalities/media. What brings us to the cusp of rapid progress is our ability to (a) create relevant and reliable knowledge and (b) carefully exploit knowledge to enhance ML/NLP techniques. Using diverse examples, we seek to foretell unprecedented progress in our ability for deeper understanding and exploitation of multimodal data and continued incorporation of knowledge in learning techniques.

  5. Machine learning approaches to diagnosis and laterality effects in semantic dementia discourse.

    PubMed

    Garrard, Peter; Rentoumi, Vassiliki; Gesierich, Benno; Miller, Bruce; Gorno-Tempini, Maria Luisa

    2014-06-01

    Advances in automatic text classification have been necessitated by the rapid increase in the availability of digital documents. Machine learning (ML) algorithms can 'learn' from data: for instance a ML system can be trained on a set of features derived from written texts belonging to known categories, and learn to distinguish between them. Such a trained system can then be used to classify unseen texts. In this paper, we explore the potential of the technique to classify transcribed speech samples along clinical dimensions, using vocabulary data alone. We report the accuracy with which two related ML algorithms [naive Bayes Gaussian (NBG) and naive Bayes multinomial (NBM)] categorized picture descriptions produced by: 32 semantic dementia (SD) patients versus 10 healthy, age-matched controls; and SD patients with left- (n = 21) versus right-predominant (n = 11) patterns of temporal lobe atrophy. We used information gain (IG) to identify the vocabulary features that were most informative to each of these two distinctions. In the SD versus control classification task, both algorithms achieved accuracies of greater than 90%. In the right- versus left-temporal lobe predominant classification, NBM achieved a high level of accuracy (88%), but this was achieved by both NBM and NBG when the features used in the training set were restricted to those with high values of IG. The most informative features for the patient versus control task were low frequency content words, generic terms and components of metanarrative statements. For the right versus left task the number of informative lexical features was too small to support any specific inferences. An enriched feature set, including values derived from Quantitative Production Analysis (QPA), may shed further light on this little-understood distinction.
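
    A toy version of the naive Bayes multinomial (NBM) classifier named above, on invented word-count vectors. The vocabulary, counts, and labels are made up for illustration; the real study used vocabulary features derived from picture-description transcripts.

```python
import numpy as np

# Hypothetical vocabulary columns: ["thing", "stuff", "kettle", "teapot"].
# Class 1 samples lean on generic terms, class 0 on specific content words.
X_train = np.array([
    [5, 4, 0, 1],
    [6, 3, 1, 0],
    [1, 0, 4, 5],
    [0, 1, 5, 4],
])
y_train = np.array([1, 1, 0, 0])

def fit_nbm(X, y, alpha=1.0):
    classes = np.unique(y)
    log_prior = np.log([np.mean(y == c) for c in classes])
    # Laplace-smoothed per-class word probabilities
    counts = np.array([X[y == c].sum(axis=0) for c in classes]) + alpha
    log_lik = np.log(counts / counts.sum(axis=1, keepdims=True))
    return classes, log_prior, log_lik

def predict_nbm(model, X):
    classes, log_prior, log_lik = model
    scores = X @ log_lik.T + log_prior   # log P(class) + sum of word log-probs
    return classes[np.argmax(scores, axis=1)]

model = fit_nbm(X_train, y_train)
pred = predict_nbm(model, np.array([[4, 5, 1, 0], [0, 1, 6, 3]]))
```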

  6. Challenges in the Verification of Reinforcement Learning Algorithms

    NASA Technical Reports Server (NTRS)

    Van Wesel, Perry; Goodloe, Alwyn E.

    2017-01-01

    Machine learning (ML) is increasingly being applied to a wide array of domains from search engines to autonomous vehicles. These algorithms, however, are notoriously complex and hard to verify. This work looks at the assumptions underlying machine learning algorithms as well as some of the challenges in trying to verify ML algorithms. Furthermore, we focus on the specific challenges of verifying reinforcement learning algorithms. These are highlighted using a specific example. Ultimately, we do not offer a solution to the complex problem of ML verification, but point out possible approaches for verification and interesting research opportunities.

  7. Enhancing the Biological Relevance of Machine Learning Classifiers for Reverse Vaccinology.

    PubMed

    Heinson, Ashley I; Gunawardana, Yawwani; Moesker, Bastiaan; Hume, Carmen C Denman; Vataga, Elena; Hall, Yper; Stylianou, Elena; McShane, Helen; Williams, Ann; Niranjan, Mahesan; Woelk, Christopher H

    2017-02-01

    Reverse vaccinology (RV) is a bioinformatics approach that can predict antigens with protective potential from the protein coding genomes of bacterial pathogens for subunit vaccine design. RV has become firmly established following the development of the BEXSERO® vaccine against Neisseria meningitidis serogroup B. RV studies have begun to incorporate machine learning (ML) techniques to distinguish bacterial protective antigens (BPAs) from non-BPAs. This research contributes significantly to the RV field by using permutation analysis to demonstrate that a signal for protective antigens can be curated from published data. Furthermore, the effects of the following on an ML approach to RV were also assessed: nested cross-validation, balancing selection of non-BPAs for subcellular localization, increasing the training data, and incorporating greater numbers of protein annotation tools for feature generation. These enhancements yielded a support vector machine (SVM) classifier that could discriminate BPAs (n = 200) from non-BPAs (n = 200) with an area under the curve (AUC) of 0.787. In addition, hierarchical clustering of BPAs revealed that intracellular BPAs clustered separately from extracellular BPAs. However, no immediate benefit was derived when training SVM classifiers on data sets exclusively containing intra- or extracellular BPAs. In conclusion, this work demonstrates that ML classifiers have great utility in RV approaches and will lead to new subunit vaccines in the future.

  8. Non-Contact Heart Rate and Blood Pressure Estimations from Video Analysis and Machine Learning Modelling Applied to Food Sensory Responses: A Case Study for Chocolate.

    PubMed

    Gonzalez Viejo, Claudia; Fuentes, Sigfredo; Torrico, Damir D; Dunshea, Frank R

    2018-06-03

    Traditional methods to assess heart rate (HR) and blood pressure (BP) are intrusive and can affect results in sensory analysis of food as participants are aware of the sensors. This paper aims to validate a non-contact method to measure HR using the photoplethysmography (PPG) technique and to develop models to predict the real HR and BP based on raw video analysis (RVA) with an example application in chocolate consumption using machine learning (ML). The RVA used a computer vision algorithm based on luminosity changes on the different RGB color channels using three face-regions (forehead and both cheeks). To validate the proposed method and ML models, a home oscillometric monitor and a finger sensor were used. Results showed high correlations with the G color channel (R² = 0.83). Two ML models were developed using three face-regions: (i) Model 1 to predict HR and BP using the RVA outputs with R = 0.85 and (ii) Model 2 based on time-series prediction with HR, magnitude and luminosity from RVA inputs to HR values every second with R = 0.97. An application for the sensory analysis of chocolate showed significant correlations between changes in HR and BP with chocolate hardness and purchase intention.
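
    The core PPG step described above can be sketched as follows: treat the mean green-channel intensity of a face region, sampled once per frame, as a signal and read the heart rate off the dominant FFT frequency. The 72 bpm sinusoid below is a synthetic stand-in for real video data, not the paper's pipeline.

```python
import numpy as np

fps = 30.0
t = np.arange(300) / fps                              # 10 s of "frames"
# Synthetic mean green-channel intensity pulsing at 1.2 Hz (72 bpm)
green_mean = 100 + 0.5 * np.sin(2 * np.pi * 1.2 * t)

def estimate_bpm(signal, fps):
    sig = signal - signal.mean()                      # remove the DC level
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), 1 / fps)
    return 60.0 * freqs[np.argmax(spectrum)]          # dominant frequency -> bpm

bpm = estimate_bpm(green_mean, fps)                   # ~72 bpm
```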

  9. Improving orbit prediction accuracy through supervised machine learning

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Bai, Xiaoli

    2018-05-01

    Due to the lack of information such as the space environment condition and resident space objects' (RSOs') body characteristics, current orbit predictions that are solely grounded on physics-based models may fail to achieve required accuracy for collision avoidance and have led to satellite collisions already. This paper presents a methodology to predict RSOs' trajectories with higher accuracy than that of the current methods. Inspired by the machine learning (ML) theory through which the models are learned based on large amounts of observed data and the prediction is conducted without explicitly modeling space objects and space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. Using a simulation-based space catalog environment as the test bed, the paper demonstrates three types of generalization capability for the proposed ML approach: (1) the ML model can be used to improve the same RSO's orbit information that is not available during the learning process but shares the same time interval as the training data; (2) the ML model can be used to improve predictions of the same RSO at future epochs; and (3) the ML model based on a RSO can be applied to other RSOs that share some common features.
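
    The hybrid physics-plus-learning idea above can be illustrated with a toy residual model: a regressor learns the systematic error of a physics-based prediction as a function of prediction horizon and subtracts it. All numbers below are synthetic assumptions, not the paper's space-catalog test bed.

```python
import numpy as np

rng = np.random.default_rng(42)
horizon = np.linspace(0.0, 5.0, 200)            # days ahead
# Unmodeled drag-like drift the physics propagator misses (km), plus noise
true_error = 1.5 * horizon + 0.3 * horizon**2
observed_error = true_error + rng.normal(0, 0.05, horizon.size)

# "Learning-based process": plain least squares on simple horizon features
A = np.column_stack([horizon, horizon**2])
coef, *_ = np.linalg.lstsq(A, observed_error, rcond=None)

def corrected_prediction(physics_pred, h):
    # Subtract the learned systematic error from the physics-based prediction
    return physics_pred - (coef[0] * h + coef[1] * h * h)

residual_at_3d = corrected_prediction(1.5 * 3 + 0.3 * 9, 3.0)  # near zero
```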

  10. Improving quantitative structure-activity relationship models using Artificial Neural Networks trained with dropout.

    PubMed

    Mendenhall, Jeffrey; Meiler, Jens

    2016-02-01

    Dropout is an Artificial Neural Network (ANN) training technique that has been shown to improve ANN performance across canonical machine learning (ML) datasets. Quantitative Structure Activity Relationship (QSAR) datasets used to relate chemical structure to biological activity in Ligand-Based Computer-Aided Drug Discovery pose unique challenges for ML techniques, such as heavily biased dataset composition, and relatively large number of descriptors relative to the number of actives. To test the hypothesis that dropout also improves QSAR ANNs, we conduct a benchmark on nine large QSAR datasets. Use of dropout improved both enrichment false positive rate and log-scaled area under the receiver-operating characteristic curve (logAUC) by 22-46 % over conventional ANN implementations. Optimal dropout rates are found to be a function of the signal-to-noise ratio of the descriptor set, and relatively independent of the dataset. Dropout ANNs with 2D and 3D autocorrelation descriptors outperform conventional ANNs as well as optimized fingerprint similarity search methods.

  11. Improving Quantitative Structure-Activity Relationship Models using Artificial Neural Networks Trained with Dropout

    PubMed Central

    Mendenhall, Jeffrey; Meiler, Jens

    2016-01-01

    Dropout is an Artificial Neural Network (ANN) training technique that has been shown to improve ANN performance across canonical machine learning (ML) datasets. Quantitative Structure Activity Relationship (QSAR) datasets used to relate chemical structure to biological activity in Ligand-Based Computer-Aided Drug Discovery (LB-CADD) pose unique challenges for ML techniques, such as heavily biased dataset composition, and relatively large number of descriptors relative to the number of actives. To test the hypothesis that dropout also improves QSAR ANNs, we conduct a benchmark on nine large QSAR datasets. Use of dropout improved both Enrichment false positive rate (FPR) and log-scaled area under the receiver-operating characteristic curve (logAUC) by 22–46% over conventional ANN implementations. Optimal dropout rates are found to be a function of the signal-to-noise ratio of the descriptor set, and relatively independent of the dataset. Dropout ANNs with 2D and 3D autocorrelation descriptors outperform conventional ANNs as well as optimized fingerprint similarity search methods. PMID:26830599
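
    A minimal sketch of the dropout mechanism discussed in the two records above, in its common "inverted" form: each activation is zeroed with probability p at training time and survivors are rescaled by 1/(1 − p), so the expected activation is unchanged and no rescaling is needed at test time. This is illustrative, not the paper's ANN implementation.

```python
import numpy as np

def dropout(x, p, rng):
    keep = 1.0 - p
    mask = rng.random(x.shape) < keep   # True = activation survives
    return x * mask / keep              # rescale so E[output] == x

rng = np.random.default_rng(0)
x = np.ones((4, 8))                     # stand-in hidden-layer activations
out = dropout(x, p=0.25, rng=rng)       # entries are 0 or 1/0.75
```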

  12. Machine learning techniques in searches for $$t\\bar{t}$$h in the h → $$b\\bar{b}$$ decay channel

    DOE PAGES

    Santos, Robert; Nguyen, M.; Webster, Jordan; ...

    2017-04-10

    Study of the production of pairs of top quarks in association with a Higgs boson is one of the primary goals of the Large Hadron Collider over the next decade, as measurements of this process may help us to understand whether the uniquely large mass of the top quark plays a special role in electroweak symmetry breaking. Higgs bosons decay predominantly to $$b\\bar{b}$$, yielding signatures for the signal that are similar to $$t\\bar{t}$$ + jets with heavy flavor. Though particularly challenging to study due to the similar kinematics between signal and background events, such final states ($$t\\bar{t}b\\bar{b}$$) are an important channel for studying the top quark Yukawa coupling. This paper presents a systematic study of machine learning (ML) methods for detecting $$t\\bar{t}$$h in the h → $$b\\bar{b}$$ decay channel. Among the seven ML methods tested, we show that neural network models outperform alternative methods. In addition, two neural models used in this paper outperform NeuroBayes, one of the standard algorithms used in current particle physics experiments. We further study the effectiveness of ML algorithms by investigating the impact of feature set and data size, as well as the depth of the networks for neural models. We demonstrate that an extended feature set leads to improvement of performance over basic features. Furthermore, the availability of large samples for training is found to be important for improving the performance of the techniques. For the features and the data set studied here, neural networks of more layers deliver comparable performance to their simpler counterparts.

  13. Machine learning techniques in searches for $$t\\bar{t}$$h in the h → $$b\\bar{b}$$ decay channel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos, Robert; Nguyen, M.; Webster, Jordan

    Study of the production of pairs of top quarks in association with a Higgs boson is one of the primary goals of the Large Hadron Collider over the next decade, as measurements of this process may help us to understand whether the uniquely large mass of the top quark plays a special role in electroweak symmetry breaking. Higgs bosons decay predominantly to $$b\\bar{b}$$, yielding signatures for the signal that are similar to $$t\\bar{t}$$ + jets with heavy flavor. Though particularly challenging to study due to the similar kinematics between signal and background events, such final states ($$t\\bar{t}b\\bar{b}$$) are an important channel for studying the top quark Yukawa coupling. This paper presents a systematic study of machine learning (ML) methods for detecting $$t\\bar{t}$$h in the h → $$b\\bar{b}$$ decay channel. Among the seven ML methods tested, we show that neural network models outperform alternative methods. In addition, two neural models used in this paper outperform NeuroBayes, one of the standard algorithms used in current particle physics experiments. We further study the effectiveness of ML algorithms by investigating the impact of feature set and data size, as well as the depth of the networks for neural models. We demonstrate that an extended feature set leads to improvement of performance over basic features. Furthermore, the availability of large samples for training is found to be important for improving the performance of the techniques. For the features and the data set studied here, neural networks of more layers deliver comparable performance to their simpler counterparts.

  14. Nonlinear Semi-Supervised Metric Learning Via Multiple Kernels and Local Topology.

    PubMed

    Li, Xin; Bai, Yanqin; Peng, Yaxin; Du, Shaoyi; Ying, Shihui

    2018-03-01

    Changing the metric on the data may change the data distribution, hence a good distance metric can improve the performance of a learning algorithm. In this paper, we address the semi-supervised distance metric learning (ML) problem to obtain the best nonlinear metric for the data. First, we describe the nonlinear metric by a multiple kernel representation. By this approach, we project the data into a high-dimensional space, where the data can be well represented by linear ML. Then, we reformulate the linear ML as a minimization problem on the positive definite matrix group. Finally, we develop a two-step algorithm for solving this model and design an intrinsic steepest descent algorithm to learn the positive definite metric matrix. Experimental results validate that our proposed method is effective and outperforms several state-of-the-art ML methods.
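
    The effect of a learned positive definite metric matrix can be illustrated with a hand-picked (not optimized) diagonal matrix M defining d_M(x, y) = sqrt((x−y)ᵀM(x−y)). Under M, the within-class pair below becomes closer than the between-class pair, reversing the Euclidean ordering; a real metric learner would fit M from side information rather than choose it by hand.

```python
import numpy as np

def metric_dist(x, y, M):
    # Mahalanobis-style distance induced by a positive definite matrix M
    d = x - y
    return float(np.sqrt(d @ M @ d))

M = np.diag([0.1, 10.0])        # de-emphasize axis 0, emphasize axis 1
a = np.array([0.0, 0.0])        # same-class pair differs along axis 0
b = np.array([5.0, 0.0])
c = np.array([0.0, 1.0])        # other-class point differs along axis 1

d_same = metric_dist(a, b, M)   # sqrt(0.1 * 25) ~ 1.58
d_diff = metric_dist(a, c, M)   # sqrt(10) ~ 3.16 (Euclidean order: 5 vs 1)
```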

  15. Opportunistic Behavior in Motivated Learning Agents.

    PubMed

    Graham, James; Starzyk, Janusz A; Jachyra, Daniel

    2015-08-01

    This paper focuses on the novel motivated learning (ML) scheme and opportunistic behavior of an intelligent agent. It extends previously developed ML to opportunistic behavior in a multitask situation. Our paper describes the virtual world implementation of autonomous opportunistic agents learning in a dynamically changing environment, creating abstract goals, and taking advantage of arising opportunities to improve their performance. An opportunistic agent achieves better results than an agent based on ML only. It does so by minimizing the average value of all need signals rather than a dominating need. This paper applies to the design of autonomous embodied systems (robots) learning in real-time how to operate in a complex environment.

  16. The learning curve of laparoscopic liver resection after the Louisville statement 2008: Will it be more effective and smooth?

    PubMed

    Lin, Chung-Wei; Tsai, Tzu-Jung; Cheng, Tsung-Yen; Wei, Hung-Kuang; Hung, Chen-Fang; Chen, Yin-Yin; Chen, Chii-Ming

    2016-07-01

    Laparoscopic liver resection (LLR) has been proven to be feasible and safe. However, it is a difficult and complex procedure with a steep learning curve. The aim of this study was to evaluate the learning curve of LLR at our institutions since 2008. One hundred and twenty-six consecutive LLRs were included from May 2008 to December 2014. Patient characteristics, operative data, and surgical outcomes were collected prospectively and analyzed. The median tumor size was 25 mm (range 5-90 mm), and 96% of the resected tumors were malignant. 41.3% (52/126) of patients had pathologically proven liver cirrhosis. The median operation time was 216 min (range 40-602 min) with a median blood loss of 100 ml (range 20-2300 ml). The median length of hospital stay was 4 days (range 2-10 days). Six major postoperative complications occurred in this series, and there was no 90-day postoperative mortality. Regarding the incidence of major operative events, including operation time longer than 300 min, perioperative blood loss above 500 ml, and major postoperative complications, the learning curve [as evaluated by the cumulative sum (CUSUM) technique] showed its first reverse after 22 cases. The indication for laparoscopic resection in this series was extended after 60 cases to include tumors located in difficult locations (segments 4a, 7, 8) and major hepatectomy. CUSUM showed that the incidence of major operative events proceeded to increase again, and the second reverse was noted after an additional 40 cases of experience. Location of the tumor in a difficult area emerged as a significant predictor of major operative events. In carefully selected patients, CUSUM analysis showed 22 cases were needed to overcome the learning curve for minor LLR.
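
    The CUSUM technique used for the learning-curve analysis above can be sketched in a few lines: each case contributes (event − target rate) to a running sum, and a sustained downward turn ("reverse") marks the point where performance beats the target rate. The event sequence and the 20% target rate below are invented for illustration, not the study's data.

```python
# Each case scores 1 if a major operative event occurred, else 0.
def cusum(events, target_rate=0.2):
    total, curve = 0.0, []
    for e in events:
        total += e - target_rate   # rises while events exceed the target rate
        curve.append(total)
    return curve

# 10 early cases with frequent events, then 10 event-free cases
events = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1] + [0] * 10
curve = cusum(events)              # peaks at case 10, then turns downward
```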

  17. Machine learning for epigenetics and future medical applications

    PubMed Central

    Holder, Lawrence B.; Haque, M. Muksitul; Skinner, Michael K.

    2017-01-01

    ABSTRACT Understanding epigenetic processes holds immense promise for medical applications. Advances in Machine Learning (ML) are critical to realize this promise. Previous studies used epigenetic data sets associated with the germline transmission of epigenetic transgenerational inheritance of disease and novel ML approaches to predict genome-wide locations of critical epimutations. A combination of Active Learning (ACL) and Imbalanced Class Learning (ICL) was used to address past problems with ML to develop a more efficient feature selection process and address the imbalance problem in all genomic data sets. The power of this novel ML approach and our ability to predict epigenetic phenomena and associated disease is suggested. The current approach requires extensive computation of features over the genome. A promising new approach is to introduce Deep Learning (DL) for the generation and simultaneous computation of novel genomic features tuned to the classification task. This approach can be used with any genomic or biological data set applied to medicine. The application of molecular epigenetic data in advanced machine learning analysis to medicine is the focus of this review. PMID:28524769

  18. Approximate, computationally efficient online learning in Bayesian spiking neurons.

    PubMed

    Kuhlmann, Levin; Hauser-Raspe, Michael; Manton, Jonathan H; Grayden, David B; Tapson, Jonathan; van Schaik, André

    2014-03-01

    Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM) which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy-efficient spike coding of BSNs.

  19. Forecasting Solar Flares Using Magnetogram-based Predictors and Machine Learning

    NASA Astrophysics Data System (ADS)

    Florios, Kostas; Kontogiannis, Ioannis; Park, Sung-Hong; Guerra, Jordan A.; Benvenuto, Federico; Bloomfield, D. Shaun; Georgoulis, Manolis K.

    2018-02-01

    We propose a forecasting approach for solar flares based on data from Solar Cycle 24, taken by the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) mission. In particular, we use the Space-weather HMI Active Region Patches (SHARP) product that facilitates cut-out magnetograms of solar active regions (ARs) on the Sun in near-real-time (NRT), taken over a five-year interval (2012-2016). Our approach utilizes a set of thirteen predictors, which are not included in the SHARP metadata, extracted from line-of-sight and vector photospheric magnetograms. We exploit several machine learning (ML) and conventional statistics techniques to predict flares of peak magnitude {>} M1 and {>} C1 within a 24 h forecast window. The ML methods used are multi-layer perceptrons (MLP), support vector machines (SVM), and random forests (RF). We conclude that random forests could be the prediction technique of choice for our sample, with the second-best method being multi-layer perceptrons, subject to an entropy objective function. A Monte Carlo simulation showed that the best-performing method gives accuracy ACC=0.93(0.00), true skill statistic TSS=0.74(0.02), and Heidke skill score HSS=0.49(0.01) for {>} M1 flare prediction with probability threshold 15% and ACC=0.84(0.00), TSS=0.60(0.01), and HSS=0.59(0.01) for {>} C1 flare prediction with probability threshold 35%.
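
    The two skill scores quoted above can be computed directly from a binary confusion matrix (TP = correctly forecast flares, TN = correctly forecast quiet windows). The counts in the example are invented, not the study's results.

```python
def tss(tp, fn, fp, tn):
    # True Skill Statistic: hit rate minus false-alarm rate
    return tp / (tp + fn) - fp / (fp + tn)

def hss(tp, fn, fp, tn):
    # Heidke Skill Score: improvement over random-chance forecasts
    num = 2.0 * (tp * tn - fp * fn)
    den = (tp + fn) * (fn + tn) + (tp + fp) * (fp + tn)
    return num / den

score_tss = tss(40, 10, 20, 30)   # 0.8 - 0.4 = 0.4
score_hss = hss(40, 10, 20, 30)
```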

  20. Accurate Diabetes Risk Stratification Using Machine Learning: Role of Missing Value and Outliers.

    PubMed

    Maniruzzaman, Md; Rahman, Md Jahanur; Al-MehediHasan, Md; Suri, Harman S; Abedin, Md Menhazul; El-Baz, Ayman; Suri, Jasjit S

    2018-04-10

    Diabetes mellitus is a group of metabolic diseases in which blood sugar levels are too high. About 8.8% of the world's population was diabetic in 2017, and this figure is projected to reach nearly 10% by 2045. The major challenge is that applying machine learning-based classifiers to such data sets for risk stratification leads to lower performance. Thus, our objective is to develop an optimized and robust machine learning (ML) system under the hypothesis that replacing missing values and outliers with computed medians will yield higher risk stratification accuracy. This ML-based risk stratification system is designed, optimized and evaluated as follows: features are extracted and optimized with six feature selection techniques (random forest, logistic regression, mutual information, principal component analysis, analysis of variance, and Fisher discriminant ratio) and combined with ten types of classifiers (linear discriminant analysis, quadratic discriminant analysis, naïve Bayes, Gaussian process classification, support vector machine, artificial neural network, Adaboost, logistic regression, decision tree, and random forest). The Pima Indian diabetes dataset (768 patients: 268 diabetic and 500 controls) was used. Our results demonstrate that replacing the missing values and outliers by group median and median values, respectively, and further using the combination of random forest feature selection and random forest classification yields an accuracy, sensitivity, specificity, positive predictive value, negative predictive value and area under the curve of 92.26%, 95.96%, 79.72%, 91.14%, 91.20%, and 0.93, respectively. This is an improvement of 10% over previously published techniques. The system was validated for its stability and reliability.
    The RF-based model showed the best performance when outliers were replaced by median values.
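    The median-replacement preprocessing described above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes NaN marks missing values and uses the common 1.5-IQR rule to flag outliers (the paper's exact outlier criterion and its group-wise medians are not reproduced here).

```python
import numpy as np

def median_impute(X):
    """Replace missing values (NaN) and outliers in each column
    by that column's median, per the median-replacement idea."""
    X = np.asarray(X, dtype=float).copy()
    for j in range(X.shape[1]):
        col = X[:, j]
        med = np.nanmedian(col)
        col[np.isnan(col)] = med                 # missing -> median
        q1, q3 = np.percentile(col, [25, 75])
        iqr = q3 - q1
        outlier = (col < q1 - 1.5 * iqr) | (col > q3 + 1.5 * iqr)
        col[outlier] = med                       # outliers -> median
        X[:, j] = col
    return X

X = [[1.0, 10.0],
     [2.0, float('nan')],
     [3.0, 12.0],
     [100.0, 11.0]]   # 100.0 is an outlier in column 0
print(median_impute(X))
```

    After this step, any standard feature-selection/classifier combination (such as the random forest pair reported above) can be trained on the cleaned matrix.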

  1. ML-o-Scope: A Diagnostic Visualization System for Deep Machine Learning Pipelines

    DTIC Science & Technology

    2014-05-16

    ML-o-scope: a diagnostic visualization system for deep machine learning pipelines. Daniel Bruckner, Electrical Engineering and Computer Sciences. ... The report presents the system as a support for tuning large-scale object-classification pipelines. Introduction: a new generation of pipelined machine learning models ...

  2. Using Machine Learning in Adversarial Environments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren Leon Davis

    Intrusion/anomaly detection systems are among the first lines of cyber defense. Commonly, they either use signatures or machine learning (ML) to identify threats, but fail to account for sophisticated attackers trying to circumvent them. We propose to embed machine learning within a game theoretic framework that performs adversarial modeling, develops methods for optimizing operational response based on ML, and integrates the resulting optimization codebase into the existing ML infrastructure developed by the Hybrid LDRD. Our approach addresses three key shortcomings of ML in adversarial settings: 1) resulting classifiers are typically deterministic and, therefore, easy to reverse engineer; 2) ML approaches only address the prediction problem, but do not prescribe how one should operationalize predictions, nor account for operational costs and constraints; and 3) ML approaches do not model attackers' response and can be circumvented by sophisticated adversaries. The principal novelty of our approach is to construct an optimization framework that blends ML, operational considerations, and a model predicting attackers' reaction, with the goal of computing an optimal moving target defense. One important challenge is to construct a model of an adversary that is tractable, yet realistic. We aim to advance the science of attacker modeling by considering game-theoretic methods, and by engaging experimental subjects with red teaming experience in trying to actively circumvent an intrusion detection system, and learning a predictive model of such circumvention activities. In addition, we will generate metrics to test that a particular model of an adversary is consistent with available data.

  3. Quantitative structure-activity relationship analysis and virtual screening studies for identifying HDAC2 inhibitors from known HDAC bioactive chemical libraries.

    PubMed

    Pham-The, H; Casañola-Martin, G; Diéguez-Santana, K; Nguyen-Hai, N; Ngoc, N T; Vu-Duc, L; Le-Thi-Thu, H

    2017-03-01

    Histone deacetylases (HDAC) are emerging as promising targets in cancer, neuronal diseases and immune disorders. Computational modelling approaches have been widely applied for the virtual screening and rational design of novel HDAC inhibitors. In this study, different machine learning (ML) techniques were applied for the development of models that accurately discriminate HDAC2 inhibitors from non-inhibitors. The obtained models showed encouraging results, with the global accuracy in the external set ranging from 0.83 to 0.90. Various aspects related to the comparison of modelling techniques, applicability domain and descriptor interpretations were discussed. Finally, consensus predictions of these models were used for screening HDAC2 inhibitors from four chemical libraries whose bioactivities against HDAC1, HDAC3, HDAC6 and HDAC8 have been known. According to the results of virtual screening assays, structures of some hits with pair-isoform-selective activity (between HDAC2 and other HDACs) were revealed. This study illustrates the power of ML-based QSAR approaches for the screening and discovery of potent, isoform-selective HDACIs.
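    The consensus prediction step above amounts to combining the binary votes of several classifiers. A minimal majority-vote sketch (the models and vote threshold here are hypothetical; the paper's exact consensus rule is not specified in the abstract):

```python
def consensus_predict(predictions, threshold=None):
    """Majority-vote consensus over several binary classifiers.

    predictions: list of per-model prediction lists (1 = inhibitor,
    0 = non-inhibitor). A compound is flagged when at least
    `threshold` models vote 1 (default: strict majority)."""
    n_models = len(predictions)
    if threshold is None:
        threshold = n_models // 2 + 1
    votes = [sum(p) for p in zip(*predictions)]
    return [1 if v >= threshold else 0 for v in votes]

# three hypothetical models scoring four compounds
m1 = [1, 0, 1, 0]
m2 = [1, 1, 0, 0]
m3 = [1, 0, 1, 1]
print(consensus_predict([m1, m2, m3]))  # -> [1, 0, 1, 0]
```

    Requiring agreement among models typically trades some sensitivity for higher precision, which suits a screening setting where follow-up assays are expensive.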

  4. Active learning-based information structure analysis of full scientific articles and two applications for biomedical literature review.

    PubMed

    Guo, Yufan; Silins, Ilona; Stenius, Ulla; Korhonen, Anna

    2013-06-01

    Techniques that are capable of automatically analyzing the information structure of scientific articles could be highly useful for improving information access to biomedical literature. However, most existing approaches rely on supervised machine learning (ML) and substantial labeled data that are expensive to develop and apply to different sub-fields of biomedicine. Recent research shows that minimal supervision is sufficient for fairly accurate information structure analysis of biomedical abstracts. However, is it realistic for full articles given their high linguistic and informational complexity? We introduce and release a novel corpus of 50 biomedical articles annotated according to the Argumentative Zoning (AZ) scheme, and investigate active learning with one of the most widely used ML models, Support Vector Machines (SVM), on this corpus. Additionally, we introduce two novel applications that use AZ to support real-life literature review in biomedicine via question answering and summarization. We show that active learning with an SVM trained on 500 labeled sentences (6% of the corpus) performs surprisingly well with an accuracy of 82%, just 2% lower than fully supervised learning. In our question answering task, biomedical researchers find relevant information significantly faster from AZ-annotated than unannotated articles. In the summarization task, sentences extracted from particular zones are significantly more similar to gold standard summaries than those extracted from particular sections of full articles. These results demonstrate that active learning of full articles' information structure is indeed realistic and the accuracy is high enough to support real-life literature review in biomedicine. The annotated corpus, our AZ classifier and the two novel applications are available at http://www.cl.cam.ac.uk/yg244/12bioinfo.html
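    The active learning loop described above (query the sentence the current model is least sure about, label it, retrain) can be sketched as follows. This is an illustrative uncertainty-sampling skeleton, with a small gradient-descent logistic regression standing in for the paper's SVM; all names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.5, steps=200):
    """Plain gradient-descent logistic regression (stand-in for an SVM)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

def uncertainty_sampling(X_pool, y_pool, n_seed=4, budget=10):
    """Label a seed set, then repeatedly query the pool point whose
    predicted probability is closest to 0.5 (most uncertain)."""
    labeled = list(range(n_seed))
    unlabeled = list(range(n_seed, len(X_pool)))
    for _ in range(budget):
        w = train_logreg(X_pool[labeled], y_pool[labeled])
        probs = sigmoid(X_pool[unlabeled] @ w)
        i = unlabeled[int(np.argmin(np.abs(probs - 0.5)))]
        labeled.append(i)          # the oracle reveals y_pool[i]
        unlabeled.remove(i)
    return train_logreg(X_pool[labeled], y_pool[labeled]), labeled

# toy pool: class = sign of the first feature
X = rng.normal(size=(60, 2))
y = (X[:, 0] > 0).astype(float)
w, queried = uncertainty_sampling(X, y)
print(len(queried))  # seed labels + queried labels
```

    The point of the strategy is that a small, informatively chosen labeled set (here 14 of 60 points) can approach the accuracy of labeling everything, which is what the 500-sentence result above demonstrates at corpus scale.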

  5. Predicting activities of daily living for cancer patients using an ontology-guided machine learning methodology.

    PubMed

    Min, Hua; Mobahi, Hedyeh; Irvin, Katherine; Avramovic, Sanja; Wojtusiak, Janusz

    2017-09-16

    Bio-ontologies are becoming increasingly important in knowledge representation and in the machine learning (ML) fields. This paper presents a ML approach that incorporates bio-ontologies and its application to the SEER-MHOS dataset to discover patterns of patient characteristics that impact the ability to perform activities of daily living (ADLs). Bio-ontologies are used to provide computable knowledge for ML methods to "understand" biomedical data. This retrospective study included 723 cancer patients from the SEER-MHOS dataset. Two ML methods were applied to create predictive models for ADL disabilities for the first year after a patient's cancer diagnosis. The first method is a standard rule learning algorithm; the second is that same algorithm additionally equipped with methods for reasoning with ontologies. The models showed that a patient's race, ethnicity, smoking preference, treatment plan and tumor characteristics including histology, staging, cancer site, and morphology were predictors for ADL performance levels one year after cancer diagnosis. The ontology-guided ML method was more accurate at predicting ADL performance levels (P < 0.1) than methods without ontologies. This study demonstrated that bio-ontologies can be harnessed to provide medical knowledge for ML algorithms. The presented method demonstrates that encoding specific types of hierarchical relationships to guide rule learning is possible, and can be extended to other types of semantic relationships present in biomedical ontologies. The ontology-guided ML method achieved better performance than the method without ontologies. The presented method can also be used to promote the effectiveness and efficiency of ML in healthcare, in which use of background knowledge and consistency with existing clinical expertise is critical.

  6. On-line capacity-building program on "analysis of data" for medical educators in the South Asia region: a qualitative exploration of our experience.

    PubMed

    Dongre, A R; Chacko, T V; Banu, S; Bhandary, S; Sahasrabudhe, R A; Philip, S; Deshmukh, P R

    2010-11-01

    In medical education, using the World Wide Web is a new approach for building the capacity of faculty. However, there is little information available on medical education researchers' needs and their collective learning outcomes in such on-line environments. Hence, the present study attempted: 1) to identify needs for capacity-building of fellows in a faculty development program on the topic of data analysis; and 2) to describe, analyze and understand the collective learning outcomes of the fellows during this need-based on-line session. The present research is based on quantitative (on-line survey for needs assessment) and qualitative (contents of e-mails exchanged in listserv discussion) data which were generated during the October 2009 Mentoring and Learning (M-L) Web discussion on the topic of data analysis. The data sources were shared e-mail responses during the process of planning and executing the M-L Web discussion. Content analysis was undertaken and the categories of discussion were presented as a simple non-hierarchical typology which represents the collective learning of the project fellows. We identified the types of learning needs on the topic 'Analysis of Data' to be addressed for faculty development in the field of education research. This need-based M-L Web discussion could then facilitate collective learning on such topics as basic concepts in statistics, tests of significance, Likert scale analysis, bivariate correlation, simple regression analysis, and content analysis of qualitative data. Steps like identifying the learning needs for an on-line M-L Web discussion, addressing the immediate needs of learners and creating a flexible reflective learning environment on the M-L Web facilitated the collective learning of the fellows on the topic of data analysis. Our outcomes can be useful in the design of on-line pedagogical strategies for supporting research in medical education.

  7. [KTP (green light) laser for the treatment of benign prostatic hyperplasia. Preliminary evaluation].

    PubMed

    Coz, Fernando; Domenech, Alfredo

    2007-09-01

    Photoselective vaporization of benign prostatic hyperplasia (BPH) is a minimally invasive technique consisting of vaporization of prostatic tissue by KTP green light laser with a power of 80 W. The purpose of this study was to describe our experience with this technique. KTP laser photoselective vaporization was performed in 18 patients with lower obstructive uropathy secondary to benign prostatic hyperplasia at Santiago Military Hospital from December 2005. Preoperative characteristics, postoperative results and complications were recorded. Mean prostatic volume was 55 cc (range: 24 to 78). Mean operating time was 83 minutes (range: 40 to 120). In sixteen patients, the Foley catheter was removed before 24 hours. The mean preoperative AUA score was 22 and decreased to 11.4 after 30 days. The mean maximum preoperative urine flow rate was 9 ml/s and increased to 18.2, 22.1, 22.5, 25.3 and 27.2 ml/s on days 1, 7, 14, 21 and 30, respectively. Only minor complications were observed: delayed removal of the Foley catheter (11.1%), dysuria (16.6%) and late haematuria (11.1%). KTP laser photoselective vaporization of BPH is a safe technique that is easy to learn, with good short-term functional results and a low complication rate.

  8. Insights from Classifying Visual Concepts with Multiple Kernel Learning

    PubMed Central

    Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki

    2012-01-01

    Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques make it possible to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (Accessed 2012 Jun 25). PMID:22936970
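    The core object in MKL is the combined Gram matrix K = sum_m beta_m K_m with the weights beta_m on the simplex. A tiny illustration of forming such a combination from two base kernels (the weights here are fixed by hand; actual MKL optimizes them jointly with the SVM, which is beyond this sketch):

```python
import numpy as np

def combine_kernels(kernels, betas):
    """Weighted sum of base kernel (Gram) matrices, K = sum_m beta_m K_m.
    betas are assumed non-negative and to sum to 1 (simplex constraint)."""
    betas = np.asarray(betas, dtype=float)
    assert np.all(betas >= 0) and np.isclose(betas.sum(), 1.0)
    return sum(b * K for b, K in zip(betas, kernels))

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
K_lin = X @ X.T                                     # linear kernel
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_rbf = np.exp(-0.5 * d2)                           # RBF kernel

K = combine_kernels([K_lin, K_rbf], [0.3, 0.7])
# a convex combination of PSD Gram matrices stays PSD
print(np.linalg.eigvalsh(K).min() >= -1e-9)
```

    Sparse (1-norm) MKL drives most beta_m to zero, while the non-sparse variants studied in the paper keep many kernels active with small weights, which is why they can behave more like the unweighted sum kernel.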

  9. Memory effects of Aronia melanocarpa fruit juice in a passive avoidance test in rats.

    PubMed

    Valcheva-Kuzmanova, Stefka V; Eftimov, Miroslav Tz; Tashev, Roman E; Belcheva, Iren P; Belcheva, Stiliana P

    2014-01-01

    To study the effect of Aronia melanocarpa fruit juice on memory in male Wistar rats. The juice was administered orally for 7, 14, 21 and 30 days at doses of 2.5 ml/kg, 5 ml/kg and 10 ml/kg. Memory was assessed in the one-way passive avoidance task (step through) which consisted of one training session and two retention tests (3 hours and 24 hours after training). The variables measured were the latency time to step into the dark compartment of the apparatus and the learning criterion (remaining in the illuminated compartment for at least 180 sec). Oral administration of Aronia melanocarpa fruit juice for 7 and 14 days resulted in a dose-dependent tendency to increase the latency time and the learning criterion compared to saline-treated controls but the effect failed to reach statistical significance. After 21 days of treatment, the juice dose-dependently prolonged the latency time at the retention tests, the effect being significant at doses of 5 ml/kg and 10 ml/kg. Applied for 30 days, the juice in all the tested doses increased significantly the latency time at the retention tests and the dose of 10 ml/kg significantly increased the percentage of rats reaching the learning criterion. These findings suggest that Aronia melanocarpa fruit juice could improve memory in rats. The effect is probably due to the polyphenolic ingredients of the juice which have been shown to be involved in learning and memory processes.

  10. Automating Construction of Machine Learning Models With Clinical Big Data: Proposal Rationale and Methods.

    PubMed

    Luo, Gang; Stone, Bryan L; Johnson, Michael D; Tarczy-Hornoch, Peter; Wilcox, Adam B; Mooney, Sean D; Sheng, Xiaoming; Haug, Peter J; Nkoy, Flory L

    2017-08-29

    To improve health outcomes and cut health care costs, we often need to conduct prediction/classification using large clinical datasets (aka, clinical big data), for example, to identify high-risk patients for preventive interventions. Machine learning has been proposed as a key technology for doing this. Machine learning has won most data science competitions and could support many clinical activities, yet only 15% of hospitals use it for even limited purposes. Despite familiarity with data, health care researchers often lack machine learning expertise to directly use clinical big data, creating a hurdle in realizing value from their data. Health care researchers can work with data scientists with deep machine learning knowledge, but it takes time and effort for both parties to communicate effectively. Facing a shortage in the United States of data scientists and hiring competition from companies with deep pockets, health care systems have difficulty recruiting data scientists. Building and generalizing a machine learning model often requires hundreds to thousands of manual iterations by data scientists to select the following: (1) hyper-parameter values and complex algorithms that greatly affect model accuracy and (2) operators and periods for temporally aggregating clinical attributes (eg, whether a patient's weight kept rising in the past year). This process becomes infeasible with limited budgets. This study's goal is to enable health care researchers to directly use clinical big data, make machine learning feasible with limited budgets and data scientist resources, and realize value from data. 
This study will allow us to achieve the following: (1) finish developing the new software, Automated Machine Learning (Auto-ML), to automate model selection for machine learning with clinical big data and validate Auto-ML on seven benchmark modeling problems of clinical importance; (2) apply Auto-ML and novel methodology to two new modeling problems crucial for care management allocation and pilot one model with care managers; and (3) perform simulations to estimate the impact of adopting Auto-ML on US patient outcomes. We are currently writing Auto-ML's design document. We intend to finish our study by around the year 2022. Auto-ML will generalize to various clinical prediction/classification problems. With minimal help from data scientists, health care researchers can use Auto-ML to quickly build high-quality models. This will boost wider use of machine learning in health care and improve patient outcomes. ©Gang Luo, Bryan L Stone, Michael D Johnson, Peter Tarczy-Hornoch, Adam B Wilcox, Sean D Mooney, Xiaoming Sheng, Peter J Haug, Flory L Nkoy. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 29.08.2017.
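    The manual iterations Auto-ML is meant to automate amount to searching over candidate configurations under cross-validation. A generic, deliberately simplified skeleton (the toy threshold "model", the search space and the contiguous-fold CV are all hypothetical stand-ins, not Auto-ML's method):

```python
import itertools

def cross_val_score(fit, score_fn, X, y, k=3):
    """Mean score over k contiguous folds (simplified CV)."""
    n = len(X)
    fold = n // k
    scores = []
    for i in range(k):
        lo, hi = i * fold, (i + 1) * fold
        model = fit(X[:lo] + X[hi:], y[:lo] + y[hi:])
        scores.append(score_fn(model, X[lo:hi], y[lo:hi]))
    return sum(scores) / k

def grid_search(space, make_fit, score_fn, X, y):
    """Try every configuration in the grid; keep the best CV score."""
    best, best_score = None, float('-inf')
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        cfg = dict(zip(keys, values))
        s = cross_val_score(make_fit(cfg), score_fn, X, y)
        if s > best_score:
            best, best_score = cfg, s
    return best, best_score

# toy model: classify x as positive when x > threshold
make_fit = lambda cfg: (lambda X_tr, y_tr: cfg["threshold"])
accuracy = lambda t, Xv, yv: sum((x > t) == yy for x, yy in zip(Xv, yv)) / len(Xv)

X = [0.2, 1.4, 0.6, 1.8, 0.4, 1.2, 0.8, 1.6, 0.3, 1.7, 0.5, 1.5]
y = [x > 1.0 for x in X]
best, s = grid_search({"threshold": [0.0, 0.5, 1.0, 2.0]}, make_fit, accuracy, X, y)
print(best, s)  # -> {'threshold': 1.0} 1.0
```

    Real automated model selection additionally searches over algorithms and, as the proposal notes, over temporal aggregation operators and periods, so the grid is vastly larger; the loop structure, however, is the same.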

  11. Coronary CT Angiography-derived Fractional Flow Reserve: Machine Learning Algorithm versus Computational Fluid Dynamics Modeling.

    PubMed

    Tesche, Christian; De Cecco, Carlo N; Baumann, Stefan; Renker, Matthias; McLaurin, Tindal W; Duguay, Taylor M; Bayer, Richard R; Steinberg, Daniel H; Grant, Katharine L; Canstein, Christian; Schwemmer, Chris; Schoebinger, Max; Itu, Lucian M; Rapaka, Saikiran; Sharma, Puneet; Schoepf, U Joseph

    2018-04-10

    Purpose To compare two technical approaches for determination of coronary computed tomography (CT) angiography-derived fractional flow reserve (FFR): FFR derived from coronary CT angiography based on computational fluid dynamics (hereafter, FFR-CFD) and FFR derived from coronary CT angiography based on a machine learning algorithm (hereafter, FFR-ML), against coronary CT angiography and quantitative coronary angiography (QCA). Materials and Methods A total of 85 patients (mean age, 62 years ± 11 [standard deviation]; 62% men) who had undergone coronary CT angiography followed by invasive FFR were included in this single-center retrospective study. FFR values were derived on-site from coronary CT angiography data sets by using both FFR-CFD and FFR-ML. The performance of both techniques for detecting lesion-specific ischemia was compared against visual stenosis grading at coronary CT angiography, QCA, and invasive FFR as the reference standard. Results On a per-lesion and per-patient level, FFR-ML showed a sensitivity of 79% and 90% and a specificity of 94% and 95%, respectively, for detecting lesion-specific ischemia. Meanwhile, FFR-CFD resulted in a sensitivity of 79% and 89% and a specificity of 93% and 93%, respectively, on a per-lesion and per-patient basis (P = .86 and P = .92). On a per-lesion level, the area under the receiver operating characteristics curve (AUC) of 0.89 for FFR-ML and 0.89 for FFR-CFD showed significantly higher discriminatory power for detecting lesion-specific ischemia compared with that of coronary CT angiography (AUC, 0.61) and QCA (AUC, 0.69) (all P < .0001). Also, on a per-patient level, FFR-ML (AUC, 0.91) and FFR-CFD (AUC, 0.91) performed significantly better than did coronary CT angiography (AUC, 0.65) and QCA (AUC, 0.68) (all P < .0001). Processing time for FFR-ML was significantly shorter compared with that of FFR-CFD (40.5 minutes ± 6.3 vs 43.4 minutes ± 7.1; P = .042). 
    Conclusion The FFR-ML algorithm performs equally well in detecting lesion-specific ischemia when compared with the FFR-CFD approach. Both methods outperform the accuracy of coronary CT angiography and QCA in the detection of flow-limiting stenoses. © RSNA, 2018.
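    Per-lesion sensitivity and specificity of the kind reported above are computed from paired binary calls (ischemia yes/no) against the invasive-FFR reference. A small helper with hypothetical lesion data, purely to illustrate the definitions:

```python
def sens_spec(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
    with 1 = lesion-specific ischemia by the reference standard."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical calls for ten lesions
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
calls = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
print(sens_spec(truth, calls))  # -> (0.75, 0.8333...)
```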

  12. Specificity of coliphages in evaluating marker efficacy: a new insight for water quality indicators.

    PubMed

    Mookerjee, Subham; Batabyal, Prasenjit; Halder, Madhumanti; Palit, Anup

    2014-11-01

    Conventional procedures for qualitative assessment of coliphages are time-consuming, multiple-step approaches. A modified and rapid technique has been introduced for determination of coliphage contamination in potable water sources during water-borne outbreaks. During December 2013, 40 water samples from different potable water sources were received for water quality analyses from a jaundice-affected municipality of West Bengal, India. Altogether, 30% of water samples were contaminated with coliforms (1-20 cfu/ml) and 5% with E. coli (2-5 cfu/ml). Among post-outbreak samples, the preponderance of coliforms decreased (1-4 cfu/ml), with total absence of E. coli. While the standard technique detected coliphage contamination in 55% of outbreak samples, the modified technique revealed that 80%, double the bacteriological identification rate, were contaminated with coliphages (4-20 pfu/10 ml). However, post-outbreak samples were detected with 1-5 pfu/10 ml coliphages among 20% of samples. The coliphage detection rate through the modified technique was nearly double (50%) that of the standard technique (27.5%). In a few samples (with coliform load of 10-100 cfu/ml), while the modified technique could detect coliphages in six samples (10-20 pfu/10 ml), the standard protocol failed to detect coliphage in any of them. An easy, rapid and accurate modified technique has thereby been implemented for coliphage assessment from water samples. Coliform-free water does not always signify pathogen-free potable water, and it is demonstrated that coliphage is a more reliable 'biomarker' to ascertain contamination level in potable water. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Supervised Machine Learning for Population Genetics: A New Paradigm

    PubMed Central

    Schrider, Daniel R.; Kern, Andrew D.

    2018-01-01

    As population genomic datasets grow in size, researchers are faced with the daunting task of making sense of a flood of information. To keep pace with this explosion of data, computational methodologies for population genetic inference are rapidly being developed to best utilize genomic sequence data. In this review we discuss a new paradigm that has emerged in computational population genomics: that of supervised machine learning (ML). We review the fundamentals of ML, discuss recent applications of supervised ML to population genetics that outperform competing methods, and describe promising future directions in this area. Ultimately, we argue that supervised ML is an important and underutilized tool that has considerable potential for the world of evolutionary genomics. PMID:29331490

  14. Downscaling Coarse Scale Microwave Soil Moisture Product using Machine Learning

    NASA Astrophysics Data System (ADS)

    Abbaszadeh, P.; Moradkhani, H.; Yan, H.

    2016-12-01

    Soil moisture (SM) is a key variable in partitioning and examining the global water-energy cycle, agricultural planning, and water resource management. It is also strongly coupled with climate change, playing an important role in weather forecasting, drought monitoring and prediction, flood modeling and irrigation management. Although satellite retrievals can provide unprecedented information on soil moisture at a global scale, the products might be inadequate for basin-scale study or regional assessment. To improve the spatial resolution of SM, this work presents a novel approach based on a Machine Learning (ML) technique that allows for downscaling of the satellite soil moisture to fine resolution. For this purpose, the SMAP L-band radiometer SM products were used and conditioned on the Variable Infiltration Capacity (VIC) model prediction to describe the relationship between the coarse- and fine-scale soil moisture data. The proposed downscaling approach was applied to a western US basin and the products were compared against the available SM data from in-situ gauge stations. The obtained results indicated the great potential of the machine learning technique to derive the fine-resolution soil moisture information needed for land data assimilation applications.
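    In essence, the downscaling learns a mapping from the coarse-pixel satellite value plus fine-scale covariates (here, the VIC-modeled moisture) to fine-scale soil moisture. A minimal linear-regression stand-in on synthetic data; the abstract does not specify the authors' actual ML technique, and all numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic training data: fine-scale SM depends on the coarse SMAP value
# plus a fine-scale covariate (e.g. VIC-modeled moisture); all hypothetical
n = 200
coarse_sm = rng.uniform(0.1, 0.4, n)                 # coarse-pixel retrieval
vic_sm = coarse_sm + rng.normal(0, 0.02, n)          # fine-scale model value
fine_sm = 0.6 * coarse_sm + 0.4 * vic_sm + rng.normal(0, 0.005, n)

# fit fine_sm ~ intercept + coarse_sm + vic_sm by least squares
X = np.column_stack([np.ones(n), coarse_sm, vic_sm])
beta, *_ = np.linalg.lstsq(X, fine_sm, rcond=None)

def downscale(coarse, vic):
    """Predict fine-scale soil moisture from the coarse SM and VIC covariate."""
    return beta[0] + beta[1] * coarse + beta[2] * vic

print(downscale(0.25, 0.26))
```

    A nonlinear learner (random forest, neural network, etc.) would replace the least-squares fit, but the conditioning structure, coarse retrieval plus fine-scale covariates in, fine-scale estimate out, is the same.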

  15. Improving precision of glomerular filtration rate estimating model by ensemble learning.

    PubMed

    Liu, Xun; Li, Ningshan; Lv, Linsheng; Fu, Yongmei; Cheng, Cailian; Wang, Caixia; Ye, Yuqiu; Li, Shaomin; Lou, Tanqi

    2017-11-09

    Accurate assessment of kidney function is clinically important, but estimates of glomerular filtration rate (GFR) by regression are imprecise. We hypothesized that ensemble learning could improve precision. A total of 1419 participants were enrolled, with 1002 in the development dataset and 417 in the external validation dataset. GFR was independently estimated from age, sex and serum creatinine using an artificial neural network (ANN), support vector machine (SVM), regression, and ensemble learning. GFR was measured by 99mTc-DTPA renal dynamic imaging calibrated with dual plasma sample 99mTc-DTPA GFR. Mean measured GFRs were 70.0 ml/min/1.73 m2 in the developmental and 53.4 ml/min/1.73 m2 in the external validation cohorts. In the external validation cohort, precision was better in the ensemble model of the ANN, SVM and regression equation (IQR = 13.5 ml/min/1.73 m2) than in the new regression model (IQR = 14.0 ml/min/1.73 m2, P < 0.001). The precision of ensemble learning was the best of the three models, but the models had similar bias and accuracy. The median difference ranged from 2.3 to 3.7 ml/min/1.73 m2, 30% accuracy ranged from 73.1 to 76.0%, and P was > 0.05 for all comparisons of the new regression equation and the other new models. An ensemble learning model including three variables, the average ANN, SVM, and regression equation values, was more precise than the new regression model. A more complex ensemble learning strategy may further improve GFR estimates.
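    The ensemble described above simply averages the three base estimates per patient. A sketch with toy numbers (not the study's data):

```python
import numpy as np

def ensemble_gfr(ann_pred, svm_pred, reg_pred):
    """Average the ANN, SVM and regression GFR estimates per patient,
    as in the averaging ensemble described above."""
    return np.mean([ann_pred, svm_pred, reg_pred], axis=0)

# hypothetical per-patient estimates in ml/min/1.73 m2
ann = np.array([72.0, 55.0, 90.0])
svm = np.array([70.0, 53.0, 88.0])
reg = np.array([68.0, 60.0, 95.0])
print(ensemble_gfr(ann, svm, reg))  # -> [70. 56. 91.]
```

    Averaging reduces the variance of the combined estimate when the base models' errors are not perfectly correlated, which is the mechanism behind the narrower IQR reported above.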

  16. A technique for fast and accurate measurement of hand volumes using Archimedes' principle.

    PubMed

    Hughes, S; Lau, J

    2008-03-01

    A new technique for measuring hand volumes using Archimedes' principle is described. The technique involves the immersion of a hand in a water container placed on an electronic balance. The volume is given by the change in weight divided by the density of water. This technique was compared with the more conventional technique of immersing an object in a container with an overflow spout and collecting and weighing the volume of overflow water. The hand volume of two subjects was measured. Hand volumes were 494 +/- 6 ml and 312 +/- 7 ml for the immersion method and 476 +/- 14 ml and 302 +/- 8 ml for the overflow method for the two subjects respectively. Using plastic test objects, the mean difference between the actual and measured volume was -0.3% and 2.0% for the immersion and overflow techniques respectively. This study shows that hand volumes can be obtained more quickly with the immersion method than with the overflow method. The technique could find an application in clinics where frequent hand volumes are required.
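    The computation behind the immersion method is a one-liner: the balance registers the weight of the displaced water, so volume = weight change / water density. A small helper (the density value is an assumption, roughly that of water near room temperature; the weight-change figure is illustrative):

```python
def immersion_volume_ml(weight_change_g, water_density_g_per_ml=0.998):
    """Volume displaced by the immersed hand, by Archimedes' principle:
    the balance registers the weight of displaced water."""
    return weight_change_g / water_density_g_per_ml

# e.g. a 493 g increase on the balance corresponds to ~494 ml
print(round(immersion_volume_ml(493.0), 1))  # -> 494.0
```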

  17. Relationship Between Non-invasive Brain Stimulation-induced Plasticity and Capacity for Motor Learning.

    PubMed

    López-Alonso, Virginia; Cheeran, Binith; Fernández-del-Olmo, Miguel

    2015-01-01

    Cortical plasticity plays a key role in motor learning (ML). Non-invasive brain stimulation (NIBS) paradigms have been used to modulate plasticity in the human motor cortex in order to facilitate ML. However, little is known about the relationship between NIBS-induced plasticity over M1 and ML capacity. We hypothesized that NIBS-induced motor evoked potential (MEP) changes are related to ML capacity. 56 subjects participated in three NIBS sessions (paired associative stimulation [PAS], anodal transcranial direct current stimulation [AtDCS] and intermittent theta-burst stimulation [iTBS]) and in three lab-based ML task sessions (serial reaction time task [SRTT], visuomotor adaptation task [VAT] and sequential visual isometric pinch task [SVIPT]). After clustering the patterns of response to the different NIBS protocols, we compared the ML variables between the different patterns found. We used regression analysis to explore further the relationship between ML capacity and summary measures of the MEP change. We ran correlations with the "responders" group only. We found no differences in ML variables between clusters. Greater response to NIBS protocols may be predictive of poor performance within certain blocks of the VAT. "Responders" to AtDCS and to iTBS showed significantly faster reaction times than "non-responders." However, the physiological significance of these results is uncertain. MEP changes induced in M1 by PAS, AtDCS and iTBS appear to have little, if any, association with the ML capacity tested with the SRTT, the VAT and the SVIPT. However, cortical excitability changes induced in M1 by AtDCS and iTBS may be related to reaction time and retention of newly acquired skills in certain motor learning tasks. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Automating Construction of Machine Learning Models With Clinical Big Data: Proposal Rationale and Methods

    PubMed Central

    Stone, Bryan L; Johnson, Michael D; Tarczy-Hornoch, Peter; Wilcox, Adam B; Mooney, Sean D; Sheng, Xiaoming; Haug, Peter J; Nkoy, Flory L

    2017-01-01

    Background To improve health outcomes and cut health care costs, we often need to conduct prediction/classification using large clinical datasets (aka, clinical big data), for example, to identify high-risk patients for preventive interventions. Machine learning has been proposed as a key technology for doing this. Machine learning has won most data science competitions and could support many clinical activities, yet only 15% of hospitals use it for even limited purposes. Despite familiarity with data, health care researchers often lack machine learning expertise to directly use clinical big data, creating a hurdle in realizing value from their data. Health care researchers can work with data scientists with deep machine learning knowledge, but it takes time and effort for both parties to communicate effectively. Facing a shortage in the United States of data scientists and hiring competition from companies with deep pockets, health care systems have difficulty recruiting data scientists. Building and generalizing a machine learning model often requires hundreds to thousands of manual iterations by data scientists to select the following: (1) hyper-parameter values and complex algorithms that greatly affect model accuracy and (2) operators and periods for temporally aggregating clinical attributes (eg, whether a patient’s weight kept rising in the past year). This process becomes infeasible with limited budgets. Objective This study’s goal is to enable health care researchers to directly use clinical big data, make machine learning feasible with limited budgets and data scientist resources, and realize value from data. 
Methods This study will allow us to achieve the following: (1) finish developing the new software, Automated Machine Learning (Auto-ML), to automate model selection for machine learning with clinical big data and validate Auto-ML on seven benchmark modeling problems of clinical importance; (2) apply Auto-ML and novel methodology to two new modeling problems crucial for care management allocation and pilot one model with care managers; and (3) perform simulations to estimate the impact of adopting Auto-ML on US patient outcomes. Results We are currently writing Auto-ML’s design document. We intend to finish our study by around the year 2022. Conclusions Auto-ML will generalize to various clinical prediction/classification problems. With minimal help from data scientists, health care researchers can use Auto-ML to quickly build high-quality models. This will boost wider use of machine learning in health care and improve patient outcomes. PMID:28851678

  19. Designing Contestability: Interaction Design, Machine Learning, and Mental Health

    PubMed Central

    Hirsch, Tad; Merced, Kritzia; Narayanan, Shrikanth; Imel, Zac E.; Atkins, David C.

    2017-01-01

    We describe the design of an automated assessment and training tool for psychotherapists to illustrate challenges with creating interactive machine learning (ML) systems, particularly in contexts where human life, livelihood, and wellbeing are at stake. We explore how existing theories of interaction design and machine learning apply to the psychotherapy context, and identify “contestability” as a new principle for designing systems that evaluate human behavior. Finally, we offer several strategies for making ML systems more accountable to human actors. PMID:28890949

  20. Variational Bayesian Parameter Estimation Techniques for the General Linear Model

    PubMed Central

    Starke, Ludger; Ostwald, Dirk

    2017-01-01

    Variational Bayes (VB), variational maximum likelihood (VML), restricted maximum likelihood (ReML), and maximum likelihood (ML) are cornerstone parametric statistical estimation techniques in the analysis of functional neuroimaging data. However, the theoretical underpinnings of these model parameter estimation techniques are rarely covered in introductory statistical texts. Because of the widespread practical use of VB, VML, ReML, and ML in the neuroimaging community, we reasoned that a theoretical treatment of their relationships and their application in a basic modeling scenario may be helpful for neuroimaging novices and practitioners alike. In this technical study, we thus revisit the conceptual and formal underpinnings of VB, VML, ReML, and ML and provide a detailed account of their mathematical relationships and implementational details. We further apply VB, VML, ReML, and ML to the general linear model (GLM) with non-spherical error covariance as commonly encountered in the first-level analysis of fMRI data. To this end, we explicitly derive the corresponding free energy objective functions and ensuing iterative algorithms. Finally, in the applied part of our study, we evaluate the parameter and model recovery properties of VB, VML, ReML, and ML, first in an exemplary setting and then in the analysis of experimental fMRI data acquired from a single participant under visual stimulation. PMID:28966572

  1. Machine Learning and Neurosurgical Outcome Prediction: A Systematic Review.

    PubMed

    Senders, Joeky T; Staples, Patrick C; Karhade, Aditya V; Zaki, Mark M; Gormley, William B; Broekman, Marike L D; Smith, Timothy R; Arnaout, Omar

    2018-01-01

    Accurate measurement of surgical outcomes is highly desirable to optimize surgical decision-making. An important element of surgical decision-making is identification of the patient cohort that will benefit from surgery before the intervention. Machine learning (ML) enables computers to learn from previous data to make accurate predictions on new data. In this systematic review, we evaluate the potential of ML for neurosurgical outcome prediction. A systematic search in the PubMed and Embase databases was performed to identify all potentially relevant studies up to January 1, 2017. Thirty studies were identified that evaluated ML algorithms used as prediction models for survival, recurrence, symptom improvement, and adverse events in patients undergoing surgery for epilepsy, brain tumor, spinal lesions, neurovascular disease, movement disorders, traumatic brain injury, and hydrocephalus. Depending on the specific prediction task evaluated and the type of input features included, ML models predicted outcomes after neurosurgery with a median accuracy and area under the receiver operating characteristic curve of 94.5% and 0.83, respectively. Compared with logistic regression, ML models performed significantly better and showed a median absolute improvement in accuracy and area under the receiver operating characteristic curve of 15% and 0.06, respectively. Some studies also demonstrated a better performance in ML models compared with established prognostic indices and clinical experts. In the research setting, ML has been studied extensively, demonstrating an excellent performance in outcome prediction for a wide range of neurosurgical conditions. However, future studies should investigate how ML can be implemented as a practical tool supporting neurosurgical care. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. [Effects of chrysalis oil on learning, memory and oxidative stress in D-galactose-induced ageing model of mice].

    PubMed

    Chen, Weiping; Yang, Qiongjie; Wei, Xing

    2013-11-01

    To investigate the effects of chrysalis oil on learning, memory and oxidative stress in a D-galactose-induced aging model of mice, mice were injected intraperitoneally with D-galactose daily and simultaneously received chrysalis oil intragastrically for 30 d. The mice then underwent a space navigation test and a spatial probe test, and superoxide dismutase (SOD) activity, glutathione peroxidase (GSH-PX) activity and malondialdehyde (MDA) content in mouse brain were measured. Compared to the model group, escape latency in mice treated with 6 ml/kg*d chrysalis oil was significantly shorter (P<0.05), and crossing times in the 12 ml/kg*d and 6 ml/kg*d chrysalis oil groups were significantly increased (P<0.05). Chrysalis oil treatment (12 ml/kg*d) significantly increased SOD and GSH-PX activity and reduced MDA content in the brain of D-galactose-induced aging mice. Chrysalis oil can improve learning and memory in D-galactose-induced aging mice and inhibit peroxidation in brain tissue.

  3. Machine Learning Techniques for the Detection of Shockable Rhythms in Automated External Defibrillators

    PubMed Central

    Irusta, Unai; Morgado, Eduardo; Aramendi, Elisabete; Ayala, Unai; Wik, Lars; Kramer-Johansen, Jo; Eftestøl, Trygve; Alonso-Atienza, Felipe

    2016-01-01

    Early recognition of ventricular fibrillation (VF) and electrical therapy are key for the survival of out-of-hospital cardiac arrest (OHCA) patients treated with automated external defibrillators (AED). AED algorithms for VF-detection are customarily assessed using Holter recordings from public electrocardiogram (ECG) databases, which may be different from the ECG seen during OHCA events. This study evaluates VF-detection using data from both OHCA patients and public Holter recordings. ECG-segments of 4-s and 8-s duration were analyzed. For each segment 30 features were computed and fed to state-of-the-art machine learning (ML) algorithms. ML-algorithms with built-in feature selection capabilities were used to determine the optimal feature subsets for both databases. Patient-wise bootstrap techniques were used to evaluate algorithm performance in terms of sensitivity (Se), specificity (Sp) and balanced error rate (BER). Performance was significantly better for public data, with a mean Se of 96.6%, Sp of 98.8% and BER of 2.2%, compared to a mean Se of 94.7%, Sp of 96.5% and BER of 4.4% for OHCA data. OHCA data required twice as many features as the data from public databases for an accurate detection (6 vs 3). No significant differences in performance were found for different segment lengths; the BER differences were below 0.5 points in all cases. Our results show that VF-detection is more challenging for OHCA data than for data from public databases, and that accurate VF-detection is possible with segments as short as 4-s. PMID:27441719
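    The three evaluation metrics named in this record have simple definitions. The sketch below (not the paper's code) computes sensitivity, specificity, and balanced error rate from labeled rhythm predictions; the labels in the usage line are illustrative.

```python
def se_sp_ber(y_true, y_pred, shockable=1):
    # Sensitivity (Se): fraction of shockable rhythms correctly detected.
    # Specificity (Sp): fraction of non-shockable rhythms correctly rejected.
    # Balanced error rate (BER): mean of the two error rates.
    tp = sum(t == shockable and p == shockable for t, p in zip(y_true, y_pred))
    fn = sum(t == shockable and p != shockable for t, p in zip(y_true, y_pred))
    tn = sum(t != shockable and p != shockable for t, p in zip(y_true, y_pred))
    fp = sum(t != shockable and p == shockable for t, p in zip(y_true, y_pred))
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    ber = 0.5 * ((1 - se) + (1 - sp))
    return se, sp, ber

# Toy example: 1 = shockable (VF), 0 = non-shockable.
se, sp, ber = se_sp_ber([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
```

    In the study these metrics are computed over patient-wise bootstrap resamples, so that segments from the same patient never straddle the resampling boundary.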

  4. Machine Learning Techniques for the Detection of Shockable Rhythms in Automated External Defibrillators.

    PubMed

    Figuera, Carlos; Irusta, Unai; Morgado, Eduardo; Aramendi, Elisabete; Ayala, Unai; Wik, Lars; Kramer-Johansen, Jo; Eftestøl, Trygve; Alonso-Atienza, Felipe

    2016-01-01

    Early recognition of ventricular fibrillation (VF) and electrical therapy are key for the survival of out-of-hospital cardiac arrest (OHCA) patients treated with automated external defibrillators (AED). AED algorithms for VF-detection are customarily assessed using Holter recordings from public electrocardiogram (ECG) databases, which may be different from the ECG seen during OHCA events. This study evaluates VF-detection using data from both OHCA patients and public Holter recordings. ECG-segments of 4-s and 8-s duration were analyzed. For each segment 30 features were computed and fed to state-of-the-art machine learning (ML) algorithms. ML-algorithms with built-in feature selection capabilities were used to determine the optimal feature subsets for both databases. Patient-wise bootstrap techniques were used to evaluate algorithm performance in terms of sensitivity (Se), specificity (Sp) and balanced error rate (BER). Performance was significantly better for public data, with a mean Se of 96.6%, Sp of 98.8% and BER of 2.2%, compared to a mean Se of 94.7%, Sp of 96.5% and BER of 4.4% for OHCA data. OHCA data required twice as many features as the data from public databases for an accurate detection (6 vs 3). No significant differences in performance were found for different segment lengths; the BER differences were below 0.5 points in all cases. Our results show that VF-detection is more challenging for OHCA data than for data from public databases, and that accurate VF-detection is possible with segments as short as 4-s.

  5. Machine Learning and Inverse Problem in Geodynamics

    NASA Astrophysics Data System (ADS)

    Shahnas, M. H.; Yuen, D. A.; Pysklywec, R.

    2017-12-01

    During the past few decades, numerical modeling and traditional high-performance computing (HPC) have been widely deployed in many diverse fields for problem solutions. However, in recent years the rapid emergence of machine learning (ML), a subfield of artificial intelligence (AI), in many fields of science, engineering, and finance seems to mark a turning point in the replacement of traditional modeling procedures with artificial intelligence-based techniques. The study of the circulation in the interior of Earth relies on the study of high pressure mineral physics, geochemistry, and petrology, where the number of the mantle parameters is large and the thermoelastic parameters are highly pressure- and temperature-dependent. More complexity arises from the fact that many of these parameters that are incorporated in the numerical models as input parameters are not yet well established. In such complex systems the application of machine learning algorithms can play a valuable role. Our focus in this study is the application of supervised machine learning (SML) algorithms in predicting mantle properties, with the emphasis on SML techniques in solving the inverse problem. As a sample problem we focus on the spin transition in ferropericlase and perovskite that may cause slab and plume stagnation at mid-mantle depths. The degree of the stagnation depends on the degree of negative density anomaly at the spin transition zone. The training and testing samples for the machine learning models are produced by numerical convection models with known magnitudes of density anomaly (as the class labels of the samples). The volume fractions of the stagnated slabs and plumes, which can be considered as measures for the degree of stagnation, are assigned as sample features. The machine learning models can determine the magnitude of the spin transition-induced density anomalies that can cause flow stagnation at mid-mantle depths. 
Employing support vector machine (SVM) algorithms we show that SML techniques can successfully predict the magnitude of the mantle density anomalies and can also be used in characterizing mantle flow patterns. The technique can be extended to more complex problems in mantle dynamics by employing deep learning algorithms for estimation of mantle properties such as viscosity, elastic parameters, and thermal and chemical anomalies.

  6. Machine Learning in Radiation Oncology: Opportunities, Requirements, and Needs

    PubMed Central

    Feng, Mary; Valdes, Gilmer; Dixit, Nayha; Solberg, Timothy D.

    2018-01-01

    Machine learning (ML) has the potential to revolutionize the field of radiation oncology, but there is much work to be done. In this article, we approach the radiotherapy process from a workflow perspective, identifying specific areas where a data-centric approach using ML could improve the quality and efficiency of patient care. We highlight areas where ML has already been used, and identify areas where we should invest additional resources. We believe that this article can serve as a guide for both clinicians and researchers to start discussing issues that must be addressed in a timely manner. PMID:29719815

  7. On-the-Fly Machine Learning of Atomic Potential in Density Functional Theory Structure Optimization

    NASA Astrophysics Data System (ADS)

    Jacobsen, T. L.; Jørgensen, M. S.; Hammer, B.

    2018-01-01

    Machine learning (ML) is used to derive local stability information for density functional theory calculations of systems in relation to the recently discovered SnO2 (110)-(4×1) reconstruction. The ML model is trained on (structure, total energy) relations collected during global minimum energy search runs with an evolutionary algorithm (EA). While being built, the ML model is used to guide the EA, thereby speeding up the rate at which the EA succeeds. Inspection of the local atomic potentials emerging from the model further shows chemically intuitive patterns.

  8. Multilayer Extreme Learning Machine With Subnetwork Nodes for Representation Learning.

    PubMed

    Yang, Yimin; Wu, Q M Jonathan

    2016-11-01

    The extreme learning machine (ELM), which was originally proposed for "generalized" single-hidden layer feedforward neural networks, provides efficient unified learning solutions for the applications of clustering, regression, and classification. It presents competitive accuracy with superb efficiency in many applications. However, the ELM with subnetwork nodes architecture has not attracted much research attention. Recently, many methods have been proposed for supervised/unsupervised dimension reduction or representation learning, but these methods normally only work for one type of problem. This paper studies the general architecture of multilayer ELM (ML-ELM) with subnetwork nodes, showing that: 1) the proposed method provides a representation learning platform with unsupervised/supervised and compressed/sparse representation learning and 2) experimental results on 10 image datasets and 16 classification datasets show that the proposed ML-ELM with subnetwork nodes performs competitively with or much better than conventional feature learning methods.

  9. Parameterization of typhoon-induced ocean cooling using temperature equation and machine learning algorithms: an example of typhoon Soulik (2013)

    NASA Astrophysics Data System (ADS)

    Wei, Jun; Jiang, Guo-Qing; Liu, Xin

    2017-09-01

    This study proposed three algorithms that can potentially be used to provide sea surface temperature (SST) conditions for typhoon prediction models. Different from traditional data assimilation approaches, which provide prescribed initial/boundary conditions, our proposed algorithms aim to resolve a flow-dependent SST feedback between growing typhoons and oceans over the forecast period. Two of these algorithms are based on linear temperature equations (TE-based), and the other is based on an innovative technique involving machine learning (ML-based). The algorithms are then implemented into a Weather Research and Forecasting model for typhoon simulation to assess their effectiveness, and the results show significant improvement in simulated storm intensities when ocean cooling feedback is included. The TE-based algorithm I considers wind-induced ocean vertical mixing and upwelling processes only, and thus yields a synoptic and relatively smooth sea surface temperature cooling. The TE-based algorithm II incorporates not only typhoon winds but also ocean information, and thus resolves more cooling features. The ML-based algorithm is based on a neural network, consisting of multiple layers of input variables and neurons, and produces the best estimate of the cooling structure in terms of its amplitude and position. Sensitivity analysis indicated that the typhoon-induced ocean cooling is a nonlinear process involving interactions of multiple atmospheric and oceanic variables. Therefore, with an appropriate selection of input variables and neuron sizes, the ML-based algorithm appears to be more effective in estimating the typhoon-induced ocean cooling and in predicting typhoon intensity than algorithms based on linear regression methods.
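    The ML-based algorithm above is a neural network mapping atmospheric and oceanic input variables to a cooling estimate. As a heavily simplified sketch, the following trains a single hidden layer by gradient descent on a made-up linear relation between wind speed, mixed-layer depth, and cooling; none of the variables, sizes, or numbers come from the study.

```python
import math
import random

# Toy stand-in for the ML-based cooling parameterization: a one-hidden-layer
# network fit to a hypothetical relation between wind speed (w), mixed-layer
# depth (d), and SST cooling. All values are illustrative assumptions.
rng = random.Random(42)
data = [((w, d), 0.3 * w - 0.1 * d) for w in range(1, 6) for d in range(1, 6)]

H = 4                                     # hidden neurons
w1 = [[rng.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [rng.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

loss_before = mse()
lr = 0.01
for _ in range(500):                      # stochastic gradient descent
    for x, y in data:
        h, out = forward(x)
        err = out - y                     # d(loss)/d(out), up to a factor of 2
        for j in range(H):
            grad_h = err * w2[j] * (1 - h[j] ** 2)   # backprop through tanh
            w2[j] -= lr * err * h[j]
            b1[j] -= lr * grad_h
            w1[j][0] -= lr * grad_h * x[0]
            w1[j][1] -= lr * grad_h * x[1]
        b2 -= lr * err
loss_after = mse()
```

    The real parameterization would use many more input variables and neurons, as the abstract notes, and would be trained on coupled model output rather than a synthetic function.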

  10. Performance comparison of machine learning algorithms and number of independent components used in fMRI decoding of belief vs. disbelief.

    PubMed

    Douglas, P K; Harris, Sam; Yuille, Alan; Cohen, Mark S

    2011-05-15

    Machine learning (ML) has become a popular tool for mining functional neuroimaging data, and there are now hopes of performing such analyses efficiently in real-time. Towards this goal, we compared the accuracy of six different ML algorithms applied to neuroimaging data from persons engaged in a bivariate task, asserting their belief or disbelief of a variety of propositional statements. We performed unsupervised dimension reduction and automated feature extraction using independent component (IC) analysis and extracted IC time courses. Optimization of classification hyperparameters across each classifier occurred prior to assessment. Maximum accuracy was achieved at 92% for Random Forest, followed by 91% for AdaBoost, 89% for Naïve Bayes, 87% for a J48 decision tree, 86% for K*, and 84% for support vector machine. For real-time decoding applications, finding a parsimonious subset of diagnostic ICs might be useful. We used a forward search technique to sequentially add ranked ICs to the feature subspace. For the current data set, we determined that approximately six ICs represented a meaningful basis set for classification. We then projected these six IC spatial maps forward onto a later scanning session within subject. We then applied the optimized ML algorithms to these new data instances, and found that classification accuracy results were reproducible. Additionally, we compared our classification method to our previously published general linear model results on this same data set. The highest ranked IC spatial maps show similarity to brain regions associated with the contrasts for belief > disbelief and disbelief > belief. Copyright © 2010 Elsevier Inc. All rights reserved.
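    The forward search described in this record adds one ranked component at a time to the feature subspace. A generic greedy sketch of that procedure is below; the score function and component names are hypothetical stand-ins for the classifier accuracy the authors would actually measure.

```python
def forward_search(candidates, score_fn, k):
    # Greedily add the candidate that most improves the score of the
    # currently selected subset, until k candidates are chosen.
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda c: score_fn(selected + [c]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical per-component accuracy contributions; a real score_fn would
# retrain and cross-validate the classifier on the chosen subset.
contributions = {"IC1": 0.40, "IC2": 0.30, "IC3": 0.10, "IC4": 0.05}
score = lambda subset: sum(contributions[c] for c in subset)
basis = forward_search(contributions, score, 2)
```

    With an additive toy score the search simply picks the top-k components; with a real classifier the score of a subset is not additive, which is why the greedy re-evaluation at every step matters.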

  11. Enhancement of plant metabolite fingerprinting by machine learning.

    PubMed

    Scott, Ian M; Vermeer, Cornelia P; Liakata, Maria; Corol, Delia I; Ward, Jane L; Lin, Wanchang; Johnson, Helen E; Whitehead, Lynne; Kular, Baldeep; Baker, John M; Walsh, Sean; Dave, Anuja; Larson, Tony R; Graham, Ian A; Wang, Trevor L; King, Ross D; Draper, John; Beale, Michael H

    2010-08-01

    Metabolite fingerprinting of Arabidopsis (Arabidopsis thaliana) mutants with known or predicted metabolic lesions was performed by ¹H-nuclear magnetic resonance, Fourier transform infrared, and flow injection electrospray-mass spectrometry. Fingerprinting enabled processing of five times more plants than conventional chromatographic profiling and was competitive for discriminating mutants, other than those affected in only low-abundance metabolites. Despite their rapidity and complexity, fingerprints yielded metabolomic insights (e.g. that effects of single lesions were usually not confined to individual pathways). Among fingerprint techniques, ¹H-nuclear magnetic resonance discriminated the most mutant phenotypes from the wild type and Fourier transform infrared discriminated the fewest. To maximize information from fingerprints, data analysis was crucial. One-third of distinctive phenotypes might have been overlooked had data models been confined to principal component analysis score plots. Among several methods tested, machine learning (ML) algorithms, namely support vector machine or random forest (RF) classifiers, were unsurpassed for phenotype discrimination. Support vector machines were often the best performing classifiers, but RFs yielded some particularly informative measures. First, RFs estimated margins between mutant phenotypes, whose relations could then be visualized by Sammon mapping or hierarchical clustering. Second, RFs provided importance scores for the features within fingerprints that discriminated mutants. These scores correlated with analysis of variance F values (as did Kruskal-Wallis tests, true- and false-positive measures, mutual information, and the Relief feature selection algorithm). ML classifiers, as models trained on one data set to predict another, were ideal for focused metabolomic queries, such as the distinctiveness and consistency of mutant phenotypes. 
Accessible software for use of ML in plant physiology is highlighted.
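    The RF importance scores in this record are reported to correlate with analysis-of-variance F values, which can be computed directly for each fingerprint feature. A minimal one-way ANOVA sketch follows; the two "phenotype" groups in the usage lines are illustrative numbers, not data from the study.

```python
def anova_f(groups):
    # One-way ANOVA F statistic for a single feature: between-group mean
    # square over within-group mean square. A large F means the feature
    # separates the classes well relative to its within-class spread.
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# A feature that separates two phenotypes vs. one that does not:
f_separated = anova_f([[1.0, 1.1, 0.9], [5.0, 5.1, 4.9]])
f_overlapping = anova_f([[1.0, 5.0, 3.0], [2.0, 4.0, 3.0]])
```

    Ranking features by F value is one of the univariate baselines the abstract says agreed with the multivariate RF importance scores.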

  12. Out-of-Sample Extrapolation utilizing Semi-Supervised Manifold Learning (OSE-SSL): Content Based Image Retrieval for Histopathology Images

    PubMed Central

    Sparks, Rachel; Madabhushi, Anant

    2016-01-01

    Content-based image retrieval (CBIR) retrieves the database images most similar to the query image by (1) extracting quantitative image descriptors and (2) calculating similarity between database and query image descriptors. Recently, manifold learning (ML) has been used to perform CBIR in a low dimensional representation of the high dimensional image descriptor space to avoid the curse of dimensionality. ML schemes are computationally expensive, requiring an eigenvalue decomposition (EVD) for every new query image to learn its low dimensional representation. We present out-of-sample extrapolation utilizing semi-supervised ML (OSE-SSL) to learn the low dimensional representation without recomputing the EVD for each query image. OSE-SSL incorporates semantic information, partial class label, into a ML scheme such that the low dimensional representation co-localizes semantically similar images. In the context of prostate histopathology, gland morphology is an integral component of the Gleason score, which enables discrimination between grades of prostate cancer aggressiveness. Images are represented by shape features extracted from the prostate gland. CBIR with OSE-SSL for prostate histology obtained from 58 patient studies yielded an area under the precision-recall curve (AUPRC) of 0.53 ± 0.03; by comparison, CBIR using principal component analysis (PCA) to learn the low dimensional space yielded an AUPRC of 0.44 ± 0.01. PMID:27264985

  13. Memory Boost from Spaced-Out Learning.

    ERIC Educational Resources Information Center

    Bower, B.

    1987-01-01

    Discusses what learning conditions promote memory stamina. Reviews study findings which suggested that spacing of practice during rote learning of a foreign language vocabulary can produce lasting memories. (ML)

  14. 3D Visualization of Machine Learning Algorithms with Astronomical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2016-01-01

    We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Utilizing Python code to build trees based on galaxy catalogs, we can render the results with the visualization suite Blender to produce interactive 360 degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods relating to optimization and efficiency.
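    The MST-based clustering described above has a compact form: build a minimum spanning tree over the catalog points, then cut the longest edges so the remaining connected components form clusters. The sketch below uses small 2D toy points for brevity (the study works with 3D galaxy catalogs and renders the trees in Blender).

```python
import math

def mst_edges(points):
    # Prim's algorithm on the complete Euclidean graph over the points.
    n = len(points)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        u, v = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: math.dist(points[e[0]], points[e[1]]))
        edges.append((u, v, math.dist(points[u], points[v])))
        in_tree.add(v)
    return edges

def mst_clusters(points, n_clusters):
    # Cut the (n_clusters - 1) longest MST edges; connected components of
    # the remaining edges are the clusters (tracked with union-find).
    edges = sorted(mst_edges(points), key=lambda e: e[2])
    parent = list(range(len(points)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v, _ in edges[:len(points) - n_clusters]:
        parent[find(u)] = find(v)
    return [find(i) for i in range(len(points))]

# Two well-separated toy groups fall into two clusters:
labels = mst_clusters([(0, 0), (0, 1), (10, 10), (10, 11)], 2)
```

    The same edge-cutting idea extends directly to 3D by using 3-tuples as points; the O(n²)-per-step Prim loop here is for clarity, not scale.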

  15. Microstructures and Mechanical Properties of Co-Cr Dental Alloys Fabricated by Three CAD/CAM-Based Processing Techniques

    PubMed Central

    Kim, Hae Ri; Jang, Seong-Ho; Kim, Young Kyung; Son, Jun Sik; Min, Bong Ki; Kim, Kyo-Han; Kwon, Tae-Yub

    2016-01-01

    The microstructures and mechanical properties of cobalt-chromium (Co-Cr) alloys produced by three CAD/CAM-based processing techniques were investigated in comparison with those produced by the traditional casting technique. Four groups of disc-shaped (microstructures) or dumbbell-shaped (mechanical properties) specimens made of Co-Cr alloys were prepared using casting (CS), milling (ML), selective laser melting (SLM), and milling/post-sintering (ML/PS). For each technique, the corresponding commercial alloy material was used. The microstructures of the specimens were evaluated via X-ray diffractometry, optical and scanning electron microscopy with energy-dispersive X-ray spectroscopy, and electron backscattered diffraction pattern analysis. The mechanical properties were evaluated using a tensile test according to ISO 22674 (n = 6). The microstructure of the alloys was strongly influenced by the manufacturing process. Overall, the SLM group showed superior mechanical properties, with the ML/PS group being nearly comparable. The mechanical properties of the ML group were inferior to those of the CS group. The microstructures and mechanical properties of the Co-Cr alloys were greatly dependent on the manufacturing technique as well as the chemical composition. The SLM and ML/PS techniques may be considered promising alternatives to the Co-Cr alloy casting process. PMID:28773718

  16. Improving permafrost distribution modelling using feature selection algorithms

    NASA Astrophysics Data System (ADS)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Applying ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the set of factors required and improves understanding of the adopted features and their relation to the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training permafrost data. The FS algorithms identified variables that appeared less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. CFS, in contrast, evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. 
    Finally, RF is a ML algorithm that performs FS as part of its overall operation. It operates by constructing a large collection of decorrelated classification trees, and then predicts the permafrost occurrence through a majority vote. With the so-called out-of-bag (OOB) error estimate, the classification of permafrost data can be validated and the contribution of each predictor assessed. The performance of the compared permafrost distribution models (computed on independent testing sets) increased when FS algorithms were applied to the original dataset and irrelevant or redundant variables were removed. As a consequence, the process provided faster and more cost-effective predictors and a better understanding of the underlying structures residing in the permafrost data. Our work demonstrates the usefulness of a feature selection step prior to applying a machine learning algorithm. In fact, permafrost predictors could be ranked not only on the basis of their heuristic and subjective importance (expert knowledge), but also on the basis of their statistical relevance to the permafrost distribution.
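    Of the FS methods compared in this record, Information Gain is the simplest to state: it is the reduction in label entropy achieved by conditioning on a feature. A sketch for discrete features follows; the feature values in the usage lines are illustrative, not the permafrost data.

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy (in bits) of a label sequence.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    # IG = H(labels) - sum over feature values v of p(v) * H(labels | v).
    n = len(labels)
    conditional = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        conditional += len(subset) / n * entropy(subset)
    return entropy(labels) - conditional

# A feature that perfectly predicts presence/absence vs. an irrelevant one:
ig_perfect = information_gain(["hi", "hi", "lo", "lo"], [1, 1, 0, 0])
ig_irrelevant = information_gain(["a", "b", "a", "b"], [1, 1, 0, 0])
```

    Continuous DEM-derived variables would first need discretization (or an entropy-based split search) before this filter applies; that detail is omitted here.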

  17. Virtual reality simulator for training on photoselective vaporization of the prostate with 980 nm diode laser and learning curve of the technique.

    PubMed

    Angulo, J C; Arance, I; García-Tello, A; Las Heras, M M; Andrés, G; Gimbernat, H; Lista, F; Ramón de Fata, F

    2014-09-01

    The utility of a virtual reality simulator for training in photoselective vaporization of the prostate with diode laser was studied. Two experiments were performed with a simulator (VirtaMed AG, Zürich, Switzerland) with software for specific training in prostate vaporization in contact mode with a Twister fiber (Biolitec AG, Jena, Germany). Eighteen surgeons performed ablation of the prostate (55 cc) twice, and the scores obtained (190 points efficacy and 80 safety) in the second procedure were compared by experience group (medical students, residents, specialists). They also performed a spatial orientation test with scores of 0 to 6. Afterward, six of these surgeons repeated 15 ablations of the prostate (55 and 70 ml). Improvement of the parameters obtained was evaluated to define the learning curve and how experience, spatial orientation skills and the type of sequence performed affect it. The global efficacy and safety score differed according to the grade of experience (P=.005). When compared by pairs, specialist-student differences were detected (P=.004), but not specialist-resident (P=.12) or resident-student (P=.2). Regarding efficacy of the procedure, specialist-student (P=.0026) and resident-student (P=.08) differences were detected. The partial indicators that differed in terms of efficacy were rate of ablation (P=.01), procedure time (P=.03) and amount of unexposed capsule (P=.03). Differences were not observed between groups in safety (P=.5). Regarding the learning curve, the median percentage of the total score exceeded 90% after performing 4 procedures for prostates of 55 ml and 10 procedures for prostate glands of 70 ml. This curve was not modified by previous experience (resident-specialist; P=.6). However, it was modified according to the repetition sequence (progressive-random; P=.007). Surgeons whose spatial orientation score was less than the median of the group (value 2.5) did not surpass 90% of the score in spite of repetition of the procedure. 
    Simulation of ablation of the prostate with contact diode laser is a good learning model with discriminative validity, as it correlates the metric results with levels of experience and skills. The sequential repetition of the procedure at growing levels of difficulty favors learning. Copyright © 2014 AEU. Published by Elsevier Espana. All rights reserved.

  18. Cognitive domains in the dog: independence of working memory from object learning, selective attention, and motor learning.

    PubMed

    Zanghi, Brian M; Araujo, Joseph; Milgram, Norton W

    2015-05-01

    Cognition in dogs, like in humans, is not a unitary process. Some functions, such as simple discrimination learning, are relatively insensitive to age; others, such as visuospatial learning, can provide behavioral biomarkers of age. The present experiment sought to further establish the relationship between various cognitive domains, namely visuospatial memory, object discrimination learning (ODL), and selective attention (SA). In addition, we also set up a task to assess motor learning (ML). Thirty-six beagles (9-16 years) performed a variable delay non-matching to position (vDNMP) task using two objects with 20- and 90-s delays and were divided into three groups based on a combined score (HMP = 88-93% accuracy [N = 12]; MMP = 79-86% accuracy [N = 12]; LMP = 61-78% accuracy [N = 12]). A variable object oddity task was used to measure ODL (correct or incorrect object) and SA (0-3 incorrect distractor objects with the same [SA-same] or different [SA-diff] correct object as ODL). ML involved reaching various distances (0-15 cm). Age did not differ between memory groups (mean 11.6 years). ODL (ANOVA P = 0.43) and SA-same and SA-different (ANOVA P = 0.96) performance did not differ between the three vDNMP groups, although mean errors during ODL were numerically higher for LMP dogs. Errors increased (P < 0.001) for all dogs with increasing number of distractor objects during both SA tasks. vDNMP groups remained different (ANOVA P < 0.001) when re-tested with the vDNMP task 42 days later. Maximum ML distance did not differ between vDNMP groups (ANOVA P = 0.96). Impaired short-term memory performance in aged dogs does not appear to predict performance in cognitive domains associated with object learning, SA, or maximum ML distance.

  19. Effect of normalization methods on the performance of supervised learning algorithms applied to HTSeq-FPKM-UQ data sets: 7SK RNA expression as a predictor of survival in patients with colon adenocarcinoma.

    PubMed

    Shahriyari, Leili

    2017-11-03

One of the main challenges in machine learning (ML) is choosing an appropriate normalization method. Here, we examine the effect of various normalization methods on analyzing FPKM upper quartile (FPKM-UQ) RNA sequencing data sets. We collect the HTSeq-FPKM-UQ files of patients with colon adenocarcinoma from the TCGA-COAD project. We compare the three most common normalization methods: scaling, standardizing using z-scores, and vector normalization, by visualizing the normalized data set and evaluating the performance of 12 supervised learning algorithms on the normalized data set. Additionally, for each of these normalization methods, we use two different normalization strategies: normalizing samples (files) or normalizing features (genes). Regardless of normalization method, a support vector machine (SVM) model with the radial basis function kernel had the maximum accuracy (78%) in predicting the vital status of the patients. However, the fitting time of the SVM depended on the normalization method, and it reached its minimum fitting time when files were normalized to unit length. Furthermore, among all 12 learning algorithms and 6 different normalization techniques, the Bernoulli naive Bayes model after standardizing files had the best performance in terms of maximizing accuracy as well as minimizing fitting time. We also investigated the effect of dimensionality reduction methods on the performance of the supervised ML algorithms. Reducing the dimension of the data set did not increase the maximum accuracy of 78%. However, it led to the discovery of 7SK RNA gene expression as a predictor of survival in patients with colon adenocarcinoma, with an accuracy of 78%. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
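The three normalization methods and the two strategies (per-file vs. per-gene) compared in this abstract can be sketched in a few lines of NumPy; the matrix below is random stand-in data, not the TCGA-COAD files.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.gamma(shape=2.0, scale=50.0, size=(20, 100))  # stand-in expression matrix: 20 files x 100 genes

def scale(M, axis):
    """Min-max scaling to [0, 1] along the given axis."""
    lo, hi = M.min(axis=axis, keepdims=True), M.max(axis=axis, keepdims=True)
    return (M - lo) / (hi - lo)

def standardize(M, axis):
    """Z-score: zero mean, unit variance along the given axis."""
    return (M - M.mean(axis=axis, keepdims=True)) / M.std(axis=axis, keepdims=True)

def unit_length(M, axis):
    """Vector normalization: scale each file (or gene) to unit Euclidean norm."""
    return M / np.linalg.norm(M, axis=axis, keepdims=True)

# Strategy 1: normalize samples (files) -> axis=1; strategy 2: normalize features (genes) -> axis=0.
files_scaled = scale(X, axis=1)
files_unit = unit_length(X, axis=1)
genes_z = standardize(X, axis=0)

print(np.linalg.norm(files_unit, axis=1)[:3])  # every file now has norm 1
print(np.round(genes_z.mean(axis=0)[:3], 12))  # every gene now has mean ~0
```

Each of the six combinations (3 methods x 2 strategies) would then be fed to the same set of classifiers to compare accuracy and fitting time.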

  20. PredicT-ML: a tool for automating machine learning model building with big clinical data.

    PubMed

    Luo, Gang

    2016-01-01

    Predictive modeling is fundamental to transforming large clinical data sets, or "big clinical data," into actionable knowledge for various healthcare applications. Machine learning is a major predictive modeling approach, but two barriers make its use in healthcare challenging. First, a machine learning tool user must choose an algorithm and assign one or more model parameters called hyper-parameters before model training. The algorithm and hyper-parameter values used typically impact model accuracy by over 40 %, but their selection requires many labor-intensive manual iterations that can be difficult even for computer scientists. Second, many clinical attributes are repeatedly recorded over time, requiring temporal aggregation before predictive modeling can be performed. Many labor-intensive manual iterations are required to identify a good pair of aggregation period and operator for each clinical attribute. Both barriers result in time and human resource bottlenecks, and preclude healthcare administrators and researchers from asking a series of what-if questions when probing opportunities to use predictive models to improve outcomes and reduce costs. This paper describes our design of and vision for PredicT-ML (prediction tool using machine learning), a software system that aims to overcome these barriers and automate machine learning model building with big clinical data. The paper presents the detailed design of PredicT-ML. PredicT-ML will open the use of big clinical data to thousands of healthcare administrators and researchers and increase the ability to advance clinical research and improve healthcare.
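The algorithm-plus-hyper-parameter search that PredicT-ML aims to automate can be illustrated with a small scikit-learn sketch; the data, candidate models, and grids below are illustrative assumptions, not PredicT-ML's actual design.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Stand-in "clinical" data; in practice these would be temporally aggregated attributes.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Search jointly over the algorithm and its hyper-parameters -- the two
# labor-intensive manual choices the paper identifies as barriers.
candidates = [
    (LogisticRegression(max_iter=1000), {"C": [0.01, 0.1, 1, 10]}),
    (SVC(), {"C": [0.1, 1, 10], "kernel": ["rbf", "linear"]}),
]

best = max(
    (GridSearchCV(est, grid, cv=3).fit(X, y) for est, grid in candidates),
    key=lambda search: search.best_score_,
)
print(type(best.best_estimator_).__name__, best.best_params_, round(best.best_score_, 3))
```

A full automation layer would also search over the temporal aggregation period and operator for each attribute before this model-selection step.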

  1. Transarterial Coil-Augmented Onyx Embolization for Brain Arteriovenous Malformation

    PubMed Central

    Gao, Xu; Liang, Guobiao; Li, Zhiqing; Wang, Xiaogang; Yu, Chunyong; Cao, Peng; Chen, Jun; Li, Jingyuan

    2014-01-01

Summary Onyx has been widely adopted for the treatment of arteriovenous malformations (AVMs). However, controlling it demands that operators work through a considerable learning curve. We describe our initial experience using a novel injection method for the embolization of AVMs. We retrospectively reviewed the data of all 22 patients with brain AVMs (12 men, 10 women; age range, 12-68 years; mean age, 43.2 years) treated by the transarterial coil-augmented Onyx injection technique. The size of the AVMs ranged from 25 mm to 70 mm (average 35.6 mm). The technical feasibility of the procedure, procedure-related complications, angiographic results, and clinical outcome were evaluated. In every case, endovascular treatment (EVT) was completed. A total of 31 sessions were performed, with a mean injection volume of 6.1 mL (range, 1.5-16.0 mL). An average estimated size reduction of 96.7% (range 85%-100%) was achieved, and 18 AVMs could be completely excluded by EVT alone. The results remained stable on follow-up angiograms. A procedural complication occurred in one patient, with permanent mild neurologic deficit. Our preliminary series demonstrated that the coil-augmented Onyx injection technique is a valuable adjunct that achieves excellent nidal penetration and improves the safety of the procedure. PMID:24556304

2. Enhancement of Plant Metabolite Fingerprinting by Machine Learning

    PubMed Central

    Scott, Ian M.; Vermeer, Cornelia P.; Liakata, Maria; Corol, Delia I.; Ward, Jane L.; Lin, Wanchang; Johnson, Helen E.; Whitehead, Lynne; Kular, Baldeep; Baker, John M.; Walsh, Sean; Dave, Anuja; Larson, Tony R.; Graham, Ian A.; Wang, Trevor L.; King, Ross D.; Draper, John; Beale, Michael H.

    2010-01-01

    Metabolite fingerprinting of Arabidopsis (Arabidopsis thaliana) mutants with known or predicted metabolic lesions was performed by 1H-nuclear magnetic resonance, Fourier transform infrared, and flow injection electrospray-mass spectrometry. Fingerprinting enabled processing of five times more plants than conventional chromatographic profiling and was competitive for discriminating mutants, other than those affected in only low-abundance metabolites. Despite their rapidity and complexity, fingerprints yielded metabolomic insights (e.g. that effects of single lesions were usually not confined to individual pathways). Among fingerprint techniques, 1H-nuclear magnetic resonance discriminated the most mutant phenotypes from the wild type and Fourier transform infrared discriminated the fewest. To maximize information from fingerprints, data analysis was crucial. One-third of distinctive phenotypes might have been overlooked had data models been confined to principal component analysis score plots. Among several methods tested, machine learning (ML) algorithms, namely support vector machine or random forest (RF) classifiers, were unsurpassed for phenotype discrimination. Support vector machines were often the best performing classifiers, but RFs yielded some particularly informative measures. First, RFs estimated margins between mutant phenotypes, whose relations could then be visualized by Sammon mapping or hierarchical clustering. Second, RFs provided importance scores for the features within fingerprints that discriminated mutants. These scores correlated with analysis of variance F values (as did Kruskal-Wallis tests, true- and false-positive measures, mutual information, and the Relief feature selection algorithm). ML classifiers, as models trained on one data set to predict another, were ideal for focused metabolomic queries, such as the distinctiveness and consistency of mutant phenotypes. Accessible software for use of ML in plant physiology is highlighted. 
PMID:20566707
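The random forest importance scores described above, which flag the fingerprint features that discriminate phenotypes, can be reproduced in miniature; the synthetic "fingerprints" below (two phenotypes differing only in the first five bins) are an assumption for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Stand-in fingerprints: 60 "plants", 50 spectral bins; the two phenotypes
# differ only in bins 0-4 (the analogue of discriminatory metabolite signals).
X = rng.normal(size=(60, 50))
y = np.repeat([0, 1], 30)
X[y == 1, :5] += 2.0

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# RF importance scores rank the features that separate the phenotypes,
# mirroring the paper's use of these scores alongside ANOVA F values.
top = np.argsort(rf.feature_importances_)[::-1][:5]
print(sorted(top.tolist()))
```

The same fitted forest also yields per-class margins, which the paper visualizes with Sammon mapping or hierarchical clustering.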

  3. Synergies Between Quantum Mechanics and Machine Learning in Reaction Prediction.

    PubMed

    Sadowski, Peter; Fooshee, David; Subrahmanya, Niranjan; Baldi, Pierre

    2016-11-28

    Machine learning (ML) and quantum mechanical (QM) methods can be used in two-way synergy to build chemical reaction expert systems. The proposed ML approach identifies electron sources and sinks among reactants and then ranks all source-sink pairs. This addresses a bottleneck of QM calculations by providing a prioritized list of mechanistic reaction steps. QM modeling can then be used to compute the transition states and activation energies of the top-ranked reactions, providing additional or improved examples of ranked source-sink pairs. Retraining the ML model closes the loop, producing more accurate predictions from a larger training set. The approach is demonstrated in detail using a small set of organic radical reactions.

  4. A deep learning approach to estimate chemically-treated collagenous tissue nonlinear anisotropic stress-strain responses from microscopy images.

    PubMed

    Liang, Liang; Liu, Minliang; Sun, Wei

    2017-11-01

Biological collagenous tissues composed of networks of collagen fibers are suitable for a broad spectrum of medical applications owing to their attractive mechanical properties. In this study, we developed a noninvasive approach to estimate collagenous tissue elastic properties directly from microscopy images using Machine Learning (ML) techniques. Glutaraldehyde-treated bovine pericardium (GLBP) tissue, widely used in the fabrication of bioprosthetic heart valves and vascular patches, was chosen to develop a representative application. A Deep Learning model was designed and trained to process second harmonic generation (SHG) images of collagen networks in GLBP tissue samples, and directly predict the tissue elastic mechanical properties. The trained model is capable of identifying the overall tissue stiffness with a classification accuracy of 84%, and predicting the nonlinear anisotropic stress-strain curves with average regression errors of 0.021 and 0.031. Thus, this study demonstrates the feasibility and great potential of using the Deep Learning approach for fast and noninvasive assessment of collagenous tissue elastic properties from microstructural images. In this study, we developed, to the best of our knowledge, the first Deep Learning-based approach to estimate the elastic properties of collagenous tissues directly from noninvasive second harmonic generation images. The success of this study holds promise for the use of Machine Learning techniques to noninvasively and efficiently estimate the mechanical properties of many structure-based biological materials, and it also enables many potential applications such as serving as a quality control tool to select tissue for the manufacturing of medical devices (e.g. bioprosthetic heart valves). Copyright © 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  5. Natural and Artificial Intelligence in Neurosurgery: A Systematic Review.

    PubMed

    Senders, Joeky T; Arnaout, Omar; Karhade, Aditya V; Dasenbrock, Hormuzdiyar H; Gormley, William B; Broekman, Marike L; Smith, Timothy R

    2017-09-07

Machine learning (ML) is a domain of artificial intelligence that allows computer algorithms to learn from experience without being explicitly programmed. We summarize neurosurgical applications of ML in which it has been compared to clinical expertise, here referred to as "natural intelligence." A systematic search was performed in the PubMed and Embase databases as of August 2016 to review all studies comparing the performance of various ML approaches with that of clinical experts in the neurosurgical literature. Twenty-three studies were identified that used ML algorithms for diagnosis, presurgical planning, or outcome prediction in neurosurgical patients. Compared to clinical experts, ML models demonstrated a median absolute improvement in accuracy and area under the receiver operating curve of 13% (interquartile range 4-21%) and 0.14 (interquartile range 0.07-0.21), respectively. In 29 (58%) of the 50 outcome measures for which a P-value was provided or calculated, ML models outperformed clinical experts (P < .05). In 18 of 50 (36%), no difference was seen between ML and expert performance (P > .05), while in 3 of 50 (6%) clinical experts outperformed ML models (P < .05). All 4 studies that compared clinicians assisted by ML models vs clinicians alone demonstrated better performance in the first group. We conclude that ML models have the potential to augment the decision-making capacity of clinicians in neurosurgical applications; however, significant hurdles remain associated with creating, validating, and deploying ML models in the clinical setting. Shifting from a human-vs-machine to a human-and-machine paradigm could be essential to overcome these hurdles. Published by Oxford University Press on behalf of Congress of Neurological Surgeons 2017.

  6. Dielectrophoretic label-free immunoassay for rare-analyte quantification in biological samples

    NASA Astrophysics Data System (ADS)

    Velmanickam, Logeeshan; Laudenbach, Darrin; Nawarathna, Dharmakeerthi

    2016-10-01

The current gold standard for detecting or quantifying target analytes from blood samples is the ELISA (enzyme-linked immunosorbent assay). The detection limit of ELISA is about 250 pg/ml. However, quantifying analytes related to various stages of tumors, including early detection, requires detecting well below the current limit of the ELISA test. For example, Interleukin 6 (IL-6) levels of early oral cancer patients are <100 pg/ml, and the prostate specific antigen level in the early stage of prostate cancer is about 1 ng/ml. Further, it has been reported that there are significantly less than 1 pg/mL of analytes in the early stage of tumors. Therefore, depending on the tumor type and the stage of the tumor, it is necessary to quantify analyte levels ranging from ng/ml to pg/ml. To accommodate these critical needs in current diagnostics, there is a need for a technique that has a large dynamic range with an ability to detect extremely low levels of target analytes (

  7. Helium-3 MR q-space imaging with radial acquisition and iterative highly constrained back-projection.

    PubMed

    O'Halloran, Rafael L; Holmes, James H; Wu, Yu-Chien; Alexander, Andrew; Fain, Sean B

    2010-01-01

    An undersampled diffusion-weighted stack-of-stars acquisition is combined with iterative highly constrained back-projection to perform hyperpolarized helium-3 MR q-space imaging with combined regional correction of radiofrequency- and T1-related signal loss in a single breath-held scan. The technique is tested in computer simulations and phantom experiments and demonstrated in a healthy human volunteer with whole-lung coverage in a 13-sec breath-hold. Measures of lung microstructure at three different lung volumes are evaluated using inhaled gas volumes of 500 mL, 1000 mL, and 1500 mL to demonstrate feasibility. Phantom results demonstrate that the proposed technique is in agreement with theoretical values, as well as with a fully sampled two-dimensional Cartesian acquisition. Results from the volunteer study demonstrate that the root mean squared diffusion distance increased significantly from the 500-mL volume to the 1000-mL volume. This technique represents the first demonstration of a spatially resolved hyperpolarized helium-3 q-space imaging technique and shows promise for microstructural evaluation of lung disease in three dimensions. Copyright (c) 2009 Wiley-Liss, Inc.

  8. Biomedical visual data analysis to build an intelligent diagnostic decision support system in medical genetics.

    PubMed

    Kuru, Kaya; Niranjan, Mahesan; Tunca, Yusuf; Osvank, Erhan; Azim, Tayyaba

    2014-10-01

In general, medical geneticists aim to pre-diagnose underlying syndromes based on facial features before performing cytological or molecular analyses where a genotype-phenotype interrelation is possible. However, determining correct genotype-phenotype interrelationships among many syndromes is tedious and labor-intensive, especially for extremely rare syndromes. Thus, a computer-aided system for pre-diagnosis can facilitate effective and efficient decision support, particularly when few similar cases are available, or in remote rural districts where diagnostic knowledge of syndromes is not readily available. The proposed methodology, visual diagnostic decision support system (visual diagnostic DSS), employs machine learning (ML) algorithms and digital image processing techniques in a hybrid approach for automated diagnosis in medical genetics. This approach uses facial features in reference images of disorders to identify visual genotype-phenotype interrelationships. Our statistical method describes facial image data as principal component features and diagnoses syndromes using these features. The proposed system was trained using a real dataset of previously published face images of subjects with syndromes, which provided accurate diagnostic information. The method was tested using a leave-one-out cross-validation scheme with 15 different syndromes, each comprising 5-9 cases, i.e., 92 cases in total. An accuracy rate of 83% was achieved using this automated diagnosis technique, which was statistically significant (p<0.01). Furthermore, the sensitivity and specificity values were 0.857 and 0.870, respectively. Our results show that the accurate classification of syndromes is feasible using ML techniques.
Thus, a large number of syndromes with characteristic facial anomaly patterns could be diagnosed with diagnostic DSSs similar to the one described in the present study, i.e., the visual diagnostic DSS, thereby demonstrating the benefits of using hybrid image processing and ML-based computer-aided diagnostics for identifying facial phenotypes. Copyright © 2014. Published by Elsevier B.V.

  9. A comparison of minimum distance and maximum likelihood techniques for proportion estimation

    NASA Technical Reports Server (NTRS)

    Woodward, W. A.; Schucany, W. R.; Lindsey, H.; Gray, H. L.

    1982-01-01

    The estimation of mixing proportions P sub 1, P sub 2,...P sub m in the mixture density f(x) = the sum of the series P sub i F sub i(X) with i = 1 to M is often encountered in agricultural remote sensing problems in which case the p sub i's usually represent crop proportions. In these remote sensing applications, component densities f sub i(x) have typically been assumed to be normally distributed, and parameter estimation has been accomplished using maximum likelihood (ML) techniques. Minimum distance (MD) estimation is examined as an alternative to ML where, in this investigation, both procedures are based upon normal components. Results indicate that ML techniques are superior to MD when component distributions actually are normal, while MD estimation provides better estimates than ML under symmetric departures from normality. When component distributions are not symmetric, however, it is seen that neither of these normal based techniques provides satisfactory results.
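With normal component densities assumed known, the ML estimate of the mixing proportions p_i can be computed with the EM algorithm; this is a generic sketch on synthetic data (two components), not the estimators actually benchmarked in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def npdf(x, mu, sigma):
    """Normal density, the known component form f_i(x) assumed in the study."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Synthetic "pixels" from two known crops: N(0,1) with proportion 0.7, N(3,1) with 0.3.
x = np.concatenate([rng.normal(0, 1, 700), rng.normal(3, 1, 300)])
dens = np.stack([npdf(x, 0, 1), npdf(x, 3, 1)])  # fixed, since components are known

p = np.array([0.5, 0.5])  # initial guess for the mixing proportions
for _ in range(100):
    w = p[:, None] * dens
    w /= w.sum(axis=0)     # E-step: posterior probability each pixel came from crop i
    p = w.mean(axis=1)     # M-step: ML update of the mixing proportions
print(np.round(p, 2))
```

A minimum distance estimator would instead choose the p_i minimizing a distance between the empirical distribution and the mixture CDF, which is what gives it robustness to non-normal components.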

  10. Case-Based Reasoning in Mixed Paradigm Settings and with Learning

    DTIC Science & Technology

    1994-04-30

    Learning Prototypical Cases OFF-BROADWAY, MCI and RMHC -* are three CBR-ML systems that learn case prototypes. We feel that methods that enable the...at Irvine Machine Learning Repository, including heart disease and breast cancer databases. OFF-BROADWAY, MCI and RMHC -* made the following notable

  11. Machine Learning Approaches for Predicting Radiation Therapy Outcomes: A Clinician's Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, John; Schwartz, Russell; Flickinger, John

Radiation oncology has always been deeply rooted in modeling, from the early days of isoeffect curves to the contemporary Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC) initiative. In recent years, medical modeling for both prognostic and therapeutic purposes has exploded thanks to the increasing availability of electronic data and genomics. One promising direction that medical modeling is moving toward is adopting the same machine learning methods used by companies such as Google and Facebook to combat disease. Broadly defined, machine learning is a branch of computer science that deals with making predictions from complex data through statistical models. These methods serve to uncover patterns in data and are actively used in areas such as speech recognition, handwriting recognition, face recognition, "spam" filtering (junk email), and targeted advertising. Although multiple radiation oncology research groups have shown the value of applied machine learning (ML), clinical adoption has been slow due to the high barrier to understanding these complex models by clinicians. Here, we present a review of the use of ML to predict radiation therapy outcomes from the clinician's point of view with the hope that it lowers the "barrier to entry" for those without formal training in ML. We begin by describing 7 principles that one should consider when evaluating (or creating) an ML model in radiation oncology. We next introduce 3 popular ML methods—logistic regression (LR), support vector machine (SVM), and artificial neural network (ANN)—and critique 3 seminal papers in the context of these principles. Although current studies are in exploratory stages, the overall methodology has progressively matured, and the field is ready for larger-scale investigation.

  12. A Hybrid Semi-supervised Classification Scheme for Mining Multisource Geospatial Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vatsavai, Raju; Bhaduri, Budhendra L

    2011-01-01

Supervised learning methods such as Maximum Likelihood (ML) are often used in land cover (thematic) classification of remote sensing imagery. The ML classifier relies exclusively on the spectral characteristics of thematic classes, whose statistical distributions (class conditional probability densities) are often overlapping. The spectral response distributions of thematic classes depend on many factors, including elevation, soil types, and ecological zones. A second problem with statistical classifiers is the requirement of a large number of accurate training samples (10 to 30 times the number of dimensions), which are often costly and time consuming to acquire over large geographic regions. With the increasing availability of geospatial databases, it is possible to exploit the knowledge derived from these ancillary datasets to improve classification accuracies even when the class distributions are highly overlapping. Likewise, newer semi-supervised techniques can be adopted to improve the parameter estimates of the statistical model by utilizing a large number of easily available unlabeled training samples. Unfortunately, there is no convenient multivariate statistical model that can be employed for multisource geospatial databases. In this paper we present a hybrid semi-supervised learning algorithm that effectively exploits freely available unlabeled training samples from multispectral remote sensing images and also incorporates ancillary geospatial databases. We have conducted several experiments on real datasets, and our new hybrid approach shows a 25 to 35% improvement in overall classification accuracy over conventional classification schemes.
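A minimal self-training loop illustrates the semi-supervised idea of improving a supervised model with plentiful unlabeled samples; it is a generic sketch on synthetic data, not the paper's hybrid multisource algorithm.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
n_lab = 30  # only 30 costly labeled samples; the rest play the role of free unlabeled pixels
clf = LogisticRegression(max_iter=1000).fit(X[:n_lab], y[:n_lab])
base_acc = clf.score(X, y)

# Self-training: absorb unlabeled samples the current model is confident about,
# using its own predictions as pseudo-labels, then refit.
X_lab, y_lab = X[:n_lab].copy(), y[:n_lab].copy()
pool = np.arange(n_lab, len(y))
for _ in range(5):
    if pool.size == 0:
        break
    proba = clf.predict_proba(X[pool])
    conf = proba.max(axis=1) > 0.95
    if not conf.any():
        break
    X_lab = np.vstack([X_lab, X[pool[conf]]])
    y_lab = np.concatenate([y_lab, proba[conf].argmax(axis=1)])
    pool = pool[~conf]
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

print(round(base_acc, 3), round(clf.score(X, y), 3))
```

The paper's algorithm additionally folds ancillary geospatial layers into the model, which this sketch omits.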

  13. Impact of novel techniques on minimally invasive adrenal surgery: trends and outcomes from a contemporary international large series in urology.

    PubMed

    Pavan, Nicola; Autorino, Riccardo; Lee, Hak; Porpiglia, Francesco; Sun, Yinghao; Greco, Francesco; Jeff Chueh, S; Han, Deok Hyun; Cindolo, Luca; Ferro, Matteo; Chen, Xiang; Branco, Anibal; Fornara, Paolo; Liao, Chun-Hou; Miyajima, Akira; Kyriazis, Iason; Puglisi, Marco; Fiori, Cristian; Yang, Bo; Fei, Guo; Altieri, Vincenzo; Jeong, Byong Chang; Berardinelli, Francesco; Schips, Luigi; De Cobelli, Ottavio; Chen, Zhi; Haber, Georges-Pascal; He, Yao; Oya, Mototsugu; Liatsikos, Evangelos; Brandao, Luis; Challacombe, Benjamin; Kaouk, Jihad; Darweesh, Ithaar

    2016-10-01

To evaluate contemporary international trends in the implementation of minimally invasive adrenalectomy and to assess contemporary outcomes of the different minimally invasive techniques performed at urologic centers worldwide. Patients who underwent minimally invasive adrenalectomy from 2008 to 2013 at 14 urology institutions worldwide were included in this retrospective multinational, multicenter analysis. Cases were categorized based on the minimally invasive adrenalectomy technique: conventional laparoscopy (CL), robot-assisted laparoscopy (RAL), laparoendoscopic single-site surgery (LESS), and mini-laparoscopy (ML). The rates of the four treatment modalities were determined according to the year of surgery, and a regression analysis was performed for trends in all surgical modalities. Overall, a total of 737 adrenalectomies were performed across participating institutions and included in this analysis: 337 CL (46 % of cases), 57 ML (8 %), 263 LESS (36 %), and 80 RAL (11 %). Overall, 204 (28 %) operations were performed with a retroperitoneal approach. The overall number of adrenalectomies increased from 2008 to 2013 (p = 0.05). A transperitoneal approach was preferred in all but the ML group (p < 0.001). European centers mostly adopted CL and ML techniques, whereas those from Asia and South America reported the highest rate of LESS procedures, and RAL was adopted to a larger extent in the USA. LESS had the fastest increase in utilization at 6 %/year. The rate of RAL procedures increased at a slower rate (2.2 %/year), similar to ML (1.7 %/year). Limitations of this study are the retrospective design and the lack of a cost analysis. Several minimally invasive surgical techniques for the management of adrenal masses have been successfully implemented in urology institutions worldwide. CL and LESS seem to represent the most commonly adopted techniques, whereas ML and RAL are growing at a slower rate.
All of these minimally invasive techniques can be safely and effectively performed for a variety of adrenal diseases.

  14. Could machine learning improve the prediction of pelvic nodal status of prostate cancer patients? Preliminary results of a pilot study.

    PubMed

    De Bari, B; Vallati, M; Gatta, R; Simeone, C; Girelli, G; Ricardi, U; Meattini, I; Gabriele, P; Bellavita, R; Krengli, M; Cafaro, I; Cagna, E; Bunkheila, F; Borghesi, S; Signor, M; Di Marco, A; Bertoni, F; Stefanacci, M; Pasinetti, N; Buglione, M; Magrini, S M

    2015-07-01

We tested and compared the performance of the Roach formula, Partin tables, and three decision-tree-based machine learning (ML) algorithms in identifying N+ prostate cancer (PC). 1,555 cN0 and 50 cN+ PC cases were analyzed. Results were also verified on an independent population of 204 operated cN0 patients with a known pN status (187 pN0, 17 pN1 patients). ML performed better, also when tested on the surgical population, with accuracy, specificity, and sensitivity ranging between 48-86%, 35-91%, and 17-79%, respectively. ML potentially allows better prediction of the nodal status of PC, allowing a better tailoring of pelvic irradiation.
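A decision-tree classifier on a comparably imbalanced synthetic cohort shows how the accuracy, sensitivity, and specificity reported above are computed; the data and tree settings are illustrative assumptions, not the authors' models.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Stand-in for the cN0/cN+ cohort: a heavily imbalanced binary outcome (~3% N+).
X, y = make_classification(n_samples=1600, weights=[0.97], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" keeps the tree from simply ignoring the rare N+ class.
tree = DecisionTreeClassifier(max_depth=4, class_weight="balanced", random_state=0)
tree.fit(X_tr, y_tr)
pred = tree.predict(X_te)

tp = int(np.sum((pred == 1) & (y_te == 1))); fn = int(np.sum((pred == 0) & (y_te == 1)))
tn = int(np.sum((pred == 0) & (y_te == 0))); fp = int(np.sum((pred == 1) & (y_te == 0)))
sens, spec = tp / (tp + fn), tn / (tn + fp)
print(round(sens, 2), round(spec, 2))
```

The wide sensitivity ranges the abstract reports are typical of such imbalanced cohorts, where few positive cases dominate the sensitivity estimate.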

  15. Retroperitoneoscopic living donor nephrectomy: initial experience with a unique hand-assisted approach.

    PubMed

    Capolicchio, J-P; Feifer, A; Plante, M K; Tchervenkov, J

    2011-01-01

    The retroperitoneoscopic (RP) approach to live donor nephrectomy (LDN) may be advantageous for the donor because it avoids mobilization of peritoneal organs and provides direct access to the renal vessels. Notwithstanding, this approach is not popular, likely because of the steeper learning curve. We feel that hand-assistance (HA) can reduce the learning curve and in this study, we present our experience with a novel hand-assist approach to retroperitoneoscopic live donor nephrectomy (HARP-LDN). Over a one-yr period, 10 consecutive patients underwent left HARP-LDN with a mean body mass index of 29 and three with prior left abdomen surgery. The surgical technique utilizes a 7 cm, muscle-sparing incision for the hand-port with two endoscopic ports. Operative time was an average of 155 min., with no open conversions. Mean blood loss was 68 mL, and warm ischemia time was 2.5 min. Hospital stay averaged 2.7 d with postoperative complications limited to one urinary retention. Our modified HARP approach to left LDN is safe, effective and can be performed expeditiously. Our promising initial results require a larger patient cohort to confirm the advantages of the hand-assisted retroperitoneal technique. © 2010 John Wiley & Sons A/S.

16. THE PHYSIOLOGICAL SIGNIFICANCE OF THE CORTICOSTEROIDS IN PAROTID FLUID.

    DTIC Science & Technology

A highly sensitive and highly specific technique was devised, utilizing four chromatographic procedures, for the measurement of parotid fluid...cortisol and cortisone on 5 ml of parotid fluid, and plasma cortisol on 1 ml of plasma. In addition, techniques are described for measuring plasma...derivative technique is highly purified immediately before its use, blank values are too high for the low values found in parotid saliva. Blank values

  17. Machine Learning of Fault Friction

    NASA Astrophysics Data System (ADS)

    Johnson, P. A.; Rouet-Leduc, B.; Hulbert, C.; Marone, C.; Guyer, R. A.

    2017-12-01

We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to this acoustic data (the AE) in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus consisting of fault blocks surrounding fault gouge composed of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load), and shear velocity. The raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then, testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick slip and slow slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is to ultimately scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017), https://arxiv.org/abs/1702.05774. Rouet-Leduc, B. et al., Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning (2017), AGU Fall Meeting Session S025: Earthquake source: from the laboratory to the field. Hulbert, C., Characterizing slow slip applying machine learning (2017), AGU Fall Meeting Session S019: Slow slip, Tectonic Tremor, and the Brittle-to-Ductile Transition Zone: What mechanisms control the diversity of slow and fast earthquakes?
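The workflow described, windowed statistical features of the continuous AE signal feeding a decision-tree ensemble that predicts shear stress, can be sketched as follows; the synthetic signal, window size, and feature set are assumptions, not the authors' laboratory data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
# Synthetic stand-in: shear stress follows a slow loading cycle, and the AE
# amplitude grows with stress as the fault approaches failure.
t = np.linspace(0, 20 * np.pi, 4000)
stress = (np.sin(t) + 1) / 2
ae = rng.normal(scale=0.1 + stress, size=t.size)

# Featurize the continuous AE signal with sliding-window statistics, the kind
# of features a tree ensemble can both exploit and rank by importance.
win = 40
windows = ae.reshape(-1, win)
feats = np.column_stack([windows.var(axis=1),
                         np.abs(windows).mean(axis=1),
                         windows.max(axis=1) - windows.min(axis=1)])
target = stress.reshape(-1, win).mean(axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(feats, target, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(round(rf.score(X_te, y_te), 2))  # R^2 of stress inferred from AE statistics alone
```

Holding out later cycles rather than random windows would be the stricter test used for forecasting failure times.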

  18. Deep Learning for ECG Classification

    NASA Astrophysics Data System (ADS)

    Pyakillya, B.; Kazachenko, N.; Mikhailovsky, N.

    2017-10-01

    The importance of ECG classification is very high now due to the many current medical applications in which this problem arises. Currently, there are many machine learning (ML) solutions which can be used for analyzing and classifying ECG data. However, the main disadvantage of these ML approaches is their reliance on heuristic, hand-crafted, or engineered features with shallow feature-learning architectures. The risk is that such features may not be the most appropriate ones for this ECG problem, limiting classification accuracy. One proposed solution is to use deep learning architectures in which the first convolutional layers behave as feature extractors and, at the end, some fully-connected (FCN) layers are used for making the final decision about ECG classes. In this work a deep learning architecture with 1D convolutional layers and FCN layers for ECG classification is presented, and some classification results are shown.
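    The architecture described above — 1D convolutional feature extractors followed by fully-connected layers for the class decision — can be illustrated with a bare-NumPy forward pass. This is a shape-level sketch with random weights, not the paper's trained network; kernel sizes, filter counts, and the number of classes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels, stride=1):
    """Valid 1D convolution + ReLU: x is (length, in_ch), kernels (k, in_ch, out_ch)."""
    k, _, out_ch = kernels.shape
    steps = (x.shape[0] - k) // stride + 1
    out = np.empty((steps, out_ch))
    for i in range(steps):
        window = x[i * stride:i * stride + k]          # (k, in_ch)
        out[i] = np.tensordot(window, kernels, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)                        # ReLU activation

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# A mock single-lead ECG segment: 512 samples, 1 channel.
ecg = rng.normal(size=(512, 1))

# Randomly initialised layers (only the shapes are meaningful here).
w1 = rng.normal(scale=0.1, size=(7, 1, 8))    # conv: kernel 7, 8 filters
w2 = rng.normal(scale=0.1, size=(5, 8, 16))   # conv: kernel 5, 16 filters
h = conv1d(conv1d(ecg, w1, stride=2), w2, stride=2)
h = h.mean(axis=0)                            # global average pooling
w_fc = rng.normal(scale=0.1, size=(16, 4))    # FC layer: 4 hypothetical ECG classes
probs = softmax(h @ w_fc)
print(probs)  # probabilities over the 4 classes, summing to 1
```

A real classifier would train these weights with a deep-learning framework; the sketch only shows how the convolutional stack reduces the raw signal to features that the FCN head turns into class probabilities.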

  19. High-Throughput, Protein-Targeted Biomolecular Detection Using Frequency-Domain Faraday Rotation Spectroscopy.

    PubMed

    Murdock, Richard J; Putnam, Shawn A; Das, Soumen; Gupta, Ankur; Chase, Elyse D Z; Seal, Sudipta

    2017-03-01

    A clinically relevant magneto-optical technique (fd-FRS, frequency-domain Faraday rotation spectroscopy) for characterizing proteins using antibody-functionalized magnetic nanoparticles (MNPs) is demonstrated. This technique distinguishes between the Faraday rotation of the solvent, iron oxide core, and functionalization layers of polyethylene glycol polymers (spacer) and model antibody-antigen complexes (anti-BSA/BSA, bovine serum albumin). A detection sensitivity of ≈10 pg mL⁻¹ and broad detection range of 10 pg mL⁻¹ ≲ c_BSA ≲ 100 µg mL⁻¹ are observed. Combining this technique with predictive analyte binding models quantifies (within an order of magnitude) the number of active binding sites on functionalized MNPs. Comparative enzyme-linked immunosorbent assay (ELISA) studies are conducted, reproducing the manufacturer advertised BSA ELISA detection limits from 1 ng mL⁻¹ ≲ c_BSA ≲ 500 ng mL⁻¹. In addition to the increased sensitivity, broader detection range, and similar specificity, fd-FRS can be conducted in less than ≈30 min, compared to ≈4 h with ELISA. Thus, fd-FRS is shown to be a sensitive optical technique with potential to become an efficient diagnostic in the chemical and biomolecular sciences. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Use of nonimaging nuclear medicine techniques to assess the effect of flunixin meglumine on effective renal plasma flow and effective renal blood flow in healthy horses.

    PubMed

    Held, J P; Daniel, G B

    1991-10-01

    The effect of flunixin meglumine on renal function was studied in 6 healthy horses by use of nonimaging nuclear medicine techniques. Effective renal plasma flow (ERPF) and effective renal blood flow (ERBF) were determined by plasma clearance of 131I-orthoiodohippuric acid before and after administration of flunixin meglumine. Mean ERPF and ERBF were 6.03 ml/min/kg and 10.7 ml/min/kg, respectively, before treatment and 5.7 ml/min/kg and 9.7 ml/min/kg, respectively, after treatment. Although ERPF and ERBF decreased after flunixin meglumine administration, the difference was not statistically significant.

  1. Predictive modeling of dynamic fracture growth in brittle materials with machine learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Bryan A.; Rougier, Esteban; O’Malley, Daniel

    We use simulation data from a high-fidelity Finite-Discrete Element Model to build an efficient Machine Learning (ML) approach to predict fracture growth and coalescence. Our goal is for the ML approach to be used as an emulator in place of the computationally intensive high-fidelity models in an uncertainty quantification framework where thousands of forward runs are required. The failure of materials with various fracture configurations (size, orientation, and the number of initial cracks) is explored and used as data to train our ML model. This novel approach has shown promise in predicting spatial (path to failure) and temporal (time to failure) aspects of brittle material failure. Predictions of where dominant fracture paths formed within a material were ~85% accurate, and the time of material failure deviated from the actual failure time by an average of ~16%. Additionally, the ML model achieves a reduction in computational cost by multiple orders of magnitude.
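    The emulator idea above — train a cheap ML model on expensive simulation outputs, then query it thousands of times inside an uncertainty-quantification loop — can be sketched as follows. The "simulator" here is an analytic stand-in for the FDEM model, and the crack descriptors are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def expensive_simulation(cracks):
    """Stand-in physics: time to failure shrinks with crack count and length."""
    n, mean_len, orient = cracks
    return 10.0 / (1.0 + 0.3 * n + 0.8 * mean_len) + 0.05 * np.cos(orient)

# A modest training set of (crack configuration -> time to failure) pairs.
X = np.column_stack([rng.integers(1, 10, 1000),      # number of initial cracks
                     rng.uniform(0.1, 2.0, 1000),    # mean crack length
                     rng.uniform(0.0, np.pi, 1000)]) # mean orientation
y = np.array([expensive_simulation(c) for c in X])

emulator = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Thousands of forward "runs" now cost milliseconds instead of simulator time.
query = np.column_stack([rng.integers(1, 10, 5000),
                         rng.uniform(0.1, 2.0, 5000),
                         rng.uniform(0.0, np.pi, 5000)])
print("emulated mean time to failure:", round(emulator.predict(query).mean(), 2))
```

In practice the training pairs would come from the high-fidelity FDEM runs themselves, and the emulator's accuracy would be validated against held-out simulations before it replaces them.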

  2. Materials Screening for the Discovery of New Half-Heuslers: Machine Learning versus ab Initio Methods.

    PubMed

    Legrain, Fleur; Carrete, Jesús; van Roekeghem, Ambroise; Madsen, Georg K H; Mingo, Natalio

    2018-01-18

    Machine learning (ML) is increasingly becoming a helpful tool in the search for novel functional compounds. Here we use classification via random forests to predict the stability of half-Heusler (HH) compounds, using only experimentally reported compounds as a training set. Cross-validation yields an excellent agreement between the fraction of compounds classified as stable and the actual fraction of truly stable compounds in the ICSD. The ML model is then employed to screen 71,178 different 1:1:1 compositions, yielding 481 likely stable candidates. The predicted stability of HH compounds from three previous high-throughput ab initio studies is critically analyzed from the perspective of the alternative ML approach. The incomplete consistency among the three separate ab initio studies and between them and the ML predictions suggests that additional factors beyond those considered by ab initio phase stability calculations might be determinant to the stability of the compounds. Such factors can include configurational entropies and quasiharmonic contributions.
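    The screening workflow above — train a random-forest stability classifier on known compounds, cross-validate it, then rank unseen compositions — can be sketched with synthetic data. The six descriptors and the stability rule are placeholders, not the paper's actual features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical descriptors for 1000 known ABC compositions (e.g. radius and
# electronegativity differences); "stability" is a noisy function of them.
X = rng.normal(size=(1000, 6))
stable = (X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=1000)) > 0.5

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, stable, cv=5)
print("cross-validated accuracy:", round(scores.mean(), 3))

# Screening step: rank unseen candidate compositions by predicted stability.
clf.fit(X, stable)
candidates = rng.normal(size=(50, 6))
p_stable = clf.predict_proba(candidates)[:, 1]
print("candidates classified as likely stable:", int((p_stable > 0.5).sum()))
```

The same pattern scales to the 71,178-composition screen described in the abstract: only the descriptor table grows; the classifier and the probability-ranking step are unchanged.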

  3. Predictive modeling of dynamic fracture growth in brittle materials with machine learning

    DOE PAGES

    Moore, Bryan A.; Rougier, Esteban; O’Malley, Daniel; ...

    2018-02-22

    We use simulation data from a high-fidelity Finite-Discrete Element Model to build an efficient Machine Learning (ML) approach to predict fracture growth and coalescence. Our goal is for the ML approach to be used as an emulator in place of the computationally intensive high-fidelity models in an uncertainty quantification framework where thousands of forward runs are required. The failure of materials with various fracture configurations (size, orientation, and the number of initial cracks) is explored and used as data to train our ML model. This novel approach has shown promise in predicting spatial (path to failure) and temporal (time to failure) aspects of brittle material failure. Predictions of where dominant fracture paths formed within a material were ~85% accurate, and the time of material failure deviated from the actual failure time by an average of ~16%. Additionally, the ML model achieves a reduction in computational cost by multiple orders of magnitude.

  4. Comparison of robotics, functional electrical stimulation, and motor learning methods for treatment of persistent upper extremity dysfunction after stroke: a randomized controlled trial.

    PubMed

    McCabe, Jessica; Monkiewicz, Michelle; Holcomb, John; Pundik, Svetlana; Daly, Janis J

    2015-06-01

    To compare response to upper-limb treatment using robotics plus motor learning (ML) versus functional electrical stimulation (FES) plus ML versus ML alone, according to a measure of complex functional everyday tasks for chronic, severely impaired stroke survivors. Single-blind, randomized trial. Medical center. Enrolled subjects (N=39) were >1 year post single stroke (attrition rate=10%; 35 completed the study). All groups received treatment 5d/wk for 5h/d (60 sessions), with unique treatment as follows: ML alone (n=11) (5h/d partial- and whole-task practice of complex functional tasks), robotics plus ML (n=12) (3.5h/d of ML and 1.5h/d of shoulder/elbow robotics), and FES plus ML (n=12) (3.5h/d of ML and 1.5h/d of FES wrist/hand coordination training). Primary measure: Arm Motor Ability Test (AMAT), with 13 complex functional tasks; secondary measure: upper-limb Fugl-Meyer coordination scale (FM). There was no significant difference found in treatment response across groups (AMAT: P≥.584; FM coordination: P≥.590). All 3 treatment groups demonstrated clinically and statistically significant improvement in response to treatment (AMAT and FM coordination: P≤.009). A group treatment paradigm of 1:3 (therapist/patient) ratio proved feasible for provision of the intensive treatment. There were no adverse effects. Severely impaired stroke survivors with persistent (>1y) upper-extremity dysfunction can make clinically and statistically significant gains in coordination and functional task performance in response to robotics plus ML, FES plus ML, and ML alone in an intensive and long-duration intervention; no group differences were found. Additional studies are warranted to determine the effectiveness of these methods in the clinical setting. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  5. [Application of bilateral direct anterior approach total hip arthroplasty: a report of 22 cases].

    PubMed

    Tang, J; Lv, M; Zhou, Y X; Zhang, J

    2017-04-18

    To analyze the operative technique and the methods to avoid early complications on the learning curve for bilateral direct anterior approach (DAA) total hip arthroplasty (THA). We retrospectively studied a consecutive series of cases with bilateral avascular necrosis of the femoral head (AVN), degenerative dysplastic hip, or rheumatoid arthritis that were treated by DAA THA in Beijing Jishuitan Hospital. A total of 22 patients with 44 hips were analyzed from June 2014 to August 2016 in this study. There were 17 males and 5 females, and the median age was 48 years (range: 34-67 years). All operations were performed via the DAA by two senior surgeons. The clinical characteristics, early treatment results, and complications were analyzed. We used cementless stems in all cases. The average operating time was (167±23) min; the average blood loss was (775±300) mL; the average blood transfusion was (327±341) mL; the average wound drainage was (111±73) mL. Most of the patients could move out of bed by themselves on the first day after the operation, 5 patients could walk without crutches on the first operating day, and 13 patients could squat on the third day after the operation. The patients were discharged on average 4 days after the operation. We followed up all the patients for an average of 16 months (range: 8-24 months). There was no loosening or failure at the latest follow-up. In the study, 2 patients had greater trochanter fractures, 2 patients had thigh pain, 4 patients had lateral femoral cutaneous nerve palsy, and 3 patients had muscle damage. The Harris scores improved from 29±8 preoperatively to 90±3 postoperatively (P<0.01). DAA THA can achieve faster recovery and a flexible hip joint after the operation. However, it is technically demanding, and the learning curve for bilateral DAA THA is associated with a high complication rate. Careful patient selection and skilled technique can help the surgeon avoid early complications.

  6. The Public Economics of Mastery Learning.

    ERIC Educational Resources Information Center

    Garner, William T.

    There is both less and more to mastery learning (ML) than meets the eye. Less because mastery learning is not based on a model of school learning, and more because it is the most optimistic statement we have about the power of education. The notions of setting achievement standards and letting time for completion vary, of using criterion…

  7. 78 FR 68774 - Onsite Emergency Response Capabilities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-15

    [email protected] . SUPPLEMENTARY INFORMATION: I. Background As a result of the events at the Fukushima Dai-ichi... Force Review of Insights from the Fukushima Dai-ichi Accident'' (ADAMS Accession No. ML111861807), the... in Response to Fukushima Lessons Learned'' (ADAMS Accession No. ML11269A204). The NRC staff...

  8. Learning Machine Learning: A Case Study

    ERIC Educational Resources Information Center

    Lavesson, N.

    2010-01-01

    This correspondence reports on a case study conducted in the Master's-level Machine Learning (ML) course at Blekinge Institute of Technology, Sweden. The students participated in a self-assessment test and a diagnostic test of prerequisite subjects, and their results on these tests are correlated with their achievement of the course's learning…

  9. Determination of plasma volume in anaesthetized piglets using the carbon monoxide (CO) method.

    PubMed

    Heltne, J K; Farstad, M; Lund, T; Koller, M E; Matre, K; Rynning, S E; Husby, P

    2002-07-01

    Based on measurements of the circulating red blood cell volume (V(RBC)) in seven anaesthetized piglets using carbon monoxide (CO) as a label, plasma volume (PV) was calculated for each animal. The increase in carboxyhaemoglobin (COHb) concentration following administration of a known amount of CO into a closed circuit re-breathing system was determined by diode-array spectrophotometry. Simultaneously measured haematocrit (HCT) and haemoglobin (Hb) values were used for PV calculation. The PV values were compared with simultaneously measured PVs determined using the Evans blue technique. Mean values (SD) for PV were 1708.6 (287.3) ml and 1738.7 (412.4) ml with the CO method and the Evans blue technique, respectively. Comparison of PVs determined with the two techniques demonstrated good correlation (r = 0.995). The mean difference between PV measurements was -29.9 ml and the limits of agreement (mean difference +/-2SD) were -289.1 ml and 229.3 ml. In conclusion, the CO method can be applied easily under general anaesthesia and controlled ventilation with a simple administration system. The agreement between the compared methods was satisfactory. Plasma volume determination with the CO method is safe and accurate, with no signs of major side effects.
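    The final step described above — computing plasma volume from the CO-derived red-cell volume and the haematocrit — is a one-line calculation worth making explicit. The numbers below are illustrative, chosen only to land near the study's ~1700 mL mean; they are not individual-animal data.

```python
# Hedged worked example: once the circulating red-cell volume V_RBC is known
# from the CO-dilution measurement, plasma volume follows from the haematocrit.
v_rbc_ml = 560.0        # red-cell volume from CO labelling (hypothetical value)
hct = 0.25              # whole-body haematocrit fraction (hypothetical value)

# Blood volume = V_RBC / Hct; plasma volume is the non-cellular remainder.
blood_volume_ml = v_rbc_ml / hct
plasma_volume_ml = blood_volume_ml * (1.0 - hct)   # = V_RBC * (1 - Hct) / Hct
print(round(plasma_volume_ml, 1))  # → 1680.0
```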

  10. Maternal and fetal effect of misgav ladach cesarean section in nigerian women: a randomized control study.

    PubMed

    Ezechi, Oc; Ezeobi, Pm; Gab-Okafor, Cv; Edet, A; Nwokoro, Ca; Akinlade, A

    2013-10-01

    The poor utilisation of the Misgav-Ladach (ML) caesarean section method in our environment, despite its proven advantages, has been attributed to several factors, including its lack of local evaluation. A well-designed and well-conducted trial is needed to provide evidence to convince clinicians of its advantage over Pfannenstiel-based methods. To evaluate the outcome of ML-based caesarean section among Nigerian women. Randomised controlled open-label study of 323 women undergoing primary caesarean section in Lagos, Nigeria. The women were randomised to either the ML method or the Pfannenstiel-based (PB) caesarean section technique using computer-generated random numbers. The mean duration of surgery (P < 0.001), time to first bowel motion (P = 0.01) and ambulation (P < 0.001) were significantly shorter in the ML group compared to the PB group. Postoperative anaemia (P < 0.01), analgesic needs (P = 0.02), extra suture use, estimated blood loss (P < 0.01) and postoperative complications (P = 0.001) were significantly lower in the ML group compared to the PB group. Though the mean hospital stay was shorter in the ML group (5.8 days versus 6.0 days), the difference was not statistically significant (P = 0.17). Of the fetal outcome measures compared, only fetal extraction time differed significantly between the two groups (P = 0.001); the mean fetal extraction time was 162 sec in the ML group compared to 273 sec in the PB group. This study confirmed the established benefits of the ML technique in Nigerian women as they relate to postoperative outcomes, duration of surgery, and fetal extraction time. The technique is recommended to clinicians, as its superior maternal and fetal outcomes and cost-saving advantage make it appropriate for use in poor-resource settings.

  11. Comparative Evaluation of Two Final Irrigation Techniques for the Removal of Precipitate Formed by the Interaction between Sodium Hypochlorite and Chlorhexidine.

    PubMed

    Metri, Malasiddappa; Hegde, Swaroop; Dinesh, K; Indiresha, H N; Nagaraj, Shruthi; Bhandi, Shilpa H

    2015-11-01

    To evaluate the effectiveness of two final irrigation techniques for the removal of precipitate formed by the interaction between sodium hypochlorite (NaOCl) and chlorhexidine (CHX). Sixty freshly extracted human maxillary incisor teeth were taken and randomly divided into three groups, containing 20 teeth each. Group 1 (control group) was irrigated with 5 ml of 2.5% NaOCl and a final flush with 5 ml of 2% chlorhexidine. Group 2 was irrigated with 5 ml of 2.5% NaOCl and 5 ml of 2% chlorhexidine, followed by 5 ml of saline, and agitated with F-files. Group 3 was irrigated with 5 ml of 2.5% NaOCl and 5 ml of 2% chlorhexidine, followed by 5 ml of 15% citric acid, and passively agitated with ultrasonics. A thin longitudinal groove was made along the buccal and lingual aspects of the root using diamond disks, and the teeth were split with chisel and mallet. Both halves of each split tooth were examined under a stereomicroscope. Results were tabulated and analyzed statistically using analysis of variance (ANOVA) and the Mann-Whitney U test. There was a significant difference between the mean values (p < 0.05) in groups 2 and 3 compared to group 1 at each level. Passive ultrasonic irrigation is more effective than the F-file agitation technique at removing the precipitate at all three levels measured. The combined sodium hypochlorite and chlorhexidine irrigation protocol has been practiced for many years to achieve good results. However, it produces a precipitate that is considered carcinogenic in nature; hence, this precipitate should be removed.

  12. Investigation on V2O5 Thin Films Prepared by Spray Pyrolysis Technique

    NASA Astrophysics Data System (ADS)

    Anasthasiya, A. Nancy Anna; Gowtham, K.; Shruthi, R.; Pandeeswari, R.; Jeyaprakash, B. G.

    The spray pyrolysis technique was employed to deposit V2O5 thin films on a glass substrate. By varying the precursor solution volume from 10 mL to 50 mL in steps of 10 mL, films of various thicknesses were prepared. Orthorhombic polycrystalline V2O5 films were inferred from the XRD pattern irrespective of precursor solution volume. The micro-Raman studies suggested that the annealed V2O5 thin film has good crystallinity. The effect of precursor solution volume on morphological and optical properties was analysed and reported.

  13. SU-G-201-09: Evaluation of a Novel Machine-Learning Algorithm for Permanent Prostate Brachytherapy Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicolae, A; Department of Physics, Ryerson University, Toronto, ON; Lu, L

    Purpose: A novel, automated, algorithm for permanent prostate brachytherapy (PPB) treatment planning has been developed. The novel approach uses machine-learning (ML), a form of artificial intelligence, to substantially decrease planning time while simultaneously retaining the clinical intuition of plans created by radiation oncologists. This study seeks to compare the ML algorithm against expert-planned PPB plans to evaluate the equivalency of dosimetric and clinical plan quality. Methods: Plan features were computed from historical high-quality PPB treatments (N = 100) and stored in a relational database (RDB). The ML algorithm matched new PPB features to a highly similar case in the RDB; this initial plan configuration was then further optimized using a stochastic search algorithm. PPB pre-plans (N = 30) generated using the ML algorithm were compared to plan variants created by an expert dosimetrist (RT), and radiation oncologist (MD). Planning time and pre-plan dosimetry were evaluated using a one-way Student’s t-test and ANOVA, respectively (significance level = 0.05). Clinical implant quality was evaluated by expert PPB radiation oncologists as part of a qualitative study. Results: Average planning time was 0.44 ± 0.42 min compared to 17.88 ± 8.76 min for the ML algorithm and RT, respectively, a significant advantage [t(9), p = 0.01]. A post-hoc ANOVA [F(2,87) = 6.59, p = 0.002] using Tukey-Kramer criteria showed a significantly lower mean prostate V150% for the ML plans (52.9%) compared to the RT (57.3%), and MD (56.2%) plans. Preliminary qualitative study results indicate comparable clinical implant quality between RT and ML plans with a trend towards preference for ML plans. Conclusion: PPB pre-treatment plans highly comparable to those of an expert radiation oncologist can be created using a novel ML planning model. The use of an ML-based planning approach is expected to translate into improved PPB accessibility and plan uniformity.
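    The two-stage planning idea above — retrieve the most similar historical case from the plan database, then refine it with a stochastic search — can be sketched generically. The feature space, similarity metric, and plan-quality objective below are illustrative placeholders, not the clinical algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock database: 100 historical plans, each a case-feature vector plus a
# "plan" (here just a parameter vector to be refined).
db_features = rng.normal(size=(100, 5))
db_plans = rng.normal(size=(100, 8))

def objective(plan, target):
    """Stand-in plan-quality score (lower is better)."""
    return float(((plan - target) ** 2).sum())

def make_plan(new_features, target, n_iter=500, step=0.1):
    # Stage 1: nearest-neighbour retrieval from the relational database.
    idx = np.argmin(((db_features - new_features) ** 2).sum(axis=1))
    plan = db_plans[idx].copy()
    best = objective(plan, target)
    # Stage 2: stochastic search - keep random perturbations that improve.
    for _ in range(n_iter):
        cand = plan + step * rng.normal(size=plan.shape)
        score = objective(cand, target)
        if score < best:
            plan, best = cand, score
    return plan, best

target = rng.normal(size=8)               # hypothetical ideal dose parameters
plan, score = make_plan(rng.normal(size=5), target)
print("final objective:", round(score, 3))
```

In the real system the objective would encode dosimetric constraints (e.g. prostate V150%) rather than a distance to a known target, but the retrieve-then-refine structure is the same.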

  14. Modern Languages and Specific Learning Difficulties (SpLD): Implications of Teaching Adult Learners with Dyslexia in Distance Learning

    ERIC Educational Resources Information Center

    Gallardo, Matilde; Heiser, Sarah; Arias McLaughlin, Ximena

    2015-01-01

    In modern language (ML) distance learning programmes, teachers and students use online tools to facilitate, reinforce and support independent learning. This makes it essential for teachers to develop pedagogical expertise in using online communication tools to perform their role. Teachers frequently raise questions of how best to support the needs…

  15. Developing Pedagogical Expertise in Modern Language Learning and Specific Learning Difficulties through Collaborative and Open Educational Practices

    ERIC Educational Resources Information Center

    Gallardo, Matilde; Heiser, Sarah; Arias McLaughlin, Ximena

    2017-01-01

    This paper analyses teachers' engagement with collaborative and open educational practices to develop their pedagogical expertise in the field of modern language (ML) learning and specific learning difficulties (SpLD). The study analyses the findings of a staff development initiative at the Department of Languages, Open University, UK, in 2013,…

  16. Merged or monolithic? Using machine-learning to reconstruct the dynamical history of simulated star clusters

    NASA Astrophysics Data System (ADS)

    Pasquato, Mario; Chung, Chul

    2016-05-01

    Context. Machine-learning (ML) solves problems by learning patterns from data with limited or no human guidance. In astronomy, ML is mainly applied to large observational datasets, e.g. for morphological galaxy classification. Aims: We apply ML to gravitational N-body simulations of star clusters that are either formed by merging two progenitors or evolved in isolation, planning to later identify globular clusters (GCs) that may have a history of merging from observational data. Methods: We create mock-observations from simulated GCs, from which we measure a set of parameters (also called features in the machine-learning field). After carrying out dimensionality reduction on the feature space, the resulting data points are fed into various classification algorithms. Using repeated random subsampling validation, we check whether the groups identified by the algorithms correspond to the underlying physical distinction between mergers and monolithically evolved simulations. Results: The three algorithms we considered (C5.0 trees, k-nearest neighbour, and support-vector machines) all achieve a test misclassification rate of about 10% without parameter tuning, with support-vector machines slightly outperforming the others. The first principal component of feature space correlates with cluster concentration. If we exclude it from the regression, the performance of the algorithms is only slightly reduced.
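    The pipeline described above — measure features, reduce dimensionality, classify merger vs. monolithic, and validate by repeated random subsampling — can be sketched with scikit-learn. The feature values and class separations below are synthetic assumptions standing in for the simulated-cluster measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# 200 mock clusters with 10 measured features; the class shifts two of them.
y = rng.integers(0, 2, size=200)          # 0 = monolithic, 1 = merger
X = rng.normal(size=(200, 10))
X[:, 0] += 1.5 * y                        # e.g. a concentration-like feature
X[:, 3] -= 1.0 * y

# Standardize, reduce dimensionality, then classify with an RBF SVM.
model = make_pipeline(StandardScaler(), PCA(n_components=4), SVC(kernel="rbf"))

# Repeated random subsampling validation: 20 random 75/25 train/test splits.
cv = ShuffleSplit(n_splits=20, test_size=0.25, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)
print("mean test accuracy:", round(scores.mean(), 3))
```

Swapping `SVC` for a k-nearest-neighbour or tree classifier reproduces the paper's algorithm comparison within the same pipeline and validation scheme.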

  17. An Application of the Geo-Semantic Micro-services in Seamless Data-Model Integration

    NASA Astrophysics Data System (ADS)

    Jiang, P.; Elag, M.; Kumar, P.; Liu, R.; Hu, Y.; Marini, L.; Peckham, S. D.; Hsu, L.

    2016-12-01


  18. Tomographic three-dimensional echocardiographic determination of chamber size and systolic function in patients with left ventricular aneurysm: comparison to magnetic resonance imaging, cineventriculography, and two-dimensional echocardiography.

    PubMed

    Buck, T; Hunold, P; Wentz, K U; Tkalec, W; Nesser, H J; Erbel, R

    1997-12-16

    Two-dimensional (2D) echocardiographic approaches based on geometric assumptions face the greatest limitations and inaccuracies in patients with left ventricular (LV) aneurysms. Three-dimensional (3D) echocardiographic techniques can potentially overcome these limitations; to date, however, although tested in experimental models of aneurysms, they have not been applied to a series of patients with such distortion. The purpose of this study was therefore to validate the clinical application of tomographic 3D echocardiography (3DE) by the routine transthoracic approach to determine LV chamber size and systolic function without geometric assumptions in patients with LV aneurysms. In 23 patients with chronic stable LV aneurysms, LV end-systolic and end-diastolic volumes (LVEDV, LVESV) and ejection fraction (LVEF) by tomographic 3DE were compared with results from 3D magnetic resonance tomography (3DMRT) as an independent reference as well as with the conventional techniques of single plane and biplane 2D echocardiography and biplane cineventriculography. Dynamic 3DE image data sets were obtained from a transthoracic apical view with the use of a rotating probe with acquisition gated to control for ECG and respiration (Echoscan, TomTec). Volumes were calculated from the 3D data sets by summating the volumes of multiple parallel disks. 3DE results correlated and agreed well with those by 3DMRT, with better correlation and agreement than provided by other techniques for LVEDV (3DE: r=.97, SEE=14.7 mL, SD of differences from 3DMRT=14.5 mL; other techniques: r=.84 to .93, SEE=30.7 to 41.6 mL [P<.001 versus 3DE by F test], SD of differences=31.5 to 40.7 mL [P<.001 versus 3DE by F test]). 
The same also pertained to LVESV (3DE: r=.97, SEE=12.4 mL, SD of differences=12.9 mL; other techniques: r=.81 to .90, SEE=24.7 to 37.2 mL [P<.001], SD of differences=27.6 to 36.8 mL [P<.005]) and LVEF (3DE: r=.74, SEE=5.6%, SD of differences=6.7%; other techniques: r=.14 to .59, SEE=9.5% to 10.1% [P<.01], SD of differences=9.5% to 12.6% [P<.05]). Compared with 3DMRT, 3DE was less time-consuming and caused less patient discomfort. Tomographic 3DE is an accurate noninvasive technique for calculating LV volumes and systolic function in patients with LV aneurysm. Unlike current 2D methods, tomographic 3DE requires no geometric assumptions that limit accuracy.
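    The volume calculation mentioned above — summating the volumes of multiple parallel disks through the 3D data set — is simple to make concrete. The slice areas below are a made-up apex-to-base profile, not patient data.

```python
import numpy as np

slice_thickness_cm = 0.5
# Hypothetical cross-sectional cavity areas (cm^2) from apex to base.
areas_cm2 = np.array([1.0, 3.5, 6.0, 8.2, 9.1, 8.8, 7.0, 4.2, 1.5])

# Each parallel disk contributes area * thickness; the chamber volume is
# the sum of the disks (1 cm^3 = 1 mL).
volume_ml = float((areas_cm2 * slice_thickness_cm).sum())
print(round(volume_ml, 2))  # → 24.65
```

Because the areas are traced directly from the tomographic slices, no geometric model of the chamber is assumed, which is exactly the advantage over 2D methods noted in the abstract.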

  19. A universal strategy for the creation of machine learning-based atomistic force fields

    NASA Astrophysics Data System (ADS)

    Huan, Tran Doan; Batra, Rohit; Chapman, James; Krishnan, Sridevi; Chen, Lihua; Ramprasad, Rampi

    2017-09-01

    Emerging machine learning (ML)-based approaches provide powerful and novel tools to study a variety of physical and chemical problems. In this contribution, we outline a universal strategy to create ML-based atomistic force fields, which can be used to perform high-fidelity molecular dynamics simulations. This scheme involves (1) preparing a big reference dataset of atomic environments and forces with sufficiently low noise, e.g., using density functional theory or higher-level methods, (2) utilizing a generalizable class of structural fingerprints for representing atomic environments, (3) optimally selecting diverse and non-redundant training datasets from the reference data, and (4) proposing various learning approaches to predict atomic forces directly (and rapidly) from atomic configurations. From the atomistic forces, accurate potential energies can then be obtained by appropriate integration along a reaction coordinate or along a molecular dynamics trajectory. Based on this strategy, we have created model ML force fields for six elemental bulk solids, including Al, Cu, Ti, W, Si, and C, and show that all of them can reach chemical accuracy. The proposed procedure is general and universal, in that it can potentially be used to generate ML force fields for any material using the same unified workflow with little human intervention. Moreover, the force fields can be systematically improved by adding new training data progressively to represent atomic environments not encountered previously.

  20. Predicting Treatment Response to Intra-arterial Therapies for Hepatocellular Carcinoma with the Use of Supervised Machine Learning-An Artificial Intelligence Concept.

    PubMed

    Abajian, Aaron; Murali, Nikitha; Savic, Lynn Jeanette; Laage-Gaupp, Fabian Max; Nezami, Nariman; Duncan, James S; Schlachter, Todd; Lin, MingDe; Geschwind, Jean-François; Chapiro, Julius

    2018-06-01

    To use magnetic resonance (MR) imaging and clinical patient data to create an artificial intelligence (AI) framework for the prediction of therapeutic outcomes of transarterial chemoembolization by applying machine learning (ML) techniques. This study included 36 patients with hepatocellular carcinoma (HCC) treated with transarterial chemoembolization. The cohort (age 62 ± 8.9 years; 31 men; 13 white; 24 Eastern Cooperative Oncology Group performance status 0, 10 status 1, 2 status 2; 31 Child-Pugh stage A, 4 stage B, 1 stage C; 1 Barcelona Clinic Liver Cancer stage 0, 12 stage A, 10 stage B, 13 stage C; tumor size 5.2 ± 3.0 cm; number of tumors 2.6 ± 1.1; and 30 conventional transarterial chemoembolization, 6 with drug-eluting embolic agents). MR imaging was obtained before and 1 month after transarterial chemoembolization. Image-based tumor response to transarterial chemoembolization was assessed with the use of the 3D quantitative European Association for the Study of the Liver (qEASL) criterion. Clinical information, baseline imaging, and therapeutic features were used to train logistic regression (LR) and random forest (RF) models to predict patients as treatment responders or nonresponders under the qEASL response criterion. The performance of each model was validated using leave-one-out cross-validation. Both LR and RF models predicted transarterial chemoembolization treatment response with an overall accuracy of 78% (sensitivity 62.5%, specificity 82.1%, positive predictive value 50.0%, negative predictive value 88.5%). The strongest predictors of treatment response included a clinical variable (presence of cirrhosis) and an imaging variable (relative tumor signal intensity >27.0). Transarterial chemoembolization outcomes in patients with HCC may be predicted before procedures by combining clinical patient data and baseline MR imaging with the use of AI and ML techniques. Copyright © 2018 SIR. Published by Elsevier Inc. All rights reserved.
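    The validation scheme above — leave-one-out cross-validation of a responder/non-responder classifier on a 36-patient cohort — can be sketched as follows. The predictor values and labels are mock data; only the predictor names (cirrhosis, relative tumor signal intensity >27.0) echo the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
n = 36                                        # cohort size from the study

# Hypothetical predictors: a cirrhosis flag and a thresholded relative
# tumor signal intensity (mock values, not patient data).
cirrhosis = rng.integers(0, 2, size=n)
rel_intensity = rng.normal(27.0, 5.0, size=n)
X = np.column_stack([cirrhosis, (rel_intensity > 27.0).astype(int)])

# Mock responder labels loosely tied to the two predictors.
y = ((cirrhosis + (rel_intensity > 27.0) + rng.normal(0.0, 0.7, n)) > 1.0).astype(int)

# Leave-one-out CV: each patient is predicted by a model trained on the
# other 35, mirroring the study's validation of the LR and RF models.
pred = cross_val_predict(LogisticRegression(), X, y, cv=LeaveOneOut())
print("LOO accuracy:", round(float((pred == y).mean()), 2))
```

Substituting `RandomForestClassifier` for `LogisticRegression` gives the paper's second model under the identical validation loop.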

  1. Machine Learning to Assess Grassland Productivity in Southeastern Arizona

    NASA Astrophysics Data System (ADS)

    Ponce-Campos, G. E.; Heilman, P.; Armendariz, G.; Moser, E.; Archer, V.; Vaughan, R.

    2015-12-01

    We present preliminary results of machine learning (ML) techniques modeling the combined effects of climate, management, and inherent potential on the productivity of grazed semi-arid grasslands in southeastern Arizona. Our goal is to help public land managers determine whether agency management policies are meeting objectives and where to focus attention. Monitoring in the field is becoming increasingly limited in space and time. Remotely sensed data cover entire allotments and extend back in time, but do not capture the key issue of species composition. By estimating expected vegetative production as a function of site potential and climatic inputs, management skill can be assessed through time, across individual allotments, and between allotments. Here we present the use of Random Forest (RF) as the main ML technique, in this case for the purpose of regression. Our response variable is the maximum annual NDVI, a surrogate for grassland productivity, as generated by the Google Earth Engine cloud computing platform based on Landsat 5, 7, and 8 datasets. PRISM 33-year normal precipitation (1980-2013) was resampled to the Landsat scale. In addition, the GRIDMET climate dataset was the source for the calculation of the annual SPEI (Standardized Precipitation Evapotranspiration Index), a drought index. We also included information about landscape position, aspect, streams, ponds, roads, and fire disturbances as part of the modeling process. Our results show that, in terms of variable importance, the 33-year normal precipitation and SPEI are the most important features affecting grassland productivity within the study area. The RF approach was compared to a linear regression model with the same variables. The linear model resulted in an r2 = 0.41, whereas RF showed a significant improvement with an r2 = 0.79. We continue refining the model by comparison with aerial photography and by including grazing intensity and infrastructure from units/allotments to assess the effect of management practices on vegetation production.
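
    The reported gap between the linear model and RF is typical when the response is nonlinear in the predictors. A toy sketch of that comparison (the variables, the threshold response, and all numbers below are invented for illustration):

```python
# Toy comparison of a random forest regressor vs. a linear model on data
# with a nonlinear (threshold-like) climate-productivity relationship.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
precip_normal = rng.uniform(200, 600, n)   # stand-in for 33-year normal precip (mm)
spei = rng.normal(0, 1, n)                 # stand-in drought index
X = np.column_stack([precip_normal, spei])
# Invented nonlinear response: productivity jumps once precipitation
# crosses a threshold, plus a drought-index contribution and noise
y = (precip_normal > 380).astype(float) + 0.3 * spei + rng.normal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
r2_lin = LinearRegression().fit(X_tr, y_tr).score(X_te, y_te)
r2_rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr).score(X_te, y_te)
print(f"linear r2={r2_lin:.2f}  random forest r2={r2_rf:.2f}")
```

    The forest's axis-aligned splits capture the threshold directly, while the linear model can only smear it out, which is the same qualitative pattern (0.41 vs. 0.79) reported in the abstract.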

  2. Comparison of prometaphase chromosome techniques with emphasis on the role of colcemid.

    PubMed

    Wiley, J E; Sargent, L M; Inhorn, S L; Meisner, L F

    1984-12-01

    Six different techniques were evaluated to better define the technical factors most critical for obtaining prometaphase cells for banding analysis. Our results demonstrate: (a) colcemid exposures of 30 min or less do not increase the yield of prometaphase cells; (b) colcemid exposures of greater than 0.1 microgram/ml can be toxic; (c) methotrexate depresses the mitotic index significantly and seems to increase the incidence of prometaphase cells only because it suppresses later forms; and (d) the optimum number of cytogenetically satisfactory prometaphase cells can be obtained with a 4-h exposure to a combination of low-concentration actinomycin D (0.5 microgram/ml) and colcemid (0.1 microgram/ml). This technique inhibits chromosome condensation while permitting prometaphase cells to accumulate for 4 h.

  3. Motor Learning Versus StandardWalking Exercise in Older Adults with Subclinical Gait Dysfunction: A Randomized Clinical Trial

    PubMed Central

    Brach, Jennifer S.; Van Swearingen, Jessie M.; Perera, Subashan; Wert, David M.; Studenski, Stephanie

    2013-01-01

    Background: Current exercise recommendations focus on endurance and strength, but rarely incorporate principles of motor learning. Motor learning exercise is designed to address neurological aspects of movement. Motor learning exercise has not been evaluated in older adults with subclinical gait dysfunction. Objectives: To compare motor learning versus standard exercise on measures of mobility and perceived function and disability. Design: Single-blind randomized trial. Setting: University research center. Participants: Older adults (n=40; mean age 77.1±6.0 years) who had normal walking speed (≥1.0 m/s) and impaired motor skill (Figure of 8 walk time > 8 s). Interventions: The motor learning program (ML) incorporated goal-oriented stepping and walking to promote timing and coordination within the phases of the gait cycle. The standard program (S) employed endurance training by treadmill walking. Both included strength training and were offered twice weekly for one hour for 12 weeks. Measurements: Primary outcomes included mobility performance (gait efficiency, motor skill in walking, gait speed, and walking endurance); secondary outcomes included perceived function and disability (Late Life Function and Disability Instrument). Results: 38 of 40 participants completed the trial (ML, n=18; S, n=20). ML improved more than S in gait speed (0.13 vs. 0.05 m/s, p=0.008) and motor skill (−2.2 vs. −0.89 s, p<0.0001). Both groups improved in walking endurance (28.3 and 22.9 m) but did not differ significantly (p=0.14). Changes in gait efficiency and perceived function and disability did not differ between the groups (p>0.10). Conclusion: In older adults with subclinical gait dysfunction, motor learning exercise improved some parameters of mobility performance more than standard exercise. PMID:24219189

  4. Distributed Learning Enhances Relational Memory Consolidation

    ERIC Educational Resources Information Center

    Litman, Leib; Davachi, Lila

    2008-01-01

    It has long been known that distributed learning (DL) provides a mnemonic advantage over massed learning (ML). However, the underlying mechanisms that drive this robust mnemonic effect remain largely unknown. In two experiments, we show that DL across a 24 hr interval does not enhance immediate memory performance but instead slows the rate of…

  5. Machine learning algorithms for the prediction of hERG and CYP450 binding in drug development.

    PubMed

    Klon, Anthony E

    2010-07-01

    The cost of developing new drugs is estimated at approximately $1 billion; the withdrawal of a marketed compound due to toxicity can result in serious financial loss for a pharmaceutical company. There has therefore been growing interest in the development of in silico tools that can identify compounds with metabolic liabilities before they are brought to market. This review discusses the two largest classes of machine learning (ML) models, developed to predict binding to the human ether-a-go-go related gene (hERG) ion channel protein and to the various CYP isoforms. Being able to identify potentially toxic compounds before they are made would greatly reduce the number of compound failures and the costs associated with drug development. This review summarizes the state of modeling hERG and CYP binding towards this goal since 2003 using ML algorithms. A wide variety of ML algorithms that are comparable in their overall performance are available. These ML methods may be applied regularly in discovery projects to flag compounds with potential metabolic liabilities.

  6. Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time-to-Event Analysis.

    PubMed

    Gong, Xiajing; Hu, Meng; Zhao, Liang

    2018-05-01

    Additional value can potentially be created by applying big data tools to pharmacometric problems. The performance of machine learning (ML) methods and of the Cox regression model was evaluated on simulated time-to-event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high-dimensional data featuring a large number of predictor variables. Our results showed that ML-based methods outperformed the Cox model in prediction performance, as assessed by the concordance index, and in identifying the preset influential variables in high-dimensional data. The prediction performance of ML-based methods is also less sensitive to data size and censoring rates than that of the Cox regression model. In conclusion, ML-based methods provide a powerful tool for time-to-event analysis, with a built-in capacity for high-dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. © 2018 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
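
    The concordance index used to compare the methods measures how often predicted risks are ordered consistently with observed event times among comparable pairs. A minimal pure-numpy implementation under the standard definition (the toy data are invented):

```python
# Minimal concordance-index (c-index) calculation for right-censored
# time-to-event data, the metric used in the study to compare ML methods
# with the Cox model. Pure numpy; the example data are invented.
import numpy as np

def concordance_index(time, event, risk):
    """Fraction of comparable pairs whose predicted risks are ordered
    consistently with observed event times (ties in risk count 0.5)."""
    time, event, risk = map(np.asarray, (time, event, risk))
    num, den = 0.0, 0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue                    # comparable pairs anchor on observed events
        for j in range(n):
            if time[j] > time[i]:       # subject j outlived subject i's event time
                den += 1
                if risk[i] > risk[j]:
                    num += 1.0          # concordant: earlier event had higher risk
                elif risk[i] == risk[j]:
                    num += 0.5          # tied risks count half
    return num / den

time  = [5, 8, 3, 12, 6]
event = [1, 0, 1, 1, 1]                 # 1 = event observed, 0 = censored
risk  = [0.9, 0.2, 0.95, 0.1, 0.5]      # higher risk should mean earlier event
print(concordance_index(time, event, risk))
```

    A c-index of 0.5 corresponds to random ordering and 1.0 to perfect ordering; the example above is perfectly concordant.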

  7. The reluctant visitor: an alkaloid in toxic nectar can reduce olfactory learning and memory in Asian honey bees.

    PubMed

    Zhang, Junjun; Wang, Zhengwei; Wen, Ping; Qu, Yufeng; Tan, Ken; Nieh, James C

    2018-03-01

    The nectar of the thunder god vine, Tripterygium hypoglaucum, contains a terpenoid, triptolide (TRP), that may be toxic to the sympatric Asian honey bee, Apis cerana, because honey produced from this nectar is toxic to bees. However, these bees will forage on, recruit for, and pollinate this plant during a seasonal dearth of preferred food sources. Olfactory learning plays a key role in forager constancy and pollination, and we therefore tested the effects of acute and chronic TRP feeding on forager olfactory learning, using proboscis extension reflex conditioning. At concentrations of 0.5-10 µg TRP ml⁻¹, there were no learning effects of acute exposure. However, memory retention (1 h after the last learning trial) significantly decreased by 56% following acute consumption of 0.5 µg TRP ml⁻¹. Chronic exposure did not alter learning or memory, except at high concentrations (5 and 10 µg TRP ml⁻¹). TRP concentrations in nectar may therefore not significantly harm plant pollination. Surprisingly, TRP slightly increased bee survival, and thus other components in T. hypoglaucum honey may be toxic. Long-term exposure to TRP could have colony effects, but these may be ameliorated by the bees' aversion to T. hypoglaucum nectar when other food sources are available and, perhaps, by detoxification mechanisms. The co-evolution of this plant and its reluctant visitor may therefore illustrate a classic compromise between the interests of both actors. © 2018. Published by The Company of Biologists Ltd.

  8. Multivariate Analysis and Machine Learning in Cerebral Palsy Research

    PubMed Central

    Zhang, Jing

    2017-01-01

    Cerebral palsy (CP), a common pediatric movement disorder, causes the most severe physical disability in children. Early diagnosis in high-risk infants is critical for early intervention and possible early recovery. In recent years, multivariate analytic and machine learning (ML) approaches have been increasingly used in CP research. This paper aims to identify such multivariate studies and provide an overview of this relatively young field. Studies reviewed in this paper have demonstrated that multivariate analytic methods are useful in identification of risk factors, detection of CP, movement assessment for CP prediction, and outcome assessment, and ML approaches have made it possible to automatically identify movement impairments in high-risk infants. In addition, outcome predictors for surgical treatments have been identified by multivariate outcome studies. To make the multivariate and ML approaches useful in clinical settings, further research with large samples is needed to verify and improve these multivariate methods in risk factor identification, CP detection, movement assessment, and outcome evaluation or prediction. As multivariate analysis, ML and data processing technologies advance in the era of Big Data of this century, it is expected that multivariate analysis and ML will play a bigger role in improving the diagnosis and treatment of CP to reduce mortality and morbidity rates, and enhance patient care for children with CP. PMID:29312134

  9. Multivariate Analysis and Machine Learning in Cerebral Palsy Research.

    PubMed

    Zhang, Jing

    2017-01-01

    Cerebral palsy (CP), a common pediatric movement disorder, causes the most severe physical disability in children. Early diagnosis in high-risk infants is critical for early intervention and possible early recovery. In recent years, multivariate analytic and machine learning (ML) approaches have been increasingly used in CP research. This paper aims to identify such multivariate studies and provide an overview of this relatively young field. Studies reviewed in this paper have demonstrated that multivariate analytic methods are useful in identification of risk factors, detection of CP, movement assessment for CP prediction, and outcome assessment, and ML approaches have made it possible to automatically identify movement impairments in high-risk infants. In addition, outcome predictors for surgical treatments have been identified by multivariate outcome studies. To make the multivariate and ML approaches useful in clinical settings, further research with large samples is needed to verify and improve these multivariate methods in risk factor identification, CP detection, movement assessment, and outcome evaluation or prediction. As multivariate analysis, ML and data processing technologies advance in the era of Big Data of this century, it is expected that multivariate analysis and ML will play a bigger role in improving the diagnosis and treatment of CP to reduce mortality and morbidity rates, and enhance patient care for children with CP.

  10. ClimateNet: A Machine Learning dataset for Climate Science Research

    NASA Astrophysics Data System (ADS)

    Prabhat, M.; Biard, J.; Ganguly, S.; Ames, S.; Kashinath, K.; Kim, S. K.; Kahou, S.; Maharaj, T.; Beckham, C.; O'Brien, T. A.; Wehner, M. F.; Williams, D. N.; Kunkel, K.; Collins, W. D.

    2017-12-01

    Deep Learning techniques have revolutionized commercial applications in computer vision, speech recognition, and control systems. A key enabler of these developments was the creation of ImageNet, a curated, labeled dataset that allowed multiple research groups around the world to develop methods, benchmark performance, and compete with each other. The success of Deep Learning can be largely attributed to the broad availability of this dataset. Our empirical investigations have revealed that Deep Learning is similarly poised to benefit the task of pattern detection in climate science. Unfortunately, labeled datasets, a key prerequisite for training, are hard to find. Individual research groups are typically interested in specialized weather patterns, making it hard to unify and share datasets across groups and institutions. In this work, we propose ClimateNet: a dataset that provides labeled instances of extreme weather patterns, as well as associated raw fields in model and observational output. We develop a schema in NetCDF to enumerate weather pattern classes/types and to store bounding boxes and pixel masks. We are also working on a TensorFlow implementation to natively import such NetCDF datasets, and are providing a reference convolutional architecture for binary classification tasks. Our hope is that researchers in climate science, as well as in ML/DL, will be able to use (and extend) ClimateNet to make rapid progress in the application of Deep Learning to climate science research.

  11. Pre-trained convolutional neural networks as feature extractors toward improved malaria parasite detection in thin blood smear images.

    PubMed

    Rajaraman, Sivaramakrishnan; Antani, Sameer K; Poostchi, Mahdieh; Silamut, Kamolrat; Hossain, Md A; Maude, Richard J; Jaeger, Stefan; Thoma, George R

    2018-01-01

    Malaria is a blood disease caused by Plasmodium parasites transmitted through the bite of the female Anopheles mosquito. Microscopists commonly examine thick and thin blood smears to diagnose disease and compute parasitemia. However, their accuracy depends on smear quality and on expertise in classifying and counting parasitized and uninfected cells. Such an examination could be arduous for large-scale diagnoses, resulting in poor quality. State-of-the-art image-analysis-based computer-aided diagnosis (CADx) methods that apply machine learning (ML) techniques with hand-engineered features to microscopic images of the smears demand expertise in analyzing morphological, textural, and positional variations of the region of interest (ROI). In contrast, Convolutional Neural Networks (CNNs), a class of deep learning (DL) models, promise highly scalable and superior results with end-to-end feature extraction and classification. Automated malaria screening using DL techniques could therefore serve as an effective diagnostic aid. In this study, we evaluate the performance of pre-trained CNN-based DL models as feature extractors toward classifying parasitized and uninfected cells to aid in improved disease screening. We experimentally determine the optimal model layers for feature extraction from the underlying data. Statistical validation of the results demonstrates the use of pre-trained CNNs as a promising tool for feature extraction for this purpose.
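
    The pattern the study relies on, a frozen convolutional feature extractor followed by a simple classifier, can be illustrated with a toy sketch. Here a fixed random filter bank stands in for pre-trained CNN layers, and the "smear" images are synthetic; none of this is the authors' pipeline.

```python
# Toy sketch of the "frozen network as feature extractor" pattern:
# a fixed (here: random, seeded) convolution bank maps images to pooled
# features, and a simple classifier is trained on top. A real pipeline
# would reuse actual pre-trained CNN layers instead of random filters.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def extract_features(images, filters):
    """Valid-mode 2D correlation with each filter, then ReLU + global average pool."""
    feats = []
    for img in images:
        row = []
        for f in filters:
            h, w = f.shape
            out = np.array([[np.sum(img[i:i + h, j:j + w] * f)
                             for j in range(img.shape[1] - w + 1)]
                            for i in range(img.shape[0] - h + 1)])
            row.append(np.maximum(out, 0).mean())
        feats.append(row)
    return np.array(feats)

filters = rng.normal(size=(8, 3, 3))              # frozen filter bank
# Synthetic "smear" images: class 1 contains a bright blob (a stand-in parasite)
images, labels = [], []
for k in range(120):
    img = rng.normal(0, 0.3, size=(16, 16))
    if k % 2:
        img[5:9, 5:9] += 2.0
    images.append(img)
    labels.append(k % 2)

X = extract_features(images, filters)
clf = LogisticRegression(max_iter=1000).fit(X[:80], labels[:80])
acc = clf.score(X[80:], labels[80:])
print(f"held-out accuracy: {acc:.2f}")
```

    Only the small classifier on top is trained; the extractor stays fixed, which is what makes the approach practical when labeled medical images are scarce.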

  12. Prediction of revascularization after myocardial perfusion SPECT by machine learning in a large population.

    PubMed

    Arsanjani, Reza; Dey, Damini; Khachatryan, Tigran; Shalev, Aryeh; Hayes, Sean W; Fish, Mathews; Nakanishi, Rine; Germano, Guido; Berman, Daniel S; Slomka, Piotr

    2015-10-01

    We aimed to investigate whether early revascularization in patients with suspected coronary artery disease can be effectively predicted by integrating clinical data and quantitative image features derived from myocardial perfusion SPECT (MPS) with a machine learning (ML) approach. 713 rest (201)Thallium/stress (99m)Technetium MPS studies with correlating invasive angiography, including 372 revascularization events (275 PCI/97 CABG) within 90 days after MPS (91% within 30 days), were considered. Transient ischemic dilation, stress combined supine/prone total perfusion deficit (TPD), supine rest and stress TPD, exercise ejection fraction, and end-systolic volume, along with clinical parameters including patient gender, history of hypertension and diabetes mellitus, ST-depression on baseline ECG, ECG and clinical response during stress, and post-ECG probability, were combined by a boosted ensemble ML algorithm (LogitBoost) to predict revascularization events. These features were selected by an automated feature selection algorithm from all available clinical and quantitative data (33 parameters). Tenfold cross-validation was utilized to train and test the prediction model. The prediction of revascularization by the ML algorithm was compared to standalone measures of perfusion and to visual analysis by two experienced readers utilizing all imaging, quantitative, and clinical data. The sensitivity of ML (73.6% ± 4.3%) for prediction of revascularization was similar to that of one reader (73.9% ± 4.6%) and of standalone measures of perfusion (75.5% ± 4.5%). The specificity of ML (74.7% ± 4.2%) was also better than that of both expert readers (67.2% ± 4.9% and 66.0% ± 5.0%, P < .05), but similar to that of ischemic TPD (68.3% ± 4.9%, P < .05). The receiver operating characteristic area under the curve for ML (0.81 ± 0.02) was similar to that of reader 1 (0.81 ± 0.02) but superior to reader 2 (0.72 ± 0.02, P < .01) and to the standalone measure of perfusion (0.77 ± 0.02, P < .01). The ML approach is comparable or superior to experienced readers in the prediction of early revascularization after MPS, and is significantly better than standalone measures of perfusion derived from MPS.
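
    The overall pipeline shape, automated feature selection over a wider variable pool feeding a boosted ensemble scored by 10-fold cross-validation, can be sketched as below. GradientBoostingClassifier stands in for LogitBoost (which scikit-learn does not ship), and the 33-variable dataset is synthetic.

```python
# Sketch of the study's pipeline shape: univariate feature selection over
# 33 candidate variables, then a boosted ensemble evaluated with 10-fold
# cross-validation. All data here are invented; GradientBoostingClassifier
# is a stand-in for the LogitBoost algorithm used in the paper.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n, p = 713, 33                        # study size: 713 cases, 33 parameters
X = rng.normal(size=(n, p))
# Only a handful of variables actually carry signal
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=1.0, size=n) > 0).astype(int)

pipe = make_pipeline(SelectKBest(f_classif, k=10),
                     GradientBoostingClassifier(random_state=0))
scores = cross_val_score(pipe, X, y, cv=10)
print(f"10-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

    Putting the selector inside the pipeline matters: it is refit within each fold, so the held-out fold never influences which features are chosen.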

  13. Semi-local machine-learned kinetic energy density functional with third-order gradients of electron density

    NASA Astrophysics Data System (ADS)

    Seino, Junji; Kageyama, Ryo; Fujinami, Mikito; Ikabata, Yasuhiro; Nakai, Hiromi

    2018-06-01

    A semi-local kinetic energy density functional (KEDF) was constructed based on machine learning (ML). The present scheme adopts electron densities and their gradients up to third-order as the explanatory variables for ML and the Kohn-Sham (KS) kinetic energy density as the response variable in atoms and molecules. Numerical assessments of the present scheme were performed in atomic and molecular systems, including first- and second-period elements. The results of 37 conventional KEDFs with explicit formulae were also compared with those of the ML KEDF with an implicit formula. The inclusion of the higher order gradients reduces the deviation of the total kinetic energies from the KS calculations in a stepwise manner. Furthermore, our scheme with the third-order gradient resulted in the closest kinetic energies to the KS calculations out of the presented functionals.
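
    The core regression task, mapping the density and its gradients at each point to a kinetic energy density, can be illustrated with a one-dimensional toy. The "exact" target below is a made-up analytic form (a Thomas-Fermi-like term plus a gradient correction), not the Kohn-Sham data used in the paper.

```python
# Toy version of the paper's idea: regress a kinetic energy density on the
# electron density and its gradient. The target here is an invented
# analytic form; the paper instead fits Kohn-Sham kinetic energy densities
# with gradients up to third order.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 400)
rho = np.exp(-x**2) + 0.5 * np.exp(-(x - 1.5)**2)   # toy 1D electron density
grad = np.gradient(rho, x)
# Made-up target: Thomas-Fermi-like term + von Weizsaecker-like gradient term
tau = rho**(5 / 3) + 0.1 * grad**2 / np.maximum(rho, 1e-8)

X = np.column_stack([rho, np.abs(grad)])            # semi-local descriptors
idx = rng.permutation(len(x))
train, test = idx[:300], idx[300:]
model = KernelRidge(kernel="rbf", alpha=1e-4, gamma=10.0).fit(X[train], tau[train])
r2 = model.score(X[test], tau[test])
print(f"test r2 = {r2:.3f}")
```

    Because the functional is fit pointwise from local descriptors, the learned model stays semi-local, mirroring the structure of the KEDF in the abstract; adding higher-order gradient features enlarges the descriptor vector in the same way.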

  14. Detection of Hepatitis A Virus by the Nucleic Acid Sequence-Based Amplification Technique and Comparison with Reverse Transcription-PCR

    PubMed Central

    Jean, Julie; Blais, Burton; Darveau, André; Fliss, Ismaïl

    2001-01-01

    A nucleic acid sequence-based amplification (NASBA) technique for the detection of hepatitis A virus (HAV) in foods was developed and compared to the traditional reverse transcription (RT)-PCR technique. Oligonucleotide primers targeting the VP1 and VP2 genes encoding the major HAV capsid proteins were used for the amplification of viral RNA in an isothermal process resulting in the accumulation of RNA amplicons. Amplicons were detected by hybridization with a digoxigenin-labeled oligonucleotide probe in a dot blot assay format. Using the NASBA, as little as 0.4 ng of target RNA/ml was detected, compared to 4 ng/ml for RT-PCR. When crude HAV viral lysate was used, a detection limit of 2 PFU (4 × 10² PFU/ml) was obtained with NASBA, compared to 50 PFU (1 × 10⁴ PFU/ml) obtained with RT-PCR. No interference was encountered in the amplification of HAV RNA in the presence of excess nontarget RNA or DNA. The NASBA system successfully detected HAV recovered from experimentally inoculated samples of waste water, lettuce, and blueberries. Compared to RT-PCR and other amplification techniques, the NASBA system offers several advantages in terms of sensitivity, rapidity, and simplicity. This technique should be readily adaptable for detection of other RNA viruses in both foods and clinical samples. PMID:11722911

  15. Detection of hepatitis A virus by the nucleic acid sequence-based amplification technique and comparison with reverse transcription-PCR.

    PubMed

    Jean, J; Blais, B; Darveau, A; Fliss, I

    2001-12-01

    A nucleic acid sequence-based amplification (NASBA) technique for the detection of hepatitis A virus (HAV) in foods was developed and compared to the traditional reverse transcription (RT)-PCR technique. Oligonucleotide primers targeting the VP1 and VP2 genes encoding the major HAV capsid proteins were used for the amplification of viral RNA in an isothermal process resulting in the accumulation of RNA amplicons. Amplicons were detected by hybridization with a digoxigenin-labeled oligonucleotide probe in a dot blot assay format. Using the NASBA, as little as 0.4 ng of target RNA/ml was detected, compared to 4 ng/ml for RT-PCR. When crude HAV viral lysate was used, a detection limit of 2 PFU (4 × 10² PFU/ml) was obtained with NASBA, compared to 50 PFU (1 × 10⁴ PFU/ml) obtained with RT-PCR. No interference was encountered in the amplification of HAV RNA in the presence of excess nontarget RNA or DNA. The NASBA system successfully detected HAV recovered from experimentally inoculated samples of waste water, lettuce, and blueberries. Compared to RT-PCR and other amplification techniques, the NASBA system offers several advantages in terms of sensitivity, rapidity, and simplicity. This technique should be readily adaptable for detection of other RNA viruses in both foods and clinical samples.

  16. Techniques for Type I Collagen Organization

    NASA Astrophysics Data System (ADS)

    Anderson-Jackson, LaTecia Diamond

    Tissue engineering is a process in which cells, engineering, and materials methods are used in combination to improve biological functions. The purpose of tissue engineering is to develop alternative solutions to treat or cure tissues and organs that have been severely altered or damaged by diseases, congenital defects, trauma, or cancer. One of the most common and most promising biological materials for developing tissue engineering scaffolds is Type I collagen. A major challenge in biomedical research is aligning Type I collagen to mimic biological structures, such as ligaments, tendons, bones, and other hierarchically aligned structures within the human body. The intent of this research is to examine possible techniques for organizing Type I collagen and to assess which of the techniques is effective for potential biological applications. The techniques used in this research to organize collagen are soft lithography with solution-assisted sonication embossing, directional freezing, and direct poling. The final concentration used for both soft lithography with solution-assisted sonication embossing and direct poling was 1 mg/ml, whereas for directional freezing the final concentration varied among 4 mg/ml, 2 mg/ml, and 1 mg/ml. These techniques were characterized using the Atomic Force Microscope (AFM) and the Helium Ion Microscope (HIM). In this study, we found that, of the three techniques, soft lithography and directional freezing were successful in organizing collagen into a particular pattern, but not in aligning it. We concluded that alignment may depend on the pH of the collagen and the amount of acetic acid used in the collagen solution. However, experiments are still being conducted to optimize all three techniques to align collagen into a unidirectional arrangement.

  17. Evaluation of a Machine-Learning Algorithm for Treatment Planning in Prostate Low-Dose-Rate Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicolae, Alexandru; Department of Medical Physics, Odette Cancer Center, Sunnybrook Health Sciences Centre, Toronto, Ontario; Morton, Gerard

    Purpose: This work presents the application of a machine learning (ML) algorithm to automatically generate high-quality, prostate low-dose-rate (LDR) brachytherapy treatment plans. The ML algorithm can mimic characteristics of preoperative treatment plans deemed clinically acceptable by brachytherapists. The planning efficiency, dosimetry, and quality (as assessed by experts) of preoperative plans generated with an ML planning approach were retrospectively evaluated in this study. Methods and Materials: Preimplantation and postimplantation treatment plans were extracted from 100 high-quality LDR treatments and stored within a training database. The ML training algorithm matches similar features from a new LDR case to those within the training database to rapidly obtain an initial seed distribution; plans were then further fine-tuned using stochastic optimization. Preimplantation treatment plans generated by the ML algorithm were compared with brachytherapist (BT) treatment plans in terms of planning time (Wilcoxon rank sum, α = 0.05) and dosimetry (1-way analysis of variance, α = 0.05). Qualitative preimplantation plan quality was evaluated by expert LDR radiation oncologists using a Likert scale questionnaire. Results: The average planning time for the ML approach was 0.84 ± 0.57 minutes, compared with 17.88 ± 8.76 minutes for the expert planner (P=.020). Preimplantation plans were dosimetrically equivalent to the BT plans; the average prostate V150% was 4% lower for ML plans (P=.002), although the difference was not clinically significant. Respondents ranked the ML-generated plans as equivalent to expert BT treatment plans in terms of target coverage, normal tissue avoidance, implant confidence, and the need for plan modifications. Respondents had difficulty differentiating between plans generated by a human and those generated by the ML algorithm. Conclusions: Prostate LDR preimplantation treatment plans of equivalent quality to plans created by brachytherapists can be rapidly generated using ML. The adoption of ML in the brachytherapy workflow is expected to improve LDR treatment plan uniformity while reducing planning time and resources.
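
    The two-stage approach described (match a new case to the most similar training case for an initial seed distribution, then fine-tune by stochastic optimization) can be sketched schematically. The features, plans, and cost function below are all invented placeholders for the clinical quantities.

```python
# Schematic of the two-stage ML planning approach: (1) nearest-neighbor
# retrieval of a stored plan from a training database, (2) stochastic
# fine-tuning against a cost function. Everything here is an invented
# stand-in for the real geometric features, seed plans, and dosimetric cost.
import numpy as np

rng = np.random.default_rng(0)

# Training "database": case features -> stored seed-position plans
db_features = rng.normal(size=(100, 6))    # e.g. prostate geometry descriptors
db_plans = rng.normal(size=(100, 20))      # e.g. flattened seed coordinates

def cost(plan, target):
    """Toy dosimetric cost: squared distance of the plan to a case-specific target."""
    return np.sum((plan - target)**2)

def plan_new_case(features, target, n_iters=500, step=0.1):
    # Stage 1: retrieve the plan of the most similar training case
    nearest = np.argmin(np.linalg.norm(db_features - features, axis=1))
    plan = db_plans[nearest].copy()
    # Stage 2: stochastic fine-tuning (accept-if-better random search)
    for _ in range(n_iters):
        cand = plan + rng.normal(scale=step, size=plan.shape)
        if cost(cand, target) < cost(plan, target):
            plan = cand
    return plan

features = rng.normal(size=6)
target = rng.normal(size=20)               # toy "clinically ideal" plan
plan = plan_new_case(features, target)
print(f"final cost: {cost(plan, target):.3f}")
```

    The retrieval step explains the sub-minute planning times reported: most of the work is a database lookup, with optimization only polishing an already-plausible plan.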

  18. [HoLEP learning curve: Toward a standardised formation and a team strategy].

    PubMed

    Baron, M; Nouhaud, F-X; Delcourt, C; Grise, P; Pfister, C; Cornu, J-N; Sibert, L

    2016-09-01

    Holmium laser enucleation of the prostate (HoLEP) is renowned for the difficulty of its learning curve. Our aim was to evaluate the value of a three-step tutorial in the HoLEP learning curve in a university center. This is a retrospective, single-center study of the first 82 procedures performed consecutively by the same operator, with proctoring early in the experience and again after 40 procedures. For all patients, the following were recorded: enucleation efficiency (g/min), morcellation efficiency (g/min), percentage of enucleated tissue (enucleated tissue/adenoma weight estimated by ultrasonography, g/g), perioperative morbidity (Clavien), length of hospital stay, length of urinary drainage, and short- and middle-term functional outcomes (Qmax, post-void residual volume [PVR], QoL scores and IPSS at 3 and 6 months). Enucleation and morcellation efficiency were significantly higher after the second proctoring (0.87 vs 0.44 g/min, P<0.0001 and 4.2 vs 3.37 g/min, P=0.038, respectively), as was the prostatic volume (43.5 vs 68.1 mL, P=0.0001). The percentage of enucleated tissue was higher in the second group; however, the difference was not significant (69.5% vs 80.4%, P=0.03). Per- and postoperative complications, length of hospital stay, length of urinary drainage, and functional results at 3 and 6 months were not significantly different. The learning curve did not interfere with functional results. The second proctoring was essential for grasping the technique. These data underline the need for pedagogical reflection in order to build a standardized training program for HoLEP. Level of evidence: 4. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  19. STAR-GALAXY CLASSIFICATION IN MULTI-BAND OPTICAL IMAGING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fadely, Ross; Willman, Beth; Hogg, David W.

    2012-11-20

    Ground-based optical surveys such as PanSTARRS, DES, and LSST will produce large catalogs to limiting magnitudes of r ≳ 24. Star-galaxy separation poses a major challenge to such surveys because galaxies, even very compact galaxies, outnumber halo stars at these depths. We investigate photometric classification techniques on stars and galaxies with intrinsic FWHM < 0.2 arcsec. We consider unsupervised spectral energy distribution template fitting and supervised, data-driven support vector machines (SVMs). For template fitting, we use a maximum likelihood (ML) method and a new hierarchical Bayesian (HB) method, which learns the prior distribution of template probabilities from the data. SVM requires training data to classify unknown sources; ML and HB do not. We consider (1) a best-case scenario (SVM_best) where the training data are (unrealistically) a random sampling of the data in both signal-to-noise and demographics and (2) a more realistic scenario where training is done on higher signal-to-noise data (SVM_real) at brighter apparent magnitudes. Testing with COSMOS ugriz data, we find that HB outperforms ML, delivering ≈80% completeness, with purity of ≈60%-90% for both stars and galaxies. We find that no algorithm delivers perfect performance and that studies of metal-poor main-sequence turnoff stars may be challenged by poor star-galaxy separation. Using the receiver operating characteristic curve, we find a best-to-worst ranking of SVM_best, HB, ML, and SVM_real. We conclude, therefore, that a well-trained SVM will outperform template-fitting methods. However, a normally trained SVM performs worse. Thus, HB template fitting may prove to be the optimal classification method in future surveys.
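
    Note that in this record "ML" means maximum likelihood, not machine learning. The ML template-fitting step can be illustrated with a minimal sketch: each source's fluxes are compared to star and galaxy templates via chi-squared, profiling out the overall amplitude. The two five-band "templates" and the noise level below are invented.

```python
# Minimal sketch of maximum-likelihood (ML) template fitting for
# star-galaxy separation: compute chi^2 against each SED template (with
# the best-fit amplitude profiled out analytically) and pick the better
# fit. The toy ugriz-like templates and noise are invented.
import numpy as np

rng = np.random.default_rng(3)

star_template = np.array([1.0, 0.8, 0.6, 0.5, 0.45])   # toy 5-band fluxes
gal_template  = np.array([0.4, 0.6, 0.8, 1.0, 1.1])

def classify_ml(flux, sigma):
    """Return 'star' or 'galaxy' by the lower chi^2 over both templates."""
    chi2s = []
    for tmpl in (star_template, gal_template):
        # Analytic best-fit amplitude: minimizes sum(((f - a*t)/sigma)^2)
        a = np.sum(flux * tmpl / sigma**2) / np.sum(tmpl**2 / sigma**2)
        chi2s.append(np.sum(((flux - a * tmpl) / sigma)**2))
    return "star" if chi2s[0] < chi2s[1] else "galaxy"

sigma = np.full(5, 0.05)
n_correct = 0
for truth, tmpl in [("star", star_template), ("galaxy", gal_template)] * 50:
    flux = 2.0 * tmpl + rng.normal(0, sigma)           # noisy simulated source
    if classify_ml(flux, sigma) == truth:
        n_correct += 1
print(f"accuracy on 100 noisy sources: {n_correct / 100:.2f}")
```

    The HB method in the abstract goes one step further: instead of a hard minimum-chi-squared decision, it weights the templates by prior probabilities learned from the whole catalog.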

  20. A simple thermometric technique for reaction-rate determination of inorganic species, based on the iodide-catalysed cerium(IV)-arsenic(III) reaction.

    PubMed

    Grases, F; Forteza, R; March, J G; Cerda, V

    1985-02-01

    A very simple reaction-rate thermometric technique is used for the determination of iodide (5-20 ng/ml), based on its catalytic action on the cerium(IV)-arsenic(III) reaction, and for the determination of mercury(II) (1.5-10 ng/ml) and silver(I) (2-10 ng/ml), based on their inhibitory effect on this reaction. The reaction is followed by measuring the rate of temperature increase. The method suffers from very few interferences and is applied to the determination of iodide in biological and inorganic samples, and of Hg(II) and Ag(I) in pharmaceutical products.

  1. A strategy to apply machine learning to small datasets in materials science

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Ling, Chen

    2018-12-01

    There is growing interest in applying machine learning techniques in materials science research. However, although it is recognized that materials datasets are typically smaller and sometimes more diverse than those in other fields, the influence of the availability of materials data on the training of machine learning models has not yet been studied, which hinders the establishment of accurate predictive rules from small materials datasets. Here we analyzed the fundamental interplay between the availability of materials data and the predictive capability of machine learning models. Instead of affecting model precision directly, the effect of data size is mediated by the degrees of freedom (DoF) of the model, resulting in an association between precision and DoF. The appearance of this precision-DoF association signals underfitting and is characterized by a large prediction bias, which in turn restricts accurate prediction in unknown domains. We propose incorporating a crude estimation of the property into the feature space when establishing ML models on small materials datasets, which increases the accuracy of prediction without the cost of higher DoF. In three case studies, predicting the band gap of binary semiconductors, the lattice thermal conductivity, and the elastic properties of zeolites, integrating the crude estimation effectively boosted the predictive capability of machine learning models to state-of-the-art levels, demonstrating the generality of the proposed strategy for constructing accurate machine learning models from small materials datasets.
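The core idea, appending a cheap, imperfect estimate of the target property to the feature space of a small dataset, can be sketched as follows. The descriptors, the "true" property, and the crude estimate are all synthetic stand-ins, not the paper's materials data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 60                                      # deliberately small, as in materials datasets
X = rng.normal(size=(n, 4))                 # composition/structure descriptors
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n)   # "true" property

crude = X[:, 0] ** 2                        # cheap, imperfect estimate of the property
X_aug = np.column_stack([X, crude])         # crude estimate added to the feature space

rf = RandomForestRegressor(n_estimators=200, random_state=0)
base = cross_val_score(rf, X, y, cv=5, scoring="r2").mean()
aug = cross_val_score(rf, X_aug, y, cv=5, scoring="r2").mean()
print(f"mean CV R^2 without crude feature: {base:.2f}")
print(f"mean CV R^2 with crude feature:    {aug:.2f}")
```

Because the crude estimate is supplied as an input feature rather than as extra model structure, the model's DoF stay fixed while the learning problem becomes easier, which is the mechanism the abstract describes.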

  2. The effects of deep network topology on mortality prediction.

    PubMed

    Hao Du; Ghassemi, Mohammad M; Mengling Feng

    2016-08-01

    Deep learning has achieved remarkable results in the areas of computer vision, speech recognition, natural language processing and, most recently, even playing Go. The application of deep learning to problems in healthcare, however, has gained attention only in recent years, and its ultimate place at the bedside remains a topic of skeptical discussion. While there is growing academic interest in the application of Machine Learning (ML) techniques to clinical problems, many in the clinical community see little incentive to upgrade from simpler methods, such as logistic regression, to deep learning. Logistic regression, after all, provides odds ratios, p-values and confidence intervals that allow for ease of interpretation, while deep nets are often seen as 'black boxes' that are difficult to understand and, as of yet, have not demonstrated performance levels far exceeding their simpler counterparts. If deep learning is ever to take a place at the bedside, it will require studies that (1) showcase the performance of deep-learning methods relative to other approaches and (2) interpret the relationships between network structure, model performance, features and outcomes. We have chosen these two requirements as the goals of this study. In our investigation, we utilized a publicly available EMR dataset of over 32,000 intensive care unit patients and trained a Deep Belief Network (DBN) to predict patient mortality at discharge. Utilizing an evolutionary algorithm, we demonstrate automated topology selection for DBNs. We demonstrate that, with the correct topology selection, DBNs can achieve better prediction performance than several benchmarking methods.
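Evolutionary topology selection can be sketched at toy scale: treat the tuple of hidden-layer sizes as the genome, cross-validated accuracy as the fitness, and evolve a small population by selection and mutation. This uses a plain MLP on synthetic data, not the paper's DBN or EMR cohort; the population size, mutation step, and generation count are arbitrary choices.

```python
import random
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

random.seed(0)
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

def fitness(topology):
    # fitness of a candidate topology = cross-validated accuracy
    clf = MLPClassifier(hidden_layer_sizes=topology, max_iter=300, random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

def mutate(topology):
    # perturb each layer width, keeping it at least 2 units
    return tuple(max(2, s + random.choice([-4, 0, 4])) for s in topology)

# random initial population of 1- or 2-layer topologies
population = [(random.randrange(4, 33),) * random.randrange(1, 3) for _ in range(4)]
for _ in range(2):                                   # a few generations
    scored = sorted(population, key=fitness, reverse=True)
    population = scored[:2] + [mutate(random.choice(scored[:2])) for _ in range(2)]

best = max(population, key=fitness)
print("best topology:", best, "cv accuracy:", round(fitness(best), 3))
```

A real run would use a larger population, more generations, and a genome that also encodes depth and regularization, but the select-then-mutate loop is the same.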

  3. Isoflurane waste anesthetic gas concentrations associated with the open-drop method.

    PubMed

    Taylor, Douglas K; Mook, Deborah M

    2009-01-01

    The open-drop technique is used frequently for anesthetic delivery to small rodents. Operator exposure to waste anesthetic gas (WAG) is a potential occupational hazard if this method is used without WAG scavenging. This study was conducted to determine whether administration of isoflurane by the open-drop technique without exposure controls generates significant WAG concentrations. We placed 0.1, 0.2, or 0.3 ml of liquid isoflurane into screw-top 500 or 1000 ml glass jars. WAG concentration was measured at the opening of the container and at 20 and 40 cm from the opening, distances at which users would likely operate, at 1, 2, or 3 min. WAG was measured by using a portable infrared gas analyzer. Mean WAG concentrations at the vessel opening were as high as 662 +/- 168 ppm with a 500 ml jar and 122 +/- 87 ppm with a 1000 ml jar. At operator levels, WAG concentrations were always at or near 0 ppm. For measurements made at the vessel opening, time was the only factor that significantly affected WAG concentration when using the 500 ml jar. Neither time nor liquid volume was a significant factor when using the 1000 ml jar. At all liquid volumes and time points, the WAG concentration associated with the 500 ml container was marginally to significantly greater than that for the 1000 ml jar.

  4. Learners' Approaches to Solving Mathematical Tasks: Does Specialisation Matter?

    ERIC Educational Resources Information Center

    Machaba, France; Mwakapenda, Willy

    2016-01-01

    This article emerged from an analysis of learners' responses to a task presented to learners studying Mathematics and Mathematical Literacy (ML) in Gauteng, South Africa. Officially, Mathematics and ML are two separate learning areas. Learners from Grade 10 onwards are supposed to take either one or the other, but not both. This means that by…

  5. Primary Fat Grafting to the Pectoralis Muscle during Latissimus Dorsi Breast Reconstruction.

    PubMed

    Niddam, Jeremy; Vidal, Luciano; Hersant, Barbara; Meningaud, Jean Paul

    2016-11-01

    Latissimus dorsi flap is one of the best options for immediate and delayed breast reconstruction. However, this technique is limited by the tissue volume provided by the flap. To improve breast volume while reducing complications, fat grafting is now very often used in addition to the latissimus dorsi flap. To the best of our knowledge, fat grafting has always been performed as a second-line surgery, at least a few months after the flap procedure. We aimed to report our experience with a combined breast reconstruction technique associating a musculocutaneous latissimus dorsi flap with intrapectoral lipofilling for totally autologous breast reconstruction. Between September 2014 and January 2015, 20 patients underwent this technique for unilateral autologous breast reconstruction (14 delayed and 6 immediate breast reconstructions). A mean harvested fat volume of 278 ml (range: 190-350 ml) and a mean injected fat volume of 228 ml (range: 170-280 ml) were used. None of the patients experienced complications such as flap necrosis, breast skin necrosis, hematoma, or infection. One patient developed a seroma, which was treated with 3 drainage punctures. Only 2 patients underwent a delayed fat grafting procedure. Totally autologous breast reconstruction combining a latissimus dorsi flap and intrapectoral fat grafting in the same procedure is a new technique allowing increased breast volume in a single surgery.

  6. Aqueous extracts from asparagus stems prevent memory impairments in scopolamine-treated mice.

    PubMed

    Sui, Zifang; Qi, Ce; Huang, Yunxiang; Ma, Shufeng; Wang, Xinguo; Le, Guowei; Sun, Jin

    2017-04-19

    Aqueous extracts from Asparagus officinalis L. stems (AEAS) are rich in polysaccharides, gamma-amino butyric acid (GABA), and steroidal saponin. This study was designed to investigate the effects of AEAS on learning, memory, and acetylcholinesterase-related activity in a scopolamine-induced model of amnesia. Sixty ICR mice were randomly divided into 6 groups (n = 10): the control group (CT), scopolamine group (SC), donepezil group (DON), and low, medium, and high dose groups of AEAS (LS, MS, HS: 1.6, 8, and 16 mL/kg). The results showed that the 8 mL/kg dose of AEAS significantly reversed scopolamine-induced cognitive impairments in mice in the novel object recognition test (P < 0.05) and the Y-maze test (P < 0.05), and also improved the latency to escape in the Morris water maze test (P < 0.05). Moreover, it significantly increased acetylcholine and inhibited acetylcholinesterase activity in the hippocampus, which was directly related to the reduction in learning and memory impairments. It also reversed the scopolamine-induced reduction in hippocampal brain-derived neurotrophic factor (BDNF) and cAMP response element-binding protein (CREB) mRNA expression. AEAS protected against scopolamine-induced memory deficits. In conclusion, AEAS protected learning and memory function in mice by enhancing the activity of the cholinergic nervous system and increasing BDNF and CREB expression. This suggests that AEAS has the potential to prevent cognitive impairments in age-related diseases, such as Alzheimer's disease.

  7. Ultra-Sensitive Biological Detection via Nanoparticle-Based Magnetically Amplified Surface Plasmon Resonance (Mag-SPR) Techniques

    DTIC Science & Technology

    2008-10-08

    [Indexed excerpt only: the available text consists of figure-caption fragments describing carbon nanostructures grown from ferrocene and xylene precursors at flow rates of 0.195-1.95 ml/hr, characterized by HR-TEM imaging and STEM EDS analysis; the full abstract is not recoverable.]

  8. Comprehensive assessment and performance improvement of effector protein predictors for bacterial secretion systems III, IV and VI.

    PubMed

    An, Yi; Wang, Jiawei; Li, Chen; Leier, André; Marquez-Lago, Tatiana; Wilksch, Jonathan; Zhang, Yang; Webb, Geoffrey I; Song, Jiangning; Lithgow, Trevor

    2018-01-01

    Bacterial effector proteins secreted by various protein secretion systems play crucial roles in host-pathogen interactions. In this context, computational tools capable of accurately predicting effector proteins of the various types of bacterial secretion systems are highly desirable. Existing computational approaches use different machine learning (ML) techniques and heterogeneous features derived from protein sequences and/or structural information. These predictors differ not only in the ML methods used but also in the curated data sets, the feature selection and the prediction performance. Here, we provide a comprehensive survey and benchmarking of currently available tools for the prediction of effector proteins of bacterial type III, IV and VI secretion systems (T3SS, T4SS and T6SS, respectively). We review core algorithms, feature selection techniques, tool availability and applicability, and evaluate prediction performance on carefully curated independent test data sets. In an effort to improve predictive performance, we constructed three ensemble models based on ML algorithms by integrating the output of all individual predictors reviewed. Our benchmarks demonstrate that these ensemble models outperform all the reviewed tools for the prediction of effector proteins of T3SS and T4SS. The webserver of the proposed ensemble methods for T3SS and T4SS effector protein prediction is freely available at http://tbooster.erc.monash.edu/index.jsp. We anticipate that this survey will serve as a useful guide for interested users and that the new ensemble predictors will stimulate research into host-pathogen relationships and inspire the development of new bioinformatics tools for predicting effector proteins of T3SS, T4SS and T6SS. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
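One common way to build an ensemble from the outputs of several base predictors, as this survey does, is stacking: feed the base predictors' scores into a simple meta-classifier. The sketch below uses simulated scores from three hypothetical predictors with different noise levels, not the actual tools benchmarked in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 500
y = rng.integers(0, 2, n)                       # 1 = effector, 0 = non-effector
# Each column: one base predictor's score, a noisy view of the true label.
scores = y[:, None] + rng.normal(0.0, 1.0, (n, 3)) * np.array([0.8, 1.0, 1.2])

Xtr, Xte, ytr, yte = train_test_split(scores, y, test_size=0.3, random_state=0)
meta = LogisticRegression().fit(Xtr, ytr)       # stacking: learn to weight the predictors

ens_acc = accuracy_score(yte, meta.predict(Xte))
single_accs = [accuracy_score(yte, (Xte[:, j] > 0.5).astype(int)) for j in range(3)]
print(f"single predictors: {[round(a, 2) for a in single_accs]}, ensemble: {ens_acc:.2f}")
```

Because the meta-classifier learns how reliable each base predictor is, the ensemble typically matches or beats the best individual tool, which mirrors the benchmarking result reported above.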

  9. Genetic and Psychosocial Predictors of Aggression: Variable Selection and Model Building With Component-Wise Gradient Boosting.

    PubMed

    Suchting, Robert; Gowin, Joshua L; Green, Charles E; Walss-Bass, Consuelo; Lane, Scott D

    2018-01-01

    Rationale: Given datasets with a large or diverse set of predictors of aggression, machine learning (ML) provides efficient tools for identifying the most salient variables and building a parsimonious statistical model. ML techniques permit efficient exploration of data, have not been widely used in aggression research, and may have utility for those seeking to predict aggressive behavior. Objectives: The present study examined predictors of aggression and constructed an optimized model using ML techniques. Predictors were derived from a dataset that included demographic, psychometric and genetic predictors, specifically FK506 binding protein 5 (FKBP5) polymorphisms, which have been shown to alter response to threatening stimuli but have not been tested as predictors of aggressive behavior in adults. Methods: The data analysis approach utilized component-wise gradient boosting and model reduction via backward elimination to (a) select variables from an initial set of 20 to build a model of trait aggression and then (b) reduce that model to maximize parsimony and generalizability. Results: From a dataset of N = 47 participants, component-wise gradient boosting selected 8 of 20 possible predictors to model the Buss-Perry Aggression Questionnaire (BPAQ) total score, with R² = 0.66. This model was simplified using backward elimination, retaining six predictors: smoking status, psychopathy (interpersonal manipulation and callous affect), childhood trauma (physical abuse and neglect), and the FKBP5_13 gene (rs1360780). The six-factor model approximated the initial eight-factor model at 99.4% of R². Conclusions: Using an inductive data science approach, the gradient boosting model identified predictors consistent with previous experimental work on aggression, specifically psychopathy and trauma exposure. Additionally, allelic variants in FKBP5 were identified for the first time, but the relatively small sample size limits the generality of the results and calls for replication. This approach has utility for the prediction of aggressive behavior, particularly in the context of large multivariate datasets.
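Component-wise (L2) gradient boosting, as implemented for example in R's mboost, fits a univariate base learner to the current residuals for every predictor at each iteration and updates only the best-fitting component, so variable selection falls out of the fitting process. A minimal NumPy sketch on synthetic data (the true model has only two active predictors; none of the study's variables are reproduced):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 100, 20
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 0.5, n)   # 2 true signals among 20

coef = np.zeros(p)
resid = y - y.mean()                              # start from the centered response
nu = 0.1                                          # learning rate (shrinkage)
for _ in range(200):
    # univariate least-squares slope for every candidate predictor
    slopes = X.T @ resid / (X ** 2).sum(axis=0)
    sse = ((resid[:, None] - X * slopes) ** 2).sum(axis=0)
    j = int(np.argmin(sse))                       # best-fitting component this round
    coef[j] += nu * slopes[j]                     # update only that component
    resid = resid - nu * slopes[j] * X[:, j]

selected = np.nonzero(np.abs(coef) > 1e-3)[0]
print("selected predictors:", selected, "coefs:", coef[selected].round(2))
```

Predictors never chosen keep a zero coefficient, giving the sparse model that backward elimination then prunes further in the study's pipeline.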

  10. Testing a machine-learning algorithm to predict the persistence and severity of major depressive disorder from baseline self-reports.

    PubMed

    Kessler, R C; van Loo, H M; Wardenaar, K J; Bossarte, R M; Brenner, L A; Cai, T; Ebert, D D; Hwang, I; Li, J; de Jonge, P; Nierenberg, A A; Petukhova, M V; Rosellini, A J; Sampson, N A; Schoevers, R A; Wilcox, M A; Zaslavsky, A M

    2016-10-01

    Heterogeneity of major depressive disorder (MDD) illness course complicates clinical decision-making. Although efforts to use symptom profiles or biomarkers to develop clinically useful prognostic subtypes have had limited success, a recent report showed that machine-learning (ML) models developed from self-reports about incident episode characteristics and comorbidities among respondents with lifetime MDD in the World Health Organization World Mental Health (WMH) Surveys predicted MDD persistence, chronicity and severity with good accuracy. We report results of model validation in an independent prospective national household sample of 1056 respondents with lifetime MDD at baseline. The WMH ML models were applied to these baseline data to generate predicted outcome scores that were compared with observed scores assessed 10-12 years after baseline. ML model prediction accuracy was also compared with that of conventional logistic regression models. Area under the receiver operating characteristic curve based on ML (0.63 for high chronicity and 0.71-0.76 for the other prospective outcomes) was consistently higher than for the logistic models (0.62-0.70) despite the latter models including more predictors. A total of 34.6-38.1% of respondents with subsequent high persistence-chronicity and 40.8-55.8% with the severity indicators were in the top 20% of the baseline ML-predicted risk distribution, while only 0.9% of respondents with subsequent hospitalizations and 1.5% with suicide attempts were in the lowest 20% of the ML-predicted risk distribution. These results confirm that clinically useful MDD risk-stratification models can be generated from baseline patient self-reports and that ML methods improve on conventional methods in developing such models.
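The external-validation logic described here, apply a previously fitted model unchanged to an independent cohort, then report AUC and how outcomes concentrate in the top 20% of predicted risk, can be sketched as follows. The cohorts are simulated and the model is a plain logistic regression; none of the WMH data or ML models are reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)

def cohort(n):
    # simulated baseline predictors and a binary outcome driven by two of them
    X = rng.normal(size=(n, 5))
    p = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 1)))
    return X, rng.binomial(1, p)

X_dev, y_dev = cohort(2000)                      # development sample
model = LogisticRegression().fit(X_dev, y_dev)   # "frozen" model

X_val, y_val = cohort(1056)                      # independent validation sample
risk = model.predict_proba(X_val)[:, 1]
auc = roc_auc_score(y_val, risk)
top20 = risk >= np.quantile(risk, 0.8)
share = y_val[top20].sum() / y_val.sum()         # outcomes captured in top-20% risk
print(f"validation AUC: {auc:.2f}; outcomes in top-20% risk stratum: {share:.0%}")
```

Reporting the share of outcomes in the highest and lowest risk strata, not just AUC, is what makes the validation clinically interpretable for risk stratification.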

  11. Testing a machine-learning algorithm to predict the persistence and severity of major depressive disorder from baseline self-reports

    PubMed Central

    Kessler, Ronald C.; van Loo, Hanna M.; Wardenaar, Klaas J.; Bossarte, Robert M.; Brenner, Lisa A.; Cai, Tianxi; Ebert, David Daniel; Hwang, Irving; Li, Junlong; de Jonge, Peter; Nierenberg, Andrew A.; Petukhova, Maria V.; Rosellini, Anthony J.; Sampson, Nancy A.; Schoevers, Robert A.; Wilcox, Marsha A.; Zaslavsky, Alan M.

    2015-01-01

    Heterogeneity of major depressive disorder (MDD) illness course complicates clinical decision-making. While efforts to use symptom profiles or biomarkers to develop clinically useful prognostic subtypes have had limited success, a recent report showed that machine learning (ML) models developed from self-reports about incident episode characteristics and comorbidities among respondents with lifetime MDD in the World Health Organization World Mental Health (WMH) Surveys predicted MDD persistence, chronicity, and severity with good accuracy. We report results of model validation in an independent prospective national household sample of 1,056 respondents with lifetime MDD at baseline. The WMH ML models were applied to these baseline data to generate predicted outcome scores that were compared to observed scores assessed 10–12 years after baseline. ML model prediction accuracy was also compared to that of conventional logistic regression models. Area under the receiver operating characteristic curve (AUC) based on ML (.63 for high chronicity and .71–.76 for the other prospective outcomes) was consistently higher than for the logistic models (.62–.70) despite the latter models including more predictors. 34.6–38.1% of respondents with subsequent high persistence-chronicity and 40.8–55.8% with the severity indicators were in the top 20% of the baseline ML predicted risk distribution, while only 0.9% of respondents with subsequent hospitalizations and 1.5% with suicide attempts were in the lowest 20% of the ML predicted risk distribution. These results confirm that clinically useful MDD risk stratification models can be generated from baseline patient self-reports and that ML methods improve on conventional methods in developing such models. PMID:26728563

  12. Microelectrode Recordings Validate the Clinical Visualization of Subthalamic-Nucleus Based on 7T Magnetic Resonance Imaging and Machine Learning for Deep Brain Stimulation Surgery.

    PubMed

    Shamir, Reuben R; Duchin, Yuval; Kim, Jinyoung; Patriat, Remi; Marmor, Odeya; Bergman, Hagai; Vitek, Jerrold L; Sapiro, Guillermo; Bick, Atira; Eliahou, Ruth; Eitan, Renana; Israel, Zvi; Harel, Noam

    2018-05-24

    Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is a proven and effective therapy for the management of the motor symptoms of Parkinson's disease (PD). While accurate positioning of the stimulating electrode is critical to the success of this therapy, precise identification of the STN based on imaging can be challenging. We developed a method to accurately visualize the STN on a standard clinical magnetic resonance image (MRI). The method incorporates a database of 7-Tesla (T) MRIs of PD patients together with machine-learning methods (hereafter 7T-ML). Our aim was to validate the clinical application accuracy of the 7T-ML method by comparing it with identification of the STN based on intraoperative microelectrode recordings. Sixteen PD patients who underwent microelectrode-recording-guided STN DBS were included in this study (30 implanted leads and electrode trajectories). The length of the STN along the electrode trajectory and the position of each contact (dorsal to, inside, or ventral to the STN) were compared between the microelectrode recordings and the 7T-ML method computed from the patient's clinical 3T MRI. All 30 electrode trajectories that intersected the STN based on microelectrode recordings also intersected it when visualized with the 7T-ML method. Average STN trajectory length was 6.2 ± 0.7 mm based on microelectrode recordings and 5.8 ± 0.9 mm for the 7T-ML method. We observed 93% agreement on contact location between the microelectrode recordings and the 7T-ML method. The 7T-ML method is highly consistent with microelectrode-recording data and provides a reliable and accurate patient-specific prediction for targeting the STN.

  13. Distributed learning enhances relational memory consolidation.

    PubMed

    Litman, Leib; Davachi, Lila

    2008-09-01

    It has long been known that distributed learning (DL) provides a mnemonic advantage over massed learning (ML). However, the underlying mechanisms that drive this robust mnemonic effect remain largely unknown. In two experiments, we show that DL across a 24 hr interval does not enhance immediate memory performance but instead slows the rate of forgetting relative to ML. Furthermore, we demonstrate that this savings in forgetting is specific to relational, but not item, memory. In the context of extant theories and knowledge of memory consolidation, these results suggest that an important mechanism underlying the mnemonic benefit of DL is enhanced memory consolidation. We speculate that synaptic strengthening mechanisms supporting long-term memory consolidation may be differentially mediated by the spacing of memory reactivation. These findings have broad implications for the scientific study of episodic memory consolidation and, more generally, for educational curriculum development and policy.

  14. Maternal and Fetal Effect of Misgav Ladach Cesarean Section in Nigerian Women: A Randomized Control Study

    PubMed Central

    Ezechi, OC; Ezeobi, PM; Gab-Okafor, CV; Edet, A; Nwokoro, CA; Akinlade, A

    2013-01-01

    Background: The poor utilisation of the Misgav-Ladach (ML) caesarean section method in our environment, despite its proven advantages, has been attributed to several factors, including the absence of local evaluation. A well designed and conducted trial is needed to provide evidence to convince clinicians of its advantages over Pfannenstiel-based methods. Aim: To evaluate the outcome of ML-based caesarean section among Nigerian women. Subjects and Methods: Randomised controlled open-label study of 323 women undergoing primary caesarean section in Lagos, Nigeria. The women were randomised to either the ML method or a Pfannenstiel-based (PB) caesarean section technique using computer-generated random numbers. Results: The mean duration of surgery (P < 0.001), time to first bowel motion (P = 0.01) and time to ambulation (P < 0.001) were significantly shorter in the ML group than in the PB group. Postoperative anaemia (P < 0.01), analgesic needs (P = 0.02), extra suture use, estimated blood loss (P < 0.01) and postoperative complications (P = 0.001) were significantly lower in the ML group. Although the mean hospital stay was shorter in the ML group (5.8 days vs 6.0 days), the difference was not statistically significant (P = 0.17). Of the fetal outcome measures compared, only fetal extraction time differed significantly between the two groups (P = 0.001): the mean fetal extraction time was 162 sec in the ML group versus 273 sec in the PB group. Conclusions: This study confirmed the established benefits of the ML technique in Nigerian women with respect to postoperative outcomes, duration of surgery and fetal extraction time. The technique is recommended to clinicians, as its superior maternal and fetal outcomes and cost-saving advantages make it appropriate for use in resource-poor settings. PMID:24380012

  15. ["Handle with care": about the potential unintended consequences of oracular artificial intelligence systems in medicine].

    PubMed

    Cabitza, Federico; Alderighi, Camilla; Rasoini, Raffaele; Gensini, Gian Franco

    2017-10-01

    Decision support systems based on machine learning (ML) in medicine are attracting growing interest, as recent articles have highlighted the high diagnostic accuracy these systems exhibit in specific medical contexts. However, it is implausible that any potential advantage can be obtained without some potential drawbacks. Given the current gaps in medical research on the side effects of applying these new AI systems in medical practice, in this article we summarize the main unintended consequences that may result from the widespread adoption of "oracular" systems, that is, highly accurate systems that cannot give reasonable explanations of their advice, as systems built on predictive models developed with ML techniques usually cannot. These consequences range from the intrinsic uncertainty in the data used to train and feed these systems, to the inadequate explainability of their output, through the risks of overreliance, deskilling and context desensitization among their end users. Although some of these issues may currently be hard to evaluate owing to the still scarce adoption of such decision systems in medical practice, we advocate studying these potential consequences to support a more informed approval policy, beyond hype and disenchantment.

  16. [Pancreatoduodenectomy: learning curve within single multi-field center].

    PubMed

    Kaprin, A D; Kostin, A A; Nikiforov, P V; Egorov, V I; Grishin, N A; Lozhkin, M V; Petrov, L O; Bykasov, S A; Sidorov, D V

    2018-01-01

    To analyze the learning curve using the immediate results of pancreatoduodenectomy at a multi-field oncology institute. Between 2010 and 2016, 120 pancreatoduodenal resections were performed consecutively at the Abdominal Oncology Department of the Herzen Moscow Oncology Research Institute. Patients were divided into two groups: the first 60 procedures (group A) and the subsequent 60 operations (group B). Notably, the first 60 operations were performed within the first 4.5 years of the study period, and the next 60 within the remaining 2.5 years. The learning curves showed significant differences in intraoperative blood loss (1100 ml vs 725 ml), surgery time (589 min vs 513 min) and postoperative hospital stay (15 days vs 13 days), with gradual improvement of these values in group B. The rate of negative resection margins (R0) also improved significantly over the last 60 operations (70% vs 92%, respectively). Although pancreatoduodenectomy is one of the most difficult interventions in abdominal surgery, the learning curve will differ from one surgeon to another.

  17. "What is relevant in a text document?": An interpretable machine learning approach

    PubMed Central

    Arras, Leila; Horn, Franziska; Montavon, Grégoire; Müller, Klaus-Robert

    2017-01-01

    Text documents can be described by a number of abstract concepts such as semantic category, writing style, or sentiment. Machine learning (ML) models have been trained to automatically map documents to these abstract concepts, making it possible to annotate very large text collections, more than could be processed by a human in a lifetime. Besides predicting a text's category very accurately, it is also highly desirable to understand how and why the categorization takes place. In this paper, we demonstrate that such understanding can be achieved by tracing the classification decision back to individual words using layer-wise relevance propagation (LRP), a recently developed technique for explaining the predictions of complex non-linear classifiers. We train two word-based ML models, a convolutional neural network (CNN) and a bag-of-words SVM classifier, on a topic categorization task and adapt the LRP method to decompose the predictions of these models onto words. The resulting scores indicate how much individual words contribute to the overall classification decision, enabling one to distill relevant information from text documents without an explicit semantic information extraction step. We further use the word-wise relevance scores to generate novel vector-based document representations that capture semantic information. Based on these document vectors, we introduce a measure of model explanatory power and show that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits a higher level of explainability, which makes it more comprehensible for humans and potentially more useful for other applications. PMID:28800619

  18. Efficacy of the Greater Occipital Nerve Block for Cervicogenic Headache: Comparing Classical and Subcompartmental Techniques.

    PubMed

    Lauretti, Gabriela R; Corrêa, Selma W R O; Mattos, Anita L

    2015-09-01

    The aim of the study was to compare the efficacy of the greater occipital nerve (GON) block using the classical technique with that of the subcompartmental technique at different injectate volumes for the treatment of cervicogenic headache (CH). Thirty patients each acted as their own control. All patients underwent the GON block by the classical technique with 10 mg dexamethasone plus 40 mg lidocaine (5 mL volume). Patients were randomly allocated to 1 of 3 groups (n = 10) when the pain VAS was > 3 cm. Each group underwent a GON subcompartmental technique (10 mg dexamethasone + 40 mg lidocaine + nonionic iodine contrast + saline) under fluoroscopy using either a 5, 10, or 15 mL final volume. Analgesia and quality of life were evaluated. The classical GON technique resulted in 2 weeks of analgesia and less rescue analgesic consumption, compared with 24 weeks after the subcompartmental technique (P < 0.01). Quality of life improved at 2 and 24 weeks after the classical and the subcompartmental techniques, respectively (P < 0.05). The data revealed that the groups were similar regarding analgesia regardless of the injected volume (P > 0.05). While the classical technique for GON block resulted in only 2 weeks of analgesia, the subcompartmental technique resulted in at least 24 weeks of analgesia, with a 5 mL volume being sufficient for performing the block under fluoroscopy. © 2014 World Institute of Pain.

  19. Comparison of four machine learning algorithms for their applicability in satellite-based optical rainfall retrievals

    NASA Astrophysics Data System (ADS)

    Meyer, Hanna; Kühnlein, Meike; Appelhans, Tim; Nauss, Thomas

    2016-03-01

    Machine learning (ML) algorithms have been successfully demonstrated to be valuable tools in satellite-based rainfall retrievals, which shows the practicability of using ML algorithms when faced with high-dimensional and complex data. Moreover, recent developments in parallel computing with ML present new possibilities for training and prediction speed and therefore make their usage in real-time systems feasible. This study compares four ML algorithms - random forests (RF), neural networks (NNET), averaged neural networks (AVNNET) and support vector machines (SVM) - for rainfall area detection and rainfall rate assignment using MSG SEVIRI data over Germany. Satellite-based proxies for cloud top height, cloud top temperature, cloud phase and cloud water path serve as predictor variables. The results indicate an overestimation of rainfall area delineation regardless of the ML algorithm (averaged bias = 1.8) but a high probability of detection ranging from 81% (SVM) to 85% (NNET). On a 24-hour basis, the performance of the rainfall rate assignment yielded R² values between 0.39 (SVM) and 0.44 (AVNNET). Though the differences in the algorithms' performance were rather small, NNET and AVNNET were identified as the most suitable algorithms. On average, they demonstrated the best performance in rainfall area delineation as well as in rainfall rate assignment. NNET's computational speed is an additional advantage in work with large datasets such as in remote-sensing-based rainfall retrievals. However, since no single algorithm performed considerably better than the others, we conclude that further research into providing suitable predictors for rainfall is of greater necessity than optimization through the choice of the ML algorithm.
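The comparison protocol, the same predictors and cross-validation folds for every algorithm, with R² as the score, can be sketched on synthetic data. The four cloud-property proxies and the rainfall signal below are invented; AVNNET (an ensemble of neural networks) is omitted for brevity.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(5)
n = 400
X = rng.normal(size=(n, 4))      # stand-ins for cloud height/temperature/phase/water path
rain = np.maximum(0, X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.5, n))  # non-negative rate

models = {
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "NNET": make_pipeline(StandardScaler(), MLPRegressor(max_iter=1000, random_state=0)),
    "SVM": make_pipeline(StandardScaler(), SVR()),
}
results = {}
for name, model in models.items():
    # identical 5-fold CV for every algorithm keeps the comparison fair
    results[name] = cross_val_score(model, X, rain, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {results[name]:.2f}")
```

Scaling inside a pipeline matters here: the neural network and SVM are sensitive to feature scale, while the random forest is not, so the pipeline keeps each algorithm evaluated under its own appropriate preprocessing within the same folds.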

  20. The Cardiovascular Effects of Morphine: The Peripheral Capacitance and Resistance Vessels in Human Subjects

    PubMed Central

    Zelis, Robert; Mansour, Edward J.; Capone, Robert J.; Mason, Dean T.

    1974-01-01

    To evaluate the effects of morphine on the peripheral venous and arterial beds, 69 normal subjects were evaluated before and after the intravenous administration of 15 mg morphine. Venous tone was determined by three independent techniques in 22 subjects. The venous pressure measured in a hand vein during temporary circulatory arrest (isolated hand vein technique) fell from 20.2±1.4 to 13.4±0.9 mm Hg (P < 0.01) 10 min after morphine, indicating that a significant venodilation had occurred. With the acute occlusion technique, morphine induced a reduction in forearm venous tone from 12.8±1.1 to 7.9±2.3 mm Hg/ml/100 ml (P < 0.01). Although forearm venous volume at a pressure of 30 mm Hg (VV[30]) was increased from 2.26±0.17 to 2.55±0.26 ml/100 ml, measured by the equilibration technique, the change was not significant (P > 0.1). Of note is that the initial reaction to morphine was a pronounced venoconstriction, demonstrated during the first 1-2 min after the drug. (Isolated hand vein pressure increased to 37.2±5.4 mm Hg, P < 0.01). This rapidly subsided, and by 5 min a venodilation was evident. Morphine did not attenuate the venoconstrictor response to a single deep breath, mental arithmetic, or the application of ice to the forehead when measured by either the isolated hand vein technique or the equilibration technique. To evaluate the effects of morphine on the peripheral resistance vessels in 47 normal subjects, forearm blood flow was measured plethysmographically before and 10-15 min after the intravenous administration of 15 mg of morphine. Although mean systemic arterial pressure was unchanged, forearm blood flow increased from 2.92±0.28 to 3.96±0.46 ml/min/100 ml (P < 0.01), and calculated vascular resistance fell from 42.4±5.2 to 31.6±3.2 mm Hg/ml/min/100 ml (P < 0.01). 
When subjects were tilted to the 45° head-up position, morphine did not block the increase in total peripheral vascular resistance that occurs; however, it did significantly attenuate the forearm arteriolar constrictor response (before morphine, + 25.7±5.4; after morphine, + 13.7±5.3 mm Hg/ml/min/100 ml, P < 0.05). However, morphine did not block the post-Valsalva overshoot of blood pressure, nor did it block the increase in forearm vascular resistance produced by the application of ice to the forehead. Similarly, morphine did not block the arteriolar or venoconstrictor effects of intra-arterially administered norepinephrine. Morphine infused into the brachial artery in doses up to 200 μg/min produced no changes in ipsilateral forearm VV[30], forearm blood flow, or calculated forearm resistance. Intra-arterial promethazine, atropine, and propranolol did not block the forearm arteriolar dilator response to intravenous morphine; however, intra-arterial phentolamine abolished the response. These data suggest that in human subjects, morphine induces a peripheral venous and arteriolar dilation by a reflex reduction in sympathetic alpha adrenergic tone. Morphine does not appear to act as a peripheral alpha adrenergic blocking agent but seems to attenuate the sympathetic efferent discharge at a central nervous system level. PMID:4612057

  1. Primary Fat Grafting to the Pectoralis Muscle during Latissimus Dorsi Breast Reconstruction

    PubMed Central

    Vidal, Luciano; Hersant, Barbara; Meningaud, Jean Paul

    2016-01-01

    Background: Latissimus dorsi flap is one of the best options for immediate and delayed breast reconstruction. However, this technique is limited by the tissue volume provided by the flap. To improve breast volume while reducing complications, fat grafting is now very often used in addition to latissimus dorsi flap. To the best of our knowledge, fat grafting was always performed as a second-line surgery, at least a few months after the flap procedure. We aimed to report our experience with an associated breast reconstruction technique combining musculocutaneous latissimus dorsi flap with intrapectoral lipofilling for totally autologous breast reconstruction. Methods: Between September 2014 and January 2015, 20 patients underwent this technique for unilateral autologous breast reconstruction (14 delayed and 6 immediate breast reconstructions). A mean harvested fat volume of 278 ml (range: 190–350 ml) and a mean injected fat volume of 228 ml (range: 170–280 ml) were used. Results: None of the patients experienced complications, such as flap necrosis, breast skin necrosis, hematomas, or infection. One of the patients developed a seroma, which was treated with 3 drainage punctures. Only 2 patients underwent a delayed fat grafting procedure. Conclusion: Totally autologous breast reconstruction combining latissimus dorsi flap and intrapectoral fat grafting in the same procedure is a new technique allowing increased breast volume in a single surgery. PMID:27975006

  2. Selected South African Grade 10 Learners' Perceptions of Two Learning Areas: Mathematical Literacy and Life Orientation

    ERIC Educational Resources Information Center

    Geldenhuys, J. L.; Kruger, C.; Moss, J.

    2013-01-01

    In 2006, Mathematical Literacy (ML) and Life Orientation (LO) were introduced into South Africa's Grade 10 national curriculum. The implementation of the ML programme in schools stemmed from a need to improve the level of numeracy of the general population of South Africa, while LO was introduced to equip learners to solve problems and to make…

  3. RuleML-Based Learning Object Interoperability on the Semantic Web

    ERIC Educational Resources Information Center

    Biletskiy, Yevgen; Boley, Harold; Ranganathan, Girish R.

    2008-01-01

    Purpose: The present paper aims to describe an approach for building the Semantic Web rules for interoperation between heterogeneous learning objects, namely course outlines from different universities, and one of the rule uses: identifying (in)compatibilities between course descriptions. Design/methodology/approach: As proof of concept, a rule…

  4. Evaluation of mobile learning: students' experiences in a new rural-based medical school.

    PubMed

    Nestel, Debra; Ng, Andre; Gray, Katherine; Hill, Robyn; Villanueva, Elmer; Kotsanas, George; Oaten, Andrew; Browne, Chris

    2010-08-11

    Mobile learning (ML) is an emerging educational method with success dependent on many factors including the ML device, physical infrastructure and user characteristics. At Gippsland Medical School (GMS), students are given a laptop at the commencement of their four-year degree. We evaluated the educational impact of the ML program from students' perspectives. Questionnaires and individual interviews explored students' experiences of ML. All students were invited to complete questionnaires. Convenience sampling was used for interviews. Quantitative data were entered into SPSS 17.0 and descriptive statistics computed. Free text comments from questionnaires and transcriptions of interviews were thematically analysed. Fifty students completed the questionnaire (response rate 88%). Six students participated in interviews. More than half the students owned a laptop prior to commencing studies, would recommend the laptop and took the laptop to GMS daily. Modal daily use of laptops was four hours. Most frequent use was for access to the internet and email, while the most frequently used applications were Microsoft Word and PowerPoint. Students appreciated the laptops for several reasons. The reduced financial burden was valued. Students were largely satisfied with the laptop specifications. Design elements of teaching spaces limited functionality. Although students valued aspects of the virtual learning environment (VLE), they also made many suggestions for improvement. Students reported many educational benefits from school provision of laptops, in particular the quick and easy access to electronic educational resources as and when they were needed. Improved design of physical facilities would enhance laptop use, together with a more logical layout of the VLE, new computer-based resources and activities promoting interaction.

  5. Robotic radical cystectomy and intracorporeal urinary diversion: The USC technique.

    PubMed

    Abreu, Andre Luis de Castro; Chopra, Sameer; Azhar, Raed A; Berger, Andre K; Miranda, Gus; Cai, Jie; Gill, Inderbir S; Aron, Monish; Desai, Mihir M

    2014-07-01

    Radical cystectomy is the gold-standard treatment for muscle-invasive and refractory nonmuscle-invasive bladder cancer. We describe our technique for robotic radical cystectomy (RRC) and intracorporeal urinary diversion (ICUD), which replicates open surgical principles, and present our preliminary results. Specific descriptions for preoperative planning, surgical technique, and postoperative care are provided. Demographics, perioperative and 30-day complications data were collected prospectively and retrospectively analyzed. Learning curve trends were analyzed individually for ileal conduits (IC) and neobladders (NB). SAS(®) Software Version 9.3 was used for statistical analyses with statistical significance set at P < 0.05. Between July 2010 and September 2013, RRC and lymph node dissection with ICUD were performed in 103 consecutive patients (orthotopic NB = 46, IC = 57). All procedures were completed robotically, replicating the open surgical principles. The learning curve trends showed a significant reduction in hospital stay for both IC (11 vs. 6 days, P < 0.01) and orthotopic NB (13 vs. 7.5 days, P < 0.01) when comparing the first third of the cohort with the rest of the group. Overall median (range) operative time and estimated blood loss were 7 h (4.8-13) and 200 mL (50-1200), respectively. Within 30 days postoperatively, complications occurred in 61 (59%) patients, with the majority being low grade (n = 43), and no patient died. Median (range) node yield was 36 (0-106) and 4 (3.9%) specimens had positive surgical margins. Robotic radical cystectomy with totally intracorporeal urinary diversion is safe and feasible. It can be performed using the established open surgical principles with encouraging perioperative outcomes.

  6. The potential for machine learning algorithms to improve and reduce the cost of 3-dimensional printing for surgical planning.

    PubMed

    Huff, Trevor J; Ludwig, Parker E; Zuniga, Jorge M

    2018-05-01

    3D-printed anatomical models play an important role in medical and research settings. The recent successes of 3D anatomical models in healthcare have led many institutions to adopt the technology. However, there remain several issues that must be addressed before it can become more widespread. Of importance are the problems of cost and time of manufacturing. Machine learning (ML) could be utilized to solve these issues by streamlining the 3D modeling process through rapid medical image segmentation and improved patient selection and image acquisition. The current challenges, potential solutions, and future directions for ML and 3D anatomical modeling in healthcare are discussed. Areas covered: This review covers research articles in the field of machine learning as related to 3D anatomical modeling. Topics discussed include automated image segmentation, cost reduction, and related time constraints. Expert commentary: ML-based segmentation of medical images could potentially improve the process of 3D anatomical modeling. However, until more research is done to validate these technologies in clinical practice, their impact on patient outcomes will remain unknown. We have the necessary computational tools to tackle the problems discussed. The difficulty now lies in our ability to collect sufficient data.

  7. Comparison of ASL and DCE MRI for the non-invasive measurement of renal blood flow: quantification and reproducibility.

    PubMed

    Cutajar, Marica; Thomas, David L; Hales, Patrick W; Banks, T; Clark, Christopher A; Gordon, Isky

    2014-06-01

    To investigate the reproducibility of arterial spin labelling (ASL) and dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) and quantitatively compare these techniques for the measurement of renal blood flow (RBF). Sixteen healthy volunteers were examined on two different occasions. ASL was performed using a multi-TI FAIR labelling scheme with a segmented 3D-GRASE imaging module. DCE MRI was performed using a 3D-FLASH pulse sequence. A Bland-Altman analysis was used to assess repeatability of each technique, and determine the degree of correspondence between the two methods. The overall mean cortical renal blood flow (RBF) of the ASL group was 263 ± 41 ml min(-1) [100 ml tissue](-1), and using DCE MRI was 287 ± 70 ml min(-1) [100 ml tissue](-1). The group coefficient of variation (CVg) was 18 % for ASL and 28 % for DCE-MRI. Repeatability studies showed that ASL was more reproducible than DCE with CVgs of 16 % and 25 % for ASL and DCE respectively. Bland-Altman analysis comparing the two techniques showed a good agreement. The repeated measures analysis shows that the ASL technique has better reproducibility than DCE-MRI. Difference analysis shows no significant difference between the RBF values of the two techniques. Reliable non-invasive monitoring of renal blood flow is currently clinically unavailable. Renal arterial spin labelling MRI is robust and repeatable. Renal dynamic contrast-enhanced MRI is robust and repeatable. ASL blood flow values are similar to those obtained using DCE-MRI.
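    The repeatability statistics reported above (group coefficient of variation, Bland-Altman agreement) can be illustrated with a short sketch. The paired values below are invented for demonstration and are not the study's measurements.

```python
# Illustrative sketch (not the study's code): Bland-Altman bias and limits of
# agreement, plus a group coefficient of variation, for paired hypothetical
# renal blood flow (RBF) measurements in ml/min/100 ml.
import numpy as np

asl = np.array([250., 270., 230., 290., 260., 245.])   # hypothetical ASL values
dce = np.array([265., 300., 220., 310., 275., 250.])   # hypothetical DCE values

diff = asl - dce
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)          # 95% limits of agreement: bias ± loa

cv_asl = asl.std(ddof=1) / asl.mean() * 100   # group coefficient of variation (%)
print(f"bias = {bias:.1f}, limits of agreement = ({bias - loa:.1f}, {bias + loa:.1f})")
print(f"ASL group CV = {cv_asl:.1f}%")
```

    In a full Bland-Altman analysis the differences are also plotted against the pairwise means to check for proportional bias, which the numbers alone do not reveal.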

  8. Mathematizing Process of Junior High School Students to Improve Mathematics Literacy Refers PISA on RCP Learning

    NASA Astrophysics Data System (ADS)

    Wardono; Mariani, S.; Hendikawati, P.; Ikayani

    2017-04-01

    Mathematizing process (MP) is the process of modeling a phenomenon mathematically or establishing the concept of a phenomenon. There are two kinds of mathematizing: Mathematizing Horizontal (MH) and Mathematizing Vertical (MV). MH is the transformation of contextual problems into mathematical problems, while MV is the formulation of the problem into various mathematical solutions using appropriate rules. Mathematics Literacy (ML) is the ability to formulate, employ and interpret mathematics in various contexts, including the capacity to reason mathematically and to use concepts, procedures, and facts to describe, explain or predict phenomena. If junior high school students are continuously conditioned to carry out mathematizing activities in RCP (RME-Card Problem) learning, their ML as measured by PISA can be improved. The purpose of this research is to determine whether the MP ability of grade VIII students on the ML content of shape and space (cubes and cuboids) is better with RCP learning than with scientific learning, whether the improvement in MP is greater with RCP learning than with scientific learning for students with reflective and impulsive cognitive styles, and to describe the MP of grade VIII students under RCP learning in terms of reflective and impulsive cognitive styles. This research used the concurrent embedded mixed-methods model. The population was class VIII of SMPN 1 Batang, from which two classes were sampled. Data were collected through observation, interviews, and tests, and analyzed with a right-tailed test of mean difference and qualitative descriptive analysis. The results demonstrate that the MP ability of students with RCP learning is better than with scientific learning, and that the improvement in MP with RCP learning is greater than with scientific learning for both reflective and impulsive cognitive styles. Reflective subjects in the upper, middle, and lower groups met all MH indicators; reflective subjects in the upper and middle groups met all MV indicators, while the lower group met only some. Impulsive subjects in the upper and middle groups met all MH indicators, while the lower group met only some; impulsive subjects in the upper group met all MV indicators, while the middle and lower groups met only some.

  9. A study of active learning methods for named entity recognition in clinical text.

    PubMed

    Chen, Yukun; Lasko, Thomas A; Mei, Qiaozhu; Denny, Joshua C; Xu, Hua

    2015-12-01

    Named entity recognition (NER), a sequential labeling task, is one of the fundamental tasks for building clinical natural language processing (NLP) systems. Machine learning (ML) based approaches can achieve good performance, but they often require large amounts of annotated samples, which are expensive to build due to the requirement of domain experts in annotation. Active learning (AL), a sample selection approach integrated with supervised ML, aims to minimize the annotation cost while maximizing the performance of ML-based models. In this study, our goal was to develop and evaluate both existing and new AL methods for a clinical NER task to identify concepts of medical problems, treatments, and lab tests from the clinical notes. Using the annotated NER corpus from the 2010 i2b2/VA NLP challenge that contained 349 clinical documents with 20,423 unique sentences, we simulated AL experiments using a number of existing and novel algorithms in three different categories including uncertainty-based, diversity-based, and baseline sampling strategies. They were compared with the passive learning that uses random sampling. Learning curves that plot performance of the NER model against the estimated annotation cost (based on number of sentences or words in the training set) were generated to evaluate different active learning and the passive learning methods and the area under the learning curve (ALC) score was computed. Based on the learning curves of F-measure vs. number of sentences, uncertainty sampling algorithms outperformed all other methods in ALC. Most diversity-based methods also performed better than random sampling in ALC. To achieve an F-measure of 0.80, the best method based on uncertainty sampling could save 66% annotations in sentences, as compared to random sampling. For the learning curves of F-measure vs. number of words, uncertainty sampling methods again outperformed all other methods in ALC. 
To achieve 0.80 in F-measure, in comparison to random sampling, the best uncertainty based method saved 42% annotations in words. But the best diversity based method reduced only 7% annotation effort. In the simulated setting, AL methods, particularly uncertainty-sampling based approaches, seemed to significantly save annotation cost for the clinical NER task. The actual benefit of active learning in clinical NER should be further evaluated in a real-time setting. Copyright © 2015 Elsevier Inc. All rights reserved.
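    The core of uncertainty-based sampling, the best-performing family above, can be sketched as a toy loop. This example uses least-confidence selection with a logistic regression on synthetic data purely for illustration; the study's NER models, features, and annotation-cost estimates are far richer.

```python
# Toy least-confidence active learning loop (illustrative only; the study's
# clinical NER task used CRF-style sequence models, not this classifier).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
# Seed set with both classes represented, then a pool of unlabeled samples.
labeled = [int(i) for i in np.where(y == 0)[0][:5]] + \
          [int(i) for i in np.where(y == 1)[0][:5]]
pool = [i for i in range(len(y)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):                        # 20 simulated annotation rounds
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    confidence = probs.max(axis=1)         # query the least-confident sample
    pick = pool[int(np.argmin(confidence))]
    labeled.append(pick)
    pool.remove(pick)

print(f"labeled set size after 20 rounds: {len(labeled)}")
```

    Plotting model F-measure against the size (or word count) of the labeled set after each round yields exactly the learning curves the study scores with ALC.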

  10. Intellicount: High-Throughput Quantification of Fluorescent Synaptic Protein Puncta by Machine Learning

    PubMed Central

    Fantuzzo, J. A.; Mirabella, V. R.; Zahn, J. D.

    2017-01-01

    Synapse formation analyses can be performed by imaging and quantifying fluorescent signals of synaptic markers. Traditionally, these analyses are done using simple or multiple thresholding and segmentation approaches or by labor-intensive manual analysis by a human observer. Here, we describe Intellicount, a high-throughput, fully-automated synapse quantification program which applies a novel machine learning (ML)-based image processing algorithm to systematically improve region of interest (ROI) identification over simple thresholding techniques. Through processing large datasets from both human and mouse neurons, we demonstrate that this approach allows image processing to proceed independently of carefully set thresholds, thus reducing the need for human intervention. As a result, this method can efficiently and accurately process large image datasets with minimal interaction by the experimenter, making it less prone to bias and less liable to human error. Furthermore, Intellicount is integrated into an intuitive graphical user interface (GUI) that provides a set of valuable features, including automated and multifunctional figure generation, routine statistical analyses, and the ability to run full datasets through nested folders, greatly expediting the data analysis process. PMID:29218324
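    For contrast with Intellicount's ML-based ROI identification, the simple-thresholding baseline it improves upon can be sketched as follows. The image is synthetic, and the global threshold is precisely the "carefully set" parameter that the ML approach is designed to avoid.

```python
# Sketch of the simple-thresholding baseline: a fixed global threshold plus
# connected-component labeling to count fluorescent puncta in a synthetic image.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
img = rng.normal(loc=10.0, scale=2.0, size=(64, 64))   # background noise
for r, c in [(10, 10), (30, 40), (50, 20)]:            # three bright 3x3 puncta
    img[r - 1:r + 2, c - 1:c + 2] += 50.0

mask = img > img.mean() + 5 * img.std()    # the carefully set global threshold
labels, n_puncta = ndimage.label(mask)     # connected components become ROIs
print(f"puncta detected: {n_puncta}")
```

    The fragility is easy to see: shift the multiplier or the background intensity and the count changes, which is why threshold-free ROI identification reduces experimenter bias.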

  11. Relational machine learning for electronic health record-driven phenotyping.

    PubMed

    Peissig, Peggy L; Santos Costa, Vitor; Caldwell, Michael D; Rottscheit, Carla; Berg, Richard L; Mendonca, Eneida A; Page, David

    2014-12-01

    Electronic health records (EHR) offer medical and pharmacogenomics research unprecedented opportunities to identify and classify patients at risk. EHRs are collections of highly inter-dependent records that include biological, anatomical, physiological, and behavioral observations. They comprise a patient's clinical phenome, where each patient has thousands of date-stamped records distributed across many relational tables. Development of EHR computer-based phenotyping algorithms requires time and medical insight from clinical experts, who most often can only review a small patient subset representative of the total EHR records, to identify phenotype features. In this research we evaluate whether relational machine learning (ML) using inductive logic programming (ILP) can contribute to addressing these issues as a viable approach for EHR-based phenotyping. Two relational learning ILP approaches and three well-known WEKA (Waikato Environment for Knowledge Analysis) implementations of non-relational approaches (PART, J48, and JRIP) were used to develop models for nine phenotypes. International Classification of Diseases, Ninth Revision (ICD-9) coded EHR data were used to select training cohorts for the development of each phenotypic model. Accuracy, precision, recall, F-Measure, and Area Under the Receiver Operating Characteristic (AUROC) curve statistics were measured for each phenotypic model based on independent manually verified test cohorts. A two-sided binomial distribution test (sign test) compared the five ML approaches across phenotypes for statistical significance. We developed an approach to automatically label training examples using ICD-9 diagnosis codes for the ML approaches being evaluated. Nine phenotypic models for each ML approach were evaluated, resulting in better overall model performance in AUROC using ILP when compared to PART (p=0.039), J48 (p=0.003) and JRIP (p=0.003).
ILP has the potential to improve phenotyping by independently delivering clinically expert interpretable rules for phenotype definitions, or intuitive phenotypes to assist experts. Relational learning using ILP offers a viable approach to EHR-driven phenotyping. Copyright © 2014 Elsevier Inc. All rights reserved.
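    The evaluation statistics named above can be computed directly; the phenotype labels and model scores below are invented solely to show the calculation.

```python
# Illustrative metric computation for a phenotyping model, mirroring the
# statistics the study reports (accuracy, precision, recall, F-measure, AUROC).
# The labels and scores are made up for demonstration.
from sklearn import metrics

y_true  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]      # manually verified phenotype labels
y_score = [0.9, 0.2, 0.8, 0.4, 0.3, 0.1, 0.7, 0.6, 0.95, 0.05]
y_pred  = [int(s >= 0.5) for s in y_score]    # hard predictions at a 0.5 cutoff

print("accuracy :", metrics.accuracy_score(y_true, y_pred))
print("precision:", metrics.precision_score(y_true, y_pred))
print("recall   :", metrics.recall_score(y_true, y_pred))
print("F-measure:", metrics.f1_score(y_true, y_pred))
print("AUROC    :", metrics.roc_auc_score(y_true, y_score))   # needs scores, not labels
```

    Note that AUROC is computed from the continuous scores while the other four metrics depend on the chosen decision threshold, which is why the study reports both kinds.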

  12. Leveraging Experiential Learning Techniques for Transfer

    ERIC Educational Resources Information Center

    Furman, Nate; Sibthorp, Jim

    2013-01-01

    Experiential learning techniques can be helpful in fostering learning transfer. Techniques such as project-based learning, reflective learning, and cooperative learning provide authentic platforms for developing rich learning experiences. In contrast to more didactic forms of instruction, experiential learning techniques foster a depth of learning…

  13. The duration of effect of centrifuge concentrated intravitreal triamcinolone acetonide.

    PubMed

    Ober, Michael D; Valijan, Sevak

    2013-04-01

    To estimate the duration of activity for intravitreal triamcinolone injected with a new technique using centrifuge concentration (Centrifuge concentrated IntraVitreal Triamcinolone, C-IVT). All injections were performed by a single surgeon (M.D.O.) using a 30-gauge needle. A vial of Triesence (triamcinolone; Alcon Laboratories, Fort Worth, TX) was drawn into a 1-mL syringe and the plunger cut off. The contents were spun in a centrifuge, and a second plunger was placed. Records of all patients receiving C-IVT with 0.05 mL or 0.1 mL from January 1, 2009, through December 31, 2009, were retrospectively reviewed. Eighty-four injections from 69 eyes of 57 patients were included. Sixty-nine injections from 54 eyes of 44 patients received 0.05 mL of C-IVT, whereas 15 injections from 15 eyes of 13 patients received 0.1 mL of C-IVT. Triamcinolone acetonide was still visualized in the vitreous an average of 5.0 ± 2.4 months (median 5 months) after 0.05 mL of C-IVT and 8.3 ± 4.0 months (median 8 months) after 0.1 mL of C-IVT during follow-up visits. The longest duration recorded was 14 months for the 0.05-mL group and 18 months for the 0.1-mL group. C-IVT results in a duration of effect that seems to be longer than that of previously published techniques. It may be considered for patients requiring chronic steroid therapy, in which the benefits of long-term intravitreal steroids are believed to outweigh their risk.

  14. Optimization and validation of FePro cell labeling method.

    PubMed

    Janic, Branislava; Rad, Ali M; Jordan, Elaine K; Iskander, A S M; Ali, Md M; Varma, N Ravi S; Frank, Joseph A; Arbab, Ali S

    2009-06-11

    The current method to magnetically label cells using ferumoxides (Fe)-protamine (Pro) sulfate (FePro) is based on generating FePro complexes in a serum-free medium that are then incubated overnight with cells for efficient labeling. However, this labeling technique requires a long (>12-16 hour) incubation time and uses a relatively high dose of Pro (5-6 microg/ml) that makes large extracellular FePro complexes. These complexes can be difficult to clean with simple cell washes and may create low signal intensity on T2*-weighted MRI, which is not desirable. The purpose of this study was to revise the current labeling method by using a low dose of Pro and adding Fe and Pro directly to the cells before generating any FePro complexes. Human glioma (U251) and human monocytic leukemia (THP-1) cell lines were used as model systems for attached and suspension cell types, respectively, and dose-dependent (Fe 25 to 100 microg/ml and Pro 0.75 to 3 microg/ml) and time-dependent (2 to 48 h) labeling experiments were performed. Labeling efficiency and cell viability of these cells were assessed. Prussian blue staining revealed that more than 95% of cells were labeled. Intracellular iron concentration in U251 cells reached approximately 30-35 pg-iron/cell at 24 h when labeled with 100 microg/ml of Fe and 3 microg/ml of Pro. However, comparable labeling was observed after 4 h across the described FePro concentrations. Similarly, THP-1 cells achieved approximately 10 pg-iron/cell at 48 h when labeled with 100 microg/ml of Fe and 3 microg/ml of Pro. Again, comparable labeling was observed after 4 h for the described FePro concentrations. FePro labeling did not significantly affect cell viability. There were almost no extracellular FePro complexes observed after simple cell washes.
To validate and to determine the effectiveness of the revised technique, human T-cells, human hematopoietic stem cells (hHSC), human bone marrow stromal cells (hMSC) and mouse neuronal stem cells (mNSC C17.2) were labeled. Labeling for 4 hours using 100 microg/ml of Fe and 3 microg/ml of Pro resulted in very efficient labeling of these cells, without impairing their viability and functional capability. The new technique with short incubation time using 100 microg/ml of Fe and 3 microg/ml of Pro is effective in labeling cells for cellular MRI.

  15. Machine learning for prediction of all-cause mortality in patients with suspected coronary artery disease: a 5-year multicentre prospective registry analysis

    PubMed Central

    Motwani, Manish; Dey, Damini; Berman, Daniel S.; Germano, Guido; Achenbach, Stephan; Al-Mallah, Mouaz H.; Andreini, Daniele; Budoff, Matthew J.; Cademartiri, Filippo; Callister, Tracy Q.; Chang, Hyuk-Jae; Chinnaiyan, Kavitha; Chow, Benjamin J.W.; Cury, Ricardo C.; Delago, Augustin; Gomez, Millie; Gransar, Heidi; Hadamitzky, Martin; Hausleiter, Joerg; Hindoyan, Niree; Feuchtner, Gudrun; Kaufmann, Philipp A.; Kim, Yong-Jin; Leipsic, Jonathon; Lin, Fay Y.; Maffei, Erica; Marques, Hugo; Pontone, Gianluca; Raff, Gilbert; Rubinshtein, Ronen; Shaw, Leslee J.; Stehli, Julia; Villines, Todd C.; Dunning, Allison; Min, James K.; Slomka, Piotr J.

    2017-01-01

    Aims: Traditional prognostic risk assessment in patients undergoing non-invasive imaging is based upon a limited selection of clinical and imaging findings. Machine learning (ML) can consider a greater number and complexity of variables. Therefore, we investigated the feasibility and accuracy of ML to predict 5-year all-cause mortality (ACM) in patients undergoing coronary computed tomographic angiography (CCTA), and compared the performance to existing clinical or CCTA metrics. Methods and results: The analysis included 10 030 patients with suspected coronary artery disease and 5-year follow-up from the COronary CT Angiography EvaluatioN For Clinical Outcomes: An InteRnational Multicenter registry. All patients underwent CCTA as their standard of care. Twenty-five clinical and 44 CCTA parameters were evaluated, including segment stenosis score (SSS), segment involvement score (SIS), modified Duke index (DI), number of segments with non-calcified, mixed or calcified plaques, age, sex, gender, standard cardiovascular risk factors, and Framingham risk score (FRS). Machine learning involved automated feature selection by information gain ranking, model building with a boosted ensemble algorithm, and 10-fold stratified cross-validation. Seven hundred and forty-five patients died during 5-year follow-up. Machine learning exhibited a higher area-under-curve compared with the FRS or CCTA severity scores alone (SSS, SIS, DI) for predicting all-cause mortality (ML: 0.79 vs. FRS: 0.61, SSS: 0.64, SIS: 0.64, DI: 0.62; P < 0.001). Conclusions: Machine learning combining clinical and CCTA data was found to predict 5-year ACM significantly better than existing clinical or CCTA metrics alone. PMID:27252451
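    The pipeline shape described above (information-gain feature ranking, a boosted ensemble, 10-fold stratified cross-validation scored by area under the curve) can be sketched on synthetic data. This is an illustrative analogue, not the registry analysis: mutual information stands in for information gain, and scikit-learn's gradient boosting stands in for the study's boosted ensemble.

```python
# Hedged sketch of the described pipeline on synthetic data; the registry's
# actual features, boosting algorithm, and tooling are not reproduced here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=600, n_features=30, n_informative=8,
                           random_state=0)

# 1) Rank features by mutual information (an information-gain analogue).
mi = mutual_info_classif(X, y, random_state=0)
top = np.argsort(mi)[::-1][:10]            # keep the 10 highest-ranked features

# 2) Boosted ensemble, evaluated with 10-fold stratified CV scored by AUC.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(GradientBoostingClassifier(random_state=0),
                      X[:, top], y, cv=cv, scoring="roc_auc").mean()
print(f"mean cross-validated AUC = {auc:.2f}")
```

    Stratified folds keep the outcome prevalence similar across splits, which matters when events (here, deaths) are a small fraction of the cohort.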

  16. Machine learning for prediction of all-cause mortality in patients with suspected coronary artery disease: a 5-year multicentre prospective registry analysis.

    PubMed

    Motwani, Manish; Dey, Damini; Berman, Daniel S; Germano, Guido; Achenbach, Stephan; Al-Mallah, Mouaz H; Andreini, Daniele; Budoff, Matthew J; Cademartiri, Filippo; Callister, Tracy Q; Chang, Hyuk-Jae; Chinnaiyan, Kavitha; Chow, Benjamin J W; Cury, Ricardo C; Delago, Augustin; Gomez, Millie; Gransar, Heidi; Hadamitzky, Martin; Hausleiter, Joerg; Hindoyan, Niree; Feuchtner, Gudrun; Kaufmann, Philipp A; Kim, Yong-Jin; Leipsic, Jonathon; Lin, Fay Y; Maffei, Erica; Marques, Hugo; Pontone, Gianluca; Raff, Gilbert; Rubinshtein, Ronen; Shaw, Leslee J; Stehli, Julia; Villines, Todd C; Dunning, Allison; Min, James K; Slomka, Piotr J

    2017-02-14

    Traditional prognostic risk assessment in patients undergoing non-invasive imaging is based upon a limited selection of clinical and imaging findings. Machine learning (ML) can consider a greater number and complexity of variables. Therefore, we investigated the feasibility and accuracy of ML to predict 5-year all-cause mortality (ACM) in patients undergoing coronary computed tomographic angiography (CCTA), and compared the performance to existing clinical or CCTA metrics. The analysis included 10 030 patients with suspected coronary artery disease and 5-year follow-up from the COronary CT Angiography EvaluatioN For Clinical Outcomes: An InteRnational Multicenter registry. All patients underwent CCTA as their standard of care. Twenty-five clinical and 44 CCTA parameters were evaluated, including segment stenosis score (SSS), segment involvement score (SIS), modified Duke index (DI), number of segments with non-calcified, mixed or calcified plaques, age, sex, gender, standard cardiovascular risk factors, and Framingham risk score (FRS). Machine learning involved automated feature selection by information gain ranking, model building with a boosted ensemble algorithm, and 10-fold stratified cross-validation. Seven hundred and forty-five patients died during 5-year follow-up. Machine learning exhibited a higher area-under-curve compared with the FRS or CCTA severity scores alone (SSS, SIS, DI) for predicting all-cause mortality (ML: 0.79 vs. FRS: 0.61, SSS: 0.64, SIS: 0.64, DI: 0.62; P< 0.001). Machine learning combining clinical and CCTA data was found to predict 5-year ACM significantly better than existing clinical or CCTA metrics alone. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2016. For permissions please email: journals.permissions@oup.com.

  17. Use of machine learning to improve autism screening and diagnostic instruments: effectiveness, efficiency, and multi-instrument fusion

    PubMed Central

    Bone, Daniel; Bishop, Somer; Black, Matthew P.; Goodwin, Matthew S.; Lord, Catherine; Narayanan, Shrikanth S.

    2016-01-01

    Background: Machine learning (ML) provides novel opportunities for human behavior research and clinical translation, yet its application can have noted pitfalls (Bone et al., 2015). In this work, we fastidiously utilize ML to derive autism spectrum disorder (ASD) instrument algorithms in an attempt to improve upon widely-used ASD screening and diagnostic tools. Methods: The data consisted of Autism Diagnostic Interview-Revised (ADI-R) and Social Responsiveness Scale (SRS) scores for 1,264 verbal individuals with ASD and 462 verbal individuals with non-ASD developmental or psychiatric disorders (DD), split at age 10. Algorithms were created via a robust ML classifier, support vector machine (SVM), while targeting best-estimate clinical diagnosis of ASD vs. non-ASD. Parameter settings were tuned in multiple levels of cross-validation. Results: The created algorithms were more effective (higher performing) than current algorithms, were tunable (sensitivity and specificity can be differentially weighted), and were more efficient (achieving near-peak performance with five or fewer codes). Results from ML-based fusion of ADI-R and SRS are reported. We present a screener algorithm for below (above) age 10 that reached 89.2% (86.7%) sensitivity and 59.0% (53.4%) specificity with only five behavioral codes. Conclusions: ML is useful for creating robust, customizable instrument algorithms. In a unique dataset composed of controls with other difficulties, our findings highlight limitations of current caregiver-report instruments and indicate possible avenues for improving ASD screening and diagnostic tools. PMID:27090613

  18. Use of machine learning to improve autism screening and diagnostic instruments: effectiveness, efficiency, and multi-instrument fusion.

    PubMed

    Bone, Daniel; Bishop, Somer L; Black, Matthew P; Goodwin, Matthew S; Lord, Catherine; Narayanan, Shrikanth S

    2016-08-01

    Machine learning (ML) provides novel opportunities for human behavior research and clinical translation, yet its application can have noted pitfalls (Bone et al., 2015). In this work, we fastidiously utilize ML to derive autism spectrum disorder (ASD) instrument algorithms in an attempt to improve upon widely used ASD screening and diagnostic tools. The data consisted of Autism Diagnostic Interview-Revised (ADI-R) and Social Responsiveness Scale (SRS) scores for 1,264 verbal individuals with ASD and 462 verbal individuals with non-ASD developmental or psychiatric disorders, split at age 10. Algorithms were created via a robust ML classifier, support vector machine, while targeting best-estimate clinical diagnosis of ASD versus non-ASD. Parameter settings were tuned in multiple levels of cross-validation. The created algorithms were more effective (higher performing) than the current algorithms, were tunable (sensitivity and specificity can be differentially weighted), and were more efficient (achieving near-peak performance with five or fewer codes). Results from ML-based fusion of ADI-R and SRS are reported. We present a screener algorithm for below (above) age 10 that reached 89.2% (86.7%) sensitivity and 59.0% (53.4%) specificity with only five behavioral codes. ML is useful for creating robust, customizable instrument algorithms. In a unique dataset composed of controls with other difficulties, our findings highlight the limitations of current caregiver-report instruments and indicate possible avenues for improving ASD screening and diagnostic tools. © 2016 Association for Child and Adolescent Mental Health.
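    The setup described above — an SVM tuned in nested cross-validation, with sensitivity and specificity differentially weighted — can be sketched as follows. This is a hypothetical reconstruction on synthetic data; the class weights, grid, and fold counts are illustrative, not the study's settings:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import recall_score
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_predict
from sklearn.svm import SVC

# Synthetic stand-in for five behavioral codes (ASD = 1 vs. non-ASD = 0)
X, y = make_classification(n_samples=400, n_features=5, n_informative=4,
                           n_redundant=0, random_state=0)

# Inner CV tunes C; class_weight shifts the sensitivity/specificity trade-off
inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
clf = GridSearchCV(SVC(kernel="linear", class_weight={0: 1, 1: 2}),
                   {"C": [0.1, 1, 10]}, cv=inner)

# Outer CV yields held-out predictions for unbiased sensitivity/specificity
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
pred = cross_val_predict(clf, X, y, cv=outer)
sens = recall_score(y, pred)                 # sensitivity (recall on ASD class)
spec = recall_score(y, pred, pos_label=0)    # specificity (recall on non-ASD)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```

    Raising the weight on the positive class trades specificity for sensitivity, which is how a screener can be tuned toward few missed cases.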

  19. The effect of polishing technique on 3-D surface roughness and gloss of dental restorative resin composites.

    PubMed

    Ereifej, N S; Oweis, Y G; Eliades, G

    2013-01-01

    The aim of this study was to compare surface roughness and gloss of resin composites polished using different polishing systems. Five resin composites were investigated: Filtek Silorane (FS), IPS Empress Direct (IP), Clearfil Majesty Posterior (CM), Premise (PM), and Estelite Sigma (ES). Twenty-five disk specimens were prepared from each material, divided into five groups, each polished with one of the following methods: Opti1Step (OS), OptiDisc (OD), Kenda CGI (KD), Pogo (PG), or metallurgical polishing (ML). Gloss and roughness parameters (Sa, Sz, Sq, and St) were evaluated by 60°-angle glossimetry and white-light interferometric profilometry. Two-way analysis of variance was used to detect differences among materials and polishing techniques. Regression and correlation analyses were performed to examine correlations between roughness and gloss. Significant differences in roughness parameters and gloss were found according to the material, type of polishing, and material/polishing technique (p < 0.05). The highest roughness was recorded when KD was used (Sa: 581.8 [62.1] for FS/KD, Sq: 748.7 [55.6] for FS/KD, Sz: 17.7 [2.7] for CM/KD, and St: 24.6 [6.8] for FS/KD), while the lowest was recorded after ML (Sa: 133.6 [68.9] for PM/ML, Sq: 256.5 [53.5] for ES/ML, Sz: 4.0 [1.3] for ES/ML, and St: 7.1 [0.7] for ES/ML). The highest gloss was recorded for PM/ML (88.4 [2.3]) and lowest for FS/KD (30.3 [5.7]). All roughness parameters were significantly correlated with gloss (r = 0.871, 0.846, 0.713, and 0.707 for Sa, Sq, Sz, and St, respectively). It was concluded that the polishing procedure and the type of composite can have significant impacts on surface roughness and gloss of resin composites.

  20. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    NASA Astrophysics Data System (ADS)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if values of some parameters are known.
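    The design point — extending scikit-learn's BaseEstimator so that its model-selection utilities work on a density estimator — can be illustrated with a toy wrapper. This is not the XDGMM implementation: it uses a plain GaussianMixture rather than extreme deconvolution, and all data below are invented:

```python
import numpy as np
from sklearn.base import BaseEstimator
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import GridSearchCV

class GMMDensity(BaseEstimator):
    """Toy density estimator: subclassing BaseEstimator provides get_params/
    set_params, so scikit-learn cross-validation can tune n_components by
    the held-out log-likelihood returned from score()."""

    def __init__(self, n_components=1):
        self.n_components = n_components

    def fit(self, X, y=None):
        self.gmm_ = GaussianMixture(n_components=self.n_components,
                                    random_state=0).fit(X)
        return self

    def score(self, X, y=None):
        return self.gmm_.score(X)  # mean per-sample log-likelihood

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 1, (200, 2)), rng.normal(3, 1, (200, 2))])

search = GridSearchCV(GMMDensity(), {"n_components": [1, 2, 4]}, cv=3).fit(X)
print(search.best_params_)  # the two-clump data should favor n_components >= 2
```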

  1. [MK-801 or DNQX reduces electroconvulsive shock-induced impairment of learning-memory and hyperphosphorylation of Tau in rats].

    PubMed

    Liu, Chao; Min, Su; Wei, Ke; Liu, Dong; Dong, Jun; Luo, Jie; Liu, Xiao-Bin

    2012-08-25

    This study explored the effect of excitatory amino acid receptor antagonists on the impairment of learning-memory and the hyperphosphorylation of Tau protein induced by electroconvulsive shock (ECT) in depressed rats, in order to provide experimental evidence for the study of neuropsychological mechanisms improving learning and memory impairment and for clinical intervention treatment. A factorial-design analysis of variance was set up with two intervention factors: electroconvulsive shock (two levels: no disposition; a course of ECT) and excitatory amino acid receptor antagonists (three levels: iv saline; iv NMDA receptor antagonist MK-801; iv AMPA receptor antagonist DNQX). Forty-eight adult Wistar-Kyoto (WKY) rats (an animal model for depressive behavior) were randomly divided into six experimental groups (n = 8 in each group): saline (iv 2 mL saline through the tail vein); MK-801 (iv 2 mL 5 mg/kg MK-801); DNQX (iv 2 mL 5 mg/kg DNQX); saline + ECT (iv 2 mL saline plus a course of ECT); MK-801 + ECT (iv 2 mL 5 mg/kg MK-801 plus a course of ECT); DNQX + ECT (iv 2 mL 5 mg/kg DNQX plus a course of ECT). The Morris water maze test started within 1 day after completion of the ECT course to evaluate learning and memory. The hippocampus was removed within 1 day after completion of the Morris water maze test. The content of glutamate in the hippocampus was detected by high performance liquid chromatography. The contents of Tau protein, which included Tau5 (total Tau protein), p-PHF1 (Ser396/404), p-AT8 (Ser199/202) and p-12E8 (Ser262), were detected by immunohistochemistry staining (SP) and Western blot. The results showed that ECT and the glutamate ionic receptor blockers (NMDA receptor antagonist MK-801 and AMPA receptor antagonist DNQX) induced impairment of learning and memory in depressed rats, with prolonged escape latency and shortened space-exploration time, and the two factors presented a subtractive effect. ECT significantly up-regulated the content of glutamate in the hippocampus of depressed rats, which was not affected by the glutamate ionic receptor blockers. ECT and the glutamate ionic receptor blockers did not affect total Tau protein in the hippocampus. ECT up-regulated the hyperphosphorylation of Tau protein in the hippocampus of depressed rats, while the glutamate ionic receptor blockers down-regulated it, and the combination of the two factors presented a subtractive effect. Our results indicate that ECT up-regulates the content of glutamate in the hippocampus of depressed rats, which up-regulates the hyperphosphorylation of Tau protein, resulting in the impairment of learning and memory in depressed rats.

  2. Wall-based measurement features provide an improved IVUS coronary artery risk assessment when fused with plaque texture-based features in a machine learning paradigm.

    PubMed

    Banchhor, Sumit K; Londhe, Narendra D; Araki, Tadashi; Saba, Luca; Radeva, Petia; Laird, John R; Suri, Jasjit S

    2017-12-01

    Planning of percutaneous interventional procedures involves a pre-screening and risk stratification of the coronary artery disease. Current screening tools use stand-alone plaque texture-based features and therefore lack the ability to stratify the risk. This IRB approved study presents a novel strategy for coronary artery disease risk stratification using an amalgamation of IVUS plaque texture-based and wall-based measurement features. Due to common genetic plaque makeup, carotid plaque burden was chosen as a gold standard for risk labels during the training phase of the machine learning (ML) paradigm. A cross-validation protocol was adopted to compute the accuracy of the ML framework. A set of 59 plaque texture-based features was padded with six wall-based measurement features to show the improvement in stratification accuracy. The ML system was executed using a principal component analysis-based framework for dimensionality reduction and used a support vector machine classifier for the training and testing phases. The ML system produced a stratification accuracy of 91.28%, demonstrating an improvement of 5.69% when wall-based measurement features were combined with plaque texture-based features. The fused system showed an improvement in mean sensitivity, specificity, positive predictive value, and area under the curve of 6.39%, 4.59%, 3.31% and 5.48%, respectively, when compared to the stand-alone system. While meeting the stability criterion of 5%, the ML system also showed a high average feature retaining power and mean reliability of 89.32% and 98.24%, respectively. The ML system showed an improvement in risk stratification accuracy when the wall-based measurement features were fused with the plaque texture-based features. Copyright © 2017 Elsevier Ltd. All rights reserved.
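    The ML system described (PCA for dimensionality reduction feeding a support vector machine, evaluated under a cross-validation protocol) can be sketched as below. The synthetic data, component count, and kernel choice are illustrative only, not the study's configuration:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for 59 texture-based + 6 wall-based measurement features
X, y = make_classification(n_samples=300, n_features=65, n_informative=12,
                           random_state=0)

# Standardize, reduce dimensionality with PCA, then classify with an SVM
pipe = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
acc = cross_val_score(pipe, X, y, cv=cv).mean()
print(f"cross-validated stratification accuracy: {acc:.3f}")
```

    Keeping PCA inside the pipeline ensures the components are re-fit within each training fold, so the reported accuracy is not optimistically biased.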

  3. A comparison of the short- and long-term effects of corticosterone exposure on extinction in adolescence versus adulthood.

    PubMed

    Den, Miriam Liora; Altmann, Sarah R; Richardson, Rick

    2014-12-01

    Human and nonhuman adolescents have impaired retention of extinction of learned fear, relative to juveniles and adults. It is unknown whether exposure to stress affects extinction differently in adolescents versus adults. These experiments compared the short- and long-term effects of exposure to the stress-related hormone corticosterone (CORT) on the extinction of learned fear in adolescent and adult rats. Across all experiments, adolescent and adult rats were trained to exhibit good extinction retention by giving extinction training across 2 consecutive days. Despite this extra training, adolescents exposed to 1 week of CORT (200 μg/ml) in their drinking water showed impaired extinction retention when trained shortly after the CORT was removed (Experiment 1a). In contrast, adult rats exposed to CORT (200 μg/ml) for the same duration did not exhibit deficits in extinction retention (Experiment 1b). Exposing adolescents to half the amount of CORT (100 μg/ml; Experiment 1c) for 1 week similarly disrupted extinction retention. Extinction impairments in adult rats were only observed after 3 weeks, rather than 1 week, of CORT (200 μg/ml; Experiment 1d). Remarkably, however, adult rats showed impaired extinction retention if they had been exposed to 1 week of CORT (200 μg/ml) during adolescence (Experiment 2). Finally, exposure to 3 weeks of CORT (200 μg/ml) in adulthood led to long-lasting extinction deficits after a 6-week drug-free period (Experiment 3). These findings suggest that although CORT disrupts both short- and long-term extinction retention in adolescents and adults, adolescents may be more vulnerable to these effects because of the maturation of stress-sensitive brain regions. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  4. Computer vision and machine learning for robust phenotyping in genome-wide studies

    PubMed Central

    Zhang, Jiaoping; Naik, Hsiang Sing; Assefa, Teshale; Sarkar, Soumik; Reddy, R. V. Chowda; Singh, Arti; Ganapathysubramanian, Baskar; Singh, Asheesh K.

    2017-01-01

    Traditional evaluation of crop biotic and abiotic stresses is time-consuming and labor-intensive, limiting the ability to dissect the genetic basis of quantitative traits. A machine learning (ML)-enabled image-phenotyping pipeline for the genetic studies of the abiotic stress iron deficiency chlorosis (IDC) of soybean is reported. IDC classification and severity for an association panel of 461 diverse plant-introduction accessions was evaluated using an end-to-end phenotyping workflow. The workflow consisted of a multi-stage procedure including: (1) optimized protocols for consistent image capture across plant canopies, (2) canopy identification and registration from cluttered backgrounds, (3) extraction of domain-expert-informed features from the processed images to accurately represent IDC expression, and (4) supervised ML-based classifiers that linked the automatically extracted features with expert-rating-equivalent IDC scores. ML-generated phenotypic data were subsequently utilized for the genome-wide association study and genomic prediction. The results illustrate the reliability and advantage of the ML-enabled image-phenotyping pipeline by identifying a previously reported locus and a novel locus harboring a gene homolog involved in iron acquisition. This study demonstrates a promising path for integrating the phenotyping pipeline into genomic prediction, and provides a systematic framework enabling robust and quicker phenotyping through ground-based systems. PMID:28272456
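    Steps (3)-(4) of the workflow — extracting domain-informed features from canopy images and linking them to expert-rating-equivalent scores with a supervised classifier — might look like this toy sketch. The images, features, and severity rule are all invented for illustration; the real pipeline's protocols are far more elaborate:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def chlorosis_features(img):
    """Toy domain-informed features from an RGB canopy image:
    fraction of yellowish pixels and mean green intensity."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    yellow = (r > 0.5) & (g > 0.5) & (b < 0.3)
    return [yellow.mean(), g.mean()]

def make_canopy(severity):
    """Synthetic canopy: higher IDC severity paints more yellow pixels."""
    img = rng.random((32, 32, 3)) * 0.3
    img[..., 1] += 0.4                       # green canopy baseline
    n = int(severity * 200)                  # severity drives yellowing
    ys, xs = rng.integers(0, 32, n), rng.integers(0, 32, n)
    img[ys, xs] = [0.8, 0.8, 0.1]
    return img

scores = rng.integers(0, 2, 200)             # expert-rating classes 0/1
X = np.array([chlorosis_features(make_canopy(0.2 + 0.6 * s)) for s in scores])

clf = RandomForestClassifier(random_state=0)
acc = cross_val_score(clf, X, scores, cv=5).mean()
print(f"feature-to-score classifier accuracy: {acc:.2f}")
```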

  5. An investigation on the interaction of DNA with hesperetin/apigenin in the presence of CTAB by resonance Rayleigh light scattering technique and its analytical application

    NASA Astrophysics Data System (ADS)

    Bi, Shuyun; Wang, Yu; Pang, Bo; Yan, Lili; Wang, Tianjiao

    2012-05-01

    Two new systems for measuring DNA at nanogram levels by a resonance Rayleigh light scattering (RLS) technique with a common spectrofluorometer were proposed. In the presence of cetyltrimethylammonium bromide (CTAB), the interaction of DNA with hesperetin and apigenin (two effective components of Chinese herbal medicine) could enhance RLS signals with the maximum peak at 363 and 433 nm respectively. The enhanced intensity of RLS was directly proportional to the concentration of DNA in the range of 0.022-4.4 μg/mL for the DNA-CTAB-hesperetin system and 0.013-4.4 μg/mL for the DNA-CTAB-apigenin system. The detection limit was 2.34 ng/mL and 2.97 ng/mL respectively. Synthetic samples were measured satisfactorily. The recovery of the DNA-CTAB-hesperetin system was 97.3-101.9% and that of the DNA-CTAB-apigenin system was 101.2-109.5%.
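    Calibration curves like these are typically fit by linear regression of signal on concentration, with the detection limit taken as three blank standard deviations over the slope. A sketch with made-up readings (the intensities and blank deviation below are assumptions, not the paper's data):

```python
import numpy as np

# Hypothetical calibration points: enhanced RLS intensity vs. DNA (ug/mL)
conc = np.array([0.022, 0.5, 1.0, 2.0, 4.4])
intensity = np.array([1.8, 40.5, 81.0, 162.2, 356.0])   # assumed readings

slope, intercept = np.polyfit(conc, intensity, 1)       # linear calibration fit
sd_blank = 0.063                                        # assumed blank std dev
lod = 3 * sd_blank / slope                              # 3-sigma detection limit
print(f"slope = {slope:.1f} per ug/mL, LOD = {lod * 1000:.2f} ng/mL")
```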

  6. Ultracentrifugation in the Concentration and Detection of Enteroviruses

    PubMed Central

    Cliver, Dean O.; Yeatman, John

    1965-01-01

    Ultracentrifugation has been evaluated as a method of concentrating enteroviruses from suspensions whose initial titers ranged from 1.7 × 10(8) to 1.6 × 10(-2) plaque-forming units (PFU) per ml. A technique employing a “trap” of 0.1 ml of 2% gelatin solution at the point at which the pellet forms in tubes for the number 30 and number 50 rotors of the Spinco model L preparative ultracentrifuge has been tested and found to have a number of advantages. Qualitative studies have been performed to determine the sensitivity of the ultracentrifuge technique in detecting the presence of enteroviruses in very dilute suspensions. There was found to be at least a 50% probability of detecting virus present initially at levels as low as 0.12 PFU per ml by means of the number 50 rotor. The input level for similar results with the number 30 rotor was found to be 0.025 PFU per ml. PMID:14325278
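    Detection probabilities of this kind are consistent with a single-hit Poisson picture: if PFU are randomly dispersed, the chance that at least one particle is present in the processed volume is 1 - e^(-cV). A back-of-envelope sketch (our simplification, not the authors' analysis; it assumes perfect recovery):

```python
import math

def p_detect(conc_pfu_per_ml, volume_ml):
    """Single-hit Poisson model: probability that at least one PFU
    is present in the processed volume (perfect recovery assumed)."""
    return 1.0 - math.exp(-conc_pfu_per_ml * volume_ml)

# For a 50% detection probability, the expected count c*V must equal ln 2;
# at 0.12 PFU/ml that corresponds to processing about 5.8 ml
vol_for_half = math.log(2) / 0.12
print(f"{p_detect(0.12, vol_for_half):.2f}")   # 0.50
print(f"{vol_for_half:.1f} ml")
```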

  7. Automatic Selection of Suitable Sentences for Language Learning Exercises

    ERIC Educational Resources Information Center

    Pilán, Ildikó; Volodina, Elena; Johansson, Richard

    2013-01-01

    In our study we investigated second and foreign language (L2) sentence readability, an area little explored so far in the case of several languages, including Swedish. The outcome of our research consists of two methods for sentence selection from native language corpora based on Natural Language Processing (NLP) and machine learning (ML)…

  8. Bt Toxin Cry1Ie Causes No Negative Effects on Survival, Pollen Consumption, or Olfactory Learning in Worker Honey Bees (Hymenoptera: Apidae).

    PubMed

    Dai, Ping-Li; Jia, Hui-Ru; Geng, Li-Li; Diao, Qing-Yun

    2016-04-27

    The honey bee (Apis mellifera L.) is a key nontarget insect in environmental risk assessments of insect-resistant genetically modified crops. In controlled laboratory conditions, we evaluated the potential effects of Cry1Ie toxin on survival, pollen consumption, and olfactory learning of young adult honey bees. We exposed worker bees to syrup containing 20, 200, or 20,000 ng/ml Cry1Ie toxin, and also exposed some bees to 48 ng/ml imidacloprid as a positive control for exposure to a sublethal concentration of a toxic product. Results suggested that Cry1Ie toxin carries no risk to survival, pollen consumption, or learning capabilities of young adult honey bees. However, during oral exposure to the imidacloprid treatments, honey bee learning behavior was affected and bees consumed significantly less pollen than the control and Cry1Ie groups. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Oral mask ventilation is more effective than face mask ventilation after nasal surgery.

    PubMed

    Yazicioğlu, Dilek; Baran, Ilkay; Uzumcugil, Filiz; Ozturk, Ibrahim; Utebey, Gulten; Sayın, M Murat

    2016-06-01

    To evaluate and compare the face mask (FM) and oral mask (OM) ventilation techniques during anesthesia emergence regarding tidal volume, leak volume, and difficult mask ventilation (DMV) incidence. Prospective, randomized, crossover study. Operating room, training and research hospital. American Society of Anesthesiologists physical status I and II adult patients scheduled for nasal surgery. Patients in group FM-OM received FM ventilation first, followed by OM ventilation, and patients in group OM-FM received OM ventilation first, followed by FM ventilation, with spontaneous ventilation after deep extubation. The FM ventilation was applied with the 1-handed EC-clamp technique. The OM was placed only over the mouth, and the 1-handed EC-clamp technique was used again. A child-size FM was used for the OM ventilation technique; the mask was rotated, and the inferior part of the mask was placed toward the nose. The leak volume (MVleak), mean airway pressure (Pmean), and expired tidal volume (TVe) were assessed with each mask technique for 3 consecutive breaths. A mask ventilation grade ≥3 was considered DMV. DMV occurred more frequently during FM ventilation (75% with FM vs 8% with OM). In the FM-first sequence, the mean TVe was 249 ± 61 mL with the FM and 455 ± 35 mL with the OM (P = .0001), whereas in the OM-first sequence, it was 276 ± 81 mL with the FM and 409 ± 37 mL with the OM (P = .0001). Regardless of the order used, the OM technique significantly decreased the MVleak and increased the TVe when compared to the FM technique. During anesthesia emergence after nasal surgery the OM may offer an effective ventilation method as it decreases the incidence of DMV and the gas leak around the mask and provides higher tidal volume delivery compared with FM ventilation. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. [Trapping techniques for Solenopsis invicta].

    PubMed

    Liang, Xiao-song; Zhang, Qiang; Zhuang, Yiong-lin; Li, Gui-wen; Ji, Lin-peng; Wang, Jian-guo; Dai, Hua-guo

    2007-06-01

    A field study was made to investigate the trapping effects of different attractants, traps, and wind directions on Solenopsis invicta. The results showed that among the test attractants, TB1 (50 g fishmeal, 40 g peptone, 10 ml 10% sucrose water solution and 20 ml soybean oil) had the best effect, followed by TB2 (ham), TB6 (100 g cornmeal and 20 ml soybean oil) and TB4 (10 ml 10% sucrose water solution, 100 g sugarcane powder and 20 ml soybean oil), with mean capture efficiencies of 77.6, 58.7, 29 and 7.7 individuals per trap, respectively. No S. invicta was trapped with TB3 (10 ml 10% sucrose water solution, 100 g cornmeal and 20 ml soybean oil) or TB5 (honey). The tube trap was superior to the dish trap, with trapping efficiencies of 75.2 and 35 individuals per trap, respectively. The attractants had better effects in leeward than in windward positions.

  11. How to Build a Functional Connectomic Biomarker for Mild Cognitive Impairment From Source Reconstructed MEG Resting-State Activity: The Combination of ROI Representation and Connectivity Estimator Matters.

    PubMed

    Dimitriadis, Stavros I; López, María E; Bruña, Ricardo; Cuesta, Pablo; Marcos, Alberto; Maestú, Fernando; Pereda, Ernesto

    2018-01-01

    Our work aimed to demonstrate the combination of machine learning and graph theory for the design of a connectomic biomarker for mild cognitive impairment (MCI) subjects using eyes-closed neuromagnetic recordings. The whole analysis was based on source-reconstructed neuromagnetic activity. As ROI representations, we employed the principal component analysis (PCA) and centroid approaches. As representative bi-variate connectivity estimators for the estimation of intra- and cross-frequency interactions, we adopted the phase locking value (PLV), its imaginary part (iPLV) and the correlation of the envelope (CorrEnv). Both intra-frequency and cross-frequency interactions (CFC) were estimated with the three connectivity estimators within the seven frequency bands (intra-frequency) and in pairs of bands (CFC), correspondingly. We demonstrated how different versions of functional connectivity graphs, single-layer (SL-FCG) and multi-layer (ML-FCG), can give us a different view of the functional interactions across brain areas. Finally, we applied machine learning techniques with the main scope of building a reliable connectomic biomarker, analyzing both SL-FCG and ML-FCG in two different ways: as a whole unit, using a tensorial extraction algorithm, and as single pair-wise coupling estimations. We concluded that the edge-weighted feature selection strategy outperformed the tensorial treatment of SL-FCG and ML-FCG. The highest classification performance was obtained with the centroid ROI representation and edge-weighted analysis of the SL-FCG, reaching 98% for the CorrEnv in α1:α2 and 94% for the iPLV in α2. Classification performance based on the multi-layer participation coefficient, a multiplexity index, reached 52% for iPLV and 52% for CorrEnv. The selected functional connections that build the multivariate connectomic biomarker in the edge-weighted scenario are located in the default-mode, fronto-parietal, and cingulo-opercular networks. Our analysis supports the notion of analyzing FCGs simultaneously in intra- and cross-frequency whole-brain interactions with various connectivity estimators in beamformed recordings.
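    The phase locking value used here as a connectivity estimator is the modulus of the mean phase-difference phasor between two narrow-band signals, with instantaneous phase commonly taken from the Hilbert analytic signal. A minimal sketch on synthetic 10 Hz signals (illustrative, not the study's source-space pipeline):

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase locking value: modulus of the mean phase-difference phasor."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
locked = np.sin(2 * np.pi * 10 * t)            # 10 Hz reference signal
coupled = np.sin(2 * np.pi * 10 * t + 0.7)     # constant phase lag -> high PLV
noise = rng.standard_normal(t.size)            # unrelated signal -> low PLV

print(f"coupled PLV: {plv(locked, coupled):.2f}")
print(f"noise PLV:   {plv(locked, noise):.2f}")
```

    The iPLV variant keeps only the imaginary part of the mean phasor, discarding zero-lag coupling that can be produced by volume conduction.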

  12. Modeling a Spatio-Temporal Individual Travel Behavior Using Geotagged Social Network Data: a Case Study of Greater Cincinnati

    NASA Astrophysics Data System (ADS)

    Saeedimoghaddam, M.; Kim, C.

    2017-10-01

    Understanding individual travel behavior is vital in travel demand management as well as in urban and transportation planning. New data sources, including mobile phone data and location-based social media (LBSM) data, allow us to understand mobility behavior at an unprecedented level of detail. Recent studies of trip purpose prediction tend to use machine learning (ML) methods, since they generally produce high levels of predictive accuracy. Few studies have used LBSM as a large data source to extend its potential in predicting individual travel destinations using ML techniques. In the presented research, we created a spatio-temporal probabilistic model based on an ensemble ML framework named "Random Forests", utilizing the travels extracted from geotagged Tweets in 419 census tracts of the Greater Cincinnati area, for predicting the tract ID of an individual's travel destination at any time using the information of its origin. We evaluated the model accuracy using the travels extracted from the Tweets themselves as well as travels from a household travel survey. Tweet- and survey-based travels that start from the same tract in the southwestern part of the study area are more likely to share the same destination than travels in other parts. Also, both Tweet- and survey-based travels were affected by the attraction points in downtown Cincinnati and the tracts in the northeastern part of the area. Finally, both evaluations show that the model predictions are acceptable, but the model cannot predict destinations using inputs from other data sources as precisely as with the Tweet-based data.
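    The destination-prediction setup — a Random Forest mapping an origin tract and a time of day to a destination tract — can be sketched on toy trips. The tract IDs, the time rule, and the sample sizes below are invented for illustration, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic trips: origin tract ID and hour of day -> destination tract ID
n = 2000
origin = rng.integers(0, 50, n)
hour = rng.integers(0, 24, n)
# Toy regularity: the destination is determined by origin and time of day
dest = (origin + (hour // 12)) % 50

X = np.column_stack([origin, hour])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X, dest, cv=5).mean()
print(f"destination prediction accuracy: {acc:.2f}")
```

    Real trips are far noisier than this deterministic rule, so reported accuracies are naturally lower; the sketch only shows the shape of the learning problem.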

  13. Considerations for automated machine learning in clinical metabolic profiling: Altered homocysteine plasma concentration associated with metformin exposure.

    PubMed

    Orlenko, Alena; Moore, Jason H; Orzechowski, Patryk; Olson, Randal S; Cairns, Junmei; Caraballo, Pedro J; Weinshilboum, Richard M; Wang, Liewei; Breitenstein, Matthew K

    2018-01-01

    With the maturation of metabolomics science and the proliferation of biobanks, clinical metabolic profiling is an increasingly opportunistic frontier for advancing translational clinical research. Automated Machine Learning (AutoML) approaches provide an exciting opportunity to guide feature selection in agnostic metabolic profiling endeavors, where potentially thousands of independent data points must be evaluated. In previous research, AutoML using high-dimensional data of varying types has been demonstrably robust, outperforming traditional approaches. However, considerations for application in clinical metabolic profiling remain to be evaluated, particularly regarding the robustness of AutoML in identifying and adjusting for common clinical confounders. In this study, we present a focused case study regarding AutoML considerations for using the Tree-based Pipeline Optimization Tool (TPOT) in metabolic profiling of exposure to metformin in a biobank cohort. First, we propose a tandem rank-accuracy measure to guide agnostic feature selection and corresponding threshold determination in clinical metabolic profiling endeavors. Second, while AutoML, using default parameters, demonstrated potential to lack sensitivity to low-effect confounding clinical covariates, we demonstrated residual training and adjustment of metabolite features as an easily applicable approach to ensure AutoML adjustment for potential confounding characteristics. Finally, we present increased homocysteine with long-term exposure to metformin as a potentially novel, non-replicated metabolite association suggested by TPOT; an association not identified in parallel clinical metabolic profiling endeavors. While warranting independent replication, our tandem rank-accuracy measure suggests homocysteine to be the metabolite feature with the largest effect, and corresponding priority for further translational clinical research. Residual training and adjustment for a potential confounding effect by BMI only slightly modified the suggested association. Increased homocysteine is thought to be associated with vitamin B12 deficiency, so evaluation for potential clinical relevance is suggested. While considerations for clinical metabolic profiling are recommended, including adjustment approaches for clinical confounders, AutoML presents an exciting tool to enhance clinical metabolic profiling and advance translational research endeavors.
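    The residual training and adjustment step — regressing each metabolite on the confounder and passing the residuals to the learner — can be sketched as follows. The cohort is synthetic, and plain logistic regression stands in for a TPOT-evolved pipeline:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic cohort: metformin exposure, a BMI confounder, 20 metabolites
n = 500
exposed = rng.integers(0, 2, n)
bmi = 27 + 3 * rng.standard_normal(n) + 1.5 * exposed      # BMI varies with exposure
metabolites = rng.standard_normal((n, 20))
metabolites[:, 0] += 1.2 * exposed + 0.1 * (bmi - 27)      # homocysteine-like feature

# Residual adjustment: regress each metabolite on the confounder, keep residuals
conf = bmi.reshape(-1, 1)
adj = metabolites - LinearRegression().fit(conf, metabolites).predict(conf)

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, adj, exposed, cv=5).mean()
print(f"accuracy on confounder-adjusted features: {acc:.2f}")
```

    After residualization, any signal the learner finds cannot be a linear artifact of the confounder, which is the point of the adjustment.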

  14. Calibrated delivery drape versus indirect gravimetric technique for the measurement of blood loss after delivery: a randomized trial.

    PubMed

    Ambardekar, Shubha; Shochet, Tara; Bracken, Hillary; Coyaji, Kurus; Winikoff, Beverly

    2014-08-15

    Trials of interventions for postpartum hemorrhage (PPH) prevention and treatment rely on different measurement methods for the quantification of blood loss and identification of PPH. This study's objective was to compare measures of blood loss obtained from two different measurement protocols frequently used in studies. Nine hundred women presenting for vaginal delivery were randomized to a direct method (a calibrated delivery drape) or an indirect method (a shallow bedpan placed below the buttocks and weighing the collected blood and blood-soaked gauze/pads). Blood loss was measured from immediately after delivery for at least one hour or until active bleeding stopped. Significantly greater mean blood loss was recorded by the direct than by the indirect measurement technique (253.9 mL and 195.3 mL, respectively; difference = 58.6 mL (95% CI: 31-86); p < 0.001). Almost twice as many women in the direct group as in the indirect group had measured blood loss > 500 mL (8.7% vs. 4.7%, p = 0.02). The study suggests a real and significant difference in blood loss measurement between these methods. Research using blood loss measurement as an endpoint needs to be interpreted taking measurement technique into consideration. This study has been registered at clinicaltrials.gov as NCT01885845.
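    Comparing the two arms' mean blood loss amounts to a two-sample test of means. A sketch using Welch's t-test on simulated data with the reported group means (the standard deviation and per-arm sizes are assumptions, not the trial's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated per-arm losses with the reported means (mL); SD and n are assumed
direct = rng.normal(253.9, 150.0, 450)
indirect = rng.normal(195.3, 150.0, 450)

t_stat, p = stats.ttest_ind(direct, indirect, equal_var=False)  # Welch's t-test
diff = direct.mean() - indirect.mean()
print(f"difference = {diff:.1f} mL, p = {p:.2g}")
```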

  15. Evaluation of mobile learning: Students' experiences in a new rural-based medical school

    PubMed Central

    2010-01-01

    Background: Mobile learning (ML) is an emerging educational method whose success depends on many factors including the ML device, physical infrastructure and user characteristics. At Gippsland Medical School (GMS), students are given a laptop at the commencement of their four-year degree. We evaluated the educational impact of the ML program from students' perspectives. Methods: Questionnaires and individual interviews explored students' experiences of ML. All students were invited to complete questionnaires. Convenience sampling was used for interviews. Quantitative data were entered into SPSS 17.0 and descriptive statistics computed. Free-text comments from questionnaires and transcriptions of interviews were thematically analysed. Results: Fifty students completed the questionnaire (response rate 88%). Six students participated in interviews. More than half the students owned a laptop prior to commencing studies, would recommend the laptop and took the laptop to GMS daily. Modal daily use of laptops was four hours. Most frequent use was for access to the internet and email, while the most frequently used applications were Microsoft Word and PowerPoint. Students appreciated the laptops for several reasons. The reduced financial burden was valued. Students were largely satisfied with the laptop specifications. Design elements of teaching spaces limited functionality. Although students valued aspects of the virtual learning environment (VLE), they also made many suggestions for improvement. Conclusions: Students reported many educational benefits from the school's provision of laptops, in particular the quick and easy access to electronic educational resources as and when they were needed. Improved design of physical facilities would enhance laptop use, together with a more logical layout of the VLE, new computer-based resources and activities promoting interaction. PMID:20701752

  16. Formal Verification at System Level

    NASA Astrophysics Data System (ADS)

    Mazzini, S.; Puri, S.; Mari, F.; Melatti, I.; Tronci, E.

    2009-05-01

    System Level Analysis calls for a language comprehensible to experts with different backgrounds and yet precise enough to support meaningful analyses. SysML is emerging as an effective balance between such conflicting goals. In this paper we outline some of the results on SysML-based system-level functional formal verification obtained in an ESA/ESTEC study carried out in collaboration between INTECS and La Sapienza University of Roma. The study focuses on SysML-based system-level functional requirements techniques.

  17. Five methods of breast volume measurement: a comparative study of measurements of specimen volume in 30 mastectomy cases.

    PubMed

    Kayar, Ragip; Civelek, Serdar; Cobanoglu, Murat; Gungor, Osman; Catal, Hidayet; Emiroglu, Mustafa

    2011-03-27

    To compare breast volume measurement techniques in terms of accuracy, convenience, and cost. Breast volumes of 30 patients who were scheduled to undergo total mastectomy surgery were measured preoperatively by using five different methods (mammography, anatomic [anthropometric], thermoplastic casting, the Archimedes procedure, and the Grossman-Roudner device). Specimen volume after total mastectomy was measured in each patient with the water displacement method (Archimedes). The results were compared statistically with the values obtained by the five different methods. The mean mastectomy specimen volume was 623.5 (range 150-1490) mL. The breast volume values were established to be 615.7 mL (r = 0.997) with the mammographic method, 645.4 mL (r = 0.975) with the anthropometric method, 565.8 mL (r = 0.934) with the Grossman-Roudner device, 583.2 mL (r = 0.989) with the Archimedes procedure, and 544.7 mL (r = 0.94) with the casting technique. Examination of r values revealed that the most accurate method was mammography for all volume ranges, followed by the Archimedes method. The present study demonstrated that the most accurate method of breast volume measurement is mammography, followed by the Archimedes method. However, when patient comfort, ease of application, and cost were taken into consideration, the Grossman-Roudner device and anatomic measurement were relatively less expensive, and easier methods with an acceptable degree of accuracy.

  18. A robust multilevel simultaneous eigenvalue solver

    NASA Technical Reports Server (NTRS)

    Costiner, Sorin; Taasan, Shlomo

    1993-01-01

    Multilevel (ML) algorithms for eigenvalue problems are often faced with several types of difficulties, such as the mixing of approximated eigenvectors by the solution process, the approximation of incomplete clusters of eigenvectors, the poor representation of solutions on coarse levels, and the existence of close or equal eigenvalues. Algorithms that do not appropriately treat these difficulties usually fail, or their performance degrades when facing them. These issues motivated the development of a robust adaptive ML algorithm that treats these difficulties for the calculation of a few eigenvectors and their corresponding eigenvalues. The main techniques used in the new algorithm include: the adaptive completion and separation of the relevant clusters on different levels, the simultaneous treatment of solutions within each cluster, and robustness tests which monitor the algorithm's efficiency and convergence. The eigenvectors' separation efficiency is based on a new ML projection technique generalizing the Rayleigh-Ritz projection, combined with a backrotation technique. These separation techniques, when combined with an FMG formulation, in many cases lead to algorithms of O(qN) complexity, for q eigenvectors of size N on the finest level. Previously developed ML algorithms are less focused on the mentioned difficulties. Moreover, algorithms which employ fine-level separation techniques are of O(q²N) complexity and usually do not overcome all these difficulties. Computational examples are presented where Schrödinger-type eigenvalue problems in 2-D and 3-D, having equal and closely clustered eigenvalues, are solved with the efficiency of the Poisson multigrid solver. A second-order approximation is obtained in O(qN) work, where the total computational work is equivalent to only a few fine-level relaxations per eigenvector.
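
    The Rayleigh-Ritz projection that the abstract generalizes can be illustrated with a minimal single-level subspace iteration in NumPy. This is a reference sketch only: the multilevel transfer operators, backrotations and cluster-completion logic of the paper are not reproduced, and the 1-D Laplacian test matrix is an assumption chosen because its spectrum is known in closed form.

```python
import numpy as np

def subspace_iteration(A, q, iters=200, seed=0):
    """Inverse subspace iteration with a Rayleigh-Ritz projection each sweep."""
    rng = np.random.default_rng(seed)
    V, _ = np.linalg.qr(rng.standard_normal((A.shape[0], q)))
    for _ in range(iters):
        W = np.linalg.solve(A, V)      # inverse iteration toward smallest modes
        V, _ = np.linalg.qr(W)         # re-orthonormalize the subspace
        H = V.T @ A @ V                # Rayleigh-Ritz: project A onto span(V)
        evals, S = np.linalg.eigh(H)   # small q-by-q eigenproblem
        V = V @ S                      # Ritz rotation separates clustered modes
    return evals, V

# 1-D Dirichlet Laplacian: exact eigenvalues are 2 - 2*cos(k*pi/(n+1))
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
evals, V = subspace_iteration(A, q=3)
exact = 2 - 2 * np.cos(np.arange(1, 4) * np.pi / (n + 1))
print(np.allclose(evals, exact, atol=1e-8))   # True
```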

  19. The jmzQuantML programming interface and validator for the mzQuantML data standard.

    PubMed

    Qi, Da; Krishna, Ritesh; Jones, Andrew R

    2014-03-01

    The mzQuantML standard from the HUPO Proteomics Standards Initiative has recently been released, capturing quantitative data about peptides and proteins, following analysis of MS data. We present a Java application programming interface (API) for mzQuantML called jmzQuantML. The API provides robust bridges between Java classes and elements in mzQuantML files and allows random access to any part of the file. The API provides read and write capabilities, and is designed to be embedded in other software packages, enabling mzQuantML support to be added to proteomics software tools (http://code.google.com/p/jmzquantml/). The mzQuantML standard is designed around a multilevel validation system to ensure that files are structurally and semantically correct for different proteomics quantitative techniques. In this article, we also describe a Java software tool (http://code.google.com/p/mzquantml-validator/) for validating mzQuantML files, which is a formal part of the data standard. © 2014 The Authors. Proteomics published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Quantification of glomerular filtration rate by measurement of gadobutrol clearance from the extracellular fluid volume: comparison of a TurboFLASH and a TrueFISP approach

    NASA Astrophysics Data System (ADS)

    Boss, Andreas; Martirosian, Petros; Artunc, Ferruh; Risler, Teut; Claussen, Claus D.; Schlemmer, Heinz-Peter; Schick, Fritz

    2007-03-01

    Purpose: As the MR contrast-medium gadobutrol is completely eliminated via glomerular filtration, the glomerular filtration rate (GFR) can be quantified after bolus-injection of gadobutrol and complete mixing in the extracellular fluid volume (ECFV) by measuring the signal decrease within the liver parenchyma. Two different navigator-gated single-shot saturation-recovery sequences have been tested for suitability of GFR quantification: a TurboFLASH and a TrueFISP readout technique. Materials and Methods: Ten healthy volunteers (mean age 26.1+/-3.6) were equally divided into two subgroups. After bolus-injection of 0.05 mmol/kg gadobutrol, coronal single-slice images of the liver were recorded every 4-5 seconds during free breathing using either the TurboFLASH or the TrueFISP technique. Time-intensity curves were determined from manually drawn regions-of-interest over the liver parenchyma. Both sequences were subsequently evaluated regarding signal to noise ratio (SNR) and the behaviour of signal intensity curves. The calculated GFR values were compared to an iopromide clearance gold standard. Results: The TrueFISP sequence exhibited a 3.4-fold higher SNR as compared to the TurboFLASH sequence and markedly lower variability of the recorded time-intensity curves. The calculated mean GFR values were 107.0+/-16.1 ml/min/1.73m2 (iopromide: 92.1+/-14.5 ml/min/1.73m2) for the TrueFISP technique and 125.6+/-24.1 ml/min/1.73m2 (iopromide: 97.7+/-6.3 ml/min/1.73m2) for the TurboFLASH approach. The mean paired difference with TrueFISP (15.0 ml/min/1.73m2) was lower than with the TurboFLASH method (27.9 ml/min/1.73m2). Conclusion: The global GFR can be quantified via measurement of gadobutrol clearance from the ECFV. A saturation-recovery TrueFISP sequence allows for more reliable GFR quantification than a saturation-recovery TurboFLASH technique.
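
    The clearance arithmetic behind this approach can be sketched as follows. After complete mixing, tracer concentration in the ECFV decays mono-exponentially, c(t) = c0·exp(-kt) with k = GFR/ECFV, so a log-linear fit of the decaying liver signal yields GFR = k·ECFV. The data points and the assumed ECFV of 15 L below are synthetic illustrations, not values from the study.

```python
import numpy as np

# Synthetic signal-decay curve: exact mono-exponential with k = 0.006 / min
t_min = np.array([10, 20, 30, 40, 50, 60], float)   # minutes after mixing
conc = 1.0 * np.exp(-0.006 * t_min)

# Log-linear fit recovers the elimination rate constant k
slope, intercept = np.polyfit(t_min, np.log(conc), 1)
k_per_min = -slope

ecfv_ml = 15000.0            # assumed extracellular fluid volume (mL)
gfr = k_per_min * ecfv_ml    # GFR = k * ECFV, in mL/min
print(f"GFR = {gfr:.0f} mL/min")   # 0.006 * 15000 = 90 mL/min
```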

  1. Statistical and Machine Learning forecasting methods: Concerns and ways forward

    PubMed Central

    Makridakis, Spyros; Assimakopoulos, Vassilios

    2018-01-01

    Machine Learning (ML) methods have been proposed in the academic literature as alternatives to statistical ones for time series forecasting. Yet, scant evidence is available about their relative performance in terms of accuracy and computational requirements. The purpose of this paper is to evaluate such performance across multiple forecasting horizons using a large subset of 1045 monthly time series used in the M3 Competition. After comparing the post-sample accuracy of popular ML methods with that of eight traditional statistical ones, we found that the former are dominated across both accuracy measures used and for all forecasting horizons examined. Moreover, we observed that their computational requirements are considerably greater than those of statistical methods. The paper discusses the results, explains why the accuracy of ML models is below that of statistical ones and proposes some possible ways forward. The empirical results found in our research stress the need for objective and unbiased ways to test the performance of forecasting methods that can be achieved through sizable and open competitions allowing meaningful comparisons and definite conclusions. PMID:29584784
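
    The evaluation protocol described above (fit on a training window, forecast an 18-step horizon, score post-sample accuracy with a symmetric error measure) can be sketched as below. Everything here is illustrative: the series is synthetic rather than M3 data, the statistical benchmark is a seasonal naive forecast, and the "ML" stand-in is an AR(12) fit by least squares rather than one of the paper's neural methods.

```python
import numpy as np

def smape(actual, forecast):
    """Symmetric MAPE in percent, as used for post-sample accuracy in M-competitions."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(2.0 * np.abs(f - a) / (np.abs(a) + np.abs(f)))

rng = np.random.default_rng(1)
t = np.arange(144)                       # 12 years of monthly observations
y = 50 + 0.3 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)
train, test = y[:-18], y[-18:]           # 18-step post-sample horizon

# Statistical benchmark: seasonal naive (repeat last year's observations)
naive_fc = np.tile(train[-12:], 2)[:18]

# "ML" stand-in: AR(12) fit by least squares, iterated multi-step forecasts
p = 12
X = np.column_stack([train[i:len(train) - p + i] for i in range(p)]
                    + [np.ones(len(train) - p)])
beta, *_ = np.linalg.lstsq(X, train[p:], rcond=None)
hist = list(train)
for _ in range(18):
    hist.append(np.dot(beta[:-1], hist[-p:]) + beta[-1])
ar_fc = np.array(hist[-18:])

print(round(smape(test, naive_fc), 2), round(smape(test, ar_fc), 2))
```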

  2. Accelerating Chemical Discovery with Machine Learning: Simulated Evolution of Spin Crossover Complexes with an Artificial Neural Network.

    PubMed

    Janet, Jon Paul; Chan, Lydia; Kulik, Heather J

    2018-03-01

    Machine learning (ML) has emerged as a powerful complement to simulation for materials discovery by reducing time for evaluation of energies and properties at accuracy competitive with first-principles methods. We use genetic algorithm (GA) optimization to discover unconventional spin-crossover complexes in combination with efficient scoring from an artificial neural network (ANN) that predicts spin-state splitting of inorganic complexes. We explore a compound space of over 5600 candidate materials derived from eight metal/oxidation state combinations and a 32-ligand pool. We introduce a strategy for error-aware ML-driven discovery by limiting how far the GA travels away from the nearest ANN training points while maximizing property (i.e., spin-splitting) fitness, leading to discovery of 80% of the leads from full chemical space enumeration. Over a 51-complex subset, average unsigned errors (4.5 kcal/mol) are close to the ANN's baseline 3 kcal/mol error. By obtaining leads from the trained ANN within seconds rather than days from a DFT-driven GA, this strategy demonstrates the power of ML for accelerating inorganic material discovery.

  3. Antibacterial and antifungal activity of Flindersine isolated from the traditional medicinal plant, Toddalia asiatica (L.) Lam.

    PubMed

    Duraipandiyan, V; Ignacimuthu, S

    2009-06-25

    The leaves and root of Toddalia asiatica (L.) Lam. (Rutaceae) are widely used as a folk medicine in India. Hexane, chloroform, ethyl acetate, methanol and water extracts of Toddalia asiatica leaves and the isolated compound Flindersine were tested against bacteria and fungi. Antibacterial and antifungal activities were tested using the disc-diffusion method and minimum inhibitory concentrations (MICs). The compound was confirmed using the X-ray crystallography technique. Antibacterial and antifungal activities were observed in the ethyl acetate extract. One active principle, Flindersine (2,6-dihydro-2,2-dimethyl-5H-pyrano [3,2-c] quinoline-5-one-9cl), was isolated from the ethyl acetate extract. The MIC values of the compound against bacteria Bacillus subtilis (31.25 microg/ml), Staphylococcus aureus (62.5 microg/ml), Staphylococcus epidermidis (62.5 microg/ml), Enterococcus faecalis (31.25 microg/ml), Pseudomonas aeruginosa (250 microg/ml), Acinetobacter baumannii (125 microg/ml) and fungi Trichophyton rubrum 57 (62.5 microg/ml), Trichophyton mentagrophytes (62.5 microg/ml), Trichophyton simii (62.5 microg/ml), Epidermophyton floccosum (62.5 microg/ml), Magnaporthe grisea (250 microg/ml) and Candida albicans (250 microg/ml) were determined. The ethyl acetate extract showed promising antibacterial and antifungal activity, and the isolated compound Flindersine showed moderate activity against bacteria and fungi.

  4. Long Term Outcomes of Laparoscopic and Open Modified Lich-Gregoir Reimplantation in Adults: A multicentric comparative study.

    PubMed

    Atar, Arda; Eksi, Mithat; Güler, Ahmet Faysal; Tuncer, Murat; Akkas, Fatih; Tugcu, Volkan

    2017-01-01

    Obstructive ureteral pathologies in adult patients are most commonly due to ureteral strictures and secondary to surgical interventions. In this study, we aimed to compare open and laparoscopic modified Lich-Gregoir ureteral reimplantation with regard to outcomes in benign ureteral pathologies in adult patients. Between December 2008 and December 2014, 32 open cases and 29 laparoscopic cases were performed as per the data retrieved from surgical databases. All laparoscopic procedures were performed in Bakirkoy Dr. Sadi Konuk Training and Research Hospital(BEAH) and all open ureteral reimplantation procedures in Kartal Dr Lutfi Kirdar Training and Research Hospital(KEAH) and Okmeydani Training and Research Hospital(OEAH). The mean operation time was significantly lower in the open surgery group (142.5 minutes versus 188.9 minutes; P< 0.0001). The mean duration of follow-up was longer in the laparoscopy group (31 versus 28 months; p< 0.0001). The mean amount of operation-associated blood loss was significantly lower in patients operated laparoscopically (93.7 mL versus 214 mL; P< 0.0001). The mean VAS score obtained six hours after surgery was 6.6 ± 0.8 in the open group, and 5.8 ± 0.7 in the laparoscopic group (p=0.0004). The mean VAS score measured at post-operative day 1 was 4.5 ± 0.7 in the open group and 3.7 ± 0.9 in the laparoscopy group. Time required to achieve the pre-operative capability of daily activities was significantly longer in the open group (15 ± 1.4 days vs 11 ± 1.4 days; p< 0.0001). Although open techniques provide a shorter operation time and laparoscopic techniques require a long learning curve, we think that laparoscopic techniques are superior to open ones, since they provide better post-operative comfort and are better tolerated in terms of complications.

  6. A performance model for GPUs with caches

    DOE PAGES

    Dao, Thanh Tuan; Kim, Jungwon; Seo, Sangmin; ...

    2014-06-24

    To exploit the abundant computational power of the world's fastest supercomputers, an even workload distribution to the typically heterogeneous compute devices is necessary. While relatively accurate performance models exist for conventional CPUs, accurate performance estimation models for modern GPUs do not exist. This paper presents two accurate models for modern GPUs: a sampling-based linear model, and a model based on machine-learning (ML) techniques which improves the accuracy of the linear model and is applicable to modern GPUs with and without caches. We first construct the sampling-based linear model to predict the runtime of an arbitrary OpenCL kernel. Based on an analysis of NVIDIA GPUs' scheduling policies we determine the earliest sampling points that allow an accurate estimation. The linear model cannot capture well the significant effects that memory coalescing or caching as implemented in modern GPUs have on performance. We therefore propose a model based on ML techniques that takes several compiler-generated statistics about the kernel as well as the GPU's hardware performance counters as additional inputs to obtain a more accurate runtime performance estimation for modern GPUs. We demonstrate the effectiveness and broad applicability of the model by applying it to three different NVIDIA GPU architectures and one AMD GPU architecture. On an extensive set of OpenCL benchmarks, on average, the proposed model estimates the runtime performance with less than 7 percent error for a second-generation GTX 280 with no on-chip caches and less than 5 percent for the Fermi-based GTX 580 with hardware caches. On the Kepler-based GTX 680, the linear model has an error of less than 10 percent. On an AMD GPU architecture, Radeon HD 6970, the model estimates with an 8 percent error rate. As a result, the proposed technique outperforms existing models by a factor of 5 to 6 in terms of accuracy.
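
    The sampling-based linear model idea reduces to fitting runtime = a·n + b from a few early samples and extrapolating to the full launch. A hedged sketch, with made-up timings standing in for measured samples:

```python
import numpy as np

# Hypothetical sampled runtimes (ms) at small workgroup counts
samples_n = np.array([64.0, 128.0, 256.0, 512.0])
samples_t = np.array([1.9, 3.1, 5.4, 10.2])

# Least-squares fit of runtime = a * n + b
A = np.column_stack([samples_n, np.ones_like(samples_n)])
(a, b), *_ = np.linalg.lstsq(A, samples_t, rcond=None)

full_n = 8192                      # full launch size to extrapolate to
pred_ms = a * full_n + b
print(f"predicted runtime for {full_n} groups: {pred_ms:.1f} ms")
```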

  7. Fe3O4/γ-Fe2O3 nanoparticle multilayers deposited by the Langmuir-Blodgett technique for gas sensors application.

    PubMed

    Capone, S; Manera, M G; Taurino, A; Siciliano, P; Rella, R; Luby, S; Benkovicova, M; Siffalovic, P; Majkova, E

    2014-02-04

    Fe3O4/γ-Fe2O3 nanoparticle (NP) based thin films were used as active layers in solid state resistive chemical sensors. NPs were synthesized by high temperature solution phase reaction. Sensing NP monolayers (ML) were deposited by the Langmuir-Blodgett (LB) technique onto chemoresistive transduction platforms. The sensing ML were UV-treated to remove the insulating NP capping. The sensor surface was characterized by scanning electron microscopy (SEM). Systematic gas sensing tests in controlled atmosphere were carried out toward NO2, CO, and acetone at different concentrations and working temperatures of the sensing layers. The best sensing performance was obtained for sensors with higher NP coverage (10 ML), mainly for NO2 gas, showing interesting selectivity toward nitrogen oxides. Electrical properties and conduction mechanisms are discussed.

  8. Using Machine Learning to Predict MCNP Bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grechanuk, Pavel Aleksandrovi

    For many real-world applications in radiation transport where simulations are compared to experimental measurements, like in nuclear criticality safety, the bias (simulated - experimental keff) in the calculation is an extremely important quantity used for code validation. The objective of this project is to accurately predict the bias of MCNP6 [1] criticality calculations using machine learning (ML) algorithms, with the intention of creating a tool that can complement the current nuclear criticality safety methods. In the latest release of MCNP6, the Whisper tool is available for criticality safety analysts and includes a large catalogue of experimental benchmarks, sensitivity profiles, and nuclear data covariance matrices. This data, coming from 1100+ benchmark cases, is used in this study of ML algorithms for criticality safety bias predictions.

  9. Bridging paradigms: hybrid mechanistic-discriminative predictive models.

    PubMed

    Doyle, Orla M; Tsaneva-Atansaova, Krasimira; Harte, James; Tiffin, Paul A; Tino, Peter; Díaz-Zuccarini, Vanessa

    2013-03-01

    Many disease processes are extremely complex and characterized by multiple stochastic processes interacting simultaneously. Current analytical approaches have included mechanistic models and machine learning (ML), which are often treated as orthogonal viewpoints. However, to facilitate truly personalized medicine, new perspectives may be required. This paper reviews the use of both mechanistic models and ML in healthcare as well as emerging hybrid methods, which are an exciting and promising approach for biologically based, yet data-driven advanced intelligent systems.

  10. Advanced Sine Wave Modulation of Continuous Wave Laser System for Atmospheric CO2 Differential Absorption Measurements

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F.; Lin, Bing; Nehrir, Amin R.

    2014-01-01

    NASA Langley Research Center, in collaboration with ITT Exelis, has been experimenting with a Continuous Wave (CW) laser absorption spectrometer (LAS) as a means of performing atmospheric CO2 column measurements from space to support the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission. Because the range-resolving Intensity Modulated (IM) CW lidar techniques presented here rely on matched filter correlations, autocorrelation properties without side lobes or other artifacts are highly desirable, since the autocorrelation function is critical for the measurements of lidar return powers, laser path lengths, and CO2 column amounts. In this paper modulation techniques are investigated that improve autocorrelation properties. The modulation techniques investigated in this paper include sine waves modulated by maximum length (ML) sequences in various hardware configurations. A CW lidar system using sine waves modulated by ML pseudo random noise codes is described, which uses a time shifting approach to separate channels and make multiple, simultaneous online/offline differential absorption measurements. Unlike the pure ML sequence, this technique is useful in hardware that is band pass filtered, as the IM sine wave carrier shifts the main power band. Both amplitude and Phase Shift Keying (PSK) modulated IM carriers are investigated that exhibit perfect autocorrelation properties down to one cycle per code bit. In addition, a method is presented to bandwidth limit the ML sequence based on a Gaussian filter implemented in terms of Jacobi theta functions that does not seriously degrade the resolution or introduce side lobes, as a means of reducing aliasing and IM carrier bandwidth.
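
    The ML (maximum-length) sequences underlying this modulation scheme can be generated with a linear feedback shift register, and their appeal for matched-filter ranging is the two-valued periodic autocorrelation: N at zero lag and -1 everywhere else after the usual 0 -> +1, 1 -> -1 mapping. A self-contained sketch (the n=7 tap choice is one known primitive polynomial, x^7 + x^6 + 1, not necessarily the code used in the instrument):

```python
def ml_sequence(n=7, taps=(7, 6)):
    """One period (2**n - 1 bits) of a maximum-length LFSR sequence."""
    state = [1] * n                  # any nonzero seed gives the same cycle
    out = []
    for _ in range(2 ** n - 1):
        out.append(state[-1])        # output the last stage
        fb = 0
        for t in taps:               # feedback = XOR of tapped stages
            fb ^= state[t - 1]
        state = [fb] + state[:-1]    # shift the register
    return out

def periodic_autocorr(bits, lag):
    """Periodic autocorrelation after mapping bits 0 -> +1, 1 -> -1."""
    N = len(bits)
    s = [1 - 2 * b for b in bits]
    return sum(s[i] * s[(i + lag) % N] for i in range(N))

seq = ml_sequence()
print(periodic_autocorr(seq, 0))    # 127: peak equals the code length N
print(periodic_autocorr(seq, 5))    # -1: flat off-peak, no side lobes
```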

  11. Technical note: In vitro total gas and methane production measurements from closed or vented rumen batch culture systems.

    PubMed

    Cattani, M; Tagliapietra, F; Maccarana, L; Hansen, H H; Bailoni, L; Schiavon, S

    2014-03-01

    This study compared measured gas production (GP) and computed CH4 production values provided by closed or vented bottles connected to gas collection bags. Two forages and 3 concentrates were incubated. Two incubations were conducted, where the 5 feeds were tested in 3 replicates in closed or vented bottles, plus 4 blanks, for a total of 64 bottles. Half of the bottles were not vented, and the others were vented at a fixed pressure (6.8 kPa) and gas was collected into one gas collection bag connected to each bottle. Each bottle (317 mL) was filled with 0.4000 ± 0.0010 g of feed sample and 60 mL of buffered rumen fluid (headspace volume = 257 mL) and incubated at 39.0°C for 24 h. At 24 h, gas samples were collected from the headspace of closed bottles or from headspace and bags of vented bottles and analyzed for CH4 concentration. Volumes of GP at 24 h were corrected for the gas dissolved in the fermentation fluid, according to Henry's law of gas solubility. Methane concentration (mL/100mL of GP) was measured and CH4 production (mL/g of incubated DM) was computed using corrected or uncorrected GP values. Data were analyzed for the effect of venting technique (T), feed (F), interaction between venting technique and feed (T × F), and incubation run as a random factor. Closed bottles provided lower uncorrected GP (-18%) compared with vented bottles, especially for concentrates. Correction for dissolved gas reduced but did not remove differences between techniques, and closed bottles (+25 mL of gas/g of incubated DM) had a greater magnitude of variation than did vented bottles (+1 mL of gas/g of incubated DM). Feeds differed in uncorrected and corrected GP, but the ranking was the same for the 2 techniques. The T × F interaction influenced uncorrected GP values, but this effect disappeared after correction. Closed bottles provided uncorrected CH4 concentrations 23% greater than that of vented bottles. Correction reduced but did not remove this difference. Methane concentration was influenced by feed but not by the T × F interaction. Corrected CH4 production was influenced by feed, but not by venting technique or the T × F interaction. Closed bottles provide good measurements of CH4 production but not of GP. Venting of bottles at low pressure permits a reliable evaluation of total GP and CH4 production. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
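
    The Henry's-law correction mentioned in the abstract amounts to estimating the gas dissolved in the 60 mL of fermentation fluid and adding it to the measured headspace gas. The solubility coefficient, headspace pressure, CO2 fraction and measured volume below are illustrative assumptions, not values from the study:

```python
# All numeric values are illustrative assumptions, not data from the study.
liquid_ml = 60.0        # buffered rumen fluid per bottle (per the abstract)
bunsen_co2 = 0.53       # assumed CO2 Bunsen coefficient at 39 C (mL/mL/atm)
headspace_atm = 1.067   # assumed closed-bottle headspace pressure (atm)
co2_fraction = 0.65     # assumed CO2 share of the fermentation gas

# Henry's law: dissolved gas scales with solubility, liquid volume and pressure
dissolved_ml = bunsen_co2 * liquid_ml * headspace_atm * co2_fraction

measured_gp_ml = 40.0   # hypothetical headspace gas measured at 24 h (mL)
corrected_gp_ml = measured_gp_ml + dissolved_ml
print(f"dissolved = {dissolved_ml:.1f} mL, corrected GP = {corrected_gp_ml:.1f} mL")
```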

  12. Tuning vertical alignment and field emission properties of multi-walled carbon nanotube bundles

    NASA Astrophysics Data System (ADS)

    Sreekanth, M.; Ghosh, S.; Srivastava, P.

    2018-01-01

    We report the growth of vertically aligned carbon nanotube bundles on Si substrate by thermal chemical vapor deposition technique. Vertical alignment was achieved without any carrier gas or lithography-assisted deposition. Growth has been carried out at 850 °C for different quantities of solution of xylene and ferrocene ranging from 2.25 to 3.00 ml in steps of 0.25 ml at a fixed concentration of 0.02 g (ferrocene) per ml. To understand the growth mechanism, deposition was carried out for different concentrations of the solution by changing only the ferrocene quantity, ranging from 0.01 to 0.03 g/ml. A tunable vertical alignment of multi-walled carbon nanotubes (CNTs) has been achieved by this process and examined by scanning and transmission electron microscopic techniques. Micro-crystalline structural analysis has been done using Raman spectroscopy. A systematic variation in field emission (FE) current density has been observed. The highest FE current density is seen for the film grown with 0.02 g/ml concentration, which is attributed to the better alignment of CNTs, less structural disorder and less entanglement of CNTs on the surface. The alignment of CNTs has been qualitatively understood on the basis of self-assembled catalytic particles.

  13. Monitoring Air Quality with Leaf Yeasts.

    ERIC Educational Resources Information Center

    Richardson, D. H. S.; And Others

    1985-01-01

    Proposes that leaf yeast serve as quick, inexpensive, and effective techniques for monitoring air quality. Outlines procedures and provides suggestions for data analysis. Includes results from sample school groups who employed this technique. (ML)

  14. An Investigation on the Influence of Hyaluronic Acid on Polidocanol Foam Stability.

    PubMed

    Chen, An-Wei; Liu, Yi-Ran; Li, Kai; Liu, Shao-Hua

    2016-01-01

    Foam sclerotherapy is an effective treatment strategy for varicose veins and venous malformations. Foam stability varies according to foam composition, volume, and injection technique. To evaluate the stability of polidocanol (POL) foam with the addition of hyaluronic acid (HA). Group A: 2 mL of 1% POL + 0 mL of 1% HA + 8 mL of air; Group B: 2 mL of 1% POL + 0.05 mL of 1% HA + 8 mL of air; Group C: 2 mL of 1% POL + 0.1 mL of 1% HA + 8 mL of air. Tessari's method was used for foam generation. The half-life, or the time for a volume of foam to be reduced to half of its original volume, was used to evaluate foam stability. Five recordings were made for each group. The half-life was 142.8 (±4.32) seconds for 1% POL without the addition of HA, 310.6 (±7.53) seconds with the addition of 0.05 mL of 1% HA, and 390.4 (±13.06) seconds with the addition of 0.1 mL of 1% HA. The stability of POL foam was highly increased by the addition of small amounts of HA.

  15. Fast Inference of Deep Neural Networks in FPGAs for Particle Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duarte, Javier; Han, Song; Harris, Philip

    Recent results at the Large Hadron Collider (LHC) have pointed to enhanced physics capabilities through the improvement of the real-time event processing techniques. Machine learning methods are ubiquitous and have proven to be very powerful in LHC physics, and particle physics as a whole. However, exploration of the use of such techniques in low-latency, low-power FPGA hardware has only just begun. FPGA-based trigger and data acquisition (DAQ) systems have extremely low, sub-microsecond latency requirements that are unique to particle physics. We present a case study for neural network inference in FPGAs focusing on a classifier for jet substructure which would enable, among many other physics scenarios, searches for new dark sector particles and novel measurements of the Higgs boson. While we focus on a specific example, the lessons are far-reaching. We develop a package based on High-Level Synthesis (HLS) called hls4ml to build machine learning models in FPGAs. The use of HLS increases accessibility across a broad user community and allows for a drastic decrease in firmware development time. We map out FPGA resource usage and latency versus neural network hyperparameters to identify the problems in particle physics that would benefit from performing neural network inference with FPGAs. For our example jet substructure model, we fit well within the available resources of modern FPGAs with a latency on the scale of 100 ns.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cuocolo, A.; Esposito, S.; Volpe, M.

    We studied 11 hypertensive patients by a radionuclide technique using Gates' method with (99mTc)DTPA to investigate the acute effects of captopril on glomerular filtration rate (GFR). Five patients had hypertension with unilateral renal artery stenosis (RAS) angiographically documented and six patients had essential hypertension (EH). Total and split GFR were determined under control conditions and after oral administration of captopril (50 mg). In the patients with RAS, captopril induced a significant decrease of GFR in the stenotic kidneys (from 42.4 +/- 4 to 29.6 +/- 3 ml/min, p less than 0.01), while no changes were observed in the nonstenotic kidneys (from 61.2 +/- 3 to 61.6 +/- 5 ml/min, NS). Total GFR was 103.6 +/- 5 ml/min under control conditions and decreased to 91.8 +/- 6 ml/min after captopril (p less than 0.05). No significant changes of GFR were detected after captopril administration in patients with EH. In a separate group of ten patients with EH, good correlation between 24-hr creatinine clearance and fractional uptake of (99mTc)DTPA was obtained. Good reproducibility of this radionuclide technique was also shown. This study demonstrates that the computed radionuclide GFR determination coupled with the captopril test allows one to unmask angiotensin II-dependent renal function and hemodynamic changes. This technique can be useful in clinical practice for identifying patients with renovascular hypertension.

  17. Quantitative stress measurement of elastic deformation using mechanoluminescent sensor: An intensity ratio model

    NASA Astrophysics Data System (ADS)

    Cai, Tao; Guo, Songtao; Li, Yongzeng; Peng, Di; Zhao, Xiaofeng; Liu, Yingzheng

    2018-04-01

    The mechanoluminescent (ML) sensor is a newly developed non-invasive technique for stress/strain measurement. However, its application has been mostly restricted to qualitative measurement due to the lack of a well-defined relationship between ML intensity and stress. To achieve accurate stress measurement, an intensity ratio model was proposed in this study to establish a quantitative relationship between the stress condition and its ML intensity in elastic deformation. To verify the proposed model, experiments were carried out on a ML measurement system using resin samples mixed with the sensor material SrAl2O4:Eu2+, Dy3+. The ML intensity ratio was found to be dependent on the applied stress and strain rate, and the relationship acquired from the experimental results agreed well with the proposed model. The current study provided a physical explanation for the relationship between ML intensity and its stress condition. The proposed model is applicable to various SrAl2O4:Eu2+, Dy3+-based ML measurements in elastic deformation, and could provide a useful reference for quantitative stress measurement using the ML sensor in general.

  18. Partial stapled hemorrhoidopexy: a minimally invasive technique for hemorrhoids.

    PubMed

    Lin, Hong-Cheng; He, Qiu-Lan; Ren, Dong-Lin; Peng, Hui; Xie, Shang-Kui; Su, Dan; Wang, Xiao-Xue

    2012-09-01

    This study was designed to assess the safety, efficacy, and postoperative outcomes of partial stapled hemorrhoidopexy (PSH). A prospective study was conducted between February and March 2010. PSH was performed with single-window anoscopes for single isolated hemorrhoids, bi-window anoscopes for two isolated hemorrhoids, and tri-window anoscopes for three isolated hemorrhoids or circumferential hemorrhoids. The data pertaining to demographics, preoperative characteristics and postoperative outcomes were collected and analyzed. Forty-four eligible patients underwent PSH. Single-window anoscopes were used in 2 patients, and bi- and tri-window anoscopes in 6 and 36 patients. The blood loss in patients with single-window, bi-window, and tri-window anoscopes was 6.0 ml (range 5.0-7.0 ml), 5.0 ml (range 5.0-6.5 ml), and 5.0 ml (4.5-14.5 ml) (P = 0.332). The mean postoperative visual analog scale score for pain was 3 (range, 1-4), 2 (range 1-4), 3 (range 2-6), 1 (range 0-3), 1 (range 0-2) and 2 (range 2-4) at 12 h, days 1, 2, 3, and 7, and at first defecation. The rate of urgency was 9.1%. No patients developed anal incontinence or stenosis. The 1-year recurrence rate of prolapsing hemorrhoids was 2.3%. Partial stapled hemorrhoidopexy appears to be a safe and effective technique for grade III-IV hemorrhoids. Encouragingly, PSH is associated with mild postoperative pain, few urgency episodes, and no stenosis or anal incontinence.

  19. Courseware Development Model (CDM): The Effects of CDM on Primary School Pre-Service Teachers' Achievements and Attitudes

    ERIC Educational Resources Information Center

    Efendioglu, Akin

    2012-01-01

    The main purpose of this study is to design a "Courseware Development Model" (CDM) and investigate its effects on pre-service teachers' academic achievements in the field of geography and attitudes toward computer-based education (ATCBE). The CDM consisted of three components: content (C), learning theory, namely, meaningful learning (ML), and…

  20. Combining Human and Machine Learning to Map Cropland in the 21st Century's Major Agricultural Frontier

    NASA Astrophysics Data System (ADS)

    Estes, L. D.; Debats, S. R.; Caylor, K. K.; Evans, T. P.; Gower, D.; McRitchie, D.; Searchinger, T.; Thompson, D. R.; Wood, E. F.; Zeng, L.

    2016-12-01

    In the coming decades, large areas of new cropland will be created to meet the world's rapidly growing food demands. Much of this new cropland will be in sub-Saharan Africa, where food needs will increase most and the area of remaining potential farmland is greatest. If we are to understand the impacts of global change, it is critical to accurately identify Africa's existing croplands and how they are changing. Yet the continent's smallholder-dominated agricultural systems are unusually challenging for remote sensing analyses, making accurate area estimates difficult to obtain, let alone important details related to field size and geometry. Fortunately, the rapidly growing archives of moderate to high-resolution satellite imagery hosted on open servers now offer an unprecedented opportunity to improve landcover maps. We present a system that integrates two critical components needed to capitalize on this opportunity: 1) human image interpretation and 2) machine learning (ML). Human judgment is needed to accurately delineate training sites within noisy imagery and a highly variable cover type, while ML provides the ability to scale and to interpret large feature spaces that defy human comprehension. Because large amounts of training data are needed (a major impediment for analysts), we use a crowdsourcing platform that connects amazon.com's Mechanical Turk service to satellite imagery hosted on open image servers. Workers map visible fields at pre-assigned sites, and are paid according to their mapping accuracy. Initial tests show overall high map accuracy and mapping rates >1800 km2/hour. The ML classifier uses random forests and randomized quasi-exhaustive feature selection, and is highly effective in classifying diverse agricultural types in southern Africa (AUC > 0.9). We connect the ML and crowdsourcing components to make an interactive learning framework. 
The ML algorithm performs an initial classification using a first batch of crowd-sourced maps, with thresholds on the posterior probabilities segregating sub-images classified with high or low confidence. Workers are then directed to collect new training data in the low-confidence sub-images, after which classification is repeated and re-assessed, and the entire process is iterated until the maximum possible accuracy is realized.
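
The iterate-on-low-confidence loop can be sketched as follows (a toy stand-in: a 1-D logistic "classifier" replaces the random-forest stage, and the confidence threshold is illustrative):

```python
import math
import random

random.seed(0)

def posterior(x, boundary=0.5):
    """Toy posterior: confidence grows with distance from the boundary."""
    return 1 / (1 + math.exp(-4 * (x - boundary)))

def split_by_confidence(samples, thresh=0.8):
    """Route high-confidence items onward; send the rest back to workers."""
    confident, uncertain = [], []
    for x in samples:
        p = posterior(x)
        (confident if max(p, 1 - p) >= thresh else uncertain).append(x)
    return confident, uncertain

sub_images = [random.uniform(0, 1) for _ in range(200)]
confident, uncertain = split_by_confidence(sub_images)
# 'uncertain' sub-images would be re-posted for crowd labelling, the
# classifier retrained, and the split recomputed until accuracy plateaus.
```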

  1. Smart Training, Smart Learning: The Role of Cooperative Learning in Training for Youth Services.

    ERIC Educational Resources Information Center

    Doll, Carol A.

    1997-01-01

    Examines cooperative learning in youth services and adult education. Discusses characteristics of cooperative learning techniques; specific cooperative learning techniques (brainstorming, mini-lecture, roundtable technique, send-a-problem problem solving, talking chips technique, and three-step interview); and the role of the trainer. (AEF)

  2. 3D CT cerebral angiography technique using a 320-detector machine with a time-density curve and low contrast medium volume: comparison with fixed time delay technique.

    PubMed

    Das, K; Biswas, S; Roughley, S; Bhojak, M; Niven, S

    2014-03-01

    To describe a cerebral computed tomography angiography (CTA) technique using a 320-detector CT machine and a small contrast medium volume (35 ml, 15 ml for test bolus). Also, to compare the quality of these images with that of the images acquired using a larger contrast medium volume (90 or 120 ml) and a fixed time delay (FTD) of 18 s using a 16-detector CT machine. Cerebral CTA images were acquired using a 320-detector machine by synchronizing the scanning time with the time of peak enhancement as determined from the time-density curve (TDC) using a test bolus dose. The quality of CTA images acquired using this technique was retrospectively compared with that obtained using a FTD of 18 s (by 16-detector CT). Average densities in four different intracranial arteries, overall opacification of arteries, and the degree of venous contamination were graded and compared. Thirty-eight patients were scanned using the TDC technique and 40 patients using the FTD technique. The arterial densities achieved by the TDC technique were higher (significant for supraclinoid and basilar arteries, p < 0.05). The proportion of images deemed as having "good" arterial opacification was 95% for TDC and 90% for FTD. The degree of venous contamination was significantly higher in images produced by the FTD technique (p < 0.001). Good diagnostic quality CTA images with significant reduction of venous contamination can be achieved with a low contrast medium dose using a 320-detector machine by coupling the time of data acquisition with the time of peak enhancement. Copyright © 2013 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
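
    The timing step can be sketched as follows (illustrative numbers only, not the paper's data: the test-bolus time-density curve is sampled and the full acquisition is triggered at the time of peak arterial enhancement):

```python
def peak_enhancement_time(times_s, densities_hu):
    """Return the sample time at which arterial density (HU) peaks."""
    return times_s[densities_hu.index(max(densities_hu))]

# hypothetical test-bolus samples: time (s) vs. arterial density (HU)
times = [10, 12, 14, 16, 18, 20, 22]
densities = [40, 95, 210, 260, 230, 180, 120]
delay = peak_enhancement_time(times, densities)   # scan delay in seconds
```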

  3. Why so GLUMM? Detecting depression clusters through graphing lifestyle-environs using machine-learning methods (GLUMM).

    PubMed

    Dipnall, J F; Pasco, J A; Berk, M; Williams, L J; Dodd, S; Jacka, F N; Meyer, D

    2017-01-01

    Key lifestyle-environ risk factors are operative for depression, but it is unclear how risk factors cluster. Machine-learning (ML) algorithms exist that learn, extract, identify and map underlying patterns to identify groupings of depressed individuals without constraints. The aim of this research was to use a large epidemiological study to identify and characterise depression clusters through "Graphing lifestyle-environs using machine-learning methods" (GLUMM). Two ML algorithms were implemented: an unsupervised self-organising map (SOM) to create GLUMM clusters and a supervised boosted regression algorithm to describe clusters. Ninety-six "lifestyle-environ" variables were used from the National Health and Nutrition Examination Survey (2009-2010). Multivariate logistic regression validated clusters and controlled for possible sociodemographic confounders. The SOM identified two GLUMM cluster solutions. These solutions contained one dominant depressed cluster (GLUMM5-1, GLUMM7-1). Equal proportions of members in each cluster rated as highly depressed (17%). Alcohol consumption and demographics validated clusters. Boosted regression identified GLUMM5-1 as more informative than GLUMM7-1. Members were more likely to: have problems sleeping; eat unhealthily; have spent ≤2 years in their home; live in an old home; perceive themselves as underweight; be exposed to work fumes; have experienced sex at ≤14 years; and not perform moderate recreational activities. A positive relationship of GLUMM5-1 (OR: 7.50, P<0.001) and GLUMM7-1 (OR: 7.88, P<0.001) with depression was found, with significant interactions for those married/living with a partner (P=0.001). Using ML-based GLUMM to form ordered depressive clusters from multitudinous lifestyle-environ variables enabled a deeper exploration of the heterogeneous data, uncovering a better understanding of the relationships between complex mental health factors. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
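
    A minimal self-organising map, in the spirit of GLUMM's first (unsupervised) stage, can be sketched in a few lines (toy 2-D data and hyperparameters; the study itself clustered 96 survey variables):

```python
import math
import random

random.seed(1)

def train_som(data, n_units=4, epochs=60, lr0=0.5, sigma0=1.5):
    """Train a 1-D chain of units; each input pulls its best-matching
    unit (and, early on, that unit's neighbours) toward itself."""
    dim = len(data[0])
    units = [[random.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                  # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.1      # shrinking neighbourhood
        for x in data:
            b = min(range(n_units),
                    key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))
            for i in range(n_units):
                h = math.exp(-((i - b) ** 2) / (2 * sigma ** 2))
                units[i] = [u + lr * h * (v - u) for u, v in zip(units[i], x)]
    return units

def assign(units, x):
    """Cluster label = index of the best-matching unit."""
    return min(range(len(units)),
               key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))

# two separable blobs stand in for "lifestyle-environ" profiles
data = ([[random.gauss(0.2, 0.05), random.gauss(0.2, 0.05)] for _ in range(30)]
        + [[random.gauss(0.8, 0.05), random.gauss(0.8, 0.05)] for _ in range(30)])
units = train_som(data)
```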

  4. How to Improve Fault Tolerance in Disaster Predictions: A Case Study about Flash Floods Using IoT, ML and Real Data.

    PubMed

    Furquim, Gustavo; Filho, Geraldo P R; Jalali, Roozbeh; Pessin, Gustavo; Pazzi, Richard W; Ueyama, Jó

    2018-03-19

    The rise in the number and intensity of natural disasters is a serious problem that affects the whole world. The consequences of these disasters are significantly worse when they occur in urban districts because of the casualties and extent of the damage to goods and property that is caused. Until now feasible methods of dealing with this have included the use of wireless sensor networks (WSNs) for data collection and machine-learning (ML) techniques for forecasting natural disasters. However, there have recently been some promising new innovations in technology which have supplemented the task of monitoring the environment and carrying out the forecasting. One of these schemes involves adopting IP-based (Internet Protocol) sensor networks, by using emerging patterns for IoT. In light of this, in this study, an attempt has been made to set out and describe the results achieved by SENDI (System for dEtecting and forecasting Natural Disasters based on IoT). SENDI is a fault-tolerant system based on IoT, ML and WSN for the detection and forecasting of natural disasters and the issuing of alerts. The system was modeled by means of ns-3 and data collected by a real-world WSN installed in the town of São Carlos - Brazil, which carries out the data collection from rivers in the region. The fault-tolerance is embedded in the system by anticipating the risk of communication breakdowns and the destruction of the nodes during disasters. It operates by adding intelligence to the nodes to carry out the data distribution and forecasting, even in extreme situations. A case study is also included for flash flood forecasting and this makes use of the ns-3 SENDI model and data collected by WSN.

  5. Can-Evo-Ens: Classifier stacking based evolutionary ensemble system for prediction of human breast cancer using amino acid sequences.

    PubMed

    Ali, Safdar; Majid, Abdul

    2015-04-01

    The diagnosis of human breast cancer is an intricate process, and specific indicators may produce negative results. To avoid misleading results, an accurate and reliable diagnostic system for breast cancer is indispensable. Recently, several interesting machine-learning (ML) approaches have been proposed for the prediction of breast cancer. To this end, we developed a novel classifier-stacking-based evolutionary ensemble system, "Can-Evo-Ens", for predicting amino acid sequences associated with breast cancer. In this paper, we first selected four diverse types of ML algorithms, Naïve Bayes, K-Nearest Neighbor, Support Vector Machines, and Random Forest, as base-level classifiers. These classifiers are trained individually in different feature spaces using physicochemical properties of amino acids. In order to exploit the decision spaces, the preliminary predictions of the base-level classifiers are stacked. Genetic programming (GP) is then employed to develop a meta-classifier that optimally combines the predictions of the base classifiers. The most suitable threshold value of the best-evolved predictor is computed using the Particle Swarm Optimization technique. Our experiments have demonstrated the robustness of the Can-Evo-Ens system on an independent validation dataset. The proposed system achieved the highest area under the ROC curve (AUC) of 99.95% for cancer prediction. The comparative results revealed that the proposed approach is better than individual ML approaches and the conventional ensemble approaches AdaBoostM1, Bagging, GentleBoost, and Random Subspace. It is expected that the proposed novel system would have a major impact on the fields of biomedicine, genomics, proteomics, bioinformatics, and drug development. Copyright © 2015 Elsevier Inc. All rights reserved.
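
    The stacking structure can be sketched as follows (heavily simplified: the GP-evolved meta-classifier is replaced by plain averaging, and the PSO threshold search by a grid search; only the architecture mirrors the paper, and all scores are invented):

```python
def meta_score(base_scores):
    """Simplified meta-classifier: average of base-level probabilities."""
    return sum(base_scores) / len(base_scores)

def best_threshold(scores, labels, grid=None):
    """Pick the decision threshold maximising accuracy (grid search
    standing in for the paper's Particle Swarm Optimization)."""
    grid = grid or [i / 100 for i in range(1, 100)]
    def acc(t):
        return sum((s >= t) == bool(y) for s, y in zip(scores, labels)) / len(labels)
    return max(grid, key=acc)

# stacked predictions from four hypothetical base classifiers (NB, KNN, SVM, RF)
stacked = [(0.9, 0.8, 0.95, 0.85), (0.2, 0.4, 0.1, 0.3),
           (0.6, 0.7, 0.65, 0.55), (0.3, 0.2, 0.25, 0.45)]
labels = [1, 0, 1, 0]
scores = [meta_score(s) for s in stacked]
t = best_threshold(scores, labels)
```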

  6. Automated clinical trial eligibility prescreening: increasing the efficiency of patient identification for clinical trials in the emergency department

    PubMed Central

    Ni, Yizhao; Kennebeck, Stephanie; Dexheimer, Judith W; McAneney, Constance M; Tang, Huaxiu; Lingren, Todd; Li, Qi; Zhai, Haijun; Solti, Imre

    2015-01-01

    Objectives (1) To develop an automated eligibility screening (ES) approach for clinical trials in an urban tertiary care pediatric emergency department (ED); (2) to assess the effectiveness of natural language processing (NLP), information extraction (IE), and machine learning (ML) techniques on real-world clinical data and trials. Data and methods We collected eligibility criteria for 13 randomly selected, disease-specific clinical trials actively enrolling patients between January 1, 2010 and August 31, 2012. In parallel, we retrospectively selected data fields including demographics, laboratory data, and clinical notes from the electronic health record (EHR) to represent profiles of all 202795 patients visiting the ED during the same period. Leveraging NLP, IE, and ML technologies, the automated ES algorithms identified patients whose profiles matched the trial criteria to reduce the pool of candidates for staff screening. The performance was validated on both a physician-generated gold standard of trial–patient matches and a reference standard of historical trial–patient enrollment decisions, where workload, mean average precision (MAP), and recall were assessed. Results Compared with the case without automation, the workload with automated ES was reduced by 92% on the gold standard set, with a MAP of 62.9%. The automated ES achieved a 450% increase in trial screening efficiency. The findings on the gold standard set were confirmed by large-scale evaluation on the reference set of trial–patient matches. Discussion and conclusion By exploiting the text of trial criteria and the content of EHRs, we demonstrated that NLP-, IE-, and ML-based automated ES could successfully identify patients for clinical trials. PMID:25030032
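
    The two headline metrics can be computed as follows (a hedged sketch with made-up ranked lists and chart counts: MAP averages the per-trial average precision of the ranked candidate patients, and workload reduction is the fraction of charts staff no longer screen):

```python
def average_precision(ranked, relevant):
    """AP of one ranked candidate list against the set of true matches."""
    hits, score = 0, 0.0
    for rank, pid in enumerate(ranked, start=1):
        if pid in relevant:
            hits += 1
            score += hits / rank
    return score / len(relevant) if relevant else 0.0

# hypothetical ranked candidates and true enrolments for two trials
trials = [(["p1", "p2", "p3", "p4"], {"p1", "p3"}),
          (["p9", "p7", "p8"], {"p7"})]
mean_ap = sum(average_precision(r, rel) for r, rel in trials) / len(trials)

# hypothetical chart counts, chosen to be consistent with a ~92% reduction
screened, total = 16000, 202795
workload_reduction = 1 - screened / total
```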

  7. How to Improve Fault Tolerance in Disaster Predictions: A Case Study about Flash Floods Using IoT, ML and Real Data

    PubMed Central

    Furquim, Gustavo; Filho, Geraldo P. R.; Pessin, Gustavo; Pazzi, Richard W.

    2018-01-01

    The rise in the number and intensity of natural disasters is a serious problem that affects the whole world. The consequences of these disasters are significantly worse when they occur in urban districts because of the casualties and extent of the damage to goods and property that is caused. Until now feasible methods of dealing with this have included the use of wireless sensor networks (WSNs) for data collection and machine-learning (ML) techniques for forecasting natural disasters. However, there have recently been some promising new innovations in technology which have supplemented the task of monitoring the environment and carrying out the forecasting. One of these schemes involves adopting IP-based (Internet Protocol) sensor networks, by using emerging patterns for IoT. In light of this, in this study, an attempt has been made to set out and describe the results achieved by SENDI (System for dEtecting and forecasting Natural Disasters based on IoT). SENDI is a fault-tolerant system based on IoT, ML and WSN for the detection and forecasting of natural disasters and the issuing of alerts. The system was modeled by means of ns-3 and data collected by a real-world WSN installed in the town of São Carlos - Brazil, which carries out the data collection from rivers in the region. The fault-tolerance is embedded in the system by anticipating the risk of communication breakdowns and the destruction of the nodes during disasters. It operates by adding intelligence to the nodes to carry out the data distribution and forecasting, even in extreme situations. A case study is also included for flash flood forecasting and this makes use of the ns-3 SENDI model and data collected by WSN. PMID:29562657

  8. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.

    PubMed

    Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-09-13

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
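
    The delay-estimation core can be sketched as follows (plain time-domain cross-correlation on synthetic signals; the paper's contribution, the modified ML prefilter with a regularisation factor, is applied to the spectra before this step and is omitted here, as is its multi-pipe location formula):

```python
def xcorr_delay(x, y, max_lag):
    """Return the lag (in samples) that maximises the cross-correlation."""
    def corr(lag):
        return sum(x[i] * y[i + lag] for i in range(len(x))
                   if 0 <= i + lag < len(y))
    return max(range(-max_lag, max_lag + 1), key=corr)

def leak_position(delta_t_s, sensor_spacing_m, wave_speed_m_s):
    """Distance of the leak from sensor 1, assuming one uniform pipe."""
    return (sensor_spacing_m - wave_speed_m_s * delta_t_s) / 2

fs = 1000.0                                   # assumed sample rate (Hz)
pulse   = [0, 0, 1, 3, 1, 0, 0, 0, 0, 0]      # leak noise at sensor 1
delayed = [0, 0, 0, 0, 0, 1, 3, 1, 0, 0]      # same burst, 3 samples later
lag = xcorr_delay(pulse, delayed, max_lag=5)
d1 = leak_position(lag / fs, sensor_spacing_m=50.0, wave_speed_m_s=1200.0)
```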

  9. Estimating Mixing Heights Using Microwave Temperature Profiler

    NASA Technical Reports Server (NTRS)

    Nielson-Gammon, John; Powell, Christina; Mahoney, Michael; Angevine, Wayne

    2008-01-01

    A paper describes the Microwave Temperature Profiler (MTP), which measures the planetary boundary layer thermal structure needed for air quality forecasting, since the mixing layer (ML) height determines the volume in which daytime pollution is primarily concentrated. This is the first time that an airborne temperature profiler has been used to measure the mixing layer height; normally this is done using a radar wind profiler, which is both noisy and large. The MTP was deployed during the Texas 2000 Air Quality Study (TexAQS-2000). An objective technique was developed and tested for estimating the ML height from the MTP vertical temperature profiles. In order to calibrate the technique and evaluate the usefulness of this approach, estimates from a variety of measurements during TexAQS-2000 were compared. Estimates of ML height were used from radiosondes, radar wind profilers, an aerosol backscatter lidar, and in-situ aircraft measurements, in addition to those from the MTP.
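
    One simple objective criterion of the kind such techniques use can be sketched as follows (a generic parcel-style rule on an invented profile; the MTP paper's actual criterion was tuned against radiosonde and profiler estimates and may differ):

```python
def mixing_height(heights_m, theta_k, excess_k=0.5):
    """The first level where potential temperature exceeds its
    near-surface value by a small excess marks the mixing-layer top."""
    surface = theta_k[0]
    for z, th in zip(heights_m, theta_k):
        if th > surface + excess_k:
            return z
    return heights_m[-1]

heights = [100, 300, 500, 700, 900, 1100, 1300]              # m AGL
theta   = [300.0, 300.1, 300.2, 300.2, 300.3, 301.2, 302.5]  # K
zml = mixing_height(heights, theta)
```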

  10. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter

    PubMed Central

    Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-01-01

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected. PMID:28902154

  11. In vitro interactions between different beta-lactam antibiotics and fosfomycin against bloodstream isolates of enterococci.

    PubMed Central

    Pestel, M; Martin, E; Aucouturier, C; Lemeland, J F; Caron, F

    1995-01-01

    The effects of 16 different beta-lactam-fosfomycin combinations against 50 bloodstream enterococci were compared by a disk diffusion technique. Cefotaxime exhibited the best interaction. By checkerboard studies, the cefotaxime-fosfomycin combination provided a synergistic bacteriostatic effect against 45 of the 50 isolates (MIC of cefotaxime at which 90% of the isolates were inhibited, >2,048 micrograms/ml; MIC of fosfomycin at which 90% of the isolates were inhibited, 128 micrograms/ml; mean of fractional inhibitory concentration indexes, 0.195). By killing curves, cefotaxime (at 64 micrograms/ml) combined with fosfomycin (at > or = 64 micrograms/ml) was bactericidal against 6 of 10 strains tested. PMID:8619593

  12. The effect of machine learning regression algorithms and sample size on individualized behavioral prediction with functional connectivity features.

    PubMed

    Cui, Zaixu; Gong, Gaolang

    2018-06-02

    Individualized behavioral/cognitive prediction using machine learning (ML) regression approaches is increasingly being applied. The specific ML regression algorithm and sample size are two key factors that non-trivially influence prediction accuracies. However, the effects of the ML regression algorithm and sample size on individualized behavioral/cognitive prediction performance have not been comprehensively assessed. To address this issue, the present study included six commonly used ML regression algorithms: ordinary least squares (OLS) regression, least absolute shrinkage and selection operator (LASSO) regression, ridge regression, elastic-net regression, linear support vector regression (LSVR), and relevance vector regression (RVR), to perform specific behavioral/cognitive predictions based on different sample sizes. Specifically, the publicly available resting-state functional MRI (rs-fMRI) dataset from the Human Connectome Project (HCP) was used, and whole-brain resting-state functional connectivity (rsFC) or rsFC strength (rsFCS) were extracted as prediction features. Twenty-five sample sizes (ranging from 20 to 700) were studied by sub-sampling from the entire HCP cohort. The analyses showed that rsFC-based LASSO regression performed remarkably worse than the other algorithms, and rsFCS-based OLS regression performed markedly worse than the other algorithms. Regardless of the algorithm and feature type, both the prediction accuracy and its stability exponentially increased with increasing sample size. The specific patterns of the observed algorithm and sample size effects were well replicated in the prediction using re-testing fMRI data, data processed by different imaging preprocessing schemes, and different behavioral/cognitive scores, thus indicating excellent robustness/generalization of the effects. 
The current findings provide critical insight into how the selected ML regression algorithm and sample size influence individualized predictions of behavior/cognition and offer important guidance for choosing the ML regression algorithm or sample size in relevant investigations. Copyright © 2018 Elsevier Inc. All rights reserved.
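
    The algorithmic contrast the study measures can be made concrete with the one-feature closed forms of two of the six algorithms (toy data, not the HCP features: OLS versus ridge, whose penalty shrinks the coefficient toward zero):

```python
def ols_1d(xs, ys):
    """One-feature OLS (no intercept): w = sum(xy) / sum(x^2)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def ridge_1d(xs, ys, lam):
    """One-feature ridge: w = sum(xy) / (sum(x^2) + lambda)."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

xs = [0.5, 1.0, 1.5, 2.0, 2.5]     # illustrative feature values
ys = [1.1, 1.9, 3.2, 3.9, 5.1]     # illustrative targets
w_ols = ols_1d(xs, ys)
w_ridge = ridge_1d(xs, ys, lam=2.0)
```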

  13. Dropout Prediction in E-Learning Courses through the Combination of Machine Learning Techniques

    ERIC Educational Resources Information Center

    Lykourentzou, Ioanna; Giannoukos, Ioannis; Nikolopoulos, Vassilis; Mpardis, George; Loumos, Vassili

    2009-01-01

    In this paper, a dropout prediction method for e-learning courses, based on three popular machine learning techniques and detailed student data, is proposed. The machine learning techniques used are feed-forward neural networks, support vector machines and probabilistic ensemble simplified fuzzy ARTMAP. Since a single technique may fail to…

  14. A technique for extracting blood samples from mice in fire toxicity tests

    NASA Technical Reports Server (NTRS)

    Bucci, T. J.; Hilado, C. J.; Lopez, M. T.

    1976-01-01

    The extraction of adequate blood samples from moribund and dead mice has been a problem because of the small quantity of blood in each animal and the short time available between the animals' death and coagulation of the blood. These difficulties are particularly critical in fire toxicity tests because removal of the test animals while observing proper safety precautions for personnel is time-consuming. Techniques for extracting blood samples from mice were evaluated, and a technique was developed to obtain up to 0.8 ml of blood from a single mouse after death. The technique involves rapid exposure and cutting of the posterior vena cava and accumulation of blood in the peritoneal space. Blood samples of 0.5 ml or more from individual mice have been consistently obtained as much as 16 minutes after apparent death. Results of carboxyhemoglobin analyses of blood appeared reproducible and consistent with carbon monoxide concentrations in the exposure chamber.

  15. Classifying bent radio galaxies from a mixture of point-like/extended images with Machine Learning.

    NASA Astrophysics Data System (ADS)

    Bastien, David; Oozeer, Nadeem; Somanah, Radhakrishna

    2017-05-01

    The hypothesis that bent radio sources are found in rich, massive galaxy clusters and the availability of huge amounts of data from radio surveys have fueled our motivation to use Machine Learning (ML) to identify bent radio sources and use them as tracers for galaxy clusters. Shapelet analysis allowed us to decompose radio images into 256 features that could be fed into the ML algorithm. Additionally, ideas from the field of neuro-psychology helped us to consider training the machine to identify bent galaxies at different orientations. From our analysis, we found that the Random Forest algorithm was the most effective, with an accuracy of 92% for the classification of point versus extended sources and an accuracy of 80% for bent versus unbent classification.

  16. Humans and Autonomy: Implications of Shared Decision Making for Military Operations

    DTIC Science & Technology

    2017-01-01

    …and machine learning transparency are identified as future research opportunities. SUBJECT TERMS: autonomy, human factors, intelligent agents… network as either the mission changes or an agent becomes disabled (DSB 2012). Fig. 2: Control structures for human-agent teams. Robots without tools… learning (ML) algorithms monitor progress. However, operators have final executive authority; they are able to tweak the plan or choose an option.

  17. A Photometric Technique for Determining Fluid Concentration using Consumer-Grade Hardware

    NASA Technical Reports Server (NTRS)

    Leslie, F.; Ramachandran, N.

    1999-01-01

    In support of a separate study to produce an exponential concentration gradient in a magnetic fluid, a noninvasive technique for determining species concentration from off-the-shelf hardware has been developed. The approach uses a backlighted fluid test cell photographed with a commercial digital camcorder. Because the light extinction coefficient is wavelength dependent, tests were conducted to determine the best filter color to use, although some guidance was also provided using an absorption spectrophotometer. With the appropriate filter in place, the attenuation of the light passing through the test cell was captured by the camcorder. The digital image was analyzed for intensity using software from Scion Image Corp. downloaded from the Internet. The analysis provides a two-dimensional array of concentration with an average error of 0.0095 ml/ml. This technique is superior to invasive techniques, which require extraction of a sample that disturbs the concentration distribution in the test cell. Refinements of this technique using a true monochromatic laser light source are also discussed.
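
    The physics behind such photometric methods is Beer-Lambert attenuation, so concentration follows from an intensity ratio once a calibration constant is known (the constant and pixel intensities below are hypothetical, not the paper's):

```python
import math

def concentration(i, i0, k):
    """Beer-Lambert: c = -ln(I / I0) / k, where k lumps the extinction
    coefficient and the optical path length (fixed by calibration)."""
    return -math.log(i / i0) / k

k = 4.0                              # hypothetical calibration constant
c = concentration(180.0, 240.0, k)   # pixel intensity with / without dye
```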

  18. Machine Learning-based discovery of closures for reduced models of dynamical systems

    NASA Astrophysics Data System (ADS)

    Pan, Shaowu; Duraisamy, Karthik

    2017-11-01

    Despite the successful application of machine learning (ML) in fields such as image processing and speech recognition, only a few attempts have been made toward employing ML to represent the dynamics of complex physical systems. Previous attempts mostly focus on parameter calibration or data-driven augmentation of existing models. In this work we present a ML framework to discover closure terms in reduced models of dynamical systems and provide insights into potential problems associated with data-driven modeling. Based on exact closure models for linear systems, we propose a general linear closure framework from the viewpoint of optimization. The framework is based on a trapezoidal approximation of the convolution term. Hyperparameters that need to be determined include the temporal length of the memory effect, the number of sampling points, and the dimensions of the hidden states. To circumvent the explicit specification of the memory effect, a general framework inspired by neural networks is also proposed. We conduct both a priori and a posteriori evaluations of the resulting model on a number of non-linear dynamical systems. This work was supported in part by AFOSR under the project ``LES Modeling of Non-local effects using Statistical Coarse-graining'' with Dr. Jean-Luc Cambier as the technical monitor.
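
    The trapezoidal discretisation of a memory (convolution) term can be sketched as follows (generic quadrature, not the authors' implementation; the kernel and history are chosen so the result can be checked against the exact integral):

```python
import math

def memory_term(kernel, history, dt):
    """Trapezoidal approximation of integral_0^T K(tau) s(t - tau) dtau,
    with kernel[j] = K(j*dt) and history[j] = s(t - j*dt)."""
    total = 0.0
    for j in range(len(kernel) - 1):
        f0 = kernel[j] * history[j]
        f1 = kernel[j + 1] * history[j + 1]
        total += 0.5 * (f0 + f1) * dt
    return total

dt = 0.01
kernel = [math.exp(-j * dt) for j in range(101)]   # K(tau) = e^-tau on [0, 1]
history = [1.0] * 101                              # constant state history
approx = memory_term(kernel, history, dt)          # exact value: 1 - e^-1
```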

  19. Comparison of talc-Celite and polyelectrolyte 60 in virus recovery from sewage: development of technique and experiments with poliovirus (type 1, Sabin)-contaminated multilitre samples.

    PubMed

    Sattar, S A; Westwood, J C

    1976-11-01

    For virus recovery from sewage, a mixture of talc and Celite was tested as a possible inexpensive substitute for polyelectrolyte 60 (PE 60). After adjustment of pH to 6 and the addition of 45-60 plaque forming units (PFU)/ml of poliovirus type I (Sabin) to the sewage sample under test, 100 ml of it was passed through either a PE 60 (400 mg) or a talc (300 mg)-Celite (100 mg) layer; the layer-adsorbed virus was eluted with 10 ml of 10% fetal calf serum (FCS) in saline (pH 7.2). In these experiments, PE 60 layers recovered 73-80% (mean 76%) of the input virus. In comparison, virus recoveries with the talc-Celite layers were 65-70% (mean 68%). Passage of 5 litres of raw sewage (containing 50 to 1.26 X 10(5) PFU/100 ml of the poliovirus) through the talc (15 g)-Celite (5 g) layers and virus elution with 50 ml of 10% FCS in saline gave virus recoveries of 33-63% (mean 49%). Except for pH adjustment and prefiltration through two layers of gauze to remove large solids, no other sample pretreatment was found to be necessary. Application of this technique to recovery of indigenous viruses from field samples of raw sewage and effluents has been highly satisfactory.

  20. Learning a force field for the martensitic phase transformation in Zr

    NASA Astrophysics Data System (ADS)

    Zong, Hongxiang; Pilania, Ghanshyam; Ramprasad, Rampi; Lookman, Turab

    Atomic simulations provide an effective means to understand the underlying physics of martensitic transformations under extreme conditions. However, this is still a challenge for certain phase transforming metals due to the lack of an accurate classical force field. Quantum molecular dynamics (QMD) simulations are accurate but expensive. During the course of QMD simulations, similar configurations are constantly visited and revisited. Machine Learning can effectively learn from past visits and, therefore, eliminate such redundancies. In this talk, we will discuss the development of a hybrid ML-QMD method in which on-demand, on-the-fly quantum mechanical (QM) calculations are performed to accelerate calculations of interatomic forces at much lower computational costs. Using Zirconium as a model system for which accurate atomistic potentials are currently unavailable, we will demonstrate the feasibility and effectiveness of our approach. Specifically, the computed structural phase transformation behavior within the ML-QMD approach will be compared with available experimental results. Furthermore, results on phonons, stacking fault energies, and activation barriers for the homogeneous martensitic transformation in Zr will be presented.

  1. Novel joint cupping clinical maneuver for ultrasonographic detection of knee joint effusions.

    PubMed

    Uryasev, Oleg; Joseph, Oliver C; McNamara, John P; Dallas, Apostolos P

    2013-11-01

    Knee effusions occur due to traumatic and atraumatic causes. Clinical diagnosis currently relies on several provocative techniques to demonstrate knee joint effusions. Portable bedside ultrasonography (US) is becoming an adjunct to diagnosis of effusions. We hypothesized that a US approach with a clinical joint cupping maneuver increases sensitivity in identifying effusions as compared to US alone. Using unembalmed cadaver knees, we injected fluid to create effusions up to 10 mL. Each effusion volume was measured in a lateral transverse location with respect to the patella. For each effusion we applied a joint cupping maneuver from an inferior approach, and re-measured the effusion. With increased volume of saline infusion, the mean depth of effusion on ultrasound imaging increased as well. Using a 2-mm cutoff, we visualized an effusion without the joint cupping maneuver at 2.5 mL and with the joint cupping technique at 1 mL. Mean effusion diameter increased on average 0.26 cm for the joint cupping maneuver as compared to without the maneuver. The effusion depth was statistically different at 2.5 and 7.5 mL (P < .05). Utilizing a joint cupping technique in combination with US is a valuable tool in assessing knee effusions, especially those of subclinical levels. Effusion measurements are complicated by uneven distribution of effusion fluid. A clinical joint cupping maneuver concentrates the fluid in one recess of the joint, increasing the likelihood of fluid detection using US. © 2013 Elsevier Inc. All rights reserved.

  2. Vitality Stains and Real Time PCR Studies to Delineate the Interactions of Pichia anomala and Aspergillus flavus

    USDA-ARS?s Scientific Manuscript database

    The objectives of this study were to probe the effect of the yeast P. anomala against A. flavus by using a real-time RT-PCR technique and vitality fluorescent stains. Yeast and fungi were inoculated into a 250 ml-flask containing 50 ml potato dextrose broth (PDB) at yeast to fungus (Y : F) ratios of ...

  3. Validation of different spectrophotometric methods for determination of vildagliptin and metformin in binary mixture

    NASA Astrophysics Data System (ADS)

    Abdel-Ghany, Maha F.; Abdel-Aziz, Omar; Ayad, Miriam F.; Tadros, Mariam M.

    New, simple, specific, accurate, precise and reproducible spectrophotometric methods have been developed and subsequently validated for determination of vildagliptin (VLG) and metformin (MET) in binary mixture. A zero-order spectrophotometric method was the first method used for determination of MET in the range of 2-12 μg mL⁻¹ by measuring the absorbance at 237.6 nm. The second method was a derivative spectrophotometric technique, utilized for determination of MET at 247.4 nm, in the range of 1-12 μg mL⁻¹. A derivative ratio spectrophotometric method was the third technique, used for determination of VLG in the range of 4-24 μg mL⁻¹ at 265.8 nm. The fourth and fifth methods, adopted for determination of VLG in the range of 4-24 μg mL⁻¹, were ratio subtraction and mean centering spectrophotometric methods, respectively. All the results were statistically compared with the reported methods, using one-way analysis of variance (ANOVA). The developed methods were satisfactorily applied to analysis of the investigated drugs and proved to be specific and accurate for their quality control in pharmaceutical dosage forms.
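The zero-order method amounts to a linear calibration curve (absorbance versus concentration) fitted by least squares, then inverted for unknowns. A minimal sketch with made-up calibration points, not the paper's data:

```python
def fit_calibration(conc, absorbance):
    """Ordinary least-squares fit of A = slope * C + intercept
    for a zero-order spectrophotometric calibration curve."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(absorbance) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, absorbance))
    slope = sxy / sxx
    return slope, my - slope * mx

def predict_concentration(a, slope, intercept):
    """Invert the calibration line to read concentration from absorbance."""
    return (a - intercept) / slope

# Hypothetical standards in the 2-12 microgram/mL range.
conc = [2, 4, 6, 8, 10, 12]
absorbance = [0.05 * c + 0.01 for c in conc]
slope, intercept = fit_calibration(conc, absorbance)
```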

  4. A Simulation of AI Programming Techniques in BASIC.

    ERIC Educational Resources Information Center

    Mandell, Alan

    1986-01-01

    Explains the functions of and the techniques employed in expert systems. Offers the program "The Periodic Table Expert," as a model for using artificial intelligence techniques in BASIC. Includes the program listing and directions for its use on: Tandy 1000, 1200, and 2000; IBM PC; PC Jr; TRS-80; and Apple computers. (ML)

  5. Analyzing the Effects of Various Concept Mapping Techniques on Learning Achievement under Different Learning Styles

    ERIC Educational Resources Information Center

    Chiou, Chei-Chang; Lee, Li-Tze; Tien, Li-Chu; Wang, Yu-Min

    2017-01-01

    This study explored the effectiveness of different concept mapping techniques on the learning achievement of senior accounting students and whether achievements attained using various techniques are affected by different learning styles. The techniques are computer-assisted construct-by-self-concept mapping (CACSB), computer-assisted…

  6. Malignancy Detection on Mammography Using Dual Deep Convolutional Neural Networks and Genetically Discovered False Color Input Enhancement.

    PubMed

    Teare, Philip; Fishman, Michael; Benzaquen, Oshra; Toledano, Eyal; Elnekave, Eldad

    2017-08-01

    Breast cancer is the most prevalent malignancy in the US and the third highest cause of cancer-related mortality worldwide. Regular mammography screening has been attributed with doubling the rate of early cancer detection over the past three decades, yet estimates of mammographic accuracy in the hands of experienced radiologists remain suboptimal with sensitivity ranging from 62 to 87% and specificity from 75 to 91%. Advances in machine learning (ML) in recent years have demonstrated capabilities of image analysis which often surpass those of human observers. Here we present two novel techniques to address inherent challenges in the application of ML to the domain of mammography. We describe the use of genetic search of image enhancement methods, leading us to the use of a novel form of false color enhancement through contrast limited adaptive histogram equalization (CLAHE), as a method to optimize mammographic feature representation. We also utilize dual deep convolutional neural networks at different scales, for classification of full mammogram images and derivative patches combined with a random forest gating network as a novel architectural solution capable of discerning malignancy with a specificity of 0.91 and a sensitivity of 0.80. To our knowledge, this represents the first automatic stand-alone mammography malignancy detection algorithm with sensitivity and specificity performance similar to that of expert radiologists.
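CLAHE builds on histogram equalization, adding tiling and a contrast clip limit. A minimal sketch of plain global histogram equalization, the simpler core idea, not the authors' genetically discovered false-color pipeline:

```python
def equalize(image, levels=256):
    """Global histogram equalization of an 8-bit grayscale image given as a
    list of rows of pixel values. CLAHE additionally tiles the image and
    clips the histogram before building the lookup table."""
    flat = [p for row in image for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function of pixel values.
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run)
    n = len(flat)
    cdf_min = next(c for c in cdf if c > 0)
    # Map each gray level so the output histogram is as flat as possible.
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1)) for c in cdf]
    return [[lut[p] for p in row] for row in image]

stretched = equalize([[0, 1], [2, 3]])  # low-contrast 2x2 image
```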

  7. A Comparison of different learning models used in Data Mining for Medical Data

    NASA Astrophysics Data System (ADS)

    Srimani, P. K.; Koti, Manjula Sanjay

    2011-12-01

    The present study aims at investigating different Data Mining learning models for different medical data sets and at giving practical guidelines for selecting the most appropriate algorithm for a specific medical data set. In practical situations, it is absolutely necessary to take decisions with regard to the appropriate models and parameters for diagnosis and prediction problems. Learning models and algorithms are widely implemented for rule extraction and the prediction of system behavior. In this paper, several well-known Machine Learning (ML) systems are investigated and tested on five medical data sets. The practical criteria for evaluating different learning models are presented and the potential benefits of the proposed methodology for diagnosis and learning are suggested.
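Comparing learning models across data sets typically rests on k-fold cross-validation. A minimal sketch with a nearest-centroid classifier on toy data; the paper's actual systems and data sets are not reproduced here:

```python
def k_fold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

def nearest_centroid_accuracy(X, y, k=3):
    """Cross-validated accuracy of a nearest-centroid classifier."""
    correct = 0
    for train, test in k_fold_indices(len(X), k):
        cents = {}
        for lab in set(y[i] for i in train):
            pts = [X[i] for i in train if y[i] == lab]
            cents[lab] = [sum(c) / len(pts) for c in zip(*pts)]
        for i in test:
            pred = min(cents, key=lambda l: sum((a - b) ** 2
                                                for a, b in zip(X[i], cents[l])))
            correct += pred == y[i]
    return correct / len(X)

# Two well-separated toy classes; any sensible model scores perfectly.
X = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
y = [0, 0, 0, 1, 1, 1]
acc = nearest_centroid_accuracy(X, y, k=3)
```

Running several models through the same folds and comparing the resulting accuracies is the model-selection procedure the abstract describes.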

  8. Resonance light scattering technique for the determination of protein with rutin and cetylpyridine bromide system.

    PubMed

    Liu, Yang; Yang, Jinghe; Liu, Shufang; Wu, Xia; Su, Benyu; Wu, Tao

    2005-02-01

    A new resonance light scattering (RLS) assay of protein is presented. In Tris-NaOH (pH = 10.93) buffer, the RLS of rutin-cetylpyridine bromide (CPB) system can be greatly enhanced by protein, including bovine serum albumin (BSA) and human serum albumin (HSA). The enhanced RLS intensities are in proportion to the concentration of proteins in the range of 5 x 10(-9) to 2.5 x 10(-6) g ml(-1) for BSA and 2.5 x 10(-8) to 3.5 x 10(-6) g ml(-1) for HSA. The detection limits (S/N = 3) are 3.0 ng ml(-1) for BSA and 10.0 ng ml(-1) for HSA. Samples are determined satisfactorily.
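The S/N = 3 detection limit quoted above is conventionally computed as three times the standard deviation of blank measurements divided by the calibration slope. A sketch with hypothetical numbers, not the paper's raw data:

```python
import statistics

def detection_limit(blank_signals, slope, k=3):
    """Limit of detection as k * (s.d. of blanks) / calibration slope;
    k = 3 corresponds to the S/N = 3 criterion used in the abstract."""
    return k * statistics.stdev(blank_signals) / slope

# Hypothetical blank RLS readings and calibration slope.
lod = detection_limit([10.0, 12.0, 14.0], slope=0.5)
```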

  9. Smart catheter flow sensor for real-time continuous regional cerebral blood flow monitoring

    NASA Astrophysics Data System (ADS)

    Li, Chunyan; Wu, Pei-Ming; Hartings, Jed A.; Wu, Zhizhen; Ahn, Chong H.; LeDoux, David; Shutter, Lori A.; Narayan, Raj K.

    2011-12-01

    We present a smart catheter flow sensor for real-time, continuous, and quantitative measurement of regional cerebral blood flow using in situ temperature and thermal conductivity compensation. The flow sensor operates in a constant-temperature mode and employs a periodic heating and cooling technique. This approach ensures zero drift and provides highly reliable data with microelectromechanical system-based thin film sensors. The developed flow sensor has a sensitivity of 0.973 mV/ml/100 g/min in the range from 0 to 160 ml/100 g/min with a linear correlation coefficient of R2 = 0.9953. It achieves a resolution of 0.25 ml/100 g/min and an accuracy better than 5 ml/100 g/min.
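Given the reported sensitivity and linear range, converting a sensor reading back to flow is a single division plus a range check. A sketch using the figures from the abstract; the device's in situ temperature and thermal-conductivity compensation is not modeled:

```python
def flow_from_voltage(v_mV, sensitivity=0.973, max_flow=160.0):
    """Convert sensor output (mV) to regional cerebral blood flow
    (ml/100 g/min) using the reported sensitivity of 0.973 mV per
    ml/100 g/min, valid over the linear range 0-160 ml/100 g/min."""
    flow = v_mV / sensitivity
    if not 0.0 <= flow <= max_flow:
        raise ValueError("reading outside calibrated linear range")
    return flow

flow = flow_from_voltage(97.3)  # about 100 ml/100 g/min
```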

  10. Characterization of a New Type of Human Papillomavirus That Causes Skin Warts

    PubMed Central

    Orth, Gérard; Favre, Michel; Croissant, Odile

    1977-01-01

    A human papillomavirus (HPV) was isolated from the lesions of a patient (ML) bearing numerous hand common warts. This virus was compared with the well-characterized HPV found in typical plantar warts (plantar HPV). ML and plantar HPV DNAs have similar molecular weights (5.26 × 10⁶ and 5.23 × 10⁶, respectively) but were shown to be different by restriction enzyme analysis. When the cleavage products of both DNAs by endonuclease EcoRI, BamI, HpaI, or Hind were analyzed by electron microscopy, one, two, one, and four fragments were detected for ML HPV DNA instead of the two, one, two, and six fragments, respectively, detected for plantar HPV DNA. In contrast to plantar HPV DNA, a high proportion of ML HPV DNA molecules were resistant to these restriction enzymes. Most, if not all, of the molecules were either resistant to BamI and sensitive to EcoRI or sensitive to BamI and resistant to EcoRI. After denaturation and renaturation of the cleavage products of ML HPV DNA by a mixture of the two enzymes, the circular “heteroduplexes” formed showed one to three heterology loops corresponding to about 4 to 8% of the genome length. No sequence homology was detected between ML and plantar HPV DNAs by cRNA-DNA filter hybridization, by measuring the reassociation kinetics of an iodinated plantar HPV DNA in the presence of a 25-fold excess of ML HPV DNA, or by the heteroduplex technique. The two viruses had distinct electrophoretic polypeptide patterns and showed no antigenic cross-reaction by immunodiffusion or immunofluorescence techniques. Preliminary cRNA-DNA hybridization experiments, using viral DNAs from single or pooled plantar or hand warts, suggest that hand common warts are associated with viruses similar or related to ML HPV. The existence of at least two distinct types of HPVs that cause skin warts was demonstrated; they were provisionally called HPV type 1 and HPV type 2, with plantar HPV and ML HPV as prototypical viruses, respectively. PMID:198572

  11. [Application of lower abdominal aorta balloon occlusion technique by ultrasound guiding during caesarean section in patients with pernicious placenta previa].

    PubMed

    Wei, L C; Gong, G Y; Chen, J H; Hou, P Y; Li, Q Y; Zheng, Z Y; Su, Y M; Zheng, Y; Luo, C Z; Zhang, K; Xu, T F; Ye, Y H; Lan, Y J; Wei, X M

    2018-03-27

    Objective: To discuss the feasibility, effect and safety of the lower abdominal aorta balloon occlusion technique by ultrasound guiding during caesarean section in patients with pernicious placenta previa. Methods: The clinical data of 40 patients with pernicious placenta previa complicated with placenta accreta from January 2015 to August 2017 in Liuzhou Workers' Hospital were analyzed retrospectively. The study group included 20 cases, operated by cesarean section combined with the lower abdominal aorta balloon occlusion technique by ultrasound guiding, while the control group also included 20 cases, operated by conventional cesarean section without the balloon occlusion technique. The bleeding amount, blood transfusion volume, total operative time, hysterectomy and complications of the two groups were compared. Results: The bleeding amount and blood transfusion volume in the study group were (850±100) ml and (400±50) ml, lower than those of the control group [(2,500±230) ml and (1,500±100) ml]; the difference was statistically significant (t=35.624, 16.523, all P<0.05). In addition, the hysterectomy rate in the study group was 5%, lower than that in the control group (30%); the difference was statistically significant (χ²=8.672, P<0.05). The total operative time was (2.0±0.5) h in the study group, shorter than that in the control group [(3.5±0.4) h]; the difference was statistically significant (t=11.362, P<0.05). No postoperative complications took place in the study group. In the control group, blood pressure, heart rate and blood oxygen fluctuated significantly, and postoperative renal function was significantly reduced.
Conclusions: The lower abdominal aorta balloon occlusion technique by ultrasound guiding during caesarean section in patients with pernicious placenta previa can effectively control bleeding during the operation and preserve reproductive function to the utmost degree. The technique is safe, feasible, convenient and cost-effective, and worthy of wide clinical application.

  12. Minimum inhibitory concentrations of tulathromycin against respiratory bacterial pathogens isolated from clinical cases in European cattle and swine and variability arising from changes in in vitro methodology.

    PubMed

    Godinho, Kevin S; Keane, Sue G; Nanjiani, Ian A; Benchaoui, Hafid A; Sunderland, Simon J; Jones, M Anne; Weatherley, Andrew J; Gootz, Thomas D; Rowan, Tim G

    2005-01-01

    The in vitro activity of tulathromycin was evaluated against common bovine and porcine respiratory pathogens collected from outbreaks of clinical disease across eight European countries from 1998 to 2001. Minimum inhibitory concentrations (MICs) for one isolate of each bacterial species from each outbreak were determined using a broth microdilution technique. The lowest concentrations inhibiting the growth of 90% of isolates (MIC90) for tulathromycin were 2 microg/ml for Mannheimia (Pasteurella) haemolytica, 1 microg/ml for Pasteurella multocida (bovine), and 2 microg/ml for Pasteurella multocida (porcine) and ranged from 0.5 to 4 microg/ml for Histophilus somni (Haemophilus somnus) and from 4 to 16 microg/ml for Actinobacillus pleuropneumoniae. Isolates were retested in the presence of serum. The activity of tulathromycin against fastidious organisms was affected by culture conditions, and MICs were reduced in the presence of serum.
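The MIC90 summary statistic used above is the lowest tested concentration that inhibits at least 90% of isolates, i.e. a 90th-percentile of the sorted MIC values. A sketch with made-up MICs, not the surveillance data from the study:

```python
def mic90(mics):
    """MIC90: the lowest concentration inhibiting at least 90% of
    isolates, i.e. the value at rank ceil(0.9 * n) of the sorted MICs."""
    s = sorted(mics)
    idx = (9 * len(s) + 9) // 10  # integer ceil(0.9 * n), avoiding float error
    return s[idx - 1]

# Hypothetical doubling-dilution MICs (microgram/ml) for ten isolates.
value = mic90([0.5, 1, 1, 2, 2, 2, 4, 4, 8, 16])
```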

  13. The Effects of Learning Strategies on Mathematical Literacy: A Comparison between Lower and Higher Achieving Countries

    ERIC Educational Resources Information Center

    Magen-Nagar, Noga

    2016-01-01

    The purpose of the current study is to explore the effects of learning strategies on Mathematical Literacy (ML) of students in higher and lower achieving countries. To address this issue, the study utilizes PISA2002 data to conduct a multi-level analysis (HLM) of Hong Kong and Israel students. In PISA2002, Israel was rated 31st in Mathematics,…

  14. Coupling Matched Molecular Pairs with Machine Learning for Virtual Compound Optimization.

    PubMed

    Turk, Samo; Merget, Benjamin; Rippmann, Friedrich; Fulle, Simone

    2017-12-26

    Matched molecular pair (MMP) analyses are widely used in compound optimization projects to gain insights into structure-activity relationships (SAR). The analysis is traditionally done via statistical methods but can also be employed together with machine learning (ML) approaches to extrapolate to novel compounds. The MMP/ML method introduced here combines a fragment-based MMP implementation with different machine learning methods to obtain automated SAR decomposition and prediction. To test the prediction capabilities and model transferability, two different compound optimization scenarios were designed: (1) "new fragments" which occurs when exploring new fragments for a defined compound series and (2) "new static core and transformations" which resembles for instance the identification of a new compound series. Very good results were achieved by all employed machine learning methods especially for the new fragments case, but overall deep neural network models performed best, allowing reliable predictions also for the new static core and transformations scenario, where comprehensive SAR knowledge of the compound series is missing. Furthermore, we show that models trained on all available data have a higher generalizability compared to models trained on focused series and can extend beyond chemical space covered in the training data. Thus, coupling MMP with deep neural networks provides a promising approach to make high-quality predictions on various data sets and in different compound optimization scenarios.

  15. A hybrid genetic algorithm-extreme learning machine approach for accurate significant wave height reconstruction

    NASA Astrophysics Data System (ADS)

    Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.

    2015-08-01

    Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to missing data gaps. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough in assisting other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) to reconstruct Hs. The results show that all the ML methods explored achieve a good Hs reconstruction in the two different locations studied (Caribbean Sea and West Atlantic).
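The genetic-algorithm half of the hybrid can be sketched as a search over feature-subset bitmasks. Here the fitness is a toy stand-in (maximizing selected bits), not the paper's extreme-learning-machine reconstruction error, and the operators (tournament selection, one-point crossover, bit-flip mutation, elitism) are generic choices:

```python
import random

def ga_select(n_feat, fitness, pop=30, gens=40, p_mut=0.1, seed=0):
    """Minimal genetic algorithm over feature-subset bitmasks.
    `fitness(mask)` scores a candidate subset; higher is better."""
    rng = random.Random(seed)
    popn = [[rng.randint(0, 1) for _ in range(n_feat)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        nxt = popn[:2]  # elitism: the two best masks survive unchanged
        while len(nxt) < pop:
            a, b = rng.sample(popn[:10], 2)       # select among the fittest
            cut = rng.randrange(1, n_feat)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (rng.random() < p_mut) for g in child]  # mutation
            nxt.append(child)
        popn = nxt
    return max(popn, key=fitness)

best = ga_select(5, fitness=sum)  # toy fitness: prefer masks with more 1s
```

In the paper's setting, `fitness` would instead train an extreme learning machine on the selected buoy parameters and return the negative Hs reconstruction error.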

  16. Transvesical robotic simple prostatectomy: initial clinical experience.

    PubMed

    Leslie, Scott; Abreu, Andre Luis de Castro; Chopra, Sameer; Ramos, Patrick; Park, Daniel; Berger, Andre K; Desai, Mihir M; Gill, Inderbir S; Aron, Monish

    2014-08-01

    Despite significant developments in transurethral surgery for benign prostatic hyperplasia (BPH), simple prostatectomy remains an excellent option for patients with large glands. To describe our technique of transvesical robotic simple prostatectomy (RSP). From May 2011 to April 2013, 25 patients underwent RSP. We performed RSP using our technique. Baseline demographics, pathology data, perioperative complications, 90-d complications, and functional outcomes were assessed. Mean patient age was 72.9 yr (range: 54-88), baseline International Prostate Symptom Score (IPSS) was 23.9 (range: 9-35), prostate volume was 149.6 ml (range: 91-260), postvoid residual (PVR) was 208.1 ml (range: 72-800), maximum flow rate (Qmax) was 11.3 ml/s, and preoperative prostate-specific antigen was 9.4 ng/ml (range: 1.9-56.3). Eight patients were catheter dependent before surgery. Mean operative time was 214 min (range: 165-345), estimated blood loss was 143 ml (range: 50-350), and the hospital stay was 4 d (range: 2-8). There were no intraoperative complications and no conversions to open surgery. Five patients had a concomitant robotic procedure performed. Early functional outcomes demonstrated significant improvement from baseline with an 85% reduction in mean IPSS (p<0.0001), an 82.2% reduction in mean PVR (p=0.014), and a 77% increase in mean Qmax (p=0.20). This study is limited by small sample size and short follow-up period. One patient had a urinary tract infection; two had recurrent hematuria, one requiring transfusion; one patient had clot retention and extravasation, requiring reoperation. Our technique of RSP is safe and effective. Good functional outcomes suggest it is a viable option for BPH and larger glands and can be used for patients requiring concomitant procedures. We describe the technique and report the initial results of a series of cases of transvesical robotic simple prostatectomy. 
The procedure is both feasible and safe and a good option for benign prostatic hyperplasia with larger glands. Copyright © 2013 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  17. The Development of Teaching and Learning in Bright-Field Microscopy Technique

    ERIC Educational Resources Information Center

    Iskandar, Yulita Hanum P.; Mahmud, Nurul Ethika; Wahab, Wan Nor Amilah Wan Abdul; Jamil, Noor Izani Noor; Basir, Nurlida

    2013-01-01

    E-learning should be pedagogically-driven rather than technologically-driven. The objectives of this study are to develop an interactive learning system in bright-field microscopy technique in order to support students' achievement of their intended learning outcomes. An interactive learning system on bright-field microscopy technique was…

  18. And What Did You Learn in Your PhD Program?

    ERIC Educational Resources Information Center

    Mohrig, Jerry R.

    1988-01-01

    Surveys the outlook presented by former and present chemistry and biochemistry doctoral students toward their graduate program. Poses questions to determine what aspects are deemed important. Suggests seminars and quality advisors are important factors. (ML)

  19. What type of drinker are you?

    MedlinePlus

    ... beer, one 5-ounce (148 mL) glass of wine, 1 wine cooler, 1 cocktail, or 1 shot of hard ...

  20. Family Life.

    ERIC Educational Resources Information Center

    Naturescope, 1986

    1986-01-01

    Focuses on various aspects of mammal family life ranging from ways different species are born to how different mammals are raised. Learning activities include making butter from cream, creating birth announcements for mammals, and playing a password game on family life. (ML)

  1. Optimization of antibacterial activity by Gold-Thread (Coptidis Rhizoma Franch) against Streptococcus mutans using evolutionary operation-factorial design technique.

    PubMed

    Choi, Ung-Kyu; Kim, Mi-Hyang; Lee, Nan-Hee

    2007-11-01

    This study was conducted to find the optimum extraction condition of Gold-Thread for antibacterial activity against Streptococcus mutans using the evolutionary operation (EVOP)-factorial design technique. Higher antibacterial activity was achieved at a higher extraction temperature (R2 = -0.79) and with a longer extraction time (R2 = -0.71). Antibacterial activity was not affected by differentiation of the ethanol concentration in the extraction solvent (R2 = -0.12). The maximum antibacterial activity against S. mutans determined by the EVOP-factorial technique was obtained at 80 degrees C extraction temperature, 26 h extraction time, and 50% ethanol concentration. The population of S. mutans decreased from 6.110 logCFU/ml in the initial set to 4.125 logCFU/ml in the third set.

  2. Anesthetic efficacy of 1.8 mL versus 3.6 mL of 4% articaine with 1:100,000 epinephrine as a primary buccal infiltration of the mandibular first molar.

    PubMed

    Martin, Matthew; Nusstein, John; Drum, Melissa; Reader, Al; Beck, Mike

    2011-05-01

    No study has compared 1.8 mL and 3.6 mL 4% articaine with 1:100,000 epinephrine in a mandibular buccal infiltration of the first molar. The authors conducted a prospective, randomized, single-blind, crossover study comparing the degree of pulpal anesthesia obtained with 1.8 mL and 3.6 mL 4% articaine with 1:100,000 epinephrine as a primary infiltration in the mandibular first molar. Eighty-six asymptomatic adult subjects randomly received a primary mandibular buccal first molar infiltration of 1.8 mL or 3.6 mL 4% articaine with 1:100,000 epinephrine in two separate appointments. The authors used an electric pulp tester to test the first molar for anesthesia in 3-minute cycles for 90 minutes after the injections. Compared with the 1.8-mL volume of 4% articaine with 1:100,000 epinephrine, the 3.6-mL volume showed a statistically higher success rate (70% vs 50%). The anesthetic efficacy of 3.6 mL 4% articaine with 1:100,000 epinephrine is better than 1.8 mL of the same anesthetic solution in a primary mandibular buccal infiltration of the first molar. However, the success rate of 70% is not high enough to support its use as a primary injection technique in the mandibular first molar. Copyright © 2011 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  3. Comparison and optimization of machine learning methods for automated classification of circulating tumor cells.

    PubMed

    Lannin, Timothy B; Thege, Fredrik I; Kirby, Brian J

    2016-10-01

    Advances in rare cell capture technology have made possible the interrogation of circulating tumor cells (CTCs) captured from whole patient blood. However, locating captured cells in the device by manual counting bottlenecks data processing by being tedious (hours per sample) and compromises the results by being inconsistent and prone to user bias. Some recent work has been done to automate the cell location and classification process to address these problems, employing image processing and machine learning (ML) algorithms to locate and classify cells in fluorescent microscope images. However, the type of machine learning method used is a part of the design space that has not been thoroughly explored. Thus, we have trained four ML algorithms on three different datasets. The trained ML algorithms locate and classify thousands of possible cells in a few minutes rather than a few hours, representing an order of magnitude increase in processing speed. Furthermore, some algorithms have a significantly (P < 0.05) higher area under the receiver operating characteristic curve than do other algorithms. Additionally, significant (P < 0.05) losses to performance occur when training on cell lines and testing on CTCs (and vice versa), indicating the need to train on a system that is representative of future unlabeled data. Optimal algorithm selection depends on the peculiarities of the individual dataset, indicating the need of a careful comparison and optimization of algorithms for individual image classification tasks. © 2016 International Society for Advancement of Cytometry.
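The area under the ROC curve compared across algorithms above can be computed directly from classifier scores via the Mann-Whitney statistic. A minimal sketch:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive outranks a random negative,
    with ties counted as half a win."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

perfect = roc_auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0])  # fully separated
```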

  4. Deep learning for single-molecule science

    NASA Astrophysics Data System (ADS)

    Albrecht, Tim; Slabaugh, Gregory; Alonso, Eduardo; Al-Arif, SM Masudur R.

    2017-10-01

    Exploring and making predictions based on single-molecule data can be challenging, not only because of the sheer size of the datasets, but also because a priori knowledge about the signal characteristics is typically limited and the signal-to-noise ratio is poor. For example, hypothesis-driven data exploration, informed by an expectation of the signal characteristics, can lead to interpretation bias or loss of information. Equally, even when the different data categories are known, e.g., the four bases in DNA sequencing, it is often difficult to know how to make best use of the available information content. The latest developments in machine learning (ML), so-called deep learning (DL), offer interesting new avenues to address such challenges. In some applications, such as speech and image recognition, DL has been able to outperform conventional ML strategies and even human performance. However, to date DL has not been applied much in single-molecule science, presumably in part because relatively little is known about the ‘internal workings’ of such DL tools within single-molecule science as a field. In this Tutorial, we make an attempt to illustrate in a step-by-step guide how one of these tools, a convolutional neural network (CNN), may be used for base calling in DNA sequencing applications. We compare it with a support vector machine (SVM) as a more conventional ML method, and discuss some of the strengths and weaknesses of the approach. In particular, a ‘deep’ neural network has many features of a ‘black box’, which has important implications for how we look at and interpret data.
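The forward pass of a 1-D CNN base caller can be sketched as convolution + ReLU, global max pooling, and a linear read-out over the four bases. The weights below are illustrative placeholders, not trained values, and a real model would stack several layers:

```python
def conv1d(signal, kernels, bias):
    """Valid 1-D convolution followed by ReLU; one output row per kernel."""
    out = []
    for kern, b in zip(kernels, bias):
        k = len(kern)
        row = [max(0.0, sum(kern[j] * signal[i + j] for j in range(k)) + b)
               for i in range(len(signal) - k + 1)]
        out.append(row)
    return out

def call_base(signal, kernels, bias, weights):
    """Toy CNN base caller: conv + ReLU, global max pooling, then a
    linear layer scoring the four bases A, C, G, T."""
    feats = [max(row) for row in conv1d(signal, kernels, bias)]
    scores = [sum(w * f for w, f in zip(wrow, feats)) for wrow in weights]
    return "ACGT"[max(range(4), key=scores.__getitem__)]

# Placeholder filters: one edge detector per direction, identity-like read-out.
base = call_base([0.0, 1.0, 2.0],
                 kernels=[[1.0, -1.0], [-1.0, 1.0]],
                 bias=[0.0, 0.0],
                 weights=[[1, 0], [0, 1], [0, 0], [0, 0]])
```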

  5. Rapid Membrane Filtration-Epifluorescent Microscopy Technique for Direct Enumeration of Bacteria in Raw Milk

    PubMed Central

    Pettipher, Graham L.; Mansell, Roderick; McKinnon, Charles H.; Cousins, Christina M.

    1980-01-01

    Membrane filtration and epifluorescent microscopy were used for the direct enumeration of bacteria in raw milk. Somatic cells were lysed by treatment with trypsin and Triton X-100 so that 2 ml of milk containing up to 5 × 10⁶ somatic cells/ml could be filtered. The majority of the bacteria (ca. 80%) remained intact and were concentrated on the membrane. After being stained with acridine orange, the bacteria fluoresced under ultraviolet light and could easily be counted. The clump count of orange-fluorescing cells on the membrane correlated well (r = 0.91) with the corresponding plate count for farm, tanker, and silo milks. Differences between counts obtained by different operators and between the membrane clump count and plate count were not significant. The technique is rapid, taking less than 25 min, inexpensive, costing less than 50 cents per sample, and is suitable for milks containing 5 × 10³ to 5 × 10⁸ bacteria per ml. PMID:16345515

  6. Tracking Active Learning in the Medical School Curriculum: A Learning-Centered Approach.

    PubMed

    McCoy, Lise; Pettit, Robin K; Kellar, Charlyn; Morgan, Christine

    2018-01-01

    Medical education is moving toward active learning during large group lecture sessions. This study investigated the saturation and breadth of active learning techniques implemented in first year medical school large group sessions. Data collection involved retrospective curriculum review and semistructured interviews with 20 faculty. The authors piloted a taxonomy of active learning techniques and mapped learning techniques to attributes of learning-centered instruction. Faculty implemented 25 different active learning techniques over the course of 9 first year courses. Of 646 hours of large group instruction, 476 (74%) involved at least 1 active learning component. The frequency and variety of active learning components integrated throughout the year 1 curriculum reflect faculty familiarity with active learning methods and their support of an active learning culture. This project has sparked reflection on teaching practices and facilitated an evolution from teacher-centered to learning-centered instruction.

  7. Tracking Active Learning in the Medical School Curriculum: A Learning-Centered Approach

    PubMed Central

    McCoy, Lise; Pettit, Robin K; Kellar, Charlyn; Morgan, Christine

    2018-01-01

    Background: Medical education is moving toward active learning during large group lecture sessions. This study investigated the saturation and breadth of active learning techniques implemented in first year medical school large group sessions. Methods: Data collection involved retrospective curriculum review and semistructured interviews with 20 faculty. The authors piloted a taxonomy of active learning techniques and mapped learning techniques to attributes of learning-centered instruction. Results: Faculty implemented 25 different active learning techniques over the course of 9 first year courses. Of 646 hours of large group instruction, 476 (74%) involved at least 1 active learning component. Conclusions: The frequency and variety of active learning components integrated throughout the year 1 curriculum reflect faculty familiarity with active learning methods and their support of an active learning culture. This project has sparked reflection on teaching practices and facilitated an evolution from teacher-centered to learning-centered instruction. PMID:29707649

  8. Deep learning with convolutional neural network in radiology.

    PubMed

    Yasaka, Koichiro; Akai, Hiroyuki; Kunimatsu, Akira; Kiryu, Shigeru; Abe, Osamu

    2018-04-01

    Deep learning with a convolutional neural network (CNN) has gained attention recently for its high performance in image recognition. Images themselves can be utilized in a learning process with this technique, and feature extraction in advance of the learning process is not required; important features can be learned automatically. Thanks to developments in hardware and software in addition to deep learning techniques themselves, applications of this technique to radiological images for predicting clinically useful information, such as the detection and evaluation of lesions, are beginning to be investigated. This article illustrates basic technical knowledge regarding deep learning with CNNs along the actual workflow (collecting data, implementing CNNs, and the training and testing phases). Pitfalls regarding this technique and how to manage them are also illustrated. We also describe some advanced topics of deep learning, results of recent clinical studies, and future directions for the clinical application of deep learning techniques.

  9. Formal Validation of Fault Management Design Solutions

    NASA Technical Reports Server (NTRS)

    Gibson, Corrina; Karban, Robert; Andolfato, Luigi; Day, John

    2013-01-01

    The work presented in this paper describes an approach used to develop SysML modeling patterns to express the behavior of fault protection, test the model's logic by performing fault injection simulations, and verify the fault protection system's logical design via model checking. A representative example, using a subset of the fault protection design for the Soil Moisture Active-Passive (SMAP) system, was modeled with SysML State Machines and JavaScript as Action Language. The SysML model captures interactions between relevant system components and system behavior abstractions (mode managers, error monitors, fault protection engine, and devices/switches). Development of a method to implement verifiable and lightweight executable fault protection models enables future missions to have access to larger fault test domains and verifiable design patterns. A tool-chain to transform the SysML model to jpf-Statechart compliant Java code and then verify the generated code via model checking was established. Conclusions and lessons learned from this work are also described, as well as potential avenues for further research and development.
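As an illustration of the pattern described above (not the actual SMAP design), the following sketch models an error monitor with a persistence threshold feeding a fault-protection engine, and "injects" a fault to exercise the transition logic; all class names, modes and thresholds are hypothetical stand-ins for the SysML abstractions.

```python
# Hypothetical sketch: an error monitor (persistence check) and a
# fault-protection engine interacting as simple state machines, with a
# fault injected via the telemetry stream to exercise the response logic.

class ErrorMonitor:
    """Raises a fault flag after `threshold` consecutive bad readings."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.count = 0

    def observe(self, ok):
        self.count = 0 if ok else self.count + 1
        return self.count >= self.threshold  # persistence check fires

class FaultProtectionEngine:
    """Minimal mode manager: NOMINAL -> SAFE -> RECOVERY -> NOMINAL."""
    def __init__(self):
        self.mode = "NOMINAL"

    def step(self, fault_detected):
        if self.mode == "NOMINAL" and fault_detected:
            self.mode = "SAFE"        # isolate on confirmed fault
        elif self.mode == "SAFE" and not fault_detected:
            self.mode = "RECOVERY"    # fault cleared, begin recovery
        elif self.mode == "RECOVERY":
            self.mode = "NOMINAL"     # recovery complete
        return self.mode

monitor = ErrorMonitor(threshold=2)
engine = FaultProtectionEngine()
# Fault injection: two consecutive bad readings, then nominal telemetry.
telemetry = [True, False, False, True, True]
trace = [engine.step(monitor.observe(ok)) for ok in telemetry]
print(trace)  # ['NOMINAL', 'NOMINAL', 'SAFE', 'RECOVERY', 'NOMINAL']
```

In the paper this kind of behavior is expressed as SysML State Machines and then model-checked; the toy trace above shows the property one would verify (a confirmed fault always drives the system to SAFE before recovery).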

  10. Drinking behaviour and water turnover rates of Antarctic fur seal pups: implications for the estimation of milk intake by isotopic dilution.

    PubMed

    Lea, Mary-Anne; Bonadonna, Francesco; Hindell, Mark A; Guinet, Christophe; Goldsworthy, Simon D

    2002-06-01

    The estimation of milk consumption in free-ranging seals using tritium dilution techniques makes the key assumption that the animals drink no pre-formed water during the experimental period. However, frequent observations of unweaned Antarctic fur seal pups drinking water at Iles Kerguelen necessitated the testing of this assumption. We estimated water flux rates of 30 pups (10.7 ± 0.3 kg) in four experimental groups by isotopic dilution over 4 days. The groups were: (1) pups held in an open-air enclosure without access to water, to estimate fasting metabolic water production (MWP); (2) free-ranging pups not administered additional water; (3) pups held in an open-air enclosure and given a total of 300 ml of fresh water, to verify technique accuracy; and (4) free-ranging pups given 200 ml of fresh water. Pups without access to water exhibited water flux rates (20.5 ± 0.8 ml kg⁻¹ d⁻¹) that were significantly lower than those observed for the free-ranging group (33.0 ± 1.7 ml kg⁻¹ d⁻¹). Mean estimated pre-formed water intake for the free-ranging experimental groups was 12.6 ml kg⁻¹ d⁻¹. Thus, MWP, measured as total water intake during fasting, may be significantly over-estimated in free-ranging Antarctic fur seal pups at Iles Kerguelen and at other sites, and milk intake rates may consequently be underestimated.
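The abstract's group means can be checked with simple arithmetic: reading pre-formed water intake as free-ranging water flux minus fasting metabolic water production (an interpretation of the reported figures, not the authors' per-pup calculation) gives a value close to the reported 12.6 ml kg⁻¹ d⁻¹.

```python
# Back-of-envelope check of the abstract's group means (ml/kg/day).
# The reported 12.6 was presumably computed per pup before averaging;
# the difference of the group means comes out slightly lower.
fasting_mwp = 20.5        # flux in pups with no access to water (MWP)
free_ranging_flux = 33.0  # flux in free-ranging pups
pre_formed = free_ranging_flux - fasting_mwp
print(pre_formed)         # 12.5, vs. the reported mean of 12.6
```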

  11. Isolation and Genetic Characterization of a Mutation Affecting Ribosomal Resistance to Cycloheximide in Tetrahymena

    PubMed Central

    Ares, Manuel; Bruns, Peter J.

    1978-01-01

    A dominant mutation at a new locus affecting resistance to cycloheximide has been isolated by exploiting a synergistic relationship with a previously known mutation for cycloheximide resistance in Tetrahymena. The new mutation (ChxB) was induced in a line homozygous for ChxA and was recovered from that background by a new technique termed interrupted genomic exclusion. Segregation data from the interrupted genomic exclusion suggest that ChxA and ChxB are separate, linked loci showing 30% recombination. Minimal lethal doses of cycloheximide for the four possible combinations of the wild-type and mutant alleles of these two genes are: wild type 6 µg/ml, ChxA 125 µg/ml, ChxB 10 µg/ml, ChxA-ChxB 175 µg/ml. PMID:730051

  12. Retention of Basic Life Support in Laypeople: Mastery Learning vs. Time-based Education.

    PubMed

    Boet, Sylvain; Bould, M Dylan; Pigford, Ashlee-Ann; Rössler, Bernhard; Nambyiah, Pratheeban; Li, Qi; Bunting, Alexandra; Schebesta, Karl

    2017-01-01

    To compare the effectiveness of a mastery learning (ML) versus a time-based (TB) BLS course for the acquisition and retention of BLS knowledge and skills in laypeople. After ethics approval, laypeople were randomized to a ML or TB BLS course based on the American Heart Association (AHA) Heartsaver course. In the ML group, subjects practiced and received feedback at six BLS stations until they reached a pre-determined level of performance. The TB group received a standard AHA six-station BLS course. All participants took the standard in-course BLS skills test at the end of their course. BLS skills and knowledge were tested using a high-fidelity scenario and knowledge questionnaire upon course completion (immediate post-test) and after four months (retention test). Video-recorded scenarios were assessed by two blinded, independent raters using the AHA skills checklist. Forty-three subjects were included in the analysis (23 ML; 20 TB). For the primary outcome, subjects' performance did not change after four months, regardless of the teaching modality (TB from (median [IQR]) 8.0 [6.125; 8.375] to 8.5 [5.625; 9.0] vs. ML from 8.0 [7.0; 9.0] to 7.0 [6.0; 8.0]; p = 0.12 for test phase, p = 0.21 for interaction between effect of teaching modality and test phase). For secondary outcomes, subjects acquired knowledge between pre- and immediate post-tests (p < 0.005) and partially retained the acquired knowledge up to four months (p < 0.005) despite a decrease between immediate post-test and retention test (p = 0.009), irrespective of the group (p = 0.59) (TB from 63.3 [48.3; 73.3] to 93.3 [81.7; 100.0] and then 93.3 [81.7; 93.3] vs. ML from 60.0 [46.7; 66.7] to 93.3 [80.0; 100.0] and then 80.0 [73.3; 93.3]). Regardless of the group, after 4 months chest compression depth improved (TB from 39.0 [35.0; 46.0] to 48.5 [40.25; 58.0] vs. ML from 40.0 [37.0; 47.0] to 45.0 [37.0; 52.0]; p = 0.012), but not the rate (TB from 118.0 [114.0; 125.0] to 120.5 [113.0; 129.5] vs. ML from 119.0 [113.0; 130.0] to 123.0 [102.0; 132.0]; p = 0.70). All subjects passed the in-course BLS skills test. Pass rates were poor in both groups at both the simulated immediate post-test (ML = 1/22; TB = 0/20; p = 0.35) and the retention test (ML = 1/22; TB = 0/20; p = 0.35). The ML course was slightly longer than the TB course (108 [94; 117] min vs. 95 [89; 102] min; p = 0.003). There was no major benefit of a ML compared to a TB BLS course for the acquisition and four-month retention of knowledge or skills among laypeople.

  13. Tested Demonstrations.

    ERIC Educational Resources Information Center

    Fenster, Ariel E.; And Others

    1988-01-01

    Identifies a technique using methylene blue and glucose to explain a genetically related enzyme shortage causing blue skin in humans. Offers a laser technique to study solubility of silver salts of chloride and chromate. Encourages the use of models and class participation in the study of chirality and enantiomers. (ML)

  14. Chemical approach to solvent removal during nanoencapsulation: its application to preparation of PLGA nanoparticles with non-halogenated solvent

    NASA Astrophysics Data System (ADS)

    Lee, Youngme; Sah, Eric; Sah, Hongkee

    2015-11-01

    The objective of this study was to develop a new oil-in-water emulsion-based nanoencapsulation method for the preparation of PLGA nanoparticles using a non-halogenated solvent. PLGA (60-150 mg) was dissolved in 3 ml of methyl propionate, which was vortexed with 4 ml of a 0.5-4 % polyvinyl alcohol solution. This premix was sonicated for 2 min, added into 30 ml of the aqueous polyvinyl alcohol solution, and reacted with 3 ml of 10 N NaOH. Solvent removal was achieved by the alkaline hydrolysis of the methyl propionate dissolved in the aqueous phase into water-soluble methanol and sodium propionate. This was a simple but effective technique for quickly hardening nanoemulsion droplets into nanoparticles. The resulting PLGA nanoparticles were recovered by ultracentrifugation and/or dialysis, lyophilized with trehalose, and redispersed in water. This nanoencapsulation technique permitted control of mean diameters from 151.7 ± 3.8 to 440.2 ± 22.2 nm under mild processing conditions. When the aqueous polyvinyl alcohol concentration was set at ≥1 %, nanoparticles showed uniform distributions with polydispersity indices below 0.1. There were no significant changes in mean diameters or size distribution patterns before and after lyophilization. When mestranol was encapsulated into the nanoparticles, the drug was completely nanoencapsulated: depending on experimental conditions, encapsulation efficiencies were determined to be 99.4 ± 7.2 to 105.8 ± 6.3 %. This simple, facile nanoencapsulation technique might have versatile applications for the preparation of polymeric nanoparticulate dosage forms.

  15. Reduced injection pressures using a compressed air injection technique (CAIT): an in vitro study.

    PubMed

    Tsui, Ban C H; Knezevich, Mark P; Pillay, Jennifer J

    2008-01-01

    High injection pressures have been associated with intraneural injection and persistent neurological injury in animals. Our objective was to test whether a reported simple compressed air injection technique (CAIT) would limit the generation of injection pressures to below a suggested 1,034 mm Hg limit in an in vitro model. After ethics board approval, 30 consenting anesthesiologists injected saline into a semiclosed system. Injection pressures using 30 mL syringes connected to a 22 gauge needle and containing 20 mL of saline were measured for 60 seconds using: (1) a typical "syringe feel" method, and (2) CAIT, in which 10 mL of air was drawn above the saline and compressed to 5 mL prior to and during injection. All anesthesiologists performed the syringe feel method before introduction and demonstration of CAIT. Using CAIT, no anesthesiologist generated pressures above 1,034 mm Hg, while 29 of 30 produced pressures above this limit at some time using the syringe feel method. The mean pressure using CAIT was lower (636 +/- 71 vs. 1,378 +/- 194 mm Hg, P = .025), and the syringe feel method resulted in higher peak pressures (1,875 +/- 206 vs. 715 +/- 104 mm Hg, P < .001). This study demonstrated that CAIT can effectively keep injection pressures under 1,034 mm Hg in this in vitro model. Animal and clinical studies will be needed to determine whether CAIT will allow objective, real-time pressure monitoring. If high pressure injections are proven to contribute to nerve injury in humans, this technique may have the potential to improve the safety of peripheral nerve blocks.
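The pressure cap that CAIT provides can be rationalised with Boyle's law (an interpretation, not stated in the abstract): compressing the 10 mL air column to 5 mL at constant temperature doubles its absolute pressure, limiting the gauge pressure delivered to the saline to roughly one atmosphere.

```python
# Boyle's law sketch of the CAIT air column (isothermal compression assumed).
P_ATM = 760.0             # mm Hg, absolute atmospheric pressure
v1, v2 = 10.0, 5.0        # mL of air before and after compression
p2_abs = P_ATM * v1 / v2  # P1*V1 = P2*V2  ->  absolute pressure after
gauge = p2_abs - P_ATM    # injection pressure above atmospheric
print(gauge)              # 760.0 mm Hg, below the 1,034 mm Hg limit
```

The measured mean of 636 mm Hg sits in the same range as this ceiling, allowing for losses in the syringe-needle system.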

  16. Lung cancer perfusion: can we measure pulmonary and bronchial circulation simultaneously?

    PubMed

    Yuan, Xiaodong; Zhang, Jing; Ao, Guokun; Quan, Changbin; Tian, Yuan; Li, Hong

    2012-08-01

    To describe a new CT perfusion technique for assessing the dual blood supply in lung cancer and present the initial results. This study was approved by the institutional review board. A CT protocol was developed, and a dual-input CT perfusion (DI-CTP) analysis model was applied and evaluated with regard to the blood flow fractions in lung tumours. The pulmonary trunk and the descending aorta were selected as the input arteries for the pulmonary circulation and the bronchial circulation, respectively. Pulmonary flow (PF), bronchial flow (BF), and a perfusion index (PI = PF/(PF + BF)) were calculated using the maximum slope method. After written informed consent was obtained, 13 consecutive subjects with primary lung cancer underwent DI-CTP. Perfusion results were as follows: PF, 13.45 ± 10.97 ml/min/100 ml; BF, 48.67 ± 28.87 ml/min/100 ml; PI, 21 % ± 11 %. BF was significantly larger than PF (P < 0.001). There was a negative correlation between tumour volume and perfusion index (r = 0.671, P = 0.012). The dual-input CT perfusion analysis method can be applied successfully to lung tumours. Initial results demonstrate a dual blood supply in primary lung cancer, in which the systemic circulation is dominant, and show that the proportion of the two circulation systems is moderately dependent on tumour size. A new CT perfusion technique can assess lung cancer's dual blood supply. A dual blood supply was confirmed with dominant bronchial circulation in lung cancer. The proportion of the two circulations is moderately dependent on tumour size. This new technique may benefit the management of lung cancer.
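The perfusion index defined above can be computed directly from the reported mean flows; note that the paper's 21 % is presumably the mean of per-subject indices, so the ratio of the group means only approximates it.

```python
# Perfusion index PI = PF / (PF + BF), from the reported mean flows
# (both in ml/min/100 ml of tumour tissue).
pf = 13.45                # pulmonary flow
bf = 48.67                # bronchial flow
pi = pf / (pf + bf)
print(round(pi, 3))       # close to the reported PI of 21 %
```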

  17. Apical extrusion of debris and irrigants using two hand and three engine-driven instrumentation techniques.

    PubMed

    Ferraz, C C; Gomes, N V; Gomes, B P; Zaia, A A; Teixeira, F B; Souza-Filho, F J

    2001-07-01

    To evaluate the weight of debris and the volume of irrigant extruded apically from extracted teeth in vitro after endodontic instrumentation using the balanced force technique, a hybrid hand instrumentation technique, and three engine-driven techniques utilizing nickel-titanium instruments (ProFile .04, Quantec 2000 and Pow-R). Five groups of 20 extracted human teeth with single canals were instrumented using one of five techniques: balanced force, hybrid, Quantec 2000, ProFile .04, or Pow-R. Debris extruded from the apical foramen during instrumentation was collected into preweighed 1.5 mL tubes. Following instrumentation, the volume of extruded irrigant fluid was determined by visual comparison with control centrifuge tubes filled with 0.25 mL increments of distilled water. The weight of dry extruded dentine debris was also established. Overall, the engine-driven techniques extruded less debris than the manual ones; however, there was no statistical difference between the balanced force technique and the engine-driven methods. The volume of irrigant extruded through the apex was directly associated with the weight of extruded debris, except within the ProFile group. The hybrid technique was associated with the greatest extrusion of both debris and irrigant. Overall, the engine-driven nickel-titanium systems were associated with less apical extrusion.

  18. Beyond where to how: a machine learning approach for sensing mobility contexts using smartphone sensors.

    PubMed

    Guinness, Robert E

    2015-04-28

    This paper presents the results of research on the use of smartphone sensors (namely, GPS and accelerometers), geospatial information (points of interest, such as bus stops and train stations) and machine learning (ML) to sense mobility contexts. Our goal is to develop techniques to continuously and automatically detect a smartphone user's mobility activities, including walking, running, driving and using a bus or train, in real-time or near-real-time (<5 s). We investigated a wide range of supervised learning techniques for classification, including decision trees (DT), support vector machines (SVM), naive Bayes classifiers (NB), Bayesian networks (BN), logistic regression (LR), artificial neural networks (ANN) and several instance-based classifiers (KStar, LWL and IBk). Applying ten-fold cross-validation, the best performers in terms of correct classification rate (i.e., recall) were DT (96.5%), BN (90.9%), LWL (95.5%) and KStar (95.6%). In particular, the DT algorithm RandomForest exhibited the best overall performance. After a feature selection process for a subset of algorithms, performance improved slightly. Furthermore, after tuning the parameters of RandomForest, performance improved to above 97.5%. Lastly, we measured the computational complexity of the classifiers, in terms of central processing unit (CPU) time needed for classification, to provide a rough comparison between the algorithms in terms of battery usage requirements. As a result, the classifiers can be ranked from lowest to highest complexity (i.e., computational cost) as follows: SVM, ANN, LR, BN, DT, NB, IBk, LWL and KStar. The instance-based classifiers take considerably more computational time than the non-instance-based classifiers, whereas the slowest non-instance-based classifier (NB) required about five times the CPU time of the fastest classifier (SVM). The above results suggest that DT algorithms are excellent candidates for detecting mobility contexts in smartphones, both in terms of performance and computational complexity.
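The evaluation protocol in the study (ten-fold cross-validated recall over a pool of classifiers) can be sketched with a stand-in: a pure-Python nearest-centroid classifier on synthetic two-activity "sensor features". This is not the paper's Weka-based setup (DT, SVM, NB, ...); the dataset, classifier and feature values here are invented for illustration.

```python
# Sketch of k-fold cross-validated recall, with a nearest-centroid
# classifier standing in for the classifiers compared in the study.
import random

def nearest_centroid_fit(X, y):
    """Per-class centroids of the training points."""
    cents = {}
    for label in set(y):
        pts = [x for x, lab in zip(X, y) if lab == label]
        cents[label] = [sum(col) / len(pts) for col in zip(*pts)]
    return cents

def predict(cents, x):
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(cents, key=lambda lab: d2(cents[lab], x))

def kfold_recall(X, y, k=10):
    """Correct classification rate under k-fold cross-validation."""
    idx = list(range(len(X)))
    random.Random(0).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    correct = 0
    for fold in folds:
        train = [i for i in idx if i not in fold]
        cents = nearest_centroid_fit([X[i] for i in train],
                                     [y[i] for i in train])
        correct += sum(predict(cents, X[i]) == y[i] for i in fold)
    return correct / len(X)

# Synthetic two-activity data: well-separated "walking" vs "driving" clusters.
rng = random.Random(1)
X = ([[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(50)] +
     [[rng.gauss(5, 1), rng.gauss(5, 1)] for _ in range(50)])
y = ["walking"] * 50 + ["driving"] * 50
print(kfold_recall(X, y))  # near 1.0 on these easy clusters
```

Swapping in different classifiers and comparing their cross-validated recall (and their CPU time) is exactly the comparison the paper performs at scale.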

  19. Beyond Where to How: A Machine Learning Approach for Sensing Mobility Contexts Using Smartphone Sensors †

    PubMed Central

    Guinness, Robert E.

    2015-01-01

    This paper presents the results of research on the use of smartphone sensors (namely, GPS and accelerometers), geospatial information (points of interest, such as bus stops and train stations) and machine learning (ML) to sense mobility contexts. Our goal is to develop techniques to continuously and automatically detect a smartphone user's mobility activities, including walking, running, driving and using a bus or train, in real-time or near-real-time (<5 s). We investigated a wide range of supervised learning techniques for classification, including decision trees (DT), support vector machines (SVM), naive Bayes classifiers (NB), Bayesian networks (BN), logistic regression (LR), artificial neural networks (ANN) and several instance-based classifiers (KStar, LWL and IBk). Applying ten-fold cross-validation, the best performers in terms of correct classification rate (i.e., recall) were DT (96.5%), BN (90.9%), LWL (95.5%) and KStar (95.6%). In particular, the DT algorithm RandomForest exhibited the best overall performance. After a feature selection process for a subset of algorithms, performance improved slightly. Furthermore, after tuning the parameters of RandomForest, performance improved to above 97.5%. Lastly, we measured the computational complexity of the classifiers, in terms of central processing unit (CPU) time needed for classification, to provide a rough comparison between the algorithms in terms of battery usage requirements. As a result, the classifiers can be ranked from lowest to highest complexity (i.e., computational cost) as follows: SVM, ANN, LR, BN, DT, NB, IBk, LWL and KStar. The instance-based classifiers take considerably more computational time than the non-instance-based classifiers, whereas the slowest non-instance-based classifier (NB) required about five times the CPU time of the fastest classifier (SVM). The above results suggest that DT algorithms are excellent candidates for detecting mobility contexts in smartphones, both in terms of performance and computational complexity. PMID:25928060

  20. CARSVM: a class association rule-based classification framework and its application to gene expression data.

    PubMed

    Kianmehr, Keivan; Alhajj, Reda

    2008-09-01

    In this study, we aim at building a classification framework, namely the CARSVM model, which integrates association rule mining and the support vector machine (SVM). The goal is to benefit from the advantages of both the discriminative knowledge represented by class association rules and the classification power of the SVM algorithm, to construct an efficient and accurate classifier model that improves the interpretability problem of SVM as a traditional machine learning technique and overcomes the efficiency issues of associative classification algorithms. In our proposed framework, instead of using the original training set, a set of rule-based feature vectors, generated based on the discriminative ability of class association rules over the training samples, is presented to the learning component of the SVM algorithm. We show that rule-based feature vectors present a high-quality source of discriminative knowledge that can substantially improve the predictive power of SVM and associative classification techniques. They also provide users with more convenience in terms of understandability and interpretability. We have used four datasets from the UCI ML repository to evaluate the performance of the developed system in comparison with five well-known existing classification methods. Because of the importance and popularity of gene expression analysis as a real-world application of the classification model, we present an extension of CARSVM combined with feature selection to be applied to gene expression data. Then, we describe how this combination provides biologists with an efficient and understandable classifier model. The reported test results and their biological interpretation demonstrate the applicability, efficiency and effectiveness of the proposed model. From the results, it can be concluded that a considerable increase in classification accuracy can be obtained when the rule-based feature vectors are integrated into the learning process of the SVM algorithm. In the context of applicability, the results obtained from gene expression analysis suggest that the CARSVM system can be utilized in a variety of real-world applications with some adjustments.
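The core encoding step of CARSVM, mapping each training sample to a binary vector of class-association-rule matches, can be sketched as follows; the rules here are hand-written placeholders (CARSVM mines them from the training data), and the gene names and thresholds are invented. The resulting vectors, rather than the raw features, would be passed to the SVM.

```python
# Sketch of rule-based feature encoding (hypothetical rules and gene names;
# CARSVM mines class association rules, then trains an SVM on these vectors).

# Each "rule" is the antecedent of a class association rule: a predicate
# over a raw sample such as a gene-expression profile.
rules = [
    lambda s: s["geneA"] > 2.0,                       # e.g. geneA high
    lambda s: s["geneB"] < 0.5,                       # e.g. geneB low
    lambda s: s["geneA"] > 2.0 and s["geneC"] > 1.0,  # conjunctive rule
]

def rule_features(sample):
    """Binary vector recording which mined rules the sample satisfies."""
    return [int(rule(sample)) for rule in rules]

sample = {"geneA": 2.5, "geneB": 0.3, "geneC": 0.8}
print(rule_features(sample))  # [1, 1, 0]
```

Because each coordinate corresponds to a human-readable rule, the learned SVM weights can be traced back to interpretable conditions, which is the interpretability gain the abstract describes.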

  1. A new scheme for strain typing of methicillin-resistant Staphylococcus aureus on the basis of matrix-assisted laser desorption ionization time-of-flight mass spectrometry by using machine learning approach.

    PubMed

    Wang, Hsin-Yao; Lee, Tzong-Yi; Tseng, Yi-Ju; Liu, Tsui-Ping; Huang, Kai-Yao; Chang, Yung-Ta; Chen, Chun-Hsien; Lu, Jang-Jih

    2018-01-01

    Methicillin-resistant Staphylococcus aureus (MRSA), one of the most important clinical pathogens, causes increasing morbidity and mortality worldwide. Rapid and accurate strain typing of bacteria would facilitate epidemiological investigation and infection control in near real time. Matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry is a rapid and cost-effective tool for presumptive strain typing. To develop robust strain-typing methods based on MALDI-TOF spectra, machine learning (ML) is a promising approach for the construction of predictive models. In this study, a strategy of building templates of specific types was used to facilitate generating predictive models for MRSA strain typing through various ML methods. The strain types of the isolates were determined through multilocus sequence typing (MLST). The area under the receiver operating characteristic curve (AUC) and the predictive accuracy of the models were compared. ST5, ST59, and ST239 were the major MLST types, and ST45 was a minor type. For binary classification, the AUC values of the various ML methods ranged from 0.76 to 0.99 for the ST5, ST59, and ST239 types. In multiclass classification, the predictive accuracy of all generated models was more than 0.83. This study demonstrates that ML methods can serve as a cost-effective and promising tool that provides preliminary strain typing information about major MRSA lineages on the basis of MALDI-TOF spectra.

  2. Structural brain changes versus self-report: machine-learning classification of chronic fatigue syndrome patients.

    PubMed

    Sevel, Landrew S; Boissoneault, Jeff; Letzen, Janelle E; Robinson, Michael E; Staud, Roland

    2018-05-30

    Chronic fatigue syndrome (CFS) is a disorder associated with fatigue, pain, and structural/functional abnormalities seen during magnetic resonance brain imaging (MRI). Therefore, we evaluated the performance of structural MRI (sMRI) abnormalities in the classification of CFS patients versus healthy controls and compared it to machine learning (ML) classification based upon self-report (SR). Participants included 18 CFS patients and 15 healthy controls (HC). All subjects underwent T1-weighted sMRI and provided visual analogue-scale ratings of fatigue, pain intensity, anxiety, depression, anger, and sleep quality. sMRI data were segmented using FreeSurfer, and 61 regions were selected based on functional and structural abnormalities previously reported in patients with CFS. Classification was performed in RapidMiner using a linear support vector machine and bootstrap optimism correction. We compared ML classifiers based on (1) the 61 a priori sMRI regional estimates and (2) SR ratings. The sMRI model achieved 79.58% classification accuracy. The SR model (accuracy = 95.95%) outperformed the sMRI model. Estimates from multiple brain areas related to cognition, emotion, and memory contributed strongly to group classification. This is the first ML-based group classification of CFS. Our findings suggest that sMRI abnormalities are useful for discriminating CFS patients from HC, but SR ratings remain most effective in classification tasks.

  3. Artificial intelligence, physiological genomics, and precision medicine.

    PubMed

    Williams, Anna Marie; Liu, Yong; Regner, Kevin R; Jotterand, Fabrice; Liu, Pengyuan; Liang, Mingyu

    2018-04-01

    Big data are a major driver in the development of precision medicine. Efficient analysis methods are needed to transform big data into clinically actionable knowledge. To accomplish this, many researchers are turning toward machine learning (ML), an approach to artificial intelligence (AI) that utilizes modern algorithms to give computers the ability to learn. Much of the effort to advance ML for precision medicine has focused on the development and implementation of algorithms and the generation of ever larger quantities of genomic sequence data and electronic health records. However, the relevance and accuracy of the data are as important as its quantity in the advancement of ML for precision medicine. For common diseases, physiological genomic readouts in disease-applicable tissues may be an effective surrogate for measuring the effect of the genetic and environmental factors, and their interactions, that underlie disease development and progression. Disease-applicable tissue may be difficult to obtain, but there are important exceptions, such as kidney needle biopsy specimens. As AI continues to advance, new analytical approaches, including those that go beyond data correlation, need to be developed, and the ethical issues of AI need to be addressed. Physiological genomic readouts in disease-relevant tissues, combined with advanced AI, can be a powerful approach for precision medicine for common diseases.

  4. Treatment of lateral epicondylitis using three different local injection modalities: a randomized prospective clinical trial.

    PubMed

    Dogramaci, Yunus; Kalaci, Aydiner; Savaş, Nazan; Duman, I Gokhan; Yanat, A Nedim

    2009-10-01

    To determine the effectiveness of three different local injection modalities in the treatment of lateral epicondylitis. In a prospective randomized study on lateral epicondylitis, 75 patients were divided into three equal groups, A, B and C (n = 25), and were treated using three different methods of local injection. The patients in group A were treated with local injection of a steroid (1 mL triamcinolone) combined with a local anaesthetic (1 mL lidocaine), those in group B with injection of a local anaesthetic (1 mL lidocaine) combined with the peppering technique, and those in group C with local injection of a steroid (1 mL triamcinolone) combined with a local anaesthetic (1 mL lidocaine) and the peppering technique. The outcome was defined by measuring elbow pain during activity using a 10-cm visual analogue scale (VAS) and satisfaction with the treatment using a scoring system based on the criteria of Verhaar et al., at 3 weeks and 6 months after the injection, and compared with the pre-treatment condition. There were significant (P = 0.006) differences in successful outcomes between the three groups at 6 months. In group C, in which the local steroid + peppering injection technique was used, excellent results were obtained in 84% of patients, compared with 36% and 48% of patients in groups A and B, respectively. Successful outcomes were statistically higher in group C compared with group A (P = 0.002) and group B (P = 0.011). In all groups, pain (VAS) was significantly lower at the 3-week and 6-month follow-ups compared with the pre-treatment condition. VAS scores measured at the 6-month follow-up were significantly lower in group C compared with the other groups (P = 0.002). In the treatment of lateral epicondylitis, the combination of corticosteroid injection with peppering is more effective than corticosteroid or peppering injections alone and produces better clinical results.

  5. Minimally invasive video-assisted thyroidectomy: Ascending the learning curve

    PubMed Central

    Capponi, Michela Giulii; Bellotti, Carlo; Lotti, Marco; Ansaloni, Luca

    2015-01-01

BACKGROUND: Minimally invasive video-assisted thyroidectomy (MIVAT) is a technically demanding procedure and requires a surgical team skilled in both endocrine and endoscopic surgery. The aim of this report is to point out some aspects of the learning curve of video-assisted thyroid surgery through the analysis of our preliminary series of procedures. PATIENTS AND METHODS: Over a period of 8 months, we selected 36 patients for minimally invasive video-assisted surgery of the thyroid. Patients were considered eligible if they presented with a nodule not exceeding 35 mm and a total thyroid volume <20 ml; the presence of biochemical and ultrasound signs of thyroiditis and a pre-operative diagnosis of cancer were exclusion criteria. We analysed the surgical results, conversion rate, operating time, post-operative complications, hospital stay and cosmetic outcomes of the series. RESULTS: We performed 36 total thyroidectomies, and in one case a concurrent parathyroidectomy. The procedure was successfully carried out in 33 out of 36 cases (conversion rate 8.3%). The mean operating time was 109 min (range: 80-241 min) and reached a plateau after 29 MIVAT procedures. Post-operative complications included three transient recurrent nerve palsies and two transient hypocalcemias; no definitive hypoparathyroidism was registered. The cosmetic result was considered excellent by most patients. CONCLUSIONS: Advances in skills and technology allow surgeons to easily reproduce the standard open total thyroidectomy with video-assistance. Although the learning curve represents a time-consuming step, training remains a crucial point in gaining reasonable confidence with the video-assisted surgical technique. PMID:25883451

  6. Development and validation of simple spectrophotometric and chemometric methods for simultaneous determination of empagliflozin and metformin: Applied to recently approved pharmaceutical formulation

    NASA Astrophysics Data System (ADS)

    Ayoub, Bassam M.

    2016-11-01

A new univariate spectrophotometric method and a multivariate chemometric approach were developed and compared for the simultaneous determination of empagliflozin and metformin, manipulating their zero-order absorption spectra, with application to their pharmaceutical preparation. A sample enrichment technique was used to increase the concentration of empagliflozin after extraction from tablets to allow its simultaneous determination with metformin without prior separation. Validation parameters according to ICH guidelines were satisfactory over the concentration range of 2-12 μg/mL for both drugs using the simultaneous equation method, with LOD values of 0.20 μg/mL and 0.19 μg/mL and LOQ values of 0.59 μg/mL and 0.58 μg/mL for empagliflozin and metformin, respectively. The optimum results for the chemometric approach using the partial least squares method (PLS-2) were obtained over a concentration range of 2-10 μg/mL. The optimized, validated methods are suitable for quality control laboratories, enabling fast and economic determination of the recently approved pharmaceutical combination Synjardy® tablets.
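The "simultaneous equation" calculation used in such two-component spectrophotometry can be sketched as a 2×2 linear system: each drug's absorptivity at two wavelengths comes from its calibration curve, and a mixture's absorbances at those wavelengths are solved for the two concentrations. The coefficients below are hypothetical placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical absorptivities (absorbance units per μg/mL) at two wavelengths;
# real values would come from the calibration curves of each drug.
K = np.array([[0.052, 0.008],   # wavelength 1: [empagliflozin, metformin]
              [0.004, 0.061]])  # wavelength 2

def simultaneous_equation(a1, a2):
    """Solve the two-component system for the two concentrations in μg/mL."""
    return np.linalg.solve(K, np.array([a1, a2]))

# Simulated mixture: 6 μg/mL empagliflozin + 8 μg/mL metformin (additive absorbances).
absorbances = K @ np.array([6.0, 8.0])
c = simultaneous_equation(*absorbances)
print(np.round(c, 3))  # recovers the nominal concentrations [6. 8.]
```

In practice the measured absorbances carry noise, which is one motivation for the multivariate PLS-2 alternative the abstract compares against.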

  7. Interdiffusion in nanometer-scale multilayers investigated by in situ low-angle x-ray diffraction

    NASA Astrophysics Data System (ADS)

    Wang, Wei-Hua; Bai, Hai Yang; Zhang, Ming; Zhao, J. H.; Zhang, X. Y.; Wang, W. K.

    1999-04-01

An in situ low-angle x-ray diffraction technique is used to investigate interdiffusion phenomena in various metal-metal and metal-amorphous Si nanometer-scale compositionally modulated multilayers (ML's). The temperature-dependent interdiffusivities are obtained by accurately monitoring the decay of the first-order modulation peak as a function of annealing time. Activation enthalpies and preexponential factors for the interdiffusion in the Fe-Ti, Ag-Bi, Fe-Mo, Mo-Si, Ni-Si, Nb-Si, and Ag-Si ML's are determined; both are very small compared with those in amorphous alloys and crystalline solids. The relation between the atomic-size difference and interdiffusion in the ML's is investigated. The observed interdiffusion characteristics are compared with those in amorphous alloys and crystalline α-Zr, α-Ti, and Si. The experimental results suggest that a collective atomic-jumping mechanism governs the interdiffusion in the ML's, the collective proposal involving 8-15 atoms moving between extended nonequilibrium defects by thermal activation. The role of the interdiffusion in the solid-state reaction in the ML's is also discussed.
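Extracting an activation enthalpy and preexponential factor from temperature-dependent interdiffusivities is conventionally an Arrhenius analysis, D(T) = D0·exp(-H/(k_B·T)). A minimal sketch with synthetic, assumed values (H = 1.2 eV, D0 = 1e-12 m²/s; illustrative only, not figures from the paper):

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant in eV/K

# Synthetic interdiffusivity data generated from an assumed Arrhenius law
# D(T) = D0 * exp(-H / (k_B * T)); the values are illustrative assumptions.
H_true, D0_true = 1.2, 1e-12
T = np.array([450.0, 475.0, 500.0, 525.0, 550.0])  # annealing temperatures, K
D = D0_true * np.exp(-H_true / (k_B * T))

# Linear fit of ln D versus 1/T: slope = -H/k_B, intercept = ln D0.
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
H = -slope * k_B          # activation enthalpy, eV
D0 = np.exp(intercept)    # preexponential factor, m^2/s
print(H, D0)              # recovers the assumed 1.2 eV and 1e-12 m^2/s
```

With real modulation-peak decay data, each D(T) would first be obtained from the measured decay rate at that annealing temperature before the fit.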

  8. Knowledge discovery and system biology in molecular medicine: an application on neurodegenerative diseases.

    PubMed

    Fattore, Matteo; Arrigo, Patrizio

    2005-01-01

The possibility of studying an organism in terms of system theory was proposed in the past, but only the advancement of molecular biology techniques allows us to investigate the dynamical properties of a biological system in a more quantitative and rational way than before. These new techniques can give only a basic-level view of an organism's functionality; comprehension of its dynamical behaviour depends on the possibility of performing a multiple-level analysis. Functional genomics has stimulated interest in investigating the dynamical behaviour of an organism as a whole. These activities are commonly known as System Biology, and their interests range from molecules to organs. One of the most promising applications is 'disease modeling'. The use of experimental models is a common procedure in pharmacological and clinical research; today this approach is supported by 'in silico' predictive methods. This investigation can be improved by a combination of experimental and computational tools. Machine Learning (ML) tools are able to process heterogeneous data sources; taking this peculiarity into account, they could be fruitfully applied to support multilevel data processing (molecular, cellular and morphological), which is the prerequisite for formal model design; these techniques can allow us to extract the knowledge needed for mathematical model development. The aim of our work is the development and implementation of a system that combines ML and dynamical model simulations. The program is addressed to the virtual analysis of the pathways involved in neurodegenerative diseases. These pathologies are multifactorial diseases, and the relevance of the different factors has not yet been well elucidated. 
This is a very complex task; in order to test the integrative approach, our program has been limited to the analysis of the effects of a specific protein, cyclin-dependent kinase 5 (CDK5), which is involved in the induction of neuronal apoptosis. The system has a modular structure centred on a textual knowledge discovery approach. Text mining is the only way to enhance the capability to extract, from multiple data sources, the information required for the dynamical simulator. The user may access the publicly available modules through the following site: http://biocomp.ge.ismac.cnr.it.

  9. In Vitro Susceptibilities of Isolates of Sporothrix schenckii to Itraconazole and Terbinafine

    PubMed Central

    Kohler, Lidiane Meire; Monteiro, Paulo César Fialho; Hahn, Rosane Christine; Hamdan, Júnia Soares

    2004-01-01

Thirty isolates of the yeast form of Sporothrix schenckii were evaluated for in vitro susceptibility to itraconazole and terbinafine by the recommended NCCLS modified technique (M27-A2). The MICs of itraconazole ranged between 0.062 and 4.0 μg/ml, and those of terbinafine between 0.007 and 0.50 μg/ml; therefore, terbinafine showed greater in vitro activity. PMID:15365033

  10. Water-tight knee arthrotomy closure: comparison of a novel single bidirectional barbed self-retaining running suture versus conventional interrupted sutures.

    PubMed

    Nett, Michael; Avelar, Rui; Sheehan, Michael; Cushner, Fred

    2011-03-01

    Standard medial parapatellar arthrotomies of 10 cadaveric knees were closed with either conventional interrupted absorbable sutures (control group, mean of 19.4 sutures) or a single running knotless bidirectional barbed absorbable suture (experimental group). Water-tightness of the arthrotomy closure was compared by simulating a tense hemarthrosis and measuring arthrotomy leakage over 3 minutes. Mean total leakage was 356 mL and 89 mL in the control and experimental groups, respectively (p = 0.027). Using 8 of the 10 knees (4 closed with control sutures, 4 closed with an experimental suture), a tense hemarthrosis was again created, and iatrogenic suture rupture was performed: a proximal suture was cut at 1 minute; a distal suture was cut at 2 minutes. The impact of suture rupture was compared by measuring total arthrotomy leakage over 3 minutes. Mean total leakage was 601 mL and 174 mL in the control and experimental groups, respectively (p = 0.3). In summary, using a cadaveric model, arthrotomies closed with a single bidirectional barbed running suture were statistically significantly more water-tight than those closed using a standard interrupted technique. The sample size was insufficient to determine whether the two closure techniques differed in leakage volume after suture rupture.

  11. CellML and associated tools and techniques.

    PubMed

    Garny, Alan; Nickerson, David P; Cooper, Jonathan; Weber dos Santos, Rodrigo; Miller, Andrew K; McKeever, Steve; Nielsen, Poul M F; Hunter, Peter J

    2008-09-13

    We have, in the last few years, witnessed the development and availability of an ever increasing number of computer models that describe complex biological structures and processes. The multi-scale and multi-physics nature of these models makes their development particularly challenging, not only from a biological or biophysical viewpoint but also from a mathematical and computational perspective. In addition, the issue of sharing and reusing such models has proved to be particularly problematic, with the published models often lacking information that is required to accurately reproduce the published results. The International Union of Physiological Sciences Physiome Project was launched in 1997 with the aim of tackling the aforementioned issues by providing a framework for the modelling of the human body. As part of this initiative, the specifications of the CellML mark-up language were released in 2001. Now, more than 7 years later, the time has come to assess the situation, in particular with regard to the tools and techniques that are now available to the modelling community. Thus, after introducing CellML, we review and discuss existing editors, validators, online repository, code generators and simulation environments, as well as the CellML Application Program Interface. We also address possible future directions including the need for additional mark-up languages.

  12. Quantification of 4-Methylimidazole in soft drinks, sauces and vinegars of Greek market using two liquid chromatography techniques.

    PubMed

    Tzatzarakis, Manolis N; Vakonaki, Elena; Moti, Sofia; Alegakis, Athanasios; Tsitsimpikou, Christina; Tsakiris, Ioannis; Goumenou, Marina; Nosyrev, Alexander E; Rizos, Apostolos K; Tsatsakis, Aristidis M

    2017-09-01

The substance 4-methylimidazole (4-MEI) has raised several concerns regarding its toxicity to humans, although no harmonized classification has yet been decided. The regulatory limits for food products set by various authorities in Europe and the USA differ considerably. The purpose of the present study is to compare two liquid chromatography techniques in order to determine the levels of 4-MEI in food products from the Greek market and roughly estimate the possible exposure and relevant health risk for consumers. A total of thirty-four samples (soft drinks, beers, balsamic vinegars, energy drinks and sauces) were collected and analyzed. The quality parameters for both analytical methodologies (linearity, accuracy, inter-day precision, recovery) are presented. No detectable levels of 4-MEI were found in beers or in soft drink samples other than the cola type. On the other hand, 4-MEI was detected in all cola type soft drinks (15.8-477.0 ng/ml), in energy drinks (57.1%, 6.6-22.5 ng/ml) and in vinegar samples (66.7%, 9.7-3034.7 ng/ml), while only one of the sauce samples was found to have a detectable level of 17.5 ng/ml 4-MEI. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Thermo and mechanoluminescence of Dy3+ activated K2Mg2(SO4)3 phosphor

    NASA Astrophysics Data System (ADS)

    Panigrahi, A. K.; Dhoble, S. J.; Kher, R. S.; Moharil, S. V.

    2003-08-01

A solid state diffusion method for the preparation of (K2 : Dy)Mg2(SO4)3 and (K2 : Dy,P)Mg2(SO4)3 phosphors is reported. Thermoluminescence (TL) and mechanoluminescence (ML) characteristics are studied. The TL shown by the (K2 : Dy,P)Mg2(SO4)3 phosphor is 60% as intense as that of the conventional CaSO4 : Dy phosphor used in TLD of ionizing radiation, with a linear TL dose response and negligible fading. These properties make (K2 : Dy,P)Mg2(SO4)3 suitable for dosimetry of ionizing radiation using the TL technique. The ML of (K2 : Dy)Mg2(SO4)3 shows a single peak in the ML intensity versus time curve. The ML peak reflects the recombination of electrons with free radicals (anion radicals produced by γ-irradiation) released from traps during the mechanical pressure applied to the Dy-activated K2Mg2(SO4)3 phosphor. This ML mechanism is proposed for γ-irradiated sulfate-based phosphors. The total light output, i.e. ML intensity, was found to increase with dopant concentration, strain rate and irradiation dose of the phosphor. ML and ML emission spectra of (K2 : Dy)Mg2(SO4)3 were recorded for a better understanding of the ML process. TL and ML measurements have also been performed to elucidate the mechanism of ML, and some correlation between ML and TL has been found.

  14. Assessment of quality outcomes for robotic pancreaticoduodenectomy: identification of the learning curve.

    PubMed

    Boone, Brian A; Zenati, Mazen; Hogg, Melissa E; Steve, Jennifer; Moser, Arthur James; Bartlett, David L; Zeh, Herbert J; Zureikat, Amer H

    2015-05-01

    Quality assessment is an important instrument to ensure optimal surgical outcomes, particularly during the adoption of new surgical technology. The use of the robotic platform for complex pancreatic resections, such as the pancreaticoduodenectomy, requires close monitoring of outcomes during its implementation phase to ensure patient safety is maintained and the learning curve identified. To report the results of a quality analysis and learning curve during the implementation of robotic pancreaticoduodenectomy (RPD). A retrospective review of a prospectively maintained database of 200 consecutive patients who underwent RPD in a large academic center from October 3, 2008, through March 1, 2014, was evaluated for important metrics of quality. Patients were analyzed in groups of 20 to minimize demographic differences and optimize the ability to detect statistically meaningful changes in performance. Robotic pancreaticoduodenectomy. Optimization of perioperative outcome parameters. No statistical differences in mortality rates or major morbidity were noted during the study. Statistical improvements in estimated blood loss and conversions to open surgery occurred after 20 cases (600 mL vs 250 mL [P = .002] and 35.0% vs 3.3% [P < .001], respectively), incidence of pancreatic fistula after 40 cases (27.5% vs 14.4%; P = .04), and operative time after 80 cases (581 minutes vs 417 minutes [P < .001]). Complication rates, lengths of stay, and readmission rates showed continuous improvement that did not reach statistical significance. Outcomes for the last 120 cases (representing optimized metrics beyond the learning curve) included a mean operative time of 417 minutes, median estimated blood loss of 250 mL, a conversion rate of 3.3%, 90-day mortality of 3.3%, a clinically significant (grade B/C) pancreatic fistula rate of 6.9%, and a median length of stay of 9 days. Continuous assessment of quality metrics allows for safe implementation of RPD. 
We identified several inflexion points corresponding to optimization of performance metrics for RPD that can be used as benchmarks for surgeons who are adopting this technology.

  15. BACLAB: A Computer Simulation of a Medical Bacteriology Laboratory--An Aid for Teaching Tertiary Level Microbiology.

    ERIC Educational Resources Information Center

    Lewington, J.; And Others

    1985-01-01

    Describes a computer simulation program which helps students learn the main biochemical tests and profiles for identifying medically important bacteria. Also discusses the advantages and applications of this type of approach. (ML)

  16. A PCR technique to detect enterotoxigenic and verotoxigenic Escherichia coli in boar semen samples.

    PubMed

    Bussalleu, E; Pinart, E; Yeste, M; Briz, M; Sancho, S; Torner, E; Bonet, S

    2012-08-01

Isolation of bacteria from semen in pure culture is complex, laborious and easily compromised by the presence of antibiotics and inhibitors. We developed a PCR technique to detect enterotoxigenic (ETEC) and verotoxigenic Escherichia coli (VTEC) (strains with high prevalence in the swine industry) in semen by adapting the protocols developed by Zhang et al. (2007) and Yilmaz et al. (2006). We artificially inoculated extended semen samples at different infective concentrations (from 10(2) to 10(8) bacteria ml(-1)) with two enterotoxigenic and verotoxigenic strains, and performed two multiplex and one conventional PCR. The technique proved to be a quick, useful and reliable tool for detecting ETEC and VTEC in semen down to an infective dose of 10(5) bacteria ml(-1). Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Comparison of Nerve Stimulation-guided Axillary Brachial Plexus Block, Single Injection versus Four Injections: A Prospective Randomized Double-blind Study.

    PubMed

    Badiger, Santoshi V; Desai, Sameer N

    2017-01-01

A variety of techniques have been described for the axillary block using a nerve stimulator, with either a single injection or two, three, or four separate injections. Identification of all four nerves is more difficult and time-consuming than the other methods. The aim of the present study was to compare the success rate, onset, and duration of sensory and motor anesthesia of the axillary block using a nerve stimulator, with either a single injection after identification of any one of the four nerves or four separate injections following identification of each nerve. Prospective, randomized, double-blind study. Patients undergoing forearm and hand surgeries under axillary block. One hundred patients, aged 18-75 years, were randomly allocated into two groups of 50 each. Axillary block was performed under the guidance of a nerve stimulator with a mixture of 18 ml of 1.5% lignocaine and 18 ml of 0.5% bupivacaine. In the first group (n = 50), all 36 ml of local anesthetic was injected after identification of a motor response to any one of the nerves; in Group 2, all four nerves were identified by their motor responses, and 9 ml of local anesthetic was injected at each nerve. The success rate of the block and the onset and duration of sensory and motor block were assessed. Categorical variables were compared using the Chi-square test, and continuous variables were compared using the independent t-test. The success rate of the block with the four-injection technique was higher than with the single-injection technique (84% vs. 56%, P = 0.02). The four-injection group had a faster onset of sensory and motor block and a prolonged duration of analgesia compared with the single-injection group (P < 0.001). There were no significant differences in the incidence of accidental arterial puncture or hemodynamic parameters between the groups. Identification of all four nerves produced a higher success rate and better quality of block than the single-injection technique.

  18. Measurement of the absolute optical properties and cerebral blood volume of the adult human head with hybrid differential and spatially resolved spectroscopy

    NASA Astrophysics Data System (ADS)

    Leung, Terence S.; Tachtsidis, Ilias; Smith, Martin; Delpy, David T.; Elwell, Clare E.

    2006-02-01

    A hybrid differential and spatially resolved spectroscopy (SRS) technique has been developed to measure absolute absorption coefficient (μa), reduced scattering coefficient (μ's) and cerebral blood volume (CBV) in the adult human head. A spectrometer with both differential and SRS capabilities has been used to carry out measurements in 12 subjects. Two versions of the calculation have been considered using the hybrid technique, with one considering water as a chromophore as well as oxy- and deoxy-haemoglobin, and one ignoring water. The CBV has also been measured using a previously described technique based on changing the arterial saturation (SaO2) measured separately by a pulse oximeter, resulting in mean ± SD CBVa (intra-individual coefficient of variation) = 2.22 ± 1.06 ml/100 g (29.9%). (The superscript on CBV indicates the different calculation basis.) Using the hybrid technique with water ignored, CBV0 = 3.18 ± 0.73 ml/100 g (10.0%), μ0a(813 nm) = 0.010 ± 0.003 mm-1 and μ'0s(813 nm) = 1.19 ± 0.55 mm-1 (data quoted at 813 nm). With water considered, CBVw = 3.05 ± 0.77 ml/100 g (10.5%), μwa(813 nm) = 0.010 ± 0.003 mm-1 and μ'ws(813 nm) = 1.28 ± 0.56 mm-1. The mean biases between CBV0/CBVw, CBV0/CBVa and CBVw/CBVa are 0.14 ± 0.09, 0.79 ± 1.22 and 0.65 ± 1.24 ml/100 g. The mean biases between μ0a(813 nm)/μwa(813 nm) and μ'0s(813 nm)/μ'ws(813 nm) are (5.9 ± 10.0) × 10-4 mm-1 and -0.084 ± 0.266 mm-1, respectively. The method we describe extends the functionality of the current SRS instrumentation.

  19. [Application of uterine lower part breakwater-like suture operation in placenta previa].

    PubMed

    Zhao, Y; Zhu, J W; Wu, D; Wang, Q H; Lu, S S; Liu, X X; Zou, L

    2018-04-25

Objective: To explore the efficacy and safety of the uterine lower posterior wall breakwater-like suture technique in controlling intraoperative bleeding in placenta previa. Methods: From June 2016 to June 2017, 47 patients were diagnosed with placenta previa in Union Hospital, Tongji Medical College of Huazhong University of Science and Technology. The posterior wall breakwater-like suture technique was used preferentially; for cases with a poor myometrium layer, a lower anterior wall stitch suture was added at the same time. Bilateral ligation of the descending branches of the uterine artery and Cook balloon compression of the uterine lower segment were conducted when necessary. The clinical data of the 47 cases were analyzed. Results: Thirty cases (63.8%, 30/47) were diagnosed with placenta increta or percreta by ultrasound or MRI preoperatively. Seventeen cases (36.2%, 17/47) were diagnosed with placenta accreta. Thirty-four cases had a previous history of cesarean section. The average cervical canal length of the 47 patients was (2.8±0.9) cm. There were 19 cases (40.4%, 19/47) with the posterior wall breakwater-like suture applied once and 16 cases (34.0%, 16/47) with it applied 2 or 3 times; 12 cases (25.5%, 12/47) were treated with the anterior wall stitch suture simultaneously. Ten cases (21.3%, 10/47) underwent uterine artery ligation, and 17 cases (36.2%, 17/47) underwent Cook balloon compression of the bleeding surface of the lower segment. None of them had postpartum hemorrhage or underwent internal iliac artery embolization. The median intraoperative blood loss was 700 ml (25th percentile 500 ml, 75th percentile 1 200 ml). Blood loss was ≥1 000 ml in 18 (38.3%, 18/47) patients, with a maximum of 2 500 ml. The median blood transfusion volume (including allogeneic transfusion and autotransfusion) was 450 ml (25th percentile 228 ml, 75th percentile 675 ml). The average vaginal bleeding volume was (150±63) ml on the first day after operation. 
The mean hospitalization time was (4.7±1.0) days. The mean gestational age at pregnancy termination was (36.1±1.5) weeks, and the mean birth weight of the newborns was (2 817±492) g. Apgar scores were 7.8±1.1 at 1 minute and 8.9±0.8 at 5 minutes. There were no neonatal deaths; 16 cases (34.0%, 16/47) were transferred to the neonatal ICU, mainly for premature delivery and low birth weight. No complication was found at 6 months post-operation. Conclusions: The uterine posterior wall breakwater-like suture technique is a simple, safe and effective way of controlling intraoperative bleeding in placenta previa. The lower anterior wall stitch suture could effectively stop bleeding and restore the normal uterine shape. Combined application of the various methods could significantly reduce the incidence of postpartum hemorrhage and hysterectomy, and improve maternal and fetal prognosis.

  20. Ultrasensitive prostate specific antigen assay following laparoscopic radical prostatectomy--an outcome measure for defining the learning curve.

    PubMed

    Viney, R; Gommersall, L; Zeif, J; Hayne, D; Shah, Z H; Doherty, A

    2009-07-01

    Radical retropubic prostatectomy (RRP) performed laparoscopically is a popular treatment with curative intent for organ-confined prostate cancer. After surgery, prostate specific antigen (PSA) levels drop to low levels which can be measured with ultrasensitive assays. This has been described in the literature for open RRP but not for laparoscopic RRP. This paper describes PSA changes in the first 300 consecutive patients undergoing non-robotic laparoscopic RRP by a single surgeon. To use ultrasensitive PSA (uPSA) assays to measure a PSA nadir in patients having laparoscopic radical prostatectomy below levels recorded by standard assays. The aim was to use uPSA nadir at 3 months' post-prostatectomy as an early surrogate end-point of oncological outcome. In so doing, laparoscopic oncological outcomes could then be compared with published results from other open radical prostatectomy series with similar end-points. Furthermore, this end-point could be used in the assessment of the surgeon's learning curve. Prospective, comprehensive, demographic, clinical, biochemical and operative data were collected from all patients undergoing non-robotic laparoscopic RRP. We present data from the first 300 consecutive patients undergoing laparoscopic RRP by a single surgeon. uPSA was measured every 3 months post surgery. Median follow-up was 29 months (minimum 3 months). The likelihood of reaching a uPSA of < or = 0.01 ng/ml at 3 months is 73% for the first 100 patients. This is statistically lower when compared with 83% (P < 0.05) for the second 100 patients and 80% for the third 100 patients (P < 0.05). Overall, 84% of patients with pT2 disease and 66% patients with pT3 disease had a uPSA of < or = 0.01 ng/ml at 3 months. Pre-operative PSA, PSA density and Gleason score were not correlated with outcome as determined by a uPSA of < or = 0.01 ng/ml at 3 months. 
Positive margins correlate with outcome as determined by a uPSA of < or = 0.01 ng/ml at 3 months but operative time and tumour volume do not (P < 0.05). Attempt at nerve sparing had no adverse effect on achieving a uPSA of < or = 0.01 ng/ml at 3 months. uPSA can be used as an early end-point in the analysis of oncological outcomes after radical prostatectomy. It is one of many measures that can be used in calculating a surgeon's learning curve for laparoscopic radical prostatectomy and in bench-marking performance. With experience, a surgeon can achieve in excess of an 80% chance of obtaining a uPSA nadir of < or = 0.01 ng/ml at 3 months after laparoscopic RRP for a British population. This is equivalent to most published open series.

  1. Surface EMG signals based motion intent recognition using multi-layer ELM

    NASA Astrophysics Data System (ADS)

    Wang, Jianhui; Qi, Lin; Wang, Xiao

    2017-11-01

The upper-limb rehabilitation robot is regarded as a useful tool to help patients with hemiplegia perform repetitive exercise. Surface electromyography (sEMG) signals contain motion information, as these electric signals are generated by and related to nerve-muscle activity. These sEMG signals, representing a person's intention of active motion, are introduced into the rehabilitation robot system to recognize upper-limb movements. Traditionally, feature extraction is an indispensable step in drawing significant information from the original signals, a tedious task requiring rich, relevant experience. This paper employs a deep learning scheme to extract the internal features of the sEMG signals using an advanced Extreme Learning Machine based auto-encoder (ELM-AE). The information contained in the multi-layer structure of the ELM-AE is used as a high-level representation of the internal features of the sEMG signals, and a simple ELM then post-processes the extracted features, forming the complete multi-layer ELM (ML-ELM) algorithm. The method is then employed for sEMG-based motion intent recognition. The case studies show that the adopted deep learning algorithm (ELM-AE) yields higher classification accuracy than a Principal Component Analysis (PCA) scheme on 5 different types of upper-limb motions, indicating the effectiveness and learning capability of the ML-ELM in such motion intent recognition applications.
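As a rough illustration of the ELM-AE idea described above (a random hidden layer, least-squares output weights that reconstruct the input, with the transpose of those weights reused as an encoder, followed by a plain ELM classifier on top), here is a minimal numpy sketch on synthetic two-class data standing in for sEMG feature vectors. The dimensions and data are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # clipped to avoid overflow in exp for large random activations
    return 1.0 / (1.0 + np.exp(-np.clip(z, -60.0, 60.0)))

def elm_ae_encode(X, n_hidden):
    """ELM auto-encoder: random hidden layer, then least-squares output
    weights beta that reconstruct X; beta.T is reused as the learned encoder."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = sigmoid(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, X, rcond=None)  # solves H @ beta ~ X
    return X @ beta.T

# Toy stand-in for sEMG feature vectors: two synthetic motion classes.
X = np.vstack([rng.normal(0.0, 1.0, (50, 16)), rng.normal(2.0, 1.0, (50, 16))])
y = np.array([0] * 50 + [1] * 50)

# One ELM-AE layer for features, then a plain ELM classifier on top (ML-ELM).
Z = elm_ae_encode(X, n_hidden=32)
W2 = rng.standard_normal((Z.shape[1], 64))
H2 = sigmoid(Z @ W2)
beta2, *_ = np.linalg.lstsq(H2, np.eye(2)[y], rcond=None)  # one-hot targets
pred = (H2 @ beta2).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```

The appeal of the scheme is that nothing is trained iteratively: both the auto-encoder and the classifier reduce to single linear least-squares solves over random hidden features.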

  2. Brainstorming: weighted voting prediction of inhibitors for protein targets.

    PubMed

    Plewczynski, Dariusz

    2011-09-01

    The "Brainstorming" approach presented in this paper is a weighted voting method that can improve the quality of predictions generated by several machine learning (ML) methods. First, an ensemble of heterogeneous ML algorithms is trained on available experimental data, then all solutions are gathered and a consensus is built between them. The final prediction is performed using a voting procedure, whereby the vote of each method is weighted according to a quality coefficient calculated using multivariable linear regression (MLR). The MLR optimization procedure is very fast, therefore no additional computational cost is introduced by using this jury approach. Here, brainstorming is applied to selecting actives from large collections of compounds relating to five diverse biological targets of medicinal interest, namely HIV-reverse transcriptase, cyclooxygenase-2, dihydrofolate reductase, estrogen receptor, and thrombin. The MDL Drug Data Report (MDDR) database was used for selecting known inhibitors for these protein targets, and experimental data was then used to train a set of machine learning methods. The benchmark dataset (available at http://bio.icm.edu.pl/∼darman/chemoinfo/benchmark.tar.gz ) can be used for further testing of various clustering and machine learning methods when predicting the biological activity of compounds. Depending on the protein target, the overall recall value is raised by at least 20% in comparison to any single machine learning method (including ensemble methods like random forest) and unweighted simple majority voting procedures.

  3. Effective Classroom Demonstration of Soil Reinforcing Techniques.

    ERIC Educational Resources Information Center

    Williams, John Wharton; Fox, Dennis James

    1986-01-01

    Presents a model for demonstrating soil mass stabilization. Explains how this approach can assist students in understanding the various types of soil reinforcement techniques, their relative contribution to increased soil strength, and some of their limitations. A working drawing of the model and directives for construction are included. (ML)

  4. Epidural volume extension: A novel technique and its efficacy in high risk cases

    PubMed Central

    Tiwari, Akhilesh Kumar; Singh, Rajeev Ratan; Anupam, Rudra Pratap; Ganguly, S.; Tomar, Gaurav Singh

    2012-01-01

    We present a unique case series, restricted to high-risk cases from different specialties, of patients who underwent successful surgery at our institute using the epidural volume extension technique with 1 mL of 0.5% ropivacaine and 25 μg of fentanyl. PMID:25885627

  5. Opportunities to Create Active Learning Techniques in the Classroom

    ERIC Educational Resources Information Center

    Camacho, Danielle J.; Legare, Jill M.

    2015-01-01

    The purpose of this article is to contribute to the growing body of research that focuses on active learning techniques. Active learning techniques require students to consider a given set of information, analyze, process, and prepare to restate what has been learned--all strategies are confirmed to improve higher order thinking skills. Active…

  6. Mutagen and Oncogen Study on JP-4

    DTIC Science & Technology

    1978-09-01

    random bred mice were killed by cranial blow, decapitated and bled. The liver was immediately dissected from the animal using aseptic technique and placed...chemical. Positive Controls--N-methylnitrosoguanidine (MNNG) at a concentration of 10 µg/ml was used as the positive control agent in nonactivation tests...The positive control agent in activation tests was 3,4-benzo(a)pyrene (BaP) at a concentration of 10 µg/ml. EXPERIMENTAL DESIGN Cell Preparation

  7. Plasmodium falciparum: a simplified technique for obtaining singly infected erythrocytes.

    PubMed

    Puthia, Manoj K; Tan, Kevin S W

    2005-02-01

    We report the development of a simple technique involving 15 ml polypropylene tubes and a rotatory incubator for obtaining erythrocytes singly infected with Plasmodium falciparum. This technique will be useful for cloning of the parasite. Our finding that P. falciparum merozoite invasion is inhibited during rotation suggests that this method may also be useful for the study of parasite-erythrocyte interactions under dynamic circulatory conditions.

  8. A relational learning approach to Structure-Activity Relationships in drug design toxicity studies.

    PubMed

    Camacho, Rui; Pereira, Max; Costa, Vítor Santos; Fonseca, Nuno A; Adriano, Carlos; Simões, Carlos J V; Brito, Rui M M

    2011-09-16

    It has been recognized that the development of new therapeutic drugs is a complex and expensive process. A large number of factors affect the activity in vivo of putative candidate molecules and the propensity for causing adverse and toxic effects is recognized as one of the major hurdles behind the current "target-rich, lead-poor" scenario. Structure-Activity Relationship (SAR) studies, using relational Machine Learning (ML) algorithms, have already been shown to be very useful in the complex process of rational drug design. Despite the ML successes, human expertise is still of the utmost importance in the drug development process. An iterative process and tight integration between the models developed by ML algorithms and the know-how of medicinal chemistry experts would be a very useful symbiotic approach. In this paper we describe a software tool that achieves that goal--iLogCHEM. The tool allows the use of Relational Learners in the task of identifying molecules or molecular fragments with potential to produce toxic effects, and thus help in stream-lining drug design in silico. It also allows the expert to guide the search for useful molecules without the need to know the details of the algorithms used. The models produced by the algorithms may be visualized using a graphical interface, that is of common use amongst researchers in structural biology and medicinal chemistry. The graphical interface enables the expert to provide feedback to the learning system. The developed tool has also facilities to handle the similarity bias typical of large chemical databases. For that purpose the user can filter out similar compounds when assembling a data set. Additionally, we propose ways of providing background knowledge for Relational Learners using the results of Graph Mining algorithms. Copyright 2011 The Author(s). Published by Journal of Integrative Bioinformatics.
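    The similarity-bias handling mentioned above can be illustrated with a greedy Tanimoto filter. The bit-index-set fingerprints and the 0.8 cutoff are illustrative assumptions, not iLogCHEM's actual scheme:

```python
# Greedy similarity filter (illustrative, not the iLogCHEM code): keep a
# compound only if its Tanimoto similarity to every already-kept compound
# is below a cutoff, reducing the similarity bias of large chemical databases.
def tanimoto(a: set, b: set) -> float:
    """Tanimoto coefficient between two fingerprints given as bit-index sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def filter_similar(fingerprints, cutoff=0.8):
    """Return indices of compounds surviving the similarity filter."""
    kept = []
    for i, fp in enumerate(fingerprints):
        if all(tanimoto(fp, fingerprints[j]) < cutoff for j in kept):
            kept.append(i)
    return kept

# Toy fingerprints: the second is a near-duplicate of the first (Tanimoto 0.8).
fps = [{1, 2, 3, 4}, {1, 2, 3, 4, 5}, {10, 11, 12}]
kept = filter_similar(fps)  # the near-duplicate is dropped
```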

  9. Machine learning for prediction of 30-day mortality after ST elevation myocardial infarction: An Acute Coronary Syndrome Israeli Survey data mining study.

    PubMed

    Shouval, Roni; Hadanny, Amir; Shlomo, Nir; Iakobishvili, Zaza; Unger, Ron; Zahger, Doron; Alcalai, Ronny; Atar, Shaul; Gottlieb, Shmuel; Matetzky, Shlomi; Goldenberg, Ilan; Beigel, Roy

    2017-11-01

    Risk scores for prediction of mortality 30 days following an ST-segment elevation myocardial infarction (STEMI) have been developed using a conventional statistical approach. The aim was to evaluate an array of machine learning (ML) algorithms for prediction of mortality at 30 days in STEMI patients and to compare these to the conventional validated risk scores. This was a retrospective, supervised learning, data mining study. Out of a cohort of 13,422 patients from the Acute Coronary Syndrome Israeli Survey (ACSIS) registry, 2782 patients fulfilled the inclusion criteria and 54 variables were considered. Prediction models for overall mortality 30 days after STEMI were developed using 6 ML algorithms. Models were compared to each other and to the Global Registry of Acute Coronary Events (GRACE) and Thrombolysis In Myocardial Infarction (TIMI) scores. Depending on the algorithm, using all available variables, model performance measured as area under the receiver operating characteristic curve (AUC) ranged from 0.64 to 0.91. The best models performed similarly to the GRACE score (0.87 SD 0.06) and outperformed the TIMI score (0.82 SD 0.06, p<0.05). Performance of most algorithms plateaued once 15 variables were included. Among the top predictors were creatinine, Killip class on admission, blood pressure, glucose level, and age. We present a data mining approach for prediction of mortality after ST-segment elevation myocardial infarction. The algorithms selected showed competence in prediction across an increasing number of variables. ML may be used for outcome prediction in complex cardiology settings. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
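    The benchmarking workflow described (several ML algorithms compared by AUC on the same cohort) can be sketched as follows, with synthetic imbalanced data standing in for the registry and two models standing in for the six algorithms:

```python
# Sketch of the study design: several ML models compared by area under the
# ROC curve (AUC) on the same train/test split. Data are synthetic, loosely
# mirroring a rare outcome (~5%) with 54 candidate variables.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=54,
                           weights=[0.95, 0.05], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

models = {"logistic": LogisticRegression(max_iter=1000),
          "random_forest": RandomForestClassifier(random_state=1)}
aucs = {name: roc_auc_score(y_te, m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
        for name, m in models.items()}
```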

  10. Using Machine-Learned Bayesian Belief Networks to Predict Perioperative Risk of Clostridium Difficile Infection Following Colon Surgery

    PubMed Central

    Bilchik, Anton; Eberhardt, John; Kalina, Philip; Nissan, Aviram; Johnson, Eric; Avital, Itzhak; Stojadinovic, Alexander

    2012-01-01

    Background Clostridium difficile (C-Diff) infection following colorectal resection is an increasing source of morbidity and mortality. Objective We sought to determine if machine-learned Bayesian belief networks (ml-BBNs) could preoperatively provide clinicians with postoperative estimates of C-Diff risk. Methods We performed a retrospective modeling of the Nationwide Inpatient Sample (NIS) national registry dataset with independent set validation. The NIS registries for 2005 and 2006 were used for initial model training, and the data from 2007 were used for testing and validation. International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) codes were used to identify subjects undergoing colon resection and postoperative C-Diff development. The ml-BBNs were trained using a stepwise process. Receiver operating characteristic (ROC) curve analysis was conducted and area under the curve (AUC), positive predictive value (PPV), and negative predictive value (NPV) were calculated. Results From over 24 million admissions, 170,363 undergoing colon resection met the inclusion criteria. Overall, 1.7% developed postoperative C-Diff. Using the ml-BBN to estimate C-Diff risk, model AUC is 0.75. Using only known a priori features, AUC is 0.74. The model has two configurations: a high sensitivity and a high specificity configuration. Sensitivity, specificity, PPV, and NPV are 81.0%, 50.1%, 2.6%, and 99.4% for high sensitivity and 55.4%, 81.3%, 3.5%, and 99.1% for high specificity. C-Diff has 4 first-degree associates that influence the probability of C-Diff development: weight loss, tumor metastases, inflammation/infections, and disease severity. Conclusions Machine-learned BBNs can produce robust estimates of postoperative C-Diff infection, allowing clinicians to identify high-risk patients and potentially implement measures to reduce its incidence or morbidity. PMID:23611947
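    The reported PPV and NPV follow from Bayes' rule applied to the stated sensitivity, specificity, and 1.7% prevalence; a quick check of the numbers:

```python
# PPV/NPV from sensitivity, specificity, and prevalence via Bayes' rule,
# checked against the high-sensitivity configuration (sens 81.0%, spec 50.1%)
# at the reported 1.7% postoperative C-Diff prevalence.
def ppv(sens, spec, prev):
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

def npv(sens, spec, prev):
    return spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

ppv_hi_sens = ppv(0.810, 0.501, 0.017)  # ~0.027, matching the reported 2.6%
npv_hi_sens = npv(0.810, 0.501, 0.017)  # ~0.993, close to the reported 99.4%
```

    The low PPV despite a decent AUC is driven almost entirely by the 1.7% prevalence, which is why the high-NPV configuration is the clinically useful one for ruling out risk.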

  11. The Outcome of Primary Subglandular Breast Augmentation Using Tumescent Local Anesthesia.

    PubMed

    Rusciani, Antonio; Pietramaggiori, Giorgio; Troccola, Antonietta; Santoprete, Stefano; Rotondo, Antonio; Curinga, Giuseppe

    2016-01-01

    The tumescent local anesthesia (TLA) technique, used to obtain regional anesthesia and vasoconstriction of the skin and subcutaneous tissues, is routinely adopted for several plastic surgery procedures. Here, we describe the use of TLA in primary subglandular breast augmentation. This series evaluates advantages and disadvantages of TLA in elective breast augmentation surgery as well as patients' response to this procedure. Between December 2008 and November 2011, 150 patients underwent bilateral primary subglandular breast augmentation under TLA and conscious sedation in the presence of a board-certified anesthesiologist. Midazolam 0.05 mg/kg IV and ranitidine 100 mg IV were given as premedication. Tumescent local anesthesia was composed of 25 mL of lidocaine 2%, 8 mEq of sodium bicarbonate, and 1 mL of epinephrine (1 mg/1 mL) in 1000 mL of 0.9% NS. The solution was delivered between the pectoral fascia and the mammary gland via a spinal needle. After infiltration, 45 minutes were allowed before surgery for local anesthetic effects to take place. The mean age of the patients was 34.3 years. The average amount of tumescent solution infiltrated was 1150 mL, with a maximal dose of 17 mg/kg of lidocaine used. Operating time was 45 minutes and recovery room time averaged 125 minutes. Minor complications were found in a total of 9 (5.3%) patients, with no major surgery-related complications such as hematoma or seroma formation. Breast augmentation under TLA and conscious sedation proved to be safe in the presence of a board-certified anesthesiologist and when performed with meticulous surgical technique.

  12. Preoperative Subconjunctival Injection of Mitomycin C Versus Intraoperative Topical Application as an Adjunctive Treatment for Surgical Removal of Primary Pterygium

    PubMed Central

    Ghoneim, Ehab M.; Abd-El Ghny, Ahmed A.; Gab-Allah, Amro A.; Kamal, Mohamed Z.

    2011-01-01

    Purpose: To compare the efficacy of preoperative local injection of mitomycin C (MMC) to intraoperative application of MMC in the prevention of pterygium recurrence after surgical removal. Materials and Methods: Seventy eyes of 70 patients with primary pterygia were randomly allocated to two groups. The first group (Group A, 35 eyes) received 0.1 ml of 0.15 mg/ml of subconjunctival MMC injected into the head of the pterygium 24 h before surgical excision with the bare sclera technique. The second group (Group B 35 eyes) underwent surgical removal with the bare sclera technique with intraoperative application of MMC (0.15 mg/ml) over bare sclera for 3 min. The study was performed between March 2007 and December 2008, and follow up was performed for 1 year postoperatively. Differences between frequencies in both groups were compared by the Chi-square test or Fisher exact test. Differences between means in both groups were compared by Student’s t-test. P < 0.05 was considered significant. Results: The rate of pterygium recurrence was 5.70% in Group A and 8.57% in Group B at 1 year postoperatively (P>0.05). Postoperatively, scleral thinning occurred in one eye in each group that resolved by 5 months postoperatively. No serious postoperative complications occurred in either group. Conclusion: Preoperative local injection of 0.15 mg/ml MMC is as effective as intraoperative topical application of 0.15 mg/ml MMC for preventing pterygium recurrence after surgical removal. PMID:21572732
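    The recurrence comparison above (2/35 vs. 3/35 eyes, derived from the reported percentages) can be reproduced with Fisher's exact test via SciPy:

```python
# Fisher's exact test on the recurrence counts implied by the abstract:
# Group A 2/35 eyes (5.70%) vs. Group B 3/35 eyes (8.57%).
from scipy.stats import fisher_exact

table = [[2, 33],   # Group A: recurred, did not recur
         [3, 32]]   # Group B: recurred, did not recur
odds_ratio, p_value = fisher_exact(table)
# p_value > 0.05, consistent with the reported non-significant difference
```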

  13. Molecular imprinted opal closest-packing photonic crystals for the detection of trace 17β-estradiol in aqueous solution.

    PubMed

    Sai, Na; Wu, Yuntang; Sun, Zhong; Huang, Guowei; Gao, Zhixian

    2015-11-01

    A novel opal closest-packing (OCP) photonic crystal (PC) was prepared by introducing a molecular imprinting technique into the OCP PC. This molecularly imprinted (MI)-OCP PC was fabricated via a vertical convective self-assembly method using 17β-estradiol (E2) as the template molecule for monitoring E2 in aqueous solution. Morphology characterization showed that the MI-OCP PC possessed a highly ordered, three-dimensional (3D) periodic structure with the desired structural color. The proposed PC material displayed reduced reflection intensity when detecting E2 in a water environment, because molecular imprinting recognition events change the optical characteristics of the PC. The Bragg diffraction intensity decreased by 19.864 a.u. as the E2 concentration increased from 1.5 ng mL(-1) to 364.5 ng mL(-1) within 6 min, whereas there were no obvious peak intensity changes for estriol, estrone, cholesterol, testosterone, or diethylstilbestrol, indicating that the MI-OCP PC had a selective and rapid response to E2 molecules. The adsorption results showed that the OCP structure and homogeneous layers created in the MI-OCP PC conferred higher adsorption capacity. Thus, the MI-OCP PC is a simply prepared, sensitive, selective, and easy-to-operate material that shows promise for routine residue monitoring in food and the environment. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. [Brain Perfusion, Cognitive Functions, and Vascular Age in Middle Aged Patients With Essential Arterial Hypertension].

    PubMed

    Parfenov, V A; Ostroumova, T M; Pеrepelova, E M; Perepelov, V A; Kochetkov, A I; Ostroumova, O D

    2018-05-01

    This study aimed to assess cognitive functions and cerebral blood flow measured with arterial spin labeling (ASL), and their possible correlations with vascular age, in untreated middle-aged patients with grade 1-2 essential arterial hypertension (EAH). We examined 73 subjects aged 40-59 years (33 with EAH and 40 healthy volunteers [controls]). Neuropsychological assessment included the Montreal Cognitive Assessment (MoCA), Trail Making Test (parts A and B), Stroop Color and Word Test, verbal fluency tests (phonemic and semantic), and a 10-item word list learning task. All subjects underwent brain MRI; the protocol included ASL. Vascular age was calculated by two techniques, using Framingham Heart Study risk tables and SCORE project scales. Patients with EAH had lower performance on the phonemic verbal fluency test (13.4±3.2, p=0.002) and a lower mean MoCA score (28.1±1.7 vs. 29.2±1.4 points, p=0.001) compared to controls. White matter hyperintensities (WMH) were present in 7.5% of controls and 51.5% of EAH patients (p=0.0002). Cerebral blood flow (CBF) in EAH patients was lower than in controls in both the right (39.1±5.6 vs. 45.8±3.2 ml/100 g/min) and left frontal lobes (39.2±6.2 vs. 45.2±3.6 ml/100 g/min).

  15. A Comparative Study of Different EEG Reference Choices for Diagnosing Unipolar Depression.

    PubMed

    Mumtaz, Wajid; Malik, Aamir Saeed

    2018-06-02

    The choice of an electroencephalogram (EEG) reference has fundamental importance and could be critical during clinical decision-making, because an impure EEG reference could falsify the clinical measurements and subsequent inferences. In this research, the suitability of three EEG references was compared while classifying depressed and healthy brains using a machine-learning (ML)-based validation method. EEG data of 30 unipolar depressed subjects and 30 age-matched healthy controls were recorded and analyzed under three different references: the linked-ear reference (LE), the average reference (AR), and the reference electrode standardization technique (REST). The EEG-based functional connectivity (FC) was computed. Also, graph-based measures, such as the distances between nodes, the minimum spanning tree, and the maximum flow between the nodes for each channel pair, were calculated. An ML scheme provided a mechanism to compare the performances of the extracted features within a general framework comprising feature extraction (graph-theoretic measures), feature selection, classification, and validation. For comparison purposes, performance metrics such as classification accuracies, sensitivities, specificities, and F scores were computed. When comparing the three references, diagnostic accuracy was best with REST, while LE and AR showed less discrimination between the two groups. Based on the results, it can be concluded that the choice of an appropriate reference is critical in the clinical scenario. The REST reference is recommended for future applications of EEG-based diagnosis of mental illnesses.
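    One of the graph-based measures, the minimum spanning tree over a functional-connectivity matrix, can be sketched with SciPy on a toy 4-channel matrix (illustrative values, not the study's data):

```python
# Minimum spanning tree of an EEG functional-connectivity graph: convert the
# similarity (connectivity) matrix to distances, then extract the MST.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Toy 4-channel connectivity (e.g., correlation) matrix.
conn = np.array([[1.0, 0.8, 0.2, 0.1],
                 [0.8, 1.0, 0.3, 0.4],
                 [0.2, 0.3, 1.0, 0.9],
                 [0.1, 0.4, 0.9, 1.0]])
dist = 1.0 - conn            # strong connections become short distances
np.fill_diagonal(dist, 0.0)  # no self-edges
mst = minimum_spanning_tree(dist).toarray()
n_edges = np.count_nonzero(mst)  # a tree on n nodes has n - 1 edges
```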

  16. Comparison of MRI segmentation techniques for measuring liver cyst volumes in autosomal dominant polycystic kidney disease.

    PubMed

    Farooq, Zerwa; Behzadi, Ashkan Heshmatzadeh; Blumenfeld, Jon D; Zhao, Yize; Prince, Martin R

    To compare MRI segmentation methods for measuring liver cyst volumes in autosomal dominant polycystic kidney disease (ADPKD). Liver cyst volumes in 42 ADPKD patients were measured using region growing, thresholding, and cyst diameter techniques. Manual segmentation was the reference standard. Root mean square deviation was 113, 155, and 500 for cyst diameter, thresholding, and region growing, respectively. Thresholding error for cyst volumes below 500 ml was 550% vs 17% for cyst volumes above 500 ml (p<0.001). For measuring the volume of a small number of cysts, cyst diameter and manual segmentation methods are recommended. For severe disease with numerous, large hepatic cysts, thresholding is an acceptable alternative. Copyright © 2017 Elsevier Inc. All rights reserved.
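    The thresholding technique reduces to counting supra-threshold voxels and scaling by voxel volume; a minimal sketch on a synthetic volume (the voxel size is an assumption, not from the paper):

```python
# Thresholding segmentation in miniature: count voxels above an intensity
# cutoff and convert the count to milliliters.
import numpy as np

voxel_ml = 0.001  # assume 1 mm^3 voxels, i.e., 0.001 ml each
image = np.zeros((100, 100, 100))
image[20:60, 20:60, 20:60] = 300.0  # bright "cyst" block of 40^3 voxels
threshold = 150.0
cyst_volume_ml = np.count_nonzero(image > threshold) * voxel_ml  # 64.0 ml
```

    The paper's finding that thresholding fails for small total cyst volume is intuitive here: for small regions, partial-volume voxels near the cutoff dominate the count.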

  17. Improving Crotalidae polyvalent immune Fab reconstitution times.

    PubMed

    Quan, Asia N; Quan, Dan; Curry, Steven C

    2010-06-01

    Crotalidae polyvalent immune Fab (CroFab) is used to treat rattlesnake envenomations in the United States. Time to infusion may be a critical factor in the treatment of these bites. Per the manufacturer's instructions, 10 mL of sterile water for injection (SWI) and hand swirling are recommended for reconstitution. We wondered whether completely filling vials with 25 mL of SWI would result in shorter reconstitution times than using 10-mL volumes, and how hand mixing compared to mechanical agitation of vials or leaving vials undisturbed. Six sets of 5 vials were filled with either 10 mL or 25 mL. Three mixing techniques were used as follows: undisturbed; agitation with a mechanical agitator; and continuous hand rolling and inverting of vials. Dissolution was determined by observation and time to complete dissolution for each vial. Nonparametric 2-tailed P values were calculated. Filling vials completely with 25 mL resulted in quicker dissolution than using 10-mL volumes, regardless of mixing method (2-tailed P = .024). Mixing by hand was shorter than other methods (P < .001). Reconstitution with 25 mL and hand mixing resulted in the shortest dissolution times (median, 1.1 minutes; range, 0.9-1.3 minutes). This appeared clinically important because dissolution times using 10 mL and mechanical rocking of vials (median, 26.4 minutes) or leaving vials undisturbed (median, 33.6 minutes) were several-fold longer. Hand mixing after filling vials completely with 25 mL results in shorter dissolution times than using 10 mL or other methods of mixing and is recommended, especially when preparing initial doses of CroFab. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  18. Least squares parameter estimation methods for material decomposition with energy discriminating detectors

    PubMed Central

    Le, Huy Q.; Molloi, Sabee

    2011-01-01

    Purpose: Energy resolving detectors provide more than one spectral measurement in one image acquisition. The purpose of this study is to investigate, with simulation, the ability to decompose four materials using energy discriminating detectors and least squares minimization techniques. Methods: Three least squares parameter estimation decomposition techniques were investigated for four-material breast imaging tasks in the image domain. The first technique treats the voxel as if it consisted of fractions of all the materials. The second method assumes that a voxel primarily contains one material and divides the decomposition process into segmentation and quantification tasks. The third is similar to the second method but a calibration was used. The simulated computed tomography (CT) system consisted of an 80 kVp spectrum and a CdZnTe (CZT) detector that could resolve the x-ray spectrum into five energy bins. A postmortem breast specimen was imaged with flat panel CT to provide a model for the digital phantoms. Hydroxyapatite (HA) (50, 150, 250, 350, 450, and 550 mg∕ml) and iodine (4, 12, 20, 28, 36, and 44 mg∕ml) contrast elements were embedded into the glandular region of the phantoms. Calibration phantoms consisted of a 30∕70 glandular-to-adipose tissue ratio with embedded HA (100, 200, 300, 400, and 500 mg∕ml) and iodine (5, 15, 25, 35, and 45 mg∕ml). The x-ray transport process was simulated where the Beer–Lambert law, Poisson process, and CZT absorption efficiency were applied. Qualitative and quantitative evaluations of the decomposition techniques were performed and compared. The effect of breast size was also investigated. Results: The first technique decomposed iodine adequately but failed for other materials. The second method separated the materials but was unable to quantify the materials. With the addition of a calibration, the third technique provided good separation and quantification of hydroxyapatite, iodine, glandular, and adipose tissues. 
Quantification with this technique was accurate with errors of 9.83% and 6.61% for HA and iodine, respectively. Calibration at one point (one breast size) showed increased errors as the mismatch in breast diameters between calibration and measurement increased. A four-point calibration successfully decomposed breast diameter spanning the entire range from 8 to 20 cm. For a 14 cm breast, errors were reduced from 5.44% to 1.75% and from 6.17% to 3.27% with the multipoint calibration for HA and iodine, respectively. Conclusions: The results of the simulation study showed that a CT system based on CZT detectors in conjunction with least squares minimization technique can be used to decompose four materials. The calibrated least squares parameter estimation decomposition technique performed the best, separating and accurately quantifying the concentrations of hydroxyapatite and iodine. PMID:21361193
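    The first decomposition technique (treating a voxel as fractions of all four materials) amounts to a per-voxel linear least squares solve; a sketch with made-up attenuation signatures, not the paper's calibration values:

```python
# Per-voxel least squares material decomposition: five energy-bin
# measurements, four unknown material fractions (attenuation values invented).
import numpy as np

# Rows: 5 energy bins; columns: HA, iodine, glandular, adipose.
A = np.array([[2.1, 3.5, 0.90, 0.60],
              [1.7, 2.8, 0.80, 0.55],
              [1.3, 2.0, 0.70, 0.50],
              [1.0, 1.4, 0.65, 0.45],
              [0.8, 1.0, 0.60, 0.40]])
true_frac = np.array([0.10, 0.05, 0.50, 0.35])
b = A @ true_frac  # simulated noiseless measurement for one voxel

frac, *_ = np.linalg.lstsq(A, b, rcond=None)  # recovers true_frac
```

    The overdetermined system (five bins, four materials) is what makes the least squares fit meaningful; with noise and similar soft-tissue signatures, calibration becomes essential, as the study found.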

  20. Alternative approaches to surgical hemostasis in patients with morbidly adherent placenta undergoing fertility-sparing surgery.

    PubMed

    Shmakov, Roman G; Vinitskiy, Aleksandr A; Chuprinin, Vladimir D; Yarotskaya, Ekaterina L; Sukhikh, Gennady T

    2018-02-05

    To evaluate the efficacy of different methods of surgical hemostasis, including the ligation of internal iliac arteries (IIA), temporary occlusion of the common iliac artery (CIA) and combined compression hemostasis, during cesarean section in patients with morbidly adherent placenta (MAP). The study included 54 patients with MAP. All patients underwent cesarean section with application of surgical hemostasis techniques. In Group 1 (n = 15), ligation of IIA was performed, in Group 2 (n = 18) extravasal temporary occlusion of CIA, and in Group 3 (n = 21) combined compression hemostasis was applied. The latter technique included placement of bilateral tourniquets on the upper uterine pedicles and on the cervicoisthmic segment, and controlled Zhukovsky balloon tamponade of the uterus, with subsequent resection of the uterine wall with abnormal placental invasion, evacuation of placenta from the uterine cavity and closure of the uterine wall defect with a double suture. The studied outcomes were total blood loss, duration of surgery, the hemoglobin level alteration, hysterectomy rate, and length of postoperative hospital stay. Total blood loss in Group 1 was 2440 ± 1215 ml, in Group 2 - 2186 ± 1353 ml, and in Group 3 - 1295 ± 520.3 ml (p = .0045). In Group 3, the lowest number of cases with blood loss >2000 ml was observed [8 (53.3%) versus 9 (50.0%) and 2 (9.5%), respectively; p = .0411]. The duration of surgery, the hemoglobin level alteration, hysterectomy rate, and length of hospital stay after delivery did not differ significantly between the groups. All surgical techniques used in the study were effective to decrease the blood loss during cesarean section in patients with MAP; however, the combined compression hemostasis showed the highest efficacy.

  1. Percutaneous Direct Needle Puncture and Transcatheter N-butyl Cyanoacrylate Injection Techniques for the Embolization of Pseudoaneurysms and Aneurysms of Arteries Supplying the Hepato-pancreato-biliary System and Gastrointestinal Tract

    PubMed Central

    Yadav, Rajanikant R; Boruah, Deb K; Bhattacharyya, Vishwaroop; Prasad, Raghunandan; Kumar, Sheo; Saraswat, V A; Kapoor, V K; Saxena, Rajan

    2016-01-01

    Aims: The aim of this study was to evaluate the safety and clinical efficacy of percutaneous direct needle puncture and transcatheter N-butyl cyanoacrylate (NBCA) injection techniques for the embolization of pseudoaneurysms and aneurysms of arteries supplying the hepato-pancreato-biliary (HPB) system and gastrointestinal (GI) tract. Subjects and Methods: A hospital-based cross-sectional retrospective study was conducted, where the study group comprised 11 patients with pseudoaneurysms/aneurysms of arteries supplying the HPB system and GI tract presenting to a tertiary care center from January 2015 to June 2016. Four patients (36.4%) underwent percutaneous direct needle puncture of pseudoaneurysms with NBCA injection, 3 patients (27.3%) underwent transcatheter embolization with NBCA as sole embolic agent, and in 4 patients (36.4%), transcatheter NBCA injection was done along with coil embolization. Results: This retrospective study comprised 11 patients (8 males and 3 females) with mean age of 35.8 years ± 1.6 (standard deviation [SD]). The mean volume of NBCA: ethiodized oil (lipiodol) mixture injected by percutaneous direct needle puncture was 0.62 ml ± 0.25 (SD) (range = 0.5–1 ml), and by transcatheter injection, it was 0.62 ml ± 0.37 (SD) (range = 0.3–1.4 ml). Embolization with NBCA was technically and clinically successful in all patients (100%). No recurrence of bleeding or recurrence of pseudoaneurysm/aneurysm was noted in our study. Conclusions: Percutaneous direct needle puncture of visceral artery pseudoaneurysms and NBCA glue injection and transcatheter NBCA injection for embolization of visceral artery pseudoaneurysms and aneurysms are cost-effective techniques that can be used when coil embolization is not feasible or has failed. PMID:28123838

  2. The efficacy of infiltration anaesthesia for adult mandibular incisors: a randomised double-blind cross-over trial comparing articaine and lidocaine buccal and buccal plus lingual infiltrations.

    PubMed

    Jaber, A; Whitworth, J M; Corbett, I P; Al-Baqshi, B; Kanaa, M D; Meechan, J G

    2010-11-01

    To compare the efficacy of 2% lidocaine and 4% articaine both with 1:100,000 adrenaline in anaesthetising the pulps of mandibular incisors. Thirty-one healthy adult volunteers received the following local anaesthetic regimens adjacent to a mandibular central incisor: 1) buccal infiltration of 1.8 mL lidocaine plus dummy lingual injection (LB), 2) buccal plus lingual infiltrations of 0.9 mL lidocaine (LBL), 3) buccal infiltration of 1.8 mL articaine plus dummy lingual injection (AB), 4) buccal plus lingual infiltrations of 0.9 mL articaine (ABL). Pulp sensitivities of the central incisor and contralateral lateral incisor were assessed electronically. Anaesthetic efficacy was determined by two methods: 1) Recording the number of episodes with no responses to maximal electronic pulp tester stimulation during the course of the study period, 2) recording the number of volunteers with no response to maximal pulp tester stimulation within 15 min and maintained for 45 min (defined as sustained anaesthesia). Data were analysed by McNemar, chi-square, Mann-Whitney and paired t-tests. For both test teeth, the number of episodes of no sensation on maximal stimulation was significantly greater after articaine than lidocaine for both techniques. The split buccal plus lingual dose was more effective than the buccal injection alone for both solutions (p <0.001). 4% articaine was more effective than 2% lidocaine when comparing sustained anaesthesia in both teeth for each technique (p <0.001), however, there was no difference in sustained anaesthesia between techniques for either tooth or solution. 4% articaine was more effective than 2% lidocaine (both with 1:100,000 adrenaline) in anaesthetising the pulps of lower incisor teeth after buccal or buccal plus lingual infiltrations.

  3. Supercritical fluid extraction of phenolic compounds and antioxidants from grape (Vitis labrusca B.) seeds.

    PubMed

    Ghafoor, Kashif; Al-Juhaimi, Fahad Y; Choi, Yong Hee

    2012-12-01

    A supercritical fluid extraction (SFE) technique was applied and optimized for temperature, CO₂ pressure and ethanol (modifier) concentration, using an orthogonal array design and response surface methodology, for the extract yield, total phenols and antioxidants from grape (Vitis labrusca B.) seeds. The effects of extraction temperature and pressure were found to be significant for all of these response variables in the SFE process. Optimum SFE conditions (44 ~ 46 °C temperature and 153 ~ 161 bar CO₂ pressure), along with ethanol (<7 %) as modifier, for the maximum predicted values of extract yield (12.09 %), total phenols (2.41 mg GAE/ml) and antioxidants (7.08 mg AAE/ml), were used to obtain extracts from grape seeds. The predicted values matched well with the experimental values (12.32 % extract yield, 2.45 mg GAE/ml total phenols and 7.08 mg AAE/ml antioxidants) obtained at optimum SFE conditions. The antiradical assay showed that SFE extracts of grape seeds can scavenge more than 85 % of 1,1-diphenyl-2-picrylhydrazyl (DPPH) radicals. The grape seed extracts were also analyzed for hydroxybenzoic acids, which included gallic acid (1.21 ~ 3.84 μg/ml), protocatechuic acid (3.57 ~ 11.78 μg/ml) and p-hydroxybenzoic acid (206.72 ~ 688.18 μg/ml).

  4. [Sensitivity, specificity and prognostic value of CEA in colorectal cancer: results of a Tunisian series and literature review].

    PubMed

    Bel Hadj Hmida, Y; Tahri, N; Sellami, A; Yangui, N; Jlidi, R; Beyrouti, M I; Krichen, M S; Masmoudi, H

    2001-01-01

    To determine the sensitivity of CEA in the diagnosis of colorectal carcinoma, we studied a series of 48 patients with colorectal carcinoma (1992-1996). Sensitivity was 52% at a reference value of 5 ng/ml and 68.7% at a reference value of 2.5 ng/ml. At the 5 ng/ml reference value, the sensitivity of CEA was only 37% for patients with Dukes stage B colorectal carcinoma, 66.6% for patients at stage C and 75% for patients at stage D. CEA was assayed with a sandwich immunoenzymatic tube technique. There was no statistically significant correlation between the preoperative CEA level and the localisation or histologic type of the tumour; in contrast, it was significantly correlated with lymph node metastasis. A significant relationship between the preoperative CEA level and Dukes stage was found at a reference value of 10 ng/ml but not at a reference value of 5 ng/ml. The specificity of CEA for cancers of the colon and rectum was 76.98% at a reference value of 5 ng/ml and 86% at a reference value of 10 ng/ml.
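
    Sensitivity and specificity at a given cut-off reduce to simple ratios of the classification counts. As a minimal illustration (the counts below are hypothetical, not the study's raw data):

```python
def sensitivity(tp, fn):
    """True-positive rate: fraction of diseased patients with CEA above the cut-off."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: fraction of disease-free patients with CEA below the cut-off."""
    return tn / (tn + fp)

# Hypothetical counts at a 5 ng/ml cut-off: 25 of 48 carcinoma patients
# exceed the threshold, 97 of 126 controls stay below it.
print(f"sensitivity = {sensitivity(tp=25, fn=23):.1%}")   # 52.1%
print(f"specificity = {specificity(tn=97, fp=29):.1%}")   # 77.0%
```

    Raising the reference value trades sensitivity for specificity, which is exactly the pattern reported across the 2.5, 5 and 10 ng/ml cut-offs.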

  5. Figure Analysis: A Teaching Technique to Promote Visual Literacy and Active Learning

    ERIC Educational Resources Information Center

    Wiles, Amy M.

    2016-01-01

    Learning often improves when active learning techniques are used in place of traditional lectures. For many of these techniques, however, students are expected to apply concepts that they have already grasped. A challenge, therefore, is how to incorporate active learning into the classroom of courses with heavy content, such as molecular-based…

  6. The Effect of Learning Based on Technology Model and Assessment Technique toward Thermodynamic Learning Achievement

    NASA Astrophysics Data System (ADS)

    Makahinda, T.

    2018-02-01

    The purpose of this research is to determine the effect of a technology-based learning model and assessment technique on thermodynamics achievement, controlling for student intelligence. This is an experimental study; the sample was taken through cluster random sampling, with a total of 80 student respondents. The results show that, after controlling for student intelligence, students taught with the environmental-utilization learning model achieved higher thermodynamics scores than those taught with animated simulations. There is also an interaction effect between the technology-based learning model and the assessment technique on students' thermodynamics achievement, after controlling for intelligence. Based on these findings, thermodynamics lectures should combine the environment-based learning model with project assessment techniques.

  7. Multivariate decoding of cerebral blood flow measures in a clinical model of on-going postsurgical pain.

    PubMed

    O'Muircheartaigh, Jonathan; Marquand, Andre; Hodkinson, Duncan J; Krause, Kristina; Khawaja, Nadine; Renton, Tara F; Huggins, John P; Vennart, William; Williams, Steven C R; Howard, Matthew A

    2015-02-01

    Recent reports of multivariate machine learning (ML) techniques have highlighted their potential use to detect prognostic and diagnostic markers of pain. However, applications to date have focussed on acute experimental nociceptive stimuli rather than clinically relevant pain states. These reports have coincided with others describing the application of arterial spin labeling (ASL) to detect changes in regional cerebral blood flow (rCBF) in patients with on-going clinical pain. We combined these acquisition and analysis methodologies in a well-characterized postsurgical pain model. The principal aims were (1) to assess the classification accuracy of rCBF indices acquired prior to and following surgical intervention and (2) to optimise the amount of data required to maintain accurate classification. Twenty male volunteers, requiring bilateral, lower jaw third molar extraction (TME), underwent ASL examination prior to and following individual left and right TME, representing presurgical and postsurgical states, respectively. Six ASL time points were acquired at each exam. Each ASL image was preceded by visual analogue scale assessments of alertness and subjective pain experiences. Using all data from all sessions, an independent Gaussian Process binary classifier successfully discriminated postsurgical from presurgical states with 94.73% accuracy; over 80% accuracy could be achieved using half of the data (equivalent to 15 min scan time). This work demonstrates the concept and feasibility of time-efficient, probabilistic prediction of clinically relevant pain at the individual level. We discuss the potential of ML techniques to impact on the search for novel approaches to diagnosis, management, and treatment to complement conventional patient self-reporting. © 2014 The Authors. Human Brain Mapping Published by Wiley Periodicals, Inc. 
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
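
    The classification step can be sketched with scikit-learn's Gaussian process classifier on synthetic two-class data standing in for pre- and postsurgical rCBF images; the feature dimensions, effect size and kernel choice here are invented for illustration, not the study's voxel-wise pipeline:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in: 40 "scans" x 50 features; the postsurgical class
# shifts the feature means, loosely mimicking a regional perfusion change.
X_pre = rng.normal(0.0, 1.0, size=(20, 50))
X_post = rng.normal(0.6, 1.0, size=(20, 50))
X = np.vstack([X_pre, X_post])
y = np.array([0] * 20 + [1] * 20)   # 0 = presurgical, 1 = postsurgical

clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=5.0),
                                random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

    A probabilistic classifier of this kind outputs class probabilities rather than hard labels, which is what makes individual-level "probabilistic prediction" of the pain state possible.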

  8. Parents' Workplace Experiences and Family Communication Patterns.

    ERIC Educational Resources Information Center

    Ritchie, L. David

    1997-01-01

    Gathers data from 178 parents of adolescents to elucidate observed relationships between social class and family communication patterns. Finds parents generalize from their own experiences--particularly in the workplace--consistent with M.L. Kohn's theory of learning generalization. Finds conversation orientation to be positively associated and…

  9. 77 FR 71019 - Japan Lessons-Learned Project Directorate Interim Staff Guidance JLD-ISG-2012-04; Guidance on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-28

    ... Insights from the Fukushima Dai-ichi Accident,'' dated March 12, 2012 (ADAMS Accession No. ML12053A340... resulting nuclear accident, at the Fukushima Dai-ichi nuclear power plant in March 2011. Enclosure 1 to the...

  10. MPS and ML

    MedlinePlus


  11. Rapid imaging, detection and quantification of Giardia lamblia cysts using mobile-phone based fluorescent microscopy and machine learning.

    PubMed

    Koydemir, Hatice Ceylan; Gorocs, Zoltan; Tseng, Derek; Cortazar, Bingen; Feng, Steve; Chan, Raymond Yan Lok; Burbano, Jordi; McLeod, Euan; Ozcan, Aydogan

    2015-03-07

    Rapid and sensitive detection of waterborne pathogens in drinkable and recreational water sources is crucial for treating and preventing the spread of water related diseases, especially in resource-limited settings. Here we present a field-portable and cost-effective platform for detection and quantification of Giardia lamblia cysts, one of the most common waterborne parasites, which has a thick cell wall that makes it resistant to most water disinfection techniques including chlorination. The platform consists of a smartphone coupled with an opto-mechanical attachment weighing ~205 g, which utilizes a hand-held fluorescence microscope design aligned with the camera unit of the smartphone to image custom-designed disposable water sample cassettes. Each sample cassette is composed of absorbent pads and mechanical filter membranes; a membrane with 8 μm pore size is used as a porous spacing layer to prevent the backflow of particles to the upper membrane, while the top membrane with 5 μm pore size is used to capture the individual Giardia cysts that are fluorescently labeled. A fluorescence image of the filter surface (field-of-view: ~0.8 cm(2)) is captured and wirelessly transmitted via the mobile-phone to our servers for rapid processing using a machine learning algorithm that is trained on statistical features of Giardia cysts to automatically detect and count the cysts captured on the membrane. The results are then transmitted back to the mobile-phone in less than 2 minutes and are displayed through a smart application running on the phone. This mobile platform, along with our custom-developed sample preparation protocol, enables analysis of large volumes of water (e.g., 10-20 mL) for automated detection and enumeration of Giardia cysts in ~1 hour, including all the steps of sample preparation and analysis. 
We evaluated the performance of this approach using flow-cytometer-enumerated Giardia-contaminated water samples, demonstrating an average cyst capture efficiency of ~79% on our filter membrane along with a machine learning based cyst counting sensitivity of ~84%, yielding a limit-of-detection of ~12 cysts per 10 mL. Providing rapid detection and quantification of microorganisms, this field-portable imaging and sensing platform running on a mobile-phone could be useful for water quality monitoring in field and resource-limited settings.
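
    The reported capture efficiency (~79%) and counting sensitivity (~84%) suggest a simple multiplicative correction when converting an automated count into a concentration estimate. The sketch below is an illustrative model, not the authors' calibration:

```python
def estimate_concentration(counted_cysts, volume_ml,
                           capture_efficiency=0.79, count_sensitivity=0.84):
    """Back-correct an automated cyst count to cysts per mL of sample.

    Assumes losses are multiplicative: only `capture_efficiency` of the
    cysts reach the membrane, and the trained counter then detects
    `count_sensitivity` of those (illustrative model only).
    """
    recovered_fraction = capture_efficiency * count_sensitivity
    return counted_cysts / (recovered_fraction * volume_ml)

# e.g. 20 cysts counted in a 10 mL sample
print(round(estimate_concentration(20, 10.0), 2))  # 3.01 cysts/mL
```

    Under this model only about 66% of the cysts in a sample are ultimately counted, which is consistent with a limit of detection of roughly a dozen cysts per 10 mL.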

  12. The Value of Successful MBSE Adoption

    NASA Technical Reports Server (NTRS)

    Parrott, Edith

    2016-01-01

    The value of successful adoption of Model Based System Engineering (MBSE) practices is hard to quantify. Most engineers and project managers look at the success in terms of cost, but there are other ways to quantify the value of MBSE and the steps necessary to achieve adoption. The Glenn Research Center (GRC) has been doing Model-Based Engineering (design, structural, etc.) for years, but the system engineering side has not. Since 2010, GRC has been moving from a document-centric approach to MBSE/SysML. Project adoption of MBSE has been slow but is steadily increasing, in both MBSE usage and the complexity of generated products. Sharing knowledge of lessons learned in the implementation of MBSE/SysML is key for others who want to be successful. Along with GRC's implementation, NASA is working hard to increase the successful implementation of MBSE across all the other centers by developing guidelines, templates and libraries for projects to utilize. This presentation will provide insight into recent GRC and NASA adoption efforts, lessons learned and best practices.

  13. High-Level Prediction Signals in a Low-Level Area of the Macaque Face-Processing Hierarchy.

    PubMed

    Schwiedrzik, Caspar M; Freiwald, Winrich A

    2017-09-27

    Theories like predictive coding propose that lower-order brain areas compare their inputs to predictions derived from higher-order representations and signal their deviation as a prediction error. Here, we investigate whether the macaque face-processing system, a three-level hierarchy in the ventral stream, employs such a coding strategy. We show that after statistical learning of specific face sequences, the lower-level face area ML computes the deviation of actual from predicted stimuli. But these signals do not reflect the tuning characteristic of ML. Rather, they exhibit identity specificity and view invariance, the tuning properties of higher-level face areas AL and AM. Thus, learning appears to endow lower-level areas with the capability to test predictions at a higher level of abstraction than what is afforded by the feedforward sweep. These results provide evidence for computational architectures like predictive coding and suggest a new quality of functional organization of information-processing hierarchies beyond pure feedforward schemes. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Modeling Music Emotion Judgments Using Machine Learning Methods

    PubMed Central

    Vempala, Naresh N.; Russo, Frank A.

    2018-01-01

    Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments inclusive of neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches that were used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion. PMID:29354080

  15. Communication: Understanding molecular representations in machine learning: The role of uniqueness and target similarity

    NASA Astrophysics Data System (ADS)

    Huang, Bing; von Lilienfeld, O. Anatole

    2016-10-01

    The predictive accuracy of Machine Learning (ML) models of molecular properties depends on the choice of the molecular representation. Inspired by the postulates of quantum mechanics, we introduce a hierarchy of representations which meet uniqueness and target similarity criteria. To systematically control target similarity, we simply rely on interatomic many-body expansions, as implemented in universal force fields, including Bonding, Angular (BA), and higher-order terms. Addition of higher-order contributions systematically increases similarity to the true potential energy and the predictive accuracy of the resulting ML models. We report numerical evidence for the performance of BAML models trained on molecular properties pre-calculated at electron-correlated and density functional levels of theory for thousands of small organic molecules. Properties studied include enthalpies and free energies of atomization, heat capacity, zero-point vibrational energies, dipole moment, polarizability, HOMO/LUMO energies and gap, ionization potential, electron affinity, and electronic excitations. After training, BAML predicts energies or electronic properties of out-of-sample molecules with unprecedented accuracy and speed.

  16. Modeling Music Emotion Judgments Using Machine Learning Methods.

    PubMed

    Vempala, Naresh N; Russo, Frank A

    2017-01-01

    Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments inclusive of neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches that were used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion.
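
    The committee machine inspected here is an ensemble whose members' predictions are averaged. A minimal sketch of that idea, with bootstrap-trained least-squares members standing in for the authors' neural networks and synthetic data in place of the physiological features:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in: 60 excerpts x 5 physiological features -> arousal rating.
X = rng.normal(size=(60, 5))
true_w = np.array([1.5, -0.8, 0.0, 0.4, 0.0])
y = X @ true_w + rng.normal(scale=0.3, size=60)

def fit_member(X, y, rng):
    """Train one committee member (least squares) on a bootstrap resample."""
    idx = rng.integers(0, len(y), size=len(y))
    Xb = np.column_stack([np.ones(len(idx)), X[idx]])  # add intercept column
    w, *_ = np.linalg.lstsq(Xb, y[idx], rcond=None)
    return w

committee = [fit_member(X, y, rng) for _ in range(10)]

def committee_predict(X, committee):
    """Average the members' predictions -- the committee's output."""
    Xd = np.column_stack([np.ones(len(X)), X])
    return np.mean([Xd @ w for w in committee], axis=0)

rmse = np.sqrt(np.mean((committee_predict(X, committee) - y) ** 2))
print(f"committee RMSE: {rmse:.2f}")
```

    Inspecting which inputs drive the committee's output (as the authors did for felt versus perceived emotion) amounts to examining the averaged member weights or input sensitivities.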

  17. Schinus terebinthifolius countercurrent chromatography (Part III): Method transfer from small countercurrent chromatography column to preparative centrifugal partition chromatography ones as a part of method development.

    PubMed

    das Neves Costa, Fernanda; Hubert, Jane; Borie, Nicolas; Kotland, Alexis; Hewitson, Peter; Ignatova, Svetlana; Renault, Jean-Hugues

    2017-03-03

    Countercurrent chromatography (CCC) and centrifugal partition chromatography (CPC) are support-free liquid-liquid chromatography techniques sharing the same basic principles and features. Method transfer has previously been demonstrated for both techniques but never from one to the other. This study aimed to show such a feasibility using fractionation of Schinus terebinthifolius berries dichloromethane extract as a case study. Heptane-ethyl acetate-methanol-water (6:1:6:1, v/v/v/v) was used as the solvent system, with masticadienonic and 3β-masticadienolic acids as target compounds. The optimized separation methodology previously described in Parts I and II was scaled up from an analytical hydrodynamic CCC column (17.4 mL) to preparative hydrostatic CPC instruments (250 mL and 303 mL) as a part of method development. Flow-rate and sample loading were further optimized on CPC. Mobile phase linear velocity is suggested as a transfer-invariant parameter if the CPC column contains a sufficient number of partition cells. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Comparison of closed circuit and Fick-derived oxygen consumption in patients undergoing simultaneous aortocaval occlusion.

    PubMed

    Hofland, J; Tenbrinck, R; van Eijck, C H J; Eggermont, A M M; Gommers, D; Erdmann, W

    2003-04-01

    Agreement between continuously measured oxygen consumption during quantitative closed system anaesthesia and intermittently Fick-derived calculated oxygen consumption was assessed in 11 patients undergoing simultaneous occlusion of the aorta and inferior vena cava for hypoxic treatment of pancreatic cancer. All patients were mechanically ventilated using a quantitative closed system anaesthesia machine (PhysioFlex) and had pulmonary and radial artery catheters inserted. During the varying haemodynamic conditions that accompany this procedure, 73 paired measurements were obtained. A significant correlation between Fick-derived and closed system-derived oxygen consumption was found (r = 0.78, p = 0.006). Linear regression showed that Fick-derived measure = [(1.19 x closed system derived measure) - 72], with the overall closed circuit-derived values being higher. However, the level of agreement between the two techniques was poor. Bland-Altman analysis found that the bias was 36 ml.min(-1), precision 39 ml.min(-1), difference between 95% limits of agreement 153 ml.min(-1). Therefore, we conclude that the two measurement techniques are not interchangeable in a clinical setting.
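
    The Bland-Altman statistics reported here (bias, precision, and the width of the 95% limits of agreement, 2 × 1.96 × precision ≈ 153 ml.min(-1)) can be computed as below; the paired readings are invented, not the study's 73 measurement pairs:

```python
import numpy as np

def bland_altman(a, b):
    """Bias, precision (SD of the differences), and 95% limits of agreement."""
    diffs = np.asarray(a, float) - np.asarray(b, float)
    bias = diffs.mean()
    precision = diffs.std(ddof=1)
    limits = (bias - 1.96 * precision, bias + 1.96 * precision)
    return bias, precision, limits

# Hypothetical paired oxygen-consumption readings (ml/min) from two methods:
fick = [250, 280, 310, 295, 260, 330]
closed_circuit = [240, 265, 300, 280, 255, 310]
bias, precision, (lo, hi) = bland_altman(fick, closed_circuit)
print(f"bias {bias:.1f}, precision {precision:.1f}, LoA {lo:.1f} to {hi:.1f}")
```

    Two techniques agree for clinical purposes only if the limits of agreement are narrower than the clinically acceptable difference; a correlation coefficient alone, as the abstract illustrates, is not sufficient.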

  19. Painting galaxies into dark matter halos using machine learning

    NASA Astrophysics Data System (ADS)

    Agarwal, Shankar; Davé, Romeel; Bassett, Bruce A.

    2018-05-01

    We develop a machine learning (ML) framework to populate large dark matter-only simulations with baryonic galaxies. Our ML framework takes input halo properties including halo mass, environment, spin, and recent growth history, and outputs central galaxy and halo baryonic properties including stellar mass (M*), star formation rate (SFR), metallicity (Z), neutral (H I) and molecular (H_2) hydrogen mass. We apply this to the MUFASA cosmological hydrodynamic simulation, and show that it recovers the mean trends of output quantities with halo mass highly accurately, including following the sharp drop in SFR and gas in quenched massive galaxies. However, the scatter around the mean relations is under-predicted. Examining galaxies individually, at z = 0 the stellar mass and metallicity are accurately recovered (σ ≲ 0.2 dex), but SFR and H I show larger scatter (σ ≳ 0.3 dex); these values improve somewhat at z = 1, 2. Remarkably, ML quantitatively recovers second parameter trends in galaxy properties, e.g. that galaxies with higher gas content and lower metallicity have higher SFR at a given M*. Testing various ML algorithms, we find that none perform significantly better than the others, nor does ensembling improve performance, likely because none of the algorithms reproduce the large observed scatter around the mean properties. For the random forest algorithm, we find that halo mass and nearby (˜200 kpc) environment are the most important predictive variables followed by growth history, while halo spin and ˜Mpc scale environment are not important. Finally we study the impact of additionally inputting key baryonic properties M*, SFR, and Z, as would be available e.g. from an equilibrium model, and show that particularly providing the SFR enables H I to be recovered substantially more accurately.
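
    The random-forest importance ranking described here can be reproduced in outline with scikit-learn; the halo "features" and target below are synthetic stand-ins constructed so that the first column dominates, mimicking the reported dominance of halo mass:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Synthetic "halos": invented stand-ins for the paper's input properties.
features = ["halo_mass", "env_200kpc", "growth_history", "spin", "env_Mpc"]
X = rng.normal(size=(2000, 5))
# Target depends strongly on halo mass, weakly on nearby environment and
# growth history, and not at all on spin or Mpc-scale environment.
y = (2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 2]
     + rng.normal(scale=0.2, size=2000))

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
for name, imp in sorted(zip(features, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:15s} {imp:.3f}")
```

    Impurity-based importances of this kind rank predictive variables but say nothing about the scatter around the mean relations, which is exactly the limitation the authors report.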

  20. Resolving Transition Metal Chemical Space: Feature Selection for Machine Learning and Structure-Property Relationships.

    PubMed

    Janet, Jon Paul; Kulik, Heather J

    2017-11-22

    Machine learning (ML) of quantum mechanical properties shows promise for accelerating chemical discovery. For transition metal chemistry where accurate calculations are computationally costly and available training data sets are small, the molecular representation becomes a critical ingredient in ML model predictive accuracy. We introduce a series of revised autocorrelation functions (RACs) that encode relationships of the heuristic atomic properties (e.g., size, connectivity, and electronegativity) on a molecular graph. We alter the starting point, scope, and nature of the quantities evaluated in standard ACs to make these RACs amenable to inorganic chemistry. On an organic molecule set, we first demonstrate superior standard AC performance to other presently available topological descriptors for ML model training, with mean unsigned errors (MUEs) for atomization energies on set-aside test molecules as low as 6 kcal/mol. For inorganic chemistry, our RACs yield 1 kcal/mol ML MUEs on set-aside test molecules in spin-state splitting in comparison to 15-20× higher errors for feature sets that encode whole-molecule structural information. Systematic feature selection methods including univariate filtering, recursive feature elimination, and direct optimization (e.g., random forest and LASSO) are compared. Random-forest- or LASSO-selected subsets 4-5× smaller than the full RAC set produce sub- to 1 kcal/mol spin-splitting MUEs, with good transferability to metal-ligand bond length prediction (0.004-5 Å MUE) and redox potential on a smaller data set (0.2-0.3 eV MUE). Evaluation of feature selection results across property sets reveals the relative importance of local, electronic descriptors (e.g., electronegativity, atomic number) in spin-splitting and distal, steric effects in redox potential and bond lengths.
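
    LASSO-based feature selection of the kind compared here keeps only the descriptors with nonzero coefficients at the cross-validated regularization strength. A self-contained sketch on synthetic descriptor data (not actual RACs):

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# 200 synthetic "complexes" x 30 descriptors; only the first 5 matter.
n, p = 200, 30
X = rng.normal(size=(n, p))
true_coef = np.zeros(p)
true_coef[:5] = [3.0, -2.0, 1.5, 1.0, -1.0]
y = X @ true_coef + rng.normal(scale=0.5, size=n)

X_std = StandardScaler().fit_transform(X)   # LASSO needs comparable scales
lasso = LassoCV(cv=5, random_state=0).fit(X_std, y)
selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
print(f"selected {selected.size} of {p} descriptors: {selected.tolist()}")
```

    Refitting a model on the surviving subset is the step that yields the 4-5× smaller feature sets with comparable errors described in the abstract.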

  1. Effect of iron doping on structural and microstructural properties of nanocrystalline ZnSnO3 thin films prepared by spray pyrolysis techniques

    NASA Astrophysics Data System (ADS)

    Pathan, Idris G.; Suryawanshi, Dinesh N.; Bari, Anil R.; Patil, Lalchand A.

    2018-05-01

    This work presents the effect of iron doping at different volume ratios (1 ml, 2.5 ml and 5 ml) on the structural, microstructural and electrical properties of zinc stannate thin films prepared by the spray pyrolysis method. These properties were characterized with X-ray diffraction (XRD) and transmission electron microscopy (TEM). In our study, the XRD pattern indicates that ZnSnO3 has a perovskite phase with a face-exposed hexahedral structure. The electron diffraction fringes observed are consistent with the peaks observed in the XRD patterns. Moreover, the sensor reported in our study is cost-effective, user-friendly and easy to fabricate.

  2. Antimicrobial and cytotoxic activity of Marrubium alysson and Retama raetam grown in Tunisia.

    PubMed

    Hayet, Edziri; Samia, Ammar; Patrick, Groh; Ali, Mahjoub Mohamed; Maha, Mastouri; Laurent, Gutmann; Mighri, Zine; Mahjoub, Laouni

    2007-05-15

    Antibacterial and antifungal activities of extracts obtained from M. alysson and R. raetam were tested using a solid medium technique. We showed that the petroleum ether extract of M. alysson had a Minimum Inhibitory Concentration (MIC) that varied from 128 to 2000 microg mL(-1) against different Enterobacteriaceae, and antifungal activity against Candida glabrata, Candida albicans, Candida parapsilosis and Candida kreusei with a MIC of 256 microg mL(-1). The ethyl acetate extract of R. raetam showed the best activity against Gram-positive organisms, with MICs of 128 to 256 microg mL(-1) against methicillin-resistant Staphylococcus aureus but low activity against the different Candida species.

  3. [The modification in surgical technique of incision and closure vault of the vagina during vaginal hysterectomy on the incidence of vault haematoma].

    PubMed

    Malinowski, Andrzej; Mołas, Justyna; Maciołek-Blewniewska, Grazyna; Cieślak, Jarosław

    2006-02-01

    Vault haematoma is one of the most common complications of vaginal hysterectomy. The aim of this work was to analyse the effects of a modification of the incision and closure technique of the vaginal vault on the incidence of vault haematoma after vaginal hysterectomy. The study group consisted of 333 women, of whom 49 (group A) underwent vaginal hysterectomy with the traditional technique of incision and closure of the vaginal vault, and 284 (group B) with the modified technique. The following parameters were evaluated: number of vault haematomas, blood loss, postoperative fever, antibiotics required, and length of hospital stay. The risk of vault haematoma was significantly lower in group B (1.06% vs 12.4%). Blood loss was higher in group A (310 ml vs 206 ml). Postoperative fever occurred in 12.2% of patients from group A and 1.4% from group B. The length of hospitalization was lower for women in group B (4.3 days compared with 7.3 days). The modification of the incision and closure technique of the vaginal vault during vaginal hysterectomy is recommended to minimise intra- and postoperative complications.

  4. Two-surgeon technique for liver transection using precoagulation by a soft-coagulation system and ultrasonic dissection.

    PubMed

    Yamada, Nobuya; Amano, Ryosuke; Kimura, Kenjiro; Murata, Akihiro; Yashiro, Masakazu; Tanaka, Sayaka; Wakasa, Kenichi; Hirakawa, Kosei

    2015-01-01

    A soft-coagulation system (SCS) was introduced as an effective device to reduce blood loss in hepatectomy. Here we evaluated the efficacy of a two-surgeon technique using precoagulation by an SCS and the Cavitron Ultrasonic Surgical Aspirator (CUSA) for liver transection. The 163 patients with liver tumors were divided into two groups (conventional group and two-surgeon group). Liver transection was conducted using saline-coupled bipolar electrocautery and CUSA in 102 patients (conventional group). In 61 patients (the two-surgeon group), a two-surgeon technique using precoagulation by an SCS and CUSA for liver resection was performed. The median blood loss was significantly less in the two-surgeon group than in the conventional group (354.8 mL vs. 557.8 mL, respectively; p = 0.0011). The postoperative hospital stay was significantly shorter in the two-surgeon group than in the conventional group (12.7 days vs. 15.5 days, p = 0.0035). The two-surgeon technique using precoagulation by an SCS and CUSA significantly reduced blood loss during liver transection and was associated with low morbidity and mortality. This technique may be useful for many hepatobiliary surgeons.

  5. The Effect of Student Learning Styles, Race and Gender on Learning Outcomes: The Case of Public Goods

    ERIC Educational Resources Information Center

    Devaraj, Nirupama; Raman, Jaishankar

    2014-01-01

    We investigate the impact of active learning techniques, specifically experiment based learning, in a Principles of Economics class. Our case study demonstrates that when using pedagogical techniques intended to facilitate active learning, teachers should be intentional about incorporating components of learning that appeal to students with…

  6. γ-Aminobutyric acid ameliorates fluoride-induced hypothyroidism in male Kunming mice.

    PubMed

    Yang, Haoyue; Xing, Ronge; Liu, Song; Yu, Huahua; Li, Pengcheng

    2016-02-01

    This study evaluated the protective effects of γ-aminobutyric acid (GABA), a non-protein amino acid and antioxidant, against fluoride-induced hypothyroidism in mice. Light microscope and TEM sample preparation techniques were used to assay thyroid microstructure and ultrastructure; an enzyme immunoassay method was used to assay hormone and protein levels; an immunohistochemical staining method was used to assay apoptosis of thyroid follicular epithelium cells. Subacute injection of sodium fluoride (NaF) decreased blood T4, T3 and thyroid hormone-binding globulin (TBG) levels to 33.98 μg/l, 32.8 ng/ml and 11.67 ng/ml, respectively. In addition, fluoride intoxication induced structural abnormalities in thyroid follicles. Our results showed that treatment of fluoride-exposed mice with GABA appreciably decreased the metabolic toxicity induced by fluoride and restored the microstructural and ultrastructural organisation of the thyroid gland towards normalcy. Compared with the negative control group, the GABA treatment groups showed significantly upregulated T4, T3 and TBG levels (42.34 μg/l, 6.54 ng/ml and 18.78 ng/ml, respectively; P<0.05), appropriately increased TSH levels and apoptosis inhibition in thyroid follicular epithelial cells. To the best of our knowledge, this is the first study to establish the therapeutic efficacy of GABA as a natural antioxidant in inducing thyroprotection against fluoride-induced toxicity. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Responses to the lowering of magnesium and calcium concentrations in the cerebrospinal fluid of unanesthetized sheep.

    PubMed

    Allsop, T F; Pauli, J V

    1975-12-01

    A technique for ventriculolumbar perfusion of the cerebrospinal fluid space has been used to study the neuromuscular effects of low concentrations of magnesium and calcium in the cerebrospinal fluid of conscious sheep. Perfusion with synthetic cerebrospinal fluid solutions containing less than 0.6 mg magnesium/100 ml produced episodes of tetany which were abolished by perfusion with a solution of normal magnesium concentration. This suggests that the low cerebrospinal fluid magnesium concentrations reported in cases of hypomagnesaemic tetany may result in changes within the central nervous system that could produce the nervous signs. Perfusates with a calcium concentration below 2.0 mg/100 ml caused hyperpnoea and continuous muscle tremors. Magnesium (0.6 mg/100 ml) and calcium (2.0 mg/100 ml) perfused simultaneously acted synergistically to produce signs characteristic of low levels of each of the ions.

  8. Catheter drainage of pleural fluid collections and pneumothorax.

    PubMed

    Frendin, J; Obel, N

    1997-06-01

A technique for virtually atraumatic placement of small-size chest catheters for suction drainage of pleural effusions and pneumothorax in the dog and cat is described. Thirty-nine dogs and two cats were treated for pyothorax (10 cases), hydrothorax (eight), chylothorax (three), haemothorax (three), haemothorax/pneumothorax (three) and pneumothorax (14). In all 41 cases, thin or viscous fluid and/or air were efficiently drained. The mean period of drainage was four days (range, 0.5 to 18 days). The average amount of fluid removed from each patient in 24 hours was 530 ml in pyothorax cases (range, 140 to 1100 ml) and 1300 ml in the other cases (range, 20 to 5000 ml). In 40 cases there were no complications related to the procedure. One dog with severe pleural adhesions was euthanased because of lung perforation and pneumothorax secondary to misplacement of the catheter.

  9. Evaporative water loss in man in a gravity-free environment

    NASA Technical Reports Server (NTRS)

    Leach, C. S.; Leonard, J. I.; Rambaut, P. C.; Johnson, P. C.

    1978-01-01

Daily evaporative water losses (EWL) during the three Skylab missions were measured indirectly using mass and water-balance techniques. The mean daily values of EWL for the nine crew members who averaged 1 hr of daily exercise were: preflight 1,750 ± 37 (SE) ml or 970 ± 20 ml/sq m, and inflight 1,560 ± 26 ml or 860 ± 14 ml/sq m. Although it was expected that EWL would increase in the hypobaric environment of Skylab, an average decrease from preflight sea-level conditions of 11% was measured. The results suggest that weightlessness decreased sweat losses during exercise and possibly reduced insensible skin losses. The weightlessness environment apparently promotes the formation of an observed sweat film on the skin surface during exercise by reducing convective flow and sweat drippage, resulting in high levels of skin wettedness that favor sweat suppression.
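The indirect mass/water-balance approach treats EWL as the residual of a daily water budget: inputs minus measured outputs minus the change in body water. A minimal stdlib sketch of that relation, with purely illustrative (non-Skylab) numbers:

```python
# Indirect estimate of evaporative water loss (EWL) as the residual of
# a daily water balance. All figures below are illustrative only.

def evaporative_water_loss(intake_ml, metabolic_water_ml,
                           urine_ml, fecal_water_ml,
                           body_water_change_ml):
    """EWL = inputs - measured outputs - storage change."""
    return (intake_ml + metabolic_water_ml
            - urine_ml - fecal_water_ml
            - body_water_change_ml)

ewl = evaporative_water_loss(
    intake_ml=2600,             # drink + food water
    metabolic_water_ml=350,     # water of oxidation
    urine_ml=1500,
    fecal_water_ml=100,
    body_water_change_ml=-200,  # net loss of body water (from mass change)
)
print(ewl)  # 1550 ml/day
```

The residual form is why the abstract calls the measurement "indirect": EWL itself is never observed, only inferred from the other balance terms.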

  10. Quantification of gastric emptying and duodenogastric reflux stroke volumes using three-dimensional guided digital color Doppler imaging.

    PubMed

    Hausken, T; Li, X N; Goldman, B; Leotta, D; Ødegaard, S; Martin, R W

    2001-07-01

To develop a non-invasive method for evaluating gastric emptying and duodenogastric reflux stroke volumes using three-dimensional (3D) guided digital color Doppler imaging. The technique involved color Doppler digital images of transpyloric flow in which the 3D position and orientation of the images were known by using a magnetic location system. In vitro, the system was found to slightly underestimate the reference flow (by 8.8% on average). In vivo (five volunteers), gastric emptying episodes lasted on average only 0.69 s, with a mean volume of 4.3 ml (range 1.1-7.4 ml), and duodenogastric reflux episodes lasted on average 1.4 s, with a mean volume of 8.3 ml (range 1.3-14.1 ml). With the appropriate instrument settings, orientation-determined color Doppler can be used for stroke volume quantification of gastric emptying and duodenogastric reflux episodes.

  11. Comparative Analysis of RF Emission Based Fingerprinting Techniques for ZigBee Device Classification

    DTIC Science & Technology

quantify the differences in various RF fingerprinting techniques via comparative analysis of MDA/ML classification results. The findings herein demonstrate...correct classification rates followed by COR-DNA and then RF-DNA in most test cases and especially in low Eb/N0 ranges, where ZigBee is designed to operate.

  12. Improving Students' Learning With Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology.

    PubMed

    Dunlosky, John; Rawson, Katherine A; Marsh, Elizabeth J; Nathan, Mitchell J; Willingham, Daniel T

    2013-01-01

    Many students are being left behind by an educational system that some people believe is in crisis. Improving educational outcomes will require efforts on many fronts, but a central premise of this monograph is that one part of a solution involves helping students to better regulate their learning through the use of effective learning techniques. Fortunately, cognitive and educational psychologists have been developing and evaluating easy-to-use learning techniques that could help students achieve their learning goals. In this monograph, we discuss 10 learning techniques in detail and offer recommendations about their relative utility. We selected techniques that were expected to be relatively easy to use and hence could be adopted by many students. Also, some techniques (e.g., highlighting and rereading) were selected because students report relying heavily on them, which makes it especially important to examine how well they work. The techniques include elaborative interrogation, self-explanation, summarization, highlighting (or underlining), the keyword mnemonic, imagery use for text learning, rereading, practice testing, distributed practice, and interleaved practice. To offer recommendations about the relative utility of these techniques, we evaluated whether their benefits generalize across four categories of variables: learning conditions, student characteristics, materials, and criterion tasks. Learning conditions include aspects of the learning environment in which the technique is implemented, such as whether a student studies alone or with a group. Student characteristics include variables such as age, ability, and level of prior knowledge. Materials vary from simple concepts to mathematical problems to complicated science texts. Criterion tasks include different outcome measures that are relevant to student achievement, such as those tapping memory, problem solving, and comprehension. 
We attempted to provide thorough reviews for each technique, so this monograph is rather lengthy. However, we also wrote the monograph in a modular fashion, so it is easy to use. In particular, each review is divided into the following sections: (1) General description of the technique and why it should work; (2) How general are the effects of this technique? (2a) Learning conditions, (2b) Student characteristics, (2c) Materials, (2d) Criterion tasks; (3) Effects in representative educational contexts; (4) Issues for implementation; (5) Overall assessment. The review for each technique can be read independently of the others, and particular variables of interest can be easily compared across techniques. To foreshadow our final recommendations, the techniques vary widely with respect to their generalizability and promise for improving student learning. Practice testing and distributed practice received high utility assessments because they benefit learners of different ages and abilities and have been shown to boost students' performance across many criterion tasks and even in educational contexts. Elaborative interrogation, self-explanation, and interleaved practice received moderate utility assessments. The benefits of these techniques do generalize across some variables, yet despite their promise, they fell short of a high utility assessment because the evidence for their efficacy is limited. For instance, elaborative interrogation and self-explanation have not been adequately evaluated in educational contexts, and the benefits of interleaving have just begun to be systematically explored, so the ultimate effectiveness of these techniques is currently unknown. Nevertheless, the techniques that received moderate-utility ratings show enough promise for us to recommend their use in appropriate situations, which we describe in detail within the review of each technique. 
Five techniques received a low utility assessment: summarization, highlighting, the keyword mnemonic, imagery use for text learning, and rereading. These techniques were rated as low utility for numerous reasons. Summarization and imagery use for text learning have been shown to help some students on some criterion tasks, yet the conditions under which these techniques produce benefits are limited, and much research is still needed to fully explore their overall effectiveness. The keyword mnemonic is difficult to implement in some contexts, and it appears to benefit students for a limited number of materials and for short retention intervals. Most students report rereading and highlighting, yet these techniques do not consistently boost students' performance, so other techniques should be used in their place (e.g., practice testing instead of rereading). Our hope is that this monograph will foster improvements in student learning, not only by showcasing which learning techniques are likely to have the most generalizable effects but also by encouraging researchers to continue investigating the most promising techniques. Accordingly, in our closing remarks, we discuss some issues for how these techniques could be implemented by teachers and students, and we highlight directions for future research. © The Author(s) 2013.

  13. Prostate Cancer Probability Prediction By Machine Learning Technique.

    PubMed

    Jović, Srđan; Miljković, Milica; Ivanović, Miljan; Šaranović, Milena; Arsić, Milena

    2017-11-26

The main goal of the study was to explore the possibility of prostate cancer prediction by machine learning techniques. In order to improve the survival probability of prostate cancer patients, it is essential to build suitable prediction models of prostate cancer. If one makes a relevant prediction of prostate cancer, it is easier to create a suitable treatment based on the prediction results. Machine learning techniques are the most common techniques for the creation of predictive models. Therefore, in this study several machine learning techniques were applied and compared. The obtained results were analyzed and discussed. It was concluded that machine learning techniques could be used for relevant prediction of prostate cancer.
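The abstract does not name the specific models or features used, so the "apply and compare several techniques" workflow can only be sketched generically. A scikit-learn illustration on synthetic data, with all model choices being assumptions of this sketch rather than the study's:

```python
# Generic "compare several ML techniques" workflow: fit a handful of
# classifiers to the same data and compare cross-validated accuracy.
# Data and model choices are illustrative, not from the study.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Synthetic binary-outcome data standing in for patient features
X, y = make_classification(n_samples=400, n_features=10,
                           n_informative=5, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "svm_rbf": SVC(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```

Cross-validation (rather than a single split) is the usual way to rank models fairly on a small clinical dataset, since any one train/test split can flatter one model by chance.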

  14. Autologous Fat Grafting to the Breast Using REVOLVE System to Reduce Clinical Costs.

    PubMed

    Brzezienski, Mark A; Jarrell, John A

    2016-09-01

With the increasing popularity of fat grafting over the past decade, the techniques for harvest, processing and preparation, and transfer of the fat cells have evolved to improve efficiency and consistency. The REVOLVE System is a fat processing device used in autologous fat grafting which eliminates much of the specialized equipment as well as the labor-intensive and time-consuming efforts of the original Coleman technique of fat processing. This retrospective study evaluates the economics of fat grafting, comparing traditional Coleman processing to the REVOLVE System. From June 2013 through December 2013, 88 fat grafting cases by a single surgeon were reviewed. Timed procedures using either the REVOLVE System or Coleman technique were extracted from the group. Data including fat grafting procedure time, harvested volume, harvest and recipient sites, and concurrent procedures were gathered. Cost and utilization assessments were performed comparing the economics between the groups using standard values of operating room costs provided by the study hospital. Thirty-seven patients with timed procedures were identified, 13 of whom were Coleman technique patients and 24 of whom were REVOLVE System patients. The average rate of fat transfer was 1.77 mL/minute for the Coleman technique and 4.69 mL/minute for the REVOLVE System, a statistically significant difference (P < 0.0001) between the 2 groups. Cost analysis comparing the REVOLVE System and Coleman techniques demonstrates a dramatic divergence in the price per mL of transferred fat at 75 mL when using the previously calculated rates for each group. This single surgeon's experience with the REVOLVE System for fat processing establishes economic support for its use in specific high-volume fat grafting cases. 
Cost analysis comparing the REVOLVE System and Coleman techniques suggests that in cases of planned fat transfer of 75 mL or more, using the REVOLVE System for fat processing is more economically beneficial. This study may serve as a guide to plastic surgeons in deciding which cases might be appropriate for the use of the REVOLVE System and is the first report comparing economics of fat grafting with the traditional Coleman technique and the REVOLVE System.
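The break-even logic behind a threshold like 75 mL can be sketched: the faster transfer rate saves operating-room minutes per mL, and a per-case device cost is recouped once enough volume is transferred. The transfer rates below come from the abstract; the OR cost per minute and device price are hypothetical placeholders for illustration, not figures from the study:

```python
# Break-even volume at which a per-case device cost is offset by a
# faster fat-transfer rate. Rates (1.77 vs 4.69 mL/min) are reported
# in the abstract; OR cost and device price are assumed values.
COLEMAN_RATE = 1.77      # mL/min (reported)
REVOLVE_RATE = 4.69      # mL/min (reported)
OR_COST_PER_MIN = 40.0   # assumed operating-room cost, $/min
DEVICE_COST = 1055.0     # assumed per-case device price, $

def break_even_volume_ml():
    # OR time saved per mL transferred, converted to dollars
    minutes_saved_per_ml = 1 / COLEMAN_RATE - 1 / REVOLVE_RATE
    saving_per_ml = minutes_saved_per_ml * OR_COST_PER_MIN
    return DEVICE_COST / saving_per_ml

print(round(break_even_volume_ml()))  # ~75 mL under these assumptions
```

Below the break-even volume the cheaper manual processing wins; above it, the time savings dominate, which matches the abstract's recommendation to reserve the device for planned high-volume transfers.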

  15. Physical and kinematic properties of cryopreserved camel sperm after elimination of semen viscosity by different techniques.

    PubMed

    El-Bahrawy, Khalid; Rateb, Sherif; Khalifa, Marwa; Monaco, Davide; Lacalandra, Giovanni

    2017-12-01

This investigation aimed to determine the influence of using different techniques for liquefaction of semen on post-thaw physical and dynamic characteristics of camel spermatozoa. A total of 144 ejaculates were collected from 3 adult camels, Camelus dromedarius, twice weekly over 3 consecutive breeding seasons. A raw aliquot of each ejaculate was evaluated for physical and morphological properties, whereas the remaining portion was diluted (1:3) with glycerolated Tris lactose egg yolk extender and was further subjected to one of the following liquefaction treatments: control (untreated), 5 μl/ml α-amylase, 0.1 mg/ml papain, 5 U/ml bromelain, or 40-kHz nominal ultrasound frequency. The post-thaw objective assessment of cryopreserved spermatozoa, in all groups, was performed by a computer-assisted sperm analysis (CASA) system. The results revealed that all liquefaction treatments improved (P<0.05) post-thaw motility, viability and sperm motion criteria. However, an adverse effect (P<0.05) was observed on acrosome integrity, sperm cell membrane integrity and the percentage of normal sperm in all enzymatically-treated specimens compared to both control and ultrasound-treated semen. These results elucidate the efficiency of utilizing ultrasound technology for viscosity elimination in camel semen. In addition, developing enzymatic semen liquefaction techniques is imperative for benefiting assisted reproductive technologies, particularly AI and IVF, in camels. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Quantitative measurement of cerebral blood flow during hypothermia with a time-resolved near-infrared technique

    NASA Astrophysics Data System (ADS)

    Fazel Bakhsheshi, Mohammad; Diop, Mamadou; St Lawrence, Keith; Lee, Ting-Yim

    2012-02-01

Hypothermia, in which the brain is cooled to 32-33 °C, has been shown to be neuroprotective for brain injury caused by hypoxia-ischemia, head trauma, or neonatal asphyxia. The neuroprotective effect of hypothermia is partly due to suppression of brain metabolism and cerebral blood flow (CBF). The ability to measure CBF at the bedside provides a means of detecting, and thereby preventing, secondary ischemia during neurointensive care before brain injury occurs. The purpose of the present study is to investigate the ability of a time-resolved near-infrared (TR-NIR) bolus-tracking method using indocyanine green as an intravascular flow tracer to measure CBF during cooling in a newborn animal model. For validation, CBF was independently measured by computed tomography (CT) perfusion. The results show good agreement between CBF obtained with the two methods (R² ≈ 0.84, Δ ≈ 5.84 ml·min⁻¹·100 g⁻¹, 32-38.5 °C), demonstrating the ability of the TR-NIR technique to non-invasively measure absolute CBF in vivo during dynamic hypothermia. The TR-NIR technique reveals that CBF decreases from 54.3 ± 5.4 ml·min⁻¹·100 g⁻¹ at normothermia (Tbrain of 38.5 °C) to 33.8 ± 0.9 ml·min⁻¹·100 g⁻¹ at Tbrain of 32 °C during the hypothermia treatment.
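The validation step (agreement between TR-NIR and CT perfusion, reported as an R² of about 0.84 and a mean difference of about 5.84 ml/min/100 g) amounts to computing a coefficient of determination and a mean bias over paired measurements. A stdlib-only sketch with invented numbers, using one common R² definition (1 − SSres/SStot against the reference values):

```python
# Method-agreement summary for two CBF techniques: R² of the test
# method against the reference, plus mean bias. All data invented.
def agreement(reference, test):
    n = len(reference)
    mean_ref = sum(reference) / n
    # Residuals of the test method about the identity line
    ss_res = sum((r - t) ** 2 for r, t in zip(reference, test))
    ss_tot = sum((r - mean_ref) ** 2 for r in reference)
    r2 = 1 - ss_res / ss_tot
    bias = sum(t - r for r, t in zip(reference, test)) / n
    return r2, bias

ct_cbf  = [54.0, 48.2, 41.5, 36.0, 33.5]  # reference, ml/min/100 g
nir_cbf = [56.1, 50.0, 43.9, 38.2, 35.8]  # test method
r2, bias = agreement(ct_cbf, nir_cbf)
print(round(r2, 2), round(bias, 2))
```

Reporting both numbers matters: a high R² alone can hide a systematic offset, which is exactly what the mean bias (Δ) captures.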

  17. Skinner Boxing.

    ERIC Educational Resources Information Center

    Bower, Bruce

    1986-01-01

Presents summaries and opposing views of six of B. F. Skinner's major tenets of behavioristic psychology. Relates conflicting positions on such issues as environmental determination, problem-solving techniques, cultural reinforcement, and mental processing. (ML)

  18. Bicentric evaluation of six anti-toxoplasma immunoglobulin G (IgG) automated immunoassays and comparison to the Toxo II IgG Western blot.

    PubMed

    Maudry, Arnaud; Chene, Gautier; Chatelain, Rémi; Patural, Hugues; Bellete, Bahrie; Tisseur, Bernard; Hafid, Jamal; Raberin, Hélène; Beretta, Sophie; Sung, Roger Tran Manh; Belot, Georges; Flori, Pierre

    2009-09-01

A comparative study of the Toxoplasma IgG(I) and IgG(II) Access (Access I and II, respectively; Beckman Coulter Inc.), AxSYM Toxo IgG (AxSYM; Abbott Diagnostics), Vidas Toxo IgG (Vidas; bioMerieux, Marcy l'Etoile, France), Immulite Toxo IgG (Immulite; Siemens Healthcare Diagnostics Inc.), and Modular Toxo IgG (Modular; Roche Diagnostics, Basel, Switzerland) tests was done with 406 consecutive serum samples. The Toxo II IgG Western blot (LDBio, Lyon, France) was used as a reference technique in the case of intertechnique discordance. Of the 406 serum samples tested, the results for 35 were discordant between the different techniques. Using the 175 serum samples with positive results, we evaluated the standardization of the titers obtained (in IU/ml); the medians (second quartiles) obtained were 9.1 IU/ml for the AxSYM test, 21 IU/ml for the Access I test, 25.7 IU/ml for the Access II test, 32 IU/ml for the Vidas test, 34.6 IU/ml for the Immulite test, and 248 IU/ml for the Modular test. For all the immunoassays tested, the following relative sensitivity and specificity values were found: 89.7 to 100% for the Access II test, 89.7 to 99.6% for the Immulite test, 90.2 to 99.6% for the AxSYM test, 91.4 to 99.6% for the Vidas test, 94.8 to 99.6% for the Access I test, and 98.3 to 98.7% for the Modular test. Among the 406 serum samples, we did not find any false-positive values by two different tests for the same serum sample. Except for the Modular test, which prioritized sensitivity, it appears that the positive cutoff values suggested by the pharmaceutical companies are very high (whether for economic or safety reasons). This led to imperfect sensitivity, a large number of unnecessary serological follow-ups of pregnant women, and difficulty in determining the serological status of immunosuppressed individuals.

  19. In vitro validation of an ultrasonic flowmeter in order to measure the functional residual capacity in newborns.

    PubMed

    Wauer, Juliane; Leier, Tim U; Henschen, Matthias; Wauer, Roland R; Schmalisch, Gerd

    2003-05-01

Ultrasonic transit-time airflow meters (UFM) allow simultaneous measurements of volume flow V'(t) and molar mass MM(t) of the breathing gas in the mainstream. Consequently, by using a suitable tracer gas the functional residual capacity (FRC) of the lungs can be measured by a gas wash-in/wash-out technique. The aim of this study was to investigate the in vitro accuracy of a multiple-breath wash-in/wash-out technique for FRC measurements using 4% sulphur hexafluoride (SF6) in air. V'(t) and MM(t) were measured with a Spiroson SCIENTIFIC flowmeter (ECO Medics, CH) with 1.3 ml dead space. Linearity of airflow and MM were tested using different tidal volumes (V(T)) and breathing gases with different O2 and SF6 concentrations. To determine the accuracy of FRC measurements, SF6 wash-in and wash-out curves from four mechanical lung models (FRC of 22, 53, 102 and 153 ml) were evaluated by the Spiroson. For each model five measurements were performed with a physiological V(T)/FRC ratio of 0.3 and a constant respiratory rate of 30 min⁻¹. The error of measured V(T) (range 4-60 ml) was <2.5%. There was a strong correlation between the measured and calculated MM of different breathing gases (r = 0.989), and the measuring accuracy was better than 1%. The measured FRCs of the four models were 20.3, 49.7, 104.3 and 153.4 ml with coefficients of variation of 16.5%, 4.5%, 4.9% and 3%. Accordingly, for FRC <100 ml the in vitro accuracy was better than 8% and for FRC >100 ml better than 2.5%. The determination of FRC by MM measurements using the UFM is a simple and cost-effective alternative to conventionally used gas analysers, with acceptable accuracy for many clinical purposes.
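The wash-out principle behind these FRC measurements can be sketched with a single-compartment lung model: after washing SF6 in to a known starting concentration, the lung volume equals the cumulative expired tracer volume divided by that starting concentration. All values below are synthetic, not the study's lung-model data:

```python
# Single-compartment multiple-breath SF6 wash-out: FRC equals the
# cumulative expired tracer volume divided by the starting alveolar
# concentration. Breath-by-breath values below are synthetic.
def frc_from_washout(expired_volumes_ml, sf6_fractions, c_start):
    """Cumulative expired SF6 volume / initial SF6 concentration."""
    tracer_out_ml = sum(v * c for v, c in zip(expired_volumes_ml,
                                              sf6_fractions))
    return tracer_out_ml / c_start

# Simulate wash-out of a 100 ml lung ventilated with 30 ml breaths,
# starting from 4% SF6, assuming perfect mixing on each breath.
FRC_TRUE, VT, C0 = 100.0, 30.0, 0.04
ratio = FRC_TRUE / (FRC_TRUE + VT)       # per-breath dilution factor
fractions = [C0 * ratio ** n for n in range(1, 41)]
volumes = [VT] * 40
frc = frc_from_washout(volumes, fractions, C0)
print(round(frc, 1))  # ≈ 100.0 ml once the wash-out is essentially complete
```

In practice the per-breath expired tracer volume is obtained by integrating flow times concentration, which is exactly what the simultaneous V'(t) and MM(t) signals of the UFM provide.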

  20. High-dose tranexamic acid reduces intraoperative and postoperative blood loss in posterior lumbar interbody fusion.

    PubMed

    Kushioka, Junichi; Yamashita, Tomoya; Okuda, Shinya; Maeno, Takafumi; Matsumoto, Tomiya; Yamasaki, Ryoji; Iwasaki, Motoki

    2017-03-01

    OBJECTIVE Tranexamic acid (TXA), a synthetic antifibrinolytic drug, has been reported to reduce blood loss in orthopedic surgery, but there have been few reports of its use in spine surgery. Previous studies included limitations in terms of different TXA dose regimens, different levels and numbers of fused segments, and different surgical techniques. Therefore, the authors decided to strictly limit TXA dose regimens, surgical techniques, and fused segments in this study. There have been no reports of using TXA for prevention of intraoperative and postoperative blood loss in posterior lumbar interbody fusion (PLIF). The purpose of the study was to evaluate the efficacy of high-dose TXA in reducing blood loss and its safety during single-level PLIF. METHODS The study was a nonrandomized, case-controlled trial. Sixty consecutive patients underwent single-level PLIF at a single institution. The first 30 patients did not receive TXA. The next 30 patients received 2000 mg of intravenous TXA 15 minutes before the skin incision was performed and received the same dose again 16 hours after the surgery. Intra- and postoperative blood loss was compared between the groups. RESULTS There were no statistically significant differences in preoperative parameters of age, sex, body mass index, preoperative diagnosis, or operating time. The TXA group experienced significantly less intraoperative blood loss (mean 253 ml) compared with the control group (mean 415 ml; p < 0.01). The TXA group also had significantly less postoperative blood loss over 40 hours (mean 321 ml) compared with the control group (mean 668 ml; p < 0.01). Total blood loss in the TXA group (mean 574 ml) was significantly lower than in the control group (mean 1080 ml; p < 0.01). From 2 hours to 40 hours, postoperative blood loss in the TXA group was consistently significantly lower. There were no perioperative complications, including thromboembolic events. 
CONCLUSIONS High-dose TXA significantly reduced both intra- and postoperative blood loss without causing any complications during or after single-level PLIF.

  1. Maximization of the usage of coronary CTA derived plaque information using a machine learning based algorithm to improve risk stratification; insights from the CONFIRM registry.

    PubMed

    van Rosendael, Alexander R; Maliakal, Gabriel; Kolli, Kranthi K; Beecy, Ashley; Al'Aref, Subhi J; Dwivedi, Aeshita; Singh, Gurpreet; Panday, Mohit; Kumar, Amit; Ma, Xiaoyue; Achenbach, Stephan; Al-Mallah, Mouaz H; Andreini, Daniele; Bax, Jeroen J; Berman, Daniel S; Budoff, Matthew J; Cademartiri, Filippo; Callister, Tracy Q; Chang, Hyuk-Jae; Chinnaiyan, Kavitha; Chow, Benjamin J W; Cury, Ricardo C; DeLago, Augustin; Feuchtner, Gudrun; Hadamitzky, Martin; Hausleiter, Joerg; Kaufmann, Philipp A; Kim, Yong-Jin; Leipsic, Jonathon A; Maffei, Erica; Marques, Hugo; Pontone, Gianluca; Raff, Gilbert L; Rubinshtein, Ronen; Shaw, Leslee J; Villines, Todd C; Gransar, Heidi; Lu, Yao; Jones, Erica C; Peña, Jessica M; Lin, Fay Y; Min, James K

Machine learning (ML) is a field of computer science that has been shown to effectively integrate clinical and imaging data for the creation of prognostic scores. The current study investigated whether an ML score, incorporating only the 16-segment coronary tree information derived from coronary computed tomography angiography (CCTA), provides enhanced risk stratification compared with current CCTA based risk scores. From the multi-center CONFIRM registry, patients were included with complete CCTA risk score information and ≥3 years of follow-up for myocardial infarction and death (primary endpoint). Patients with prior coronary artery disease were excluded. Conventional CCTA risk scores (conventional CCTA approach, segment involvement score, Duke prognostic index, segment stenosis score, and the Leaman risk score) and a score created using ML were compared for the area under the receiver operating characteristic curve (AUC). Only 16-segment based coronary stenosis (0%, 1-24%, 25-49%, 50-69%, 70-99% and 100%) and composition (calcified, mixed and non-calcified plaque) were provided to the ML model. A boosted ensemble algorithm (extreme gradient boosting; XGBoost) was used and the entire data set was randomly split into a training set (80%) and a testing set (20%). First, tuned hyperparameters were used to generate a trained model from the training data set (80% of data). Second, the performance of this trained model was independently tested on the unseen test set (20% of data). In total, 8844 patients (mean age 58.0 ± 11.5 years, 57.7% male) were included. During a mean follow-up time of 4.6 ± 1.5 years, 609 events occurred (6.9%). No CAD was observed in 48.7% (3.5% event rate), non-obstructive CAD in 31.8% (6.8% event rate), and obstructive CAD in 19.5% (15.6% event rate). Discrimination of events as expressed by AUC was significantly better for the ML based approach (0.771) vs the other scores (ranging from 0.685 to 0.701), P < 0.001. 
Net reclassification improvement analysis showed that the improved risk stratification was the result of down-classification of risk among patients who did not experience events (non-events). A risk score created by an ML based algorithm that utilizes standard 16-segment coronary stenosis and composition information derived from detailed CCTA reading has greater prognostic accuracy than current CCTA integrated risk scores. These findings indicate that an ML based algorithm can improve the integration of CCTA derived plaque information to improve risk stratification. Published by Elsevier Inc.
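The modelling pipeline described above (per-segment stenosis and composition features, boosted ensemble, 80/20 train/test split, AUC on the held-out 20%) can be sketched as follows. The study uses XGBoost; scikit-learn's gradient boosting stands in here, and the features and outcome are wholly synthetic:

```python
# Sketch of a boosted-ensemble risk score trained on per-segment
# coronary features with an 80/20 split, evaluated by AUC.
# Synthetic data: 16 segments x stenosis grade (0-5) and plaque
# composition (0-3); outcome driven by total stenosis burden.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
stenosis = rng.integers(0, 6, size=(n, 16))      # per-segment grade
composition = rng.integers(0, 4, size=(n, 16))   # per-segment plaque type
X = np.hstack([stenosis, composition])

# Synthetic outcome: event probability rises with total stenosis burden
p = 1 / (1 + np.exp(-(stenosis.sum(axis=1) - 40) / 8))
y = (rng.random(n) < p).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.3f}")
```

Evaluating only on the unseen 20% split mirrors the study's design and guards against the optimistic AUC a boosted model would report on its own training data.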

  2. [Influence of the albumin fraction in the plasma oncotic pressure (author's transl)].

    PubMed

    Rodríguez Portillo, M; Trujillo Rodríguez, F; Aznar Reig, A

    1979-12-15

This work analyzes the influence which the albumin fraction exerts upon plasma oncotic pressure. With this objective three different groups were studied, each one of which was composed of subjects with identical total proteinemia and variable albuminemia. The first group: nine subjects with 6.2 g/100 ml proteinemia and albumin values between 3.2 and 3.8 g/100 ml; the second group: seven healthy subjects with 6.4 g/100 ml proteinemia and albumin levels between 3 and 4 g/100 ml; the third group: subjects with proteinemia at 6.6 g/100 ml and extreme values of albumin between 3.1 and 4.3 g/100 ml. Plasma oncotic pressure was determined by means of an electronic osmometer, according to the described technique. With proteinemia constant at 6.2 g/100 ml, a 0.6 g/100 ml fluctuation of the albumin concentration induced a variation in the plasma oncotic pressure of up to 20.4 per cent. In cases of proteinemia remaining constant at 6.4 g/100 ml, the oscillation of albumin levels between 3 and 4 g/100 ml represented a change in the plasma oncotic pressure of 32.58 per cent. In the third group, the influence of the albuminemia was lesser (23.1 per cent variability in the plasma oncotic pressure, with an oscillation of 1.2 g/100 ml in albuminemia). The existence of variable values of plasma oncotic pressure corresponding to cases with identical proteinemia and albuminemia leads us to consider the powerful influence exerted upon the plasma oncotic pressure by other factors which affect the mass-structure and the electrical charges of proteins.

  3. Improving Word Learning in Children Using an Errorless Technique

    ERIC Educational Resources Information Center

    Warmington, Meesha; Hitch, Graham J.; Gathercole, Susan E.

    2013-01-01

    The current experiment examined the relative advantage of an errorless learning technique over an errorful one in the acquisition of novel names for unfamiliar objects in typically developing children aged between 7 and 9 years. Errorless learning led to significantly better learning than did errorful learning. Processing speed and vocabulary…

  4. Examining Online Learning Patterns with Data Mining Techniques in Peer-Moderated and Teacher-Moderated Courses

    ERIC Educational Resources Information Center

    Hung, Jui-Long; Crooks, Steven M.

    2009-01-01

    The student learning process is important in online learning environments. If instructors can "observe" online learning behaviors, they can provide adaptive feedback, adjust instructional strategies, and assist students in establishing patterns of successful learning activities. This study used data mining techniques to examine and…

  5. [Extracellular fluid, plasma and interstitial volume in cirrhotic patients without clinical edema or ascites].

    PubMed

    Noguera Viñas, E C; Hames, W; Mothe, G; Barrionuevo, M P

    1989-01-01

Extracellular fluid volume (E.C.F.) and plasma volume (P.V.) were measured with sodium sulfate labeled with 35S and 131I human serum albumin, respectively, by the dilution technique in control subjects and in cirrhotic patients without clinical ascites or edema, renal or hepatic failure, gastrointestinal bleeding or diuretics. Results are expressed as mean ± SD in both ml/m2 and ml/kg. In normal subjects E.C.F. (n = 8) was 7,533 ± 817 ml/m2 (201.3 ± 182 ml/kg), P.V. (n = 11) 1,767 ± 337 ml/m2 (47.2 ± 9.3 ml/kg), and interstitial fluid (I.S.F.) (n = 7) 5,758 ± 851 ml/m2 (Table 2). In cirrhotic patients E.C.F. (n = 11) was 10,318 ± 2,980 ml/m2 (261.7 ± 76.8 ml/kg), P.V. (n = 12) 2,649 ± 558 ml/m2 (67.7 ± 15.6 ml/kg) and I.S.F. (n = 11) 7,866 ± 2,987 ml/m2 (Table 3). Cirrhotic patients compared with normal subjects have hypervolemia due to a significant E.C.F. and P.V. expansion (p less than 0.02 and less than 0.001, respectively) (Fig. 1). The E.C.F. and P.V. abnormalities in cirrhotic patients may reflect urinary sodium retention related to portal hypertension, which stimulates aldosterone release, or enhanced renal tubular sensitivity to the hormone. However, it is also possible that these patients, in the presence of hypoalbuminemia (Table 1), have no clinical edema or ascites due to increased glomerular filtration, suppressed release of vasopressin, increased natriuretic factor, and urinary prostaglandin excretion in response to the intravascular expansion, all of which increase solute and water delivery to the distal nephron and improve renal water excretion. We conclude that in our clinical experience cirrhotic patients without ascites or edema have hypervolemia because of a disturbance in E.C.F.
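The dilution technique used here rests on the indicator-dilution principle: the distribution volume of a tracer is its injected dose divided by its equilibrium concentration. A minimal sketch with illustrative numbers (tracer losses and mixing corrections ignored):

```python
# Indicator-dilution principle behind tracer-based volume measurement:
# distribution volume = injected dose / equilibrium concentration.
# Numbers below are illustrative only, not from the study.
def dilution_volume_ml(dose_counts, equilibrium_counts_per_ml):
    """Distribution volume of a tracer from its dilution at equilibrium."""
    return dose_counts / equilibrium_counts_per_ml

# e.g. 2.0e6 counts of labelled albumin equilibrating at 800 counts/ml
plasma_volume = dilution_volume_ml(2.0e6, 800.0)
print(plasma_volume)  # 2500.0 ml
```

The same relation applies to both tracers in the study: labelled albumin stays (approximately) intravascular and so measures plasma volume, while labelled sulfate distributes through the whole extracellular space; interstitial fluid is then the difference of the two volumes.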

  6. Chemical composition, toxicity and larvicidal and antifungal activities of Persea americana (avocado) seed extracts.

    PubMed

    Leite, João Jaime Giffoni; Brito, Erika Helena Salles; Cordeiro, Rossana Aguiar; Brilhante, Raimunda Sâmia Nogueira; Sidrim, José Júlio Costa; Bertini, Luciana Medeiros; Morais, Selene Maia de; Rocha, Marcos Fábio Gadelha

    2009-01-01

The present study had the aim of testing the hexane and methanol extracts of avocado seeds, in order to determine their toxicity towards Artemia salina, evaluate their larvicidal activity towards Aedes aegypti and investigate their in vitro antifungal potential against strains of Candida spp., Cryptococcus neoformans and Malassezia pachydermatis through the microdilution technique. In toxicity tests on Artemia salina, the hexane and methanol extracts from avocado seeds showed LC50 values of 2.37 and 24.13 mg mL-1 respectively. Against Aedes aegypti larvae, the LC50 results obtained were 16.7 mg mL-1 for the hexane extract and 8.87 mg mL-1 for the methanol extract from avocado seeds. The extracts tested were also active against all the yeast strains tested in vitro, with differing results, such that the minimum inhibitory concentration of the hexane extract ranged from 0.625 to 1.25 mg mL-1, from 0.312 to 0.625 mg mL-1 and from 0.031 to 0.625 mg mL-1, for the strains of Candida spp., Cryptococcus neoformans and Malassezia pachydermatis, respectively. The minimum inhibitory concentration for the methanol extract ranged from 0.125 to 0.625 mg mL-1, from 0.08 to 0.156 mg mL-1 and from 0.312 to 0.625 mg mL-1, for the strains of Candida spp., Cryptococcus neoformans and Malassezia pachydermatis, respectively.

  7. Age related prostate-specific antigen reference range among men in south-East Caspian Sea.

    PubMed

    Mansourian, A R; Ghaemi, E O; Ahmadi, A R; Marjani, A; Moradi, A; Saifi, A

    2007-05-01

    The purpose of this study was to describe the distribution of serum prostate-specific antigen (PSA) and to determine an age-specific reference range in a population of Persian men. Venous blood samples were taken from 287 men aged 15 to over 80 years from Gorgan, located in the north of Iran, south-east of the Caspian Sea. Serum PSA levels were measured using an enzyme-linked immunosorbent assay (ELISA) technique, and age-specific ranges for PSA level were determined. Across the six age groups (15-40, 41-50, 51-60, 61-70, 71-80 and >80 years), serum PSA levels were mainly in the range of 0-2.5 ng mL(-1) (76.6% of men) or 2.6-4 ng mL(-1) (9.1%); as a whole, 85.7% of all men in this study had PSA < or = 4 ng mL(-1), while 8.7% and 5.6% had PSA levels of 4.1-10 ng mL(-1) and >10 ng mL(-1), respectively. The findings of the present study indicate that a large proportion (76.6%) of men in this region have a low PSA level of 0-2.5 ng mL(-1) and only 9.1% have a PSA level of 2.6-4 ng mL(-1). It is therefore concluded that the accepted reference range of 0-4 ng mL(-1) for PSA requires further reassessment.
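
Age-specific reference ranges of this kind are commonly derived as upper percentiles of the analyte distribution within each age band. A small sketch of that generic procedure (nearest-rank percentile; the age-band edges mirror the abstract, but the data handling is an assumption, not the authors' exact method):

```python
import math

def reference_upper_limit(values, percentile=0.95):
    """Nearest-rank upper reference limit: smallest value with at least
    `percentile` of the sample at or below it."""
    ordered = sorted(values)
    rank = math.ceil(percentile * len(ordered))  # 1-based rank
    return ordered[rank - 1]

def by_age_group(samples, edges=(40, 50, 60, 70, 80)):
    """Group (age, psa) pairs into the study's six age bands."""
    groups = {}
    for age, psa in samples:
        label = next((f"<={e}" for e in edges if age <= e), ">80")
        groups.setdefault(label, []).append(psa)
    return groups

# e.g. reference_upper_limit(range(1, 101)) -> 95
```

Each band's upper limit would then be reported as that band's reference ceiling.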

  8. MRI of the small bowel: can sufficient bowel distension be achieved with small volumes of oral contrast?

    PubMed

    Kinner, Sonja; Kuehle, Christiane A; Herbig, Sebastian; Haag, Sebastian; Ladd, Susanne C; Barkhausen, Joerg; Lauenstein, Thomas C

    2008-11-01

    Sufficient luminal distension is mandatory for small bowel imaging. However, patients are often unable to ingest the volumes of currently applied oral contrast compounds. The aim of this study was to evaluate whether administration of low doses of a high-osmolarity oral contrast agent leads to sufficient and diagnostic bowel distension. Six healthy volunteers ingested, on different occasions, 150, 300 and 450 ml of a commercially available oral contrast agent (Banana Smoothie Readi-Cat, E-Z-EM; 194 mOsmol/l). Two-dimensional TrueFISP data sets were acquired at 5-min intervals up to 45 min after contrast ingestion. Small bowel distension was quantified using a visual five-grade ranking (5 = very good distension, 1 = collapsed bowel). Results were statistically compared using a Wilcoxon rank test. Ingestion of 450 ml and 300 ml resulted in significantly better distension than 150 ml. The overall average distension value for 450 ml was 3.4 (300 ml: 3.0; 150 ml: 2.3), and diagnostic bowel distension was found throughout the small intestine. Even 45 min after ingestion of 450 ml, the jejunum and ileum could be reliably analyzed. Small bowel imaging with low doses of contrast leads to diagnostic distension values in healthy subjects when a high-osmolarity substance is applied. These findings may help to further refine small bowel MRI techniques, but need to be confirmed in patients with small bowel disorders.

  9. Antimicrobial Activity of Pomegranate and Green Tea Extract on Propionibacterium acnes, Propionibacterium granulosum, Staphylococcus aureus and Staphylococcus epidermidis.

    PubMed

    Li, Zhaoping; Summanen, Paula H; Downes, Julia; Corbett, Karen; Komoriya, Tomoe; Henning, Susanne M; Kim, Jenny; Finegold, Sydney M

    2015-06-01

    We used pomegranate extract (POMx), pomegranate juice (POM juice) and green tea extract (GT) to establish in vitro activities against bacteria implicated in the pathogenesis of acne. Minimum inhibitory concentrations (MICs) of 94 Propionibacterium acnes, Propionibacterium granulosum, Staphylococcus aureus, and Staphylococcus epidermidis strains were determined by the Clinical and Laboratory Standards Institute-approved agar dilution technique. The total phenolic content of the phytochemicals was determined using the Folin-Ciocalteu method and the polyphenol composition by HPLC. Bacteria were identified by 16S rRNA sequence analysis. A GT MIC of 400 μg/ml or less was obtained for 98% of the strains tested. Of the P. acnes strains, 64% had POMx MICs of 50 μg/ml, whereas 36% had MICs >400 μg/ml. POMx, POM juice, and GT showed inhibitory activity against all the P. granulosum strains at ≤100 μg/ml. POMx and GT inhibited all the S. aureus strains at 400 μg/ml or below, and POM juice had an MIC of 200 μg/ml against 17 S. aureus strains. POMx inhibited S. epidermidis strains at 25 μg/ml, whereas POM juice MICs were ≥200 μg/ml. The antibacterial properties of POMx and GT on the most common bacteria associated with the development and progression of acne suggest that these extracts may offer a better preventative/therapeutic regimen, with fewer side effects, than those currently available.
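
In a dilution series, the MIC is read as the lowest tested concentration that prevents visible growth. A minimal sketch of that readout (hypothetical data, not the study's measurements):

```python
def mic(concentrations, growth_observed):
    """Minimum inhibitory concentration from a dilution series.

    `concentrations` and `growth_observed` are parallel lists; the MIC is the
    lowest concentration with no growth, or None if growth occurred at every
    tested concentration.
    """
    inhibited = [c for c, grew in zip(concentrations, growth_observed) if not grew]
    return min(inhibited) if inhibited else None

# Hypothetical two-fold dilution series (ug/ml): growth persists at 50 and 25.
series = [400, 200, 100, 50, 25]
grew = [False, False, False, True, True]
# mic(series, grew) -> 100
```

Readouts like "MIC >400 μg/ml" in the abstract correspond to the `None` case: growth at every tested concentration.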

  10. Defining the local nerve blocks for feline distal thoracic limb surgery: a cadaveric study

    PubMed Central

    Enomoto, Masataka; Lascelles, B Duncan X; Gerard, Mathew P

    2016-01-01

    Objectives Though controversial, onychectomy remains a commonly performed distal thoracic limb surgical procedure in cats. Peripheral nerve block techniques have been proposed in cats undergoing onychectomy but evidence of efficacy is lacking. Preliminary tests of the described technique using cadavers resulted in incomplete staining of nerves. The aim of this study was to develop nerve block methods based on cadaveric dissections and test these methods with cadaveric dye injections. Methods Ten pairs of feline thoracic limbs (n = 20) were dissected and superficial branches of the radial nerve (RSbr nn.), median nerve (M n.), dorsal branch of ulnar nerve (UDbr n.), superficial branch of palmar branch of ulnar nerve (UPbrS n.) and deep branch of palmar branch of ulnar nerve (UPbrDp n.) were identified. Based on these dissections, a four-point block was developed and tested using dye injections in another six pairs of feline thoracic limbs (n = 12). Using a 25 G × 5/8 inch needle and 1 ml syringe, 0.07 ml/kg methylene blue was injected at the site of the RSbr nn., 0.04 ml/kg at the injection site of the UDbr n., 0.08 ml/kg at the injection site of the M n. and UPbrS n., and 0.01 ml/kg at the injection site of the UPbrDp n. The length and circumference of each nerve that was stained was measured. Results Positive staining of all nerves was observed in 12/12 limbs. The lengths stained for RSbr nn., M n., UDbr n., UPbrS n. and UPbrDp n. were 34.9 ± 5.3, 26.4 ± 4.8, 29.2 ± 4.0, 39.1 ± 4.3 and 17.5 ± 3.3 mm, respectively. The nerve circumferences stained were 93.8 ± 15.5, 95.8 ± 9.7, 100 ± 0.0, 100 ± 0.0 and 93.8 ± 15.5%, respectively. Conclusions and relevance This described four-point injection method may be an effective perioperative analgesia technique for feline distal thoracic limb procedures. PMID:26250858
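
The per-site doses above are specified in ml/kg, so the injected volumes scale with body mass. A small helper illustrating that arithmetic (the rates come from the abstract; the function shape and rounding are assumptions, and actual dosing is a clinical decision):

```python
# ml/kg per injection site, as described for the four-point block:
DOSE_RATES_ML_PER_KG = {
    "RSbr nn.": 0.07,
    "UDbr n.": 0.04,
    "M n. + UPbrS n.": 0.08,
    "UPbrDp n.": 0.01,
}

def injection_volumes(body_mass_kg):
    """Per-site injection volumes (ml) for a patient of the given mass,
    rounded to the 0.01 ml resolution of a 1 ml syringe."""
    return {site: round(rate * body_mass_kg, 2)
            for site, rate in DOSE_RATES_ML_PER_KG.items()}

# For a 4 kg cat the total is 0.20 ml/kg x 4 kg = 0.8 ml of methylene blue.
volumes = injection_volumes(4.0)
```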

  11. Development of HPLC Techniques for the Analysis of Trace Metal Species in the Primary Coolant of a Pressurised Water Reactor.

    NASA Astrophysics Data System (ADS)

    Barron, Keiron Robert Philip

    Available from UMI in association with The British Library. The need to monitor corrosion products in the primary circuit of a pressurised water reactor (PWR) at a concentration of 10 pg ml(-1) is discussed. A review of trace and ultra-trace metal analysis, relevant to the specific requirements imposed by primary coolant chemistry, indicated that high performance liquid chromatography (HPLC), coupled with preconcentration of the sample, was an ideal technique. A HPLC system was developed to determine trace metal species in simulated PWR primary coolant. In order to achieve the desired detection limit, an on-line preconcentration system had to be developed. Separations were performed on Aminex A9 and Benson BC-X10 analytical columns. Detection was by post-column reaction with Eriochrome Black T and Calmagite. Linear calibrations of 2.5-100 ng of cobalt (the main species of interest) were achieved using up to 200 ml samples. The detection limit for a 200 ml sample was 10 pg ml(-1). In order to achieve the desired aim of on-line collection of species at 300 °C, the use of inorganic ion-exchangers is essential. A novel application, utilising the attractive features of the inorganic ion-exchangers titanium dioxide, zirconium dioxide, zirconium arsenophosphate and pore-controlled glass beads, was developed for the preconcentration of trace metal species at temperature and pressure. The performance of these exchangers, at ambient temperature and at 300 °C, was assessed by their inclusion in the developed analytical system and by the use of radioisotopes. The particular emphasis during development was on accuracy, reproducibility of recovery, stability of reagents and system contamination, studied by the use of radioisotopes and response to post-column reagents. This study, in conjunction with work carried out at Winfrith, resulted in a monitoring system that could follow changes in coolant chemistry, on deposition and release of metal species, in simulated PWR water loops. On-line detection of cobalt at 11 pg ml(-1) was recorded, something which previously could not be achieved by other techniques.

  12. What is the best method to fit time-resolved data? A comparison of the residual minimization and the maximum likelihood techniques as applied to experimental time-correlated, single-photon counting data

    DOE PAGES

    Santra, Kalyan; Zhan, Jinchun; Song, Xueyu; ...

    2016-02-10

    The need for measuring fluorescence lifetimes of species in subdiffraction-limited volumes in, for example, stimulated emission depletion (STED) microscopy, entails the dual challenge of probing a small number of fluorophores and fitting the concomitant sparse data set to the appropriate excited-state decay function. This need has stimulated a further investigation into the relative merits of two fitting techniques commonly referred to as “residual minimization” (RM) and “maximum likelihood” (ML). Fluorescence decays of the well-characterized standard, rose bengal in methanol at room temperature (530 ± 10 ps), were acquired in a set of five experiments in which the total number of “photon counts” was approximately 20, 200, 1000, 3000, and 6000 and there were about 2–200 counts at the maxima of the respective decays. Each set of experiments was repeated 50 times to generate the appropriate statistics. Each of the 250 data sets was analyzed by ML and two different RM methods (differing in the weighting of residuals) using in-house routines and compared with a frequently used commercial RM routine. Convolution with a real instrument response function was always included in the fitting. While RM using Pearson’s weighting of residuals can recover the correct mean result with a total number of counts of 1000 or more, ML distinguishes itself by yielding, in all cases, the same mean lifetime within 2% of the accepted value. For 200 total counts and greater, ML always provides a standard deviation of <10% of the mean lifetime, and even at 20 total counts there is only 20% error in the mean lifetime. Here, the robustness of ML advocates its use for sparse data sets such as those acquired in some subdiffraction-limited microscopies, such as STED, and, more importantly, provides greater motivation for exploiting the time-resolved capacities of this technique to acquire and analyze fluorescence lifetime data.
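
The contrast between the two estimators can be made concrete for a single-exponential decay: ML maximizes the Poisson likelihood of the binned counts, while RM minimizes a weighted sum of squared residuals (Pearson weighting shown). A grid-search sketch under simplifying assumptions (no instrument-response convolution, noise-free synthetic data; all names are illustrative, not the paper's routines):

```python
import math

def model_counts(times, total, tau):
    """Expected counts per bin for a single-exponential decay with lifetime
    tau, scaled so the expected total matches the observed total."""
    weights = [math.exp(-t / tau) for t in times]
    s = sum(weights)
    return [total * w / s for w in weights]

def poisson_nll(counts, expected):
    # Negative Poisson log-likelihood (the n! constant is dropped).
    return sum(m - n * math.log(m) for n, m in zip(counts, expected))

def fit_lifetime_ml(times, counts, taus):
    """ML: grid-search lifetime minimizing the negative Poisson log-likelihood."""
    total = sum(counts)
    return min(taus, key=lambda tau: poisson_nll(counts, model_counts(times, total, tau)))

def fit_lifetime_rm(times, counts, taus):
    """RM with Pearson weighting: minimize chi^2 = sum (n - m)^2 / m."""
    total = sum(counts)
    def chi2(tau):
        exp = model_counts(times, total, tau)
        return sum((n - m) ** 2 / m for n, m in zip(counts, exp))
    return min(taus, key=chi2)

# Noise-free sanity check: data generated with tau = 0.53 ns recover 0.53 ns.
times = [0.1 * i for i in range(1, 40)]          # bin centers (ns)
true = model_counts(times, 1000, 0.53)           # ideal decay histogram
taus = [0.40 + 0.01 * i for i in range(31)]      # candidate lifetimes
best = fit_lifetime_ml(times, true, taus)
```

The estimators differ once the counts are sparse and Poisson noise dominates; on this noise-free check both recover the generating lifetime exactly.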

  13. Spatial variation of physicochemical and bacteriological parameters elucidation with GIS in Rangat Bay, Middle Andaman, India

    NASA Astrophysics Data System (ADS)

    Dheenan, P. S.; Jha, Dilip Kumar; Vinithkumar, N. V.; Ponmalar, A. Angelin; Venkateshwaran, P.; Kirubagaran, R.

    2014-01-01

    The purpose of this study was to determine the concentration and distribution of bacteria and the physicochemical properties of surface seawater in Rangat Bay, Middle Andaman, Andaman Islands (India). The bay experiences tidal variations; however, the physicochemical properties of seawater in Rangat Bay were found not to vary significantly. The concentration of faecal streptococci was high (2.2 × 10(3) CFU/100 mL) at the creek and harbour area, whereas total coliforms were high (7.0 × 10(2) CFU/100 mL) at the mangrove area. Similarly, the total heterotrophic bacterial concentration was high (5.92 × 10(4) CFU/100 mL) in the mangrove and harbour areas. The Vibrio cholerae and Vibrio parahaemolyticus concentrations were high (4.2 × 10(4) CFU/100 mL and 9 × 10(3) CFU/100 mL) at the open sea. Cluster analysis showed grouping of stations in different tidal periods. The spatial maps clearly depicted the bacterial concentration pattern in the bay. The combined approach of multivariate analysis and spatial mapping techniques proved useful in the current study.
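
Cluster analyses of this kind group stations whose bacterial counts are similar; counts spanning orders of magnitude are usually log-transformed first. A compact single-linkage sketch with hypothetical station data (the magnitudes echo the abstract but are illustrative, and this generic method is a stand-in for the paper's multivariate analysis):

```python
import math

def log_cfu(counts):
    """Log10-transform per-station CFU vectors before clustering."""
    return {k: [math.log10(v) for v in vals] for k, vals in counts.items()}

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def single_linkage(points, threshold):
    """Greedy single-linkage agglomeration: merge clusters whose closest
    members lie within `threshold` of each other."""
    clusters = [[name] for name in points]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if any(euclid(points[a], points[b]) <= threshold
                       for a in clusters[i] for b in clusters[j]):
                    clusters[i] += clusters[j]
                    del clusters[j]
                    merged = True
                    break
            if merged:
                break
    return clusters

# Hypothetical stations, (faecal streptococci, total coliforms) in CFU/100 mL:
stations = log_cfu({"creek": [2200, 50], "harbour": [2000, 60],
                    "mangrove": [30, 700], "open_sea": [25, 650]})
groups = single_linkage(stations, threshold=0.5)
```

With these values the creek/harbour stations cluster apart from the mangrove/open-sea stations, mirroring the kind of station grouping the study reports.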

  14. Deep Artificial Neural Networks and Neuromorphic Chips for Big Data Analysis: Pharmaceutical and Bioinformatics Applications.

    PubMed

    Pastur-Romay, Lucas Antón; Cedrón, Francisco; Pazos, Alejandro; Porto-Pazos, Ana Belén

    2016-08-11

    Over the past decade, Deep Artificial Neural Networks (DNNs) have become the state-of-the-art algorithms in Machine Learning (ML), speech recognition, computer vision, natural language processing and many other tasks. This was made possible by advances in Big Data, Deep Learning (DL) and drastically increased chip processing abilities, especially general-purpose graphical processing units (GPGPUs). All this has created a growing interest in making the most of the potential offered by DNNs in almost every field. An overview of the main architectures of DNNs and their usefulness in Pharmacology and Bioinformatics is presented in this work. The featured applications are: drug design, virtual screening (VS), Quantitative Structure-Activity Relationship (QSAR) research, protein structure prediction and genomics (and other omics) data mining. The future need for neuromorphic hardware for DNNs is also discussed, and the two most advanced chips are reviewed: IBM TrueNorth and SpiNNaker. In addition, this review points out that DNNs and neuromorphic chips should include not only neurons but also glial cells, given the proven importance of astrocytes, a type of glial cell that contributes to information processing in the brain. The Deep Artificial Neuron-Astrocyte Networks (DANAN) could overcome the difficulties in architecture design, learning process and scalability of the current ML methods.
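
At their core, the DNNs surveyed here stack affine layers with nonlinearities between them. A dependency-free forward pass for a toy multilayer network (the weights are hand-picked for illustration; real DNNs learn them by backpropagation):

```python
def relu(v):
    """Rectified linear unit applied elementwise."""
    return [max(0.0, x) for x in v]

def dense(v, weights, bias):
    """One fully connected layer; `weights` holds one row per output unit."""
    return [sum(w * x for w, x in zip(row, v)) + b for row, b in zip(weights, bias)]

def forward(x, layers):
    """Stacked dense layers with ReLU between them; identity on the last layer."""
    for i, (w, b) in enumerate(layers):
        x = dense(x, w, b)
        if i < len(layers) - 1:
            x = relu(x)
    return x

# Toy 2-3-1 network with hand-picked weights (illustrative only):
layers = [
    ([[1.0, -1.0], [0.5, 0.5], [-1.0, 1.0]], [0.0, 0.0, 0.0]),
    ([[1.0, 2.0, 1.0]], [0.1]),
]
out = forward([2.0, 1.0], layers)
```

Depth comes from repeating the same dense/nonlinearity pattern; the review's architectures differ mainly in how these layers are wired and shared.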

  15. Deep Artificial Neural Networks and Neuromorphic Chips for Big Data Analysis: Pharmaceutical and Bioinformatics Applications

    PubMed Central

    Pastur-Romay, Lucas Antón; Cedrón, Francisco; Pazos, Alejandro; Porto-Pazos, Ana Belén

    2016-01-01

    Over the past decade, Deep Artificial Neural Networks (DNNs) have become the state-of-the-art algorithms in Machine Learning (ML), speech recognition, computer vision, natural language processing and many other tasks. This was made possible by advances in Big Data, Deep Learning (DL) and drastically increased chip processing abilities, especially general-purpose graphical processing units (GPGPUs). All this has created a growing interest in making the most of the potential offered by DNNs in almost every field. An overview of the main architectures of DNNs and their usefulness in Pharmacology and Bioinformatics is presented in this work. The featured applications are: drug design, virtual screening (VS), Quantitative Structure–Activity Relationship (QSAR) research, protein structure prediction and genomics (and other omics) data mining. The future need for neuromorphic hardware for DNNs is also discussed, and the two most advanced chips are reviewed: IBM TrueNorth and SpiNNaker. In addition, this review points out that DNNs and neuromorphic chips should include not only neurons but also glial cells, given the proven importance of astrocytes, a type of glial cell that contributes to information processing in the brain. The Deep Artificial Neuron–Astrocyte Networks (DANAN) could overcome the difficulties in architecture design, learning process and scalability of the current ML methods. PMID:27529225

  16. Role of artificial intelligence in the care of patients with nonsmall cell lung cancer.

    PubMed

    Rabbani, Mohamad; Kanevsky, Jonathan; Kafi, Kamran; Chandelier, Florent; Giles, Francis J

    2018-04-01

    Lung cancer is the leading cause of cancer death worldwide. In up to 57% of patients it is diagnosed at an advanced stage, and the 5-year survival rate ranges between 10% and 16%. There has been a significant amount of research using machine learning to generate tools from patient data to improve outcomes. This narrative review is based on research material obtained from PubMed up to November 2017. The search terms included "artificial intelligence," "machine learning," "lung cancer," "Nonsmall Cell Lung Cancer (NSCLC)," "diagnosis" and "treatment." Recent studies support the use of computer-aided systems and radiomic features to help diagnose lung cancer earlier. Other studies have looked at machine learning (ML) methods that offer prognostic tools to doctors and help them choose personalized treatment options for their patients based on molecular, genetic and histological features. Integrating artificial intelligence approaches into health care may serve as a beneficial tool for patients with NSCLC, and this review outlines these benefits and current shortcomings throughout the continuum of care. We present a review of the various applications of ML methods in NSCLC as they relate to improving diagnosis, treatment and outcomes. © 2018 Stichting European Society for Clinical Investigation Journal Foundation.

  17. Two techniques for eliminating luminol interference material and flow system configurations for luminol and firefly luciferase systems

    NASA Technical Reports Server (NTRS)

    Thomas, R. R.

    1976-01-01

    Two methods for eliminating luminol interference materials are described. One method eliminates interference from organic material by pre-reacting a sample with dilute hydrogen peroxide. The reaction rate resolution method for eliminating inorganic forms of interference is also described. The combination of the two methods makes the luminol system more specific for bacteria. Flow system designs for both the firefly luciferase and luminol bacteria detection systems are described. The firefly luciferase flow system, incorporating nitric acid extraction and optimal dilutions, has a functional sensitivity of 3 × 10(5) E. coli/ml. The luminol flow system incorporates the hydrogen peroxide pretreatment and the reaction rate resolution techniques for eliminating interference. The functional sensitivity of the luminol flow system is 1 × 10(4) E. coli/ml.

  18. Machine Learning Techniques in Clinical Vision Sciences.

    PubMed

    Caixinha, Miguel; Nunes, Sandrina

    2017-01-01

    This review presents and discusses the contribution of machine learning techniques to diagnosis and disease monitoring in the context of clinical vision science. Many ocular diseases leading to blindness can be halted or delayed when detected and treated at their earliest stages. With the recent developments in diagnostic devices, imaging and genomics, new sources of data for early disease detection and patient management are now available. Machine learning techniques emerged in the biomedical sciences as clinical decision-support techniques to improve the sensitivity and specificity of disease detection and monitoring, and to increase the objectivity of the clinical decision-making process. This manuscript presents a review of multimodal ocular disease diagnosis and monitoring based on machine learning approaches. In the first section, the technical issues related to the different machine learning approaches are presented. Machine learning techniques are used to automatically recognize complex patterns in a given dataset. These techniques allow either creating homogeneous groups (unsupervised learning) or creating a classifier that predicts group membership of new cases (supervised learning), when a group label is available for each case. To ensure good performance of the machine learning techniques on a given dataset, all possible sources of bias should be removed or minimized. For that, the representativeness of the input dataset for the true population should be confirmed, noise should be removed, missing data should be treated and the data dimensionality (i.e., the number of parameters/features and the number of cases in the dataset) should be adjusted. The application of machine learning techniques to ocular disease diagnosis and monitoring is presented and discussed in the second section of this manuscript. To show the clinical benefits of machine learning in clinical vision sciences, several examples are presented in glaucoma, age-related macular degeneration and diabetic retinopathy, these ocular pathologies being the major causes of irreversible visual impairment.
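
The distinction drawn above, unsupervised grouping versus supervised classification, can be shown side by side in a few lines of dependency-free code (toy 2-D "cases" with illustrative labels; a real clinical pipeline would add the bias controls the review describes):

```python
def centroid(points):
    """Componentwise mean of a list of equal-length vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def nearest(p, centroids):
    d = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(centroids)), key=lambda i: d(p, centroids[i]))

def kmeans(points, k, iters=10):
    """Unsupervised: partition cases into k homogeneous groups, no labels used."""
    cents = points[:k]  # naive seeding, adequate for the sketch
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[nearest(p, cents)].append(p)
        cents = [centroid(g) if g else cents[i] for i, g in enumerate(groups)]
    return cents

def classify(p, labeled):
    """Supervised: predict group membership via nearest class centroid."""
    classes = sorted(set(lab for _, lab in labeled))
    cents = [centroid([x for x, lab in labeled if lab == c]) for c in classes]
    return classes[nearest(p, cents)]
```

The same feature vectors feed both paths; only the availability of a group label per case separates the two learning regimes.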

  19. Predictions of new ABO3 perovskite compounds by combining machine learning and density functional theory

    NASA Astrophysics Data System (ADS)

    Balachandran, Prasanna V.; Emery, Antoine A.; Gubernatis, James E.; Lookman, Turab; Wolverton, Chris; Zunger, Alex

    2018-04-01

    We apply machine learning (ML) methods to a database of 390 experimentally reported ABO3 compounds to construct two statistical models that predict possible new perovskite materials and possible new cubic perovskites. The first ML model classified the 390 compounds into 254 perovskites and 136 that are not perovskites with a 90% average cross-validation (CV) accuracy; the second ML model further classified the perovskites into 22 known cubic perovskites and 232 known noncubic perovskites with a 94% average CV accuracy. We find that the most effective chemical descriptors affecting our classification include largely geometric constructs such as the A and B Shannon ionic radii, the tolerance and octahedral factors, the A-O and B-O bond lengths, and the A and B Villars' Mendeleev numbers. We then construct an additional list of 625 ABO3 compounds assembled from charge-conserving combinations of A and B atoms absent from our list of known compounds. Then, using the two ML models constructed on the known compounds, we predict that 235 of the 625 exist in a perovskite structure with a confidence greater than 50% and, among them, that 20 exist in the cubic structure (albeit, the latter with only ~50% confidence). We find that the new perovskites are most likely to occur when the A and B atoms are a lanthanide or actinide, when the A atom is an alkali, alkali earth, or late transition metal atom, or when the B atom is a p-block atom. We also compare the ML findings with the density functional theory calculations and convex hull analyses in the Open Quantum Materials Database (OQMD), which predicts the T = 0 K ground-state stability of all the ABO3 compounds. We find that OQMD predicts 186 of 254 of the perovskites in the experimental database to be thermodynamically stable within 100 meV/atom of the convex hull and predicts 87 of the 235 ML-predicted perovskite compounds to be thermodynamically stable within 100 meV/atom of the convex hull, including 6 of these in cubic structures. We suggest these 87 as the most promising candidates for future experimental synthesis of novel perovskites.
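
Two of the geometric descriptors named above are straightforward to compute. A sketch of the Goldschmidt tolerance factor and octahedral factor, plus a crude threshold screen (the thresholds are common rules of thumb, not the paper's trained ML models):

```python
import math

def tolerance_factor(r_a, r_b, r_o=1.40):
    """Goldschmidt tolerance factor t = (rA + rO) / (sqrt(2) * (rB + rO)),
    with Shannon ionic radii in angstroms (rO = 1.40 A for O2-)."""
    return (r_a + r_o) / (math.sqrt(2) * (r_b + r_o))

def octahedral_factor(r_b, r_o=1.40):
    """Ratio rB/rO governing the stability of the BO6 octahedron."""
    return r_b / r_o

def likely_perovskite(r_a, r_b, r_o=1.40):
    """Crude geometric screen (illustrative rule-of-thumb thresholds):
    perovskites typically satisfy 0.82 <= t <= 1.10 and rB/rO >= 0.41."""
    t = tolerance_factor(r_a, r_b, r_o)
    return 0.82 <= t <= 1.10 and octahedral_factor(r_b, r_o) >= 0.41

# SrTiO3 with Shannon radii rA(Sr2+, XII) = 1.44 A, rB(Ti4+, VI) = 0.605 A
# gives t close to 1, consistent with its cubic perovskite structure.
t_srtio3 = tolerance_factor(1.44, 0.605)
```

The paper's ML models combine such descriptors statistically rather than applying fixed cutoffs, which is what lifts the classification accuracy above a simple screen like this.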

  20. Immobilization of Bacillus amyloliquefaciens SP1 and its alkaline protease in various matrices for effective hydrolysis of casein.

    PubMed

    Guleria, Shiwani; Walia, Abhishek; Chauhan, Anjali; Shirkot, C K

    2016-12-01

    An extracellular alkaline protease-producing B. amyloliquefaciens SP1, having multifarious plant growth-promoting activities, was isolated from the apple rhizosphere. B. amyloliquefaciens SP1 protease was immobilized using various concentrations of calcium alginate, agar and polyacrylamide to determine the optimum concentration for formation of the beads. Enzyme activity before immobilization (at 60 °C, pH 8.0 for 5 min) was 3,580 µg/ml/min. The results of immobilization with the various matrices revealed that 3 % calcium alginate (2,829.92 µg/ml/min), 2 % agar (2,600 µg/ml/min) and 10 % polyacrylamide (5,698.99 µg/ml/min) were the optimum concentrations for stable bead formation. Immobilized enzyme reusability results indicated that calcium alginate, agar and polyacrylamide beads retained 25.63, 22.05 and 34.04 % activity in their fifth repeated cycle, respectively. In the cell immobilization technique, the free movement of microorganisms is restricted, and a semi-continuous system of fermentation can be used. In the present work, this technique was used for alkaline protease production with different matrices. Polyacrylamide (10 %) gave the highest total alkaline protease titer, i.e., 24,847 µg/ml/min semi-continuously over 18 days, compared with agar (total enzyme titer: 5,800 in 10 days) and calcium alginate (total enzyme titer: 13,010 in 15 days). The present study showed that, among the different matrices, polyacrylamide (10 %) has the greatest potential for immobilization of B. amyloliquefaciens SP1 and its detergent-stable alkaline protease, with effective application in bloodstain removal.

  1. Pressures of Wilderness Improvised Wound Irrigation Techniques: How Do They Compare?

    PubMed

    Luck, John B; Campagne, Danielle; Falcón Banchs, Roberto; Montoya, Jason; Spano, Susanne J

    2016-12-01

    The objective was to compare the pressures generated by improvised irrigation techniques with those of a commercial device and with prior reports. Devices tested included a commercial 500-mL compressible plastic bottle with splash guard, a 10-mL syringe, a 10-mL syringe with a 14-ga angiocatheter (with needle removed), a 50-mL Sawyer syringe, a plastic bag punctured with a 14-ga needle, a plastic bottle with its cap punctured by a 14-ga needle, a plastic bottle with sports top, and a bladder-style hydration system. Each device was leveled on a support, manually compressed, and aimed toward a piece of glass. A high-speed camera placed behind the glass recorded the height of the stream upon impact at its highest and lowest points. Measurements were recorded five times for each device, and pressures in pounds per square inch (psi) were calculated. The syringe and angiocatheter devices produced the highest pressures (16-49 psi). The 50-mL syringe (7-11 psi), the 14-ga punctured water bottle (7-25 psi), and the water bottle with sports top (3-7 psi) all measured at or above the commercial device (4-5 psi). Only the bladder-style hydration system (1-2 psi) and the plastic bag with 14-ga needle puncture (2-3 psi) did not reach the pressures generated by the commercial device. Pressures were consistent with those previously reported. All systems using compressible water bottles and all syringe-based systems provided pressures at or exceeding those of a commercial wound irrigation device. A 14-ga punctured plastic bag and a bladder-style hydration pack failed to generate similar irrigation pressures. Copyright © 2016 Wilderness Medical Society. All rights reserved.
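
Stream geometry and pressure are linked through Bernoulli's relation: a jet that can rise to height h leaves the nozzle at v = sqrt(2gh), giving a stagnation pressure of rho·g·h. A sketch of that conversion (drag neglected; this is a plausible reconstruction, not the authors' published calculation):

```python
RHO_WATER = 998.0   # kg/m^3 near room temperature
G = 9.81            # m/s^2
PA_PER_PSI = 6894.76

def jet_velocity(height_m):
    """Exit velocity needed for a vertical jet to reach height_m (no drag)."""
    return (2 * G * height_m) ** 0.5

def stagnation_pressure_psi(height_m):
    """Dynamic (stagnation) pressure of the jet:
    p = 0.5 * rho * v^2 = rho * g * h, converted from Pa to psi."""
    return RHO_WATER * G * height_m / PA_PER_PSI

# A jet that could rise about 3.5 m corresponds to roughly 5 psi,
# the commercial-device range reported above.
p = stagnation_pressure_psi(3.5)
```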

  2. Solid-phase extraction followed by liquid chromatography quadrupole time-of-flight tandem mass spectrometry for the selective determination of fungicides in wine samples.

    PubMed

    Fontana, A R; Rodríguez, I; Ramil, M; Altamirano, J C; Cela, R

    2011-04-22

    In this work, a reliable and selective procedure for the determination of thirteen fungicides in red and white wine samples is proposed. Solid-phase extraction (SPE) and liquid chromatography (LC) tandem mass spectrometry (MS/MS), based on a hybrid quadrupole time-of-flight (QTOF) system, were used as the sample preparation and determination techniques, respectively. Extraction and purification of the target analytes were carried out simultaneously by using a reversed-phase Oasis HLB (200 mg) SPE cartridge combined with acetonitrile as elution solvent. Fungicides were determined operating the electrospray source in the positive ionization mode, with MS/MS conditions adjusted to obtain at least two intense product ions per compound, or registering two transitions per species when a single product was noticed. Highly selective MS/MS chromatograms were extracted using a mass window of 20 ppm for each product ion. Using external calibration as the quantification technique, the overall recoveries (accuracy) of the procedure ranged between 81% and 114% for red and white wine samples (10-20 mL) spiked at different concentrations between 5 and 100 ng mL(-1). Relative standard deviations of the above data stayed below 12%, and the limits of quantification (LOQs) of the method, calculated for 10 mL of wine, varied between 0.1 ng mL(-1) for cyprodinil (CYP) and 0.7 ng mL(-1) for myclobutanil (MYC). The optimized method was applied to seventeen commercial wines produced in Spain and obtained from local supermarkets. Nine fungicides were determined, at levels above the LOQs of the method, in the above samples. The maximum concentrations and the highest occurrence frequencies corresponded to metalaxyl (MET) and iprovalicarb (IPR). Copyright © 2011 Elsevier B.V. All rights reserved.

  3. The EmulSiv filter removes microbial contamination from propofol but is not a substitute for aseptic technique.

    PubMed

    Hall, Wendy C E; Jolly, Donald T; Hrazdil, Jiri; Galbraith, John C; Greacen, Maria; Clanachan, Alexander S

    2003-01-01

    To evaluate the ability of the EmulSiv filter (EF) to remove extrinsic microbial contaminants from propofol. Aliquots of Staphylococcus aureus (S. aureus), Candida albicans (C. albicans), Klebsiella pneumoniae (K. pneumoniae), Moraxella osloensis (M. osloensis), Enterobacter agglomerans (E. agglomerans), Escherichia coli (E. coli), Serratia marcescens (S. marcescens), Moraxella catarrhalis (M. catarrhalis), Haemophilus influenzae (H. influenzae) and Campylobacter jejuni (C. jejuni) were inoculated into vials containing 20 mL of sterile propofol. The unfiltered inoculated propofol solutions served as controls. Samples of 10 mL and 20 mL of the inoculated propofol were filtered through the EF. All solutions were then subplated onto three culture plates using a precision 1-µL calibrated platinum loop and incubated. The number of colony-forming units (CFU) was counted. Data were analyzed using a one-sample t test, and a P value of less than 0.05 was selected as the level of statistical significance. The EF was able to completely remove CFU of S. aureus, C. albicans, K. pneumoniae, M. osloensis, E. agglomerans, E. coli, S. marcescens, and M. catarrhalis (P < 0.05). A small number of H. influenzae CFU were able to evade filtration in both the 10 mL and 20 mL samples; C. jejuni CFU evaded filtration in only the 10 mL sample. The EF removes the majority of microbial contaminants from propofol, with the exception of H. influenzae and C. jejuni. Although the EF is capable of removing most of the microbial contamination produced by H. influenzae and C. jejuni, a few CFU are capable of evading filtration. Consequently, even the use of a filter capable of removing microbial contaminants is not a substitute for meticulous aseptic technique and prompt administration when propofol is used.

  4. AFRRI (Armed Forces Radiobiology Research Institute) Reports, July, August and September 1987.

    DTIC Science & Technology

    1987-11-01

    mononuclear cell layer obtained after Percoll isolation contained approximately 90% monocytes as assessed by esterase staining. In most experiments...forming cell) were assayed using the double-layer agar technique basically as described by Hagan et al. (22). The culture medium was double-strength CMRL...trypticase soy broth, 20 µg/ml L-asparagine, and penicillin-streptomycin. In the bottom layer of 35 mm plastic Petri dishes was 1 ml of a 1:1 mixture of culture

  5. Diode-pumped passively mode-locked and passively stabilized Nd3+:BaY2F8 laser

    NASA Astrophysics Data System (ADS)

    Agnesi, Antonio; Guandalini, Annalisa; Tomaselli, Alessandra; Sani, Elisa; Toncelli, Alessandra; Tonelli, Mauro

    2004-07-01

    Continuous-wave mode locking (CW-ML) of a diode-pumped Nd3+:BaY2F8 laser is reported for the first time to our knowledge. Pulses as short as 4.8 ps were measured with a total output power of approximately 1 W at 1049 nm, corresponding to 3.4 W of absorbed power from the pump diode at 806 nm. A novel technique for passive stabilization of CW-ML has been demonstrated.

  6. Variation in behavioral engagement during an active learning activity leads to differential knowledge gains in college students.

    PubMed

    LaDage, Lara D; Tornello, Samantha L; Vallejera, Jennilyn M; Baker, Emily E; Yan, Yue; Chowdhury, Anik

    2018-03-01

    There are many pedagogical techniques used by educators in higher education; however, some techniques and activities have been shown to be more beneficial to student learning than others. Research has demonstrated that active learning and learning in which students cognitively engage with the material in a multitude of ways result in better understanding and retention. The aim of the present study was to determine which of three pedagogical techniques led to improvement in learning and retention in undergraduate college students. Subjects partook in one of three different types of pedagogical engagement: hands-on learning with a model, observing someone else manipulate the model, and traditional lecture-based presentation. Students were then asked to take an online quiz that tested their knowledge of the new material, both immediately after learning the material and 2 wk later. Students who engaged in direct manipulation of the model scored higher on the assessment immediately after learning the material compared with the other two groups. However, there were no differences among the three groups when assessed after a 2-wk retention interval. Thus active learning techniques that involve direct interaction with the material can lead to learning benefits; however, how these techniques benefit long-term retention of the information is equivocal.

  7. Atomic characterization of Si nanoclusters embedded in SiO2 by atom probe tomography

    PubMed Central

    2011-01-01

    Silicon nanoclusters are of prime interest for a new generation of optoelectronic and microelectronic components. The physical properties (light emission, carrier storage...) of systems using such nanoclusters depend strongly on their nanostructural characteristics. These characteristics (size, composition, distribution, and interface nature) have until now been obtained using conventional high-resolution analytical methods, such as high-resolution transmission electron microscopy, EFTEM, or EELS. In this article, a complementary technique, atom probe tomography, was used to study a multilayer (ML) system containing silicon clusters. This technique and its analysis give information on the structure at the atomic level and yield information complementary to that from other techniques. The different steps of such an analysis (sample preparation, atom probe analysis, and data treatment) are detailed, and an atomic-scale description of the Si nanoclusters/SiO2 ML is given. This system is composed of 3.8-nm-thick SiO layers and 4-nm-thick SiO2 layers annealed for 1 h at 900°C. PMID:21711666
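    The data-treatment step in atom probe tomography commonly involves identifying nanoclusters in the reconstructed 3-D atom positions. As an illustrative sketch (not the authors' actual pipeline), a simple friends-of-friends criterion links atoms closer than a cutoff distance `d_max`; the coordinates and cutoff below are made up:

```python
import math

def friends_of_friends(points, d_max):
    """Group 3-D points into clusters: points within d_max are linked."""
    n = len(points)
    labels = [-1] * n          # -1 means not yet assigned to a cluster
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = cluster
        stack = [i]
        while stack:           # flood-fill over the proximity graph
            j = stack.pop()
            for k in range(n):
                if labels[k] == -1 and math.dist(points[j], points[k]) <= d_max:
                    labels[k] = cluster
                    stack.append(k)
        cluster += 1
    return labels

# Two well-separated illustrative "Si clusters" (coordinates in nm):
atoms = [(0, 0, 0), (0.3, 0, 0), (0.2, 0.2, 0),
         (5, 5, 5), (5.2, 5.1, 5)]
labels = friends_of_friends(atoms, d_max=0.5)
```

    Real APT analyses use more refined criteria (e.g., minimum cluster size, erosion of matrix atoms), but the linking step above captures the core idea.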

  8. Effect of Active Learning Techniques on Students' Choice of Approach to Learning in Dentistry: A South African Case Study

    ERIC Educational Resources Information Center

    Khan, S.

    2011-01-01

    The purpose of this article is to report on empirical work, related to a techniques module, undertaken with the dental students of the University of the Western Cape, South Africa. I will relate how a range of different active learning techniques (tutorials; question papers and mock tests) assisted students to adopt a deep approach to learning in…

  9. Developing Modular and Adaptable Courseware Using TeachML.

    ERIC Educational Resources Information Center

    Wehner, Frank; Lorz, Alexander

    This paper presents the use of an XML grammar for two complementary projects--CHAMELEON (Cooperative Hypermedia Adaptive MultimEdia Learning Objects) and EIT (Enabling Informal Teamwork). Areas of applications are modular courseware documents and the collaborative authoring process of didactical units. A number of requirements for a suitable…

  10. Integration Framework for Heterogeneous Analysis Components: Building a Context Aware Virtual Analyst

    DTIC Science & Technology

    2014-11-01

    understands commands) modes are supported. By default, Julius comes with Japanese language support. English acoustic and language models are…

  11. Anti-hepatocarcinoma effects of berberine-nanostructured lipid carriers against human HepG2, Huh7, and EC9706 cancer cell lines

    NASA Astrophysics Data System (ADS)

    Meng, Xiang-Ping; Fan, Hua; Wang, Yi-fei; Wang, Zhi-ping; Chen, Tong-sheng

    2016-10-01

    Hepatocarcinoma and esophageal squamous cell carcinoma pose a serious threat to human life. Because advanced hepatocarcinoma and esophageal carcinoma are resistant to chemotherapy, there is strong interest in effective plant-derived natural remedies for treating these cancers. Berberine (Ber), an isoquinoline derivative alkaloid, has a wide range of pharmacological properties and is considered to have anti-hepatocarcinoma and anti-esophageal carcinoma effects. However, its low oral bioavailability restricts its wide application. In this report, Ber-loaded nanostructured lipid carriers (Ber-NLC) were prepared by hot melting followed by high-pressure homogenization. The in vitro anti-hepatocarcinoma and anti-esophageal carcinoma effects of Ber-NLC relative to bulk Ber were evaluated. The particle size and zeta potential of Ber-NLC were 189.3 ± 3.7 nm and -19.3 ± 1.4 mV, respectively. MTT assays showed that Ber-NLC effectively inhibited the proliferation of human HepG2, Huh7, and EC9706 cells, with corresponding IC50 values of 9.1 μg/ml, 4.4 μg/ml, and 6.3 μg/ml (versus 18.3 μg/ml, 6.5 μg/ml, and 12.4 μg/ml for bulk Ber solution), respectively. These results suggest that delivery of Ber via NLC is a promising approach for treating tumors.
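    The IC50 values reported from the MTT assay can be estimated from a dose-response series by log-linear interpolation between the two concentrations bracketing 50% viability. A minimal sketch with fabricated viability data (not the study's measurements):

```python
import math

def ic50(concs, viability):
    """Estimate IC50 by log-linear interpolation between the two
    concentrations (ascending, μg/ml) that bracket 50% viability."""
    pairs = list(zip(concs, viability))
    for (c1, v1), (c2, v2) in zip(pairs, pairs[1:]):
        if v1 >= 50.0 >= v2:
            # interpolate on log10(concentration)
            frac = (v1 - 50.0) / (v1 - v2)
            logc = math.log10(c1) + frac * (math.log10(c2) - math.log10(c1))
            return 10 ** logc
    raise ValueError("50% viability not bracketed by the tested range")

# Hypothetical dose-response readout (% viability vs. μg/ml):
concs = [1, 3, 10, 30]
viab  = [92, 71, 38, 12]
est = ic50(concs, viab)
```

    Published IC50 values are usually obtained by fitting a full sigmoidal (four-parameter logistic) curve; the interpolation above is only the simplest serviceable estimate.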

  12. In vitro antitumor efficacy of berberine: solid lipid nanoparticles against human HepG2, Huh7 and EC9706 cancer cell lines

    NASA Astrophysics Data System (ADS)

    Meng, Xiang-Ping; Wang, Xiao; Wang, Huai-ling; Chen, Tong-sheng; Wang, Yi-fei; Wang, Zhi-ping

    2016-03-01

    Hepatocarcinoma and esophageal squamous cell carcinoma pose a serious threat to human life. Because advanced hepatocarcinoma and esophageal carcinoma are resistant to chemotherapy, there is strong interest in effective plant-derived natural remedies for treating these cancers. Berberine (Ber), an isoquinoline derivative alkaloid, has a wide range of pharmacological properties and is considered to have anti-hepatocarcinoma and anti-esophageal carcinoma effects. However, its low oral bioavailability restricts its wide application. In this report, Ber-loaded solid lipid nanoparticles (Ber-SLN) were prepared by hot melting followed by high-pressure homogenization. The in vitro anti-hepatocarcinoma and anti-esophageal carcinoma effects of Ber-SLN relative to bulk Ber were evaluated. The particle size and zeta potential of Ber-SLN were 154.3 ± 4.1 nm and -11.7 ± 1.8 mV, respectively. MTT assays showed that Ber-SLN effectively inhibited the proliferation of human HepG2, Huh7, and EC9706 cells, with corresponding IC50 values of 10.6 μg/ml, 5.1 μg/ml, and 7.3 μg/ml (versus 18.3 μg/ml, 6.5 μg/ml, and 12.4 μg/ml for bulk Ber solution), respectively. These results suggest that delivery of Ber-SLN is a promising approach for treating tumors.

  13. Practical utility of on-line clearance and blood temperature monitors as noninvasive techniques to measure hemodialysis blood access flow.

    PubMed

    Fontseré, Néstor; Blasco, Miquel; Maduell, Francisco; Vera, Manel; Arias-Guillen, Marta; Herranz, Sandra; Blanco, Teresa; Barrufet, Marta; Burrel, Marta; Montaña, Javier; Real, Maria Isabel; Mestres, Gaspar; Riambau, Vicenç; Campistol, Josep M

    2011-01-01

    Access blood flow (Qa) measurements are recommended by the current guidelines as one of the most important components in vascular access maintenance programs. This study evaluates the efficiency of Qa measurement with on-line conductivity (OLC-Qa) and blood temperature monitoring (BTM-Qa) in comparison with the gold standard saline dilution method (SDM-Qa). Fifty long-term hemodialysis patients (42 arteriovenous fistulas/8 arteriovenous grafts) were studied. Bland-Altman analysis and Lin's coefficient (ρ(c)) were used to study accuracy and precision. Mean values were 1,021.7 ± 502.4 ml/min for SDM-Qa, 832.8 ± 574.3 ml/min for OLC-Qa (p = 0.007) and 1,094.9 ± 491.9 ml/min for BTM-Qa (p = NS). The biases and ρ(c) obtained were -188.8 ml/min (ρ(c) = 0.58) for OLC-Qa and 73.2 ml/min (ρ(c) = 0.89) for BTM-Qa. The limits of agreement (bias ± 1.96 SD) ranged from -1,119 to 741.3 ml/min (OLC-Qa) and from -350.6 to 497.2 ml/min (BTM-Qa). BTM-Qa and OLC-Qa are valid noninvasive and practical methods to estimate Qa, although BTM-Qa was more accurate and had better concordance with SDM-Qa than OLC-Qa. Copyright © 2010 S. Karger AG, Basel.
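    The Bland-Altman quantities the abstract reports (bias and limits of agreement, bias ± 1.96 SD) are straightforward to compute from paired measurements; the Qa values below are invented for illustration:

```python
import math

def bland_altman(method_a, method_b):
    """Return bias and 95% limits of agreement for paired measurements."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired access-flow values (ml/min): test method vs. SDM-Qa
olc = [900, 650, 1200, 800, 1000]
sdm = [1000, 800, 1300, 950, 1100]
bias, (lo, hi) = bland_altman(olc, sdm)
```

    A negative bias, as in the abstract's OLC-Qa result, means the test method reads lower than the reference on average; narrow limits of agreement indicate better precision.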

  14. Getting the Most Out of Dual-Listed Courses: Involving Undergraduate Students in Discussion Through Active Learning Techniques

    NASA Astrophysics Data System (ADS)

    Tasich, C. M.; Duncan, L. L.; Duncan, B. R.; Burkhardt, B. L.; Benneyworth, L. M.

    2015-12-01

    Dual-listed courses will persist in higher education because of resource limitations. The pedagogical differences between undergraduate and graduate STEM student groups and the underlying distinction in intellectual development levels between the two student groups complicate the inclusion of undergraduates in these courses. Active learning techniques are a possible remedy to the hardships undergraduate students experience in graduate-level courses. Through an analysis of both undergraduate and graduate student experiences while enrolled in a dual-listed course, we implemented a variety of learning techniques used to complement the learning of both student groups and enhance deep discussion. Here, we provide details concerning the implementation of four active learning techniques - role play, game, debate, and small group - that were used to help undergraduate students critically discuss primary literature. Student perceptions were gauged through an anonymous, end-of-course evaluation that contained basic questions comparing the course to other courses at the university and other salient aspects of the course. These were given as a Likert scale on which students rated a variety of statements (1 = strongly disagree, 3 = no opinion, and 5 = strongly agree). Undergraduates found active learning techniques to be preferable to traditional techniques with small-group discussions being rated the highest in both enjoyment and enhanced learning. The graduate student discussion leaders also found active learning techniques to improve discussion. In hindsight, students of all cultures may be better able to take advantage of such approaches and to critically read and discuss primary literature when written assignments are used to guide their reading. Applications of active learning techniques can not only address the gap between differing levels of students, but also serve as a complement to student engagement in any science course design.

  15. Social Learning Network Analysis Model to Identify Learning Patterns Using Ontology Clustering Techniques and Meaningful Learning

    ERIC Educational Resources Information Center

    Firdausiah Mansur, Andi Besse; Yusof, Norazah

    2013-01-01

    Clustering on Social Learning Networks has still not been explored widely, especially when the network focuses on an e-learning system. Conventional methods are not well suited to e-learning data. SNA requires content analysis, which involves human intervention and needs to be carried out manually. Some of the previous clustering techniques need…

  16. Learning Programming Technique through Visual Programming Application as Learning Media with Fuzzy Rating

    ERIC Educational Resources Information Center

    Buditjahjanto, I. G. P. Asto; Nurlaela, Luthfiyah; Ekohariadi; Riduwan, Mochamad

    2017-01-01

    Programming technique is one of the subjects at Vocational High Schools in Indonesia. This subject contains the theory and application of programming using Visual Programming. Students experience difficulties with purely textual learning. Therefore, it is necessary to develop media as a tool for delivering learning materials. The objectives of this…

  17. How Students Learn: Improving Teaching Techniques for Business Discipline Courses

    ERIC Educational Resources Information Center

    Cluskey, Bob; Elbeck, Matt; Hill, Kathy L.; Strupeck, Dave

    2011-01-01

    The focus of this paper is to familiarize business discipline faculty with cognitive psychology theories of how students learn together with teaching techniques to assist and improve student learning. Student learning can be defined as the outcome from the retrieval (free recall) of desired information. Student learning occurs in two processes.…

  18. Navigating the Active Learning Swamp: Creating an Inviting Environment for Learning.

    ERIC Educational Resources Information Center

    Johnson, Marie C.; Malinowski, Jon C.

    2001-01-01

    Reports on a survey of faculty members (n=29) asking them to define active learning, to rate how effectively different teaching techniques contribute to active learning, and to list the three teaching techniques they use most frequently. Concludes that active learning requires establishing an environment rather than employing a specific teaching…

  19. Development and Validation of Stability-Indicating Derivative Spectrophotometric Methods for Determination of Dronedarone Hydrochloride

    NASA Astrophysics Data System (ADS)

    Chadha, R.; Bali, A.

    2016-05-01

    Rapid, sensitive, cost-effective and reproducible stability-indicating derivative spectrophotometric methods have been developed for the estimation of dronedarone HCl employing peak-zero (P-0) and peak-peak (P-P) techniques, and their stability-indicating potential was assessed in forced-degraded solutions of the drug. The methods were validated with respect to linearity, accuracy, precision and robustness. Excellent linearity was observed over the concentration range 2-40 μg/ml (r² = 0.9986). LOD and LOQ values for the proposed methods ranged from 0.42-0.46 μg/ml and 1.21-1.27 μg/ml, respectively, and excellent recovery of the drug was obtained in the tablet samples (99.70 ± 0.84%).
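    LOD and LOQ values like those reported here are conventionally estimated (per ICH guidance) as 3.3σ/S and 10σ/S, where σ is the residual standard deviation of the calibration line and S its slope. A minimal sketch with fabricated calibration data, not the paper's measurements:

```python
def linfit(x, y):
    """Ordinary least squares: return slope, intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def lod_loq(x, y):
    """ICH-style LOD/LOQ from the residual SD of a calibration line."""
    slope, intercept = linfit(x, y)
    resid = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]
    sigma = (sum(r ** 2 for r in resid) / (len(x) - 2)) ** 0.5
    return 3.3 * sigma / slope, 10 * sigma / slope

# Hypothetical absorbance calibration over 2-40 μg/ml:
conc = [2, 5, 10, 20, 40]
absorb = [0.041, 0.102, 0.199, 0.405, 0.802]
lod, loq = lod_loq(conc, absorb)
```

    By construction the LOQ is always 10/3.3 ≈ 3 times the LOD, which matches the roughly threefold gap between the ranges quoted in the abstract.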

  20. Calibration standard of body tissue with magnetic nanocomposites for MRI and X-ray imaging

    NASA Astrophysics Data System (ADS)

    Rahn, Helene; Woodward, Robert; House, Michael; Engineer, Diana; Feindel, Kirk; Dutz, Silvio; Odenbach, Stefan; StPierre, Tim

    2016-05-01

    We present a first study of a long-term phantom for Magnetic Resonance Imaging (MRI) and X-ray imaging of biological tissues with magnetic nanocomposites (MNC), suitable for 3-dimensional and quantitative imaging of tissues after, e.g., magnetically assisted cancer treatments. We performed a cross-calibration of X-ray microcomputed tomography (XμCT) and MRI with a joint calibration standard for both imaging techniques. For this, we designed a phantom for MRI and X-ray computed tomography which represents biological tissue enriched with MNC. The developed phantoms consist of an elastomer with different concentrations of multi-core MNC; the matrix material is a synthetic thermoplastic gel, PermaGel (PG). The phantoms were analyzed with Nuclear Magnetic Resonance (NMR) relaxometry (Bruker minispec mq 60) at 1.4 T to obtain R2 transverse relaxation rates, with SQUID (Superconducting QUantum Interference Device) magnetometry and Inductively Coupled Plasma Mass Spectrometry (ICP-MS) to verify the magnetite concentration, and with XμCT and 9.4 T MRI to visualize the phantoms 3-dimensionally and to obtain T2 relaxation times. A sensitivity range is determined for the standard imaging techniques X-ray computed tomography (XCT) and MRI as well as for NMR. These novel phantoms show long-term stability over several months up to years. It was possible to suspend a particular MNC within the PG over a concentration range from 0 mg/ml to 6.914 mg/ml. The R2 relaxation rates from 1.4 T NMR relaxometry show a clear linear correlation (R² = 0.994) with MNC concentrations between 0 mg/ml and 4.5 mg/ml. The MRI experiments likewise showed a linear correlation of R2 relaxation with MNC concentration, but over a range of 0 mg/ml to 1.435 mg/ml. XμCT best displays moderate and high MNC concentrations; the sensitivity range for this particular XμCT apparatus extends from 0.569 mg/ml to 6.914 mg/ml. The cross-calibration defined a shared sensitivity range of XμCT, the 1.4 T NMR relaxometer, and 9.4 T MRI: from 0.569 mg/ml (limited by XμCT) to 1.435 mg/ml (limited by MRI). The presented phantoms were found to be suitable as a body tissue substitute for XCT imaging as well as an acceptable T2 phantom of biological tissue enriched with magnetic nanoparticles for MRI.

  1. The application of machine learning techniques in the clinical drug therapy.

    PubMed

    Meng, Huan-Yu; Jin, Wan-Lin; Yan, Cheng-Kai; Yang, Huan

    2018-05-25

    The development of a novel drug is an extremely complicated process that includes target identification, design and manufacture, and proper therapy with the novel drug, as well as drug dose selection, drug efficacy evaluation, and adverse drug reaction control. Given the limited resources, high costs, long duration, and low hit-to-lead ratio of conventional drug development, and aided by advances in pharmacogenetics and computer technology, machine learning techniques have come to assist novel drug development and have gradually received more attention from researchers. According to current research, machine learning techniques are widely applied in the discovery of new drugs and novel drug targets, decisions surrounding proper therapy and drug dose, and the prediction of drug efficacy and adverse drug reactions. In this article, we discuss the history, workflow, and advantages and disadvantages of machine learning techniques in the processes mentioned above. Although the advantages of machine learning techniques are fairly obvious, their application is currently limited. With further research, the application of machine learning techniques in drug development could become much more widespread and could potentially be one of the major methods used in drug development. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  2. [Application of 3D visualization technique in breast cancer surgery with immediate breast reconstruction using laparoscopically harvested pedicled latissimus dorsi muscle flap].

    PubMed

    Zhang, Pu-Sheng; Wang, Li-Kun; Luo, Yun-Feng; Shi, Fu-Jun; He, Lin-Yun; Zeng, Cheng-Bing; Zhang, Yu; Fang, Chi-Hua

    2017-08-20

    To study the value of the 3D visualization technique in breast-preserving surgery for breast cancer with immediate breast reconstruction using a laparoscopically harvested pedicled latissimus dorsi muscle flap. From January 2015 to May 2016, 30 patients with breast cancer underwent breast-preserving surgery with immediate breast reconstruction using a pedicled latissimus dorsi muscle flap. The CT data of the arterial and venous phases were collected preoperatively and imported into the self-developed medical image 3D visualization system for image segmentation and 3D reconstruction. The 3D models were imported into the simulation surgery platform for virtual surgery to prepare for the subsequent operations. The cosmetic outcomes of the patients were evaluated 6 months after surgery. Another 18 patients with breast cancer who underwent laparoscopic latissimus dorsi muscle breast reconstruction without the 3D visualization technique from January to December 2014 served as the control group. Data on operative time, intraoperative blood loss and postoperative appearance of the breasts were analyzed. The reconstructed 3D model clearly displayed the anatomical structures of the breast, armpit, latissimus dorsi muscle and vessels and their anatomical relationships in all 30 cases. Immediate breast reconstruction was performed successfully in all cases, with a median operation time of 226 min (range, 210 to 420 min) and a median blood loss of 95 mL (range, 73 to 132 mL). Evaluation of the appearance of the breast showed excellent results in 22 cases, good appearance in 6 cases and acceptable appearance in 2 cases. In the control group, the median operation time was 283 min (range, 256 to 313 min) and the median blood loss was 107 mL (range, 79 to 147 mL), with excellent appearance of the breasts in 10 cases, good appearance in 4 cases and acceptable appearance in 4 cases.
3D reconstruction technique can clearly display the morphology of the latissimus dorsi and the thoracic dorsal artery, allows calculation of the volume of the breast and the latissimus dorsi, and helps in defining the scope of resection of the latissimus dorsi to avoid injuries of the pedicled vessels. This technique also helps to shorten the operation time, reduce intraoperative bleeding, and improve the appearance of the reconstructed breast using pedicled latissimus dorsi muscle flap.

  3. The Effectiveness of Active and Traditional Teaching Techniques in the Orthopedic Assessment Laboratory

    ERIC Educational Resources Information Center

    Nottingham, Sara; Verscheure, Susan

    2010-01-01

    Active learning is a teaching methodology with a focus on student-centered learning that engages students in the educational process. This study implemented active learning techniques in an orthopedic assessment laboratory and examined the effects of these teaching techniques. Mean scores from written exams, practical exams, and final course evaluations…

  4. A Comparative Study of Serum Exosome Isolation Using Differential Ultracentrifugation and Three Commercial Reagents.

    PubMed

    Helwa, Inas; Cai, Jingwen; Drewry, Michelle D; Zimmerman, Arthur; Dinkins, Michael B; Khaled, Mariam Lotfy; Seremwe, Mutsa; Dismuke, W Michael; Bieberich, Erhard; Stamer, W Daniel; Hamrick, Mark W; Liu, Yutao

    2017-01-01

    Exosomes play a role in cell-to-cell signaling and serve as possible biomarkers. Isolating exosomes with reliable quality and substantial concentration is a major challenge. Our purpose is to compare the exosomes extracted by three different exosome isolation kits (miRCURY, ExoQuick, and Invitrogen Total Exosome Isolation Reagent) and differential ultracentrifugation (UC) using six different volumes of a non-cancerous human serum (5 ml, 1 ml, 500 μl, 250 μl, 100 μl, and 50 μl) and three different volumes (1 ml, 500 μl and 100 μl) of six individual commercial serum samples collected from human donors. The smaller starting volumes (100 μl and 50 μl) are used to mimic conditions of limited availability of heterogeneous biological samples. The isolated exosomes were characterized based upon size, quantity, zeta potential, CD63 and CD9 protein expression, and exosomal RNA (exRNA) quality and quantity using several complementary methods: nanoparticle tracking analysis (NTA) with ZetaView, western blot, transmission electron microscopy (TEM), the Agilent Bioanalyzer system, and droplet digital PCR (ddPCR). Our NTA results showed that all isolation techniques produced exosomes within the expected size range (40-150 nm). The three kits, though, produced a significantly higher yield (80-300 fold) of exosomes as compared to UC for all serum volumes, except 5 mL. We also found that exosomes isolated by the different techniques and serum volumes had similar zeta potentials to previous studies. Western blot analysis and TEM immunogold labelling confirmed the expression of two common exosomal protein markers, CD63 and CD9, in samples isolated by all techniques. All exosome isolations yielded high quality exRNA, containing mostly small RNA with a peak between 25 and 200 nucleotides in size. 
ddPCR results indicated that exosomes isolated from similar serum volumes but different isolation techniques rendered similar concentrations of two selected exRNA: hsa-miR-16 and hsa-miR-451. In summary, the three commercial exosome isolation kits are viable alternatives to UC, even when limited amounts of biological samples are available.

  5. Healthcare Learning Community and Student Retention

    ERIC Educational Resources Information Center

    Johnson, Sherryl W.

    2014-01-01

    Teaching, learning, and retention processes have evolved historically to include multifaceted techniques beyond the traditional lecture. This article presents related results of a study using a healthcare learning community in a southwest Georgia university. The value of novel techniques and tools in promoting student learning and retention…

  6. Automation of energy demand forecasting

    NASA Astrophysics Data System (ADS)

    Siddique, Sanzad

    Automation of energy demand forecasting saves time and effort by searching automatically for an appropriate model in a candidate model space, without manual intervention. This thesis introduces a search-based approach that improves the performance of the model-searching process for econometric models. Further improvements in forecasting accuracy are achieved by integrating nonlinear transformations within the models, and machine learning techniques capable of modeling such nonlinearity are introduced. Algorithms for learning domain knowledge from time series data using these machine learning methods are also presented. The search-based approach and the machine learning models are tested with synthetic data as well as with natural gas and electricity demand signals. Experimental results show that the model-searching technique is capable of finding an appropriate forecasting model, and that the machine learning techniques introduced in the thesis improve forecasting accuracy. The thesis also presents an analysis of how the machine learning techniques learn domain knowledge, which is then used to improve forecast accuracy.
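    Searching a candidate model space by validation error, as described above, can be sketched with a few simple forecasters (naive, mean, drift) scored on a holdout; the demand series and candidate set below are illustrative, not the thesis's actual models:

```python
def mse(pred, actual):
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)

# Candidate forecasting models: each maps a training series to
# forecasts over a horizon of h steps.
def naive(train, h):
    return [train[-1]] * h

def mean_model(train, h):
    return [sum(train) / len(train)] * h

def drift(train, h):
    slope = (train[-1] - train[0]) / (len(train) - 1)
    return [train[-1] + slope * (i + 1) for i in range(h)]

def search_models(series, holdout=4):
    """Pick the candidate with the lowest MSE on a validation holdout."""
    train, valid = series[:-holdout], series[-holdout:]
    candidates = {"naive": naive, "mean": mean_model, "drift": drift}
    scores = {name: mse(f(train, len(valid)), valid)
              for name, f in candidates.items()}
    return min(scores, key=scores.get), scores

# Synthetic, steadily growing demand signal: the drift model should win.
demand = [100 + 5 * t for t in range(20)]
best, scores = search_models(demand)
```

    A real system would enumerate a richer space (lag orders, nonlinear transformations, ML regressors) and use rolling-origin validation, but the selection loop has the same shape.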

  7. RobotReviewer: evaluation of a system for automatically assessing bias in clinical trials.

    PubMed

    Marshall, Iain J; Kuiper, Joël; Wallace, Byron C

    2016-01-01

    To develop and evaluate RobotReviewer, a machine learning (ML) system that automatically assesses bias in clinical trials. From a (PDF-formatted) trial report, the system should determine risks of bias for the domains defined by the Cochrane Risk of Bias (RoB) tool, and extract supporting text for these judgments. We algorithmically annotated 12,808 trial PDFs using data from the Cochrane Database of Systematic Reviews (CDSR). Trials were labeled as being at low or high/unclear risk of bias for each domain, and sentences were labeled as being informative or not. This dataset was used to train a multi-task ML model. We estimated the accuracy of ML judgments versus humans by comparing trials with two or more independent RoB assessments in the CDSR. Twenty blinded experienced reviewers rated the relevance of supporting text, comparing ML output with equivalent (human-extracted) text from the CDSR. By retrieving the top 3 candidate sentences per document (top3 recall), the best ML text was rated more relevant than text from the CDSR, but not significantly (60.4% ML text rated 'highly relevant' v 56.5% of text from reviews; difference +3.9%, [-3.2% to +10.9%]). Model RoB judgments were less accurate than those from published reviews, though the difference was <10% (overall accuracy 71.0% with ML v 78.3% with CDSR). Risk of bias assessment may be automated with reasonable accuracy. Automatically identified text supporting bias assessment is of equal quality to the manually identified text in the CDSR. This technology could substantially reduce reviewer workload and expedite evidence syntheses. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
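    The top-3 retrieval evaluation described above (top3 recall) amounts to checking whether a human-selected supporting sentence appears among the three highest-scored candidates; the scores and sentences below are fabricated for illustration:

```python
def top_k_recall(scored_sentences, reference, k=3):
    """scored_sentences: list of (score, sentence) pairs. Returns True if
    the reference sentence appears among the k highest-scored sentences."""
    top = sorted(scored_sentences, key=lambda p: p[0], reverse=True)[:k]
    return any(sent == reference for _, sent in top)

# Fabricated model scores for candidate supporting sentences in one trial PDF:
doc = [(0.91, "Allocation used sealed opaque envelopes."),
       (0.40, "Patients were recruited from two centres."),
       (0.75, "Outcome assessors were blinded to treatment."),
       (0.10, "Baseline characteristics were similar."),
       (0.66, "Randomisation was computer generated.")]
hit = top_k_recall(doc, "Outcome assessors were blinded to treatment.")
```

    Averaging this hit indicator over many documents gives the top-3 recall figure; the study's human ratings then judged the relevance of the retrieved text rather than exact matches.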

  8. Persistent effects of prior chronic exposure to corticosterone on reward-related learning and motivation in rodents.

    PubMed

    Olausson, Peter; Kiraly, Drew D; Gourley, Shannon L; Taylor, Jane R

    2013-02-01

    Repeated or prolonged exposure to stress has profound effects on a wide spectrum of behavioral and neurobiological processes and has been associated with the pathophysiology of depression. The multifaceted nature of this disorder includes despair, anhedonia, diminished motivation, and disrupted cognition, and it has been proposed that depression is also associated with reduced reward-motivated learning. We have previously reported that prior chronic corticosterone exposure in mice produces a lasting depressive-like state that can be reversed by chronic antidepressant treatment. In the present study, we tested the effects of prior chronic exposure to corticosterone (50 μg/ml) administered to rats or mice in drinking water for 14 days, followed by dose tapering over 9 days. The exposure to corticosterone produced lasting deficits in the acquisition of reward-related learning tested on a food-motivated instrumental task conducted 10-20 days after the last day of full-dose corticosterone exposure. Rats exposed to corticosterone also displayed reduced responding on a progressive ratio schedule of reinforcement when tested on day 21 after exposure. In mice, exposure to amitriptyline (200 mg/ml in drinking water) for 14 days produced the opposite effect, enhancing food-motivated instrumental acquisition and performance. Repeated treatment with amitriptyline (5 mg/kg, intraperitoneally; bid) subsequent to corticosterone exposure also prevented the corticosterone-induced deficits in rats. These results are consistent with aberrant reward-related learning and motivational processes in depressive states and provide new evidence that stress-induced neuroadaptive alterations in cortico-limbic-striatal brain circuits involved in learning and motivation may play a critical role in aspects of mood disorders.

  9. Evaluation of the learning curve for thulium laser enucleation of the prostate with the aid of a simulator tool but without tutoring: comparison of two surgeons with different levels of endoscopic experience.

    PubMed

    Saredi, Giovanni; Pirola, Giacomo Maria; Pacchetti, Andrea; Lovisolo, Jon Alexander; Borroni, Giacomo; Sembenini, Federico; Marconi, Alberto Mario

    2015-06-09

    The aim of this study was to determine the learning curve for thulium laser enucleation of the prostate (ThuLEP) for two surgeons with different levels of urological endoscopic experience. From June 2012 to August 2013, ThuLEP was performed on 100 patients in our institution. We present the results of a prospective evaluation in which we analyzed data related to the learning curves of the two surgeons. The prostatic adenoma volumes ranged from 30 to 130 mL (average 61.2 mL). Surgeons A and B performed 48 and 52 operations, respectively. Six months after surgery, all patients were evaluated with the International Prostate Symptom Score questionnaire, uroflowmetry, and a prostate-specific antigen test. Introduced in 2010, ThuLEP consists of blunt enucleation of the prostatic apex and lobes using the sheath of the resectoscope. This maneuver allows clearer visualization of the enucleation plane and precise identification of the prostatic capsule. These conditions permit total resection of the prostatic adenoma and coagulation of small penetrating vessels, thereby reducing the laser emission time. Most of the complications in this series were encountered during morcellation, which in some cases was performed under poor vision because of venous bleeding due to surgical perforation of the capsule during enucleation. Based on this analysis, we conclude that it is feasible for laser-naive urologists with endoscopic experience to learn to perform ThuLEP without tutoring. These findings still require validation in larger multicenter cohorts involving several surgeons. The main novelty during the learning process was the use of a simulator that faithfully reproduced all of the surgical steps in prostates of various shapes and volumes.

  10. Learning Physics-based Models in Hydrology under the Framework of Generative Adversarial Networks

    NASA Astrophysics Data System (ADS)

    Karpatne, A.; Kumar, V.

    2017-12-01

    Generative adversarial networks (GANs), that have been highly successful in a number of applications involving large volumes of labeled and unlabeled data such as computer vision, offer huge potential for modeling the dynamics of physical processes that have been traditionally studied using simulations of physics-based models. While conventional physics-based models use labeled samples of input/output variables for model calibration (estimating the right parametric forms of relationships between variables) or data assimilation (identifying the most likely sequence of system states in dynamical systems), there is a greater opportunity to explore the full power of machine learning (ML) methods (e.g., GANs) for studying physical processes currently suffering from large knowledge gaps, e.g., ground-water flow. However, success in this endeavor requires a principled way of combining the strengths of ML methods with physics-based numerical models that are founded on a wealth of scientific knowledge. This is especially important in scientific domains like hydrology where the number of data samples is small (relative to Internet-scale applications such as image recognition where machine learning methods have found great success), and the physical relationships are complex (high-dimensional) and non-stationary. We will present a series of methods for guiding the learning of GANs using physics-based models, e.g., by using the outputs of physics-based models as input data to the generator-learner framework, and by using physics-based models as generators trained using validation data in the adversarial learning framework. These methods are being developed under the broad paradigm of theory-guided data science that we are developing to integrate scientific knowledge with data science methods for accelerating scientific discovery.
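    The adversarial idea sketched above can be shown in miniature. The toy below is purely illustrative (invented 1-D data, not the authors' system): a linear "generator" stands in for a tunable physics-model surrogate, and a logistic "discriminator" supplies the adversarial signal that pulls generated samples toward the observed distribution.

```python
import numpy as np

# Toy 1-D GAN sketch (an assumption-laden illustration, not the authors' method).
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_real(n):
    # "Observations" of the physical process, here N(3, 0.5)
    return rng.normal(3.0, 0.5, n)

a, b = 1.0, 0.0   # generator G(z) = a*z + b, driven by noise z ~ N(0, 1)
w, c = 0.0, 0.0   # discriminator D(x) = sigmoid(w*x + c)

lr, n_batch = 0.05, 64
for _ in range(4000):
    z = rng.normal(0.0, 1.0, n_batch)
    fake = a * z + b
    real = sample_real(n_batch)

    # Discriminator gradient ascent on log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator gradient ascent on log D(fake) (non-saturating objective)
    d_fake = sigmoid(w * fake + c)
    g_signal = (1 - d_fake) * w          # d log D / d fake
    a += lr * np.mean(g_signal * z)
    b += lr * np.mean(g_signal)

# The generator offset should drift toward the observed mean (3.0)
print(round(b, 2))
```

    With a real physics-based model in place of the linear generator, the same adversarial signal would instead drive the model's free parameters, which is the spirit of the generator-learner framework described above.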

  11. Dissociation between learning and memory impairment and other sickness behaviours during simulated Mycoplasma infection in rats.

    PubMed

    Swanepoel, Tanya; Harvey, Brian H; Harden, Lois M; Laburn, Helen P; Mitchell, Duncan

    2011-11-01

    To investigate potential consequences for learning and memory, we have simulated the effects of Mycoplasma infection, in rats, by administering fibroblast-stimulating lipopeptide-1 (FSL-1), a pyrogenic moiety of Mycoplasma salivarium. We measured the effects on body temperature, cage activity, food intake, and on spatial learning and memory in a Morris Water Maze. Male Sprague-Dawley rats had radio transponders implanted to measure abdominal temperature and cage activity. After recovery, rats were assigned randomly to receive intraperitoneal (I.P.) injections of FSL-1 (500 or 1000 μg kg(-1) in 1 ml kg(-1) phosphate-buffered saline; PBS) or vehicle (PBS, 1 ml kg(-1)). Body mass and food intake were measured daily. Training in the Maze commenced 18 h after injections and continued daily for four days. Spatial memory was assessed on the fifth day. In other rats, we measured concentrations of brain pro-inflammatory cytokines, interleukin (IL)-1β and IL-6, at 3 and 18 h after injections. FSL-1 administration induced a dose-dependent fever (∼1°C) for two days, lethargy (∼78%) for four days, anorexia (∼65%) for three days and body mass stunting (∼6%) for at least four days. Eighteen hours after FSL-1 administration, when concentrations of IL-1β, but not those of IL-6, were elevated in both the hypothalamus and the hippocampus, and when rats were febrile, lethargic and anorexic, learning in the Maze was unaffected. There was also no memory impairment. Our results support emerging evidence that impaired learning and memory is not inevitable during simulated infection. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. The implementation of portfolio assessment by the educators on the mathematics learning process in senior high school

    NASA Astrophysics Data System (ADS)

    Lestariani, Ida; Sujadi, Imam; Pramudya, Ikrar

    2018-05-01

    Portfolio assessment can show the development of learners' abilities over a period through their work, so that the learning progress of each learner can be monitored. The purpose of this research was to describe the implementation of portfolio assessment in the mathematics learning process, with senior high school mathematics teachers of class X as the subjects, given the importance of such assessment for monitoring learners' progress. This research is a descriptive qualitative study. Data were collected by observation, interview, and documentation, and then validated using triangulation of these three techniques. Data were analysed by data reduction, data presentation, and drawing conclusions. The results showed that the steps taken by teachers in applying portfolio assessment focused on learning outcomes, which consisted of homework and daily tests. It can be concluded that the implementation of portfolio assessment took the form of scored learning outcomes; teachers had not yet implemented other portfolio assessment techniques, such as collections of student work.

  13. Percutaneous ethanol injection of large autonomous hyperfunctioning thyroid nodules.

    PubMed

    Tarantino, L; Giorgio, A; Mariniello, N; de Stefano, G; Perrotta, A; Aloisio, V; Tamasi, S; Forestieri, M C; Esposito, F; Esposito, F; Finizia, L; Voza, A

    2000-01-01

    To verify the effectiveness of percutaneous ethanol injection (PEI) in the treatment of large (>30-mL) hyperfunctioning thyroid nodules. Twelve patients (eight women, four men; age range, 26-76 years) with a large hyperfunctioning thyroid nodule (volume range, 33-90 mL; mean, 46.08 mL) underwent PEI treatment under ultrasonographic (US) guidance. US was used to calculate the volume of the nodules and to assess the diffusion of the ethanol in the lesions during the procedure. When incomplete necrosis of the nodule was depicted at scintigraphy performed 3 months after treatment, additional PEI sessions were performed. Four to 11 PEI sessions (mean, seven) were performed in each patient, with an injection of 3-14 mL of 99.8% ethanol per session (total amount of ethanol per patient, 30-108 mL; mean, 48.5 mL). At scintigraphy after treatment in all patients, recovery of extranodular uptake, absence of uptake in the nodule, and normalization of thyroid-stimulating hormone (thyrotropin) levels were observed. In all patients, US showed volume reductions of 30%-50% after 3 months and 40%-80% after 6-9 months. Side effects were self-limiting in all patients. During the 6-48-month follow-up, no recurrence was observed. PEI is an effective and safe technique for the treatment of large hyperfunctioning thyroid nodules.

  14. Spectro-photometric determinations of Mn, Fe and Cu in aluminum master alloys

    NASA Astrophysics Data System (ADS)

    Rehan; Naveed, A.; Shan, A.; Afzal, M.; Saleem, J.; Noshad, M. A.

    2016-08-01

    Highly reliable, fast, and cost-effective spectrophotometric methods have been developed for the determination of Mn, Fe, and Cu in aluminum master alloys, based on calibration curves prepared from laboratory standards. The calibration curves are designed to give maximum sensitivity and minimum instrumental error (Mn 1 mg/100 ml-2 mg/100 ml, Fe 0.01 mg/100 ml-0.2 mg/100 ml, and Cu 2 mg/100 ml-10 mg/100 ml). The developed spectrophotometric methods produce accurate results when analyzing Mn, Fe, and Cu in certified reference materials. In particular, these methods are suitable for all types of Al-Mn, Al-Fe, and Al-Cu master alloys (5%, 10%, 50%, etc.). Moreover, the sampling practices suggested herein use a reasonable amount of analytical sample, which truly represents the whole lot of a particular master alloy. A successive dilution technique was used to bring samples within the calibration curve range. Furthermore, the worked-out methods were also found suitable for the analysis of these elements in ordinary aluminum alloys. However, it was observed that Cu showed considerable interference with Fe; the latter may not be accurately measured in the presence of Cu greater than 0.01%.
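    The calibration-curve workflow described above can be sketched as follows. The absorbance readings and the unknown sample are hypothetical illustrations (only the Fe working range is taken from the abstract): fit a line to laboratory standards, then invert it to read an unknown.

```python
import numpy as np

# Hypothetical calibration data within the Fe working range quoted above
conc = np.array([0.01, 0.05, 0.10, 0.15, 0.20])             # mg/100 ml standards
absorbance = np.array([0.012, 0.058, 0.115, 0.170, 0.228])  # assumed readings

# In the Beer-Lambert region, absorbance is linear in concentration: A = m*c + b
m, b = np.polyfit(conc, absorbance, 1)

def concentration(a_measured):
    # Invert the calibration line to read an unknown sample
    return (a_measured - b) / m

print(round(concentration(0.115), 2))  # ~0.1 mg/100 ml for these numbers
```

    The successive-dilution step mentioned in the abstract simply ensures that a measured absorbance falls inside the fitted range before this inversion is applied.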

  15. Quantitative forecasting of PTSD from early trauma responses: a Machine Learning application.

    PubMed

    Galatzer-Levy, Isaac R; Karstoft, Karen-Inge; Statnikov, Alexander; Shalev, Arieh Y

    2014-12-01

    There is broad interest in predicting the clinical course of mental disorders from early, multimodal clinical and biological information. Current computational models, however, constitute a significant barrier to realizing this goal. The early identification of trauma survivors at risk of post-traumatic stress disorder (PTSD) is plausible given the disorder's salient onset and the abundance of putative biological and clinical risk indicators. This work evaluates the ability of Machine Learning (ML) forecasting approaches to identify and integrate a panel of unique predictive characteristics and determine their accuracy in forecasting non-remitting PTSD from information collected within 10 days of a traumatic event. Data on event characteristics, emergency department observations, and early symptoms were collected in 957 trauma survivors, who were followed for 15 months. An ML feature selection algorithm identified a set of predictors that rendered all others redundant. Support Vector Machines (SVMs) as well as other ML classification algorithms were used to evaluate the forecasting accuracy of i) ML selected features, ii) all available features without selection, and iii) Acute Stress Disorder (ASD) symptoms alone. SVM also compared the prediction of a) PTSD diagnostic status at 15 months to b) posterior probability of membership in an empirically derived non-remitting PTSD symptom trajectory. Results are expressed as mean Area Under the Receiver Operating Characteristic Curve (AUC). The feature selection algorithm identified 16 predictors, present in ≥ 95% cross-validation trials. The accuracy of predicting non-remitting PTSD from that set (AUC = .77) did not differ from predicting from all available information (AUC = .78). Predicting from ASD symptoms was not better than chance (AUC = .60). The prediction of PTSD status was less accurate than that of membership in a non-remitting trajectory (AUC = .71). ML methods may fill a critical gap in forecasting PTSD. 
The ability to identify and integrate unique risk indicators makes this a promising approach for developing algorithms that infer probabilistic risk of chronic posttraumatic stress psychopathology based on complex sources of biological, psychological, and social information. Copyright © 2014 Elsevier Ltd. All rights reserved.
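    The forecasting accuracy above is reported as AUC. A minimal sketch of how that metric is computed from classifier scores (synthetic scores, not the study's data): AUC is the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative case.

```python
import numpy as np

def auc(scores, labels):
    # AUC via the pairwise (Mann-Whitney) formulation; ties count half
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

labels  = np.array([1, 1, 1, 0, 0, 0])
perfect = np.array([0.9, 0.8, 0.7, 0.3, 0.2, 0.1])  # separates classes fully
chance  = np.full(6, 0.5)                           # uninformative scores

print(auc(perfect, labels))  # 1.0
print(auc(chance, labels))   # 0.5
```

    On this scale, the reported values (.77-.78 for ML-selected or all features, .60 for ASD symptoms alone) sit between the chance and perfect extremes.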

  16. The Effect of Higher Education Faculty Training in Improvisational Theatre Techniques on Student Learning and Perceptions of Engagement and Faculty Perceptions of Teaching and Learning

    ERIC Educational Resources Information Center

    Massie, DeAnna

    2017-01-01

    College instructors are content experts but are often ineffective at creating engaging and productive learning environments. This mixed methods study explored how improvisational theatre techniques affect college instructors' ability to increase student engagement and learning. Theoretical foundations included engagement, active learning, collaboration and…

  17. Problem based learning with scaffolding technique on geometry

    NASA Astrophysics Data System (ADS)

    Bayuningsih, A. S.; Usodo, B.; Subanti, S.

    2018-05-01

    Geometry, as one of the branches of mathematics, has an important role in the study of mathematics. This research aims to explore the effectiveness of Problem Based Learning (PBL) with a scaffolding technique, viewed from self-regulated learning, on students' mathematics learning achievement. The research data were obtained through a mathematics learning achievement test and a self-regulated learning (SRL) questionnaire. This research employed a quasi-experimental design. The subjects were junior high school students in Banyumas, Central Java. The results showed that the PBL model with a scaffolding technique is more effective in raising students' mathematics learning achievement than direct learning (DL), because in the PBL model students are more able to think actively and creatively. Students in the high SRL category had better mathematics learning achievement than those in the middle and low SRL categories, and those in the middle SRL category had better achievement than those in the low category. Thus, there is an interaction between the learning model and self-regulated learning in increasing mathematics learning achievement.

  18. Use of the learning conversation improves instructor confidence in life support training: An open randomised controlled cross-over trial comparing teaching feedback mechanisms.

    PubMed

    Baldwin, Lydia J L; Jones, Christopher M; Hulme, Jonathan; Owen, Andrew

    2015-11-01

    Feedback is vital for the effective delivery of skills-based education. We sought to compare the sandwich technique and learning conversation structured methods of feedback delivery in competency-based basic life support (BLS) training. Open randomised crossover study undertaken between October 2014 and March 2015 at the University of Birmingham, United Kingdom. Six-hundred and forty healthcare students undertaking a European Resuscitation Council (ERC) BLS course were enrolled, each of whom was randomised to receive teaching using either the sandwich technique or the learning conversation. Fifty-eight instructors were randomised to initially teach using either the learning conversation or sandwich technique, prior to crossing-over and teaching with the alternative technique after a pre-defined time period. Outcome measures included skill acquisition as measured by an end-of-course competency assessment, instructors' perception of teaching with each feedback technique and candidates' perception of the feedback they were provided with. Scores assigned to use of the learning conversation by instructors were significantly more favourable than for the sandwich technique across all but two assessed domains relating to instructor perception of the feedback technique, including all skills-based domains. No difference was seen in either assessment pass rates (80.9% sandwich technique vs. 77.2% learning conversation; OR 1.2, 95% CI 0.85-1.84; p=0.29) or any domain relating to candidates' perception of their teaching technique. This is the first direct comparison of two feedback techniques in clinical medical education using both quantitative and qualitative methodology. The learning conversation is preferred by instructors providing competency-based life support training and is perceived to favour skills acquisition. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  19. Student Support to WL/ML and WL/AA

    DTIC Science & Technology

    1993-01-01

    the program, so please be candid. I. How did you learn about the Student Support Program? Check one. 14 a) Advertisement (flyer, brochure, campus paper...scientific research 25 27 3 1 3. I was satisfied with the way I spent my time 41 13 2 0 4. I learned a lot 32 21 3 0 5. I feel I contributed to the research...are being sought out and tested daily with the hope that from deep within these crystalline fretworks a signal may appear leading to further study and

  20. Digital Preservation and Deep Infrastructure; Dublin Core Metadata Initiative Progress Report and Workplan for 2002; Video Gaming, Education and Digital Learning Technologies: Relevance and Opportunities; Digital Collections of Real World Objects; The MusArt Music-Retrieval System: An Overview; eML: Taking Mississippi Libraries into the 21st Century.

    ERIC Educational Resources Information Center

    Granger, Stewart; Dekkers, Makx; Weibel, Stuart L.; Kirriemuir, John; Lensch, Hendrik P. A.; Goesele, Michael; Seidel, Hans-Peter; Birmingham, William; Pardo, Bryan; Meek, Colin; Shifrin, Jonah; Goodvin, Renee; Lippy, Brooke

    2002-01-01

    One opinion piece and five articles in this issue discuss: digital preservation infrastructure; accomplishments and changes in the Dublin Core Metadata Initiative in 2001 and plans for 2002; video gaming and how it relates to digital libraries and learning technologies; overview of a music retrieval system; and the online version of the…

  1. Microbial fouling community analysis of the cooling water system of a nuclear test reactor with emphasis on sulphate reducing bacteria.

    PubMed

    Balamurugan, P; Joshi, M Hiren; Rao, T S

    2011-10-01

    Culture and molecular-based techniques were used to characterize bacterial diversity in the cooling water system of a fast breeder test reactor (FBTR). Techniques were selected for special emphasis on sulphate-reducing bacteria (SRB). Water samples from different locations of the FBTR cooling water system, in addition to biofilm scrapings from carbon steel coupons and a control SRB sample were characterized. Whole genome extraction of the water samples and SRB diversity by group specific primers were analysed using nested PCR and denaturing gradient gel electrophoresis (DGGE). The results of the bacterial assay in the cooling water showed that the total culturable bacteria (TCB) ranged from 10(3) to 10(5) cfu ml(-1); iron-reducing bacteria, 10(3) to 10(5) cfu ml(-1); iron oxidizing bacteria, 10(2) to 10(3) cfu ml(-1) and SRB, 2-29 cfu ml(-1). However, the counts of the various bacterial types in the biofilm sample were 2-3 orders of magnitude higher. SRB diversity by the nested PCR-DGGE approach showed the presence of groups 1, 5 and 6 in the FBTR cooling water system; however, groups 2, 3 and 4 were not detected. The study demonstrated that the PCR protocol influenced the results of the diversity analysis. The paper further discusses the microbiota of the cooling water system and its relevance in biofouling.

  2. Laparoscopic cholecystectomy under segmental thoracic spinal anaesthesia: a feasibility study.

    PubMed

    van Zundert, A A J; Stultiens, G; Jakimowicz, J J; Peek, D; van der Ham, W G J M; Korsten, H H M; Wildsmith, J A W

    2007-05-01

    Laparoscopic surgery is normally performed under general anaesthesia, but regional techniques have been found beneficial, usually in the management of patients with major medical problems. Encouraged by such experience, we performed a feasibility study of segmental spinal anaesthesia in healthy patients. Twenty ASA I or II patients undergoing elective laparoscopic cholecystectomy received a segmental (T10 injection) spinal anaesthetic using 1 ml of bupivacaine 5 mg ml-1 mixed with 0.5 ml of sufentanil 5 microg ml-1. Other drugs were only given (systemically) to manage patient anxiety, pain, nausea, hypotension, or pruritus during or after surgery. The patients were reviewed 3 days postoperatively by telephone. The spinal anaesthetic was performed easily in all patients, although one complained of paraesthesiae which responded to slight needle withdrawal. The block was effective for surgery in all 20 patients, six experiencing some discomfort which was readily treated with small doses of fentanyl, but none requiring conversion to general anaesthesia. Two patients required midazolam for anxiety and two ephedrine for hypotension. Recovery was uneventful and without sequelae, only three patients (all for surgical reasons) not being discharged home on the day of operation. This preliminary study has shown that segmental spinal anaesthesia can be used successfully and effectively for laparoscopic surgery in healthy patients. However, the use of an anaesthetic technique involving needle insertion into the vertebral canal above the level of termination of the spinal cord requires great caution and should be restricted in application until much larger numbers of patients have been studied.

  3. Artificial neural network assisted kinetic spectrophotometric technique for simultaneous determination of paracetamol and p-aminophenol in pharmaceutical samples using localized surface plasmon resonance band of silver nanoparticles

    NASA Astrophysics Data System (ADS)

    Khodaveisi, Javad; Dadfarnia, Shayessteh; Haji Shabani, Ali Mohammad; Rohani Moghadam, Masoud; Hormozi-Nezhad, Mohammad Reza

    2015-03-01

    A spectrophotometric analysis method based on the combination of principal component analysis (PCA) with a feed-forward neural network (FFNN) and a radial basis function network (RBFN) was proposed for the simultaneous determination of paracetamol (PAC) and p-aminophenol (PAP). This technique relies on the difference between the kinetic rates of the reactions between the analytes and silver nitrate as the oxidizing agent in the presence of polyvinylpyrrolidone (PVP) as the stabilizer. The reactions are monitored at the analytical wavelength of 420 nm of the localized surface plasmon resonance (LSPR) band of the formed silver nanoparticles (Ag-NPs). Under the optimized conditions, linear calibration graphs were obtained in the concentration range of 0.122-2.425 μg mL-1 for PAC and 0.021-5.245 μg mL-1 for PAP. The limits of detection in terms of the standard approach (LODSA) and the upper limit approach (LODULA) were calculated to be 0.027 and 0.032 μg mL-1 for PAC and 0.006 and 0.009 μg mL-1 for PAP. The important parameters were optimized for the artificial neural network (ANN) models. Statistical parameters indicated that the abilities of the two methods are comparable. The proposed method was successfully applied to the simultaneous determination of PAC and PAP in pharmaceutical preparations.
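    The PCA preprocessing step used before the neural networks can be sketched as follows, on synthetic "kinetic" curves with invented profiles (not the paper's measurements): mean-centre the recorded curves and project them onto the leading principal components, which then serve as compact network inputs.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)
# Two assumed latent kinetic profiles, mixed in unknown per-sample amounts
profiles = np.vstack([np.exp(-3 * t), 1 - np.exp(-5 * t)])   # (2, 100)
conc = rng.uniform(0.1, 2.0, (40, 2))                        # hidden amounts
X = conc @ profiles + rng.normal(0.0, 0.01, (40, 100))       # noisy curves

Xc = X - X.mean(axis=0)                      # mean-centre
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)              # variance per component
scores = Xc @ Vt[:2].T                       # network inputs: (40, 2)

print(scores.shape)
print(round(float(explained[:2].sum()), 4))  # first two PCs dominate
```

    Because the synthetic data are a two-component mixture, nearly all variance lands in the first two components; the networks then only need to map these scores to concentrations.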

  4. nmrML: A Community Supported Open Data Standard for the Description, Storage, and Exchange of NMR Data.

    PubMed

    Schober, Daniel; Jacob, Daniel; Wilson, Michael; Cruz, Joseph A; Marcu, Ana; Grant, Jason R; Moing, Annick; Deborde, Catherine; de Figueiredo, Luis F; Haug, Kenneth; Rocca-Serra, Philippe; Easton, John; Ebbels, Timothy M D; Hao, Jie; Ludwig, Christian; Günther, Ulrich L; Rosato, Antonio; Klein, Matthias S; Lewis, Ian A; Luchinat, Claudio; Jones, Andrew R; Grauslys, Arturas; Larralde, Martin; Yokochi, Masashi; Kobayashi, Naohiro; Porzel, Andrea; Griffin, Julian L; Viant, Mark R; Wishart, David S; Steinbeck, Christoph; Salek, Reza M; Neumann, Steffen

    2018-01-02

    NMR is a widely used analytical technique with a growing number of repositories available. As a result, demands for a vendor-agnostic, open data format for long-term archiving of NMR data have emerged with the aim to ease and encourage sharing, comparison, and reuse of NMR data. Here we present nmrML, an open XML-based exchange and storage format for NMR spectral data. The nmrML format is intended to be fully compatible with existing NMR data for chemical, biochemical, and metabolomics experiments. nmrML can capture raw NMR data, spectral data acquisition parameters, and where available spectral metadata, such as chemical structures associated with spectral assignments. The nmrML format is compatible with pure-compound NMR data for reference spectral libraries as well as NMR data from complex biomixtures, i.e., metabolomics experiments. To facilitate format conversions, we provide nmrML converters for Bruker, JEOL and Agilent/Varian vendor formats. In addition, easy-to-use Web-based spectral viewing, processing, and spectral assignment tools that read and write nmrML have been developed. Software libraries and Web services for data validation are available for tool developers and end-users. The nmrML format has already been adopted for capturing and disseminating NMR data for small molecules by several open source data processing tools and metabolomics reference spectral libraries, e.g., serving as storage format for the MetaboLights data repository. The nmrML open access data standard has been endorsed by the Metabolomics Standards Initiative (MSI), and we here encourage user participation and feedback to increase usability and make it a successful standard.

  5. Human immunodeficiency virus bDNA assay for pediatric cases.

    PubMed

    Avila, M M; Liberatore, D; Martínez Peralta, L; Biglione, M; Libonatti, O; Coll Cárdenas, P; Hodara, V L

    2000-01-01

    Techniques to quantify plasma HIV-1 RNA viral load (VL) are commercially available, and they are adequate for monitoring adults infected by HIV and treated with antiretroviral drugs. Little experience on HIV VL has been reported in pediatric cases. In Argentina, the evaluation of several assays for VL in pediatric cases is now being considered. To evaluate the pediatric protocol for the bDNA assay in HIV-infected children, 25 samples from HIV-infected children (according to CDC criteria for pediatric AIDS) were analyzed by using the Quantiplex HIV RNA 2.0 Assay (Chiron Corporation) following the manufacturer's recommendations in a protocol that uses 50 microliters of the patient's plasma (sensitivity: 10,000 copies/ml). When HIV-RNA was not detected, samples were run with the 1 ml standard bDNA protocol (sensitivity: 500 HIV-RNA c/ml). Nine samples belonged to infants under 12 months of age (group A) and 16 were over 12 months (group B). All infants under one year of age had high HIV-RNA copies in plasma. VL ranged from 30,800 to 2,560,000 RNA copies/ml (median = 362,000 c/ml) for group A and < 10,000 to 554,600 c/ml (median = < 10,000) for group B. Only 25% of children in group B had detectable HIV-RNA. By using the standard test of quantification, none of the patients had undetectable HIV-RNA, with values ranging between 950 and 226,200 c/ml for group B (median = 23,300 RNA c/ml). The suggested pediatric protocol could be useful in children under 12 months of age, but the 1 ml standard protocol must be used for older children. Samples with undetectable results from children under one year of age should be repeated using the standard protocol.

  6. Functional brain imaging in irritable bowel syndrome with rectal balloon-distention by using fMRI.

    PubMed

    Yuan, Yao-Zong; Tao, Ran-Jun; Xu, Bin; Sun, Jing; Chen, Ke-Min; Miao, Fei; Zhang, Zhong-Wei; Xu, Jia-Yu

    2003-06-01

    Irritable bowel syndrome (IBS) is characterized by abdominal pain and changes in stool habits. Visceral hypersensitivity is a key factor in the pathophysiology of IBS. The aim of this study was to examine the effect of rectal balloon-distention stimulus by blood oxygenation level-dependent functional magnetic resonance imaging (BOLD-fMRI) in visceral pain center and to compare the distribution, extent, and intensity of activated areas between IBS patients and normal controls. Twenty-six patients with IBS and eleven normal controls were tested for rectal sensation, and the subjective pain intensity at 90 ml and 120 ml rectal balloon-distention was reported by using Visual Analogue Scale. Then, BOLD-fMRI was performed at 30 ml, 60 ml, 90 ml, and 120 ml rectal balloon-distention in all subjects. Rectal distention stimulation increased the activity of anterior cingulate cortex (35/37), insular cortex (37/37), prefrontal cortex (37/37), and thalamus (35/37) in most cases. At 120 ml of rectal balloon-distention, the activation area and percentage change in MR signal intensity of the regions of interest (ROI) at IC, PFC, and THAL were significantly greater in patients with IBS than that in controls. Score of pain sensation at 90 ml and 120 ml rectal balloon-distention was significantly higher in patients with IBS than that in controls. Using fMRI, some patients with IBS can be detected having visceral hypersensitivity in response to painful rectal balloon-distention. fMRI is an objective brain imaging technique to measure the change in regional cerebral activation more precisely. In this study, IC and PFC of the IBS patients were the major loci of the CNS processing of visceral perception.

  7. Fountain Flow cytometry, a new technique for the rapid detection and enumeration of microorganisms in aqueous samples.

    PubMed

    Johnson, Paul E; Deromedi, Anthony J; Lebaron, Philippe; Catala, Philippe; Cash, Jennifer

    2006-12-01

    Pathogenic microorganisms are known to cause widespread waterborne disease worldwide. There is an urgent need to develop a technique for the real-time detection of pathogens in environmental samples at low concentrations, <10 microorganisms/ml, in large sample volumes, > or =100 ml. A novel method, Fountain Flow™ cytometry, for the rapid and sensitive detection of individual microorganisms in aqueous samples is presented. Each sample is first incubated with a fluorescent label and then passed as a stream in front of a laser, which excites the label. The fluorescence is detected with a CCD imager as the sample flows toward the imager along its optical axis. The feasibility of Fountain Flow cytometry (FFC) is demonstrated by the detection of Escherichia coli labeled with ChemChrome CV6 and SYBR Gold in buffer and natural river water. Detections of labeled E. coli were made in aqueous suspensions with an efficiency of 96% +/- 14% down to a concentration of approximately 200 bacteria/ml. FFC should apply to the detection of a wide range of pathogenic microorganisms including amoebae.

  8. The effect of the solution flow rate on the properties of zinc oxide (ZnO) thin films deposited by ultrasonic spray

    NASA Astrophysics Data System (ADS)

    Attaf, A.; Benkhetta, Y.; Saidi, H.; Bouhdjar, A.; Bendjedidi, H.; Nouadji, M.; Lehraki, N.

    2015-03-01

    In this work, we used a system based on the ultrasonic spray pyrolysis technique, with which we deposited thin films of zinc oxide (ZnO) while varying the solution flow rate from 50 ml/h to 150 ml/h and fixing the other parameters, such as the concentration of the solution, the deposition time, the substrate temperature, and the nozzle-substrate distance. To study the influence of the solution flow rate on the properties of the films produced, we used several characterization techniques: X-ray diffraction to determine the film structure, scanning electron microscopy (SEM) for the surface morphology, EDS spectroscopy for the chemical composition, and UV-Visible-NIR spectroscopy to determine the optical properties of the thin films. The experimental results show that the films have a hexagonal (wurtzite) structure, the average grain size varies from 20.11 to 32.45 nm, the transmittance of the films is about 80% in the visible range, and the band gap varies between 3.274 and 3.282 eV as the solution flow rate increases from 50 to 150 ml/h.

  9. Determination of dissolved aluminum in water samples

    USGS Publications Warehouse

    Afifi, A.A.

    1983-01-01

    A technique for the determination of a wide range of concentrations of dissolved aluminum (Al) in water has been modified and tested. In this technique, aluminum is complexed with 8-hydroxyquinoline at pH 8.3 to minimize interferences, then extracted with methyl isobutyl ketone (MIBK). The extract is analyzed colorimetrically at 395 nm. This technique is used to analyze two forms of monomeric Al: nonlabile (organic complexes) and labile (free Al, and Al sulfate, fluoride, and hydroxide complexes). A detection limit of 2 µg/L is possible with 25-ml samples and 10-ml extracts. The detection limit can be decreased by increasing the volume of the sample and (or) decreasing the volume of the methyl isobutyl ketone extract. The analytical uncertainty of this method is approximately ±5 percent. The standard addition technique provides a recovery test for this method and ensures precision in samples of low Al concentration. The average recovery of the added Al plus the amount originally present was 99 percent. Data obtained from analyses of filtered standard solutions indicated that Al is adsorbed on various types of filters; however, the relationship between Al concentration and adsorption remains linear. A test on standard solutions also indicated that Al is not adsorbed on the walls of nitric acid-washed polyethylene and polypropylene bottles. (USGS)
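    The standard-addition recovery test mentioned above can be sketched with hypothetical numbers (not the report's data): spike aliquots of the sample with known amounts of Al, fit signal against added concentration, and extrapolate the line back to its x-intercept to recover the original concentration.

```python
import numpy as np

added  = np.array([0.0, 10.0, 20.0, 30.0])        # ug/L Al added (assumed)
signal = np.array([0.050, 0.150, 0.250, 0.350])   # assumed instrument response

# Fit signal = m*added + b; the original concentration is the magnitude
# of the x-intercept, i.e. where the fitted line crosses zero signal
m, b = np.polyfit(added, signal, 1)
c_original = b / m

print(round(c_original, 3))  # 5.0 ug/L for these numbers
```

    Comparing the recovered spike amounts against the known additions gives the percentage recovery quoted in the abstract (99 percent on average).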

  10. Comparative Study on the Different Testing Techniques in Tree Classification for Detecting the Learning Motivation

    NASA Astrophysics Data System (ADS)

    Juliane, C.; Arman, A. A.; Sastramihardja, H. S.; Supriana, I.

    2017-03-01

    Having the motivation to learn is a requirement for a successful learning process and needs to be maintained properly. This study aims to measure learning motivation, especially in the process of electronic learning (e-learning). A data mining approach was chosen as the research method. For the testing process, a comparative study of the accuracy of different testing techniques was conducted, involving Cross Validation and Percentage Split. The best accuracy was generated by the J48 algorithm with the percentage split technique, reaching 92.19%. This study provides an overview of how to detect the presence of learning motivation in the context of e-learning. It is expected to be a useful contribution for education and to alert teachers to the students to whom they need to provide motivation.
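
The two evaluation protocols compared in the study can be sketched as follows. This is a minimal illustration on synthetic data, with a 1-nearest-neighbour model standing in for the J48 decision tree; the data, split fraction, and fold count are all assumptions, not the study's setup.

```python
import random

def nn1_predict(train, x):
    # 1-nearest-neighbour stand-in for the decision tree used in the study
    return min(train, key=lambda t: abs(t[0] - x))[1]

def percentage_split(data, train_frac=0.66):
    # Single holdout split: train on the first part, score on the rest
    random.shuffle(data)
    cut = int(len(data) * train_frac)
    train, test = data[:cut], data[cut:]
    return sum(nn1_predict(train, x) == y for x, y in test) / len(test)

def cross_validation(data, k=10):
    # k-fold cross validation: every point is tested exactly once
    random.shuffle(data)
    folds = [data[i::k] for i in range(k)]
    accs = []
    for i in range(k):
        test = folds[i]
        train = [p for j, f in enumerate(folds) if j != i for p in f]
        accs.append(sum(nn1_predict(train, x) == y for x, y in test) / len(test))
    return sum(accs) / k

random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(200)]
data = [(x + random.gauss(0, 0.3), int(x > 0)) for x in xs]
ps, cv = percentage_split(list(data)), cross_validation(list(data))
print(ps, cv)
```

Both protocols estimate the same generalization accuracy; cross validation simply averages over many holdouts instead of relying on one.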

  11. Effects of Enhancement Techniques on L2 Incidental Vocabulary Learning

    ERIC Educational Resources Information Center

    Duan, Shiping

    2018-01-01

    Enhancement Techniques are conducive to incidental vocabulary learning. This study investigated the effects of two types of enhancement techniques-multiple-choice glosses (MC) and L1 single-gloss (SG) on L2 incidental learning of new words and retention of them. A total of 89 university learners of English as a Foreign Language (EFL) were asked to…

  12. Power-assisted liposuction and the pull-through technique for the treatment of gynecomastia.

    PubMed

    Lista, Frank; Ahmad, Jamil

    2008-03-01

    Gynecomastia is a common condition affecting many adolescent and adult males. Surgical techniques utilizing a variety of incisions, excisions, suction-assisted lipectomy, ultrasound-assisted liposuction, power-assisted liposuction, or some combination of these methods have been used in the treatment of gynecomastia. This article describes the authors' method of using power-assisted liposuction and the pull-through technique to treat gynecomastia. This technique involves the use of power-assisted liposuction to remove fatty breast tissue. The pull-through technique is then performed utilizing several instruments to sever the subdermal attachments of fibroglandular breast tissue; this tissue is removed through the incision used for liposuction. Finally, power-assisted liposuction is performed again to contour the remaining breast tissue. A chart review of 99 consecutive patients (197 breasts) treated between January of 2003 and November of 2006 was performed. Ninety-six patients (192 breasts) were successfully treated using this technique. Power-assisted liposuction was performed in all cases, and the average volume aspirated per breast was 459 ml (range, 25 to 1400 ml). Using the pull-through technique, the authors were able to remove between 5 and 70 g of tissue per breast. Complications were minimal (1.0 percent of breasts), and no revisions were required. Since January of 2003, the authors have used this technique to successfully treat 97 percent of their gynecomastia patients. Combining power-assisted liposuction and the pull-through technique has proven to be a versatile approach for the treatment of gynecomastia and consistently produces a naturally contoured male breast while resulting in a single inconspicuous scar.

  13. Development of enantioselective chemiluminescence flow- and sequential-injection immunoassays for alpha-amino acids.

    PubMed

    Silvaieh, Hossein; Schmid, Martin G; Hofstetter, Oliver; Schurig, Volker; Gübitz, Gerald

    2002-01-01

    The development of an enantioselective flow-through chemiluminescence immunosensor for amino acids is described. The approach is based on a competitive assay using enantioselective antibodies. Two different instrumental approaches, a flow-injection (FIA) and a sequential-injection system (SIA), are used. Compared to the flow-injection technique, the sequential-injection mode showed better repeatability. Both systems use an immunoreactor consisting of a flow cell packed with immobilized haptens. The haptens (4-amino-L- or D-phenylalanine) are immobilized onto a hydroxysuccinimide-activated polymer (Affi-prep 10) via a tyramine spacer. Stereoselective antibodies, raised against 4-amino-L- or D-phenylalanine, are labeled with an acridinium ester. Stereoselective inhibition of binding of the acridinium-labeled antibodies to the immobilized hapten by amino acids takes place. Chiral recognition was observed not only for the hapten molecule but also for a series of different amino acids. One assay cycle including regeneration takes 6:30 min in the FIA mode and 4:40 min in the SIA mode. Using D-phenylalanine as a sample, the detection limit was found to be 6.13 pmol/ml (1.01 ng/ml) for the flow-injection immunoassay (FIIA) and 1.76 pmol/ml (0.29 ng/ml) for the sequential-injection immunoassay (SIIA), which can be lowered to 0.22 pmol/ml (0.036 ng/ml) or 0.064 pmol/ml (0.01 ng/ml) by using a stopped flow system. The intra-assay repeatability was found to be about 5% RSD and the inter-assay repeatability below 6% (within 3 days).

  14. Modified UTAUT2 Model for M-Learning among Students in India

    ERIC Educational Resources Information Center

    Bharati, V. Jayendra; Srikanth, R.

    2018-01-01

    Ubiquitous technologies have great potential to enrich students' academic experience. Students are more interested in using interactive learning techniques than traditional learning techniques alone. Several research studies on m-learning have been done in the USA and UK, concentrating on students undergoing a graduation degree, especially…

  15. Behavioral Functions of the Mesolimbic Dopaminergic System: an Affective Neuroethological Perspective

    PubMed Central

    Alcaro, Antonio; Huber, Robert; Panksepp, Jaak

    2008-01-01

    The mesolimbic dopaminergic (ML-DA) system has been recognized for its central role in motivated behaviors, various types of reward, and, more recently, in cognitive processes. Functional theories have emphasized DA's involvement in the orchestration of goal-directed behaviors, and in the promotion and reinforcement of learning. The affective neuroethological perspective presented here, views the ML-DA system in terms of its ability to activate an instinctual emotional appetitive state (SEEKING) evolved to induce organisms to search for all varieties of life-supporting stimuli and to avoid harms. A description of the anatomical framework in which the ML system is embedded is followed by the argument that the SEEKING disposition emerges through functional integration of ventral basal ganglia (BG) into thalamocortical activities. Filtering cortical and limbic input that spread into BG, DA transmission promotes the “release” of neural activity patterns that induce active SEEKING behaviors when expressed at the motor level. Reverberation of these patterns constitutes a neurodynamic process for the inclusion of cognitive and perceptual representations within the extended networks of the SEEKING urge. In this way, the SEEKING disposition influences attention, incentive salience, associative learning, and anticipatory predictions. In our view, the rewarding properties of drugs of abuse are, in part, caused by the activation of the SEEKING disposition, ranging from appetitive drive to persistent craving depending on the intensity of the affect. The implications of such a view for understanding addiction are considered, with particular emphasis on factors predisposing individuals to develop compulsive drug seeking behaviors. PMID:17905440

  16. Behavioral functions of the mesolimbic dopaminergic system: an affective neuroethological perspective.

    PubMed

    Alcaro, Antonio; Huber, Robert; Panksepp, Jaak

    2007-12-01

    The mesolimbic dopaminergic (ML-DA) system has been recognized for its central role in motivated behaviors, various types of reward, and, more recently, in cognitive processes. Functional theories have emphasized DA's involvement in the orchestration of goal-directed behaviors and in the promotion and reinforcement of learning. The affective neuroethological perspective presented here views the ML-DA system in terms of its ability to activate an instinctual emotional appetitive state (SEEKING) evolved to induce organisms to search for all varieties of life-supporting stimuli and to avoid harms. A description of the anatomical framework in which the ML system is embedded is followed by the argument that the SEEKING disposition emerges through functional integration of ventral basal ganglia (BG) into thalamocortical activities. Filtering cortical and limbic input that spreads into BG, DA transmission promotes the "release" of neural activity patterns that induce active SEEKING behaviors when expressed at the motor level. Reverberation of these patterns constitutes a neurodynamic process for the inclusion of cognitive and perceptual representations within the extended networks of the SEEKING urge. In this way, the SEEKING disposition influences attention, incentive salience, associative learning, and anticipatory predictions. In our view, the rewarding properties of drugs of abuse are, in part, caused by the activation of the SEEKING disposition, ranging from appetitive drive to persistent craving depending on the intensity of the affect. The implications of such a view for understanding addiction are considered, with particular emphasis on factors predisposing individuals to develop compulsive drug seeking behaviors.

  17. Application of machine learning techniques to lepton energy reconstruction in water Cherenkov detectors

    NASA Astrophysics Data System (ADS)

    Drakopoulou, E.; Cowan, G. A.; Needham, M. D.; Playfer, S.; Taani, M.

    2018-04-01

    The application of machine learning techniques to the reconstruction of lepton energies in water Cherenkov detectors is discussed and illustrated for TITUS, a proposed intermediate detector for the Hyper-Kamiokande experiment. It is found that applying these techniques leads to an improvement of more than 50% in the energy resolution for all lepton energies compared to an approach based upon lookup tables. Machine learning techniques can be easily applied to different detector configurations and the results are comparable to likelihood-function based techniques that are currently used.
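
The contrast between a lookup-table baseline and a trained regressor can be illustrated on toy data. Nothing below reflects the actual TITUS detector or its observables; a k-nearest-neighbour regressor merely stands in for the ML techniques used in the paper, and the one-dimensional "observable" is an assumption for illustration.

```python
import random

random.seed(1)
# Toy stand-in: true lepton energy E and a single detector observable
# q = 2*E + noise (nothing here models a real Cherenkov detector)
energies = [random.uniform(100, 1000) for _ in range(2000)]
data = [(E, 2.0 * E + random.gauss(0, 5.0)) for E in energies]
train, test = data[:1500], data[1500:]

# Lookup-table baseline: mean true energy per observable bin
bins = {}
for E, q in train:
    bins.setdefault(int(q // 100), []).append(E)
table = {b: sum(v) / len(v) for b, v in bins.items()}
global_mean = sum(E for E, _ in train) / len(train)

def lut_predict(q):
    return table.get(int(q // 100), global_mean)

def knn_predict(q, k=15):
    # k-nearest-neighbour regression as a minimal ML stand-in
    nearest = sorted(train, key=lambda t: abs(t[1] - q))[:k]
    return sum(E for E, _ in nearest) / k

def rms(pred):
    return (sum((pred(q) - E) ** 2 for E, q in test) / len(test)) ** 0.5

lut_rms, knn_rms = rms(lut_predict), rms(knn_predict)
print(lut_rms, knn_rms)
```

The regressor avoids the table's binning error, which is the qualitative effect behind the resolution improvement the paper reports.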

  18. Influence of irrigation and obturation techniques on artificial lateral root canal filling capacity.

    PubMed

    Silva, Emmanuel J; Herrera, Daniel R; Souza-Júnior, Eduardo J; Teixeira, João M

    2013-01-01

    The aim of this study was to evaluate the influence of two different irrigation protocols on artificial lateral root canal filling capacity using different obturation techniques. Sixty single-root human teeth were used. Two artificial lateral canals were created in the apical third. Root canals were instrumented up to a 45 K-file to the working length. Before each file, root canals were irrigated either with 2 mL of 2.5% NaOCl or 2% chlorhexidine gel with further irrigation with saline solution and 3 mL of 17% EDTA. Specimens were randomly divided into three groups according to the obturation technique: (1) lateral compaction technique; (2) Tagger hybrid technique; and (3) thermoplasticized technique using BeeFill 2 in 1. All groups used AH Plus as the root canal sealer. The specimens were decalcified and cleared in methyl salicylate. The total length of lateral canals was observed under X30 magnification with a stereomicroscope and measured on the buccal and lingual root surfaces using Leica IM50 software. The data were submitted to ANOVA and Tukey test (p < 0.05). Among the obturation techniques, BeeFill 2 in 1 showed deeper penetration into all lateral canals than the lateral compaction or Tagger hybrid techniques (p < 0.05). The lateral compaction group showed the worst results (p < 0.05). Irrigants did not affect the outcome; there was no difference between NaOCl and chlorhexidine when the same obturation technique was used (p > 0.05). Regardless of the irrigant used during endodontic procedures, the thermoplasticized techniques showed higher penetration behavior for filling artificial lateral canals than the lateral compaction technique.

  19. Model Based Document and Report Generation for Systems Engineering

    NASA Technical Reports Server (NTRS)

    Delp, Christopher; Lam, Doris; Fosse, Elyse; Lee, Cin-Young

    2013-01-01

    As Model Based Systems Engineering (MBSE) practices gain adoption, various approaches have been developed in order to simplify and automate the process of generating documents from models. Essentially, all of these techniques can be unified around the concept of producing different views of the model according to the needs of the intended audience. In this paper, we will describe a technique developed at JPL of applying SysML Viewpoints and Views to generate documents and reports. An architecture of model-based view and document generation will be presented, and the necessary extensions to SysML with associated rationale will be explained. A survey of examples will highlight a variety of views that can be generated, and will provide some insight into how collaboration and integration is enabled. We will also describe the basic architecture for the enterprise applications that support this approach.

  20. The analytical application and spectral investigation of DNA-CPB-emodin and sensitive determination of DNA by resonance Rayleigh light scattering technique

    NASA Astrophysics Data System (ADS)

    Bi, Shuyun; Wang, Yu; Wang, Tianjiao; Pang, Bo; Zhao, Tingting

    2013-01-01

    A new sensitive DNA probe containing cetylpyridinium bromide (CPB) and emodin (an effective component of Chinese herbal medicine) was developed using the resonance Rayleigh light scattering (RLS) technique. A novel assay was first developed to detect DNA at the nanogram level based on the ternary system of DNA-CPB-emodin. The RLS signal of DNA was enhanced remarkably in the presence of emodin-CPB, and the enhanced RLS intensity at 340.0 nm was in direct proportion to DNA concentration in the range of 0.01-2.72 μg mL-1 with a good linear relationship. The detection limit was 1.5 ng mL-1. Three synthetic DNA samples were measured with satisfactory results; the recovery was 97.6-107.3%.
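
A linear calibration of the kind reported (intensity proportional to concentration, with a detection limit) can be sketched as follows. The numbers are illustrative, not the paper's data, and the 3-sigma/slope rule used below is one common detection-limit convention, not necessarily the one used in the study.

```python
import statistics

# Illustrative calibration data (concentration in ug/mL vs signal
# intensity); these values are made up, not taken from the paper.
conc = [0.0, 0.25, 0.5, 1.0, 1.5, 2.0, 2.72]
intensity = [2.1, 52.0, 101.8, 203.5, 301.2, 402.9, 547.0]

# Ordinary least-squares fit of intensity = slope * conc + intercept
n = len(conc)
mx, my = sum(conc) / n, sum(intensity) / n
slope = sum((x - mx) * (y - my) for x, y in zip(conc, intensity)) / \
        sum((x - mx) ** 2 for x in conc)
intercept = my - slope * mx

# Detection limit via the common 3*sigma/slope rule, with sigma taken
# from replicate blank readings (again, illustrative numbers)
blanks = [2.1, 2.4, 1.9, 2.2, 2.0]
lod = 3 * statistics.stdev(blanks) / slope   # in ug/mL
print(round(slope, 1), round(lod * 1000, 2))  # slope, LOD in ng/mL
```

Samples are then quantified by inverting the fitted line: c = (I - intercept) / slope.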

  1. Model based document and report generation for systems engineering

    NASA Astrophysics Data System (ADS)

    Delp, C.; Lam, D.; Fosse, E.; Lee, Cin-Young

    As Model Based Systems Engineering (MBSE) practices gain adoption, various approaches have been developed in order to simplify and automate the process of generating documents from models. Essentially, all of these techniques can be unified around the concept of producing different views of the model according to the needs of the intended audience. In this paper, we will describe a technique developed at JPL of applying SysML Viewpoints and Views to generate documents and reports. An architecture of model-based view and document generation will be presented, and the necessary extensions to SysML with associated rationale will be explained. A survey of examples will highlight a variety of views that can be generated, and will provide some insight into how collaboration and integration is enabled. We will also describe the basic architecture for the enterprise applications that support this approach.

  2. Study on modification of the Misgav Ladach method for cesarean section.

    PubMed

    Li, M; Zou, L; Zhu, J

    2001-01-01

    From May to December 1999, 172 pregnant women scheduled for delivery by cesarean section were randomly assigned to a modification group (59 cases, modified Misgav Ladach technique), a Misgav Ladach group (57 cases, Misgav Ladach technique), or a Pfannenstiel group (56 cases, Pfannenstiel technique). The modified points included: transversely incising the fascia 2 to 3 cm, then dividing it bluntly; not opening or dissociating the visceral peritoneum; two-layer suturing of the low transverse uterine incision; and closing the skin by continuous suturing. Results showed the average delivery time in the modification group was (3.6 +/- 2.6) min versus (5.7 +/- 2.9) min in the Misgav Ladach group (P < 0.05). Mean operating time was (28.3 +/- 5.4) min in the modification group compared with (27.5 +/- 6.5) min in the Misgav Ladach group (P > 0.05). Average blood loss was (128 +/- 35) ml in the modification group compared with (212 +/- 147) ml in the Pfannenstiel group (P < 0.05). It was concluded that the modified Misgav Ladach technique not only preserved all advantages of the Misgav Ladach method but also offered additional advantages, such as faster delivery of the fetus, less tissue damage, and easier mastery for obstetricians.

  3. Metacognitive Strategies: A Foundation for Early Word Spelling and Reading in Kindergartners with SLI

    ERIC Educational Resources Information Center

    Schiff, Rachel; Nuri Ben-Shushan, Yohi; Ben-Artzi, Elisheva

    2017-01-01

    This study assessed the effect of metacognitive instruction on the spelling and word reading of Hebrew-speaking children with specific language impairment (SLI). Participants were 67 kindergarteners with SLI in a supported learning context. Children were classified into three spelling instruction groups: (a) metalinguistic instruction (ML), (b) ML…

  4. Model-Based Systems Engineering Pilot Program at NASA Langley

    NASA Technical Reports Server (NTRS)

    Vipavetz, Kevin G.; Murphy, Douglas G.; Infeld, Samatha I.

    2012-01-01

    NASA Langley Research Center conducted a pilot program to evaluate the benefits of using a Model-Based Systems Engineering (MBSE) approach during the early phase of the Materials International Space Station Experiment-X (MISSE-X) project. The goal of the pilot was to leverage MBSE tools and methods, including the Systems Modeling Language (SysML), to understand the net gain of utilizing this approach on a moderate size flight project. The System Requirements Review (SRR) success criteria were used to guide the work products desired from the pilot. This paper discusses the pilot project implementation, provides SysML model examples, identifies lessons learned, and describes plans for further use of MBSE on MISSE-X.

  5. Perspective: Machine learning potentials for atomistic simulations

    NASA Astrophysics Data System (ADS)

    Behler, Jörg

    2016-11-01

    Nowadays, computer simulations have become a standard tool in essentially all fields of chemistry, condensed matter physics, and materials science. In order to keep up with state-of-the-art experiments and the ever growing complexity of the investigated problems, there is a constantly increasing need for simulations of more realistic, i.e., larger, model systems with improved accuracy. In many cases, the availability of sufficiently efficient interatomic potentials providing reliable energies and forces has become a serious bottleneck for performing these simulations. To address this problem, currently a paradigm change is taking place in the development of interatomic potentials. Since the early days of computer simulations simplified potentials have been derived using physical approximations whenever the direct application of electronic structure methods has been too demanding. Recent advances in machine learning (ML) now offer an alternative approach for the representation of potential-energy surfaces by fitting large data sets from electronic structure calculations. In this perspective, the central ideas underlying these ML potentials, solved problems and remaining challenges are reviewed along with a discussion of their current applicability and limitations.

  6. Water quality of Danube Delta systems: ecological status and prediction using machine-learning algorithms.

    PubMed

    Stoica, C; Camejo, J; Banciu, A; Nita-Lazar, M; Paun, I; Cristofor, S; Pacheco, O R; Guevara, M

    2016-01-01

    Environmental issues have a worldwide impact on water bodies, including the Danube Delta, the largest European wetland. Implementation of the Water Framework Directive (2000/60/EC) operates toward solving environmental issues at the European and national level. As a consequence of these pressures, the water quality and the biocenosis structure were altered, especially the composition of the macroinvertebrate community, which is closely related to habitat and substrate heterogeneity. This study aims to assess the ecological status of the southern branch of the Danube Delta, Saint Gheorghe, using benthic fauna, and a computational method as an alternative for monitoring the water quality in real time. The analysis of spatial and temporal variability of unicriterial and multicriterial indices was used to assess the current status of the aquatic systems. In addition, the chemical status was characterized. Coliform bacteria and several chemical parameters were used to feed machine-learning (ML) algorithms to simulate a real-time classification method. Overall, the assessment of the water bodies indicated a moderate ecological status based on the biological quality elements, or a good ecological status based on the chemical and ML algorithm criteria.
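
The real-time classification idea can be sketched with synthetic "chemical parameter" vectors and a nearest-centroid classifier; the model, features, and class separation below are all assumptions for illustration, not the ones used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the study's inputs: four "chemical parameter"
# features per sample, labelled good (0) vs moderate (1) status.
def sample(n, shift):
    return rng.normal(loc=shift, scale=1.0, size=(n, 4))

X = np.vstack([sample(100, 0.0), sample(100, 2.0)])
y = np.array([0] * 100 + [1] * 100)

# Shuffle, then split into training and held-out sets
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]
Xtr, ytr, Xte, yte = X[:150], y[:150], X[150:], y[150:]

# Nearest-centroid classifier as a minimal real-time-friendly model
centroids = np.array([Xtr[ytr == c].mean(axis=0) for c in (0, 1)])
dists = ((Xte[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
pred = np.argmin(dists, axis=1)
acc = float((pred == yte).mean())
print(acc)
```

Once the centroids are fixed, classifying a new water sample costs only a couple of distance computations, which is what makes a real-time status readout feasible.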

  7. Choosing the Most Effective Pattern Classification Model under Learning-Time Constraint.

    PubMed

    Saito, Priscila T M; Nakamura, Rodrigo Y M; Amorim, Willian P; Papa, João P; de Rezende, Pedro J; Falcão, Alexandre X

    2015-01-01

    Nowadays, large datasets are common and demand faster and more effective pattern analysis techniques. However, methodologies to compare classifiers usually do not take into account the learning-time constraints required by applications. This work presents a methodology to compare classifiers with respect to their ability to learn from classification errors on a large learning set, within a given time limit. Faster techniques may acquire more training samples, but only when they are more effective will they achieve higher performance on unseen testing sets. We demonstrate this result using several techniques, multiple datasets, and typical learning-time limits required by applications.
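
The methodology, comparing learners under a fixed learning-time budget, can be sketched as follows. The budgeted running-mean "classifier" and the data are illustrative only; the paper's actual classifiers and datasets are much larger.

```python
import random
import time

random.seed(6)

# Toy 1-D data: feature drawn around the class label (0 or 1)
data = [(random.gauss(i % 2, 1.0), i % 2) for i in range(5000)]
train, test = data[:4000], data[4000:]

def budgeted_centroid_fit(train, budget_s=0.05, batch=200):
    # Consume training batches until the time budget is spent,
    # maintaining running class means (a deliberately simple learner).
    sums, counts = [0.0, 0.0], [0, 0]
    deadline = time.monotonic() + budget_s
    i = 0
    while i < len(train) and time.monotonic() < deadline:
        for x, y in train[i:i + batch]:
            sums[y] += x
            counts[y] += 1
        i += batch
    c0 = sums[0] / max(counts[0], 1)
    c1 = sums[1] / max(counts[1], 1)
    return lambda x: int(abs(x - c1) < abs(x - c0))

clf = budgeted_centroid_fit(train)
acc = sum(clf(x) == y for x, y in test) / len(test)
print(acc)
```

Under the paper's argument, a faster learner wins the budgeted comparison only if the extra samples it manages to consume actually translate into better held-out accuracy.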

  8. Predicting novel microRNA: a comprehensive comparison of machine learning approaches.

    PubMed

    Stegmayer, Georgina; Di Persia, Leandro E; Rubiolo, Mariano; Gerard, Matias; Pividori, Milton; Yones, Cristian; Bugnon, Leandro A; Rodriguez, Tadeo; Raad, Jonathan; Milone, Diego H

    2018-05-23

    The importance of microRNAs (miRNAs) is widely recognized in the community nowadays because these short segments of RNA can play several roles in almost all biological processes. The computational prediction of novel miRNAs involves training a classifier for identifying sequences having the highest chance of being precursors of miRNAs (pre-miRNAs). The big issue with this task is that well-known pre-miRNAs are usually few in comparison with the hundreds of thousands of candidate sequences in a genome, which results in high class imbalance. This imbalance has a strong influence on most standard classifiers, and if not properly addressed in the model and the experiments, not only performance reported can be completely unrealistic but also the classifier will not be able to work properly for pre-miRNA prediction. Besides, another important issue is that for most of the machine learning (ML) approaches already used (supervised methods), it is necessary to have both positive and negative examples. The selection of positive examples is straightforward (well-known pre-miRNAs). However, it is difficult to build a representative set of negative examples because they should be sequences with hairpin structure that do not contain a pre-miRNA. This review provides a comprehensive study and comparative assessment of methods from these two ML approaches for dealing with the prediction of novel pre-miRNAs: supervised and unsupervised training. We present and analyze the ML proposals that have appeared during the past 10 years in literature. They have been compared in several prediction tasks involving two model genomes and increasing imbalance levels. This work provides a review of existing ML approaches for pre-miRNA prediction and fair comparisons of the classifiers with same features and data sets, instead of just a revision of published software tools. 
The results and the discussion can help the community to select the most adequate bioinformatics approach according to the prediction task at hand. The comparative results obtained suggest that from low to mid-imbalance levels between classes, supervised methods can be the best. However, at very high imbalance levels, closer to real case scenarios, models including unsupervised and deep learning can provide better performance.
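
The class-imbalance point above can be illustrated with a class-weighted logistic regression on toy data; the features, imbalance ratio, and weighting scheme below are assumptions for illustration, not the genomic features or models used in the reviewed methods.

```python
import numpy as np

rng = np.random.default_rng(3)

# Imbalanced toy data: 1000 negative (non-pre-miRNA-like) examples for
# every 20 positives, each described by 3 made-up features.
Xneg = rng.normal(0.0, 1.0, (1000, 3))
Xpos = rng.normal(1.2, 1.0, (20, 3))
X = np.vstack([Xneg, Xpos])
y = np.array([0] * 1000 + [1] * 20)

def fit_logreg(X, y, w, steps=2000, lr=0.1):
    # Plain gradient ascent on the sample-weighted log-likelihood
    Xb = np.hstack([X, np.ones((len(X), 1))])
    beta = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ beta))
        beta += lr * (Xb.T @ (w * (y - p))) / w.sum()
    def predict(Xq):
        Xqb = np.hstack([Xq, np.ones((len(Xq), 1))])
        return 1.0 / (1.0 + np.exp(-(Xqb @ beta)))
    return predict

uniform = np.ones(len(y), dtype=float)
# "Balanced" weighting: each class contributes equally to the loss
balanced = np.where(y == 1, len(y) / (2.0 * (y == 1).sum()),
                    len(y) / (2.0 * (y == 0).sum()))

recalls = []
for w in (uniform, balanced):
    p = fit_logreg(X, y, w)(X)
    recalls.append(float(((p > 0.5) & (y == 1)).sum() / (y == 1).sum()))
print(recalls)
```

The unweighted fit sacrifices the rare positive class to overall accuracy; reweighting recovers minority recall, which is the effect the review warns must be handled before any reported performance is meaningful.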

  9. Learning and Tuning of Fuzzy Rules

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.

    1997-01-01

    In this chapter, we review some of the current techniques for learning and tuning fuzzy rules. For clarity, we refer to the process of generating rules from data as the learning problem and distinguish it from tuning an already existing set of fuzzy rules. For learning, we touch on unsupervised learning techniques such as fuzzy c-means, fuzzy decision tree systems, fuzzy genetic algorithms, and linear fuzzy rules generation methods. For tuning, we discuss Jang's ANFIS architecture, Berenji-Khedkar's GARIC architecture and its extensions in GARIC-Q. We show that the hybrid techniques capable of learning and tuning fuzzy rules, such as CART-ANFIS, RNN-FLCS, and GARIC-RB, are desirable in development of a number of future intelligent systems.
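
Of the learning techniques listed, fuzzy c-means is compact enough to sketch here: unlike hard clustering, every point receives a graded membership in each cluster. A minimal one-dimensional implementation on synthetic data (membership exponent m = 2, a conventional choice):

```python
import numpy as np

rng = np.random.default_rng(5)
# Two 1-D clusters; fuzzy c-means assigns each point a membership
# degree in every cluster rather than a single hard label.
X = np.concatenate([rng.normal(0.0, 0.4, 50), rng.normal(3.0, 0.4, 50)])

def fuzzy_cmeans(X, c=2, m=2.0, iters=50):
    U = rng.dirichlet(np.ones(c), size=len(X))   # membership matrix
    for _ in range(iters):
        Um = U ** m
        # Centers: membership-weighted means
        centers = (Um.T @ X) / Um.sum(axis=0)
        # Memberships: inverse-distance update u_ik ~ d_ik^(-2/(m-1))
        d = np.abs(X[:, None] - centers[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

centers, U = fuzzy_cmeans(X)
cs = np.sort(centers)
print(cs)
```

The recovered centers sit near the two generating means, and the rows of U give the soft memberships that a fuzzy rule base can consume directly.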

  10. Magnetic solid phase extraction coupled with desorption corona beam ionization-mass spectrometry for rapid analysis of antidepressants in human body fluids.

    PubMed

    Chen, Di; Zheng, Hao-Bo; Huang, Yun-Qing; Hu, Yu-Ning; Yu, Qiong-Wei; Yuan, Bi-Feng; Feng, Yu-Qi

    2015-08-21

    Ambient ionization techniques show good potential in rapid analysis of target compounds. However, a direct application of these ambient ionization techniques for the determination of analytes in a complex matrix is difficult due to the matrix interference and ion suppression. To resolve this problem, here we developed a strategy by coupling magnetic solid phase extraction (MSPE) with desorption corona beam ionization (DCBI)-mass spectrometry (MS). As a proof of concept, the pyrrole-coated Fe3O4 magnetic nanoparticles (Fe3O4@Ppy) were prepared and used for the extraction of antidepressants. After extraction, the Fe3O4@Ppy with trapped antidepressants was then directly subjected to DCBI-MS analysis with the aid of a homemade magnetic glass capillary. As the MSPE process is rapid and the direct DCBI-MS analysis does not need solvent desorption or chromatographic separation processes, the overall analysis can be completed within 3 min. The proposed MSPE-DCBI-MS method was then successfully used to determine antidepressants in human urine and plasma. The calibration curves were obtained in the range of 0.005-0.5 μg mL(-1) for urine and 0.02-1 μg mL(-1) for plasma with reasonable linearity (R(2) > 0.951). The limits of detection of three antidepressants were in the range of 0.2-1 ng mL(-1) for urine and 2-5 ng mL(-1) for plasma. Acceptable reproducibility for rapid analysis was achieved with relative standard deviations less than 19.1% and the relative recoveries were 85.2-118.7%. Taken together, the developed MSPE-DCBI-MS strategy offers a powerful capacity for rapid analysis of target compounds in a complex matrix, which would greatly expand the applications of ambient ionization techniques with plentiful magnetic sorbents.

  11. Technical note: adipose tissue blood flow in miniature swine (Sus scrofa) using the 133xenon washout technique.

    PubMed

    Moher, H E; Carey, G B

    2002-05-01

    The purpose of this study was to examine the 133xenon washout technique as a viable method for measuring adipose tissue blood flow (ATBF) in swine. Using a total of 32 female Yucatan miniature swine (Sus scrofa), the partition coefficient for 133xenon in swine subcutaneous adipose tissue was determined and ATBF was measured at rest and under various physiological conditions. These conditions included feeding, anesthesia, epinephrine infusion, and acute exercise. The effects of epinephrine and acute exercise were examined in both sedentary and exercise-trained swine. The partition coefficient value for 133xenon in swine subcutaneous adipose tissue was 9.23+/-0.26 mL/g (mean +/- SD, n = 10). The average value for resting ATBF in swine was 3.98+/-2.72 mL/(100 g tissue-min) (n = 19). Feeding increased ATBF by approximately fivefold over fasting values, and isoflurane anesthesia significantly decreased ATBF compared to rest (1.64+/-1.12 vs 3.92+/-4.22 mL/[100 g x min], n = 10). A 30-min epinephrine infusion (1 microg/[kg BW x min]) significantly increased ATBF from a resting value of 3.13+/-2.61 to 10.35+/-5.31 mL/(100 g x min) (n = 12). Epinephrine infusion into exercise-trained swine increased ATBF to the same extent as when infused into sedentary swine. An acute, 20-min bout of exercise significantly increased ATBF in swine, and the sedentary swine showed a larger increase in ATBF than their exercise-trained littermates relative to rest: 7.83 vs 2.98 mL/(100 g x min). In conclusion, the 133xenon washout technique appears to be a viable method for measuring ATBF in swine; our findings are comparable to swine ATBF values reported using the microsphere method and are consistent with values reported in animal and human studies.
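
The washout arithmetic behind the technique can be sketched: fit the monoexponential clearance rate k from log counts, then blood flow per 100 g is ATBF = lambda * k * 100, with lambda the tissue-blood partition coefficient in mL/g. The count data below are synthetic; only the partition coefficient (9.23 mL/g) comes from the record.

```python
import math

# Monoexponential washout: C(t) = C0 * exp(-k * t), k in 1/min.
part_coeff = 9.23                      # mL/g, from the swine study
times = [0, 10, 20, 30, 40, 50]        # min
counts = [1000.0, 957.9, 917.6, 879.0, 842.0, 806.5]  # illustrative

# Log-linear least-squares fit for the washout rate constant k
logs = [math.log(c) for c in counts]
n = len(times)
mt, ml = sum(times) / n, sum(logs) / n
k = -sum((t - mt) * (l - ml) for t, l in zip(times, logs)) / \
    sum((t - mt) ** 2 for t in times)

# Adipose tissue blood flow in mL per 100 g tissue per minute
atbf = part_coeff * k * 100
print(round(k, 4), round(atbf, 2))
```

With these illustrative counts the fit gives k of about 0.0043/min and an ATBF near 4 mL/(100 g x min), the same order as the resting value reported in the record.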

  12. Limited transfer of long-term motion perceptual learning with double training.

    PubMed

    Liang, Ju; Zhou, Yifeng; Fahle, Manfred; Liu, Zili

    2015-01-01

    A significant recent development in visual perceptual learning research is the double training technique. With this technique, Xiao, Zhang, Wang, Klein, Levi, and Yu (2008) have found complete transfer in tasks that had previously been shown to be stimulus specific. The significance of this finding is that this technique has since been successful in all tasks tested, including motion direction discrimination. Here, we investigated whether or not this technique could generalize to longer-term learning, using the method of constant stimuli. Our task was learning to discriminate motion directions of random dots. The second leg of training was contrast discrimination along a new average direction of the same moving dots. We found that, although exposure of moving dots along a new direction facilitated motion direction discrimination, this partial transfer was far from complete. We conclude that, although perceptual learning is transferrable under certain conditions, stimulus specificity also remains an inherent characteristic of motion perceptual learning.

  13. Recent developments in machine learning applications in landslide susceptibility mapping

    NASA Astrophysics Data System (ADS)

    Lun, Na Kai; Liew, Mohd Shahir; Matori, Abdul Nasir; Zawawi, Noor Amila Wan Abdullah

    2017-11-01

    While the prediction of the spatial distribution of potential landslide occurrences is of primary interest in landslide hazard mitigation, it remains a challenging task. To overcome the scarcity of complete, sufficiently detailed geomorphological attributes and environmental conditions, various machine-learning techniques are increasingly applied to map landslide susceptibility effectively for large regions. Nevertheless, few review papers are devoted to this field, particularly to the various domain-specific applications of machine learning techniques. The available literature often reports relatively good predictive performance; however, papers discussing the limitations of each approach are quite uncommon. The foremost aim of this paper is to narrow these gaps in the literature and to review up-to-date machine learning and ensemble learning techniques applied in landslide susceptibility mapping. It provides new readers an introductory understanding of the subject matter and researchers a contemporary review of machine learning advancements, alongside the future direction of these techniques in the landslide mitigation field.

  14. Advanced methods in NDE using machine learning approaches

    NASA Astrophysics Data System (ADS)

    Wunderlich, Christian; Tschöpe, Constanze; Duckhorn, Frank

    2018-04-01

    Machine learning (ML) methods and algorithms have recently been applied with great success in quality control and predictive maintenance. Their goal, to build new algorithms and/or leverage existing ones that learn from training data and give accurate predictions, or find patterns, particularly in new and unseen but similar data, fits perfectly to non-destructive evaluation (NDE). The advantages of ML in NDE are obvious in tasks such as pattern recognition in acoustic signals or automated processing of images from X-ray, ultrasonic or optical methods. Fraunhofer IKTS uses machine learning algorithms in acoustic signal analysis, and the approach has been applied to a variety of tasks in quality assessment. The principal approach is based on acoustic signal processing with a primary and a secondary analysis step, followed by a cognitive system that creates model data. Already in the secondary analysis step, unsupervised learning algorithms such as principal component analysis are used to simplify data structures. In the cognitive part of the software, further unsupervised and supervised learning algorithms are trained. The sensor signals from unknown samples can then be recognized and classified automatically by the previously trained algorithms. Recently, the IKTS team was able to transfer the software for signal processing and pattern recognition to a small printed circuit board (PCB): algorithms are still trained on an ordinary PC, but the trained algorithms run on a digital signal processor and an FPGA chip. The identical approach will be used for pattern recognition in image analysis of OCT pictures. Some key requirements have to be fulfilled, however: a sufficiently large set of training data, a high signal-to-noise ratio, and an optimized and exact fixation of components. The automated testing can subsequently be done by the machine. By integrating the test data of many components along the value chain, further optimization, including lifetime and durability prediction based on big data, becomes possible, even if components are used in different versions or configurations. This is the promise behind German Industry 4.0.
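    The two-stage pipeline described above — unsupervised simplification (e.g. PCA) followed by a supervised classifier — can be sketched on synthetic acoustic data. The signal model, class structure, and nearest-centroid classifier below are illustrative assumptions, not IKTS's actual software.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_signal(freq, n=256):
    """A noisy sinusoid standing in for an acoustic test signal."""
    t = np.linspace(0.0, 1.0, n)
    return np.sin(2.0 * np.pi * freq * t) + 0.3 * rng.standard_normal(n)

# Two synthetic "defect classes" with resonances at different frequencies.
X = np.array([make_signal(f) for f in [5.0] * 30 + [9.0] * 30])
y = np.array([0] * 30 + [1] * 30)

# Unsupervised step: PCA via SVD on mean-centred data simplifies the
# 256-sample signals down to a few principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:4].T                     # keep 4 principal components

# Supervised step: a nearest-centroid classifier in PCA space.
centroids = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

    Once the projection matrix and centroids are fixed, classifying a new signal is a matrix-vector product plus two distance computations — the kind of lightweight inference that fits on a DSP or FPGA, as described above.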

  15. Influence of acidic beverage (Coca-Cola) on pharmacokinetics of ibuprofen in healthy rabbits.

    PubMed

    Kondal, Amit; Garg, S K

    2003-11-01

    The study was aimed at determining the effect of Coca-Cola on the pharmacokinetics of ibuprofen in rabbits. In a cross-over study, ibuprofen was given orally in a dose of 56 mg/kg, prepared as a 0.5% suspension in carboxymethyl cellulose (CMC), and blood samples (1 ml) were drawn at different time intervals from 0-12 hr. After a washout period of 7 days, Coca-Cola in a dose of 5 ml/kg was administered along with ibuprofen (56 mg/kg) and blood samples were drawn from 0-12 hr. To these rabbits, 5 ml/kg Coca-Cola was administered once daily for another 7 days. On the 8th day, Coca-Cola (5 ml/kg) along with ibuprofen (56 mg/kg), prepared as a suspension, was administered and blood samples (1 ml each) were drawn at similar time intervals. Plasma was separated and assayed for ibuprofen by an HPLC technique, and various pharmacokinetic parameters were calculated. The Cmax and AUC0-∞ of ibuprofen were significantly increased after single and multiple doses of Coca-Cola, thereby indicating an increased extent of absorption of ibuprofen. The results warrant a reduction of ibuprofen daily dosage and dosing frequency when it is administered with Coca-Cola.
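    The parameters reported above, Cmax and AUC, are computed directly from each animal's concentration-time profile; AUC is conventionally obtained with the trapezoidal rule. The sketch below shows that calculation on made-up numbers — the times and concentrations are illustrative, not data from this study.

```python
# Sketch: Cmax, Tmax and AUC(0-t) from one concentration-time profile,
# using the linear trapezoidal rule.

def pk_summary(times_hr, conc_ug_ml):
    """Return (Cmax, Tmax, AUC0-t) for one concentration-time profile."""
    cmax = max(conc_ug_ml)
    tmax = times_hr[conc_ug_ml.index(cmax)]
    # Linear trapezoidal rule: sum of 0.5*(C_i + C_{i+1})*(t_{i+1} - t_i)
    auc = sum(
        0.5 * (c0 + c1) * (t1 - t0)
        for (t0, c0), (t1, c1) in zip(
            zip(times_hr, conc_ug_ml), zip(times_hr[1:], conc_ug_ml[1:])
        )
    )
    return cmax, tmax, auc

if __name__ == "__main__":
    t = [0, 0.5, 1, 2, 4, 8, 12]                  # hours
    c = [0.0, 20.0, 35.0, 30.0, 18.0, 6.0, 2.0]   # ug/ml (illustrative)
    cmax, tmax, auc = pk_summary(t, c)
    print(cmax, tmax, auc)  # Cmax=35.0 ug/ml at Tmax=1 h; AUC0-12 = 163.25
```

    Extrapolating from the last sampling time to infinity (for AUC0-∞) additionally requires the terminal elimination rate constant, which is fitted from the log-linear tail of the profile.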

  16. Use of EPR Spin-Trapping Techniques to Detect Radicals from Rat Lung Lavage Fluid Following Sulfur Mustard Vapor Exposure

    DTIC Science & Technology

    1993-05-13

    lung injury. Anesthetized rats were intratracheally intubated and exposed to 0.35 mg HD vapor over 50 min. Immediately, 1 hr, or 24 hr after exposure...lungs were lavaged with the spin trap alpha-phenyl-t-butyl nitrone (PBN; 0.35 mg/ml). Recovered lavage fluid was assayed by EPR spectroscopy for...in EtOH (100 Ml), or EtOH alone (control), was placed in a water-jacketed (37 °C) vapor generator and the rats were exposed for 50 min, by which time

  17. Estimation of calcium and magnesium in serum and urine by atomic absorption spectrophotometry

    PubMed Central

    Thin, Christian G.; Thomson, Patricia A.

    1967-01-01

    A method has been described for the estimation of calcium and magnesium in serum and urine using atomic absorption spectrophotometry. The precision and accuracy of the techniques have been determined and were found to be acceptable. The range of values for calcium and magnesium in the sera of normal adults was found to be: serum calcium (corrected to a specific gravity of 1·026) 8·38-10·08 mg. per 100 ml.; serum magnesium 1·83-2·43 mg. per 100 ml. PMID:5602562

  18. Practising What We Teach: Vocational Teachers Learn to Research through Applying Action Learning Techniques

    ERIC Educational Resources Information Center

    Lasky, Barbara; Tempone, Irene

    2004-01-01

    Action learning techniques are well suited to the teaching of organisation behaviour students because of their flexibility, inclusiveness, openness, and respect for individuals. They are no less useful as a tool for change for vocational teachers, learning, of necessity, to become researchers. Whereas traditional universities have always had a…

  19. The Journal of the Society for Accelerative Learning and Teaching, Volume 7.

    ERIC Educational Resources Information Center

    Journal of the Society for Accelerative Learning and Teaching, 1982

    1982-01-01

    The four 1982 numbers of the Journal of the Society for Accelerative Learning and Teaching (SALT) include articles on: a comparison of the Tomatis Method and Suggestopedia; the CLC system of accelerated learning; Suggestopedia in the English-as-a-second-language classroom; experiments with SALT techniques; accelerative learning techniques for…

  20. Attitudes of Nigerian Secondary School Teachers towards Media-Based Learning.

    ERIC Educational Resources Information Center

    Ekpo, C. M.

    This document presents the results of a study assessing the attitudes of secondary school teachers towards media-based learning. The study explores the knowledge of and exposure to media-based learning techniques of a cross-section of Nigerian secondary school teachers. Factors that affect the use of media-based learning techniques are sought. Media-based…
