A molecular fragment cheminformatics roadmap for mesoscopic simulation.
Truszkowski, Andreas; Daniel, Mirco; Kuhn, Hubert; Neumann, Stefan; Steinbeck, Christoph; Zielesny, Achim; Epple, Matthias
2014-12-01
Mesoscopic simulation studies the structure, dynamics and properties of large molecular ensembles with millions of atoms: its basic interacting units (beads) are no longer the nuclei and electrons of quantum chemical ab-initio calculations or the atom types of molecular mechanics but molecular fragments, molecules or even larger molecular entities. For its simulation setup and output, a mesoscopic simulation kernel software uses abstract matrix (array) representations for bead topology and connectivity. A purely kernel-based mesoscopic simulation task is therefore a tedious, time-consuming and error-prone venture that limits its practical use and application. A consistent cheminformatics approach tackles these problems and provides solutions for considerably enhanced accessibility. This study outlines a complete cheminformatics roadmap that frames a mesoscopic Molecular Fragment Dynamics (MFD) simulation kernel to allow its efficient use and practical application. The molecular fragment cheminformatics roadmap consists of four consecutive building blocks: an adequate fragment structure representation (1), defined operations on these fragment structures (2), the description of compartments with defined compositions and structural alignments (3), and the graphical setup and analysis of a whole simulation box (4). The basis of the cheminformatics approach (i.e. building block 1) is a SMILES-like line notation (denoted fSMILES) with connected molecular fragments to represent a molecular structure. The fSMILES notation and the concepts and methods for building blocks 2-4 are outlined with examples and practical usage scenarios. It is shown that the requirements of the roadmap may be partly covered by existing open-source cheminformatics software. Mesoscopic simulation techniques like MFD may be made considerably more accessible, and their practical scope broadened, by a consistent cheminformatics layer that tackles their setup subtleties and conceptual usage hurdles. Molecular Fragment Cheminformatics may be regarded as a crucial accelerator for propagating MFD and similar mesoscopic simulation techniques in the molecular sciences.
Zhang, Xinyuan; Zheng, Nan; Rosania, Gus R
2008-09-01
Cell-based molecular transport simulations are being developed to facilitate exploratory cheminformatic analysis of virtual libraries of small drug-like molecules. For this purpose, mathematical models of single cells are built from equations capturing the transport of small molecules across membranes. In turn, physicochemical properties of small molecules can be used as input to simulate intracellular drug distribution through time. Here, with mathematical equations and biological parameters adjusted so as to mimic a leukocyte in the blood, simulations were performed to analyze the steady-state, relative accumulation of small molecules in lysosomes, mitochondria, and cytosol of this target cell, in the presence of a homogeneous extracellular drug concentration. Similarly, with equations and parameters set to mimic an intestinal epithelial cell, simulations were also performed to analyze the steady-state, relative distribution and transcellular permeability in this non-target cell, in the presence of an apical-to-basolateral concentration gradient. With a test set of ninety-nine monobasic amines gathered from the scientific literature, simulation results helped analyze relationships between the chemical diversity of these molecules and their intracellular distributions.
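The heart of such a cell-based transport model is a small ODE system: passive diffusion of the neutral species across each membrane, with ionization set by compartment pH. The sketch below is a minimal two-compartment (cytosol/lysosome) version under assumed parameters (pKa, permeability, surface-to-volume ratios and volume ratio are illustrative, not taken from the paper); it reproduces the qualitative lysosomal ion-trapping of a monobasic amine.

```python
# Minimal pH-partitioning model: a monobasic amine diffusing from a fixed
# extracellular medium into cytosol and lysosome. All parameters assumed.
import numpy as np
from scipy.integrate import solve_ivp

pKa = 9.0                          # hypothetical monobasic amine
P = 1e-4                           # neutral-species permeability, cm/s (assumed)
SA_V_pm, SA_V_lys = 3e3, 6e4       # surface/volume ratios, 1/cm (assumed)
V_ratio = 0.01                     # lysosome/cytosol volume ratio (assumed)
pH_ext, pH_cyt, pH_lys = 7.4, 7.0, 5.0
C_ext = 1.0                        # fixed homogeneous extracellular conc.

def f_neutral(pH):
    # Henderson-Hasselbalch: fraction of a monobasic amine that is neutral
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

def rhs(t, y):
    C_cyt, C_lys = y
    # Only the neutral species crosses membranes in this model
    J_pm = P * SA_V_pm * (f_neutral(pH_ext) * C_ext - f_neutral(pH_cyt) * C_cyt)
    J_ly = P * SA_V_lys * (f_neutral(pH_cyt) * C_cyt - f_neutral(pH_lys) * C_lys)
    return [J_pm - V_ratio * J_ly, J_ly]

sol = solve_ivp(rhs, (0.0, 2e4), [0.0, 0.0], method="LSODA")
C_cyt, C_lys = sol.y[0, -1], sol.y[1, -1]
print(f"lysosome/cytosol accumulation ratio: {C_lys / C_cyt:.0f}")
```

At steady state the neutral-species concentrations equalize across each membrane, so the total-concentration ratio approaches f_neutral(pH_cyt)/f_neutral(pH_lys), roughly 100-fold here: the classic ion-trapping result for lysosomotropic amines.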
Bietz, Stefan; Inhester, Therese; Lauck, Florian; Sommer, Kai; von Behren, Mathias M; Fährrolfes, Rainer; Flachsenberg, Florian; Meyder, Agnes; Nittinger, Eva; Otto, Thomas; Hilbig, Matthias; Schomburg, Karen T; Volkamer, Andrea; Rarey, Matthias
2017-11-10
Nowadays, computational approaches are an integral part of life science research. Problems related to the interpretation of experimental results, data analysis, or visualization tasks benefit greatly from the achievements of the digital era. Simulation methods facilitate predictions of physicochemical properties and can assist in understanding macromolecular phenomena. Here, we will give an overview of the methods developed in our group that aim at supporting researchers from all life science areas. Based on state-of-the-art approaches from structural bioinformatics and cheminformatics, we provide software covering a wide range of research questions. Our all-in-one web service platform ProteinsPlus (http://proteins.plus) offers solutions for pocket and druggability prediction, hydrogen placement, structure quality assessment, ensemble generation, protein-protein interaction classification, and 2D-interaction visualization. Additionally, we provide a software package that contains tools targeting cheminformatics problems like file format conversion, molecule data set processing, SMARTS editing, fragment space enumeration, and ligand-based virtual screening. Furthermore, it also includes structural bioinformatics solutions for inverse screening, binding site alignment, and searching interaction patterns across structure libraries. The software package is available at http://software.zbh.uni-hamburg.de.
Estimation of hydrolysis rate constants for carbamates
Cheminformatics-based tools, such as the Chemical Transformation Simulator under development in EPA’s Office of Research and Development, are increasingly being used to evaluate chemicals for their potential to degrade in the environment or be transformed through metabolism...
Guha, Rajarshi; Wiggins, Gary D; Wild, David J; Baik, Mu-Hyun; Pierce, Marlon E; Fox, Geoffrey C
Some of the latest trends in cheminformatics, computation, and the world wide web are reviewed with predictions of how these are likely to impact the field of cheminformatics in the next five years. The vision and some of the work of the Chemical Informatics and Cyberinfrastructure Collaboratory at Indiana University are described, which we base around the core concepts of e-Science and cyberinfrastructure that have proven successful in other fields. Our chemical informatics cyberinfrastructure is realized by building a flexible, generic infrastructure for cheminformatics tools and databases, exporting "best of breed" methods as easily-accessible web APIs for cheminformaticians, scientists, and researchers in other disciplines, and hosting a unique chemical informatics education program aimed at scientists and cheminformatics practitioners in academia and industry.
Of possible cheminformatics futures.
Oprea, Tudor I; Taboureau, Olivier; Bologa, Cristian G
2012-01-01
For over a decade, cheminformatics has contributed to a wide array of scientific tasks from analytical chemistry and biochemistry to pharmacology and drug discovery; and although its contributions to decision making are recognized, the challenge is how it will contribute to faster development of novel, better products. Here we address the future of cheminformatics with a primary focus on innovation. Cheminformatics developers often need to choose between "mainstream" (i.e., accepted, expected) and novel, leading-edge tools, with an increasing trend toward open science. Possible futures for cheminformatics include the worst case scenario (lack of funding, no creative usage), as well as the best case scenario (complete integration, from systems biology to virtual physiology). As "-omics" technologies advance, and computer hardware improves, compounds will be profiled not only at the molecular level, but also in terms of genetic and clinical effects. Among potentially novel tools, we anticipate machine learning models based on free text processing, increased performance in environmental cheminformatics, significant decision-making support, as well as the emergence of robot scientists conducting automated drug discovery research. Furthermore, cheminformatics is anticipated to expand the frontiers of knowledge and evolve in an open-ended, extensible manner, allowing us to explore multiple research scenarios in order to avoid an epistemological "local information minimum trap".
Cheminformatics Applications and Physicochemical Property ...
The registration of new chemicals under the Toxic Substances Control Act (TSCA) and new pesticides under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) requires knowledge of the process science underlying the transport and transformation of organic chemicals in natural ecosystems. The purpose of this presentation is to demonstrate how cheminformatics, using chemical terms language in combination with the output of physicochemical property calculators, can be employed to encode this knowledge and make it available to the appropriate decision makers. The encoded process science is realized through the execution of reaction libraries in simulators such as EPA’s Chemical Transformation Simulator (CTS). In support of the CTS, reaction libraries have been, or are currently being, developed for a number of transformation processes including hydrolysis, abiotic reduction, photolysis and disinfection by-product formation. Examples of how the process science in the peer-reviewed literature is being encoded will be presented.
Potta, Thrimoorthy; Zhen, Zhuo; Grandhi, Taraka Sai Pavan; Christensen, Matthew D.; Ramos, James; Breneman, Curt M.; Rege, Kaushal
2014-01-01
We describe the combinatorial synthesis and cheminformatics modeling of aminoglycoside antibiotics-derived polymers for transgene delivery and expression. Fifty-six polymers were synthesized by polymerizing aminoglycosides with diglycidyl ether cross-linkers. Parallel screening resulted in the identification of several lead polymers that produced high transgene expression levels in cells. The role of polymer physicochemical properties in determining efficacy of transgene expression was investigated using Quantitative Structure-Activity Relationship (QSAR) cheminformatics models based on Support Vector Regression (SVR) and ‘building block’ polymer structures. The QSAR model exhibited high predictive ability, and investigation of descriptors in the model, using molecular visualization and correlation plots, indicated that physicochemical attributes of both the aminoglycosides and the diglycidyl ethers facilitated transgene expression. This work synergistically combines combinatorial synthesis and parallel screening with cheminformatics-based QSAR models for the discovery and physicochemical elucidation of effective antibiotics-derived polymers for transgene delivery in medicine and biotechnology.
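As a hedged sketch of the modeling step only: an SVR-based QSAR in scikit-learn, with a random placeholder descriptor matrix standing in for the paper's 'building block' descriptors and measured transgene-expression values.

```python
# Minimal sketch of the SVR-based QSAR step: descriptor matrix X
# (rows = polymers) against a measured response y. Random placeholders stand
# in for the paper's descriptors and transgene-expression data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(56, 20))       # 56 polymers x 20 descriptors (placeholder)
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.2, size=56)  # synthetic y

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```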
Basith, Shaherin; Cui, Minghua; Macalino, Stephani J. Y.; Park, Jongmi; Clavio, Nina A. B.; Kang, Soosung; Choi, Sun
2018-01-01
The primary goal of rational drug discovery is the identification of selective ligands which act on single or multiple drug targets to achieve the desired clinical outcome through the exploration of total chemical space. To identify such desired compounds, computational approaches are necessary for predicting their drug-like properties. G Protein-Coupled Receptors (GPCRs) represent one of the largest and most important integral membrane protein families. These receptors serve as increasingly attractive drug targets due to their relevance in the treatment of various diseases, such as inflammatory disorders, metabolic imbalances, cardiac disorders, cancer, monogenic disorders, etc. In the last decade, a multitude of three-dimensional (3D) structures was solved for diverse GPCRs, a period therefore referred to as the “golden age for GPCR structural biology.” Moreover, the accumulation of data about the chemical properties of GPCR ligands has garnered much interest toward the exploration of GPCR chemical space. Due to the steady increase in the structural, ligand, and functional data of GPCRs, several cheminformatics approaches have been implemented in the GPCR drug discovery pipeline. In this review, we mainly focus on the cheminformatics-based paradigms in GPCR drug discovery. We provide a comprehensive view of the ligand- and structure-based cheminformatics approaches, best illustrated via GPCR case studies. Furthermore, an appropriate combination of ligand-based knowledge with structure-based approaches, i.e., an integrated approach, which is emerging as a promising strategy for cheminformatics-based GPCR drug design, is also discussed.
Cheminformatics in Drug Discovery, an Industrial Perspective.
Chen, Hongming; Kogej, Thierry; Engkvist, Ola
2018-05-18
Cheminformatics has established itself as a core discipline within large-scale drug discovery operations. It would be impossible to handle the amount of data generated today in a small molecule drug discovery project without persons skilled in cheminformatics. In addition, due to the increased emphasis on "Big Data", machine learning and artificial intelligence, not only in society in general but also in drug discovery, it is expected that the cheminformatics field will be even more important in the future. Traditional areas like virtual screening, library design and high-throughput screening analysis are highlighted in this review. Applying machine learning in drug discovery is an area that has become very important. Applications of machine learning in early drug discovery have been extended from predicting ADME properties and target activity to tasks like de novo molecular design and prediction of chemical reactions.
Early phase drug discovery: cheminformatics and computational techniques in identifying lead series.
Duffy, Bryan C; Zhu, Lei; Decornez, Hélène; Kitchen, Douglas B
2012-09-15
Early drug discovery processes rely on hit finding procedures followed by extensive experimental confirmation in order to select high priority hit series which then undergo further scrutiny in hit-to-lead studies. The experimental cost and the risk associated with poor selection of lead series can be greatly reduced by the use of many different computational and cheminformatic techniques to sort and prioritize compounds. We describe the steps in typical hit identification and hit-to-lead programs and then describe how cheminformatic analysis assists this process. In particular, scaffold analysis, clustering and property calculations assist in the design of high-throughput screening libraries, the early analysis of hits and then organizing compounds into series for their progression from hits to leads. Additionally, these computational tools can be used in virtual screening to design hit-finding libraries and as procedures to help with early SAR exploration.
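One of the cheminformatic steps named here, organizing hits into series by clustering, can be sketched with RDKit; the compounds below are toy examples and the 0.6 Tanimoto-distance cutoff is an arbitrary choice.

```python
# Toy illustration of organizing hits into series: Morgan fingerprints plus
# Butina clustering in RDKit. Compounds and cutoff are for demonstration only.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from rdkit.ML.Cluster import Butina

smiles = ["CCO", "CCN", "c1ccccc1O", "c1ccccc1N", "CCCC", "CCCCC"]
mols = [Chem.MolFromSmiles(s) for s in smiles]
fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048) for m in mols]

# Butina expects the flattened lower triangle of the distance matrix.
dists = []
for i in range(1, len(fps)):
    sims = DataStructs.BulkTanimotoSimilarity(fps[i], fps[:i])
    dists.extend(1.0 - s for s in sims)

clusters = Butina.ClusterData(dists, len(fps), 0.6, isDistData=True)
for idx, cluster in enumerate(clusters):
    print("series %d:" % idx, [smiles[j] for j in cluster])
```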
Cheminformatics approaches and structure-based rules are being used to evaluate and explore the ToxCast chemical landscape and associated high-throughput screening (HTS) data. We have shown that the library provides comprehensive coverage of the knowledge domains and target inven...
Fragment informatics and computational fragment-based drug design: an overview and update.
Sheng, Chunquan; Zhang, Wannian
2013-05-01
Fragment-based drug design (FBDD) is a promising approach for the discovery and optimization of lead compounds. Despite its successes, FBDD also faces internal limitations and challenges. FBDD requires a high-quality target protein and good fragment solubility. Biophysical techniques for fragment screening necessitate expensive detection equipment, and the strategies for evolving fragment hits to leads remain to be improved. Nevertheless, FBDD is necessary for investigating larger chemical space and can be applied to challenging biological targets. In this scenario, cheminformatics and computational chemistry can be used as alternative approaches that significantly improve the efficiency and success rate of lead discovery and optimization. Cheminformatics and computational tools assist FBDD in a very flexible manner. Computational FBDD can be used independently or in parallel with experimental FBDD to efficiently generate and optimize leads. It can also be integrated into each step of experimental FBDD, playing a synergistic role that maximizes performance. This review provides a critical analysis of the complementarity between computational and experimental FBDD and highlights recent advances in new algorithms and successful examples of their applications. In particular, fragment-based cheminformatics tools, high-throughput fragment docking, and fragment-based de novo drug design form the focus of this review. We also discuss the advantages and limitations of different methods and the trends in new developments that should inspire future research.
Metabolomics and Cheminformatics Analysis of Antifungal Function of Plant Metabolites
Cuperlovic-Culf, Miroslava; Rajagopalan, NandhaKishore; Tulpan, Dan; Loewen, Michele C.
2016-01-01
Fusarium head blight (FHB), primarily caused by Fusarium graminearum, is a devastating disease of wheat. Partial resistance to FHB of several wheat cultivars includes specific metabolic responses to inoculation. Previously published studies have determined major metabolic changes induced by pathogens in resistant and susceptible plants. The functionality of the majority of these metabolites in resistance remains unknown. In this work we have compiled all metabolites determined as selectively accumulated following FHB inoculation in resistant plants. Characteristics, as well as possible functions and targets of these metabolites, are investigated using cheminformatics approaches, with a focus on the likelihood of these metabolites acting as drug-like molecules against fungal pathogens. Results of computational analyses of the binding properties of several representative metabolites to homology models of fungal proteins are presented. Theoretical analysis highlights the possibility of strong inhibitory activity of several metabolites against some major proteins in Fusarium graminearum, such as carbonic anhydrases and cytochrome P450s. The activity of several of these compounds has been experimentally confirmed in fungal growth inhibition assays. Analysis of the antifungal properties of plant metabolites can lead to the development of more resistant wheat varieties while showing a novel application of cheminformatics approaches in the analysis of plant/pathogen interactions.
Cheminformatics and Computational Chemistry: A Powerful ...
The registration of new chemicals under the Toxic Substances Control Act (TSCA) and new pesticides under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) requires knowledge of the process science underlying the transformation of organic chemicals in natural ecosystems. The purpose of this presentation is to demonstrate how cheminformatics, using chemical terms language in combination with the output of physicochemical property calculators, can be employed to encode this knowledge and make it available to the appropriate decision makers. The encoded process science is realized through the execution of reaction libraries in simulators such as EPA’s Chemical Transformation Simulator (CTS). In support of the CTS, reaction libraries have been, or are currently being, developed for a number of transformation processes including hydrolysis, abiotic reduction, photolysis and disinfection by-product formation. Examples of how the process science available in the peer-reviewed literature is being encoded will be presented. Presented at the 252nd American Chemical Society National Meeting: Aquatic Chemistry: Symposium in Honor of Professor Alan T. Stone.
Pybel: a Python wrapper for the OpenBabel cheminformatics toolkit.
O'Boyle, Noel M; Morley, Chris; Hutchison, Geoffrey R
2008-03-09
Scripting languages such as Python are ideally suited to common programming tasks in cheminformatics such as data analysis and parsing information from files. However, for reasons of efficiency, cheminformatics toolkits such as the OpenBabel toolkit are often implemented in compiled languages such as C++. We describe Pybel, a Python module that provides access to the OpenBabel toolkit. Pybel wraps the direct toolkit bindings to simplify common tasks such as reading and writing molecular files and calculating fingerprints. Extensive use is made of Python iterators to simplify loops such as that over all the molecules in a file. A Pybel Molecule can be easily interconverted to an OpenBabel OBMol to access those methods or attributes not wrapped by Pybel. Pybel allows cheminformaticians to rapidly develop Python scripts that manipulate chemical information. It is open source, available cross-platform, and offers the power of the OpenBabel toolkit to Python programmers.
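A short usage sketch of the API described in the abstract (with Open Babel 3.x the module is imported as openbabel.pybel; the original 2008 release used a top-level pybel module):

```python
# Usage sketch of the Pybel API (Open Babel 3.x exposes it as openbabel.pybel).
from openbabel import pybel

mol = pybel.readstring("smi", "CC(=O)Oc1ccccc1C(=O)O")   # aspirin
print(mol.molwt, mol.formula)     # simple descriptor attributes
fp = mol.calcfp()                 # default path-based (FP2) fingerprint
mol.make3D()                      # generate 3D coordinates
mol.write("sdf", "aspirin.sdf", overwrite=True)

# The iterator idiom the paper highlights: loop over molecules in a file.
for m in pybel.readfile("sdf", "aspirin.sdf"):
    print(m.title, m.formula)
```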
Graph Kernels for Molecular Similarity.
Rupp, Matthias; Schneider, Gisbert
2010-04-12
Molecular similarity measures are important for many cheminformatics applications like ligand-based virtual screening and quantitative structure-property relationships. Graph kernels are formal similarity measures defined directly on graphs, such as the (annotated) molecular structure graph. Graph kernels are positive semi-definite functions, i.e., they correspond to inner products. This property makes them suitable for use with kernel-based machine learning algorithms such as support vector machines and Gaussian processes. We review the major types of kernels between graphs (based on random walks, subgraphs, and optimal assignments, respectively), and discuss their advantages, limitations, and successful applications in cheminformatics.
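As an illustration of the random-walk family of kernels reviewed here, below is a minimal direct-product construction on unlabeled adjacency matrices (a textbook form, not any specific parameterization from the review):

```python
# A textbook random-walk kernel via the direct-product graph: counts matching
# walks in two unlabeled graphs. The decay lam must satisfy
# lam < 1 / (largest eigenvalue of the product adjacency matrix) to converge.
import numpy as np

def random_walk_kernel(A1, A2, lam=0.05):
    Ax = np.kron(A1, A2)               # walks in Ax = simultaneous walks
    n = Ax.shape[0]
    resolvent = np.linalg.inv(np.eye(n) - lam * Ax)  # sums lam^k * Ax^k
    return resolvent.sum()

P3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # path o-o-o
C3 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)  # triangle

print(random_walk_kernel(P3, P3), random_walk_kernel(P3, C3))
```

Because the kernel is an inner product in a walk-count feature space, it is positive semi-definite and can be plugged directly into an SVM or Gaussian process, which is the property the review emphasizes.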
Fourches, Denis; Muratov, Eugene; Tropsha, Alexander
2010-01-01
Molecular modelers and cheminformaticians typically analyze experimental data generated by other scientists. Consequently, when it comes to data accuracy, cheminformaticians are always at the mercy of data providers, who may inadvertently publish (partially) erroneous data. Dataset curation is thus crucial for any cheminformatics analysis such as similarity searching, clustering, QSAR modeling, or virtual screening, especially now that the availability of chemical datasets in the public domain has skyrocketed. Despite the obvious importance of this preliminary step in the computational analysis of any dataset, there appears to be no commonly accepted guidance or set of procedures for chemical data curation. The main objective of this paper is to emphasize the need for a standardized chemical data curation strategy that should be followed at the onset of any molecular modeling investigation. Herein, we discuss several simple but important steps for cleaning chemical records in a database, including the removal of a fraction of the data that cannot be appropriately handled by conventional cheminformatics techniques. Such steps include the removal of inorganic and organometallic compounds, counterions, salts and mixtures; structure validation; ring aromatization; normalization of specific chemotypes; curation of tautomeric forms; and the deletion of duplicates. To emphasize the importance of data curation as a mandatory step in data analysis, we discuss several case studies where chemical curation of the original “raw” database enabled a successful modeling study (specifically, QSAR analysis) or significantly improved a model's prediction accuracy. We also demonstrate that in some cases rigorously developed QSAR models could even be used to correct erroneous biological data associated with chemical compounds. We believe that the good practices for the curation of chemical records outlined in this paper will be of value to all scientists working in the fields of molecular modeling, cheminformatics, and QSAR studies.
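A few of the curation steps listed above can be sketched with RDKit (the paper itself is tool-agnostic; the organic-element whitelist and the toy input records are assumptions for illustration):

```python
# Sketch of selected curation steps: strip salts/counterions, drop records
# that are unparseable, inorganic, or duplicated after canonicalization.
from rdkit import Chem
from rdkit.Chem.SaltRemover import SaltRemover

ORGANIC = {1, 5, 6, 7, 8, 9, 15, 16, 17, 35, 53}  # H,B,C,N,O,F,P,S,Cl,Br,I
remover = SaltRemover()            # RDKit's default salt definitions
seen, curated = set(), []

raw = ["CCO.Cl", "CCO", "c1ccccc1", "[Na+].[Cl-]", "not_a_smiles"]
for smi in raw:
    mol = Chem.MolFromSmiles(smi)
    if mol is None:
        continue                               # unparseable record
    mol = remover.StripMol(mol)
    if mol.GetNumAtoms() == 0:
        continue                               # record was only a salt
    if any(a.GetAtomicNum() not in ORGANIC for a in mol.GetAtoms()):
        continue                               # inorganic/organometallic
    canon = Chem.MolToSmiles(mol)              # canonical, aromatized SMILES
    if canon not in seen:                      # duplicate removal
        seen.add(canon)
        curated.append(canon)
print(curated)   # expected: ['CCO', 'c1ccccc1']
```

Tautomer normalization and chemotype standardization, also listed in the paper, would slot in before the canonicalization step.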
Cheminformatics Research at the Unilever Centre for Molecular Science Informatics Cambridge.
Fuchs, Julian E; Bender, Andreas; Glen, Robert C
2015-09-01
The Centre for Molecular Informatics, formerly the Unilever Centre for Molecular Science Informatics (UCMSI), at the University of Cambridge is a world-leading driving force in the field of cheminformatics. Since its opening in 2000, more than 300 scientific articles have fundamentally changed the field of molecular informatics. The Centre has been a key player in promoting open chemical data and semantic access. Though it focusses mainly on basic research, close collaborations with industrial partners have ensured real-world feedback and access to high-quality molecular data. A variety of tools and standard protocols have been developed and are ubiquitous in the daily practice of cheminformatics. Here, we present a retrospective of cheminformatics research performed at the UCMSI, highlighting historical and recent trends in the field as well as indicating future directions.
Computer-aided drug discovery research at a global contract research organization.
Kitchen, Douglas B
2017-03-01
Computer-aided drug discovery started at Albany Molecular Research, Inc. in 1997. Over nearly 20 years, the role of cheminformatics and computational chemistry has grown throughout the pharmaceutical industry and at AMRI. This paper describes the infrastructure and roles of CADD throughout drug discovery and some of the lessons learned regarding the success of several methods. Various contributions provided by computational chemistry and cheminformatics in chemical library design, hit triage, hit-to-lead and lead optimization are discussed. Some frequently used computational chemistry techniques are described, and the ways in which they may contribute to discovery projects are presented based on a few examples from recent publications.
Spirocyclic systems derived from pyroglutamic acid.
Cowley, Andrew R; Hill, Thomas J; Kocis, Petr; Moloney, Mark G; Stevenson, Robert D; Thompson, Amber L
2011-10-21
The synthesis and likely conformational structure of rigid spirocyclic bislactams and lactam-lactones derived from pyroglutamic acid, and their suitability as lead structures for applications in drug development programmes using cheminformatic analysis, have been investigated.
Delivering the Benefits of Chemical-Biological Integration in ...
Slide presentation at the German Cheminformatics Conference on Delivering the Benefits of Chemical-Biological Integration in Computational Toxicology at the EPA.
Willighagen, Egon L; Mayfield, John W; Alvarsson, Jonathan; Berg, Arvid; Carlsson, Lars; Jeliazkova, Nina; Kuhn, Stefan; Pluskal, Tomáš; Rojas-Chertó, Miquel; Spjuth, Ola; Torrance, Gilleain; Evelo, Chris T; Guha, Rajarshi; Steinbeck, Christoph
2017-06-06
The Chemistry Development Kit (CDK) is a widely used open source cheminformatics toolkit, providing data structures to represent chemical concepts along with methods to manipulate such structures and perform computations on them. The library implements a wide variety of cheminformatics algorithms ranging from chemical structure canonicalization to molecular descriptor calculations and pharmacophore perception. It is used in drug discovery, metabolomics, and toxicology. Over the last 10 years, however, the code base has grown significantly, resulting in many complex interdependencies among components and poor performance of many algorithms. We report improvements to the CDK v2.0 since the v1.2 release series, specifically addressing the increased functional complexity and poor performance. We first summarize the addition of new functionality, such as atom typing and molecular formula handling, and improvements to existing functionality that have led to significantly better performance for substructure searching, molecular fingerprints, and rendering of molecules. Second, we outline how the CDK has evolved with respect to quality control and the approaches we have adopted to ensure stability, including a code review mechanism. This paper highlights our continued efforts to provide a community driven, open source cheminformatics library, and shows that such collaborative projects can thrive over extended periods of time, resulting in a high-quality and performant library. By taking advantage of community support and contributions, we show that an open source cheminformatics project can act as a peer reviewed publishing platform for scientific computing software.
chemalot and chemalot_knime: Command line programs as workflow tools for drug discovery.
Lee, Man-Ling; Aliagas, Ignacio; Feng, Jianwen A; Gabriel, Thomas; O'Donnell, T J; Sellers, Benjamin D; Wiswedel, Bernd; Gobbi, Alberto
2017-06-12
Analyzing files containing chemical information is at the core of cheminformatics. Each analysis may require a unique workflow. This paper describes the chemalot and chemalot_knime open source packages. Chemalot is a set of command line programs with a wide range of functionalities for cheminformatics. The chemalot_knime package allows command line programs that read and write SD files from stdin and to stdout to be wrapped into KNIME nodes. The combination of chemalot and chemalot_knime not only facilitates the compilation and maintenance of sequences of command line programs but also allows KNIME workflows to take advantage of the compute power of a LINUX cluster. Use of the command line programs is demonstrated in three different workflow examples: (1) a workflow to create a data file with project-relevant data for structure-activity or property analysis and other types of investigation, (2) the creation of a quantitative structure-property-relationship model using the command line programs via KNIME nodes, and (3) the analysis of strain energy in small molecule ligand conformations from the Protein Data Bank database. The chemalot and chemalot_knime packages provide lightweight and powerful tools for many tasks in cheminformatics. They are easily integrated with other open source and commercial command line tools and can be combined to build new and even more powerful tools. The chemalot_knime package facilitates the generation and maintenance of user-defined command line workflows, taking advantage of the graphical design capabilities in KNIME.
myChEMBL: a virtual machine implementation of open data and cheminformatics tools.
Ochoa, Rodrigo; Davies, Mark; Papadatos, George; Atkinson, Francis; Overington, John P
2014-01-15
myChEMBL is a completely open platform, which combines public domain bioactivity data with open source database and cheminformatics technologies. myChEMBL consists of a Linux (Ubuntu) Virtual Machine featuring a PostgreSQL schema with the latest version of the ChEMBL database, as well as the latest RDKit cheminformatics libraries. In addition, a self-contained web interface is available, which can be modified and improved according to user specifications. The VM is available at: ftp://ftp.ebi.ac.uk/pub/databases/chembl/VM/myChEMBL/current. The web interface and web services code is available at: https://github.com/rochoa85/myChEMBL.
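Since myChEMBL ships the ChEMBL schema in PostgreSQL, a typical interaction is a SQL join across the core tables. In the hedged sketch below, the connection parameters (database and user names) are assumptions to be replaced with the defaults documented for your VM release; the table and column names follow the public ChEMBL schema.

```python
# Hedged sketch of querying the ChEMBL PostgreSQL schema inside the VM.
# Connection parameters are placeholders; check the myChEMBL documentation.
import psycopg2

conn = psycopg2.connect(dbname="chembl", user="chembl", host="localhost")
cur = conn.cursor()
# Core ChEMBL tables: molecule_dictionary, compound_structures, activities.
cur.execute("""
    SELECT md.chembl_id, cs.canonical_smiles, act.standard_value
    FROM molecule_dictionary md
    JOIN compound_structures cs ON cs.molregno = md.molregno
    JOIN activities act         ON act.molregno = md.molregno
    WHERE act.standard_type = 'IC50' AND act.standard_units = 'nM'
    LIMIT 5
""")
for chembl_id, smiles, ic50_nm in cur.fetchall():
    print(chembl_id, ic50_nm, smiles)
conn.close()
```

The same records could then be handed to the bundled RDKit libraries, which is the pairing the platform is designed around.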
Cheminformatic Analysis of the US EPA ToxCast Chemical Library
The ToxCast project is employing high throughput screening (HTS) technologies, along with chemical descriptors and computational models, to develop approaches for screening and prioritizing environmental chemicals for further toxicity testing. ToxCast Phase I generated HTS data f...
Antony Williams is a Computational Chemist at the US Environmental Protection Agency in the National Center for Computational Toxicology. He has been involved in cheminformatics and the dissemination of chemical information for over twenty-five years. He has worked for a Fortune ...
Charting, navigating, and populating natural product chemical space for drug discovery.
Lachance, Hugo; Wetzel, Stefan; Kumar, Kamal; Waldmann, Herbert
2012-07-12
Natural products are a heterogeneous group of compounds with diverse, yet particular molecular properties compared to synthetic compounds and drugs. All relevant analyses show that natural products indeed occupy parts of chemical space not explored by available screening collections while at the same time largely adhering to the rule-of-five. This renders them a valuable, unique, and necessary component of screening libraries used in drug discovery. With ChemGPS-NP on the Web and Scaffold Hunter, two tools are available to the scientific community to guide exploration of biologically relevant NP chemical space in a focused and targeted fashion with a view to guiding novel synthesis approaches. Several of the examples given illustrate the possibility of bridging the gap between computational methods and compound library synthesis and the possibility of integrating cheminformatics and chemical space analyses with synthetic chemistry and biochemistry to successfully explore chemical space for the identification of novel small molecule modulators of protein function. The examples also illustrate the synergistic potential of the chemical space concept and modern chemical synthesis for biomedical research and drug discovery. Chemical space analysis can map underexplored biologically relevant parts of chemical space and identify the structure types occupying these parts. Modern synthetic methodology can then be applied to efficiently fill this “virtual space” with real compounds. From a cheminformatics perspective, there is a clear demand for open-source and easy to use tools that can be readily applied by educated nonspecialist chemists and biologists in their daily research. This will include further development of Scaffold Hunter, ChemGPS-NP, and related approaches on the Web. Such a “cheminformatics toolbox” would enable chemists and biologists to mine their own data in an intuitive and highly interactive process and without the need for specialized computer science and cheminformatics expertise. We anticipate that it may be a viable, if not necessary, step for research initiatives based on large high-throughput screening campaigns, in particular in the pharmaceutical industry, to make the most out of the recent advances in computational tools in order to leverage and take full advantage of the large data sets generated and available in house. There are “holes” in these data sets that can and should be identified and explored by chemistry and biology.
Non-target high resolution mass spectrometry techniques combined with advanced cheminformatics offer huge potential for exploring complex mixtures in our environment – yet also offers plenty of challenges. Peak inventories of several non-target studies from within Europe reveal t...
Cerruela García, G; García-Pedrajas, N; Luque Ruiz, I; Gómez-Nieto, M Á
2018-03-01
This paper proposes a method for molecular activity prediction in QSAR studies using ensembles of classifiers constructed by means of two supervised subspace projection methods, namely nonparametric discriminant analysis (NDA) and hybrid discriminant analysis (HDA). We studied the performance of the proposed ensembles compared to classical ensemble methods using four molecular datasets and eight different models for the representation of the molecular structure. Using several measures and statistical tests for classifier comparison, we observe that our proposal improves the classification results with respect to classical ensemble methods. Therefore, we show that ensembles constructed using supervised subspace projections offer an effective way of creating classifiers in cheminformatics.
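NDA and HDA themselves are not available in common Python libraries, but the core idea, an ensemble whose members are trained on different supervised subspace projections, can be approximated with scikit-learn using LDA projections of random descriptor subsets (synthetic data as a stand-in for molecular descriptors):

```python
# Approximation of the supervised-subspace-ensemble idea: each member applies
# a supervised projection (here LDA) to its own random descriptor subset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                           random_state=0)  # stand-in for molecular descriptors

rng = np.random.default_rng(0)
members = []
for i in range(11):
    cols = rng.choice(X.shape[1], size=15, replace=False)  # random subspace
    pick = FunctionTransformer(lambda Z, c=cols: Z[:, c])  # select its columns
    members.append((f"lda_{i}", make_pipeline(pick, LinearDiscriminantAnalysis())))

ensemble = VotingClassifier(members, voting="hard")
print("CV accuracy: %.3f" % cross_val_score(ensemble, X, y, cv=5).mean())
```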
Web tools for predictive toxicology model building.
Jeliazkova, Nina
2012-07-01
The development and use of web tools in chemistry have accumulated more than 15 years of history already. Powered by advances in Internet technologies, the current generation of web systems is starting to expand into areas traditionally reserved for desktop applications. Web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web are compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide a comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access, and implementation details. The success of the web is largely due to its highly decentralized, yet sufficiently interoperable model for information access. The expected future convergence between cheminformatics and bioinformatics databases presents new challenges for the management and analysis of large data sets. Web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, reproducibility of results, curation and crowdsourcing utilities, collaborative sharing and secure access.
Predicting targets of compounds against neurological diseases using cheminformatic methodology
Nikolic, Katarina; Mavridis, Lazaros; Bautista-Aguilera, Oscar M.; Marco-Contelles, José; Stark, Holger; do Carmo Carreiras, Maria; Rossi, Ilaria; Massarelli, Paola; Agbaba, Danica; Ramsay, Rona R.; Mitchell, John B. O.
2015-02-01
Recently developed multi-targeted ligands are novel drug candidates able to interact with monoamine oxidase A and B; acetylcholinesterase and butyrylcholinesterase; or with histamine N-methyltransferase and the histamine H3-receptor (H3R). These proteins are drug targets in the treatment of depression, Alzheimer's disease, obsessive disorders, and Parkinson's disease. A probabilistic method, the Parzen-Rosenblatt window approach, was used to build a "predictor" model using data collected from the ChEMBL database. The model can be used to predict both the primary pharmaceutical target and off-targets of a compound based on its structure. Molecular structures were represented using the circular fingerprint methodology. The same approach was used to build a "predictor" model from the DrugBank dataset to determine the main pharmacological groups of the compound. The study of off-target interactions is now recognised as crucial to the understanding of both drug action and toxicology. Primary pharmaceutical targets and off-targets for the novel multi-target ligands were examined by use of the developed cheminformatic method. Several multi-target ligands were selected for further study as compounds with possible additional beneficial pharmacological activities. The cheminformatic target identifications were in agreement with four 3D-QSAR (H3R/D1R/D2R/5-HT2aR) models and with in vitro assays for serotonin 5-HT1a and 5-HT2a receptor binding of the most promising ligand (71/MBA-VEG8).
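The Parzen-Rosenblatt window approach amounts to per-target kernel density estimation over fingerprint space. The toy sketch below uses RDKit circular fingerprints and scikit-learn's KernelDensity, with invented ligand sets and an arbitrary bandwidth (the paper's actual training data come from ChEMBL):

```python
# Per-target Parzen-Rosenblatt (kernel density) scoring over circular
# fingerprints; ligand sets and bandwidth are illustrative assumptions.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.neighbors import KernelDensity

def fp_array(smiles, n_bits=1024):
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    arr = np.zeros((n_bits,))
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

training = {   # hypothetical ligand sets per target
    "target_A": ["NCCc1ccccc1", "CNCCc1ccccc1"],
    "target_B": ["CN(C)C(=O)Oc1ccccc1", "CCN(C)C(=O)Oc1ccccc1"],
}
models = {t: KernelDensity(bandwidth=3.0).fit(np.array([fp_array(s) for s in smis]))
          for t, smis in training.items()}

query = fp_array("NCCc1ccc(O)cc1")               # query compound
scores = {t: m.score_samples(query[None, :])[0] for t, m in models.items()}
print(max(scores, key=scores.get), scores)       # most probable target
```

The predicted target is simply the class whose estimated density is highest at the query fingerprint, which is the decision rule the window method implies.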
Prediction of Hydrolysis Products of Organic Chemicals under Environmental pH Conditions
Cheminformatics-based software tools can predict the molecular structure of transformation products using a library of transformation reaction schemes. This paper presents the development of such a library for abiotic hydrolysis of organic chemicals under environmentally relevant...
Toxico-Cheminformatics and QSPR Modeling of the Carcinogenic Potency Database
Report on the development of a tiered, confirmatory scheme for prediction of chemical carcinogenicity based on QSAR studies of compounds with available mutagenic and carcinogenic data. For 693 such compounds from the Carcinogenic Potency Database characterized molecular topologic...
Use of chemoinformatics tools- nuts and bolts; Challenges in their regulatory application
Cheminformatics spans a continuum of components from data storage to uncovering new insights that are useful for different decision making contexts. It covers the input, retrieval of data, the manipulation and integration of data through to the visualisation and analysis to trans...
Literature-based cheminformatics for research in chemical toxicity
PubMed is the largest freely available source of published literature available online with access to 27 million citations (as of October 2017). Contained within the literature is an abundance of information about the activity of chemicals in biological systems. Literature inform...
Identification of C3b-Binding Small-Molecule Complement Inhibitors Using Cheminformatics.
Garcia, Brandon L; Skaff, D Andrew; Chatterjee, Arindam; Hanning, Anders; Walker, John K; Wyckoff, Gerald J; Geisbrecht, Brian V
2017-05-01
The complement system is an elegantly regulated biochemical cascade formed by the collective molecular recognition properties and proteolytic activities of more than two dozen membrane-bound or serum proteins. Complement plays diverse roles in human physiology, such as acting as a sentry against invading microorganisms, priming of the adaptive immune response, and removal of immune complexes. However, dysregulation of complement can serve as a trigger for a wide range of human diseases, which include autoimmune, inflammatory, and degenerative conditions. Despite several potential advantages of modulating complement with small-molecule inhibitors, small-molecule drugs are highly underrepresented in the current complement-directed therapeutics pipeline. In this study, we have employed a cheminformatics drug discovery approach based on the extensive structural and functional knowledge available for the central proteolytic fragment of the cascade, C3b. Using parallel in silico screening methodologies, we identified 45 small molecules that putatively bind C3b near ligand-guided functional hot spots. Surface plasmon resonance experiments resulted in the validation of seven dose-dependent C3b-binding compounds. Competition-based biochemical assays demonstrated the ability of several C3b-binding compounds to interfere with binding of the original C3b ligand that guided their discovery. In vitro assays of complement function identified a single complement inhibitory compound, termed cmp-5, and mechanistic studies of the cmp-5 inhibitory mode revealed it acts at the level of C5 activation. This study has led to the identification of a promising new class of C3b-binding small-molecule complement inhibitors and, to our knowledge, provides the first demonstration of cheminformatics-based, complement-directed drug discovery.
Integrated Environmental Modeling (IEM) systems that account for the fate/transport of organics frequently require physicochemical properties as well as transformation products. A myriad of chemical property databases exist but these can be difficult to access and often do not co...
CREDO: a structural interactomics database for drug discovery
Schreyer, Adrian M.; Blundell, Tom L.
2013-01-01
CREDO is a unique relational database storing all pairwise atomic interactions of inter- as well as intra-molecular contacts between small molecules and macromolecules found in experimentally determined structures from the Protein Data Bank. These interactions are integrated with further chemical and biological data. The database implements useful data structures and algorithms such as cheminformatics routines to create a comprehensive analysis platform for drug discovery. The database can be accessed through a web-based interface, downloads of data sets and web services at http://www-cryst.bioc.cam.ac.uk/credo.
2014-01-01
We present four models of solution free-energy prediction for druglike molecules utilizing cheminformatics descriptors and theoretically calculated thermodynamic values. We make predictions of solution free energy using physics-based theory alone and using machine learning/quantitative structure–property relationship (QSPR) models. We also develop machine learning models where the theoretical energies and cheminformatics descriptors are used as combined input. These models are used to predict solvation free energy. While direct theoretical calculation does not give accurate results in this approach, machine learning is able to give predictions with a root mean squared error (RMSE) of ∼1.1 log S units in a 10-fold cross-validation for our Drug-Like-Solubility-100 (DLS-100) dataset of 100 druglike molecules. We find that a model built using energy terms from our theoretical methodology as descriptors is marginally less predictive than one built on Chemistry Development Kit (CDK) descriptors. Combining both sets of descriptors allows a further but very modest improvement in the predictions. However, in some cases, this is a statistically significant enhancement. These results suggest that there is little complementarity between the chemical information provided by these two sets of descriptors, despite their different sources and methods of calculation. Our machine learning models are also able to predict the well-known Solubility Challenge dataset with an RMSE value of 0.9–1.0 log S units.
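A generic stand-in for the descriptor-based log S models described here, using RDKit descriptors and a random forest (the paper used CDK descriptors among other inputs; the SMILES/log S pairs below are rough illustrative placeholders, not the DLS-100 data):

```python
# Descriptor-based log S regression sketch with cross-validated RMSE.
# Training pairs are illustrative placeholders only.
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

data = [("CCO", 1.1), ("c1ccccc1", -1.6), ("Clc1ccccc1", -2.4),
        ("CC(=O)Oc1ccccc1C(=O)O", -1.7), ("CCCCCC", -4.0),
        ("OCC(O)CO", 1.1), ("c1ccc2ccccc2c1", -3.6), ("CCOCC", -0.1)]

def descriptors(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return [Descriptors.MolLogP(mol), Descriptors.MolWt(mol),
            Descriptors.TPSA(mol), Descriptors.NumRotatableBonds(mol)]

X = np.array([descriptors(s) for s, _ in data])
y = np.array([log_s for _, log_s in data])
model = RandomForestRegressor(n_estimators=200, random_state=0)
rmse = -cross_val_score(model, X, y, cv=4,
                        scoring="neg_root_mean_squared_error")
print("log S RMSE: %.2f +/- %.2f" % (rmse.mean(), rmse.std()))
```

In the paper's combined models, theoretically calculated energy terms would simply be appended as extra columns of X alongside the cheminformatics descriptors.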
The Open PHACTS project (openphacts.org) is a European initiative, constituting a public–private partnership to enable easier, cheaper and faster drug discovery [1]. The project is supported by the Open PHACTS Foundation (www.openphactsfoundation.org) and funded by contributions f...
Hsieh, Jui-Hua; Yin, Shuangye; Wang, Xiang S; Liu, Shubin; Dokholyan, Nikolay V; Tropsha, Alexander
2012-01-23
Poor performance of scoring functions is a well-known bottleneck in structure-based virtual screening (VS), most frequently manifested in the scoring functions' inability to discriminate between true ligands and known nonbinders (designated as binding decoys). This deficiency leads to a large number of false positive hits resulting from VS. We have hypothesized that filtering out or penalizing docking poses recognized as non-native (i.e., pose decoys) should improve the performance of VS in terms of improved identification of true binders. Using several concepts from the field of cheminformatics, we have developed a novel approach to identifying pose decoys from an ensemble of poses generated by computational docking procedures. We demonstrate that the use of a target-specific pose (scoring) filter in combination with a physical force field-based scoring function (MedusaScore) leads to significant improvement of hit rates in VS studies for 12 of the 13 benchmark sets from the clustered version of the Database of Useful Decoys (DUD). This new hybrid scoring function outperforms several conventional structure-based scoring functions, including XSCORE::HMSCORE, ChemScore, PLP, and Chemgauss3, in 6 out of 13 data sets at an early stage of VS (up to 1% of the screening database). We compare our hybrid method with several novel VS methods that were recently reported to have good performance on the same DUD data sets. We find that the ligands retrieved using our method are chemically more diverse in comparison with two ligand-based methods (FieldScreen and FLAP::LBX). We also compare our method with FLAP::RBLB, a high-performance VS method that also utilizes both the receptor and the cognate ligand structures. Interestingly, we find that the top ligands retrieved using our method are highly complementary to those retrieved using FLAP::RBLB, hinting at effective directions for best VS applications. We suggest that this integrative VS approach combining cheminformatics and molecular mechanics methodologies may be applied to a broad variety of protein targets to improve the outcome of structure-based drug discovery studies.
Lin, Ying-Ting
2013-04-30
In single-cell chemical analysis, a tandem technique is often used: the wanted chemicals are first separated from the bulk of the cell, and the important identities are then detected. To identify the key structural modifications around ligand binding, the present study aims to develop a cheminformatics counterpart of this tandem technique, in which a statistical regression and its outliers act as the computational separation step. A PPARγ (peroxisome proliferator-activated receptor gamma) agonist cellular system was subjected to such an investigation. Results show that this tandem regression-outlier analysis, or the prioritization of the context equations tagged with features of the outliers, is an effective cheminformatics regression technique for detecting key structural modifications, as well as their likely impact on ligand binding. The key structural modifications around ligand binding are effectively extracted or characterized from the cellular reactions. This is because molecular binding is the paramount factor in such a ligand-cell system, and key structural modifications around ligand binding are expected to create outliers; such outliers can therefore be captured by this tandem regression-outlier analysis.
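Stripped of the domain specifics, the tandem idea is: fit a regression, then treat large-residual compounds as candidate carriers of key structural modifications. A minimal numpy version with synthetic data:

```python
# Generic regression-then-outlier sketch: fit OLS, flag compounds whose
# standardized residuals exceed a cutoff. Data and cutoff are placeholders.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))              # descriptor matrix (placeholder)
beta = np.array([1.5, -0.7, 0.3])
y = X @ beta + rng.normal(scale=0.1, size=40)
y[[5, 17]] += 2.5                         # two "structurally modified" outliers

# Ordinary least squares with an intercept, then residual z-scores.
A = np.c_[np.ones(40), X]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
z = (resid - resid.mean()) / resid.std(ddof=1)
print("flagged compounds:", np.flatnonzero(np.abs(z) > 3))  # expected: [5 17]
```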
Pedretti, Alessandro; Mazzolari, Angelica; Vistoli, Giulio
2018-05-21
The manuscript describes WarpEngine, a novel platform implemented within the VEGA ZZ suite of software for performing distributed simulations in both local and wide area networks. Despite being tailored for structure-based virtual screening campaigns, WarpEngine possesses the flexibility required to carry out distributed calculations with various pieces of software, which can be easily encapsulated within this platform without changing their source code. WarpEngine takes advantage of all cheminformatics features implemented in the VEGA ZZ program as well as its largely customizable scripting architecture, thus allowing an efficient distribution of various time-demanding simulations. To illustrate WarpEngine's potential, the manuscript includes a set of virtual screening campaigns based on the ACE data set of the DUD-E collection, using PLANTS as the docking application. Benchmarking analyses revealed satisfactory linearity in WarpEngine's performance, with speed-up values roughly equal to the number of utilized cores. The computed scalability values further emphasized that the vast majority (>90%) of the performed simulations benefit from the distributed platform presented here. WarpEngine can be freely downloaded along with the VEGA ZZ program at www.vegazz.net.
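The benchmarking quantities quoted above follow from the standard definitions of parallel speed-up and efficiency; a minimal sketch with invented wall-clock timings:

    # Standard parallel-performance metrics; timings are invented.
    t_serial = 3600.0                           # one-core runtime (s)
    timings = {2: 1850.0, 4: 940.0, 8: 490.0}   # cores -> runtime (s)

    for cores, t in sorted(timings.items()):
        speedup = t_serial / t
        efficiency = speedup / cores            # ~1.0 means near-linear scaling
        print(f"{cores} cores: speed-up {speedup:.2f}, efficiency {efficiency:.2f}")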
Exploiting PubChem for Virtual Screening
Xie, Xiang-Qun
2011-01-01
Importance of the field PubChem is a public molecular information repository and a scientific showcase of the NIH Roadmap Initiative. The PubChem database holds over 27 million records of unique chemical structures of compounds (CID) derived from nearly 70 million substance depositions (SID), and contains more than 449,000 bioassay records covering thousands of established in vitro biochemical and cell-based screening bioassays that target more than 7,000 proteins and genes and link to over 1.8 million substances. Areas covered in this review This review builds on recent PubChem-related computational chemistry research reported by other authors while providing readers with an overview of the PubChem database, focusing on its increasing role in cheminformatics, virtual screening and toxicity prediction modeling. What the reader will gain These publicly available datasets in PubChem provide great opportunities for scientists to perform cheminformatics and virtual screening research for computer-aided drug design. However, the high volume and complexity of the datasets, in particular the bioassay-associated false positives/negatives and highly imbalanced datasets in PubChem, also create major challenges. Several approaches to modeling PubChem datasets and developing virtual screening models for bioactivity and toxicity predictions are also reviewed. Take home message Novel data-mining cheminformatics tools and virtual screening algorithms are being developed and used to retrieve, annotate and analyze the large-scale and highly complex PubChem biological screening data for drug design. PMID:21691435
Cinfony – combining Open Source cheminformatics toolkits behind a common interface
O'Boyle, Noel M; Hutchison, Geoffrey R
2008-01-01
Background Open Source cheminformatics toolkits such as OpenBabel, the CDK and the RDKit share the same core functionality but support different sets of file formats and forcefields, and calculate different fingerprints and descriptors. Despite their complementary features, using these toolkits in the same program is difficult as they are implemented in different languages (C++ versus Java), have different underlying chemical models and have different application programming interfaces (APIs). Results We describe Cinfony, a Python module that presents a common interface to all three of these toolkits, allowing the user to easily combine methods and results from any of the toolkits. In general, Cinfony's modules run almost as fast as the underlying toolkits accessed directly from C++ or Java, but Cinfony makes it much easier to carry out common tasks in cheminformatics such as reading file formats and calculating descriptors. Conclusion By providing a simplified interface and improving interoperability, Cinfony makes it easy to combine complementary features of OpenBabel, the CDK and the RDKit. PMID:19055766
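For readers unfamiliar with Cinfony, the pattern it enables looks roughly like the sketch below, based on its published interface (assumes Cinfony plus the underlying OpenBabel and RDKit installations; consult the paper and project documentation for the definitive API):

    # Sketch of Cinfony's common interface across toolkits.
    from cinfony import pybel, rdk

    mol = pybel.readstring("smi", "CC(=O)Oc1ccccc1C(=O)O")  # aspirin, via OpenBabel
    fp = mol.calcfp()                        # OpenBabel fingerprint

    rdkit_mol = rdk.Molecule(mol)            # hand the same molecule to the RDKit
    descs = rdkit_mol.calcdesc()             # RDKit descriptor dictionary
    print(len(descs), "RDKit descriptors for", mol.write("smi").strip())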
Schneider, Nadine; Sayle, Roger A; Landrum, Gregory A
2015-10-26
Finding a canonical ordering of the atoms in a molecule is a prerequisite for generating a unique representation of the molecule. The canonicalization of a molecule is usually accomplished by applying some sort of graph relaxation algorithm, the most common of which is the Morgan algorithm. There are known issues with that algorithm that lead to noncanonical atom orderings as well as problems when it is applied to large molecules like proteins. Furthermore, each cheminformatics toolkit or software provides its own version of a canonical ordering, most based on unpublished algorithms, which also complicates the generation of a universal unique identifier for molecules. We present an alternative canonicalization approach that uses a standard stable-sorting algorithm instead of a Morgan-like index. Two new invariants that allow canonical ordering of molecules with dependent chirality as well as those with highly symmetrical cyclic graphs have been developed. The new approach proved to be robust and fast when tested on the 1.45 million compounds of the ChEMBL 20 data set in different scenarios like random renumbering of input atoms or SMILES round tripping. Our new algorithm is able to generate a canonical order of the atoms of protein molecules within a few milliseconds. The novel algorithm is implemented in the open-source cheminformatics toolkit RDKit. With this paper, we provide a reference Python implementation of the algorithm that could easily be integrated in any cheminformatics toolkit. This provides a first step toward a common standard for canonical atom ordering to generate a universal unique identifier for molecules other than InChI.
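The core idea, iteratively refining atom invariants with a stable sort rather than a Morgan-like index, can be illustrated with a toy ranking function. This sketch omits chirality and the paper's symmetry-breaking invariants and is not the RDKit implementation:

    # Toy canonical ranking: refine invariants with Python's stable sort
    # until the ranks stop changing.
    def rank_by_key(keys):
        order = sorted(range(len(keys)), key=lambda i: keys[i])  # stable
        ranks, r = [0] * len(keys), 0
        for pos, i in enumerate(order):
            if pos and keys[i] != keys[order[pos - 1]]:
                r = pos
            ranks[i] = r
        return ranks

    def canonical_ranks(elements, adjacency):
        ranks = rank_by_key([(e,) for e in elements])   # initial invariant
        while True:
            keys = [(ranks[i], tuple(sorted(ranks[j] for j in adjacency[i])))
                    for i in range(len(elements))]
            new_ranks = rank_by_key(keys)
            if new_ranks == ranks:
                return ranks
            ranks = new_ranks

    # ethanol heavy atoms: C-C-O
    print(canonical_ranks(["C", "C", "O"], {0: [1], 1: [0, 2], 2: [1]}))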
Cheminformatic comparison of approved drugs from natural product versus synthetic origins.
Stratton, Christopher F; Newman, David J; Tan, Derek S
2015-11-01
Despite the recent decline of natural product discovery programs in the pharmaceutical industry, approximately half of all new drug approvals still trace their structural origins to a natural product. Herein, we use principal component analysis to compare the structural and physicochemical features of drugs from natural product-based versus completely synthetic origins that were approved between 1981 and 2010. Drugs based on natural product structures display greater chemical diversity and occupy larger regions of chemical space than drugs from completely synthetic origins. Notably, synthetic drugs based on natural product pharmacophores also exhibit lower hydrophobicity and greater stereochemical content than drugs from completely synthetic origins. These results illustrate that structural features found in natural products can be successfully incorporated into synthetic drugs, thereby increasing the chemical diversity available for small-molecule drug discovery. Copyright © 2015 Elsevier Ltd. All rights reserved.
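The kind of analysis described, projecting compounds onto principal components of their physicochemical descriptors, can be sketched as follows (illustrative molecules and descriptor set, not the paper's data or protocol):

    # PCA over a few RDKit descriptors for natural-product-derived vs.
    # synthetic drugs; molecules and descriptors are illustrative only.
    from rdkit import Chem
    from rdkit.Chem import Descriptors
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    smiles = {"aspirin (synthetic)": "CC(=O)Oc1ccccc1C(=O)O",
              "ibuprofen (synthetic)": "CC(C)Cc1ccc(C(C)C(=O)O)cc1",
              "penicillin G (natural)": "CC1(C)SC2C(NC(=O)Cc3ccccc3)C(=O)N2C1C(=O)O"}
    mols = {name: Chem.MolFromSmiles(s) for name, s in smiles.items()}
    X = [[Descriptors.MolWt(m), Descriptors.MolLogP(m),
          Descriptors.TPSA(m), Descriptors.NumRotatableBonds(m)]
         for m in mols.values()]

    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
    for name, xy in zip(mols, scores):
        print(f"{name}: PC1 = {xy[0]:+.2f}, PC2 = {xy[1]:+.2f}")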
Relational Agreement Measures for Similarity Searching of Cheminformatic Data Sets.
Rivera-Borroto, Oscar Miguel; García-de la Vega, José Manuel; Marrero-Ponce, Yovani; Grau, Ricardo
2016-01-01
Research on similarity searching of cheminformatic data sets has focused on similarity measures using fingerprints. However, nominal scales are the least informative of all metric scales, increasing tied similarity scores and decreasing the effectiveness of the retrieval engines. Tanimoto's coefficient has been claimed to be the most prominent measure for this task. Nevertheless, this field is far from exhausted, since computer science's no-free-lunch theorem predicts that "no similarity measure has overall superiority over the population of data sets". We introduce 12 relational agreement (RA) coefficients for seven metric scales, which are integrated within a group fusion-based similarity searching algorithm. These similarity measures are compared to a reference panel of 21 proximity quantifiers over 17 benchmark data sets (MUV), using informative descriptors, a feature selection stage, a suitable performance metric, and powerful comparison tests. In this stage, the RA coefficients perform favourably with respect to the state-of-the-art proximity measures. Afterward, the RA-based method outperforms four other nearest neighbor searching algorithms over the same data domains. In a third validation stage, RA measures are successfully applied to the virtual screening of the NCI data set. Finally, we discuss a possible molecular interpretation for these similarity variants.
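As background for the comparisons above, a minimal sketch of similarity searching with Tanimoto scoring and MAX group fusion over a set of query actives (toy bit sets stand in for real fingerprints):

    # Tanimoto similarity with MAX group fusion; fingerprints are toy
    # sets of "on" bit positions.
    def tanimoto(a, b):
        return len(a & b) / len(a | b) if (a | b) else 0.0

    actives = [{1, 4, 9, 23}, {1, 4, 17, 23, 31}]        # query group
    database = {"cmpd_A": {1, 4, 9, 22, 23},
                "cmpd_B": {2, 5, 8},
                "cmpd_C": {1, 17, 23, 31}}

    # group fusion: score each database compound by its best match
    scores = {name: max(tanimoto(fp, q) for q in actives)
              for name, fp in database.items()}
    for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {s:.2f}")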
Cheminformatics Analysis of EPA ToxCast Chemical Libraries ...
An important goal of toxicology research is the development of robust methods that use in vitro and chemical structure information to predict in vivo toxicity endpoints. The US EPA ToxCast program is addressing this goal using ~600 in vitro assays to create bioactivity profiles on a set of 320 compounds, mostly pesticide actives, that have well characterized in vivo toxicity. These 320 compounds (the EPA-320 set evaluated in Phase I of ToxCast) are a subset of a much larger set of ~10,000 candidates that are of interest to the EPA (called here EPA-10K). Predictive models of in vivo toxicity are being constructed from the in vitro assay data on the EPA-320 chemical set. These models require validation on additional chemicals prior to wide acceptance, and this will be carried out by evaluating compounds from EPA-10K in Phase II of ToxCast. We have used cheminformatics approaches including clustering, data visualization, and QSAR to develop models for EPA-320 that could help prioritize EPA-10K validation chemicals. Both chemical descriptors and calculated physicochemical properties have been used. Compounds from EPA-10K are prioritized based on their similarity to EPA-320 using different similarity metrics, with similarity thresholds defining the domain of applicability for the predictive models built on the EPA-320 set. In addition, prioritized lists of compounds of increasing dissimilarity from EPA-320 have been produced, to test the ability of the EPA-320 models to extrapolate beyond their domain of applicability.
Romero-Durán, Francisco J; Alonso, Nerea; Yañez, Matilde; Caamaño, Olga; García-Mera, Xerardo; González-Díaz, Humberto
2016-04-01
The use of cheminformatics tools is gaining importance in the field of translational research from medicinal chemistry to neuropharmacology. In particular, we need it for the analysis of chemical information on large datasets of bioactive compounds. These compounds form large multi-target complex networks (drug-target interactome networks), resulting in a very challenging data analysis problem. Artificial Neural Network (ANN) algorithms may help us predict the interactions of drugs and targets in the CNS interactome. In this work, we trained different ANN models able to predict a large number of drug-target interactions. These models predict a dataset of thousands of interactions of central nervous system (CNS) drugs characterized by >30 different experimental measures in >400 different experimental protocols for >150 molecular and cellular targets present in 11 different organisms (including human). The model was able to classify non-interacting vs. interacting drug-target pairs with satisfactory performance. A second aim focuses on two main directions: the synthesis and assay of new derivatives of TVP1022 (S-analogues of rasagiline) and the comparison with other recently reported rasagiline derivatives. Finally, we used the best of our models to predict drug-target interactions for the best newly synthesized compound against a large number of CNS protein targets. Copyright © 2015 Elsevier Ltd. All rights reserved.
Hastings, Janna; Chepelev, Leonid; Willighagen, Egon; Adams, Nico; Steinbeck, Christoph; Dumontier, Michel
2011-01-01
Cheminformatics is the application of informatics techniques to solve chemical problems in silico. There are many areas in biology where cheminformatics plays an important role in computational research, including metabolism, proteomics, and systems biology. One critical aspect of applying cheminformatics in these fields is the accurate exchange of data, which is increasingly accomplished through the use of ontologies. Ontologies are formal representations of objects and their properties using a logic-based ontology language. Many such ontologies are currently being developed to represent objects across all the domains of science. Ontologies enable the definition, classification, and support for querying objects in a particular domain, enabling intelligent computer applications to be built which support the work of scientists both within the domain of interest and across interrelated neighbouring domains. Modern chemical research relies on computational techniques to filter and organise data to maximise research productivity. The objects which are manipulated in these algorithms and procedures, as well as the algorithms and procedures themselves, enjoy a kind of virtual life within computers; we will call these information entities. Here, we describe our work in developing an ontology of chemical information entities, with a primary focus on data-driven research and the integration of calculated properties (descriptors) of chemical entities within a semantic web context. Our ontology distinguishes algorithmic (procedural) information from declarative (factual) information, and places particular importance on annotating calculated data with its provenance. The Chemical Information Ontology is being developed as an open collaborative project. More details, together with a downloadable OWL file, are available at http://code.google.com/p/semanticchemistry/ (license: CC-BY-SA). PMID:21991315
Automated extraction of chemical structure information from digital raster images
Park, Jungkap; Rosania, Gus R; Shedden, Kerby A; Nguyen, Mandee; Lyu, Naesung; Saitou, Kazuhiro
2009-01-01
Background To search for chemical structures in research articles, diagrams or text representing molecules need to be translated into a standard chemical file format compatible with cheminformatic search engines. However, chemical information in research articles is often presented as analog diagrams of chemical structures embedded in digital raster images. Several software systems have been developed to automate the analog-to-digital conversion of chemical structure diagrams in scientific research articles, but their algorithmic performance and utility in cheminformatic research have not been investigated. Results This paper provides critical reviews of these systems and also reports our recent development of ChemReader, a fully automated tool for extracting chemical structure diagrams from research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be run independently in sequence from a graphical user interface, and the algorithm parameters can be readily changed, to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, ChemReader performs better on several sets of sample images from diverse sources in terms of the rate of correct outputs and the accuracy in extracting molecular substructure patterns. Conclusion The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently reliable for annotating chemical databases with links to scientific research articles. PMID:19196483
Software platform virtualization in chemistry research and university teaching.
Kind, Tobias; Leamy, Tim; Leary, Julie A; Fiehn, Oliver
2009-11-16
Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single guest operating system to execute multiple other operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages we have confirmed that the computational speed penalty for using virtual machines is low and around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide.
Structure-based classification and ontology in chemistry
2012-01-01
Background Recent years have seen an explosion in the availability of data in the chemistry domain. With this information explosion, however, retrieving relevant results from the available information, and organising those results, become even harder problems. Computational processing is essential to filter and organise the available resources so as to better facilitate the work of scientists. Ontologies encode expert domain knowledge in a hierarchically organised machine-processable format. One such ontology for the chemical domain is ChEBI. ChEBI provides a classification of chemicals based on their structural features and a role or activity-based classification. An example of a structure-based class is 'pentacyclic compound' (compounds containing five-ring structures), while an example of a role-based class is 'analgesic', since many different chemicals can act as analgesics without sharing structural features. Structure-based classification in chemistry exploits elegant regularities and symmetries in the underlying chemical domain. As yet, there has been neither a systematic analysis of the types of structural classification in use in chemistry nor a comparison to the capabilities of available technologies. Results We analyze the different categories of structural classes in chemistry, presenting a list of patterns for features found in class definitions. We compare these patterns of class definition to tools which allow for automation of hierarchy construction within cheminformatics and within logic-based ontology technology, going into detail in the latter case with respect to the expressive capabilities of the Web Ontology Language and recent extensions for modelling structured objects. Finally we discuss the relationships and interactions between cheminformatics approaches and logic-based approaches. Conclusion Systems that perform intelligent reasoning tasks on chemistry data require a diverse set of underlying computational utilities including algorithmic, statistical and logic-based tools. For the task of automatic structure-based classification of chemical entities, essential to managing the vast swathes of chemical data being brought online, systems which are capable of hybrid reasoning combining several different approaches are crucial. We provide a thorough review of the available tools and methodologies, and identify areas of open research. PMID:22480202
Harnessing the potential of natural products in drug discovery from a cheminformatics vantage point.
Rodrigues, Tiago
2017-11-15
Natural products (NPs) present a privileged source of inspiration for chemical probe and drug design. Despite the biological pre-validation of the underlying molecular architectures and their relevance in drug discovery, the poor accessibility to NPs, complexity of the synthetic routes and scarce knowledge of their macromolecular counterparts in phenotypic screens still hinder their broader exploration. Cheminformatics algorithms now provide a powerful means of circumventing the abovementioned challenges and unlocking the full potential of NPs in a drug discovery context. Herein, I discuss recent advances in the computer-assisted design of NP mimics and how artificial intelligence may accelerate future NP-inspired molecular medicine.
Fourches, Denis; Barnes, Julie C; Day, Nicola C; Bradley, Paul; Reed, Jane Z; Tropsha, Alexander
2010-01-01
Drug-induced liver injury is one of the main causes of drug attrition. The ability to predict the liver effects of drug candidates from their chemical structures is critical to help guide experimental drug discovery projects toward safer medicines. In this study, we have compiled a data set of 951 compounds reported to produce a wide range of effects in the liver in different species, comprising humans, rodents, and nonrodents. The liver effects for this data set were obtained as assertional metadata, generated from MEDLINE abstracts using a unique combination of lexical and linguistic methods and ontological rules. We have analyzed this data set using conventional cheminformatics approaches and addressed several questions pertaining to cross-species concordance of liver effects, chemical determinants of liver effects in humans, and the prediction of whether a given compound is likely to cause a liver effect in humans. We found that the concordance of liver effects was relatively low (ca. 39-44%) between different species, raising the possibility that species specificity could depend on specific features of chemical structure. Compounds were clustered by their chemical similarity, and similar compounds were examined for the expected similarity of their species-dependent liver effect profiles. In most cases, similar profiles were observed for members of the same cluster, but some compounds appeared as outliers. The outliers were the subject of focused assertion regeneration from MEDLINE as well as other data sources. In some cases, additional biological assertions were identified, which were in line with expectations based on compounds' chemical similarities. The assertions were further converted to binary annotations of underlying chemicals (i.e., liver effect vs no liver effect), and binary quantitative structure-activity relationship (QSAR) models were generated to predict whether a compound would be expected to produce liver effects in humans. Despite the apparent heterogeneity of data, models have shown good predictive power assessed by external 5-fold cross-validation procedures. The external predictive power of binary QSAR models was further confirmed by their application to compounds that were retrieved or studied after the model was developed. To the best of our knowledge, this is the first study for chemical toxicity prediction that applied QSAR modeling and other cheminformatics techniques to observational data generated by the means of automated text mining with limited manual curation, opening up new opportunities for generating and modeling chemical toxicology data.
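The final modeling step described above, binary QSAR with external k-fold cross-validation, can be sketched generically (random arrays stand in for fingerprints and liver-effect annotations; this is not the paper's descriptor set or protocol):

    # Generic binary QSAR sketch with 5-fold external cross-validation.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(42)
    X = rng.integers(0, 2, size=(200, 128))   # hypothetical binary fingerprints
    y = rng.integers(0, 2, size=200)          # hypothetical liver-effect labels

    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(RandomForestClassifier(n_estimators=200),
                             X, y, cv=cv, scoring="balanced_accuracy")
    print("5-fold balanced accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))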
DSSTox chemical-index files for exposure-related ...
The Distributed Structure-Searchable Toxicity (DSSTox) ARYEXP and GEOGSE files are newly published, structure-annotated files of the chemical-associated and chemical exposure-related summary experimental content contained in the ArrayExpress Repository and Gene Expression Omnibus (GEO) Series (based on data extracted on September 20, 2008). ARYEXP and GEOGSE contain 887 and 1064 unique chemical substances mapped to 1835 and 2381 chemical exposure-related experiment accession IDs, respectively. The standardized files allow one to assess, compare and search the chemical content of each resource, in the context of the larger DSSTox toxicology data network as well as across large public cheminformatics resources such as PubChem (http://pubchem.ncbi.nlm.nih.gov).
Thompson, Corbin G; Sedykh, Alexander; Nicol, Melanie R; Muratov, Eugene; Fourches, Denis; Tropsha, Alexander; Kashuba, Angela D M
2014-11-01
The exposure of oral antiretroviral (ARV) drugs in the female genital tract (FGT) is variable and almost unpredictable. Identifying an efficient method to find compounds with high tissue penetration would streamline the development of regimens for both HIV preexposure prophylaxis and viral reservoir targeting. Here we describe the cheminformatics investigation of diverse drugs with known FGT penetration using cluster analysis and quantitative structure-activity relationship (QSAR) modeling. A literature search over the 1950-2012 period identified 58 compounds (including 21 ARVs and representing 13 drug classes) with actual concentration data for cervical or vaginal tissue or cervicovaginal fluid. Cluster analysis revealed significant trends in penetrative ability for certain chemotypes. QSAR models to predict genital tract concentrations normalized to blood plasma concentrations were developed with two machine learning techniques using drugs' molecular descriptors and pharmacokinetic parameters as inputs. The QSAR model with the highest predictive accuracy had R^2(test) = 0.47. High volume of distribution, high MRP1 substrate probability, and low MRP4 substrate probability were associated with FGT concentrations ≥1.5-fold plasma concentrations. However, due to the limited FGT data available, the prediction performance of all models was low. Despite this limitation, we were able to support our findings by correctly predicting the penetration class of rilpivirine and dolutegravir. With more data to enrich the models, we believe these methods could potentially enhance the current approach of clinical testing.
Using Graph Indices for the Analysis and Comparison of Chemical Datasets.
Fourches, Denis; Tropsha, Alexander
2013-10-01
In cheminformatics, compounds are represented as points in multidimensional space of chemical descriptors. When all pairs of points found within certain distance threshold in the original high dimensional chemistry space are connected by distance-labeled edges, the resulting data structure can be defined as Dataset Graph (DG). We show that, similarly to the conventional description of organic molecules, many graph indices can be computed for DGs as well. We demonstrate that chemical datasets can be effectively characterized and compared by computing simple graph indices such as the average vertex degree or Randic connectivity index. This approach is used to characterize and quantify the similarity between different datasets or subsets of the same dataset (e.g., training, test, and external validation sets used in QSAR modeling). The freely available ADDAGRA program has been implemented to build and visualize DGs. The approach proposed and discussed in this report could be further explored and utilized for different cheminformatics applications such as dataset diversification by acquiring external compounds, dataset processing prior to QSAR modeling, or (dis)similarity modeling of multiple datasets studied in chemical genomics applications. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
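A minimal sketch of the Dataset Graph construction and the two indices mentioned (hypothetical descriptor vectors and distance threshold; the ADDAGRA program itself is not reproduced here):

    # Build a Dataset Graph: connect compounds closer than a threshold in
    # descriptor space, then compute simple graph indices.
    import itertools, math
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(1)
    X = rng.normal(size=(30, 5))              # hypothetical descriptor vectors
    threshold = 2.5

    G = nx.Graph()
    G.add_nodes_from(range(len(X)))
    for i, j in itertools.combinations(range(len(X)), 2):
        d = float(np.linalg.norm(X[i] - X[j]))
        if d < threshold:
            G.add_edge(i, j, distance=d)

    avg_degree = 2 * G.number_of_edges() / G.number_of_nodes()
    randic = sum(1.0 / math.sqrt(G.degree(u) * G.degree(v)) for u, v in G.edges())
    print(f"average vertex degree: {avg_degree:.2f}, Randic index: {randic:.2f}")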
MONA – Interactive manipulation of molecule collections
2013-01-01
Working with small-molecule datasets is a routine task for cheminformaticians and chemists. The analysis and comparison of vendor catalogues and the compilation of promising candidates as starting points for screening campaigns are but a few very common applications. The workflows applied for this purpose usually consist of multiple basic cheminformatics tasks such as checking for duplicates or filtering by physico-chemical properties. Pipelining tools allow such workflows to be created and changed without much effort, but usually do not support interventions once the pipeline has been started. In many contexts, however, the best suited workflow is not known in advance, making it necessary to take the results of the previous steps into consideration before proceeding. To support intuition-driven processing of compound collections, we developed MONA, an interactive tool designed to prepare and visualize large small-molecule datasets. Using an SQL database, common cheminformatics tasks such as analysis and filtering can be performed interactively with various methods for visual support. Great care was taken in creating a simple, intuitive user interface which can be used instantly without any setup steps. MONA combines the interactivity of molecule database systems with the simplicity of pipelining tools, enabling the case-to-case application of chemistry expert knowledge. The current version is available free of charge for academic use and can be downloaded at http://www.zbh.uni-hamburg.de/mona. PMID:23985157
Computational chemistry and cheminformatics: an essay on the future.
Glen, Robert Charles
2012-01-01
Computers have changed the way we do science. Surrounded by a sea of data and with phenomenal computing capacity, the methodology and approach to scientific problems is evolving into a partnership between experiment, theory and data analysis. Given the pace of change of the last twenty-five years, it seems folly to speculate on the future, but along with unpredictable leaps of progress there will be a continuous evolution of capability, which points to opportunities and improvements that will certainly appear as our discipline matures.
Redefining Cheminformatics with Intuitive Collaborative Mobile Apps
Clark, Alex M; Ekins, Sean; Williams, Antony J
2012-01-01
The proliferation of mobile devices such as smartphones and tablet computers has recently been extended to include a growing ecosystem of increasingly sophisticated chemistry software packages, commonly known as apps. The capabilities that these apps can offer to the practicing chemist are approaching those of conventional desktop-based software, but apps tend to be focused on a relatively small range of tasks. To overcome this, chemistry apps must be able to seamlessly transfer data to other apps, and through the network to other devices, as well as to other platforms, such as desktops and servers, using documented file formats and protocols whenever possible. This article describes the development and state of the art with regard to chemistry-aware apps that make use of facile data interchange, and some of the scenarios in which these apps can be inserted into a chemical information workflow to increase productivity. A selection of contemporary apps is used to demonstrate their relevance to pharmaceutical research. Mobile apps represent a novel approach for delivery of cheminformatics tools to chemists and other scientists, and indications suggest that mobile devices represent a disruptive technology for drug discovery, as they have been to many other industries. PMID:23198002
Computational databases, pathway and cheminformatics tools for tuberculosis drug discovery
Ekins, Sean; Freundlich, Joel S.; Choi, Inhee; Sarker, Malabika; Talcott, Carolyn
2010-01-01
We are witnessing the growing menace of both increasing cases of drug-sensitive and drug-resistant Mycobacterium tuberculosis strains and the challenge to produce the first new tuberculosis (TB) drug in well over 40 years. The TB community, having invested in extensive high-throughput screening efforts, is faced with the question of how to optimally leverage this data in order to move from a hit to a lead to a clinical candidate and potentially a new drug. Complementing this approach, yet conducted on a much smaller scale, cheminformatic techniques have been leveraged and are herein reviewed. We suggest these computational approaches should be more optimally integrated in a workflow with experimental approaches to accelerate TB drug discovery. PMID:21129975
Estimation of hydrolysis rate constants for carbamates ...
Cheminformatics-based tools, such as the Chemical Transformation Simulator under development in EPA's Office of Research and Development, are increasingly used to evaluate chemicals for their potential to degrade in the environment or be transformed through metabolism. Hydrolysis represents a major environmental degradation pathway; unfortunately, hydrolysis rates are in the public domain for only a small fraction of the roughly 85,000 chemicals on the Toxic Substances Control Act (TSCA) inventory, making it critical to develop in silico approaches to estimate hydrolysis rate constants. In this presentation, we compare three complementary approaches to estimating hydrolysis rates for carbamates, an important chemical class widely used in agriculture as pesticides, herbicides and fungicides. Fragment-based quantitative structure-activity relationships (QSARs) using Hammett-Taft sigma constants are widely published and implemented for relatively simple functional groups such as carboxylic acid esters, phthalate esters, and organophosphate esters, and we extend these to carbamates. We also develop a pKa-based model and a quantitative structure-property relationship (QSPR) model, and evaluate them against measured rate constants using R^2 and root-mean-square (RMS) error. Our work shows that, for our relatively small sample of carbamates, the Hammett-Taft fragment model performs best, followed by the pKa model and the QSPR model.
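The two evaluation metrics named above are standard; for concreteness, a small sketch computing them for invented measured and predicted log k values:

    # R^2 and RMS error for hypothetical hydrolysis rate constants (log k).
    import numpy as np

    log_k_meas = np.array([-3.1, -4.6, -2.2, -5.0, -3.8])   # invented values
    log_k_pred = np.array([-3.4, -4.2, -2.5, -4.7, -3.5])

    ss_res = np.sum((log_k_meas - log_k_pred) ** 2)
    ss_tot = np.sum((log_k_meas - log_k_meas.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((log_k_meas - log_k_pred) ** 2))
    print(f"R^2 = {r2:.2f}, RMS error = {rmse:.2f} log units")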
UNICON: A Powerful and Easy-to-Use Compound Library Converter.
Sommer, Kai; Friedrich, Nils-Ole; Bietz, Stefan; Hilbig, Matthias; Inhester, Therese; Rarey, Matthias
2016-06-27
The accurate handling of different chemical file formats and the consistent conversion between them play important roles for calculations in complex cheminformatics workflows. Working with different cheminformatic tools often makes the conversion between file formats a mandatory step. Such a conversion might become a difficult task in cases where the information content substantially differs. This paper describes UNICON, an easy-to-use software tool for this task. The functionality of UNICON ranges from file conversion between standard formats SDF, MOL2, SMILES, PDB, and PDBx/mmCIF via the generation of 2D structure coordinates and 3D structures to the enumeration of tautomeric forms, protonation states, and conformer ensembles. For this purpose, UNICON bundles the key elements of the previously described NAOMI library in a single, easy-to-use command line tool.
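UNICON's own command-line syntax is not reproduced here; as a generic illustration of the simplest task such converters automate, a sketch that reads an SDF and writes canonical SMILES with the RDKit (file names are hypothetical):

    # Generic SDF -> SMILES conversion sketch (RDKit, not UNICON's interface).
    from rdkit import Chem

    supplier = Chem.SDMolSupplier("input.sdf")        # hypothetical input file
    with open("output.smi", "w") as out:
        for mol in supplier:
            if mol is None:                           # skip unparseable records
                continue
            name = mol.GetProp("_Name") if mol.HasProp("_Name") else ""
            out.write(Chem.MolToSmiles(mol) + " " + name + "\n")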
TOXICO-CHEMINFORMATICS AND QSAR MODELING OF ...
This abstract concludes that QSAR approaches combined with toxico-chemoinformatics descriptors can enhance predictive toxicology models.
Advances in Toxico-Cheminformatics: Supporting a New ...
EPA's National Center for Computational Toxicology is building capabilities to support a new paradigm for toxicity screening and prediction through the harnessing of legacy toxicity data, creation of data linkages, and generation of new high-throughput screening (HTS) data. The DSSTox project is working to improve public access to quality structure-annotated chemical toxicity information in less summarized forms than traditionally employed in SAR modeling, and in ways that facilitate both data-mining and read-across. Both DSSTox Structure-Files and the dedicated on-line DSSTox Structure-Browser are enabling seamless structure-based searching and linkages to and from previously isolated, chemically indexed public toxicity data resources (e.g., NTP, EPA IRIS, CPDB). Most recently, structure-enabled search capabilities have been extended to chemical exposure-related microarray experiments in the public EBI ArrayExpress database, additionally linking this resource to the NIEHS CEBS toxicogenomics database. The public DSSTox chemical and bioassay inventory has recently been integrated into PubChem, allowing a user to take full advantage of PubChem structure-activity and bioassay clustering features. The DSSTox project is providing cheminformatics support for EPA's ToxCast project, as well as supporting collaborations with the National Toxicology Program (NTP) HTS effort and the NIH Chemical Genomics Center (NCGC). Phase I of the ToxCast project is generating HTS data for a set of 320 chemicals, mostly pesticide actives.
Recent Developments in Toxico-Cheminformatics; Supporting ...
EPA's National Center for Computational Toxicology is building capabilities to support a new paradigm for toxicity screening and prediction through the harnessing of legacy toxicity data, creation of data linkages, and generation of new high-content and high-throughput screening data. In association with EPA's ToxCast, ToxRefDB, and ACToR projects, the DSSTox project provides cheminformatics support and, in addition, is improving public access to quality structure-annotated chemical toxicity information in less summarized forms than traditionally employed in SAR modeling, and in ways that facilitate data-mining and data read-across. The latest DSSTox version of the Carcinogenic Potency Database file (CPDBAS) illustrates ways in which various summary definitions of carcinogenic activity can be employed in modeling and data mining. DSSTox Structure-Browser provides structure searchability across all published DSSTox toxicity-related inventory, and is enabling linkages between previously isolated toxicity data resources associated with environmental and industrial chemicals. The public DSSTox inventory has also been integrated into PubChem, allowing a user to take full advantage of PubChem structure-activity and bioassay clustering features. Phase I of the ToxCast project is generating high-throughput screening data from several hundred biochemical and cell-based assays for a set of 320 chemicals, mostly pesticide actives with rich toxicology profiles.
Development of a Terpenoid Alkaloid-like Compound Library Based on the Humulene Skeleton.
Kikuchi, Haruhisa; Nishimura, Takehiro; Kwon, Eunsang; Kawai, Junya; Oshima, Yoshiteru
2016-10-24
Many natural terpenoid alkaloid conjugates show biological activity because their structures contain both sp3-rich terpenoid scaffolds and nitrogen-containing alkaloid scaffolds. However, their biosynthesis utilizes a limited set of compounds as sources of the terpenoid moiety. The production of terpenoid alkaloids containing various types of terpenoid moiety may therefore provide useful, chemically diverse compound libraries for drug discovery. Herein, we report the construction of a library of terpenoid alkaloid-like compounds based on Lewis-acid-catalyzed transannulation of humulene diepoxide and subsequent sequential olefin metathesis. Cheminformatic analysis quantitatively showed that the synthesized terpenoid alkaloid-like compound library has a high level of three-dimensional shape diversity. Extensive pharmacological screening of the library has led to the identification of promising compounds for the development of antihypolipidemic drugs. The synthesis of terpenoid alkaloid-like compound libraries based on humulene, and more broadly on other natural terpenoids, is therefore an effective strategy for producing chemically diverse libraries for drug discovery. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
An algorithm to identify functional groups in organic molecules.
Ertl, Peter
2017-06-07
The concept of functional groups forms a basis of organic chemistry, medicinal chemistry, toxicity assessment, spectroscopy and also chemical nomenclature. All current software systems to identify functional groups are based on a predefined list of substructures, and we are not aware of any program that can identify all functional groups in a molecule automatically. The algorithm presented in this article is an attempt to solve this scientific challenge: it identifies functional groups in a molecule by iterative marching through its atoms. The procedure is illustrated by extracting functional groups from the bioactive portion of the ChEMBL database, resulting in the identification of 3080 unique functional groups. The algorithm is relatively simple, and full details with examples are provided, so implementation in any cheminformatics toolkit should be relatively easy. The new method allows the analysis of functional groups in large chemical databases in a way that was not possible with previous approaches.
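A much-simplified rendition of the marching idea, marking heteroatoms plus carbons multiply bonded to them and then merging bonded marked atoms into candidate groups, is sketched below; Ertl's published algorithm applies further rules (e.g., for acetal carbons and heteroatoms in small rings):

    # Crude functional-group sketch, not Ertl's full algorithm.
    from rdkit import Chem

    def crude_functional_groups(smiles):
        mol = Chem.MolFromSmiles(smiles)
        marked = {a.GetIdx() for a in mol.GetAtoms() if a.GetAtomicNum() != 6}
        for bond in mol.GetBonds():
            if bond.GetBondType() in (Chem.BondType.DOUBLE, Chem.BondType.TRIPLE):
                a, b = bond.GetBeginAtom(), bond.GetEndAtom()
                if (a.GetAtomicNum() == 6) != (b.GetAtomicNum() == 6):  # C=X / C#X
                    marked.update((a.GetIdx(), b.GetIdx()))
        groups = []                       # merge marked atoms that are bonded
        for idx in sorted(marked):
            for g in groups:
                if any(mol.GetBondBetweenAtoms(idx, j) for j in g):
                    g.add(idx)
                    break
            else:
                groups.append({idx})
        return groups

    print(crude_functional_groups("CC(=O)OCCN"))   # ester + amine atom sets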
IADE: a system for intelligent automatic design of bioisosteric analogs.
Ertl, Peter; Lewis, Richard
2012-11-01
IADE, a software system supporting molecular modellers through the automatic design of non-classical bioisosteric analogs, scaffold hopping and fragment growing, is presented. The program combines sophisticated cheminformatics functionalities for constructing novel analogs and filtering them based on their drug-likeness and synthetic accessibility with automatic structure-based design capabilities: the best candidates are selected according to their similarity to the template ligand and to their interactions with the protein binding site. IADE works in an iterative manner, improving the fitness of designed molecules in every generation until structures with optimal properties are identified. The program frees molecular modellers from routine, repetitive tasks, allowing them to focus on analysis and evaluation of the automatically designed analogs, considerably enhancing their work efficiency as well as the area of chemical space that can be covered. The performance of IADE is illustrated through a case study of the design of a nonclassical bioisosteric analog of a farnesyltransferase inhibitor, an analog that has won a recent "Design a Molecule" competition.
Cheminformatics and the Semantic Web: adding value with linked data and enhanced provenance
Frey, Jeremy G; Bird, Colin L
2013-01-01
Cheminformatics is evolving from being a field of study associated primarily with drug discovery into a discipline that embraces the distribution, management, access, and sharing of chemical data. The relationship with the related subject of bioinformatics is becoming stronger and better defined, owing to the influence of Semantic Web technologies, which enable researchers to integrate heterogeneous sources of chemical, biochemical, biological, and medical information. These developments depend on a range of factors: the principles of chemical identifiers and their role in relationships between chemical and biological entities; the importance of preserving provenance and properly curated metadata; and an understanding of the contribution that the Semantic Web can make at all stages of the research lifecycle. The movements toward open access, open source, and open collaboration all contribute to progress toward the goals of integration. PMID:24432050
Gómez-Bombarelli, Rafael; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D; Duvenaud, David; Maclaurin, Dougal; Blood-Forsythe, Martin A; Chae, Hyun Sik; Einzinger, Markus; Ha, Dong-Gwang; Wu, Tony; Markopoulos, Georgios; Jeon, Soonok; Kang, Hosuk; Miyazaki, Hiroshi; Numata, Masaki; Kim, Sunghan; Huang, Wenliang; Hong, Seong Ik; Baldo, Marc; Adams, Ryan P; Aspuru-Guzik, Alán
2016-10-01
Virtual screening is becoming a ground-breaking tool for molecular discovery due to the exponential growth of available computer time and constant improvement of simulation and machine learning techniques. We report an integrated organic functional material design process that incorporates theoretical insight, quantum chemistry, cheminformatics, machine learning, industrial expertise, organic synthesis, molecular characterization, device fabrication and optoelectronic testing. After exploring a search space of 1.6 million molecules and screening over 400,000 of them using time-dependent density functional theory, we identified thousands of promising novel organic light-emitting diode molecules across the visible spectrum. Our team collaboratively selected the best candidates from this set. The experimentally determined external quantum efficiencies for these synthesized candidates were as large as 22%.
Structure Elucidation of Unknown Metabolites in Metabolomics by Combined NMR and MS/MS Prediction
Boiteau, Rene M.; Hoyt, David W.; Nicora, Carrie D.; ...
2018-01-17
Here, we introduce a cheminformatics approach that combines highly selective and orthogonal structure elucidation parameters, namely accurate mass, MS/MS (MS2), and NMR, in a single analysis platform to accurately identify unknown metabolites in untargeted studies. The approach starts with an unknown LC-MS feature, and then combines the experimental MS/MS and NMR information of the unknown to effectively filter the false positive candidate structures based on their predicted MS/MS and NMR spectra. We demonstrate the approach on a model mixture and then we identify an uncatalogued secondary metabolite in Arabidopsis thaliana. The NMR/MS2 approach is well suited for discovery of new metabolites in plant extracts, microbes, soils, dissolved organic matter, food extracts, biofuels, and biomedical samples, facilitating the identification of metabolites that are not present in experimental NMR and MS metabolomics databases.
Structure Elucidation of Unknown Metabolites in Metabolomics by Combined NMR and MS/MS Prediction
Hoyt, David W.; Nicora, Carrie D.; Kinmonth-Schultz, Hannah A.; Ward, Joy K.
2018-01-01
We introduce a cheminformatics approach that combines highly selective and orthogonal structure elucidation parameters, namely accurate mass, MS/MS (MS2), and NMR, into a single analysis platform to accurately identify unknown metabolites in untargeted studies. The approach starts with an unknown LC-MS feature, and then combines the experimental MS/MS and NMR information of the unknown to effectively filter out the false positive candidate structures based on their predicted MS/MS and NMR spectra. We demonstrate the approach on a model mixture, and then we identify an uncatalogued secondary metabolite in Arabidopsis thaliana. The NMR/MS2 approach is well suited to the discovery of new metabolites in plant extracts, microbes, soils, dissolved organic matter, food extracts, biofuels, and biomedical samples, facilitating the identification of metabolites that are not present in experimental NMR and MS metabolomics databases. PMID:29342073
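The filtering step common to the records above can be caricatured as scoring each candidate structure by the agreement between its predicted spectra and the observed ones; a toy sketch with invented binned spectra (real workflows would use dedicated MS/MS and NMR predictors):

    # Rank candidate structures by combined spectral agreement (toy data).
    import numpy as np

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    observed_ms2 = np.array([0.0, 0.8, 0.1, 0.0, 0.6])   # binned intensities
    observed_nmr = np.array([0.3, 0.0, 0.9, 0.2])        # binned shifts

    candidates = {                   # name -> (predicted MS2, predicted NMR)
        "candidate_1": (np.array([0.0, 0.7, 0.2, 0.0, 0.5]),
                        np.array([0.4, 0.0, 0.8, 0.1])),
        "candidate_2": (np.array([0.9, 0.0, 0.0, 0.7, 0.0]),
                        np.array([0.0, 0.8, 0.1, 0.0])),
    }

    for name, (ms2, nmr) in candidates.items():
        score = cosine(ms2, observed_ms2) * cosine(nmr, observed_nmr)
        print(f"{name}: combined agreement {score:.2f}")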
Kuenemann, Melaine A.; Szymczyk, Malgorzata; Chen, Yufei; Sultana, Nadia; Hinks, David; Freeman, Harold S.; Williams, Antony J.
2017-01-01
We present the Max Weaver Dye Library, a collection of ∼98 000 vials of custom-made and largely sparingly water-soluble dyes. Two years ago, the Eastman Chemical Company donated the library to North Carolina State University. This unique collection of chemicals, housed in the College of Textiles, also includes tens of thousands of fabric samples dyed using some of the library's compounds. Although the collection lies at the core of hundreds of patented inventions, the overwhelming majority of this chemical treasure trove has never been published or shared outside of a small group of scientists. Thus, the goal of this donation was to make this chemical collection, and associated data, available to interested parties in the research community. To date, we have digitized a subset of 2700 dyes which allowed us to start the constitutional and structural analysis of the collection using cheminformatics approaches. Herein, we open the discussion regarding the research opportunities offered by this unique library. PMID:28959395
Pharmacological mechanism-based drug safety assessment and prediction.
Abernethy, D R; Woodcock, J; Lesko, L J
2011-06-01
Advances in cheminformatics, bioinformatics, and pharmacology in the context of biological systems are now at a point that these tools can be applied to mechanism-based drug safety assessment and prediction. The development of such predictive tools at the US Food and Drug Administration (FDA) will complement ongoing efforts in drug safety that are focused on spontaneous adverse event reporting and active surveillance to monitor drug safety. This effort will require the active collaboration of scientists in the pharmaceutical industry, academe, and the National Institutes of Health, as well as those at the FDA, to reach its full potential. Here, we describe the approaches and goals for the mechanism-based drug safety assessment and prediction program.
Molecular graph convolutions: moving beyond fingerprints
NASA Astrophysics Data System (ADS)
Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick
2016-08-01
Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.
Molecular graph convolutions: moving beyond fingerprints.
Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick
2016-08-01
Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph-atoms, bonds, distances, etc.-which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.
Computational assessment of organic photovoltaic candidate compounds
NASA Astrophysics Data System (ADS)
Borunda, Mario; Dai, Shuo; Olivares-Amaya, Roberto; Amador-Bedolla, Carlos; Aspuru-Guzik, Alan
2015-03-01
Organic photovoltaic (OPV) cells are emerging as a possible renewable alternative to petroleum-based resources and are needed to meet our growing demand for energy. Although not as efficient as silicon-based cells, OPV cells have as an advantage that their manufacturing cost is potentially lower. The Harvard Clean Energy Project, using a cheminformatic approach of pattern recognition and machine learning strategies, has ranked a molecular library of more than 2.6 million candidate compounds based on their performance as possible OPV materials. Here, we present a ranking of the top 1000 molecules for use as photovoltaic materials based on their optical absorption properties obtained via time-dependent density functional theory. This computational search has revealed the molecular motifs shared by the set of most promising molecules.
Toxico-Cheminformatics: A New Frontier for Predictive Toxicology
The DSSTox database network and efforts to improve public access to chemical toxicity information resources, coupled with high-throughput screening (HTS) data and efforts to systematize legacy toxicity studies, have the potential to significantly improve predictive capabilities i...
Web-based services for drug design and discovery.
Frey, Jeremy G; Bird, Colin L
2011-09-01
Reviews of the development of drug discovery through the 20th century recognised the importance of chemistry and increasingly bioinformatics, but had relatively little to say about the importance of computing and networked computing in particular. However, the design and discovery of new drugs is arguably the most significant single application of bioinformatics and cheminformatics to have benefitted from the increases in the range and power of the computational techniques since the emergence of the World Wide Web, commonly now referred to as simply 'the Web'. Web services have enabled researchers to access shared resources and to deploy standardized calculations in their search for new drugs. This article first considers the fundamental principles of Web services and workflows, and then explores the facilities and resources that have evolved to meet the specific needs of chem- and bio-informatics. This strategy leads to a more detailed examination of the basic components that characterise molecules and the essential predictive techniques, followed by a discussion of the emerging networked services that transcend the basic provisions, and the growing trend towards embracing modern techniques, in particular the Semantic Web. In the opinion of the authors, the issues that require community action are: increasing the amount of chemical data available for open access; validating the data as provided; and developing more efficient links between the worlds of cheminformatics and bioinformatics. The goal is to create ever better drug design services.
CHEMINFORMATICS TOOLS FOR TOXICANT CHARACTERIZATION
Overview of open resources to support automated structure verification and elucidation
Cheminformatics methods form an essential basis for providing analytical scientists with access to data, algorithms and workflows. There are an increasing number of free online databases (compound databases, spectral libraries, data repositories) and a rich collection of software...
Recent Developments in Toxico-Cheminformatics: A New Frontier for Predictive Toxicology
Efforts to improve public access to chemical toxicity information resources, coupled with new high-throughput screening (HTS) data and efforts to systematize legacy toxicity studies, have the potential to significantly improve predictive capabilities in toxicology. Important rec...
Toxico-Cheminformatics: New and Expanding Public Resources to Support Chemical Toxicity Assessments
High-throughput screening (HTS) technologies, along with efforts to improve public access to chemical toxicity information resources and to systematize older toxicity studies, have the potential to significantly improve information gathering efforts for chemical assessments and p...
New Toxico-Cheminformatics & Computational Toxicology Initiatives At EPA
EPA’s National Center for Computational Toxicology is building capabilities to support a new paradigm for toxicity screening and prediction. The DSSTox project is improving public access to quality structure-annotated chemical toxicity information in less summarized forms than tr...
Molecular graph convolutions: moving beyond fingerprints
Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick
2016-01-01
Molecular “fingerprints” encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement. PMID:27558503
The registration of new chemicals under the Toxic Substances Control Act (TSCA) and new pesticides under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) requires knowledge of the process science underlying the transformation of organic chemicals in natural...
Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective
Gu, Shuo
2017-01-01
With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine with in-depth understanding towards pharmacognosy. This paper summarized these studies in the aspects of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at network level. Finally, a computational workflow for the network-based TCM study, derived from our previous successful applications, was proposed. PMID:28690664
Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective.
Gu, Shuo; Pei, Jianfeng
2017-01-01
With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine with in-depth understanding towards pharmacognosy. This paper summarized these studies in the aspects of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at network level. Finally, a computational workflow for the network-based TCM study, derived from our previous successful applications, was proposed.
NASA Astrophysics Data System (ADS)
Olivares-Amaya, Roberto; Hachmann, Johannes; Amador-Bedolla, Carlos; Daly, Aidan; Jinich, Adrian; Atahan-Evrenk, Sule; Boixo, Sergio; Aspuru-Guzik, Alán
2012-02-01
Organic photovoltaic devices have emerged as competitors to silicon-based solar cells, currently reaching efficiencies of over 9% and offering desirable properties for manufacturing and installation. We study conjugated donor polymers for high-efficiency bulk-heterojunction photovoltaic devices with a molecular library motivated by experimental feasibility. We use quantum mechanics and a distributed computing approach to explore this vast molecular space. We will detail the screening approach starting from the generation of the molecular library, which can be easily extended to other kinds of molecular systems. We will describe the screening method for these materials, which ranges from descriptor models, ubiquitous in the drug discovery community, to eventually reaching first-principles quantum chemistry methods. We will present results on the statistical analysis, based principally on machine learning, specifically partial least squares and Gaussian processes. Alongside these, clustering methods and the use of the hypergeometric distribution reveal moieties important for the donor materials and allow us to quantify structure-property relationships. These efforts enable us to accelerate materials discovery in organic photovoltaics through our collaboration with experimental groups.
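As a rough illustration of the statistical screening stage described here, the sketch below fits a partial least squares model to a synthetic descriptor matrix and uses it to shortlist candidates for more expensive quantum chemistry. All data, dimensions, and the shortlist size are invented for demonstration.

```python
# Illustrative PLS screening step on synthetic data (not the project's
# actual descriptors or models).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 30))             # descriptor matrix, 200 candidates
y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=200)  # mock target property

pls = PLSRegression(n_components=5).fit(X, y)
scores = pls.predict(X).ravel()
top = np.argsort(scores)[::-1][:10]        # shortlist for first-principles work
print(top)
```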
Maréchal, Eric
2008-09-01
Chemogenomics is the study of the interaction of functional biological systems with exogenous small molecules, or in a broader sense the study of the intersection of biological and chemical spaces. Chemogenomics requires expertise in biology, chemistry and computational sciences (bioinformatics, cheminformatics, large-scale statistics and machine learning methods), but it is more than the simple apposition of each of these disciplines. Biological entities interacting with small molecules can be isolated proteins or more elaborate systems, from single cells to complete organisms. The biological space is therefore analyzed at various postgenomic levels (genomic, transcriptomic, proteomic or any phenotypic level). The space of small molecules is partially real, corresponding to commercial and academic collections of compounds, and partially virtual, corresponding to the chemical space possibly synthesizable. Synthetic chemistry has developed novel strategies allowing a physical exploration of this universe of possibilities. A major challenge of cheminformatics is to chart the virtual space of small molecules using realistic biological constraints (bioavailability, druggability, structural biological information). Chemogenomics is a descendant of conventional pharmaceutical approaches, since it involves the screening of chemolibraries for their effect on biological targets, and benefits from the advances in the corresponding enabling technologies and the introduction of new biological markers. Screening was originally motivated by the rigorous discovery of new drugs, neglecting and throwing away any molecule that would fail to meet the standards required for a therapeutic treatment. It is now the basis for the discovery of small molecules that might or might not be directly used as drugs, but which have an immense potential for basic research, as probes to explore an increasing number of biological phenomena. Concerns about the environmental impact of the chemical industry open new fields of research for chemogenomics.
Cheminformatic exploration of the chemical landscape of consumer products
Although consumer products are a primary source of chemical exposures, little information is available on the chemical ingredients of these products and the concentrations at which they are present. To address this data gap, we have created a database of chemicals in consumer pro...
Recent Developments in Toxico-Cheminformatics; Supporting a New Paradigm for Predictive Toxicology
EPA's National Center for Computational Toxicology is building capabilities to support a new paradigm for toxicity screening and prediction through the harnessing of legacy toxicity data, creation of data linkages, and generation of new high-content and high-throughput screening d...
Advances in Toxico-Cheminformatics: Supporting a New Paradigm for Predictive Toxicology
EPA’s National Center for Computational Toxicology is building capabilities to support a new paradigm for toxicity screening and prediction through the harnessing of legacy toxicity data, creation of data linkages, and generation of new high-throughput screening (HTS) data. The D...
Twenty Five Years in Cheminformatics - A Career Path ...
Antony Williams is a Computational Chemist at the US Environmental Protection Agency in the National Center for Computational Toxicology. He has been involved in cheminformatics and the dissemination of chemical information for over twenty-five years. He has worked for a Fortune 500 company (Eastman Kodak), in two successful start-ups (ACD/Labs and ChemSpider), for the Royal Society of Chemistry (in publishing) and, now, at the EPA. Throughout his career path he has experienced multiple diverse work cultures and focused his efforts on understanding the needs of his employers and the often unrecognized needs of a larger community. Antony will provide a short overview of his career path and discuss the various decisions that helped motivate his change in career from professional spectroscopist to website host and innovator, to working for one of the world's foremost scientific societies and now for one of the most impactful government organizations in the world. Invited Presentation at ACS Spring meeting at CINF: Careers in Chemical Information session
Zani, Carlos L; Carroll, Anthony R
2017-06-23
The discovery of novel and/or new bioactive natural products from biota sources is often confounded by the reisolation of known natural products. Dereplication strategies that involve the analysis of NMR and MS spectroscopic data to infer structural features present in purified natural products in combination with database searches of these substructures provide an efficient method to rapidly identify known natural products. Unfortunately, this strategy has been hampered by the lack of publicly available and comprehensive natural product databases and open source cheminformatics tools. A new platform, DEREP-NP, has been developed to help solve this problem. DEREP-NP uses the open source cheminformatics program DataWarrior to generate a database containing counts of 65 structural fragments present in 229 358 natural product structures derived from plants, animals, and microorganisms, published before 2013 and freely available in the nonproprietary Universal Natural Products Database (UNPD). By counting the number of times one or more of these structural features occurs in an unknown compound, as deduced from the analysis of its NMR (1H, HSQC, and/or HMBC) and/or MS data, matching structures carrying the same numeric combination of searched structural features can be retrieved from the database. Confirmation that the matching structure is the same compound can then be verified through literature comparison of spectroscopic data. This methodology can be applied to both purified natural products and fractions containing a small number of individual compounds that are often generated as screening libraries. The utility of DEREP-NP has been verified through the analysis of spectra derived from compounds (and fractions containing two or three compounds) isolated from plant, marine invertebrate, and fungal sources. DEREP-NP is freely available at https://github.com/clzani/DEREP-NP and will help to streamline the natural product discovery process.
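The numeric fragment-count matching that DEREP-NP performs can be approximated in a few lines: count substructure occurrences per molecule, then retrieve database entries with an identical count vector. The SMARTS patterns and toy database below are illustrative stand-ins, not the actual 65-fragment dictionary.

```python
# Simplified sketch of fragment-count matching; the SMARTS set is an
# invented mini-dictionary, not DEREP-NP's real one.
from rdkit import Chem

FRAGMENTS = {
    "methyl":      Chem.MolFromSmarts("[CH3]"),
    "carbonyl":    Chem.MolFromSmarts("[CX3]=[OX1]"),
    "aromatic CH": Chem.MolFromSmarts("[cH]"),
}

def fragment_counts(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return {name: len(mol.GetSubstructMatches(patt))
            for name, patt in FRAGMENTS.items()}

database = {"acetylsalicylic acid": "CC(=O)Oc1ccccc1C(=O)O"}  # toy entry
# Counts deduced for the unknown (computed from its structure here, purely
# for demonstration; in practice they come from NMR/MS interpretation):
query = fragment_counts("CC(=O)Oc1ccccc1C(=O)O")
print([name for name, smi in database.items()
       if fragment_counts(smi) == query])   # -> ['acetylsalicylic acid']
```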
Computational Approaches to Chemical Hazard Assessment
Luechtefeld, Thomas; Hartung, Thomas
2018-01-01
Computational prediction of toxicity has reached new heights as a result of decades of growth in the magnitude and diversity of biological data. Public packages for statistics and machine learning make model creation faster. New theory in machine learning and cheminformatics enables integration of chemical structure, toxicogenomics, simulated and physical data in the prediction of chemical health hazards, and other toxicological information. Our earlier publications have characterized a toxicological dataset of unprecedented scale resulting from the European REACH legislation (Registration, Evaluation, Authorisation and Restriction of Chemicals). These publications dove into potential use cases for regulatory data and some models for exploiting this data. This article analyzes the options for the identification and categorization of chemicals, moves on to the derivation of descriptive features for chemicals, discusses different kinds of targets modeled in computational toxicology, and ends with a high-level perspective of the algorithms used to create computational toxicology models. PMID:29101769
Molecular Rift: Virtual Reality for Drug Designers.
Norrby, Magnus; Grebner, Christoph; Eriksson, Joakim; Boström, Jonas
2015-11-23
Recent advances in interaction design have created new ways to use computers. One example is the ability to create enhanced 3D environments that simulate physical presence in the real world (a virtual reality). This is relevant to drug discovery since molecular models are frequently used to obtain deeper understandings of, say, ligand-protein complexes. We have developed a tool (Molecular Rift), which creates a virtual reality environment steered with hand movements. Oculus Rift, a head-mounted display, is used to create the virtual settings. The program is controlled by gesture recognition, using the gaming sensor MS Kinect v2, eliminating the need for standard input devices. The Open Babel toolkit was integrated to provide access to powerful cheminformatics functions. Molecular Rift was developed with a focus on usability, including iterative test-group evaluations. We conclude with reflections on virtual reality's future capabilities in chemistry and education. Molecular Rift is open source and can be downloaded from GitHub.
Route to three-dimensional fragments using diversity-oriented synthesis
Hung, Alvin W.; Ramek, Alex; Wang, Yikai; Kaya, Taner; Wilson, J. Anthony; Clemons, Paul A.; Young, Damian W.
2011-01-01
Fragment-based drug discovery (FBDD) has proven to be an effective means of producing high-quality chemical ligands as starting points for drug-discovery pursuits. The increasing number of clinical candidate drugs developed using FBDD approaches is a testament of the efficacy of this approach. The success of fragment-based methods is highly dependent on the identity of the fragment library used for screening. The vast majority of FBDD has centered on the use of sp2-rich aromatic compounds. An expanded set of fragments that possess more 3D character would provide access to a larger chemical space of fragments than those currently used. Diversity-oriented synthesis (DOS) aims to efficiently generate a set of molecules diverse in skeletal and stereochemical properties. Molecules derived from DOS have also displayed significant success in the modulation of function of various “difficult” targets. Herein, we describe the application of DOS toward the construction of a unique set of fragments containing highly sp3-rich skeletons for fragment-based screening. Using cheminformatic analysis, we quantified the shapes and physical properties of the new 3D fragments and compared them with a database containing known fragment-like molecules. PMID:21482811
Route to three-dimensional fragments using diversity-oriented synthesis.
Hung, Alvin W; Ramek, Alex; Wang, Yikai; Kaya, Taner; Wilson, J Anthony; Clemons, Paul A; Young, Damian W
2011-04-26
Fragment-based drug discovery (FBDD) has proven to be an effective means of producing high-quality chemical ligands as starting points for drug-discovery pursuits. The increasing number of clinical candidate drugs developed using FBDD approaches is a testament of the efficacy of this approach. The success of fragment-based methods is highly dependent on the identity of the fragment library used for screening. The vast majority of FBDD has centered on the use of sp2-rich aromatic compounds. An expanded set of fragments that possess more 3D character would provide access to a larger chemical space of fragments than those currently used. Diversity-oriented synthesis (DOS) aims to efficiently generate a set of molecules diverse in skeletal and stereochemical properties. Molecules derived from DOS have also displayed significant success in the modulation of function of various "difficult" targets. Herein, we describe the application of DOS toward the construction of a unique set of fragments containing highly sp3-rich skeletons for fragment-based screening. Using cheminformatic analysis, we quantified the shapes and physical properties of the new 3D fragments and compared them with a database containing known fragment-like molecules.
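One common cheminformatic way to quantify the "three-dimensionality" discussed here is the normalized principal moment-of-inertia ratio (NPR) plot, in which rod-, disc- and sphere-like shapes occupy different corners. The RDKit sketch below is a generic shape analysis offered as an illustration; it is not necessarily the exact metric used in the study.

```python
# Generic 3D-shape analysis via normalized PMI ratios (illustrative).
from rdkit import Chem
from rdkit.Chem import AllChem, Descriptors3D

def npr_shape(smiles, seed=42):
    """Normalized PMI ratios: near (0, 1) is rod-like, (0.5, 0.5)
    disc-like, and (1, 1) sphere-like."""
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    AllChem.EmbedMolecule(mol, randomSeed=seed)   # generate one 3D conformer
    return Descriptors3D.NPR1(mol), Descriptors3D.NPR2(mol)

print(npr_shape("c1ccccc1"))      # benzene: flat, near the disc corner
print(npr_shape("C1CC2CCC1CC2"))  # bicyclo[2.2.2]octane: more sphere-like
```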
Melagraki, Georgia; Ntougkos, Evangelos; Rinotas, Vagelis; Papaneophytou, Christos; Leonis, Georgios; Mavromoustakos, Thomas; Kontopidis, George; Douni, Eleni; Afantitis, Antreas; Kollias, George
2017-04-01
We present an in silico drug discovery pipeline developed and applied for the identification and virtual screening of small-molecule Protein-Protein Interaction (PPI) compounds that act as dual inhibitors of TNF and RANKL through the trimerization interface. The cheminformatics part of the pipeline was developed by combining structure-based with ligand-based modeling using the largest available set of known TNF inhibitors in the literature (2481 small molecules). To facilitate virtual screening, the consensus predictive model was made freely available at: http://enalos.insilicotox.com/TNFPubChem/. We thus generated a priority list of nine small molecules as candidates for direct TNF function inhibition. In vitro evaluation of these compounds led to the selection of two small molecules that act as potent direct inhibitors of TNF function, with IC50 values comparable to those of a previously-described direct inhibitor (SPD304), but with significantly reduced toxicity. These molecules were also identified as RANKL inhibitors and validated in vitro with respect to this second functionality. Direct binding of the two compounds was confirmed both for TNF and RANKL, as well as their ability to inhibit the biologically-active trimer forms. Molecular dynamics calculations were also carried out for the two small molecules in each protein to offer additional insight into the interactions that govern TNF and RANKL complex formation. To our knowledge, these compounds, namely T8 and T23, constitute the second and third published examples of dual small-molecule direct function inhibitors of TNF and RANKL, and could serve as lead compounds for the development of novel treatments for inflammatory and autoimmune diseases.
The registration of new chemicals under the Toxic Substances Control Act (TSCA) and new pesticides under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) requires knowledge of the process science underlying the transport and transformation of organic chemicals in n...
ACToR Chemical Structure processing using Open Source ChemInformatics Libraries (FutureToxII)
ACToR (Aggregated Computational Toxicology Resource) is a centralized database repository developed by the National Center for Computational Toxicology (NCCT) at the U.S. Environmental Protection Agency (EPA). Free and open source tools were used to compile toxicity data from ove...
NASA Astrophysics Data System (ADS)
Bardant, Teuku Beuna; Dahnum, Deliana; Amaliyah, Nur
2017-11-01
Simultaneous Saccharification and Fermentation (SSF) of palm oil (Elaeis guineensis) empty fruit bunch (EFB) pulp was investigated as part of an ethanol production process. SSF was studied by observing the effect on ethanol yield of substrate loading varied in the range 10-20% w/w, cellulase loading of 5-30 FPU/g substrate, and yeast addition of 1-2% v/v. A mathematical model describing the effects of these three variables on ethanol yield was developed using Response Surface Methodology-Cheminformatics (RSM-CI). The model gave acceptable accuracy in predicting ethanol yield for SSF, with a coefficient of determination (R2) of 0.8899. Model validation based on data from a previous study gave R2 = 0.7942, which is acceptable for using this model in trend-prediction analysis. Trend analysis based on the model's predictions showed that SSF trends toward higher yield when the process is operated at high enzyme concentration and low substrate concentration. On the other hand, although the SHF model showed that a better yield would be obtained at lower substrate concentration, it is still possible to operate at higher substrate concentration with a slightly lower yield. The opportunity provided by SHF to operate at high substrate loading makes it the preferable option for application at commercial scale.
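A response-surface fit of this kind reduces to regressing yield on a second-order polynomial of the three factors. The sketch below does this with scikit-learn; the variable ranges mirror those in the abstract, but the coefficients and yields are synthetic stand-ins, not the study's data.

```python
# Hedged sketch of a second-order response-surface fit on synthetic data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# 40 mock runs over: substrate %w/w, cellulase FPU/g, yeast %v/v
X = rng.uniform([10, 5, 1], [20, 30, 2], size=(40, 3))
y = 50 - 1.2 * X[:, 0] + 0.8 * X[:, 1] + 3 * X[:, 2] \
    - 0.01 * X[:, 1] ** 2 + rng.normal(scale=1.0, size=40)  # mock yields

rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rsm.fit(X, y)
print("R2 =", rsm.score(X, y))            # coefficient of determination
print(rsm.predict([[12.0, 25.0, 1.5]]))   # predicted yield at new settings
```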
Wang, Cheng; He, Lidong; Li, Da-Wei; Bruschweiler-Li, Lei; Marshall, Alan G; Brüschweiler, Rafael
2017-10-06
Metabolite identification in metabolomics samples is a key step that critically impacts downstream analysis. We recently introduced the SUMMIT NMR/mass spectrometry (MS) hybrid approach for the identification of the molecular structure of unknown metabolites based on the combination of NMR, MS, and combinatorial cheminformatics. Here, we demonstrate the feasibility of the approach for an untargeted analysis of both a model mixture and E. coli cell lysate based on 2D/3D NMR experiments in combination with Fourier transform ion cyclotron resonance MS and MS/MS data. For 19 of the 25 model metabolites, SUMMIT yielded complete structures that matched those in the mixture independent of database information. Of those, seven top-ranked structures matched those in the mixture, and four of those were further validated by positive ion MS/MS. For five metabolites, not part of the 19 metabolites, correct molecular structural motifs could be identified. For E. coli, SUMMIT MS/NMR identified 20 previously known metabolites with three or more 1H spins independent of database information. Moreover, for 15 unknown metabolites, molecular structural fragments were determined consistent with their spin systems and chemical shifts. By providing structural information for entire metabolites or molecular fragments, SUMMIT MS/NMR greatly assists the targeted or untargeted analysis of complex mixtures of unknown compounds.
The Adverse Outcome Pathway (AOP) framework has emerged to capitalise on the vast quantity of mechanistic data generated by alternative techniques, as well as advances in systems biology, cheminformatics, and bioinformatics. AOPs provide a scaffold onto which mechanistic data can...
EPA’s National Center for Computational Toxicology is generating data and capabilities to support a new paradigm for toxicity screening and prediction. The DSSTox project is improving public access to quality structure-annotated chemical toxicity information in less summarized fo...
The importance of data curation on QSAR Modeling - PHYSPROP open data as a case study. (QSAR 2016)
During the last few decades many QSAR models and tools have been developed at the US EPA, including the widely used EPISuite. During this period the arsenal of computational capabilities supporting cheminformatics has broadened dramatically with multiple software packages. These ...
EPA's National Center for Computational Toxicology is building capabilities to support a new paradigm for toxicity screening and prediction through harnessing of legacy toxicity data, creation of data linkages, and generation of new in vitro screening data. In association with EPA...
EPA's Computational Toxicology Center is building capabilities to support a new paradigm for toxicity screening and prediction through harnessing of legacy toxicity data, creation of data linkages, and generation of new in vitro screening data. In association with EPA's ToxCast™...
Novel Activities of Select NSAID R-Enantiomers against Rac1 and Cdc42 GTPases
Oprea, Tudor I.; Sklar, Larry A.; Agola, Jacob O.; Guo, Yuna; Silberberg, Melina; Roxby, Joshua; Vestling, Anna; Romero, Elsa; Surviladze, Zurab; Murray-Krezan, Cristina; Waller, Anna; Ursu, Oleg; Hudson, Laurie G.; Wandinger-Ness, Angela
2015-01-01
Rho family GTPases (including Rac, Rho and Cdc42) collectively control cell proliferation, adhesion and migration and are of interest as functional therapeutic targets in numerous epithelial cancers. Based on high throughput screening of the Prestwick Chemical Library® and cheminformatics we identified the R-enantiomers of two approved drugs (naproxen and ketorolac) as inhibitors of Rac1 and Cdc42. The corresponding S-enantiomers are considered the active component in racemic drug formulations, acting as non-steroidal anti-inflammatory drugs (NSAIDs) with selective activity against cyclooxygenases. Here, we show that the S-enantiomers of naproxen and ketorolac are inactive against the GTPases. Additionally, more than twenty other NSAIDs lacked inhibitory action against the GTPases, establishing the selectivity of the two identified NSAIDs. R-naproxen was first identified as a lead compound and tested in parallel with its S-enantiomer and the non-chiral 6-methoxy-naphthalene acetic acid (active metabolite of nabumetone, another NSAID) as a structural series. Cheminformatics-based substructure analyses—using the rotationally constrained carboxylate in R-naproxen—led to identification of racemic [R/S] ketorolac as a suitable FDA-approved candidate. Cell based measurement of GTPase activity (in animal and human cell lines) demonstrated that the R-enantiomers specifically inhibit epidermal growth factor stimulated Rac1 and Cdc42 activation. The GTPase inhibitory effects of the R-enantiomers in cells largely mimic those of established Rac1 (NSC23766) and Cdc42 (CID2950007/ML141) specific inhibitors. Docking predicts that rotational constraints position the carboxylate moieties of the R-enantiomers to preferentially coordinate the magnesium ion, thereby destabilizing nucleotide binding to Rac1 and Cdc42. The S-enantiomers can be docked but are less favorably positioned in proximity to the magnesium. R-naproxen and R-ketorolac have potential for rapid translation and efficacy in the treatment of several epithelial cancer types on account of established human toxicity profiles and novel activities against Rho-family GTPases. PMID:26558612
OrChem - An open source chemistry search engine for Oracle®.
Rijnbeek, Mark; Steinbeck, Christoph
2009-10-22
Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net.
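OrChem's searches run inside the Oracle database, but the core operation it indexes (fingerprint-based Tanimoto similarity) is easy to sketch out-of-database. The example below uses RDKit rather than the CDK that OrChem builds on, purely for illustration; it is not OrChem's API.

```python
# Out-of-database analogue of a fingerprint similarity search.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def tanimoto_search(query_smiles, library_smiles, cutoff=0.3):
    """Return (smiles, similarity) for library members above the cutoff."""
    def fp(s):
        return AllChem.GetMorganFingerprintAsBitVect(
            Chem.MolFromSmiles(s), 2, nBits=2048)
    qfp = fp(query_smiles)
    hits = []
    for s in library_smiles:
        sim = DataStructs.TanimotoSimilarity(qfp, fp(s))
        if sim >= cutoff:
            hits.append((s, sim))
    return sorted(hits, key=lambda pair: -pair[1])

# Loose cutoff, tiny toy library; OrChem performs the same comparison
# against pre-computed fingerprints indexed inside the database.
print(tanimoto_search("Oc1ccccc1", ["Nc1ccccc1", "Oc1ccccc1C", "CCO"]))
```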
ChemBank: a small-molecule screening and cheminformatics resource database.
Seiler, Kathleen Petri; George, Gregory A; Happ, Mary Pat; Bodycombe, Nicole E; Carrinski, Hyman A; Norton, Stephanie; Brudz, Steve; Sullivan, John P; Muhlich, Jeremy; Serrano, Martin; Ferraiolo, Paul; Tolliday, Nicola J; Schreiber, Stuart L; Clemons, Paul A
2008-01-01
ChemBank (http://chembank.broad.harvard.edu/) is a public, web-based informatics environment developed through a collaboration between the Chemical Biology Program and Platform at the Broad Institute of Harvard and MIT. This knowledge environment includes freely available data derived from small molecules and small-molecule screens and resources for studying these data. ChemBank is unique among small-molecule databases in its dedication to the storage of raw screening data, its rigorous definition of screening experiments in terms of statistical hypothesis testing, and its metadata-based organization of screening experiments into projects involving collections of related assays. ChemBank stores an increasingly varied set of measurements derived from cells and other biological assay systems treated with small molecules. Analysis tools are available and are continuously being developed that allow the relationships between small molecules, cell measurements, and cell states to be studied. Currently, ChemBank stores information on hundreds of thousands of small molecules and hundreds of biomedically relevant assays that have been performed at the Broad Institute by collaborators from the worldwide research community. The goal of ChemBank is to provide life scientists unfettered access to biomedically relevant data and tools heretofore available primarily in the private sector.
Karapetyan, Karen; Batchelor, Colin; Sharpe, David; Tkachenko, Valery; Williams, Antony J
2015-01-01
There are presently hundreds of online databases hosting millions of chemical compounds and associated data. As a result of the number of cheminformatics software tools that can be used to produce the data, subtle differences between the various cheminformatics platforms, as well as the naivety of the software users, there are a myriad of issues that can exist with chemical structure representations online. In order to help facilitate validation and standardization of chemical structure datasets from various sources we have delivered a freely available internet-based platform to the community for the processing of chemical compound datasets. The chemical validation and standardization platform (CVSP) both validates and standardizes chemical structure representations according to sets of systematic rules. The chemical validation algorithms detect issues with submitted molecular representations using pre-defined or user-defined dictionary-based molecular patterns that are chemically suspicious or potentially require manual review. Each identified issue is assigned one of three levels of severity - Information, Warning, and Error - in order to conveniently inform the user of the need to browse and review subsets of their data. The validation process covers atoms and bonds (e.g., flagging query atoms and bonds), valences, and stereochemistry. The standard form of submission of collections of data, the SDF file, allows the user to map the data fields to predefined CVSP fields for the purpose of cross-validating associated SMILES and InChIs with the connection tables contained within the SDF file. This platform has been applied to the analysis of a large number of datasets prepared for deposition to our ChemSpider database and in preparation of data for the Open PHACTS project. In this work we review the results of the automated validation of the DrugBank dataset, a popular drug and drug target database utilized by the community, and the ChEMBL 17 dataset. The CVSP web site is located at http://cvsp.chemspider.com/. A platform for the validation and standardization of chemical structure representations of various formats has been developed and made available to the community to assist and encourage the processing of chemical structure files to produce more homogeneous compound representations for exchange and interchange between online databases. While the CVSP platform is designed with flexibility inherent to the rules that can be used for processing the data, we have produced a recommended rule set based on our own experiences with large datasets such as DrugBank, ChEMBL, and datasets from ChemSpider.
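The dictionary-of-patterns idea with three severity levels can be sketched compactly. The rules, SMARTS patterns, and messages below are invented examples for illustration only; they are not CVSP's actual rule set.

```python
# Invented mini rule set with CVSP-style severities (illustrative only).
from rdkit import Chem

RULES = [
    ("Error",       "[N-]=[N+]=[N-]", "azide present: flag for manual review"),
    ("Warning",     "[N+](=O)[O-]",   "charge-separated nitro representation"),
    ("Information", "[2H]",           "deuterium isotope label present"),
]

def validate(smiles):
    """Return (severity, message) pairs for every rule the structure trips."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return [("Error", "structure failed to parse or sanitize")]
    return [(sev, msg) for sev, smarts, msg in RULES
            if mol.HasSubstructMatch(Chem.MolFromSmarts(smarts))]

print(validate("C[N+](=O)[O-]"))  # nitromethane -> one Warning
```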
ConfChem Conference on Select 2016 BCCE Presentations: Twentieth Year of the OLCC
ERIC Educational Resources Information Center
Belford, Robert E.
2017-01-01
The ACS CHED Committee on Computers in Chemical Education (CCCE) ran the first intercollegiate OnLine Chemistry Course (OLCC) on Environmental and Industrial Chemistry in 1996, and is offering the seventh OLCC on Cheminformatics and Public Compound Databases: An Introduction to Big Data in Chemistry in 2017. This Communication summarizes the past,…
ERIC Educational Resources Information Center
Kamijo, Haruo; Morii, Shingo; Yamaguchi, Wataru; Toyooka, Naoki; Tada-Umezaki, Masahito; Hirobayashi, Shigeki
2016-01-01
Various tactile methods, such as Braille, have been employed to enhance the recognition ability of chemical structures by individuals with visual disabilities. However, it is unknown whether reading aloud the names of chemical compounds would be effective in this regard. There are no systems currently available using an audio component to assist…
The expansion of chemical-bioassay data in the public domain is a boon to science; however, the difficulty in establishing accurate linkages from CAS registry number (CASRN) to structure, or in properly annotating names and synonyms for a particular structure, is well known. DSS...
OrChem - An open source chemistry search engine for Oracle®
2009-01-01
Background Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Results Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. Availability OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net. PMID:20298521
Skinnider, Michael A; Dejong, Chris A; Franczak, Brian C; McNicholas, Paul D; Magarvey, Nathan A
2017-08-16
Natural products represent a prominent source of pharmaceutically and industrially important agents. Calculating the chemical similarity of two molecules is a central task in cheminformatics, with applications at multiple stages of the drug discovery pipeline. Quantifying the similarity of natural products is a particularly important problem, as the biological activities of these molecules have been extensively optimized by natural selection. The large and structurally complex scaffolds of natural products distinguish their physical and chemical properties from those of synthetic compounds. However, no analysis of the performance of existing methods for molecular similarity calculation specific to natural products has been reported to date. Here, we present LEMONS, an algorithm for the enumeration of hypothetical modular natural product structures. We leverage this algorithm to conduct a comparative analysis of molecular similarity methods within the unique chemical space occupied by modular natural products using controlled synthetic data, and comprehensively investigate the impact of diverse biosynthetic parameters on similarity search. We additionally investigate a recently described algorithm for natural product retrobiosynthesis and alignment, and find that when rule-based retrobiosynthesis can be applied, this approach outperforms conventional two-dimensional fingerprints, suggesting it may represent a valuable approach for the targeted exploration of natural product chemical space and microbial genome mining. Our open-source algorithm is an extensible method of enumerating hypothetical natural product structures with diverse potential applications in bioinformatics.
NASA Astrophysics Data System (ADS)
Dai, Shao-Xing; Li, Wen-Xing; Han, Fei-Fei; Guo, Yi-Cheng; Zheng, Jun-Juan; Liu, Jia-Qian; Wang, Qian; Gao, Yue-Dong; Li, Gong-Hua; Huang, Jing-Fei
2016-05-01
There is a constant demand to develop new, effective, and affordable anti-cancer drugs. The traditional Chinese medicine (TCM) is a valuable and alternative resource for identifying novel anti-cancer agents. In this study, we aim to identify the anti-cancer compounds and plants from the TCM database by using cheminformatics. We first predicted 5278 anti-cancer compounds from TCM database. The top 346 compounds were highly potent active in the 60 cell lines test. Similarity analysis revealed that 75% of the 5278 compounds are highly similar to the approved anti-cancer drugs. Based on the predicted anti-cancer compounds, we identified 57 anti-cancer plants by activity enrichment. The identified plants are widely distributed in 46 genera and 28 families, which broadens the scope of the anti-cancer drug screening. Finally, we constructed a network of predicted anti-cancer plants and approved drugs based on the above results. The network highlighted the supportive role of the predicted plant in the development of anti-cancer drug and suggested different molecular anti-cancer mechanisms of the plants. Our study suggests that the predicted compounds and plants from TCM database offer an attractive starting point and a broader scope to mine for potential anti-cancer agents.
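The "activity enrichment" used to nominate plants can be read as a hypergeometric over-representation test: is a plant's share of predicted actives larger than chance would allow? The sketch below shows such a test with SciPy; the counts are illustrative, not the study's data.

```python
# Hypergeometric enrichment test for one plant (illustrative counts).
from scipy.stats import hypergeom

def enrichment_p(total_compounds, total_actives,
                 plant_compounds, plant_actives):
    """Probability of seeing >= plant_actives predicted actives among a
    plant's compounds if actives were distributed at random."""
    return hypergeom.sf(plant_actives - 1, total_compounds,
                        total_actives, plant_compounds)

# Toy counts: 30,000 database compounds with 5,278 predicted actives; one
# plant contributes 120 compounds, of which 60 are predicted active.
print(enrichment_p(30000, 5278, 120, 60))  # tiny p-value => enriched plant
```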
Dai, Shao-Xing; Li, Wen-Xing; Han, Fei-Fei; Guo, Yi-Cheng; Zheng, Jun-Juan; Liu, Jia-Qian; Wang, Qian; Gao, Yue-Dong; Li, Gong-Hua; Huang, Jing-Fei
2016-05-05
There is a constant demand to develop new, effective, and affordable anti-cancer drugs. The traditional Chinese medicine (TCM) is a valuable and alternative resource for identifying novel anti-cancer agents. In this study, we aim to identify the anti-cancer compounds and plants from the TCM database by using cheminformatics. We first predicted 5278 anti-cancer compounds from TCM database. The top 346 compounds were highly potent active in the 60 cell lines test. Similarity analysis revealed that 75% of the 5278 compounds are highly similar to the approved anti-cancer drugs. Based on the predicted anti-cancer compounds, we identified 57 anti-cancer plants by activity enrichment. The identified plants are widely distributed in 46 genera and 28 families, which broadens the scope of the anti-cancer drug screening. Finally, we constructed a network of predicted anti-cancer plants and approved drugs based on the above results. The network highlighted the supportive role of the predicted plant in the development of anti-cancer drug and suggested different molecular anti-cancer mechanisms of the plants. Our study suggests that the predicted compounds and plants from TCM database offer an attractive starting point and a broader scope to mine for potential anti-cancer agents.
Dai, Shao-Xing; Li, Wen-Xing; Han, Fei-Fei; Guo, Yi-Cheng; Zheng, Jun-Juan; Liu, Jia-Qian; Wang, Qian; Gao, Yue-Dong; Li, Gong-Hua; Huang, Jing-Fei
2016-01-01
There is a constant demand to develop new, effective, and affordable anti-cancer drugs. The traditional Chinese medicine (TCM) is a valuable and alternative resource for identifying novel anti-cancer agents. In this study, we aim to identify the anti-cancer compounds and plants from the TCM database by using cheminformatics. We first predicted 5278 anti-cancer compounds from TCM database. The top 346 compounds were highly potent active in the 60 cell lines test. Similarity analysis revealed that 75% of the 5278 compounds are highly similar to the approved anti-cancer drugs. Based on the predicted anti-cancer compounds, we identified 57 anti-cancer plants by activity enrichment. The identified plants are widely distributed in 46 genera and 28 families, which broadens the scope of the anti-cancer drug screening. Finally, we constructed a network of predicted anti-cancer plants and approved drugs based on the above results. The network highlighted the supportive role of the predicted plant in the development of anti-cancer drug and suggested different molecular anti-cancer mechanisms of the plants. Our study suggests that the predicted compounds and plants from TCM database offer an attractive starting point and a broader scope to mine for potential anti-cancer agents. PMID:27145869
Machine-learning-assisted materials discovery using failed experiments
NASA Astrophysics Data System (ADS)
Raccuglia, Paul; Elbert, Katherine C.; Adler, Philip D. F.; Falk, Casey; Wenny, Malia B.; Mollo, Aurelio; Zeller, Matthias; Friedler, Sorelle A.; Schrier, Joshua; Norquist, Alexander J.
2016-05-01
Inorganic-organic hybrid materials such as organically templated metal oxides, metal-organic frameworks (MOFs) and organohalide perovskites have been studied for decades, and hydrothermal and (non-aqueous) solvothermal syntheses have produced thousands of new materials that collectively contain nearly all the metals in the periodic table. Nevertheless, the formation of these compounds is not fully understood, and development of new compounds relies primarily on exploratory syntheses. Simulation- and data-driven approaches (promoted by efforts such as the Materials Genome Initiative) provide an alternative to experimental trial-and-error. Three major strategies are: simulation-based predictions of physical properties (for example, charge mobility, photovoltaic properties, gas adsorption capacity or lithium-ion intercalation) to identify promising target candidates for synthetic efforts; determination of the structure-property relationship from large bodies of experimental data, enabled by integration with high-throughput synthesis and measurement tools; and clustering on the basis of similar crystallographic structure (for example, zeolite structure classification or gas adsorption properties). Here we demonstrate an alternative approach that uses machine-learning algorithms trained on reaction data to predict reaction outcomes for the crystallization of templated vanadium selenites. We used information on ‘dark’ reactions—failed or unsuccessful hydrothermal syntheses—collected from archived laboratory notebooks from our laboratory, and added physicochemical property descriptions to the raw notebook information using cheminformatics techniques. We used the resulting data to train a machine-learning model to predict reaction success. When carrying out hydrothermal synthesis experiments using previously untested, commercially available organic building blocks, our machine-learning model outperformed traditional human strategies, and successfully predicted conditions for new organically templated inorganic product formation with a success rate of 89 per cent. Inverting the machine-learning model reveals new hypotheses regarding the conditions for successful product formation.
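Schematically, the "dark reactions" workflow is: featurize every archived reaction, failures included, and train a classifier to predict whether a new set of conditions will yield product. The sketch below uses a random forest on synthetic data as a stand-in; the original work trained its own model on cheminformatics-derived descriptors of real notebook records.

```python
# Stand-in for the reaction-outcome classifier, on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.uniform(size=(500, 6))   # e.g. pH, temperature, reagent ratios, descriptors
y = (X[:, 0] + 0.5 * X[:, 1] > 0.9).astype(int)  # mock "crystallized" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
# New, untested conditions can then be ranked by clf.predict_proba().
```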
Molecular structure input on the web.
Ertl, Peter
2010-02-02
A molecule editor, that is, a program for the input and editing of molecules, is an indispensable part of every cheminformatics or molecular processing system. This review focuses on a special type of molecule editor, namely those used for molecule structure input on the web. Scientific computing is now moving more and more in the direction of web services and cloud computing, with servers scattered all around the Internet. Thus a web browser has become the universal scientific user interface, and a tool to edit molecules directly within the web browser is essential. The review covers a history of web-based structure input, starting with simple text entry boxes and early molecule editors based on clickable maps, before moving to the current situation dominated by Java applets. One typical example, the popular JME Molecule Editor, is described in more detail. Modern Ajax server-side molecule editors are also presented. Finally, the possible future direction of web-based molecule editing, based on technologies like JavaScript and Flash, is discussed.
Collaborative drug discovery for More Medicines for Tuberculosis (MM4TB)
Ekins, Sean; Spektor, Anna Coulon; Clark, Alex M.; Dole, Krishna; Bunin, Barry A.
2016-01-01
Neglected disease drug discovery is generally poorly funded compared with major diseases, and hence there is an increasing focus on collaboration and precompetitive efforts such as public–private partnerships (PPPs). The More Medicines for Tuberculosis (MM4TB) project is one such collaboration funded by the EU with the goal of discovering new drugs for tuberculosis. Collaborative Drug Discovery has provided a commercial web-based platform called CDD Vault, a hosted collaborative solution for securely sharing diverse chemistry and biology data. Using CDD Vault alongside other commercial and free cheminformatics tools has enabled support of this and other large collaborative projects, aiding drug discovery efforts and fostering collaboration. We will describe CDD's efforts in assisting with the MM4TB project. PMID:27884746
Design of a fragment library that maximally represents available chemical space.
Schulz, M N; Landström, J; Bright, K; Hubbard, R E
2011-07-01
Cheminformatics protocols have been developed and assessed that identify a small set of fragments which can represent the compounds in a chemical library for use in fragment-based ligand discovery. Six different methods have been implemented and tested on Input Libraries of compounds from three suppliers. The resulting Fragment Sets have been characterised on the basis of computed physico-chemical properties and their similarity to the Input Libraries. A method that iteratively identifies fragments with the maximum number of similar compounds in the Input Library (Nearest Neighbours) produces the most diverse library. This approach could increase the success of experimental ligand discovery projects, by providing fragments that can be progressed rapidly to larger compounds through access to available similar compounds (known as SAR by Catalog).
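The best-performing method here, Nearest Neighbours, is essentially a greedy coverage algorithm: repeatedly pick the fragment with the most similar not-yet-covered library compounds. The RDKit sketch below implements that idea under assumed details (Morgan fingerprints, a fixed Tanimoto threshold) that are illustrative choices, not the paper's exact protocol.

```python
# Greedy "Nearest Neighbours"-style fragment selection (assumed details).
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fp(smiles):
    return AllChem.GetMorganFingerprintAsBitVect(
        Chem.MolFromSmiles(smiles), 2, nBits=1024)

def pick_fragments(fragments, library, n_pick=2, threshold=0.3):
    lib_fps = [fp(s) for s in library]
    uncovered = set(range(len(library)))
    chosen = []
    for _ in range(n_pick):
        best_frag, best_cov = None, set()
        for frag in fragments:           # score each fragment by coverage
            ffp = fp(frag)
            cov = {i for i in uncovered
                   if DataStructs.TanimotoSimilarity(ffp, lib_fps[i]) >= threshold}
            if len(cov) > len(best_cov):
                best_frag, best_cov = frag, cov
        if best_frag is None:            # nothing covers any remaining compound
            break
        chosen.append(best_frag)
        uncovered -= best_cov            # remove compounds now represented
    return chosen

print(pick_fragments(["c1ccccc1", "C1CCNCC1"],
                     ["c1ccccc1O", "Cc1ccccc1", "CC1CCNCC1", "CCO"]))
```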
Guney, Tezcan; Wenderski, Todd A; Boudreau, Matthew W; Tan, Derek S
2018-06-23
Medium-ring natural products exhibit diverse biological activities but such scaffolds are underrepresented in probe and drug discovery efforts due to the limitations of classical macrocyclization reactions. We report herein a tandem oxidative dearomatization-ring-expanding rearomatization (ODRE) reaction that generates benzannulated medium-ring lactams directly from simple bicyclic substrates. The reaction accommodates diverse aryl substrates (haloarenes, aryl ethers, aryl amides, heterocycles), and strategic incorporation of a bridgehead alcohol generates a versatile ketone moiety in the products amenable to downstream modifications. Cheminformatic analysis indicates that these medium rings access regions of chemical space that overlap with related natural products and are distinct from synthetic drugs, setting the stage for their use in discovery screening against novel biological targets.
Mapping of Drug-like Chemical Universe with Reduced Complexity Molecular Frameworks.
Kontijevskis, Aleksejs
2017-04-24
The emergence of the DNA-encoded chemical libraries (DEL) field in the past decade has attracted the attention of the pharmaceutical industry as a powerful mechanism for the discovery of novel drug-like hits for various biological targets. Nuevolution Chemetics technology enables DNA-encoded synthesis of billions of chemically diverse drug-like small molecule compounds, and the efficient screening and optimization of these, facilitating effective identification of drug candidates at an unprecedented speed and scale. Although many approaches have been developed by the cheminformatics community for the analysis and visualization of drug-like chemical space, most of them are restricted to the analysis of a maximum of a few million compounds and cannot handle collections of 10^8-10^12 compounds typical for DELs. To address this big chemical data challenge, we developed the Reduced Complexity Molecular Frameworks (RCMF) methodology as an abstract and very general way of representing chemical structures. By further introducing RCMF descriptors, we constructed a global framework map of drug-like chemical space and demonstrated how chemical space occupied by multi-million-member drug-like Chemetics DNA-encoded libraries and virtual combinatorial libraries with >10^12 members could be analyzed and mapped without a need for library enumeration. We further validate the approach by performing RCMF-based searches in a drug-like chemical universe and mapping Chemetics library selection outputs for LSD1 targets on a global framework chemical space map.
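RCMF descriptors themselves are specific to this work, but the general move (collapsing decorated structures to an abstract framework so that enormous libraries dedupe into far fewer skeletons) can be illustrated with RDKit's generic Murcko scaffolds, shown below as a freely available analogue rather than the RCMF method itself.

```python
# Generic Murcko frameworks as a rough analogue of reduced-framework mapping.
from rdkit import Chem
from rdkit.Chem.Scaffolds import MurckoScaffold

def generic_framework(smiles):
    """Collapse a molecule to its generic ring-and-linker framework
    (substituents stripped, all atoms carbon, all bonds single)."""
    mol = Chem.MolFromSmiles(smiles)
    scaffold = MurckoScaffold.GetScaffoldForMol(mol)
    return Chem.MolToSmiles(MurckoScaffold.MakeScaffoldGeneric(scaffold))

# Two differently decorated molecules with the same ring-linker-ring
# skeleton collapse to a single framework:
print(generic_framework("Cc1ccccc1CNc1ccccc1Cl"))
print(generic_framework("c1ccc(COc2ccncc2)cc1"))
```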
Kotapalli, Sudha Sravanti; Nallam, Sri Satya Anila; Nadella, Lavanya; Banerjee, Tanmay; Rode, Haridas B; Mainkar, Prathama S; Ummanni, Ramesh
2015-01-01
The purpose of this study was to provide a number of diverse and promising early-lead compounds that will feed into the drug discovery pipeline for developing new antitubercular agents. The results from the phenotypic screening of the open-source compound library against Mycobacterium smegmatis and Mycobacterium bovis (BCG), with hit validation against M. tuberculosis (H37Rv), have identified novel potent hit compounds. To determine their druglikeness, a systematic analysis of physicochemical properties of the hit compounds has been performed using cheminformatics tools. The hit molecules were analysed by clustering based on their chemical fingerprints and structural similarity, determining their chemical diversity. The hit compound library was also filtered for druglikeness based on physicochemical descriptors following Lipinski filters. The robust filtration of hits, followed by secondary screening against BCG and H37Rv and cytotoxicity evaluation, has identified 12 compounds with potential against H37Rv (MIC range 0.4 to 12.5 μM). Furthermore, in cytotoxicity assays the 12 compounds displayed low cytotoxicity against liver and lung cells, providing a high therapeutic index (>50). To avoid any variations in activity due to the route of chemical synthesis, the hit compounds were resynthesized independently and confirmed for their potential against H37Rv. Taken together, the hits reported here provide copious potential starting points for the generation of new leads, eventually adding to the drug discovery pipeline against tuberculosis.
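The Lipinski filter mentioned here is a standard druglikeness gate. A minimal version (assuming RDKit as the toolchain, which the abstract does not specify) looks like this:

```python
# Minimal Lipinski rule-of-five filter (RDKit assumed for illustration).
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_lipinski(smiles):
    """True if the molecule satisfies all four rule-of-five criteria."""
    mol = Chem.MolFromSmiles(smiles)
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)

print(passes_lipinski("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin -> True
```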
Predicting Monoamine Oxidase Inhibitory Activity through Ligand-Based Models
Vilar, Santiago; Ferino, Giulio; Quezada, Elias; Santana, Lourdes; Friedman, Carol
2013-01-01
The evolution of bio- and cheminformatics, associated with the development of specialized software and increasing computer power, has produced great interest in theoretical in silico methods applied in rational drug design. These techniques apply the concept that “similar molecules have similar biological properties”, which has been exploited in medicinal chemistry for years to design new molecules with desirable pharmacological profiles. Ligand-based methods are not dependent on receptor structural data and take into account two- and three-dimensional molecular properties to assess the similarity of new compounds with regard to the set of molecules with the biological property under study. Depending on the complexity of the calculation, there are different types of ligand-based methods, such as QSAR (Quantitative Structure-Activity Relationship) with 2D and 3D descriptors, CoMFA (Comparative Molecular Field Analysis) or pharmacophoric approaches. This work provides a description of a series of ligand-based models applied in the prediction of the inhibitory activity of monoamine oxidase (MAO) enzymes. The controlled regulation of the enzymes’ function through the use of MAO inhibitors is used as a treatment in many psychiatric and neurological disorders, such as depression, anxiety, Alzheimer’s and Parkinson’s disease. For this reason, multiple scaffolds, such as substituted coumarins, indolylmethylamine or pyridazine derivatives, were synthesized and assayed toward MAO-A and MAO-B inhibition. Our intention is to focus on the description of ligand-based models to provide new insights into the relationship between MAO inhibitory activity and the molecular structure of the different inhibitors, and to further study enzyme selectivity and possible mechanisms of action.
Wagener, Johannes; Spjuth, Ola; Willighagen, Egon L; Wikberg, Jarl E S
2009-09-04
Life sciences make heavy use of the web for both data provision and analysis. However, the increasing amount of available data and the diversity of analysis tools call for machine-accessible interfaces in order to be effective. HTTP-based Web service technologies, like the Simple Object Access Protocol (SOAP) and REpresentational State Transfer (REST) services, are today the most common technologies for this in bioinformatics. However, these methods have severe drawbacks, including lack of discoverability and the inability for services to send status notifications. Several complementary workarounds have been proposed, but the results are ad-hoc solutions of varying quality that can be difficult to use. We present a novel approach based on the open standard Extensible Messaging and Presence Protocol (XMPP), consisting of an extension (IO Data) that comprises discovery, asynchronous invocation, and definition of data types in the service. Because XMPP cloud services are capable of asynchronous communication, clients do not have to poll repeatedly for status; the service sends the results back to the client upon completion. Implementations for Bioclipse and Taverna are presented, as are various XMPP cloud services in bio- and cheminformatics. XMPP with its extensions is a powerful protocol for cloud services that demonstrates several advantages over traditional HTTP-based Web services: 1) services are discoverable without the need for an external registry, 2) asynchronous invocation eliminates the need for ad-hoc solutions like polling, and 3) input and output types defined in the service allow for generation of clients on the fly without the need for an external semantics description. The many advantages over existing technologies make XMPP a highly interesting candidate for next-generation online services in bioinformatics.
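The core architectural contrast in the paper is polling versus server push. The toy asyncio sketch below illustrates only that invocation pattern in Python; it is not the XMPP/IO Data protocol, and the service name and payload are invented for the example:

```python
import asyncio

# A long-running 'service' whose result is delivered when ready, so the
# caller never polls a status endpoint. Names and payload are invented.
async def descriptor_service(smiles: str) -> dict:
    await asyncio.sleep(1.0)           # stand-in for a slow calculation
    return {'input': smiles, 'heavy_atoms': 13}

async def main():
    job = asyncio.create_task(descriptor_service('CC(=O)Oc1ccccc1C(=O)O'))
    result = await job                 # completion is pushed to the awaiter
    print(result)

asyncio.run(main())
```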
LICSS - a chemical spreadsheet in microsoft excel.
Lawson, Kevin R; Lawson, Jonty
2012-02-02
Representations of chemical datasets in spreadsheet format are important for ready data assimilation and manipulation. In addition to the normal spreadsheet facilities, chemical spreadsheets need to have visualisable chemical structures and data searchable by chemical as well as textual queries. Many such chemical spreadsheet tools are available, some operating in the familiar Microsoft Excel environment. However, within this group, the performance of Excel is often compromised, particularly in terms of the number of compounds which can usefully be stored on a sheet. LICSS is a lightweight chemical spreadsheet within Microsoft Excel for Windows. LICSS stores structures solely as SMILES strings. Chemical operations are carried out by calling Java code modules which use the CDK, JChemPaint and OPSIN libraries to provide cheminformatics functionality. Compounds in sheets or charts may be visualised (individually or en masse), and sheets may be searched by substructure or similarity. All the molecular descriptors available in CDK may be calculated for compounds (in batch or on-the-fly), and various cheminformatic operations such as fingerprint calculation, Sammon mapping, clustering and R group table creation may be carried out. We detail here the features of LICSS and how they are implemented. We also explain the design criteria, particularly in terms of potential corporate use, which led to this particular implementation. LICSS is an Excel-based chemical spreadsheet with a difference: it can usefully be used on sheets containing hundreds of thousands of compounds without compromising the normal performance of Microsoft Excel; it is designed to be installed and run in environments in which users do not have admin privileges (installation involves merely file copying, and sharing of LICSS sheets invokes automatic installation); and it is free and extensible. LICSS is open-source software and we hope sufficient detail is provided here to enable developers to add their own features and share with the community.
Li, Xu-Zhao; Zhang, Shuai-Nan; Yang, Xu-Yan
2017-12-01
This study aimed to explore the chemical basis of the rhizomes and aerial parts of Dioscorea nipponica Makino (DN). The pharmacokinetic profiles of the compounds from DN were calculated via the ACD/I-Lab and PreADMET programs. Their potential therapeutic and toxicity targets were screened through the DrugBank's or T3DB's ChemQuery structure search. Eleven of 48 compounds in the rhizomes and over half of the compounds in the aerial parts had moderate or good human oral bioavailability. Twenty-three of 48 compounds in the rhizomes and 40/43 compounds from the aerial parts had moderate or good permeability to intestinal cells. Forty-three of 48 compounds from the rhizomes and 18/43 compounds in the aerial parts bound weakly to the plasma proteins. Eleven of 48 compounds in the rhizomes and 36/43 compounds of the aerial parts might pass across the blood-brain barrier. Forty-three of 48 compounds in the rhizomes and 18/43 compounds from the aerial parts showed low renal excretion ability. The compounds in the rhizomes possessed 391 potential therapeutic targets and 216 potential toxicity targets. Additionally, the compounds from the aerial parts possessed 101 potential therapeutic targets and 183 potential toxicity targets. These findings indicate that the combination of cheminformatics and bioinformatics may facilitate achieving the objectives of this study.
Borrel, Alexandre; Fourches, Denis
2017-12-01
There is a growing interest in the broad use of Augmented Reality (AR) and Virtual Reality (VR) in the fields of bioinformatics and cheminformatics to visualize complex biological and chemical structures. AR and VR technologies allow for stunning and immersive experiences, offering untapped opportunities for both research and education purposes. However, preparing 3D models ready to use for AR and VR is time-consuming and requires technical expertise, which severely limits the development of new contents of potential interest for structural biologists, medicinal chemists, molecular modellers and teachers. Herein we present the RealityConvert software tool and associated website, which allow users to easily convert molecular objects to high-quality 3D models directly compatible with AR and VR applications. For chemical structures, in addition to the 3D model generation, RealityConvert also generates image trackers, useful to universally call and anchor that particular 3D model when used in AR applications. The ultimate goal of RealityConvert is to facilitate and boost the development and accessibility of AR and VR contents for bioinformatics and cheminformatics applications. Availability: http://www.realityconvert.com. Contact: dfourch@ncsu.edu. Supplementary data are available at Bioinformatics online.
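The abstract does not detail the conversion pipeline, but the step any such tool needs first, turning a flat chemical structure into 3D coordinates, can be sketched with standard RDKit calls. This is generic cheminformatics usage, not the RealityConvert code:

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Generate a 3D model from a SMILES string: add hydrogens, embed with
# the ETKDG distance-geometry method, then relax with a force field.
mol = Chem.AddHs(Chem.MolFromSmiles('c1ccccc1O'))   # phenol
AllChem.EmbedMolecule(mol, AllChem.ETKDGv3())
AllChem.MMFFOptimizeMolecule(mol)

# Export a MOL block that downstream 3D/AR tooling can consume.
with open('phenol.mol', 'w') as fh:
    fh.write(Chem.MolToMolBlock(mol))
```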
Kutchukian, Peter S; So, Sung-Sau; Fischer, Christian; Waller, Chris L
2015-01-01
Fragment based screening (FBS) has emerged as a mainstream lead discovery strategy in academia, biotechnology start-ups, and large pharma. As a prerequisite of FBS, a structurally diverse library of fragments is desirable in order to identify chemical matter that will interact with the range of diverse target classes that are prosecuted in contemporary screening campaigns. In addition, it is also desirable to offer synthetically amenable starting points to increase the probability of a successful fragment evolution through medicinal chemistry. Herein we describe a method to identify biologically relevant chemical substructures that are missing from an existing fragment library (chemical gaps), and organize these chemical gaps hierarchically so that medicinal chemists can efficiently navigate the prioritized chemical space and subsequently select purchasable fragments for inclusion in an enhanced fragment library.
He, Xingrui; Chen, Xia; Lin, Songbo; Mo, Xiaochang; Zhou, Pengyong; Zhang, Zhihao; Lu, Yaoyao; Yang, Yu; Gu, Haining
2016-01-01
Natural products are a major source of biological molecules. The 3-methylfuran scaffold is found in a variety of plant secondary metabolite chemical elicitors that confer host-plant resistance against insect pests. Herein, the diversity-oriented synthesis of a natural-product-like library is reported, in which the 3-methylfuran core is fused in an angular attachment to six common natural product scaffolds: coumarin, chalcone, flavone, flavonol, isoflavone and isoquinolinone. The structural diversity of this library is assessed computationally using cheminformatic analysis. Phenotypic high-throughput screening of β-glucuronidase activity uncovers several hits. Further in vivo screening confirms that these hits can induce resistance in rice to nymphs of the brown planthopper Nilaparvata lugens. This work validates the combination of diversity-oriented synthesis and high-throughput screening of β-glucuronidase activity as a strategy for discovering new chemical elicitors.
Chemical signatures and new drug targets for gametocytocidal drug development
NASA Astrophysics Data System (ADS)
Sun, Wei; Tanaka, Takeshi Q.; Magle, Crystal T.; Huang, Wenwei; Southall, Noel; Huang, Ruili; Dehdashti, Seameen J.; McKew, John C.; Williamson, Kim C.; Zheng, Wei
2014-01-01
Control of parasite transmission is critical for the eradication of malaria. However, most antimalarial drugs are not active against P. falciparum gametocytes, responsible for the spread of malaria. Consequently, patients can remain infectious for weeks after the clearance of asexual parasites and clinical symptoms. Here we report the identification of 27 potent gametocytocidal compounds (IC50 < 1 μM) from screening 5,215 known drugs and compounds. All these compounds were active against three strains of gametocytes with different drug sensitivities and geographical origins, 3D7, HB3 and Dd2. Cheminformatic analysis revealed chemical signatures for P. falciparum sexual and asexual stages indicative of druggability and suggesting potential targets. Torin 2, a top lead compound (IC50 = 8 nM against gametocytes in vitro), completely blocked oocyst formation in a mouse model of transmission. These results provide critical new leads and potential targets to expand the repertoire of malaria transmission-blocking reagents.
Petri Nets - A Mathematical Formalism to Analyze Chemical Reaction Networks.
Koch, Ina
2010-12-17
In this review we introduce and discuss Petri nets, a mathematical formalism to describe and analyze chemical reaction networks. Petri nets were developed to describe concurrency in general systems. Most applications are found in technical and financial systems, but for about twenty years they have also been used in systems biology to model biochemical systems. This review aims to give a short informal introduction to the basic formalism, illustrated by a chemical example, and to discuss possible applications to the analysis of chemical reaction networks, including cheminformatics. We give a short overview of qualitative as well as quantitative Petri net modeling techniques useful in systems biology, summarizing the state of the art in that field and providing the main literature references. Finally, we discuss advantages and limitations of Petri nets and give an outlook on further development.
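A place/transition net is small enough to state in a few lines: places carry token counts (here, molecule copies), and a transition fires when every input place holds enough tokens. The sketch below is a minimal illustration for one reaction, not a general Petri net tool:

```python
# Petri net for 2 H2 + O2 -> 2 H2O: places are species, the marking is
# the token count per place, and the transition consumes/produces tokens.
marking = {'H2': 4, 'O2': 3, 'H2O': 0}
transition = {'in': {'H2': 2, 'O2': 1}, 'out': {'H2O': 2}}

def enabled(marking, t):
    return all(marking[p] >= n for p, n in t['in'].items())

def fire(marking, t):
    for p, n in t['in'].items():
        marking[p] -= n
    for p, n in t['out'].items():
        marking[p] += n

while enabled(marking, transition):   # fire until the transition is dead
    fire(marking, transition)

print(marking)   # {'H2': 0, 'O2': 1, 'H2O': 4}
```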
An open-source java platform for automated reaction mapping.
Crabtree, John D; Mehta, Dinesh P; Kouri, Tina M
2010-09-27
This article presents software applications that have been built upon a modular, open-source, reaction mapping library that can be used in both cheminformatics and bioinformatics research. We first describe the theoretical underpinnings and modular architecture of the core software library. We then describe two applications that have been built upon that core. The first is a generic reaction viewer and mapper, and the second classifies reactions according to rules that can be modified by end users with little or no programming skills.
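The authors' library is Java; as a language-neutral illustration of what a finished atom mapping looks like, the RDKit snippet below parses a pre-mapped reaction SMILES, where matching map numbers tie each reactant atom to its product atom. The esterification example is ours, not from the paper:

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Acetic acid + ethanol -> ethyl acetate + water, with atom-map numbers
# carrying the reactant-to-product correspondence.
rxn = AllChem.ReactionFromSmarts(
    '[CH3:1][C:2](=[O:3])[OH:4].[OH:5][CH2:6][CH3:7]>>'
    '[CH3:1][C:2](=[O:3])[O:5][CH2:6][CH3:7].[OH2:4]',
    useSmiles=True)

for mol in rxn.GetProducts():
    print([(atom.GetAtomMapNum(), atom.GetSymbol()) for atom in mol.GetAtoms()])
```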
Isidro-Llobet, Albert; Hadje Georgiou, Kathy; Galloway, Warren R J D; Giacomini, Elisa; Hansen, Mette R; Méndez-Abt, Gabriela; Tan, Yaw Sing; Carro, Laura; Sore, Hannah F; Spring, David R
2015-04-21
Macrocyclic peptidomimetics are associated with a broad range of biological activities. However, despite such potentially valuable properties, the macrocyclic peptidomimetic structural class is generally considered as being poorly explored within drug discovery. This has been attributed to the lack of general methods for producing collections of macrocyclic peptidomimetics with high levels of structural, and thus shape, diversity. In particular, there is a lack of scaffold diversity in current macrocyclic peptidomimetic libraries; indeed, the efficient construction of diverse molecular scaffolds presents a formidable general challenge to the synthetic chemist. Herein we describe a new, advanced strategy for the diversity-oriented synthesis (DOS) of macrocyclic peptidomimetics that enables the combinatorial variation of molecular scaffolds (core macrocyclic ring architectures). The generality and robustness of this DOS strategy is demonstrated by the step-efficient synthesis of a structurally diverse library of over 200 macrocyclic peptidomimetic compounds, each based around a distinct molecular scaffold and isolated in milligram quantities, from readily available building-blocks. To the best of our knowledge this represents an unprecedented level of scaffold diversity in a synthetically derived library of macrocyclic peptidomimetics. Cheminformatic analysis indicated that the library compounds access regions of chemical space that are distinct from those addressed by top-selling brand-name drugs and macrocyclic natural products, illustrating the value of our DOS approach to sample regions of chemical space underexploited in current drug discovery efforts. An analysis of three-dimensional molecular shapes illustrated that the DOS library has a relatively high level of shape diversity.
Carlsson, Lars; Spjuth, Ola; Adams, Samuel; Glen, Robert C; Boyer, Scott
2010-07-01
Predicting metabolic sites is important in the drug discovery process to aid in rapid compound optimisation. No interactive tool exists, and most of the useful tools are quite expensive. Here a fast and reliable method to analyse ligands and visualise potential metabolic sites is presented, based on annotated metabolic data described by circular fingerprints. The method is available via the graphical workbench Bioclipse, which is equipped with advanced features in cheminformatics. Due to the speed of predictions (less than 50 ms per molecule), scientists can get real-time decision support when editing chemical structures. Bioclipse is a rich client, which means that all calculations are performed on the local computer and do not require a network connection. Bioclipse and MetaPrint2D are free for all users, released under open-source licenses, and available from http://www.bioclipse.net.
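MetaPrint2D's descriptors are circular atom environments; the same family of descriptors can be generated with RDKit's Morgan fingerprinting, shown below purely as an illustration of the descriptor type (this is not the MetaPrint2D code or its annotated data):

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Enumerate the circular environments (bit, centre atom, radius) that
# make up a Morgan fingerprint for p-cresol.
mol = Chem.MolFromSmiles('Cc1ccc(O)cc1')
info = {}
AllChem.GetMorganFingerprint(mol, 2, bitInfo=info)

for bit, environments in sorted(info.items()):
    for atom_idx, radius in environments:
        print(f'bit {bit}: centre atom {atom_idx}, radius {radius}')
```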
Grimm, Fabian A; Russell, William K; Luo, Yu-Syuan; Iwata, Yasuhiro; Chiu, Weihsueh A; Roy, Tim; Boogaard, Peter J; Ketelslegers, Hans B; Rusyn, Ivan
2017-06-20
Substances of Unknown or Variable composition, Complex reaction products, and Biological materials (UVCBs), including many refined petroleum products, present a major challenge in regulatory submissions under the EU Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) and US High Production Volume regulatory regimes. The inherent complexity of these substances, as well as the variability in their composition, complicates detailed chemical characterization of each individual substance and their grouping for human and environmental health evaluation through read-across. In this study, we applied ion mobility mass spectrometry in conjunction with cheminformatics-based data integration and visualization to derive substance-specific signatures based on the distribution and abundance of various heteroatom classes. We used petroleum substances from four petroleum substance manufacturing streams and evaluated their chemical composition similarity based on high-dimensional substance-specific quantitative parameters, including m/z distribution, drift time, carbon number range, and associated double bond equivalents and hydrogen-to-carbon ratios. Data integration and visualization revealed group-specific similarities for petroleum substances. Observed differences within a product group were indicative of batch- or manufacturer-dependent variation. We demonstrate how high-resolution analytical chemistry approaches can be used effectively to support categorization of UVCBs based on their heteroatom composition and how such data can be used in regulatory decision-making.
Advances in computational metabolomics and databases deepen the understanding of metabolisms.
Tsugawa, Hiroshi
2018-01-29
Mass spectrometry (MS)-based metabolomics is a popular platform for metabolome analyses. Computational techniques for the processing of MS raw data, for example feature detection, peak alignment, and the exclusion of false-positive peaks, have been established. The next stage of untargeted metabolomics is to decipher the mass fragmentation of small molecules for the global identification of human, animal, plant, and microbiota metabolomes, resulting in a deeper understanding of metabolism. This review is an update on the latest computational metabolomics, including known/expected structure databases, chemical ontology classifications, and mass spectrometry cheminformatics for the interpretation of mass fragmentations and the elucidation of unknown metabolites. The importance of metabolome 'databases' and 'repositories' is also discussed, because novel biological discoveries are often attributable to the accumulation of data, to relational databases, and to their statistics. Lastly, a practical guide for metabolite annotations is presented as the summary of this review.
Ebalunode, Jerry O; Zheng, Weifan; Tropsha, Alexander
2011-01-01
Optimization of chemical library composition affords more efficient identification of hits from biological screening experiments. The optimization could be achieved through rational selection of reagents used in combinatorial library synthesis. However, with the rapid advent of parallel synthesis methods and the availability of millions of compounds synthesized by many vendors, it may be more efficient to design targeted libraries by means of virtual screening of commercial compound collections. This chapter reviews the application of advanced cheminformatics approaches such as quantitative structure-activity relationships (QSAR) and pharmacophore modeling (both ligand- and structure-based) for virtual screening. Both approaches rely on empirical SAR data to build models; thus, the emphasis is placed on achieving models of the highest rigor and external predictive power. We present several examples of successful applications of both approaches for virtual screening to illustrate their utility. We suggest that the expert use of both QSAR and pharmacophore models, either independently or in combination, enables users to achieve targeted libraries enriched with experimentally confirmed hit compounds.
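In code, the QSAR half of such a virtual screen reduces to fingerprinting known actives and inactives, fitting a model, and scoring a vendor catalogue. The sketch below uses RDKit and scikit-learn with placeholder molecules and labels, not real SAR data:

```python
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def fp(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return list(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024))

# Toy training set: the labels are placeholders, not measured activities.
train_smiles = ['CCO', 'CCN', 'c1ccccc1O', 'c1ccccc1N']
labels = [0, 0, 1, 1]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit([fp(s) for s in train_smiles], labels)

# Score a (tiny) mock vendor catalogue; keep the activity probability.
catalogue = ['c1ccccc1C(=O)O', 'CCCC']
scores = model.predict_proba([fp(s) for s in catalogue])[:, 1]
print(dict(zip(catalogue, scores)))
```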
libChEBI: an API for accessing the ChEBI database.
Swainston, Neil; Hastings, Janna; Dekker, Adriano; Muthukrishnan, Venkatesh; May, John; Steinbeck, Christoph; Mendes, Pedro
2016-01-01
ChEBI is a database and ontology of chemical entities of biological interest. It is widely used as a source of identifiers to facilitate unambiguous reference to chemical entities within biological models, databases, ontologies and literature. ChEBI contains a wealth of chemical data, covering over 46,500 distinct chemical entities, and related data such as chemical formula, charge, molecular mass, structure, synonyms and links to external databases. Furthermore, ChEBI is an ontology, and thus provides meaningful links between chemical entities. Unlike many other resources, ChEBI is fully human-curated, providing a reliable, non-redundant collection of chemical entities and related data. While ChEBI is supported by a web service for programmatic access and a number of download files, it does not have an API library to facilitate the use of ChEBI and its data in cheminformatics software. To provide this missing functionality, libChEBI, a comprehensive API library for accessing ChEBI data, is introduced. libChEBI is available in Java, Python and MATLAB versions from http://github.com/libChEBI, and provides full programmatic access to all data held within the ChEBI database through a simple and documented API. libChEBI is reliant upon the (automated) download and regular update of flat files that are held locally. As such, libChEBI can be embedded in both on- and off-line software applications. libChEBI allows better support of ChEBI and its data in the development of new cheminformatics software. Covering three key programming languages, it allows for the entirety of the ChEBI database to be accessed easily and quickly through a simple API. All code is open access and freely available.
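As a hedged usage sketch, the Python flavour of the library (libchebipy) can be exercised roughly as follows; the method names are recalled from the project's README and should be checked against the current documentation:

```python
# Assumed API of the Python version (libchebipy); verify method names
# against the documented interface before relying on them.
from libchebipy import ChebiEntity

water = ChebiEntity('CHEBI:15377')   # look up an entity by ChEBI id
print(water.get_name())              # 'water'
print(water.get_formula())           # 'H2O'
print(water.get_mass())

# Ontology links are exposed as relations between entities.
for relation in water.get_outgoings():
    print(relation.get_type(), relation.get_target_chebi_id())
```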
An “ADME Module” in the Adverse Outcome Pathway ...
The Adverse Outcome Pathway (AOP) framework has generated intense interest for its utility to organize knowledge on the toxicity mechanisms, starting from a molecular initiating event (MIE) to an adverse outcome across various levels of biological organization. While the AOP framework is designed to be chemical agnostic, it is widely recognized that considering chemicals’ absorption, distribution, metabolism, and excretion (ADME) behaviors is critical in applying the AOP framework in chemical-specific risk assessment. Currently, information being generated as part of the Organisation for Economic Co-operation and Development (OECD) AOP Development Programme is being consolidated into an AOP Knowledgebase (http://aopwiki.org). To enhance the use of this Knowledgebase in risk assessment, an ADME Module has been developed to contain the ADME information needed to connect MIEs and other key events in an AOP for specific chemicals. The conceptual structure of this module characterizes the potential of a chemical to reach the target MIE based on either its structure-based features or relative rates of ADME. The key features of this module include (1) a framework for connecting biology-based AOP to biochemical-based ADME and chemical/human activity-based exposure pathways; (2) links to qualitative tools (e.g., structure-based cheminformatic model) that screen for chemicals that could potentially reach the target MIE; (3) links to quantitative tools (e.g., dose-r
Benchmarking Ligand-Based Virtual High-Throughput Screening with the PubChem Database
Butkiewicz, Mariusz; Lowe, Edward W.; Mueller, Ralf; Mendenhall, Jeffrey L.; Teixeira, Pedro L.; Weaver, C. David; Meiler, Jens
2013-01-01
With the rapidly increasing availability of High-Throughput Screening (HTS) data in the public domain, such as the PubChem database, methods for ligand-based computer-aided drug discovery (LB-CADD) have the potential to accelerate and reduce the cost of probe development and drug discovery efforts in academia. We assemble nine data sets from realistic HTS campaigns representing major families of drug target proteins for benchmarking LB-CADD methods. Each data set is in the public domain through PubChem and carefully collated through confirmation screens validating active compounds. These data sets provide the foundation for benchmarking a new cheminformatics framework, BCL::ChemInfo, which is freely available for non-commercial use. Quantitative structure-activity relationship (QSAR) models are built using Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), Decision Trees (DTs), and Kohonen networks (KNs). Problem-specific descriptor optimization protocols are assessed, including Sequential Feature Forward Selection (SFFS) and various information content measures. Measures of predictive power and confidence are evaluated through cross-validation, and a consensus prediction scheme is tested that combines orthogonal machine learning algorithms into a single predictor. Enrichments ranging from 15 to 101 for a TPR cutoff of 25% are observed.
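The consensus scheme, averaging the outputs of orthogonal learners, can be mimicked with scikit-learn's soft-voting ensemble. The sketch below uses random placeholder features rather than real descriptors and is not the BCL::ChemInfo implementation:

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Random stand-in data: 200 'compounds' with 16 fake descriptors.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 16)), rng.integers(0, 2, size=200)

# Soft voting averages per-class probabilities across the three models.
consensus = VotingClassifier(
    estimators=[('ann', MLPClassifier(max_iter=500)),
                ('svm', SVC(probability=True)),
                ('dt', DecisionTreeClassifier())],
    voting='soft')
consensus.fit(X, y)
print(consensus.predict_proba(X[:3]))
```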
Advanced SPARQL querying in small molecule databases.
Galgonek, Jakub; Hurt, Tomáš; Michlíková, Vendula; Onderka, Petr; Schwarz, Jan; Vondrášek, Jiří
2016-01-01
In recent years, the Resource Description Framework (RDF) and the SPARQL query language have become more widely used in the area of cheminformatics and bioinformatics databases. These technologies allow better interoperability of various data sources and powerful searching facilities. However, we identified several deficiencies that make usage of such RDF databases restrictive or challenging for common users. We extended a SPARQL engine to be able to use special procedures inside SPARQL queries. This allows the user to work with data that cannot be simply precomputed and thus cannot be directly stored in the database. We designed an algorithm that checks a query against data ontology to identify possible user errors. This greatly improves query debugging. We also introduced an approach to visualize retrieved data in a user-friendly way, based on templates describing visualizations of resource classes. To integrate all of our approaches, we developed a simple web application. Our system was implemented successfully, and we demonstrated its usability on the ChEBI database transformed into RDF form. To demonstrate procedure call functions, we employed compound similarity searching based on OrChem. The application is publicly available at https://bioinfo.uochb.cas.cz/projects/chemRDF.
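From the client side, querying such an endpoint takes a few lines with the SPARQLWrapper package; the endpoint URL and the query below are illustrative placeholders, not the schema of the authors' ChEBI RDF service:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder endpoint and query; substitute the real service URL and
# the vocabulary of the target RDF dataset.
sparql = SPARQLWrapper('https://example.org/chebi/sparql')
sparql.setQuery('''
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?entity ?label
    WHERE { ?entity rdfs:label ?label . }
    LIMIT 5
''')
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

for row in results['results']['bindings']:
    print(row['entity']['value'], row['label']['value'])
```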
Prediction of Hydrolysis Products of Organic Chemicals under Environmental pH Conditions.
Tebes-Stevens, Caroline; Patel, Jay M; Jones, W Jack; Weber, Eric J
2017-05-02
Cheminformatics-based software tools can predict the molecular structure of transformation products using a library of transformation reaction schemes. This paper presents the development of such a library for abiotic hydrolysis of organic chemicals under environmentally relevant conditions. The hydrolysis reaction schemes in the library encode the process science gathered from peer-reviewed literature and regulatory reports. Each scheme has been ranked on a scale of one to six based on the median half-life in a data set compiled from literature-reported hydrolysis rates. These ranks are used to predict the most likely transformation route when more than one structural fragment susceptible to hydrolysis is present in a molecule of interest. Separate rank assignments are established for pH 5, 7, and 9 to represent standard conditions in hydrolysis studies required for registration of pesticides in Organisation for Economic Co-operation and Development (OECD) member countries. The library is applied to predict the likely hydrolytic transformation products for two lists of chemicals, one representative of chemicals used in commerce and the other specific to pesticides, to evaluate which hydrolysis reaction pathways are most likely to be relevant for organic chemicals found in the natural environment.
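A library entry of this kind is essentially a transformation rule plus metadata. One way to encode a single rule, ester hydrolysis to acid plus alcohol, is an RDKit reaction SMARTS; the pattern below is an illustrative reconstruction, not one of the authors' ranked schemes:

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Ester hydrolysis rule: acyl-oxygen cleavage, with the alkoxy oxygen
# (map 3) leaving in the alcohol fragment.
ester_hydrolysis = AllChem.ReactionFromSmarts(
    '[CX3:1](=[OX1:2])[OX2:3][#6:4]>>[CX3:1](=[OX1:2])[OX2H1].[OX2:3][#6:4]')

mol = Chem.MolFromSmiles('CCOC(C)=O')   # ethyl acetate
for products in ester_hydrolysis.RunReactants((mol,)):
    for product in products:
        Chem.SanitizeMol(product)
    print('.'.join(Chem.MolToSmiles(p) for p in products))
# prints: CC(=O)O.CCO  (acetic acid and ethanol)
```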
Data Resources for the Computer-Guided Discovery of Bioactive Natural Products.
Chen, Ya; de Bruyn Kops, Christina; Kirchmair, Johannes
2017-09-25
Natural products from plants, animals, marine life, fungi, bacteria, and other organisms are an important resource for modern drug discovery. Their biological relevance and structural diversity make natural products good starting points for drug design. Natural product-based drug discovery can benefit greatly from computational approaches, which are a valuable precursor or supplementary method to in vitro testing. We present an overview of 25 virtual and 31 physical natural product libraries that are useful for applications in cheminformatics, in particular virtual screening. The overview includes detailed information about each library, the extent of its structural information, and the overlap between different sources of natural products. In terms of chemical structures, there is a large overlap between freely available and commercial virtual natural product libraries. Of particular interest for drug discovery is that at least ten percent of known natural products are readily purchasable and many more natural products and derivatives are available through on-demand sourcing, extraction and synthesis services. Many of the readily purchasable natural products are of small size and hence of relevance to fragment-based drug discovery. There are also an increasing number of macrocyclic natural products and derivatives becoming available for screening.
Czodrowski, Paul
2014-11-01
In the 1960s, the kappa statistic was introduced for the estimation of chance agreement in inter- and intra-rater reliability studies. The kappa statistic was strongly promoted by the medical field, where it could be successfully applied by analyzing diagnoses of identical patient groups. Kappa is well suited for classification tasks where ranking is not considered. The main advantages of kappa are its simplicity and its general applicability to multi-class problems, which is the major difference from the area under the receiver operating characteristic curve. In this manuscript, I will outline the usage of kappa for classification tasks, and I will evaluate the role and uses of kappa specifically in machine learning and cheminformatics.
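Kappa corrects the observed agreement p_o for the agreement p_e expected by chance: kappa = (p_o - p_e) / (1 - p_e). For a quick check of the arithmetic, scikit-learn's implementation can be used:

```python
from sklearn.metrics import cohen_kappa_score

# Toy two-class predictions: observed agreement is 6/8 = 0.75 and both
# raters label half the cases positive, so chance agreement is 0.5;
# kappa = (0.75 - 0.5) / (1 - 0.5) = 0.5.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]

print(cohen_kappa_score(y_true, y_pred))   # 0.5
```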
Genomes to natural products PRediction Informatics for Secondary Metabolomes (PRISM)
Skinnider, Michael A.; Dejong, Chris A.; Rees, Philip N.; Johnston, Chad W.; Li, Haoxin; Webster, Andrew L. H.; Wyatt, Morgan A.; Magarvey, Nathan A.
2015-01-01
Microbial natural products are an invaluable source of evolved bioactive small molecules and pharmaceutical agents. Next-generation and metagenomic sequencing indicates untapped genomic potential, yet high rediscovery rates of known metabolites increasingly frustrate conventional natural product screening programs. New methods to connect biosynthetic gene clusters to novel chemical scaffolds are therefore critical to enable the targeted discovery of genetically encoded natural products. Here, we present PRISM, a computational resource for the identification of biosynthetic gene clusters, prediction of genetically encoded nonribosomal peptides and type I and II polyketides, and bio- and cheminformatic dereplication of known natural products. PRISM implements novel algorithms which render it uniquely capable of predicting type II polyketides, deoxygenated sugars, and starter units, making it a comprehensive genome-guided chemical structure prediction engine. A library of 57 tailoring reactions is leveraged for combinatorial scaffold library generation when multiple potential substrates are consistent with biosynthetic logic. We compare the accuracy of PRISM to existing genomic analysis platforms. PRISM is an open-source, user-friendly web application available at http://magarveylab.ca/prism/.
Reading PDB: perception of molecules from 3D atomic coordinates.
Urbaczek, Sascha; Kolodzik, Adrian; Groth, Inken; Heuser, Stefan; Rarey, Matthias
2013-01-28
The analysis of small molecule crystal structures is a common way to gather valuable information for drug development. The necessary structural data is usually provided in specific file formats containing only element identities and three-dimensional atomic coordinates as reliable chemical information. Consequently, the automated perception of molecular structures from atomic coordinates has become a standard task in cheminformatics. The molecules generated by such methods must be both chemically valid and reasonable to provide a reliable basis for subsequent calculations. This can be a difficult task since the provided coordinates may deviate from ideal molecular geometries due to experimental uncertainties or low resolution. Additionally, the quality of the input data often differs significantly thus making it difficult to distinguish between actual structural features and mere geometric distortions. We present a method for the generation of molecular structures from atomic coordinates based on the recently published NAOMI model. By making use of this consistent chemical description, our method is able to generate reliable results even with input data of low quality. Molecules from 363 Protein Data Bank (PDB) entries could be perceived with a success rate of 98%, a result which could not be achieved with previously described methods. The robustness of our approach has been assessed by processing all small molecules from the PDB and comparing them to reference structures. The complete data set can be processed in less than 3 min, thus showing that our approach is suitable for large scale applications.
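The NAOMI-based method itself is not reproduced here, but the primitive underneath any such perception step, deciding which atom pairs are bonded from distances against covalent radii, can be sketched directly. Radii and tolerance below are rounded textbook values, not the paper's parameters:

```python
import math

# Judge two atoms bonded if their distance does not exceed the sum of
# covalent radii (Angstrom, rounded) plus a tolerance heuristic.
COVALENT_RADII = {'H': 0.31, 'C': 0.76, 'N': 0.71, 'O': 0.66}
TOLERANCE = 0.4

def perceive_bonds(atoms):
    """atoms: list of (element, x, y, z); returns index pairs judged bonded."""
    bonds = []
    for i in range(len(atoms)):
        for j in range(i + 1, len(atoms)):
            (e1, *p1), (e2, *p2) = atoms[i], atoms[j]
            if math.dist(p1, p2) <= COVALENT_RADII[e1] + COVALENT_RADII[e2] + TOLERANCE:
                bonds.append((i, j))
    return bonds

# A C-O pair at bonding distance plus a far-away oxygen:
print(perceive_bonds([('C', 0.0, 0.0, 0.0),
                      ('O', 1.13, 0.0, 0.0),
                      ('O', 5.0, 0.0, 0.0)]))   # [(0, 1)]
```

Real perception additionally has to assign bond orders, charges and aromaticity, which is where consistent chemical models such as NAOMI earn their keep.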
Jamal, Salma; Scaria, Vinod
2014-01-01
Background. Traditional Chinese medicine encompasses a well-established alternative system of medicine based on a broad range of herbal formulations and is practiced extensively in the region for the treatment of a wide variety of diseases. In recent years, several reports have described in-depth studies of the molecular ingredients of traditional Chinese medicines and their biological activities, including anti-bacterial activities. The availability of a well-curated dataset of molecular ingredients of traditional Chinese medicines, accurate in-silico cheminformatics models for data mining for antitubercular agents, and computational filters to prioritize molecules prompted us to search for potential hits in these datasets. Results. We used a consensus approach to predict molecules with potential antitubercular activities from a large dataset of molecular ingredients of traditional Chinese medicines available in the public domain. We further prioritized 160 molecules based on five computational filters (SMARTSfilter) so as to avoid potentially undesirable molecules. We further examined the molecules for permeability across the Mycobacterial cell wall and for potential activities against non-replicating and drug-tolerant Mycobacteria. Additional in-depth literature surveys of the reported antitubercular activities of the molecular ingredients and their sources were considered to support the prioritization. Conclusions. Our analysis suggests that datasets of molecular ingredients of traditional Chinese medicines offer a new opportunity to mine for potential biological activities. In this report, we suggest a proof-of-concept methodology to prioritize molecules for further experimental assays using a variety of computational tools. We additionally suggest that a subset of the prioritized molecules could be evaluated for tuberculosis owing to their effect against non-replicating tuberculosis as well as the hepato-protection offered by the sources of these ingredients.
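Filters of the SMARTSfilter kind are substructure alerts: a molecule is deprioritized if it matches any alert pattern. The two patterns below are generic illustrations written for this sketch, not the actual rule set used in the study:

```python
from rdkit import Chem

# Illustrative alert patterns (aldehyde, Michael acceptor); a production
# filter set would contain many curated SMARTS rules.
ALERTS = {
    'aldehyde': Chem.MolFromSmarts('[CX3H1](=O)[#6]'),
    'michael_acceptor': Chem.MolFromSmarts('[CX3]=[CX3]C(=O)'),
}

def flagged(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return [name for name, patt in ALERTS.items() if mol.HasSubstructMatch(patt)]

print(flagged('O=Cc1ccccc1'))    # ['aldehyde'] (benzaldehyde)
print(flagged('CC(C)=CC(C)=O'))  # ['michael_acceptor'] (mesityl oxide)
print(flagged('CCO'))            # [] (ethanol passes)
```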
NASA Astrophysics Data System (ADS)
Alencar Filho, Edilson B.; Santos, Aline A.; Oliveira, Boaz G.
2017-04-01
This work uses quantum chemical methods and cheminformatics strategies to understand the structural profile and reactivity of α-nucleophile compounds such as oximes, amidoximes and hydroxamic acids, as related to the hydrolysis rate of organophosphates. A theoretical conformational study of 41 compounds was carried out with the PM3 semiempirical Hamiltonian, followed by geometry optimization at the B3LYP/6-31+G(d,p) level of theory, complemented by the Polarized Continuum Model (PCM) to simulate the aqueous environment. In line with the experimental hypothesis about hydrolytic power, the strength of the Intramolecular Hydrogen Bonds (IHBs), in light of Bader's Quantum Theory of Atoms in Molecules (QTAIM), is related to the preferential conformations of the α-nucleophiles. A set of 1,666 E-Dragon descriptors was subjected to variable selection with the Ordered Predictor Selection (OPS) algorithm. Five descriptors, including atomic charges obtained from the Natural Bond Orbital (NBO) protocol together with a fragment index associated with the presence/absence of IHBs, provided a Quantitative Structure-Property Relationship (QSPR) model via Multiple Linear Regression (MLR). This model showed good validation parameters (R² = 0.80, Q²LOO = 0.67 and Q²ext = 0.81) and allowed the identification of significant physicochemical features of the molecular scaffold for designing compounds potentially more active against organophosphorus poisoning.
Drug discovery in the next millennium.
Ohlstein, E H; Ruffolo, R R; Elliott, J D
2000-01-01
Selection and validation of novel molecular targets have become of paramount importance in light of the plethora of new potential therapeutic drug targets that have emerged from human gene sequencing. In response to this revolution within the pharmaceutical industry, the development of high-throughput methods in both biology and chemistry has been necessitated. This review addresses these technological advances as well as several new areas that have been created by necessity to deal with this new paradigm, such as bioinformatics, cheminformatics, and functional genomics. With many of these key components of future drug discovery now in place, it is possible to map out a critical path for this process that will be used into the new millennium.
Knowns and unknowns in metabolomics identified by multidimensional NMR and hybrid MS/NMR methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bingol, Kerem; Brüschweiler, Rafael
Metabolomics continues to make rapid progress through the development of new and better methods and their applications to gain insight into the metabolism of a wide range of different biological systems from a systems biology perspective. Customization of NMR databases and search tools allows the faster and more accurate identification of known metabolites, whereas the identification of unknowns, without a need for extensive purification, requires new strategies to integrate NMR with mass spectrometry, cheminformatics, and computational methods. For some applications, the use of covalent and non-covalent attachments in the form of labeled tags or nanoparticles can significantly reduce the complexity of these tasks.
Simulation-based training for nurses: Systematic review and meta-analysis.
Hegland, Pål A; Aarlie, Hege; Strømme, Hilde; Jamtvedt, Gro
2017-07-01
Simulation-based training is a widespread strategy to improve health-care quality. However, its effect on registered nurses has not previously been established in systematic reviews. The aim of this systematic review is to evaluate the effect of simulation-based training on nurses' skills and knowledge. We searched CDSR, DARE, HTA, CENTRAL, CINAHL, MEDLINE, Embase, ERIC, and SveMed+ for randomised controlled trials (RCTs) evaluating the effect of simulation-based training among nurses. Searches were completed in December 2016. Two reviewers independently screened abstracts and full texts, extracted data, and assessed risk of bias. We compared simulation-based training to other learning strategies, high-fidelity simulation to other simulation strategies, and different organisations of simulation training. Data were analysed through meta-analysis and narrative syntheses. GRADE was used to assess the quality of evidence. Fifteen RCTs met the inclusion criteria. For the comparison of simulation-based training to other learning strategies on nurses' skills, six studies in the meta-analysis showed a significant but small effect in favour of simulation (SMD -1.09, CI -1.72 to -0.47), with large heterogeneity (I² = 85%). For the other comparisons, there was large between-study variation in results. The quality of evidence for all comparisons was graded as low. The effect of simulation-based training varies substantially between studies. Our meta-analysis showed a significant effect of simulation training compared to other learning strategies, but the quality of evidence was low, indicating uncertainty. Other comparisons showed inconsistent results. Based on our findings, simulation training appears to be an effective strategy to improve nurses' skills, but further good-quality RCTs with adequate sample sizes are needed.
Chepelev, Leonid L; Dumontier, Michel
2011-05-19
Over the past several centuries, chemistry has permeated virtually every facet of human lifestyle, enriching fields as diverse as medicine, agriculture, manufacturing, warfare, and electronics, among numerous others. Unfortunately, application-specific, incompatible chemical information formats and representation strategies have emerged as a result of such diverse adoption of chemistry. Although a number of efforts have been dedicated to unifying the computational representation of chemical information, disparities between the various chemical databases still persist and stand in the way of cross-domain, interdisciplinary investigations. Through a common syntax and formal semantics, Semantic Web technology offers the ability to accurately represent, integrate, reason about and query across diverse chemical information. Here we specify and implement the Chemical Entity Semantic Specification (CHESS) for the representation of polyatomic chemical entities, their substructures, bonds, atoms, and reactions using Semantic Web technologies. CHESS provides means to capture aspects of their corresponding chemical descriptors, connectivity, functional composition, and geometric structure while specifying mechanisms for data provenance. We demonstrate that using our readily extensible specification, it is possible to efficiently integrate multiple disparate chemical data sources, while retaining appropriate correspondence of chemical descriptors, with very little additional effort. We demonstrate the impact of some of our representational decisions on the performance of chemically-aware knowledgebase searching and rudimentary reaction candidate selection. Finally, we provide access to the tools necessary to carry out chemical entity encoding in CHESS, along with a sample knowledgebase. By harnessing the power of Semantic Web technologies with CHESS, it is possible to provide a means of facile cross-domain chemical knowledge integration with full preservation of data correspondence and provenance. Our representation builds on existing cheminformatics technologies and, by the virtue of RDF specification, remains flexible and amenable to application- and domain-specific annotations without compromising chemical data integration. We conclude that the adoption of a consistent and semantically-enabled chemical specification is imperative for surviving the coming chemical data deluge and supporting systems science research.
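In RDF terms, the encoding style CHESS builds on can be shown with a few triples via rdflib; the namespace and property names below are invented for the illustration and are not the actual CHESS vocabulary:

```python
from rdflib import Graph, Literal, Namespace

# Invented namespace standing in for a CHESS-like chemical vocabulary.
CHEM = Namespace('http://example.org/chess-like#')

g = Graph()
g.bind('chem', CHEM)
g.add((CHEM.ethanol, CHEM.hasSmiles, Literal('CCO')))
g.add((CHEM.ethanol, CHEM.molecularWeight, Literal(46.07)))

print(g.serialize(format='turtle'))   # Turtle view of the two triples
```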
Thiopurine Drugs Repositioned as Tyrosinase Inhibitors
Choi, Joonhyeok; Lee, You-Mie; Jee, Jun-Goo
2017-01-01
Drug repositioning is the application of existing drugs to new uses and has the potential to reduce the time and cost required for the typical drug discovery process. In this study, we repositioned thiopurine drugs used for the treatment of acute leukaemia as new tyrosinase inhibitors. Tyrosinase catalyses two successive oxidations in melanin biosynthesis: the conversions of tyrosine to dihydroxyphenylalanine (DOPA) and of DOPA to dopaquinone. Continuous efforts are underway to discover small-molecule inhibitors of tyrosinase for therapeutic and cosmetic purposes. Structure-based virtual screening predicted inhibitor candidates from the US Food and Drug Administration (FDA)-approved drugs. Enzyme assays confirmed the thiopurine leukaemia drug thioguanine as a tyrosinase inhibitor with an inhibitory constant of 52 μM. Two other thiopurine drugs, mercaptopurine and azathioprine, were also evaluated for tyrosinase inhibition; mercaptopurine caused stronger inhibition than thioguanine did, whereas azathioprine was a poor inhibitor. The inhibitory constant of mercaptopurine (16 μM) was comparable to that of the well-known inhibitor kojic acid (13 μM). A cell-based assay using B16F10 melanoma cells confirmed that the compounds inhibit mammalian tyrosinase. In particular, 50 μM thioguanine reduced the melanin content by 57%, without apparent cytotoxicity. Cheminformatics showed that the thiopurine drugs share little chemical similarity with known tyrosinase inhibitors.
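The "little chemical similarity" claim is the kind of statement usually backed by a fingerprint Tanimoto comparison, sketched below with RDKit. The SMILES are thiol-tautomer forms written from memory and should be verified against a structure database before reuse:

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# SMILES written from memory (thiol tautomers); verify before reuse.
molecules = {
    'thioguanine': 'Nc1nc(S)c2nc[nH]c2n1',
    'mercaptopurine': 'Sc1ncnc2[nH]cnc12',
    'kojic_acid': 'OCC1=CC(=O)C(O)=CO1',
}
fps = {name: AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, 2048)
       for name, s in molecules.items()}

# Expect the two purines to be far more alike than either is to kojic acid.
print(DataStructs.TanimotoSimilarity(fps['thioguanine'], fps['mercaptopurine']))
print(DataStructs.TanimotoSimilarity(fps['thioguanine'], fps['kojic_acid']))
```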
Elkamhawy, Ahmed; Paik, Sora; Hassan, Ahmed H E; Lee, Yong Sup; Roh, Eun Joo
2017-12-01
Searching for hit compounds within the huge chemical space resembles the attempt to find a needle in a haystack. Cheminformatics-guided selection of a few representative molecules from a rationally designed virtual combinatorial library is a powerful tool to confront this challenge, speed up hit identification, and cut costs. Herein, this approach has been applied to identify hit compounds with novel scaffolds able to inhibit EGFR kinase. From a generated virtual library, six 4-aryloxy-5-aminopyrimidine scaffold-derived compounds were selected, synthesized, and evaluated as hit EGFR inhibitors. 4-Aryloxy-5-benzamidopyrimidines inhibited EGFR with IC50 values of 1.05-5.37 μM. A cell-based assay of the most potent EGFR inhibitor hit (10ac) confirmed its cytotoxicity against different cancer cells. Although no EGFR, HER2, or VEGFR1 inhibition was elicited by the 4-aryloxy-5-(thio)ureidopyrimidine derivatives, cell-based evaluation suggested them as antiproliferative hits acting by other mechanism(s). A molecular docking study provided a plausible explanation for the inability of the 4-aryloxy-5-(thio)ureidopyrimidines to inhibit EGFR and suggested a reasonable binding mode for the 4-aryloxy-5-benzamidopyrimidines, which provides a basis for developing more optimized ligands.
Shang, Jun; Hu, Ben; Wang, Junmei; Zhu, Feng; Kang, Yu; Li, Dan; Sun, Huiyong; Kong, De-Xin; Hou, Tingjun
2018-06-07
This is a new golden age for drug discovery based on natural products derived from both marine and terrestrial sources. Herein, a straightforward but important question is "what are the major structural differences between marine natural products (MNPs) and terrestrial natural products (TNPs)?" To answer this question, we analyzed the important physicochemical properties, structural features, and drug-likeness of the two types of natural products and discussed their differences from the perspective of evolution. In general, MNPs have lower solubility and are often larger than TNPs. On average, particularly from the perspective of unique fragments and scaffolds, MNPs possess more long chains and large rings, especially 8- to 10-membered rings. MNPs also have more nitrogen atoms and halogens, notably bromines, and fewer oxygen atoms, suggesting that MNPs may be synthesized by more diverse biosynthetic pathways than TNPs. Analysis of the frequently occurring Murcko frameworks in MNPs and TNPs also reveals a striking difference between the two. The scaffolds of the former tend to be longer and often contain ester bonds connected to 10-membered rings, while the scaffolds of the latter tend to be shorter and often bear more stable ring systems and bond types. In addition, predictions from a naïve Bayesian drug-likeness classification model suggest that most compounds in both MNPs and TNPs are drug-like, although MNPs are slightly more drug-like than TNPs. We believe that MNPs and TNPs with novel drug-like scaffolds have great potential as drug leads or drug candidates in drug discovery campaigns.
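The Murcko-framework comparison above can be reproduced in outline with open-source tools. A minimal sketch, assuming RDKit and an arbitrary toy SMILES list (the study's MNP/TNP collections are not reproduced here):

```python
from collections import Counter

from rdkit import Chem
from rdkit.Chem.Scaffolds import MurckoScaffold

smiles = ["CC(=O)Oc1ccccc1C(=O)O", "O=C(O)c1ccccc1", "c1ccc2ccccc2c1"]  # toy set

scaffolds = Counter()
for smi in smiles:
    mol = Chem.MolFromSmiles(smi)
    scaffold = MurckoScaffold.GetScaffoldForMol(mol)  # strip side chains
    scaffolds[Chem.MolToSmiles(scaffold)] += 1

# Most frequently occurring frameworks, analogous to the analysis above
print(scaffolds.most_common())
```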
Pawlowski, Roger P.; Phipps, Eric T.; Salinger, Andrew G.
2012-01-01
An approach for incorporating embedded simulation and analysis capabilities in complex simulation codes through template-based generic programming is presented. This approach relies on templating and operator overloading within the C++ language to transform a given calculation into one that can compute a variety of additional quantities that are necessary for many state-of-the-art simulation and analysis algorithms. An approach for incorporating these ideas into complex simulation codes through general graph-based assembly is also presented. These ideas have been implemented within a set of packages in the Trilinos framework and are demonstrated on a simple problem from chemical engineering.
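The core idea, running one unmodified calculation so that operator overloading also produces extra quantities such as derivatives, can be miniaturized outside C++. A hedged Python sketch with a dual-number class standing in for Trilinos' templated types (an analogy, not Trilinos code):

```python
# A dual number carries a value and its derivative through an unmodified
# calculation via operator overloading; minimal illustrative analogy only.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def _lift(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._lift(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = self._lift(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # an ordinary "simulation" kernel

y = f(Dual(2.0, 1.0))              # seed dx/dx = 1
print(y.val, y.der)                # 17.0 and f'(2) = 14.0
```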
Genomes to natural products PRediction Informatics for Secondary Metabolomes (PRISM).
Skinnider, Michael A; Dejong, Chris A; Rees, Philip N; Johnston, Chad W; Li, Haoxin; Webster, Andrew L H; Wyatt, Morgan A; Magarvey, Nathan A
2015-11-16
Microbial natural products are an invaluable source of evolved bioactive small molecules and pharmaceutical agents. Next-generation and metagenomic sequencing indicates untapped genomic potential, yet high rediscovery rates of known metabolites increasingly frustrate conventional natural product screening programs. New methods to connect biosynthetic gene clusters to novel chemical scaffolds are therefore critical to enable the targeted discovery of genetically encoded natural products. Here, we present PRISM, a computational resource for the identification of biosynthetic gene clusters, prediction of genetically encoded nonribosomal peptides and type I and II polyketides, and bio- and cheminformatic dereplication of known natural products. PRISM implements novel algorithms which render it uniquely capable of predicting type II polyketides, deoxygenated sugars, and starter units, making it a comprehensive genome-guided chemical structure prediction engine. A library of 57 tailoring reactions is leveraged for combinatorial scaffold library generation when multiple potential substrates are consistent with biosynthetic logic. We compare the accuracy of PRISM to existing genomic analysis platforms. PRISM is an open-source, user-friendly web application available at http://magarveylab.ca/prism/.
The ChEMBL database as linked open data
2013-01-01
Background: Making data available as Linked Data using the Resource Description Framework (RDF) promotes integration with other web resources. RDF documents can natively link to related data, and others can link back using Uniform Resource Identifiers (URIs). RDF makes the data machine-readable and uses extensible vocabularies for additional information, making it easier to scale up inference and data analysis. Results: This paper describes recent developments in an ongoing project converting data from the ChEMBL database into RDF triples. Relative to earlier versions, this updated version of ChEMBL-RDF uses recently introduced ontologies, including CHEMINF and CiTO; exposes more information from the database; and is now available as dereferenceable, linked data. To demonstrate these new features, we present novel use cases showing further integration with other web resources, including Bio2RDF, Chem2Bio2RDF, and ChemSpider, and showing the use of standard ontologies for querying. Conclusions: We have illustrated the advantages of using open standards and ontologies to link the ChEMBL database to other databases. Using those links and the knowledge encoded in standards and ontologies, the ChEMBL-RDF resource creates a foundation for integrated semantic web cheminformatics applications, such as the presented decision support. PMID: 23657106
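A minimal sketch of consuming such RDF data with rdflib; the local file name is hypothetical, and the query uses only the generic rdfs:label predicate rather than the actual ChEMBL-RDF vocabulary:

```python
from rdflib import Graph

g = Graph()
g.parse("chembl_subset.ttl", format="turtle")  # hypothetical local RDF export

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?s ?label WHERE { ?s rdfs:label ?label . } LIMIT 10
"""
for s, label in g.query(query):
    print(s, label)  # each row links a resource URI to its label
```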
Sastry, Madhavi; Lowrie, Jeffrey F; Dixon, Steven L; Sherman, Woody
2010-05-24
A systematic virtual screening study on 11 pharmaceutically relevant targets has been conducted to investigate the interrelation between 8 two-dimensional (2D) fingerprinting methods, 13 atom-typing schemes, 13 bit scaling rules, and 12 similarity metrics using the new cheminformatics package Canvas. In total, 157 872 virtual screens were performed to assess the ability of each combination of parameters to identify actives in a database screen. In general, fingerprint methods, such as MOLPRINT2D, Radial, and Dendritic that encode information about local environment beyond simple linear paths outperformed other fingerprint methods. Atom-typing schemes with more specific information, such as Daylight, Mol2, and Carhart were generally superior to more generic atom-typing schemes. Enrichment factors across all targets were improved considerably with the best settings, although no single set of parameters performed optimally on all targets. The size of the addressable bit space for the fingerprints was also explored, and it was found to have a substantial impact on enrichments. Small bit spaces, such as 1024, resulted in many collisions and in a significant degradation in enrichments compared to larger bit spaces that avoid collisions.
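The bit-space effect reported above is easy to demonstrate: folding a fingerprint into fewer bits makes distinct features collide. A sketch with RDKit Morgan fingerprints standing in for Canvas (which is proprietary); aspirin is an arbitrary example molecule:

```python
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin, arbitrary example
for n_bits in (64, 1024, 65536):
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    # fewer on-bits at small sizes means distinct features collided
    print(n_bits, fp.GetNumOnBits())
```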
2014-01-01
Background: A challenge for drug-of-abuse testing is presented by ‘designer drugs’, compounds typically discovered by modification of existing clinical drug classes such as amphetamines and cannabinoids. Drug-of-abuse screening immunoassays directed at amphetamine or methamphetamine detect only a small subset of designer amphetamine-like drugs, and immunoassays designed for tetrahydrocannabinol metabolites generally do not cross-react with synthetic cannabinoids lacking the classic cannabinoid chemical backbone. This complicates detecting and identifying whether a patient has taken a molecule of one class or another, impacting clinical care. Methods: Cross-reactivity data from immunoassays specifically targeting designer amphetamine-like and synthetic cannabinoid drugs were collected from multiple published sources, and virtual chemical libraries for molecular similarity analysis were built. The virtual library for synthetic cannabinoid analysis contained a total of 169 structures, while the virtual library for amphetamine-type stimulants contained 288 compounds. Two-dimensional (2D) similarity for each test compound was compared to the target molecule of the immunoassay undergoing analysis. Results: 2D similarity differentiated between cross-reactive and non-cross-reactive compounds for immunoassays targeting mephedrone/methcathinone, 3,4-methylenedioxypyrovalerone, benzylpiperazine, mephentermine, and synthetic cannabinoids. Conclusions: In this study, we applied 2D molecular similarity analysis to the designer amphetamine-type stimulants and synthetic cannabinoids. Similarity calculations can be used to decide more efficiently which drugs and metabolites should be tested in cross-reactivity studies, as well as to design experiments and potentially predict antigens that would lead to immunoassays with cross-reactivity for a broader array of designer drugs. PMID: 24851137
Jamal, Salma; Scaria, Vinod
2013-11-19
Leishmaniasis is a neglected tropical disease caused by parasites of the genus Leishmania that affects approximately 12 million individuals worldwide. The current drugs used to treat leishmaniasis are highly toxic, and the widespread emergence of drug-resistant strains necessitates the development of new therapeutic options. Publicly available high-throughput screening data have made it possible to generate computational predictive models that can assess the active scaffolds in a chemical library before ADME/toxicity properties are evaluated in biological trials. In the present study, we used publicly available high-throughput screening datasets of chemical moieties assayed against the pyruvate kinase enzyme of L. mexicana (LmPK). A machine learning approach was used to create computational models capable of predicting the biological activity of novel antileishmanial compounds. Further, we evaluated the molecules using a substructure-based approach to identify the common substructures contributing to their activity. We generated computational models based on machine learning methods and evaluated their performance using various statistical figures of merit. The random-forest-based approach proved the most sensitive and also gave better accuracy and ROC values. We additionally applied the substructure-based approach to identify potentially enriched substructures in the active dataset. We believe the models developed in the present study would reduce the cost and length of clinical studies, so that newer drugs would appear on the market faster, providing better healthcare options to patients.
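As a hedged sketch of the modeling step described above (not the authors' pipeline), here is a random-forest activity classifier evaluated by cross-validation with scikit-learn; the descriptor matrix and labels are synthetic placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))       # placeholder descriptor matrix
y = rng.integers(0, 2, size=200)     # placeholder active/inactive labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```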
Visual analytics in cheminformatics: user-supervised descriptor selection for QSAR methods.
Martínez, María Jimena; Ponzoni, Ignacio; Díaz, Mónica F; Vazquez, Gustavo E; Soto, Axel J
2015-01-01
The design of QSAR/QSPR models is a challenging problem, where the selection of the most relevant descriptors constitutes a key step of the process. Several feature selection methods that address this step concentrate on statistical associations among descriptors and target properties, leaving chemical knowledge out of the analysis; as a result, the interpretability and generality of the QSAR/QSPR models obtained by these methods are drastically affected. An approach for integrating domain expert knowledge in the selection process is therefore needed to increase confidence in the final set of descriptors. In this paper we propose a software tool, named Visual and Interactive DEscriptor ANalysis (VIDEAN), that combines statistical methods with interactive visualizations for choosing a set of descriptors for predicting a target property. Domain expertise can be added to the feature selection process by means of an interactive visual exploration of the data, aided by statistical tools and metrics based on information theory. Coordinated visual representations capture different relationships and interactions among descriptors, target properties, and candidate subsets of descriptors. The capabilities of the proposed software were assessed through different scenarios. These scenarios reveal how an expert can use the tool to choose one subset of descriptors from a group of candidate subsets, or to modify existing descriptor subsets and even incorporate new descriptors according to his or her own knowledge of the target property. The reported experiences showed the suitability of our software for selecting sets of descriptors with low cardinality, high interpretability, low redundancy, and high statistical performance in a visual, exploratory way. The resulting tool thus allows the integration of a chemist's expertise in the descriptor selection process with low cognitive effort, in contrast with the alternative of an ad hoc manual analysis of the selected descriptors. Graphical abstract: VIDEAN allows the visual analysis of candidate subsets of descriptors for QSAR/QSPR. In the two panels on the top, users can interactively explore numerical correlations as well as co-occurrences in the candidate subsets through two interactive graphs.
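VIDEAN's statistical aids include information-theoretic metrics. A minimal sketch of one such metric, ranking candidate descriptors by mutual information with a target property via scikit-learn; the data is synthetic:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))              # candidate descriptor columns
y = X[:, 0] + 0.1 * rng.normal(size=200)   # property driven by descriptor 0

scores = mutual_info_regression(X, y, random_state=0)
print(scores.round(3))                     # descriptor 0 should score highest
```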
Reaction-Infiltration Instabilities in Fractured and Porous Rocks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ladd, Anthony
In this project we are developing a multiscale analysis of the evolution of fracture permeability, using numerical simulations and linear stability analysis. Our simulations include fully three-dimensional simulations of the fracture topography, fluid flow, and reactant transport, two-dimensional simulations based on aperture models, and linear stability analysis.
Information Retrieval and Text Mining Technologies for Chemistry.
Krallinger, Martin; Rabal, Obdulia; Lourenço, Anália; Oyarzabal, Julen; Valencia, Alfonso
2017-06-28
Efficient access to chemical information contained in scientific literature, patents, technical reports, or the web is a pressing need shared by researchers and patent attorneys from different chemical disciplines. Retrieval of important chemical information in most cases starts with finding relevant documents for a particular chemical compound or family. Targeted retrieval of chemical documents is closely connected to the automatic recognition of chemical entities in the text, which commonly involves the extraction of the entire list of chemicals mentioned in a document, including any associated information. In this Review, we provide a comprehensive and in-depth description of fundamental concepts, technical implementations, and current technologies for meeting these information demands. A strong focus is placed on community challenges addressing systems performance, in particular the CHEMDNER and CHEMDNER-patents tasks of BioCreative IV and V, respectively. Considering the growing interest in the construction of automatically annotated chemical knowledge bases that integrate chemical information and biological data, cheminformatics approaches for mapping extracted chemical names onto chemical structures and their subsequent annotation, together with text mining applications for linking chemistry with biological information, are also presented. Finally, future trends and current challenges are highlighted as a roadmap proposal for research in this emerging field.
New Toxico-Cheminformatics & Computational Toxicology ...
EPA’s National Center for Computational Toxicology is building capabilities to support a new paradigm for toxicity screening and prediction. The DSSTox project is improving public access to quality structure-annotated chemical toxicity information in less summarized forms than traditionally employed in SAR modeling, and in ways that facilitate data mining and data read-across. The DSSTox Structure-Browser provides structure searchability across the entire published DSSTox toxicity-related inventory and is enabling linkages between previously isolated toxicity data resources. As of early March 2008, the public DSSTox inventory has been integrated into PubChem, allowing a user to take full advantage of PubChem structure-activity and bioassay clustering features. The most recent DSSTox version of the Carcinogenic Potency Database file (CPDBAS) illustrates ways in which various summary definitions of carcinogenic activity can be employed in modeling and data mining. Phase I of the ToxCast™ project is generating high-throughput screening data from several hundred biochemical and cell-based assays for a set of 320 chemicals, mostly pesticide actives, with rich toxicology profiles. Incorporating and expanding traditional SAR concepts in this new high-throughput and data-rich world poses conceptual and practical challenges but also holds great promise for improving predictive capabilities.
The Design and the Formative Evaluation of a Web-Based Course for Simulation Analysis Experiences
ERIC Educational Resources Information Center
Tao, Yu-Hui; Guo, Shin-Ming; Lu, Ya-Hui
2006-01-01
Simulation output analysis has received little attention compared with modeling and programming in real-world simulation applications. This is further evidenced by our observation that students and beginners acquire neither adequately detailed knowledge nor relevant experience of simulation output analysis in traditional classroom learning. With…
A Simulation-as-a-Service Framework Facilitating WebGIS-Based Installation Planning
NASA Astrophysics Data System (ADS)
Zheng, Z.; Chang, Z. Y.; Fei, Y. F.
2017-09-01
Installation planning is constrained by both natural and social conditions, especially for spatially sparse but functionally connected facilities. Simulation is important for the proper spatial deployment and functional configuration of facilities, so that they form a cohesive and supportive system that meets users' operational needs. Based on a requirements analysis, we propose a framework that combines GIS and agent-based simulation to overcome the shortcomings of traditional GIS in temporal analysis and task simulation. In this framework, agent-based simulation runs as a service on the server and exposes basic simulation functions, such as scenario configuration, simulation control, and simulation data retrieval, to installation planners. At the same time, the simulation service can utilize various kinds of geoprocessing services in the agents' process logic to make sophisticated spatial inferences and analyses. This simulation-as-a-service framework has many potential benefits, such as ease of use, on-demand availability, shared understanding, and improved performance. Finally, we present a preliminary implementation of this concept using the ArcGIS JavaScript API 4.0 and ArcGIS for Server, showing how trip planning and driving can be carried out by agents.
Cloud-Based Orchestration of a Model-Based Power and Data Analysis Toolchain
NASA Technical Reports Server (NTRS)
Post, Ethan; Cole, Bjorn; Dinkel, Kevin; Kim, Hongman; Lee, Erich; Nairouz, Bassem
2016-01-01
The proposed Europa Mission concept contains many engineering and scientific instruments that consume varying amounts of power and produce varying amounts of data throughout the mission. System-level power and data usage must be well understood and analyzed to verify design requirements. Numerous cross-disciplinary tools and analysis models are used to simulate the system-level spacecraft power and data behavior. This paper addresses the problem of orchestrating a consistent set of models, tools, and data in a unified analysis toolchain when ownership is distributed among numerous domain experts. An analysis and simulation environment was developed as a way to manage the complexity of the power and data analysis toolchain and to reduce the simulation turnaround time. A system model data repository is used as the trusted store of high-level inputs and results while other remote servers are used for archival of larger data sets and for analysis tool execution. Simulation data passes through numerous domain-specific analysis tools and end-to-end simulation execution is enabled through a web-based tool. The use of a cloud-based service facilitates coordination among distributed developers and enables scalable computation and storage needs, and ensures a consistent execution environment. Configuration management is emphasized to maintain traceability between current and historical simulation runs and their corresponding versions of models, tools and data.
Local-feature analysis for automated coarse-graining of bulk-polymer molecular dynamics simulations.
Xue, Y; Ludovice, P J; Grover, M A
2012-12-01
A method for automated coarse-graining of bulk polymers is presented, using the data-mining tool of local feature analysis. Most existing methods for polymer coarse-graining define superatoms based on their covalent bonding topology along the polymer backbone, but here superatoms are defined based only on their correlated motions, as observed in molecular dynamics simulations. Correlated atomic motions are identified in the simulation data using local feature analysis, between atoms in the same or in different polymer chains. Groups of highly correlated atoms constitute the superatoms in the coarse-graining scheme, and the positions of their seed coordinates are then projected forward in time. Based only on the seed positions, local feature analysis enables the full reconstruction of all atomic positions. This reconstruction suggests an iterative scheme for reducing the computational cost of the simulations: reconstruct the atomic positions, initialize another short molecular dynamics simulation, identify new superatoms, and again project forward in time.
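A hedged sketch of the grouping step: the paper uses local feature analysis, but the simpler stand-in below (plain correlation of motion time series plus hierarchical clustering, on synthetic trajectories) conveys how correlated atoms become superatoms:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
base = rng.normal(size=(2, 500))               # two independent underlying motions
traj = np.vstack([base[0] + 0.1 * rng.normal(size=500) for _ in range(3)] +
                 [base[1] + 0.1 * rng.normal(size=500) for _ in range(3)])

corr = np.corrcoef(traj)                       # atom-atom motion correlation
dist = 1.0 - np.abs(corr)                      # correlated atoms -> small distance
Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
print(fcluster(Z, t=2, criterion="maxclust"))  # two "superatoms" expected
```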
NASA Technical Reports Server (NTRS)
Phillips, Dave; Haas, William; Barth, Tim; Benjamin, Perakath; Graul, Michael; Bagatourova, Olga
2005-01-01
Range Process Simulation Tool (RPST) is a computer program that assists managers in rapidly predicting and quantitatively assessing the operational effects of proposed technological additions to, and/or upgrades of, complex facilities and engineering systems such as the Eastern Test Range. Originally designed for application to space transportation systems, RPST is also suitable for assessing effects of proposed changes in industrial facilities and large organizations. RPST follows a model-based approach that includes finite-capacity schedule analysis and discrete-event process simulation. A component-based, scalable, open architecture makes RPST easily and rapidly tailorable for diverse applications. Specific RPST functions include: (1) definition of analysis objectives and performance metrics; (2) selection of process templates from a process-template library; (3) configuration of process models for detailed simulation and schedule analysis; (4) design of operations-analysis experiments; (5) schedule and simulation-based process analysis; and (6) optimization of performance by use of genetic algorithms and simulated annealing. The main benefits afforded by RPST are provision of information that can be used to reduce costs of operation and maintenance, and the capability for affordable, accurate, and reliable prediction and exploration of the consequences of many alternative proposed decisions.
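A minimal sketch of the discrete-event core such a tool is built on: a simulation clock driven by a priority queue of timestamped events. The events themselves are invented for illustration:

```python
import heapq

# (timestamp, description) pairs; a real model would generate these
events = [(0.0, "job arrives"), (1.5, "processing done"), (2.0, "job departs")]
heapq.heapify(events)

clock = 0.0
while events:
    clock, what = heapq.heappop(events)    # advance clock to next event
    print(f"t={clock:4.1f}  {what}")
    # a real model would schedule follow-on events here, e.g.:
    # heapq.heappush(events, (clock + service_time, "next event"))
```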
Sustainable Practices in Medicinal Chemistry Part 2: Green by Design.
Aliagas, Ignacio; Berger, Raphaëlle; Goldberg, Kristin; Nishimura, Rachel T; Reilly, John; Richardson, Paul; Richter, Daniel; Sherer, Edward C; Sparling, Brian A; Bryan, Marian C
2017-07-27
With the development of ever-expanding synthetic methodologies, a medicinal chemist's toolkit continues to swell. However, with finite time and resources, as well as a growing understanding of our field's environmental impact, it is critical to refine what can be made down to what should be made. This review highlights multiple cheminformatic approaches in drug discovery that can inform and triage design and execution, improving the likelihood of rapidly generating high-value molecules in a more sustainable manner. This strategy gives chemists the tools to design and refine vast libraries, stress-test "druglikeness", and rapidly identify SAR trends. Project success, i.e., identification of a clinical candidate, is then reached faster and with fewer molecules, with the farther-reaching ramification of using fewer resources and generating less waste, thereby helping "green" our field.
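One concrete example of the cheminformatic triage described above is a simple drug-likeness gate. A sketch of a Lipinski rule-of-five filter with RDKit, offered as an illustration rather than the review's specific method:

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_rule_of_five(smiles: str) -> bool:
    """True if the molecule satisfies Lipinski's rule of five."""
    mol = Chem.MolFromSmiles(smiles)
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)

print(passes_rule_of_five("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin -> True
```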
Hadadi, Noushin; Hafner, Jasmin; Shajkofci, Adrian; Zisaki, Aikaterini; Hatzimanikatis, Vassily
2016-10-21
Because the complexity of metabolism cannot be intuitively understood or analyzed, computational methods are indispensable for studying biochemistry and deepening our understanding of cellular metabolism to promote new discoveries. We used the computational framework BNICE.ch along with cheminformatic tools to assemble the whole theoretical reactome from the known metabolome through expansion of the known biochemistry presented in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. We constructed the ATLAS of Biochemistry, a database of all theoretical biochemical reactions based on known biochemical principles and compounds. ATLAS includes more than 130 000 hypothetical enzymatic reactions that connect two or more KEGG metabolites through novel enzymatic reactions that have never been reported to occur in living organisms. Moreover, ATLAS reactions integrate 42% of KEGG metabolites that are not currently present in any KEGG reaction into one or more novel enzymatic reactions. The generated repository of information is organized in a Web-based database (http://lcsb-databases.epfl.ch/atlas/) that allows the user to search for all possible routes from any substrate compound to any product. The resulting pathways involve known and novel enzymatic steps that may indicate unidentified enzymatic activities and provide potential targets for protein engineering. Our approach of introducing novel biochemistry into pathway design and associated databases will be important for synthetic biology and metabolic engineering.
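ATLAS's route search (any substrate to any product) is, at its core, pathfinding over a reaction graph. A minimal sketch using breadth-first search over an invented toy network:

```python
from collections import deque

# compound -> products reachable in one enzymatic step (invented toy network)
network = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def find_route(start, goal):
    """Shortest route from substrate to product by breadth-first search."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in network.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_route("A", "D"))  # ['A', 'B', 'D']
```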
MyMolDB: a micromolecular database solution with open source and free components.
Xia, Bing; Tai, Zheng-Fu; Gu, Yu-Cheng; Li, Bang-Jing; Ding, Li-Sheng; Zhou, Yan
2011-10-01
Managing chemical structures is one of the important daily tasks in small laboratories. Few solutions are available on the internet, and most of them are closed-source applications. The open-source applications typically have limited capability and only basic cheminformatics functionality. In this article, we describe an open-source solution for managing chemicals in research groups, built from open-source and free components. It has a user-friendly interface with functions for chemical handling and intensive searching. MyMolDB is a micromolecular database solution that supports exact, substructure, similarity, and combined searching. The solution is mainly implemented in the scripting language Python with a web-based interface for compound management and searching. Almost all searches are in essence done with pure SQL on the database, exploiting the high performance of the database engine: because no external CPU-intensive languages are involved in the key search procedure, impressive searching speed has been achieved on large data sets. MyMolDB is open-source software and can be modified and/or redistributed under the GNU General Public License version 3 published by the Free Software Foundation (Free Software Foundation Inc. The GNU General Public License, Version 3, 2007. Available at: http://www.gnu.org/licenses/gpl.html). The software itself can be found at http://code.google.com/p/mymoldb/.
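A hedged sketch of the pure-SQL exact-search idea: canonicalize structures on insert (here with RDKit) so that a later query is a plain string comparison the database engine can execute. The schema and molecules are invented; substructure search would additionally need a fingerprint prescreen plus an atom-by-atom match:

```python
import sqlite3

from rdkit import Chem

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE compounds (id INTEGER PRIMARY KEY, cansmi TEXT)")
for smi in ("OCC", "CCO", "c1ccccc1"):  # two ethanol spellings, one benzene
    conn.execute("INSERT INTO compounds (cansmi) VALUES (?)",
                 (Chem.MolToSmiles(Chem.MolFromSmiles(smi)),))

query = Chem.MolToSmiles(Chem.MolFromSmiles("CCO"))  # canonical form of query
rows = conn.execute("SELECT id FROM compounds WHERE cansmi = ?", (query,)).fetchall()
print(rows)  # both ethanol rows match, whatever input spelling was used
```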
Battista, Alexis
2017-01-01
The dominant frameworks for describing how simulations support learning emphasize increasing access to structured practice and the provision of feedback which are commonly associated with skills-based simulations. By contrast, studies examining student participants' experiences during scenario-based simulations suggest that learning may also occur through participation. However, studies directly examining student participation during scenario-based simulations are limited. This study examined the types of activities student participants engaged in during scenario-based simulations and then analyzed their patterns of activity to consider how participation may support learning. Drawing from Engeström's first-, second-, and third-generation activity systems analysis, an in-depth descriptive analysis was conducted. The study drew from multiple qualitative methods, namely narrative, video, and activity systems analysis, to examine student participants' activities and interaction patterns across four video-recorded simulations depicting common motivations for using scenario-based simulations (e.g., communication, critical patient management). The activity systems analysis revealed that student participants' activities encompassed three clinically relevant categories, including (a) use of physical clinical tools and artifacts, (b) social interactions, and (c) performance of structured interventions. Role assignment influenced participants' activities and the complexity of their engagement. Importantly, participants made sense of the clinical situation presented in the scenario by reflexively linking these three activities together. Specifically, student participants performed structured interventions, relying upon the use of physical tools, clinical artifacts, and social interactions together with interactions between students, standardized patients, and other simulated participants to achieve their goals. When multiple student participants were present, such as in a team-based scenario, they distributed the workload to achieve their goals. The findings suggest that student participants learned as they engaged in these scenario-based simulations when they worked to make sense of the patient's clinical presentation. The findings may provide insight into how student participants' meaning-making efforts are mediated by the cultural artifacts (e.g., physical clinical tools) they access, the social interactions they engage in, the structured interventions they perform, and the roles they are assigned. The findings also highlight the complex and emergent properties of scenario-based simulations as well as how activities are nested. Implications for learning, instructional design, and assessment are discussed.
Simulation-based bronchoscopy training: systematic review and meta-analysis.
Kennedy, Cassie C; Maldonado, Fabien; Cook, David A
2013-07-01
Simulation-based bronchoscopy training is increasingly used, but effectiveness remains uncertain. We sought to perform a comprehensive synthesis of published work on simulation-based bronchoscopy training. We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, and Scopus for eligible articles through May 11, 2011. We included all original studies involving health professionals that evaluated, in comparison with no intervention or an alternative instructional approach, simulation-based training for flexible or rigid bronchoscopy. Study selection and data abstraction were performed independently and in duplicate. We pooled results using random effects meta-analysis. From an initial pool of 10,903 articles, we identified 17 studies evaluating simulation-based bronchoscopy training. In comparison with no intervention, simulation training was associated with large benefits on skills and behaviors (pooled effect size, 1.21 [95% CI, 0.82-1.60]; n=8 studies) and moderate benefits on time (0.62 [95% CI, 0.12-1.13]; n=7). In comparison with clinical instruction, behaviors with real patients showed nonsignificant effects favoring simulation for time (0.61 [95% CI, -1.47 to 2.69]) and process (0.33 [95% CI, -1.46 to 2.11]) outcomes (n=2 studies each), although variation in training time might account for these differences. Four studies compared alternate simulation-based training approaches. Inductive analysis to inform instructional design suggested that longer or more structured training is more effective, authentic clinical context adds value, and animal models and plastic part-task models may be superior to more costly virtual-reality simulators. Simulation-based bronchoscopy training is effective in comparison with no intervention. Comparative effectiveness studies are few.
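The pooled estimates quoted above come from random-effects meta-analysis. A worked sketch of the standard DerSimonian-Laird computation in Python; the per-study effect sizes and variances below are invented, not the review's data:

```python
import numpy as np

y = np.array([1.0, 1.4, 0.8])      # per-study effect sizes (invented)
v = np.array([0.04, 0.09, 0.06])   # per-study variances (invented)

w = 1.0 / v                        # fixed-effect (inverse-variance) weights
fixed = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - fixed) ** 2)   # heterogeneity statistic Q
tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_star = 1.0 / (v + tau2)          # random-effects weights
pooled = np.sum(w_star * y) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
print(pooled, pooled - 1.96 * se, pooled + 1.96 * se)  # estimate and 95% CI
```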
An Investigation of Computer-based Simulations for School Crises Management.
ERIC Educational Resources Information Center
Degnan, Edward; Bozeman, William
2001-01-01
Describes development of a computer-based simulation program for training school personnel in crisis management. Addresses the data collection and analysis involved in developing a simulated event, the systems requirements for simulation, and a case study of application and use of the completed simulation. (Contains 21 references.) (Authors/PKP)
Tjiam, Irene M; Schout, Barbara M A; Hendrikx, Ad J M; Scherpbier, Albert J J M; Witjes, J Alfred; van Merriënboer, Jeroen J G
2012-01-01
Most studies of simulator-based surgical skills training have focused on the acquisition of psychomotor skills, but surgical procedures are complex tasks requiring both psychomotor and cognitive skills. As skills training is modelled on expert performance, which consists partly of unconscious automatic processes that experts are not always able to explicate, simulator developers should collaborate with educational experts and physicians in developing efficient and effective training programmes. This article presents an approach to designing simulator-based skill training comprising cognitive task analysis integrated with instructional design according to the four-component instructional design (4C/ID) model. This theory-driven approach is illustrated by a description of how it was used in the development of simulator-based training for the nephrostomy procedure.
Mathematics Career Simulations: An Invitation
ERIC Educational Resources Information Center
Sinn, Robb; Phipps, Marnie
2013-01-01
A simulated academic career was combined with inquiry-based learning in an upper-division undergraduate mathematics course. Concepts such as tenure, professional conferences and journals were simulated. Simulation procedures were combined with student-led, inquiry-based classroom formats. A qualitative analysis (ethnography) describes the culture…
Greenberg, David A.
2011-01-01
Computer simulation methods are under-used tools in genetic analysis because simulation approaches have been portrayed as inferior to analytic methods. Even when simulation is used, its advantages are not fully exploited. Here, I present SHIMSHON, our package of genetic simulation programs that have been developed, tested, used for research, and used to generate data for Genetic Analysis Workshops (GAW). These simulation programs, now web-accessible, can be used by anyone to answer questions about designing and analyzing genetic disease studies for locus identification. This work has three foci: (1) the historical context of SHIMSHON's development, suggesting why simulation has not been more widely used so far. (2) Advantages of simulation: computer simulation helps us to understand how genetic analysis methods work. It has advantages for understanding disease inheritance and methods for gene searches. Furthermore, simulation methods can be used to answer fundamental questions that either cannot be answered by analytical approaches or cannot even be defined until the problems are identified and studied, using simulation. (3) I argue that, because simulation was not accepted, there was a failure to grasp the meaning of some simulation-based studies of linkage. This may have contributed to perceived weaknesses in linkage analysis; weaknesses that did not, in fact, exist. PMID: 22189467
Predicting Adverse Drug Effects from Literature- and Database-Mined Assertions.
La, Mary K; Sedykh, Alexander; Fourches, Denis; Muratov, Eugene; Tropsha, Alexander
2018-06-06
Given that adverse drug effects (ADEs) have led to post-market patient harm and subsequent drug withdrawal, failure of candidate agents in the drug development process, and other negative outcomes, it is essential to attempt to forecast ADEs and other relevant drug-target-effect relationships as early as possible. Current pharmacologic data sources, providing multiple complementary perspectives on the drug-target-effect paradigm, can be integrated to facilitate the inference of relationships between these entities. This study aims to identify both existing and unknown relationships between chemicals (C), protein targets (T), and ADEs (E) based on evidence in the literature. Cheminformatics and data mining approaches were employed to integrate and analyze publicly available clinical pharmacology data and literature assertions interrelating drugs, targets, and ADEs. Based on these assertions, a C-T-E relationship knowledge base was developed. Known pairwise relationships between chemicals, targets, and ADEs were collected from several pharmacological and biomedical data sources. These relationships were curated and integrated according to Swanson's paradigm to form C-T-E triangles. Missing C-E edges were then inferred as C-E relationships. Unreported associations between drugs, targets, and ADEs were inferred, and inferences were prioritized as testable hypotheses. Several C-E inferences, including testosterone → myocardial infarction, were identified using inferences based on the literature sources published prior to confirmatory case reports. Timestamping approaches confirmed the predictive ability of this inference strategy on a larger scale. The presented workflow, based on free-access databases and an association-based inference scheme, provided novel C-E relationships that have been validated post hoc in case reports. With refinement of prioritization schemes for the generated C-E inferences, this workflow may provide an effective computational method for the early detection of potential drug candidate ADEs that can be followed by targeted experimental investigations.
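The C-T-E inference step reduces to a join on shared targets. A minimal sketch with invented edge lists (the testosterone example mirrors the one named above):

```python
# Known chemical->target and target->effect edges (illustrative, not curated data)
ct = {("testosterone", "androgen receptor"), ("drugX", "targetY")}
te = {("androgen receptor", "myocardial infarction"), ("targetY", "effectZ")}

# Swanson-style join: infer a C-E edge wherever C and E share a target
inferred_ce = {(c, e) for c, t1 in ct for t2, e in te if t1 == t2}
print(inferred_ce)
```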
Advances in structure elucidation of small molecules using mass spectrometry
Fiehn, Oliver
2010-01-01
The structural elucidation of small molecules using mass spectrometry plays an important role in modern life sciences and bioanalytical approaches. This review covers different soft and hard ionization techniques and figures of merit for modern mass spectrometers, such as mass resolving power, mass accuracy, isotopic abundance accuracy, accurate mass multiple-stage MS(n) capability, as well as hybrid mass spectrometric and orthogonal chromatographic approaches. The latter part discusses mass spectral data handling strategies, which includes background and noise subtraction, adduct formation and detection, charge state determination, accurate mass measurements, elemental composition determinations, and complex data-dependent setups with ion maps and ion trees. The importance of mass spectral library search algorithms for tandem mass spectra and multiple-stage MS(n) mass spectra as well as mass spectral tree libraries that combine multiple-stage mass spectra are outlined. The following chapter discusses mass spectral fragmentation pathways, biotransformation reactions and drug metabolism studies, the mass spectral simulation and generation of in silico mass spectra, expert systems for mass spectral interpretation, and the use of computational chemistry to explain gas-phase phenomena. A single chapter discusses data handling for hyphenated approaches including mass spectral deconvolution for clean mass spectra, cheminformatics approaches and structure retention relationships, and retention index predictions for gas and liquid chromatography. The last section reviews the current state of electronic data sharing of mass spectra and discusses the importance of software development for the advancement of structure elucidation of small molecules. PMID: 21289855
ERIC Educational Resources Information Center
Pieper, William J.; And Others
This study was initiated to design, develop, implement, and evaluate a videodisc-based simulator system, the Interactive Graphics Simulator (IGS) for 6883 Converter Flight Control Test Station training at Lowry Air Force Base, Colorado. The simulator provided a means for performing task analysis online, developing simulations from the task…
In situ visualization and data analysis for turbidity currents simulation
NASA Astrophysics Data System (ADS)
Camata, Jose J.; Silva, Vítor; Valduriez, Patrick; Mattoso, Marta; Coutinho, Alvaro L. G. A.
2018-01-01
Turbidity currents are underflows responsible for sediment deposits that generate geological formations of interest for the oil and gas industry. LibMesh-sedimentation is an application built upon the libMesh library to simulate turbidity currents. In this work, we present the integration of libMesh-sedimentation with in situ visualization and in transit data analysis tools. DfAnalyzer is a provenance-based solution that extracts and relates strategic simulation data in transit from multiple data sources for online queries. We integrate libMesh-sedimentation and ParaView Catalyst to perform in situ data analysis and visualization. We present a parallel performance analysis for two turbidity currents simulations showing that the overhead for both in situ visualization and in transit data analysis is negligible. We show that our tools enable monitoring the appearance of sediments at runtime and steering the simulation based on solver convergence and on visual information about the sediment deposits, thus enhancing the analytical power of turbidity currents simulations.
Cost considerations in using simulations for medical training.
Fletcher, J D; Wind, Alexander P
2013-10-01
This article reviews simulation used for medical training, techniques for assessing simulation-based training, and cost analyses that can be included in such assessments. Simulation in medical training appears to take four general forms: human actors who are taught to simulate illnesses and ailments in standardized ways; virtual patients who are generally presented via computer-controlled, multimedia displays; full-body manikins that simulate patients using electronic sensors, responders, and controls; and part-task anatomical simulations of various body parts and systems. Techniques for assessing costs include benefit-cost analysis, return on investment, and cost-effectiveness analysis. Techniques for assessing the effectiveness of simulation-based medical training include the use of transfer effectiveness ratios and incremental transfer effectiveness ratios to measure transfer of knowledge and skill provided by simulation to the performance of medical procedures. Assessment of costs and simulation effectiveness can be combined with measures of transfer using techniques such as isoperformance analysis to identify ways of minimizing costs without reducing performance effectiveness or maximizing performance without increasing costs. In sum, economic analysis must be considered in training assessments if training budgets are to compete successfully with other requirements for funding.
Demonstration of innovative techniques for work zone safety data analysis
DOT National Transportation Integrated Search
2009-07-15
Based upon the results of the simulator data analysis, additional future research can be identified to validate the driving simulator in terms of similarities with Ohio work zones. For instance, the speeds observed in the simulator were greater f...
NASA Astrophysics Data System (ADS)
García-Moreno, Angel-Iván; González-Barbosa, José-Joel; Ramírez-Pedraza, Alfonso; Hurtado-Ramos, Juan B.; Ornelas-Rodriguez, Francisco-Javier
2016-04-01
Computer-based reconstruction models can be used to approximate urban environments. These models are usually based on several mathematical approximations and on the use of different sensors, which implies dependency on many variables. The sensitivity analysis presented in this paper is used to weigh the relative importance of each uncertainty contributor in the calibration of a panoramic camera-LiDAR system. Both sensors are used for three-dimensional urban reconstruction. Simulated and experimental tests were conducted. For the simulated tests, we analyze and compare the calibration parameters using the Monte Carlo and Latin hypercube sampling techniques. The sensitivity of each variable involved in the calibration was computed by the Sobol method, which is based on variance decomposition, and by the Fourier amplitude sensitivity test (FAST) method, which is based on Fourier analysis. Sensitivity analysis is an essential tool in simulation modeling and for performing error propagation assessments.
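The two sampling schemes compared above take only a few lines with SciPy; the parameter names and ranges below are invented stand-ins for the camera-LiDAR calibration parameters:

```python
import numpy as np
from scipy.stats import qmc

n = 100
mc = np.random.default_rng(0).uniform(size=(n, 2))   # plain Monte Carlo
lhs = qmc.LatinHypercube(d=2, seed=0).random(n)      # Latin hypercube sample

# Scale both unit-cube samples to invented parameter ranges,
# e.g. focal length [900, 1100] px and mounting angle [-5, 5] deg.
lo, hi = [900.0, -5.0], [1100.0, 5.0]
print(qmc.scale(lhs, lo, hi)[:3])
print(qmc.scale(mc, lo, hi)[:3])
```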
Ten Eyck, Raymond P; Tews, Matthew; Ballester, John M; Hamilton, Glenn C
2010-06-01
To determine the impact of simulation-based instruction on student performance in the role of emergency department resuscitation team leader. A randomized, single-blinded, controlled study using an intention-to-treat analysis. Eighty-three fourth-year medical students enrolled in an emergency medicine clerkship were randomly allocated to two groups differing only by instructional format. Each student individually completed an initial simulation case, followed by a standardized curriculum of eight cases in either group-simulation or case-based group-discussion format, before a second individual simulation case. A remote coinvestigator measured eight objective performance end points using digital recordings of all individual simulation cases. McNemar's chi-square, Pearson correlation, repeated-measures multivariate analysis of variance, and follow-up analysis of variance were used for statistical evaluation. Sixty-eight students (82%) completed both initial and follow-up individual simulations. Eight students were lost from the simulation group and seven from the discussion group. The mean postintervention case performance was significantly better for the students allocated to simulation instruction than for the group-discussion students on four outcomes, including a decrease in mean time to (1) order an intravenous line; (2) initiate cardiac monitoring; (3) order initial laboratory tests; and (4) initiate blood pressure monitoring. Paired comparisons of each student's initial and follow-up simulations demonstrated significant improvement in the same four areas, in mean time to order an abdominal radiograph, and in obtaining an allergy history. A single simulation-based teaching session significantly improved student performance as a team leader. Additional simulation sessions provided further improvement compared with instruction provided in case-based group-discussion format.
Construction of dynamic stochastic simulation models using knowledge-based techniques
NASA Technical Reports Server (NTRS)
Williams, M. Douglas; Shiva, Sajjan G.
1990-01-01
Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).
Di Ianni, Mauricio E.; del Valle, Mara E.; Enrique, Andrea V.; Rosella, Mara A.; Bruno, Fiorella; Bruno-Blanch, Luis E.
2015-01-01
Steviol glycosides are natural constituents of Stevia rebaudiana (Bert.) Bert. (Asteraceae) that have recently gained worldwide approval as nonnutritive sweeteners by the Joint Food and Agriculture Organization/World Health Organization Expert Committee on Food Additives. Cheminformatic tools suggested that the aglycone steviol and several of its phase I metabolites are potential anticonvulsant agents effective in the maximal electroshock seizure (MES) animal model. Thus, an aqueous infusion from S. rebaudiana was tested in the MES test (mice, intraperitoneal administration), confirming a dose-dependent anticonvulsant effect. Afterward, isolated stevioside and rebaudioside A were tested in the MES test, with positive results. Though drug repositioning most often focuses on known therapeutics, this article illustrates the possibilities of this strategy for finding new functionalities and therapeutic indications for food constituents and natural products. PMID: 26258457
Taking Open Innovation to the Molecular Level - Strengths and Limitations.
Zdrazil, Barbara; Blomberg, Niklas; Ecker, Gerhard F
2012-08-01
The ever-growing availability of large-scale open data and its maturation is having a significant impact on industrial drug discovery, as well as on academic and non-profit research. As industry is changing to an 'open innovation' business concept, precompetitive initiatives and strong public-private partnerships, including academic research cooperation partners, are gaining more and more importance. The bioinformatics and cheminformatics communities are now seeking web tools that allow the integration of the large volume of life science datasets available in the public domain. Such a data exploitation tool would ideally be able to answer complex biological questions by formulating only one search query. In this short review/perspective, we outline the use of semantic web approaches for data and knowledge integration. Further, we discuss the strengths and current limitations of publicly available data retrieval tools and integrated platforms.
John, Majnu; Lencz, Todd; Malhotra, Anil K; Correll, Christoph U; Zhang, Jian-Ping
2018-06-01
Meta-analysis of genetic association studies is being increasingly used to assess phenotypic differences between genotype groups. When the underlying genetic model is assumed to be dominant or recessive, assessing the phenotype differences based on summary statistics, reported for individual studies in a meta-analysis, is a valid strategy. However, when the genetic model is additive, a similar strategy based on summary statistics will lead to biased results. This fact about the additive model is one of the things that we establish in this paper, using simulations. The main goal of this paper is to present an alternate strategy for the additive model based on simulating data for the individual studies. We show that the alternate strategy is far superior to the strategy based on summary statistics.
Improving the performance of a filling line based on simulation
NASA Astrophysics Data System (ADS)
Jasiulewicz-Kaczmarek, M.; Bartkowiak, T.
2016-08-01
The paper describes a method for improving the performance of a filling line based on simulation. The study concerns a production line located in a manufacturing centre of an FMCG company. A discrete-event simulation model was built using data provided by a maintenance data acquisition system. Two types of failures were identified in the system and were approximated using continuous statistical distributions. The model was validated taking into consideration line performance measures. A brief Pareto analysis of line failures was conducted to identify potential areas of improvement. Two improvement scenarios were proposed and tested via simulation, and the simulation outcomes were the basis of a financial analysis. NPV and ROI values were calculated taking into account depreciation, profits, losses, the current CIT rate, and inflation. A validated simulation model can be a useful tool in the maintenance decision-making process.
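The financial step described above is standard discounted-cash-flow arithmetic, NPV = Σ_t CF_t / (1 + r)^t. A worked sketch with invented cash flows and discount rate:

```python
cash_flows = [-100_000, 30_000, 40_000, 50_000, 30_000]  # year 0..4 (invented)
r = 0.08                                                  # discount rate (invented)

npv = sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))

investment = -cash_flows[0]
roi = (sum(cash_flows[1:]) - investment) / investment     # net gain / investment

print(round(npv, 2), round(roi, 3))                       # NPV > 0 and ROI = 0.5 here
```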
Learning with STEM Simulations in the Classroom: Findings and Trends from a Meta-Analysis
ERIC Educational Resources Information Center
D'Angelo, Cynthia M.; Rutstein, Daisy; Harris, Christopher J.
2016-01-01
This article presents a summary of the findings of a systematic review and meta-analysis of the literature on computer-based interactive simulations for K-12 science, technology, engineering, and mathematics (STEM) learning topics. For achievement outcomes, simulations had a moderate to strong effect on student learning. Overall, simulations have…
An Online Prediction Platform to Support the Environmental ...
Historical QSAR models are currently utilized across a broad range of applications within the U.S. Environmental Protection Agency (EPA). These models predict basic physicochemical properties (e.g., logP, aqueous solubility, vapor pressure), which are then incorporated into exposure, fate, and transport models. Whereas the classical manner of publishing results in peer-reviewed journals remains appropriate, there are substantial benefits to be gained by providing enhanced, open access to the training data sets and resulting models. Benefits include improved transparency, more flexibility to expand training sets and improve model algorithms, and greater ability to independently characterize model performance both globally and in local areas of chemistry. We have developed a web-based prediction platform that uses open-source descriptors and modeling algorithms, employs modern cheminformatics technologies, and is tailored for ease of use by the toxicology and environmental regulatory community. This tool also provides web services to serve both EPA's projects and the modeling community at large. The platform hosts models developed within EPA's National Center for Computational Toxicology, as well as those developed by other EPA scientists and the outside scientific community. Recognizing that there are other online QSAR model platforms currently available that have additional capabilities, we connect to such services, where possible, to produce an integrated
NASA Astrophysics Data System (ADS)
Jain, Sankalp; Kotsampasakou, Eleni; Ecker, Gerhard F.
2018-05-01
Cheminformatics datasets used in classification problems, especially those related to biological or physicochemical properties, are often imbalanced. This presents a major challenge in development of in silico prediction models, as the traditional machine learning algorithms are known to work best on balanced datasets. The class imbalance introduces a bias in the performance of these algorithms due to their preference towards the majority class. Here, we present a comparison of the performance of seven different meta-classifiers for their ability to handle imbalanced datasets, whereby Random Forest is used as base-classifier. Four different datasets that are directly (cholestasis) or indirectly (via inhibition of organic anion transporting polypeptide 1B1 and 1B3) related to liver toxicity were chosen for this purpose. The imbalance ratio in these datasets ranges between 4:1 and 20:1 for negative and positive classes, respectively. Three different sets of molecular descriptors for model development were used, and their performance was assessed in 10-fold cross-validation and on an independent validation set. Stratified bagging, MetaCost and CostSensitiveClassifier were found to be the best performing among all the methods. While MetaCost and CostSensitiveClassifier provided better sensitivity values, Stratified Bagging resulted in high balanced accuracies.
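A hedged scikit-learn counterpart to the cost-sensitive setups benchmarked above (the paper's MetaCost and CostSensitiveClassifier are Weka meta-classifiers); class weighting on a synthetic ~10:1 imbalanced dataset stands in for them:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic dataset with roughly a 10:1 negative-to-positive imbalance
X, y = make_classification(n_samples=1100, weights=[10 / 11], random_state=0)

# class_weight="balanced" reweights errors inversely to class frequency,
# one simple form of cost-sensitive learning on a Random Forest base
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy").mean())
```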
IMPPAT: A curated database of Indian Medicinal Plants, Phytochemistry And Therapeutics.
Mohanraj, Karthikeyan; Karthikeyan, Bagavathy Shanmugam; Vivek-Ananth, R P; Chand, R P Bharath; Aparna, S R; Mangalapandi, Pattulingam; Samal, Areejit
2018-03-12
Phytochemicals of medicinal plants encompass a diverse chemical space for drug discovery. India is rich with a flora of indigenous medicinal plants that have been used for centuries in traditional Indian medicine to treat human maladies. A comprehensive online database on the phytochemistry of Indian medicinal plants will enable computational approaches towards natural product based drug discovery. In this direction, we present IMPPAT, a manually curated database of 1742 Indian Medicinal Plants, 9596 Phytochemicals, And 1124 Therapeutic uses spanning 27074 plant-phytochemical associations and 11514 plant-therapeutic associations. Notably, the curation effort led to a non-redundant in silico library of 9596 phytochemicals with standard chemical identifiers and structure information. Using cheminformatic approaches, we have computed the physicochemical, ADMET (absorption, distribution, metabolism, excretion, toxicity) and drug-likeness properties of the IMPPAT phytochemicals. We show that the stereochemical complexity and shape complexity of IMPPAT phytochemicals differ from libraries of commercial compounds or diversity-oriented synthesis compounds while being similar to other libraries of natural products. Within IMPPAT, we have filtered a subset of 960 potentially druggable phytochemicals, of which the majority have no significant similarity to existing FDA approved drugs, thus rendering them good candidates for prospective drugs. The IMPPAT database is openly accessible at: https://cb.imsc.res.in/imppat .
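The druggability filtering step mentioned above can be approximated with open-source tooling; a hedged RDKit sketch of a Lipinski rule-of-five filter (IMPPAT's actual criteria are broader, including ADMET properties, and the test SMILES is simply aspirin):

```python
# A minimal drug-likeness filter (Lipinski rule of five), a simplified stand-in
# for the richer filtering applied in IMPPAT.
from rdkit import Chem
from rdkit.Chem import Descriptors

def passes_ro5(smiles: str) -> bool:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False  # unparseable structure
    violations = sum([
        Descriptors.MolWt(mol) > 500,
        Descriptors.MolLogP(mol) > 5,
        Descriptors.NumHDonors(mol) > 5,
        Descriptors.NumHAcceptors(mol) > 10,
    ])
    return violations <= 1  # allow at most one violation

print(passes_ro5("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin -> True
```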
Evaluation of food-relevant chemicals in the ToxCast high-throughput screening program
Thousands of chemicals are directly added to or come in contact with food, many of which have undergone little to no toxicological evaluation. The landscape of the food-relevant chemical universe was evaluated using cheminformatics, and subsequently the bioactivity of food-relevant chemicals across the publicly available ToxCast high-throughput screening program was assessed. In total, 8659 food-relevant chemicals were compiled, including direct food additives, food contact substances, and pesticides. Of these food-relevant chemicals, 4719 had curated structure definition files amenable to defining chemical fingerprints, which were used to cluster chemicals using a self-organizing map approach. Pesticides and direct food additives clustered apart from one another with food contact substances generally in between, supporting that these categories not only reflect different uses but also distinct chemistries. Subsequently, 1530 food-relevant chemicals were identified in ToxCast comprising 616 direct food additives, 371 food contact substances, and 543 pesticides. Bioactivity across ToxCast was filtered for cytotoxicity to identify selective chemical effects. Initiating analyses from strictly chemical-based methodology or bioactivity/cytotoxicity-driven evaluation presents unbiased approaches for prioritizing chemicals. Although bioactivity in vitro is not necessarily predictive of adverse effects in vivo, these data provide insight into chemical properties and cellular…
Evaluation of food-relevant chemicals in the ToxCast high ...
There are thousands of chemicals that are directly added to or come in contact with food, many of which have undergone little to no toxicological evaluation. The ToxCast high-throughput screening (HTS) program has evaluated over 1,800 chemicals in concentration-response across ~820 assay endpoints and continues to grow; with all data completely available to the public, this resource serves as a unique opportunity to evaluate the bioactivity of chemicals in vitro. This study investigated the chemical landscape of the food-relevant chemical universe using cheminformatics analyses, and subsequently evaluated the bioactivity of food-relevant chemicals included in the ToxCast HTS program. Initially, a list of 9,437 food-relevant chemicals was compiled by comprehensively mining publicly available sources for direct food additives, food contact substances, indirect food additives, and pesticides. Of these food-relevant chemicals, 4,638 were associated with curated structure definition files amenable to defining physical/chemical features used to generate chemical fingerprints. Clustering was conducted based on the chemical fingerprints using a self-organizing map approach. This revealed that pesticides, food contact substances, and direct food additives generally clustered apart from one another, supporting that these categories reflect not only different uses but also distinct chemistries. Subsequently, 967 of the 9,437 food-relevant chemicals were identified in the ToxCast…
Numerical modeling and performance analysis of zinc oxide (ZnO) thin-film based gas sensor
NASA Astrophysics Data System (ADS)
Punetha, Deepak; Ranjan, Rashmi; Pandey, Saurabh Kumar
2018-05-01
This manuscript describes the modeling and analysis of a zinc oxide thin-film based gas sensor. The conductance and sensitivity of the sensing layer have been described as functions of temperature and gas concentration. The analysis has been done for both reducing and oxidizing agents. Simulation results revealed the change in resistance and sensitivity of the sensor with respect to temperature and different gas concentrations. To check the feasibility of the model, all simulated results have been compared against previously reported experimental work. Wolkenstein theory has been used to model the proposed sensor, and the simulation results have been obtained using device simulation software.
Cheminformatics-aided pharmacovigilance: application to Stevens-Johnson Syndrome
Low, Yen S; Caster, Ola; Bergvall, Tomas; Fourches, Denis; Zang, Xiaoling; Norén, G Niklas; Rusyn, Ivan; Edwards, Ralph
2016-01-01
Objective: Quantitative Structure-Activity Relationship (QSAR) models can predict adverse drug reactions (ADRs), and thus provide early warnings of potential hazards. Timely identification of potential safety concerns could protect patients and aid early diagnosis of ADRs among the exposed. Our objective was to determine whether global spontaneous reporting patterns might allow chemical substructures associated with Stevens-Johnson Syndrome (SJS) to be identified and utilized for ADR prediction by QSAR models. Materials and Methods: Using a reference set of 364 drugs having positive or negative reporting correlations with SJS in the VigiBase global repository of individual case safety reports (Uppsala Monitoring Center, Uppsala, Sweden), chemical descriptors were computed from drug molecular structures. Random Forest and Support Vector Machines methods were used to develop QSAR models, which were validated by external 5-fold cross validation. Models were employed for virtual screening of DrugBank to predict SJS actives and inactives, which were corroborated using knowledge bases like VigiBase, ChemoText, and MicroMedex (Truven Health Analytics Inc, Ann Arbor, Michigan). Results: We developed QSAR models that could accurately predict if drugs were associated with SJS (area under the curve of 75%–81%). Our 10 most active and inactive predictions were substantiated by SJS reports (or lack thereof) in the literature. Discussion: Interpretation of QSAR models in terms of significant chemical descriptors suggested novel SJS structural alerts. Conclusions: We have demonstrated that QSAR models can accurately identify SJS active and inactive drugs. Requiring chemical structures only, QSAR models provide effective computational means to flag potentially harmful drugs for subsequent targeted surveillance and pharmacoepidemiologic investigations. PMID:26499102
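For readers unfamiliar with the modeling setup, a minimal scikit-learn sketch of a Random Forest QSAR classifier with 5-fold cross-validated AUC, using synthetic stand-ins for the 364-drug reference set and its descriptors (not the authors' pipeline):

```python
# Illustrative Random-Forest QSAR workflow with cross-validated AUC; the
# feature matrix here is random, standing in for computed chemical descriptors.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=364, n_features=200,
                           n_informative=30, random_state=0)
auc = cross_val_score(RandomForestClassifier(n_estimators=500, random_state=0),
                      X, y, cv=5, scoring="roc_auc")
print(f"5-fold AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```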
Mohd Fauzi, Fazlin; John, Cini Mathew; Karunanidhi, Arunkumar; Mussa, Hamse Y; Ramasamy, Rajesh; Adam, Aishah; Bender, Andreas
2017-02-02
Cassia auriculata (CA) is used as an antidiabetic therapy in Ayurvedic and Siddha practice. This study aimed to understand the mode-of-action of CA via combined cheminformatics and in vivo biological analysis. In particular, the effect of 10 polyphenolic constituents of CA in modulating insulin and immunoprotective pathways was studied. In silico target prediction was first employed to predict the probability of the polyphenols interacting with key protein targets related to insulin signalling, based on a model trained on known bioactivity data and chemical similarity considerations. Next, CA was investigated in in vivo studies where induced type 2 diabetic rats were treated with CA for 28 days, and the expression levels of genes regulating the insulin signalling pathway, glucose transporters of hepatic (GLUT2) and muscular (GLUT4) tissue, insulin receptor substrate (IRS), phosphorylated insulin receptor (AKT), gluconeogenesis (G6PC and PCK-1), along with inflammatory mediator genes (NF-κB, IL-6, IFN-γ and TNF-α) and peroxisome proliferator-activated receptor gamma (PPAR-γ), were determined by qPCR. In silico analysis shows that several of the top 20 enriched targets predicted for the constituents of CA are involved in insulin signalling pathways, e.g. PTPN1, PCK-α, AKT2, PI3K-γ. Some of the predictions were supported by scientific literature, such as the prediction of PI3K for epigallocatechin gallate. Based on the in silico and in vivo findings, we hypothesized that CA may enhance glucose uptake and glucose transporter expression via the IRS signalling pathway. This is based on AKT2 and PI3K-γ being listed in the top 20 enriched targets. In vivo analysis shows a significant increase in the expression of IRS, AKT, GLUT2 and GLUT4. CA may also affect the PPAR-γ signalling pathway; this is based on the CA-treated groups showing significant activation of PPAR-γ in the liver compared to control. PPAR-γ was predicted by the in silico target prediction with a high normalisation rate, although it was not in the top 20 most enriched targets. CA may also be involved in gluconeogenesis and glycogenolysis in the liver, based on the downregulation of G6PC and PCK-1 genes seen in CA-treated groups. In addition, CA-treated groups also showed decreased cholesterol, triglyceride, glucose, CRP and Hb1Ac levels, and increased insulin and C-peptide levels. These findings demonstrate the insulin secretagogue and sensitizer effect of CA. Based on both the in silico and in vivo analysis, we propose here that CA mediates glucose/lipid metabolism via the PI3K signalling pathway and influences AKT, thereby causing insulin secretion and insulin sensitivity in peripheral tissues. CA enhances glucose uptake and expression of glucose transporters, in particular via the upregulation of GLUT2 and GLUT4. Thus, based on its ability to modulate immunometabolic pathways, CA appears an attractive long-term therapy for T2DM, even at relatively low doses. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Jeremy S. Fried; Larry D. Potts; Sara M. Loreno; Glenn A. Christensen; R. Jamie Barbour
2017-01-01
The Forest Inventory and Analysis (FIA)-based BioSum (Bioregional Inventory Originated Simulation Under Management) is a free policy analysis framework and workflow management software solution. It addresses complex management questions concerning forest health and vulnerability for large, multimillion acre, multiowner landscapes using FIA plot data as the initial...
Hybrid statistics-simulations based method for atom-counting from ADF STEM images.
De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra
2017-06-01
A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.
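The statistics-based ingredient of such atom-counting typically involves fitting a mixture of Gaussian components to column scattering cross-sections and selecting the number of components with an information criterion; a toy sketch on synthetic data (the actual method and model-selection criteria differ in detail):

```python
# Toy version of the statistics-based step: fit Gaussian mixtures with an
# increasing number of components to synthetic scattering cross-sections and
# pick a model by BIC. Component labels group columns of equal atom count
# (label order is arbitrary, unlike the calibrated real method).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# synthetic cross-sections for columns containing 1, 2 or 3 atoms
cs = np.concatenate([rng.normal(mu, 0.03, 200)
                     for mu in (1.0, 1.9, 2.7)]).reshape(-1, 1)

models = [GaussianMixture(k, random_state=0).fit(cs) for k in range(1, 7)]
best = min(models, key=lambda m: m.bic(cs))
counts = best.predict(cs)
print(best.n_components, np.bincount(counts))
```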
Numerical Simulation of Monitoring Corrosion in Reinforced Concrete Based on Ultrasonic Guided Waves
Zheng, Zhupeng; Lei, Ying; Xue, Xin
2014-01-01
Numerical simulation based on the finite element method is conducted to predict the location of pitting corrosion in reinforced concrete. Simulation results show that corrosion monitoring based on ultrasonic guided waves in reinforced concrete is feasible, and that wavelet analysis can be used for the extremely weak guided-wave signals that result from energy leaking into the concrete. The time-frequency localization property of the wavelet transform is adopted in the corrosion monitoring of reinforced concrete. Guided waves can be successfully used to identify corrosion defects in reinforced concrete with analysis using a suitable wavelet basis function and scale. PMID:25013865
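A hedged sketch of the wavelet time-frequency analysis step, using PyWavelets' continuous wavelet transform on a synthetic weak, noisy tone burst (all signal parameters are invented, and this is not the authors' processing chain):

```python
# CWT of a synthetic noisy tone burst, a stand-in for a weak guided-wave echo.
import numpy as np
import pywt

fs = 1.0e6                                    # sampling rate, Hz (assumed)
t = np.arange(0, 2e-3, 1 / fs)
burst = np.sin(2 * np.pi * 100e3 * t) * np.exp(-((t - 1e-3) / 5e-5) ** 2)
signal = 0.05 * burst + np.random.default_rng(0).normal(0, 0.01, t.size)

scales = np.arange(1, 64)
coefs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
i, j = np.unravel_index(np.abs(coefs).argmax(), coefs.shape)
print(f"energy peak near t = {t[j]*1e3:.2f} ms, f = {freqs[i]/1e3:.0f} kHz")
```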
Adjoint Sensitivity Analysis for Scale-Resolving Turbulent Flow Solvers
NASA Astrophysics Data System (ADS)
Blonigan, Patrick; Garai, Anirban; Diosady, Laslo; Murman, Scott
2017-11-01
Adjoint-based sensitivity analysis methods are powerful design tools for engineers who use computational fluid dynamics. In recent years, these engineers have started to use scale-resolving simulations like large-eddy simulations (LES) and direct numerical simulations (DNS), which resolve more scales in complex flows with unsteady separation and jets than the widely-used Reynolds-averaged Navier-Stokes (RANS) methods. However, the conventional adjoint method computes large, unusable sensitivities for scale-resolving simulations, which unlike RANS simulations exhibit the chaotic dynamics inherent in turbulent flows. Sensitivity analysis based on least-squares shadowing (LSS) avoids the issues encountered by conventional adjoint methods, but has a high computational cost even for relatively small simulations. The following talk discusses a more computationally efficient formulation of LSS, "non-intrusive" LSS, and its application to turbulent flows simulated with a discontinuous-Galerkin spectral-element-method LES/DNS solver. Results are presented for the minimal flow unit, a turbulent channel flow with a limited streamwise and spanwise domain.
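For context, the conventional (steady, discrete) adjoint relation that makes the gradient cost independent of the number of design parameters is the standard textbook identity below; the abstract's point is that applying it directly to chaotic LES/DNS produces unusable gradients:

```latex
% For a discrete residual R(u,s) = 0 and objective J(u,s):
\frac{dJ}{ds} = \frac{\partial J}{\partial s} - \lambda^{T}\frac{\partial R}{\partial s},
\qquad \text{where} \qquad
\left(\frac{\partial R}{\partial u}\right)^{T}\lambda = \left(\frac{\partial J}{\partial u}\right)^{T}.
```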
ERIC Educational Resources Information Center
Hao, Jiangang; Smith, Lawrence; Mislevy, Robert; von Davier, Alina; Bauer, Malcolm
2016-01-01
Extracting information efficiently from game/simulation-based assessment (G/SBA) logs requires two things: a well-structured log file and a set of analysis methods. In this report, we propose a generic data model specified as an extensible markup language (XML) schema for the log files of G/SBAs. We also propose a set of analysis methods for…
10 CFR 431.17 - Determination of efficiency.
Code of Federal Regulations, 2014 CFR
2014-01-01
... characteristics of that basic model, and (ii) Based on engineering or statistical analysis, computer simulation or... simulation or modeling, and other analytic evaluation of performance data on which the AEDM is based... applied. (iii) If requested by the Department, the manufacturer shall conduct simulations to predict the...
10 CFR 431.17 - Determination of efficiency.
Code of Federal Regulations, 2012 CFR
2012-01-01
... characteristics of that basic model, and (ii) Based on engineering or statistical analysis, computer simulation or... simulation or modeling, and other analytic evaluation of performance data on which the AEDM is based... applied. (iii) If requested by the Department, the manufacturer shall conduct simulations to predict the...
NASA Technical Reports Server (NTRS)
Moin, Parviz; Spalart, Philippe R.
1987-01-01
The use of simulation data bases for the examination of turbulent flows is an effective research tool. Studies of the structure of turbulence have been hampered by the limited number of probes and the impossibility of measuring all desired quantities. Also, flow visualization is confined to the observation of passive markers with limited field of view and contamination caused by time-history effects. Computer flow fields are a new resource for turbulence research, providing all the instantaneous flow variables in three-dimensional space. Simulation data bases also provide much-needed information for phenomenological turbulence modeling. Three dimensional velocity and pressure fields from direct simulations can be used to compute all the terms in the transport equations for the Reynolds stresses and the dissipation rate. However, only a few, geometrically simple flows have been computed by direct numerical simulation, and the inventory of simulation does not fully address the current modeling needs in complex turbulent flows. The availability of three-dimensional flow fields also poses challenges in developing new techniques for their analysis, techniques based on experimental methods, some of which are used here for the analysis of direct-simulation data bases in studies of the mechanics of turbulent flows.
Web-Based Predictive Analytics to Improve Patient Flow in the Emergency Department
NASA Technical Reports Server (NTRS)
Buckler, David L.
2012-01-01
The Emergency Department (ED) simulation project was established to demonstrate how requirements-driven analysis and process simulation can help improve the quality of patient care for the Veterans Health Administration's (VHA) Veterans Affairs Medical Centers (VAMC). This project developed a web-based simulation prototype of patient flow in EDs, validated the performance of the simulation against operational data, and documented IT requirements for the ED simulation.
Ensor, Joie; Burke, Danielle L; Snell, Kym I E; Hemming, Karla; Riley, Richard D
2018-05-18
Researchers and funders should consider the statistical power of planned Individual Participant Data (IPD) meta-analysis projects, as they are often time-consuming and costly. We propose simulation-based power calculations utilising a two-stage framework, and illustrate the approach for a planned IPD meta-analysis of randomised trials with continuous outcomes where the aim is to identify treatment-covariate interactions. The simulation approach has four steps: (i) specify an underlying (data generating) statistical model for trials in the IPD meta-analysis; (ii) use readily available information (e.g. from publications) and prior knowledge (e.g. number of studies promising IPD) to specify model parameter values (e.g. control group mean, intervention effect, treatment-covariate interaction); (iii) simulate an IPD meta-analysis dataset of a particular size from the model, and apply a two-stage IPD meta-analysis to obtain the summary estimate of interest (e.g. interaction effect) and its associated p-value; (iv) repeat the previous step (e.g. thousands of times), then estimate the power to detect a genuine effect by the proportion of summary estimates with a significant p-value. In a planned IPD meta-analysis of lifestyle interventions to reduce weight gain in pregnancy, 14 trials (1183 patients) promised their IPD to examine a treatment-BMI interaction (i.e. whether baseline BMI modifies intervention effect on weight gain). Using our simulation-based approach, a two-stage IPD meta-analysis has < 60% power to detect a reduction of 1 kg weight gain for a 10-unit increase in BMI. Additional IPD from ten other published trials (containing 1761 patients) would improve power to over 80%, but only if a fixed-effect meta-analysis was appropriate. Pre-specified adjustment for prognostic factors would increase power further. Incorrect dichotomisation of BMI would reduce power by over 20%, similar to immediately throwing away IPD from ten trials. Simulation-based power calculations could inform the planning and funding of IPD projects, and should be used routinely.
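A compressed Python rendering of steps (i)-(iv) for the treatment-by-BMI interaction example, assuming a simple normal outcome model and a fixed-effect second stage; every parameter value below is an invented placeholder, not the paper's data:

```python
# Simulation-based power for a two-stage IPD meta-analysis of a
# treatment-by-BMI interaction (toy model; all parameters assumed).
import numpy as np

rng = np.random.default_rng(1)

def one_trial(n, interaction):
    bmi = rng.normal(28, 4, n)                    # baseline covariate
    treat = rng.integers(0, 2, n).astype(float)   # randomised allocation
    y = 10 - 1.5 * treat + interaction * treat * (bmi - 28) + rng.normal(0, 3, n)
    return y, treat, bmi

def interaction_estimate(y, treat, bmi):
    # per-trial linear model: intercept, treatment, BMI, treatment x centred BMI
    X = np.column_stack([np.ones_like(y), treat, bmi, treat * (bmi - 28)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = np.linalg.inv(X.T @ X) * sigma2
    return beta[3], np.sqrt(cov[3, 3])

def power(n_trials=14, n_per_trial=85, interaction=-0.1, reps=500):
    hits = 0
    for _ in range(reps):                         # step (iv): repeat many times
        est, var = [], []
        for _ in range(n_trials):                 # stage 1: trial-level estimates
            b, se = interaction_estimate(*one_trial(n_per_trial, interaction))
            est.append(b); var.append(se ** 2)
        w = 1.0 / np.asarray(var)                 # stage 2: fixed-effect pooling
        z = (w @ est / w.sum()) / np.sqrt(1.0 / w.sum())
        hits += abs(z) > 1.96
    return hits / reps

print(f"estimated power: {power():.0%}")
```

The interaction value of -0.1 per BMI unit corresponds to the abstract's "1 kg weight gain per 10-unit increase in BMI" scenario.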
Ekins, Sean; Kaneko, Takushi; Lipinski, Christopher A; Bradford, Justin; Dole, Krishna; Spektor, Anna; Gregory, Kellan; Blondeau, David; Ernst, Sylvia; Yang, Jeremy; Goncharoff, Nicko; Hohman, Moses M; Bunin, Barry A
2010-11-01
There is an urgent need for new drugs against tuberculosis, which annually claims 1.7-1.8 million lives. One approach to identify potential leads is to screen small molecules in vitro against Mycobacterium tuberculosis (Mtb). Until recently there was no central repository to collect information on compounds screened. Consequently, it has been difficult to analyze molecular properties of compounds that inhibit the growth of Mtb in vitro. We have collected data from publicly available sources on over 300 000 small molecules deposited in the Collaborative Drug Discovery TB Database. A cheminformatics analysis of these compounds indicates that inhibitors of the growth of Mtb have statistically higher mean logP and more rule-of-5 alerts, while also having lower HBD counts, atom counts and lower PSA (ChemAxon descriptors), compared to compounds that are classed as inactive. Additionally, Bayesian models for selecting Mtb active compounds were evaluated with over 100 000 compounds, and they demonstrated 10-fold enrichment over random for the top-ranked 600 compounds. This represents a promising approach for finding compounds active against Mtb in whole cells screened under the same in vitro conditions. Various sets of Mtb hit molecules were also examined by various filtering rules used widely in the pharmaceutical industry to identify compounds with potentially reactive moieties. We found differences between the number of compounds flagged by these rules in the Mtb datasets, malaria hits, FDA approved drugs and antibiotics. Combining these approaches may enable selection of compounds with increased probability of inhibition of whole cell Mtb activity.
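The kind of property comparison reported above (logP, HBD, PSA) is straightforward to reproduce with open-source descriptors; a hedged RDKit sketch, where the molecule lists are placeholders rather than the CDD TB dataset (the study used ChemAxon descriptors):

```python
# Compare mean physicochemical properties between two compound sets.
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors

def profile(smiles_list):
    mols = [Chem.MolFromSmiles(s) for s in smiles_list]
    return {
        "logP": np.mean([Descriptors.MolLogP(m) for m in mols]),
        "HBD": np.mean([Descriptors.NumHDonors(m) for m in mols]),
        "PSA": np.mean([Descriptors.TPSA(m) for m in mols]),
    }

actives = ["CCCCCCc1ccccc1O", "Clc1ccc(Cl)cc1"]      # placeholder "hits"
inactives = ["OCC(O)C(O)C(O)C(O)CO", "NCC(=O)O"]     # placeholder "inactives"
print(profile(actives))
print(profile(inactives))
```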
Development of an Aerothermoelastic-Acoustics Simulation Capability of Flight Vehicles
NASA Technical Reports Server (NTRS)
Gupta, K. K.; Choi, S. B.; Ibrahim, A.
2010-01-01
A novel numerical, finite element based analysis methodology is presented in this paper, suitable for accurate and efficient simulation of practical, complex flight vehicles. An associated computer code, developed in this connection, is also described in some detail. Thermal effects of high-speed flow obtained from a heat conduction analysis are incorporated in the modal analysis, which in turn affects the unsteady flow arising out of the interaction of elastic structures with the air. Numerical examples pertaining to representative problems are given in much detail, testifying to the efficacy of the advocated techniques. This is a unique implementation of temperature effects in a finite element CFD based multidisciplinary simulation analysis capability involving large-scale computations.
Predictive modeling targets thymidylate synthase ThyX in Mycobacterium tuberculosis.
Djaout, Kamel; Singh, Vinayak; Boum, Yap; Katawera, Victoria; Becker, Hubert F; Bush, Natassja G; Hearnshaw, Stephen J; Pritchard, Jennifer E; Bourbon, Pauline; Madrid, Peter B; Maxwell, Anthony; Mizrahi, Valerie; Myllykallio, Hannu; Ekins, Sean
2016-06-10
There is an urgent need to identify new treatments for tuberculosis (TB), a major infectious disease caused by Mycobacterium tuberculosis (Mtb), which results in 1.5 million deaths each year. We have targeted two essential enzymes in this organism that are promising for antibacterial therapy and reported to be inhibited by naphthoquinones. ThyX is an essential thymidylate synthase that is mechanistically and structurally unrelated to the human enzyme. DNA gyrase is a DNA topoisomerase present in bacteria and plants but not animals. The current study set out to understand the structure-activity relationships of these targets in Mtb using a combination of cheminformatics and in vitro screening. Here, we report the identification of new Mtb ThyX inhibitors, 2-chloro-3-(4-methanesulfonylpiperazin-1-yl)-1,4-dihydronaphthalene-1,4-dione and idebenone, which show modest whole-cell activity and appear to act, at least in part, by targeting ThyX in Mtb.
ChEMBL web services: streamlining access to drug discovery data and utilities
Davies, Mark; Nowotka, Michał; Papadatos, George; Dedman, Nathan; Gaulton, Anna; Atkinson, Francis; Bellis, Louisa; Overington, John P.
2015-01-01
ChEMBL is now a well-established resource in the fields of drug discovery and medicinal chemistry research. The ChEMBL database curates and stores standardized bioactivity, molecule, target and drug data extracted from multiple sources, including the primary medicinal chemistry literature. Programmatic access to ChEMBL data has been improved by a recent update to the ChEMBL web services (version 2.0.x, https://www.ebi.ac.uk/chembl/api/data/docs), which exposes significantly more data from the underlying database and introduces new functionality. To complement the data-focused services, a utility service (version 1.0.x, https://www.ebi.ac.uk/chembl/api/utils/docs), which provides RESTful access to commonly used cheminformatics methods, has also been concurrently developed. The ChEMBL web services can be used together or independently to build applications and data processing workflows relevant to drug discovery and chemical biology. PMID:25883136
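A minimal call against the data web service described above; the endpoint pattern follows the documentation URL given in the abstract, and the JSON field names (pref_name, molecule_properties.full_mwt) are assumptions about the service's response schema:

```python
# Fetch one molecule record from the ChEMBL data web service as JSON.
import requests

url = "https://www.ebi.ac.uk/chembl/api/data/molecule/CHEMBL25.json"
rec = requests.get(url, timeout=30).json()
print(rec["pref_name"], rec["molecule_properties"]["full_mwt"])  # aspirin
```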
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanson, Andrew D.; Henry, Christopher S.; Fiehn, Oliver
It is increasingly clear that (a) many metabolites undergo spontaneous or enzyme-catalyzed side reactions in vivo, (b) the damaged metabolites formed by these reactions can be harmful, and (c) organisms have biochemical systems that limit the buildup of damaged metabolites. These damage-control systems either return a damaged molecule to its pristine state (metabolite repair) or convert harmful molecules to harmless ones (damage preemption). Because all organisms share a core set of metabolites that suffer the same chemical and enzymatic damage reactions, certain damage-control systems are widely conserved across the kingdoms of life. Relatively few damage reactions and damage-control systems are well known. Uncovering new damage reactions and identifying the corresponding damaged metabolites, damage-control genes, and enzymes demands a coordinated mix of chemistry, metabolomics, cheminformatics, biochemistry, and comparative genomics. This review illustrates the above points using examples from plants, which are at least as prone to metabolite damage as other organisms.
Recent Advances in Targeted and Untargeted Metabolomics by NMR and MS/NMR Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bingol, Kerem
Metabolomics has made significant progress on multiple fronts in the last 18 months. This minireview aimed to give an overview of these advancements in the light of their contribution to targeted and untargeted metabolomics. New computational approaches have emerged to overcome the manual absolute quantitation step of metabolites in 1D 1H NMR spectra. This provides more consistency between inter-laboratory comparisons. Integration of 2D NMR metabolomics databases under a unified web server allowed very accurate identification of the metabolites that have been catalogued in these databases. For the remaining uncatalogued and unknown metabolites, new cheminformatics approaches have been developed by combining NMR and mass spectrometry. These hybrid NMR/MS approaches accelerated the identification of unknowns in untargeted studies, and now they allow profiling of an ever larger number of metabolites in application studies.
NASA Astrophysics Data System (ADS)
McEwen, Leah; Li, Ye
2014-10-01
There are compelling needs from a variety of camps for more chemistry data to be available. While there are funder and government mandates for depositing research data in the United States and Europe, this does not mean it will be done well or expediently. Chemists themselves do not appear overly engaged at this stage and chemistry librarians who work directly with chemists and their local information environments are interested in helping with this challenge. Our unique understanding of organizing data and information enables us to contribute to building necessary infrastructure and establishing standards and best practices across the full research data cycle. As not many support structures focused on chemistry currently exist, we are initiating explorations through a few case studies and focused pilot projects presented here, with an aim of identifying opportunities for increased collaboration among chemists, chemistry librarians, cheminformaticians and other chemistry professionals.
NASA Technical Reports Server (NTRS)
Levison, William H.
1988-01-01
This study explored application of a closed loop pilot/simulator model to the analysis of some simulator fidelity issues. The model was applied to two data bases: (1) a NASA ground based simulation of an air-to-air tracking task in which nonvisual cueing devices were explored, and (2) a ground based and inflight study performed by the Calspan Corporation to explore the effects of simulator delay on attitude tracking performance. The model predicted the major performance trends obtained in both studies. A combined analytical and experimental procedure for exploring simulator fidelity issues is outlined.
Methods for simulation-based analysis of fluid-structure interaction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barone, Matthew Franklin; Payne, Jeffrey L.
2005-10-01
Methods for analysis of fluid-structure interaction using high fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations points to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.
pyPcazip: A PCA-based toolkit for compression and analysis of molecular simulation data
NASA Astrophysics Data System (ADS)
Shkurti, Ardita; Goni, Ramon; Andrio, Pau; Breitmoser, Elena; Bethune, Iain; Orozco, Modesto; Laughton, Charles A.
The biomolecular simulation community is currently in need of novel and optimised software tools that can analyse and process, in reasonable timescales, the large amounts of generated molecular simulation data. In light of this, we have developed and present here pyPcazip: a suite of software tools for compression and analysis of molecular dynamics (MD) simulation data. The software is compatible with trajectory file formats generated by most contemporary MD engines such as AMBER, CHARMM, GROMACS and NAMD, and is MPI parallelised to permit the efficient processing of very large datasets. pyPcazip is Unix-based, open-source software (BSD licensed) written in Python.
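A generic NumPy sketch of the PCA-compression idea that pyPcazip implements (not the pyPcazip API itself): project mean-centred frames onto the leading principal components and keep only the scores, mean, and components:

```python
# Lossy PCA compression of an MD coordinate trajectory (synthetic data).
import numpy as np

rng = np.random.default_rng(3)
traj = rng.normal(size=(1000, 3 * 50))        # 1000 frames, 50 atoms (x,y,z)

mean = traj.mean(axis=0)
u, s, vt = np.linalg.svd(traj - mean, full_matrices=False)
k = 10                                         # retained components
scores = (traj - mean) @ vt[:k].T              # compressed coordinates
recon = scores @ vt[:k] + mean                 # lossy reconstruction
stored = scores.size + vt[:k].size + mean.size
print("stored fraction:", stored / traj.size)
```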
Noncontiguous atom matching structural similarity function.
Teixeira, Ana L; Falcao, Andre O
2013-10-28
Measuring similarity between molecules is a fundamental problem in cheminformatics. Given that similar molecules tend to have similar physical, chemical, and biological properties, the notion of molecular similarity plays an important role in the exploration of molecular data sets, query-retrieval in molecular databases, and in structure-property/activity modeling. Various methods to define structural similarity between molecules are available in the literature, but so far none has been used with consistent and reliable results for all situations. We propose a new similarity method based on atom alignment for the analysis of structural similarity between molecules. This method is based on the comparison of the bonding profiles of atoms on comparable molecules, including features that are seldom found in other structural or graph matching approaches, like chirality or double bond stereoisomerism. The similarity measure is then defined on the annotated molecular graph, based on an iterative directed graph similarity procedure and optimal atom alignment between atoms using a pairwise matching algorithm. With the proposed approach, the similarities detected are more intuitively understood, because similar atoms in the molecules are explicitly shown. This noncontiguous atom matching structural similarity method (NAMS) was tested and compared with one of the most widely used similarity methods (fingerprint-based similarity) using three difficult data sets with different characteristics. Despite having a higher computational cost, the method performed well, being able to distinguish either different or very similar hydrocarbons that were indistinguishable using a fingerprint-based approach. NAMS also verified the similarity principle using a data set of structurally similar steroids with differences in the binding affinity to the corticosteroid binding globulin receptor, by showing that pairs of steroids with a high degree of similarity (>80%) tend to have smaller differences in the absolute value of binding activity. Using a highly diverse set of compounds with information about the monoamine oxidase inhibition level, the method was also able to recover a significantly higher average fraction of active compounds when the seed is active, across different similarity cutoff thresholds. In particular, for cutoff values of 86%, 93%, and 96.5%, NAMS recovered a fraction of actives of 0.57, 0.63, and 0.83, respectively, while the fingerprint-based approach recovered 0.41, 0.40, and 0.39, respectively. NAMS is made available freely for the whole community in a simple Web based tool as well as the Python source code at http://nams.lasige.di.fc.ul.pt/.
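The fingerprint baseline against which NAMS is compared can be reproduced in a few lines of RDKit (Morgan fingerprints with Tanimoto similarity); the hydrocarbon pair below is an illustrative stand-in for the paper's data sets:

```python
# Fingerprint-based similarity baseline: Morgan fingerprints + Tanimoto.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

a = Chem.MolFromSmiles("CCCCCC")       # hexane
b = Chem.MolFromSmiles("CCCCCCC")      # heptane
fa = AllChem.GetMorganFingerprintAsBitVect(a, 2, nBits=2048)
fb = AllChem.GetMorganFingerprintAsBitVect(b, 2, nBits=2048)
print(DataStructs.TanimotoSimilarity(fa, fb))
```

NAMS, by contrast, aligns atoms between the two molecular graphs rather than comparing bit vectors, which is what lets it separate near-identical hydrocarbons like these.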
Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.
2016-01-01
An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
Using cognitive task analysis to develop simulation-based training for medical tasks.
Cannon-Bowers, Jan; Bowers, Clint; Stout, Renee; Ricci, Katrina; Hildabrand, Annette
2013-10-01
Pressures to increase the efficacy and effectiveness of medical training are causing the Department of Defense to investigate the use of simulation technologies. This article describes a comprehensive cognitive task analysis technique that can be used to simultaneously generate training requirements, performance metrics, scenario requirements, and simulator/simulation requirements for medical tasks. On the basis of a variety of existing techniques, we developed a scenario-based approach that asks experts to perform the targeted task multiple times, with each pass probing a different dimension of the training development process. In contrast to many cognitive task analysis approaches, we argue that our technique can be highly cost effective because it is designed to accomplish multiple goals. The technique was pilot tested with expert instructors from a large military medical training command. These instructors were employed to generate requirements for two selected combat casualty care tasks: cricothyroidotomy and hemorrhage control. Results indicated that the technique is feasible to use and generates usable data to inform simulation-based training system design. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.
Simulation of tunneling construction methods of the Cisumdawu toll road
NASA Astrophysics Data System (ADS)
Abduh, Muhamad; Sukardi, Sapto Nugroho; Ola, Muhammad Rusdian La; Ariesty, Anita; Wirahadikusumah, Reini D.
2017-11-01
Simulation can be used as a tool for planning and analysis of a construction method. Using simulation techniques, a contractor can optimally design the resources associated with a construction method and compare it to other methods based on several criteria, such as productivity, waste, and cost. This paper discusses the use of simulation of the Norwegian Method of Tunneling (NMT) for 472 meters of tunneling work in the Cisumdawu Toll Road project. Primary and secondary data were collected to provide useful information for the simulation as well as problems that may be faced by the contractor. The method was modelled using CYCLONE and then simulated using WebCYCLONE. The simulation shows the duration of the project, derived from duration models of each work task that are based on literature review, machine productivity, and several assumptions. The results of the simulation also show the total cost of the project, which was modeled based on published construction and building unit costs and on the websites of local and international suppliers. The analysis of the advantages and disadvantages of the method was conducted based on its productivity, wastes, and cost. The simulation concluded that the total cost of this operation is about Rp. 900,437,004,599 and the total duration of the tunneling operation is 653 days. The results of the simulation will be used as a recommendation to the contractor before the implementation of the already selected tunneling operation.
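In the same spirit as the CYCLONE-style duration modelling described above, a toy Monte Carlo of a blast-cycle duration; all distributions and the advance-per-cycle figure are invented assumptions, not the project's data:

```python
# Monte Carlo of a tunnelling blast cycle built from triangular task durations.
import numpy as np

rng = np.random.default_rng(6)
reps = 10_000
# per-cycle work tasks (hours): drilling, charging, blasting/venting,
# mucking, rock support -- all (min, mode, max) values assumed
cycle = (rng.triangular(2, 3, 5, reps) + rng.triangular(1, 2, 3, reps)
         + rng.triangular(1.5, 2, 4, reps) + rng.triangular(3, 4, 6, reps)
         + rng.triangular(2, 3, 5, reps))
advance = 2.0                                   # metres per blast cycle (assumed)
days = (472 / advance) * cycle.mean() / 24      # expected calendar days
print(f"expected duration ~ {days:.0f} days")
```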
Ribay, Kathryn; Kim, Marlene T; Wang, Wenyi; Pinolini, Daniel; Zhu, Hao
2016-03-01
The estrogen receptor α (ERα) is a critical target for drug design as well as a potential source of toxicity when activated unintentionally. Thus, evaluating potential ERα binding agents is critical in both drug discovery and chemical toxicity areas. Computational tools, e.g., Quantitative Structure-Activity Relationship (QSAR) models, can predict potential ERα binding agents before chemical synthesis. The purpose of this project was to develop enhanced predictive models of ERα binding agents by utilizing advanced cheminformatics tools that can integrate publicly available bioassay data. The initial ERα binding agent data set, consisting of 446 binders and 8307 non-binders, was obtained from the Tox21 Challenge project organized by the NIH Chemical Genomics Center (NCGC). After removing the duplicates and inorganic compounds, this data set was used to create a training set (259 binders and 259 non-binders). This training set was used to develop QSAR models using chemical descriptors. The resulting models were then used to predict the binding activity of 264 external compounds, which became available to us after the models were developed. The cross-validation results for the training set [Correct Classification Rate (CCR) = 0.72] were much higher than the external predictivity on the unknown compounds (CCR = 0.59). To improve the conventional QSAR models, all compounds in the training set were used to search PubChem and generate a profile of their biological responses across thousands of bioassays. The most important bioassays were prioritized to generate a similarity index that was used to calculate the biosimilarity score between each pair of compounds. The nearest neighbors for each compound within the set were then identified, and its ERα binding potential was predicted from its nearest neighbors in the training set. The hybrid model performance (CCR = 0.94 for cross-validation; CCR = 0.68 for external prediction) showed significant improvement over the original QSAR models, particularly for the activity cliffs that induce prediction errors. The results of this study indicate that the response profile of chemicals from public data provides useful information for modeling and evaluation purposes. Public big data resources should be considered along with chemical structure information when predicting new compounds, such as unknown ERα binding agents.
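A conceptual sketch of the hybrid step, where a query compound's activity is predicted from its nearest neighbours in bioassay-response space; the profiles, similarity index, and set sizes below are synthetic stand-ins for the PubChem-derived data, not the authors' implementation:

```python
# Nearest-neighbour prediction in bioassay-response space (toy version).
import numpy as np

rng = np.random.default_rng(4)
profiles = rng.integers(-1, 2, size=(518, 30))   # -1/0/1 responses, 30 assays
labels = rng.integers(0, 2, 518)                 # synthetic binder labels

def biosimilarity(p, q):
    both = (p != 0) & (q != 0)                   # assays tested in both compounds
    return (p[both] == q[both]).mean() if both.any() else 0.0

def predict(query, k=5):
    sims = np.array([biosimilarity(query, p) for p in profiles])
    nn = np.argsort(sims)[-k:]                   # k most biosimilar compounds
    return labels[nn].mean() >= 0.5              # majority vote of neighbours

print(predict(rng.integers(-1, 2, 30)))
```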
Analysis of factors influencing hydration site prediction based on molecular dynamics simulations.
Yang, Ying; Hu, Bingjie; Lill, Markus A
2014-10-27
Water contributes significantly to the binding of small molecules to proteins in biochemical systems. Molecular dynamics (MD) simulation based programs such as WaterMap and WATsite have been used to probe the locations and thermodynamic properties of hydration sites at the surface or in the binding site of proteins, generating important information for structure-based drug design. However, questions associated with the influence of the simulation protocol on hydration site analysis remain. In this study, we use WATsite to investigate the influence of factors such as simulation length and variations in initial protein conformations on hydration site prediction. We find that a 4 ns MD simulation is appropriate to obtain a reliable prediction of the locations and thermodynamic properties of hydration sites. In addition, hydration site prediction can be largely affected by the initial protein conformations used for the MD simulations. Here, we provide a first quantification of this effect and further indicate that similar conformations of binding site residues (RMSD < 0.5 Å) are required to obtain consistent hydration site predictions.
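The conformation-similarity check motivating the RMSD < 0.5 Å guideline can be scripted, for example, with MDAnalysis; the file names and residue selection below are hypothetical placeholders, not from the study:

```python
# Compare binding-site residues between two starting structures by RMSD.
import MDAnalysis as mda
from MDAnalysis.analysis import rms

ref = mda.Universe("start_A.pdb")                 # hypothetical input files
alt = mda.Universe("start_B.pdb")
sel = "protein and name CA and resid 50:70"       # hypothetical binding site
print(rms.rmsd(alt.select_atoms(sel).positions,
               ref.select_atoms(sel).positions,
               superposition=True))               # RMSD after optimal fit, in Å
```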
Simulation-Based Bronchoscopy Training
Kennedy, Cassie C.; Maldonado, Fabien
2013-01-01
Background: Simulation-based bronchoscopy training is increasingly used, but effectiveness remains uncertain. We sought to perform a comprehensive synthesis of published work on simulation-based bronchoscopy training. Methods: We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, and Scopus for eligible articles through May 11, 2011. We included all original studies involving health professionals that evaluated, in comparison with no intervention or an alternative instructional approach, simulation-based training for flexible or rigid bronchoscopy. Study selection and data abstraction were performed independently and in duplicate. We pooled results using random effects meta-analysis. Results: From an initial pool of 10,903 articles, we identified 17 studies evaluating simulation-based bronchoscopy training. In comparison with no intervention, simulation training was associated with large benefits on skills and behaviors (pooled effect size, 1.21 [95% CI, 0.82-1.60]; n = 8 studies) and moderate benefits on time (0.62 [95% CI, 0.12-1.13]; n = 7). In comparison with clinical instruction, behaviors with real patients showed nonsignificant effects favoring simulation for time (0.61 [95% CI, −1.47 to 2.69]) and process (0.33 [95% CI, −1.46 to 2.11]) outcomes (n = 2 studies each), although variation in training time might account for these differences. Four studies compared alternate simulation-based training approaches. Inductive analysis to inform instructional design suggested that longer or more structured training is more effective, authentic clinical context adds value, and animal models and plastic part-task models may be superior to more costly virtual-reality simulators. Conclusions: Simulation-based bronchoscopy training is effective in comparison with no intervention. Comparative effectiveness studies are few. PMID:23370487
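For reference, the random-effects pooling behind such summary effect sizes can be written out directly (DerSimonian-Laird estimator); the per-study numbers below are invented placeholders, not the review's data:

```python
# DerSimonian-Laird random-effects meta-analysis of standardised effect sizes.
import numpy as np

d = np.array([1.5, 0.9, 1.4, 1.1, 0.8, 1.6, 1.2, 1.3])          # effect sizes
v = np.array([0.20, 0.15, 0.25, 0.18, 0.22, 0.30, 0.17, 0.21])  # their variances

w = 1 / v
d_fixed = np.sum(w * d) / w.sum()
q = np.sum(w * (d - d_fixed) ** 2)                    # heterogeneity statistic
tau2 = max(0.0, (q - (len(d) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
w_re = 1 / (v + tau2)                                 # random-effects weights
pooled = np.sum(w_re * d) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
print(f"pooled d = {pooled:.2f} "
      f"[95% CI {pooled - 1.96*se:.2f}, {pooled + 1.96*se:.2f}]")
```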
Security Analysis of Selected AMI Failure Scenarios Using Agent Based Game Theoretic Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T
Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the Advanced Metering Infrastructure (AMI) functional domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) working group has currently documented 29 failure scenarios. The strategy for the game was developed by analyzing five electric sector representative failure scenarios contained in the AMI functional domain. From these five selected scenarios, we characterize them into three specific threat categories affecting confidentiality, integrity and availability (CIA). The analysis using our ABGT simulation demonstrates how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.
Algorithms and architecture for multiprocessor based circuit simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deutsch, J.T.
Accurate electrical simulation is critical to the design of high performance integrated circuits. Logic simulators can verify function and give first-order timing information. Switch level simulators are more effective at dealing with charge sharing than standard logic simulators, but cannot provide accurate timing information or discover DC problems. Delay estimation techniques and cell level simulation can be used in constrained design methods, but must be tuned for each application, and circuit simulation must still be used to generate the cell models. None of these methods has the guaranteed accuracy that many circuit designers desire, and none can provide detailed waveform information. Detailed electrical-level simulation can predict circuit performance if devices and parasitics are modeled accurately. However, the computational requirements of conventional circuit simulators make it impractical to simulate current large circuits. In this dissertation, the implementation of Iterated Timing Analysis (ITA), a relaxation-based technique for accurate circuit simulation, on a special-purpose multiprocessor is presented. The ITA method is an SOR-Newton, relaxation-based method which uses event-driven analysis and selective trace to exploit the temporal sparsity of the electrical network. Because event-driven selective trace techniques are employed, this algorithm lends itself to implementation on a data-driven computer.
Traffic Flow Density Distribution Based on FEM
NASA Astrophysics Data System (ADS)
Ma, Jing; Cui, Jianming
In the analysis of normal traffic flow, numerical analysis is usually carried out with static or dynamic models based on fluid mechanics. However, such an approach requires extensive modeling and data handling, and its accuracy is not high. The Finite Element Method (FEM) is a product of the combination of modern mathematics, mechanics and computer technology, and it has been widely applied in various domains such as engineering. Based on existing traffic flow theory, ITS and the development of FEM, a simulation theory applying FEM to traffic flow problems is put forward. Based on this theory, and using existing Finite Element Analysis (FEA) software, traffic flow is simulated and analyzed with fluid mechanics and dynamics. The problem of massive data processing in manual modeling and numerical analysis is solved, and the authenticity of the simulation is enhanced.
NASA Astrophysics Data System (ADS)
Yang, Wenxiu; Liu, Yanbo; Zhang, Ligai; Cao, Hong; Wang, Yang; Yao, Jinbo
2016-06-01
Needleless electrospinning technology is considered a better avenue to produce nanofibrous materials at large scale, and electric field intensity and its distribution play an important role in controlling nanofiber diameter and the quality of the nanofibrous web during electrospinning. In the current study, a novel needleless electrospinning method was proposed based on Von Koch curves of fractal configuration; simulation and analysis of electric field intensity and distribution in the new electrospinning process were performed with the finite element analysis software Comsol Multiphysics 4.4, based on linear and nonlinear Von Koch fractal curves (hereafter called fractal models). The results of simulation and analysis indicated that the second-level fractal structure is the optimal linear electrospinning spinneret in terms of field intensity and uniformity. Further simulation and analysis showed that the circular type of fractal spinneret has better field intensity and distribution than the spiral type in the nonlinear fractal electrospinning technology. An electrospinning apparatus with the optimal Von Koch fractal spinneret was set up to verify the theoretical analysis results from the Comsol simulation, achieving more uniform electric field distribution and lower energy cost compared to current needle and needleless electrospinning technologies.
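The linear spinneret geometry itself is easy to generate: one Von Koch refinement replaces each segment with four, adding an equilateral bump, and the abstract's "second level" structure corresponds to two refinements. A short NumPy sketch:

```python
# Generate Von Koch curve points by iterative segment refinement.
import numpy as np

def koch_step(points):
    out = [points[0]]
    rot = np.array([[0.5, -np.sqrt(3) / 2],
                    [np.sqrt(3) / 2, 0.5]])   # rotation by +60 degrees
    for p, q in zip(points[:-1], points[1:]):
        d = (q - p) / 3
        a, b = p + d, p + 2 * d               # trisection points
        tip = a + rot @ d                     # apex of the equilateral bump
        out += [a, tip, b, q]
    return np.array(out)

curve = np.array([[0.0, 0.0], [1.0, 0.0]])
for _ in range(2):                            # "second level" structure
    curve = koch_step(curve)
print(len(curve), "points")                   # 17 points, 16 segments
```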
DEPEND: A simulation-based environment for system level dependability analysis
NASA Technical Reports Server (NTRS)
Goswami, Kumar; Iyer, Ravishankar K.
1992-01-01
The design and evaluation of highly reliable computer systems is a complex issue. Designers mostly develop such systems based on prior knowledge and experience and occasionally from analytical evaluations of simplified designs. A simulation-based environment called DEPEND, which is especially geared for the design and evaluation of fault-tolerant architectures, is presented. DEPEND is unique in that it exploits the properties of object-oriented programming to provide a flexible framework with which a user can rapidly model and evaluate various fault-tolerant systems. The key features of the DEPEND environment are described, and its capabilities are illustrated with a detailed analysis of a real design. In particular, DEPEND is used to simulate the Unix based Tandem Integrity fault tolerance and evaluate how well it handles near-coincident errors caused by correlated and latent faults. Issues such as memory scrubbing, re-integration policies, and workload dependent repair times, which affect how the system handles near-coincident errors, are also evaluated. The method used by DEPEND to simulate error latency and the time acceleration technique that provides enormous simulation speed-up are also discussed. Unlike other simulation-based dependability studies, the use of these approaches and the accuracy of the simulation model are validated by comparing the results of the simulations with measurements obtained from fault injection experiments conducted on a production Tandem Integrity machine.
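A toy estimate of the near-coincident error exposure that DEPEND is used to study: the probability that a second fault arrives while an earlier fault is still latent. Both rates below are invented assumptions, not measurements from the Tandem Integrity analysis:

```python
# Monte Carlo estimate of near-coincident fault probability (toy model).
import numpy as np

rng = np.random.default_rng(7)
fault_rate = 1e-4                              # faults per hour (assumed)
latency = rng.exponential(24.0, 100_000)       # latent period of first fault, h
gap = rng.exponential(1 / fault_rate, 100_000) # time to next fault, h
print("near-coincidence probability:", (gap < latency).mean())
```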
Research on monocentric model of urbanization by agent-based simulation
NASA Astrophysics Data System (ADS)
Xue, Ling; Yang, Kaizhong
2008-10-01
Over the past years, GIS has been widely used for modeling urbanization from a variety of perspectives, such as digital terrain representation and overlay analysis using a cell-based data platform. Similarly, simulation of urban dynamics has been achieved with the use of Cellular Automata. In contrast to these approaches, agent-based simulation provides a much more powerful set of tools, allowing researchers to set up a counterpart of real environmental and urban systems in the computer for experimentation and scenario analysis. This paper reviews research on the economic mechanisms of urbanization, and an agent-based monocentric model is set up for further understanding the urbanization process and its mechanism in China. We build an endogenous growth model with dynamic interactions between spatial agglomeration and urban development by using agent-based simulation. It simulates the migration decisions of two main types of agents, namely rural and urban households, between rural and urban areas. The model contains multiple economic interactions that are crucial in understanding the urbanization and industrialization process in China. These adaptive agents can adjust their supply and demand according to the market situation by a learning algorithm. The simulation result shows that this agent-based urban model is able to reproduce the urbanization process and to produce likely-to-occur projections of reality.
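To give the flavour of the migration mechanism, a toy agent loop in which rural households move when the expected urban wage (net of congestion) beats their rural income; all parameters are invented, and the paper's actual model is considerably richer:

```python
# Toy agent-based rural-urban migration with a congestion-eroded wage premium.
import numpy as np

rng = np.random.default_rng(5)
n = 10_000
urban = np.zeros(n, dtype=bool)                 # everyone starts rural
rural_income = rng.normal(1.0, 0.2, n)          # heterogeneous rural incomes

for step in range(50):
    share = urban.mean()
    urban_wage = 1.5 - 1.0 * share              # congestion erodes the premium
    # each household compares a noisy urban wage signal with its rural income
    urban = urban_wage + rng.normal(0, 0.1, n) > rural_income

print(f"urbanisation rate: {urban.mean():.0%}")
```

The loop settles near the share at which the congested urban wage equals the marginal household's rural income, a crude stand-in for the spatial equilibrium the monocentric model formalises.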
Panchal, Mitesh B; Upadhyay, Sanjay H
2014-09-01
In this study, the feasibility of single-walled boron nitride nanotube (SWBNNT)-based biosensors has been assessed using a continuum modelling-based simulation approach, for mass-based detection of various bacteria/viruses. Various types of bacteria or viruses have been taken into consideration at the free end of the cantilevered configuration of the SWBNNT, as a biosensor. Resonant frequency shift-based analysis has been performed with the adsorption of various bacteria/viruses treated as additional mass on the SWBNNT-based sensor system. A continuum mechanics-based analytical approach, considering effective wall thickness, has been used to validate the finite element method (FEM)-based simulation results, which are based on continuum volume-based modelling of the SWBNNT. As a systematic analysis approach, the FEM-based simulation results are found to be in excellent agreement with the analytical results, supporting the analysis of SWBNNTs for their wide range of applications such as nanoresonators, biosensors, gas sensors, transducers and so on. The obtained results suggest that by using an SWBNNT of smaller size the sensitivity of the sensor system can be enhanced, and detection of a bacterium/virus having a mass of 4.28 × 10⁻²⁴ kg can be effectively performed.
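A back-of-envelope version of the frequency-shift readout, using the lumped cantilever relation f = (1/2π)√(k/m_eff); the stiffness and effective-mass values are invented placeholders, and only the added mass comes from the abstract:

```python
# Resonant frequency shift of a cantilever resonator due to an adsorbed mass.
import math

k = 0.5            # N/m, hypothetical effective stiffness of the SWBNNT
m_eff = 1.0e-21    # kg, hypothetical effective mass of the resonator
dm = 4.28e-24      # kg, bacterium/virus mass quoted in the abstract

f0 = math.sqrt(k / m_eff) / (2 * math.pi)          # unloaded resonance
f1 = math.sqrt(k / (m_eff + dm)) / (2 * math.pi)   # loaded resonance
print(f"relative frequency shift: {(f0 - f1) / f0:.2e}")
```

For small added mass the relative shift is approximately dm / (2 m_eff), which is why shrinking the nanotube (reducing m_eff) raises sensitivity, as the abstract notes.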
The CompTox Chemistry Dashboard: a community data resource for environmental chemistry.
Williams, Antony J; Grulke, Christopher M; Edwards, Jeff; McEachran, Andrew D; Mansouri, Kamel; Baker, Nancy C; Patlewicz, Grace; Shah, Imran; Wambaugh, John F; Judson, Richard S; Richard, Ann M
2017-11-28
Despite an abundance of online databases providing access to chemical data, there is increasing demand for high-quality, structure-curated, open data to meet the various needs of the environmental sciences and computational toxicology communities. The U.S. Environmental Protection Agency's (EPA) web-based CompTox Chemistry Dashboard is addressing these needs by integrating diverse types of relevant domain data through a cheminformatics layer, built upon a database of curated substances linked to chemical structures. These data include physicochemical, environmental fate and transport, exposure, usage, in vivo toxicity, and in vitro bioassay data, surfaced through an integration hub with link-outs to additional EPA data and public domain online resources. Batch searching allows for direct chemical identifier (ID) mapping and downloading of multiple data streams in several different formats. This facilitates fast access to available structure, property, toxicity, and bioassay data for collections of chemicals (hundreds to thousands at a time). Advanced search capabilities are available to support, for example, non-targeted analysis and identification of chemicals using mass spectrometry. The contents of the chemistry database, presently containing ~ 760,000 substances, are available as public domain data for download. The chemistry content underpinning the Dashboard has been aggregated over the past 15 years by both manual and auto-curation techniques within EPA's DSSTox project. DSSTox chemical content is subject to strict quality controls to enforce consistency among chemical substance-structure identifiers, as well as list curation review to ensure accurate linkages of DSSTox substances to chemical lists and associated data. The Dashboard, publicly launched in April 2016, has expanded considerably in content and user traffic over the past year. It is continuously evolving with the growth of DSSTox into high-interest or data-rich domains of interest to EPA, such as chemicals on the Toxic Substances Control Act listing, while providing the user community with a flexible and dynamic web-based platform for integration, processing, visualization and delivery of data and resources. The Dashboard provides support for a broad array of research and regulatory programs across the worldwide community of toxicologists and environmental scientists.
49 CFR 236.923 - Task analysis and basic requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... classroom, simulator, computer-based, hands-on, or other formally structured training and testing, except... for Processor-Based Signal and Train Control Systems § 236.923 Task analysis and basic requirements...) Based on a formal task analysis, identify the installation, maintenance, repair, modification...
Ryu, Won Hyung A; Mostafa, Ahmed E; Dharampal, Navjit; Sharlin, Ehud; Kopp, Gail; Jacobs, W Bradley; Hurlbert, R John; Chan, Sonny; Sutherland, Garnette R
2017-10-01
Simulation-based education has made its entry into surgical residency training, particularly as an adjunct to hands-on clinical experience. However, one of the ongoing challenges to wide adoption is the capacity of simulators to incorporate the educational features required for effective learning. The aim of this study was to identify strengths and limitations of spine simulators to characterize design elements that are essential in enhancing resident education. We performed a mixed qualitative and quantitative cohort study with a focused survey and interviews of stakeholders in spine surgery pertaining to their experiences on 3 spine simulators. Ten participants were recruited, spanning all levels of training and expertise, until qualitative analysis reached saturation of themes. Participants were asked to perform lumbar pedicle screw insertion on 3 simulators. Afterward, a 10-item survey was administered and a focused interview was conducted to explore topics pertaining to the design features of the simulators. Overall impressions of the simulators were positive with regard to their educational benefit, but our qualitative analysis revealed differing strengths and limitations. The main design strengths of the computer-based simulators were the incorporation of procedural guidance and the provision of performance feedback. The synthetic model excelled in achieving more realistic haptic feedback and incorporating the use of actual surgical tools. Stakeholders from trainees to experts acknowledge the growing role of simulation-based education in spine surgery. However, different simulation modalities have varying design elements that augment learning in distinct ways. Characterization of these design characteristics will allow for standardization of simulation curricula in spinal surgery, optimizing educational benefit. Copyright © 2017 Elsevier Inc. All rights reserved.
Trellis coding with Continuous Phase Modulation (CPM) for satellite-based land-mobile communications
NASA Technical Reports Server (NTRS)
1989-01-01
This volume of the final report summarizes the results of our studies on the satellite-based mobile communications project. It includes: a detailed analysis, design, and simulations of trellis coded, full/partial response CPM signals with/without interleaving over various Rician fading channels; analysis and simulation of computational cutoff rates for coherent, noncoherent, and differential detection of CPM signals; optimization of the complete transmission system; analysis and simulation of power spectrum of the CPM signals; design and development of a class of Doppler frequency shift estimators; design and development of a symbol timing recovery circuit; and breadboard implementation of the transmission system. Studies prove the suitability of the CPM system for mobile communications.
Extending simulation modeling to activity-based costing for clinical procedures.
Glick, N D; Blackmore, C C; Zelman, W N
2000-04-01
A simulation model was developed to measure costs in an Emergency Department setting for patients presenting with possible cervical-spine injury who needed radiological imaging. Simulation, a tool widely used to account for process variability but typically focused on utilization and throughput analysis, is being introduced here as a realistic means to perform an activity-based-costing (ABC) analysis, because traditional ABC methods have difficulty coping with process variation in healthcare. Though the study model has a very specific application, it can be generalized to other settings simply by changing the input parameters. In essence, simulation was found to be an accurate and viable means to conduct an ABC analysis; in fact, the output provides more complete information than could be achieved through other conventional analyses, which gives management more leverage with which to negotiate contractual reimbursements.
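To make the approach concrete, the sketch below runs a Monte Carlo version of simulation-driven ABC: each activity's duration is drawn from a distribution to capture process variability, and cost accrues per activity at an assumed rate. The activity list, mean durations and cost rates are hypothetical placeholders, not figures from the study.

```python
import random

# Hypothetical activities for an ED visit with possible cervical-spine
# injury: (mean duration in minutes, cost rate in $ per minute).
ACTIVITIES = {
    "triage":         (10, 1.5),
    "physician_exam": (15, 4.0),
    "radiography":    (25, 3.0),
    "disposition":    (10, 1.5),
}

def simulate_visit():
    """Draw a duration for each activity and return the visit's ABC cost."""
    cost = 0.0
    for mean, rate in ACTIVITIES.values():
        duration = random.expovariate(1.0 / mean)  # process variability
        cost += duration * rate
    return cost

costs = [simulate_visit() for _ in range(10_000)]
print(f"mean cost per visit: ${sum(costs) / len(costs):.2f}")
```

Changing the input parameters (activities, distributions, rates) retargets the same model to another clinical setting, which is the generalization the authors describe.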
1990-10-01
to economic, technological, spatial or logistic concerns, or involve training, man-machine interfaces, or integration into existing systems. Once the...probabilistic reasoning, mixed analysis- and simulation-oriented, mixed computation- and communication-oriented, nonpreemptive static priority...scheduling base, nonrandomized, preemptive static priority scheduling base, randomized, simulation-oriented, and static scheduling base. The selection of both
An Example-Based Brain MRI Simulation Framework.
He, Qing; Roy, Snehashis; Jog, Amod; Pham, Dzung L
2015-02-21
The simulation of magnetic resonance (MR) images plays an important role in the validation of image analysis algorithms, such as image segmentation, due to the lack of sufficient ground truth in real MR images. Previous work on MRI simulation has focused on explicitly modeling the MR image formation process. However, because of the overwhelming complexity of MR acquisition, these simulations must involve simplifications and approximations that can result in visually unrealistic simulated images. In this work, we describe an example-based simulation framework, which uses an "atlas" consisting of an MR image and its anatomical models derived from the hard segmentation. The relationships between the MR image intensities and its anatomical models are learned using a patch-based regression that implicitly models the physics of MR image formation. Given the anatomical models of a new brain, a new MR image can be simulated using the learned regression. This approach has been extended to also simulate intensity inhomogeneity artifacts based on the statistical model of training data. Results show that the example-based MRI simulation method is capable of simulating different image contrasts and is robust to different choices of atlas. The simulated images resemble real MR images more than simulations produced by a physics-based model.
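A minimal sketch of the example-based idea, with a toy 2D "atlas" and scikit-learn's random forest standing in for the paper's patch-based regression: features are small patches of the anatomical model, targets are the co-registered MR intensities, and the learned mapping synthesizes an image for a new anatomical model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def extract_patches(volume, size=3):
    """Collect size x size patches (2D slice for brevity) as feature rows."""
    r = size // 2
    feats, centers = [], []
    for i in range(r, volume.shape[0] - r):
        for j in range(r, volume.shape[1] - r):
            feats.append(volume[i - r:i + r + 1, j - r:j + r + 1].ravel())
            centers.append((i, j))
    return np.array(feats), centers

# Toy "atlas": an anatomical label image and its co-registered MR slice.
rng = np.random.default_rng(0)
labels_atlas = rng.integers(0, 4, (64, 64)).astype(float)
mri_atlas = labels_atlas * 50 + rng.normal(0, 5, (64, 64))

X, centers = extract_patches(labels_atlas)
y = np.array([mri_atlas[i, j] for i, j in centers])
reg = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Simulate an MR slice for a new anatomical model.
labels_new = rng.integers(0, 4, (64, 64)).astype(float)
X_new, centers_new = extract_patches(labels_new)
simulated = np.zeros_like(labels_new)
for (i, j), v in zip(centers_new, reg.predict(X_new)):
    simulated[i, j] = v
```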
WarpIV: In situ visualization and analysis of ion accelerator simulations
Rubel, Oliver; Loring, Burlen; Vay, Jean-Luc; et al.
2016-05-09
The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications, from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. Supplemental material at https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.
2009-12-01
events. Work associated with aperiodic tasks have the same statistical behavior and the same timing requirements. The timing deadlines are soft. • Sporadic...answers, but it is possible to calculate how precise the estimates are. Simulation-based performance analysis of a model includes a statistical ...to evaluate all pos- sible states in a timely manner. This is the principle reason for resorting to simulation and statistical analysis to evaluate
Comparison of Nonlinear Random Response Using Equivalent Linearization and Numerical Simulation
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2000-01-01
A recently developed finite-element-based equivalent linearization approach for the analysis of random vibrations of geometrically nonlinear multiple degree-of-freedom structures is validated. The validation is based on comparisons with results from a finite-element-based numerical simulation analysis using a numerical integration technique in physical coordinates. In particular, results for the case of a clamped-clamped beam are considered for an extensive load range to establish the limits of validity of the equivalent linearization approach.
Systematic analysis of signaling pathways using an integrative environment.
Visvanathan, Mahesh; Breit, Marc; Pfeifer, Bernhard; Baumgartner, Christian; Modre-Osprian, Robert; Tilg, Bernhard
2007-01-01
Understanding the biological processes of signaling pathways as a whole system requires an integrative software environment with comprehensive capabilities. The environment should include tools for pathway design, visualization and simulation, together with a knowledge base concerning signaling pathways. In this paper we introduce a new integrative environment for the systematic analysis of signaling pathways. The system includes environments for pathway design, visualization and simulation, and a knowledge base that combines biological and modeling information concerning signaling pathways, providing a basic understanding of the biological system, its structure and its functioning. The system is designed with a client-server architecture. It contains a pathway designing environment and a simulation environment as upper layers with a relational knowledge base as the underlying layer. The TNFα-mediated NF-κB signal transduction pathway model was designed and tested using our integrative framework; it was also useful in defining the structure of the knowledge base. A sensitivity analysis of this specific pathway was performed, providing simulation data. The model was then extended, showing promising initial results. The proposed system offers a holistic view of pathways containing biological and modeling data. It will help us to perform biological interpretation of the simulation results and thus contribute to a better understanding of the biological system for drug identification.
Relaxation estimation of RMSD in molecular dynamics immunosimulations.
Schreiner, Wolfgang; Karch, Rudolf; Knapp, Bernhard; Ilieva, Nevena
2012-01-01
Molecular dynamics simulations have to be sufficiently long to draw reliable conclusions. However, no method exists to prove that a simulation has converged. We suggest the method of "lagged RMSD-analysis" as a tool to judge if an MD simulation has not yet run long enough. The analysis is based on RMSD values between pairs of configurations separated by variable time intervals Δt. Unless RMSD(Δt) has reached a stationary shape, the simulation has not yet converged.
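The analysis is straightforward to reproduce. A sketch, assuming the trajectory frames are already superposed (a real implementation would optimally align each pair before computing RMSD):

```python
import numpy as np

def lagged_rmsd(traj, lags):
    """traj: (n_frames, n_atoms, 3) aligned coordinates.
    Returns the mean RMSD(dt) over all frame pairs separated by each lag."""
    out = {}
    for dt in lags:
        d = traj[dt:] - traj[:-dt]                        # pairwise displacements
        rmsd = np.sqrt((d ** 2).sum(axis=2).mean(axis=1))
        out[dt] = rmsd.mean()
    return out

# Toy trajectory: a random walk, which never converges.
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(0, 0.1, (1000, 50, 3)), axis=0)
for dt, v in lagged_rmsd(traj, [1, 10, 100, 500]).items():
    print(f"dt = {dt:4d}   <RMSD> = {v:.2f}")
```

For a converged simulation RMSD(Δt) plateaus with increasing Δt; the toy random walk keeps growing, which is exactly the signature of a run that has not yet converged.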
NASA Technical Reports Server (NTRS)
Young, Sun-Woo; Carmichael, Gregory R.
1994-01-01
Tropospheric ozone production and transport in mid-latitude eastern Asia is studied. Data analysis of surface-based ozone measurements in Japan and satellite-based tropospheric column measurements of the entire western Pacific Rim are combined with results from three-dimensional model simulations to investigate the diurnal, seasonal and long-term variations of ozone in this region. Surface ozone measurements from Japan show distinct seasonal variation with a spring peak and summer minimum. Satellite studies of the entire tropospheric column of ozone show high concentrations in both the spring and summer seasons. Finally, preliminary model simulation studies show good agreement with observed values.
TB Mobile: a mobile app for anti-tuberculosis molecules with known targets
2013-01-01
Background An increasing number of researchers are focused on strategies for developing inhibitors of Mycobacterium tuberculosis (Mtb) as tuberculosis (TB) drugs. Results In order to learn from prior work we have collated information on molecules screened versus Mtb and their targets, which has been made available in the Collaborative Drug Discovery (CDD) database. This dataset contains published data on target, essentiality, links to PubMed, TBDB, TBCyc (which provides a pathway-based visualization of the entire cellular biochemical network) and human homolog information. The development of mobile cheminformatics apps could lower the barrier to drug discovery and promote collaboration. Therefore we have used this set of over 700 molecules screened versus Mtb and their targets to create a free mobile app (TB Mobile) that displays molecule structures and links to the bioinformatics data. By inputting a molecular structure and performing a similarity search within the app, users can infer potential targets, or search by target to retrieve compounds known to be active. Conclusions TB Mobile may assist researchers as part of their workflow in identifying potential targets for hits generated from phenotypic screening and in prioritizing them for further follow-up. The app is designed to lower the barriers to accessing this information, so that all researchers with an interest in combatting this deadly disease can use it freely to the benefit of their own efforts. PMID:23497706
The importance of data curation on QSAR Modeling ...
During the last few decades many QSAR models and tools have been developed at the US EPA, including the widely used EPISuite. During this period the arsenal of computational capabilities supporting cheminformatics has broadened dramatically with multiple software packages. These modern tools allow for more advanced techniques in terms of chemical structure representation and storage, as well as enabling automated data-mining and standardization approaches to examine and fix data quality issues. This presentation will investigate the impact of data curation on the reliability of QSAR models being developed within the EPA's National Center for Computational Toxicology. As part of this work we have attempted to disentangle the influence of the quality versus the quantity of data, based on the Syracuse PHYSPROP database partly used by EPISuite software. We will review our automated approaches to examining key datasets related to the EPISuite data to validate across chemical structure representations (e.g., mol file and SMILES) and identifiers (chemical names and registry numbers), and approaches to standardize data into QSAR-ready formats prior to modeling procedures. Our efforts to quantify and segregate data into quality categories have allowed us to evaluate the resulting models that can be developed from these data slices and to quantify to what extent efforts developing high-quality datasets have the expected pay-off in terms of predictive performance. The most accur
Evaluation of food-relevant chemicals in the ToxCast high-throughput screening program.
Karmaus, Agnes L; Filer, Dayne L; Martin, Matthew T; Houck, Keith A
2016-06-01
Thousands of chemicals are directly added to or come in contact with food, many of which have undergone little to no toxicological evaluation. The landscape of the food-relevant chemical universe was evaluated using cheminformatics, and subsequently the bioactivity of food-relevant chemicals across the publicly available ToxCast high-throughput screening program was assessed. In total, 8659 food-relevant chemicals were compiled, including direct food additives, food contact substances, and pesticides. Of these food-relevant chemicals, 4719 had curated structure definition files amenable to defining chemical fingerprints, which were used to cluster chemicals using a self-organizing map approach. Pesticides and direct food additives clustered apart from one another, with food contact substances generally in between, supporting that these categories not only reflect different uses but also distinct chemistries. Subsequently, 1530 food-relevant chemicals were identified in ToxCast, comprising 616 direct food additives, 371 food contact substances, and 543 pesticides. Bioactivity across ToxCast was filtered for cytotoxicity to identify selective chemical effects. Initiating analyses from a strictly chemical-based methodology or a bioactivity/cytotoxicity-driven evaluation presents unbiased approaches for prioritizing chemicals. Although bioactivity in vitro is not necessarily predictive of adverse effects in vivo, these data provide insight into chemical properties and cellular targets through which food-relevant chemicals elicit bioactivity. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
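A minimal sketch of the fingerprint-and-self-organizing-map workflow, with RDKit Morgan fingerprints and the third-party minisom package standing in for the study's fingerprinting and SOM implementations; the SMILES list is an arbitrary toy set.

```python
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from minisom import MiniSom  # pip install minisom

smiles = ["CCO", "CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1", "CCN(CC)CC"]
rows = []
for smi in smiles:
    mol = Chem.MolFromSmiles(smi)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=512)
    arr = np.zeros(512)
    DataStructs.ConvertToNumpyArray(fp, arr)
    rows.append(arr)
X = np.array(rows)

# Train a small self-organizing map and read off each molecule's grid cell.
som = MiniSom(5, 5, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, 500)
for smi, x in zip(smiles, X):
    print(smi, "->", som.winner(x))  # molecules mapping to nearby cells cluster
```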
How Diverse are the Protein-Bound Conformations of Small-Molecule Drugs and Cofactors?
NASA Astrophysics Data System (ADS)
Friedrich, Nils-Ole; Simsir, Méliné; Kirchmair, Johannes
2018-03-01
Knowledge of the bioactive conformations of small molecules, or the ability to predict them with theoretical methods, is of key importance to the design of bioactive compounds such as drugs, agrochemicals and cosmetics. Using an elaborate cheminformatics pipeline, which also evaluates the support of individual atom coordinates by the measured electron density, we compiled a complete set ("Sperrylite Dataset") of high-quality structures of protein-bound ligand conformations from the PDB. The Sperrylite Dataset consists of a total of 10,936 high-quality structures of 4548 unique ligands. Based on this dataset, we assessed the variability of the bioactive conformations of 91 small molecules, each represented by a minimum of ten structures, and found it to be largely independent of the number of rotatable bonds. Sixty-nine molecules had at least two distinct conformations (defined by an RMSD greater than 1 Å). For a representative subset of 17 approved drugs and cofactors we observed a clear trend toward the formation of a few clusters of highly similar conformers. Even for proteins that share a very low sequence identity, ligands were regularly found to adopt similar conformations. For cofactors, a clear trend toward extended conformations was measured, although in a few cases coiled conformers were also observed. The Sperrylite Dataset is available for download from http://www.zbh.uni-hamburg.de/sperrylite_dataset.
EPA DSSTox and ToxCast Project Updates: Generating New ...
EPA's National Center for Computational Toxicology is generating data and capabilities to support a new paradigm for toxicity screening and prediction. The DSSTox project is improving public access to quality structure-annotated chemical toxicity information in less summarized forms than traditionally employed in SAR modeling, and in ways that facilitate data-mining and data read-across. The DSSTox Structure-Browser provides structure searchability across the full published DSSTox toxicity-related inventory, enables linkages to and from previously isolated toxicity data resources (soon to include the public microarray resources GEO, ArrayExpress, and CEBS), and provides link-outs to cross-indexed public resources such as PubChem, ChemSpider, and ACToR. The published DSSTox inventory and bioassay information also have been integrated into PubChem, allowing a user to take full advantage of PubChem structure-activity and bioassay clustering features. Phase I of the ToxCast™ project has generated high-throughput screening (HTS) data from several hundred biochemical and cell-based assays for a set of 320 chemicals, mostly pesticide actives, with rich toxicology profiles. DSSTox and ACToR are providing the primary cheminformatics support for ToxCast™ and collaborative efforts with the National Toxicology Program's HTS Program and the NIH Chemical Genomics Center. DSSTox will also be a primary vehicle for publishing ToxCast™ ToxRef summarized bioassay data for use
Reducing EnergyPlus Run Time For Code Compliance Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.
2014-09-12
Integration of the EnergyPlus™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which impacts timely feedback of compliance results to the user. EnergyPlus annual simulations for proposed and code-baseline building models, and mechanical equipment sizing, result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation time period using 4 weeks of hourly weather data (one per quarter) to an annual simulation using the full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used to determine the validity of using a shortened simulation run period. Further sensitivity analysis and run-time comparisons were made to evaluate the robustness and run-time savings of this approach. The results of this analysis show that the shortened simulation run period provides compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.
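The arithmetic behind the shortened run period is a simple weighting: each simulated week stands in for its quarter, so the annual estimate is the sum of the four weekly results scaled by thirteen weeks per quarter. A sketch with hypothetical numbers:

```python
# Hypothetical weekly energy use (kWh) from one simulated week per quarter.
weekly_kwh = {"Q1": 2150.0, "Q2": 1820.0, "Q3": 2480.0, "Q4": 2010.0}
WEEKS_PER_QUARTER = 13  # 52 weeks / 4 quarters

annual_estimate = sum(v * WEEKS_PER_QUARTER for v in weekly_kwh.values())
annual_full_run = 110_500.0  # hypothetical full 52-week simulation result

error = abs(annual_estimate - annual_full_run) / annual_full_run
print(f"estimate: {annual_estimate:.0f} kWh, error vs annual run: {error:.1%}")
```

With these made-up numbers the estimate lands within half a percent of the full run, mirroring the sub-1% compliance-index agreement the paper reports.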
The utility of the AusEd driving simulator in the clinical assessment of driver fatigue.
Desai, Anup V; Wilsmore, Brad; Bartlett, Delwyn J; Unger, Gunnar; Constable, Ben; Joffe, David; Grunstein, Ronald R
2007-08-01
Several driving simulators have been developed, ranging in complexity from PC-based driving tasks to advanced "real world" simulators. The AusEd driving simulator is a PC-based task designed to be conducive to, and to test for, driver fatigue. This paper describes the AusEd driving simulator in detail, including the technical requirements, hardware, screen and file outputs, and analysis software. Some aspects of the test are standardized, while others can be modified to suit the experimental situation. The AusEd driving simulator is sensitive to performance decrement from driver fatigue in the laboratory setting, potentially making it useful as a laboratory- or office-based test for driver fatigue risk management. However, more research is still needed to correlate laboratory-based simulator performance with real-world driving performance and outcomes.
Deep learning for computational chemistry.
Goh, Garrett B; Hodas, Nathan O; Vishnu, Abhinav
2017-06-15
The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on multilayer neural networks. Within the last few years, we have seen the transformative impact of deep learning in many domains, particularly in speech recognition and computer vision, to the extent that the majority of expert practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview of the theory of deep neural networks and their unique properties that distinguish them from traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight their ubiquity and broad applicability to a wide range of challenges in the field, including quantitative structure-activity relationships, virtual screening, protein structure prediction, quantum chemistry, materials design, and property prediction. In reviewing the performance of deep neural networks, we observed consistent outperformance of non-neural-network state-of-the-art models across disparate research topics, and deep neural network-based models often exceeded the "glass ceiling" expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a valuable tool for computational chemistry. © 2017 Wiley Periodicals, Inc.
Deep learning for computational chemistry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goh, Garrett B.; Hodas, Nathan O.; Vishnu, Abhinav
The rise and fall of artificial neural networks is well documented in the scientific literature of both the fields of computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on "deep" neural networks. Within the last few years, we have seen the transformative impact of deep learning in the computer science domain, notably in speech recognition and computer vision, to the extent that the majority of practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview of the theory of deep neural networks and their unique properties as compared to traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight their ubiquity and broad applicability to a wide range of challenges in the field, including QSAR, virtual screening, protein structure modeling, QM calculations, materials synthesis and property prediction. In reviewing the performance of deep neural networks, we observed consistent outperformance of non-neural-network state-of-the-art models across disparate research topics, and deep neural network-based models often exceeded the "glass ceiling" expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a useful tool and may grow into a pivotal role for various challenges in the computational chemistry field.
2013-08-01
earplug and earmuff showing HPD simulator elements for energy flow paths...unprotected or protected ear traditionally start with analysis of energy flow through schematic diagrams based on electroacoustic (EA) analogies between...Schröter, 1983; Schröter and Pösselt, 1986; Shaw and Thiessen, 1958, 1962; Zwislocki, 1957). The analysis method tracks energy flow through fluid and
2004-01-01
Cognitive Task Analysis Abstract As Department of Defense (DoD) leaders rely more on modeling and simulation to provide information on which to base...capabilities and intent. Cognitive Task Analysis (CTA) Cognitive Task Analysis (CTA) is an extensive/detailed look at tasks and subtasks performed by a...Domain Analysis and Task Analysis: A Difference That Matters. In Cognitive Task Analysis , edited by J. M. Schraagen, S.
Carvalho, Henrique F; Barbosa, Arménio J M; Roque, Ana C A; Iranzo, Olga; Branco, Ricardo J F
2017-01-01
Recent advances in de novo protein design have gained considerable insight from the intrinsic dynamics of proteins, based on the integration of molecular dynamics simulation protocols into the state-of-the-art de novo protein design protocols used nowadays. With this protocol we illustrate how to set up and run a molecular dynamics simulation followed by a functional protein dynamics analysis. New users will be introduced to some useful open-source computational tools, including the GROMACS molecular dynamics simulation software package and ProDy for protein structural dynamics analysis.
Wang, Rongmei; Shi, Nianke; Bai, Jinbing; Zheng, Yaguang; Zhao, Yue
2015-07-09
The present study was designed to implement an interprofessional simulation-based education program for nursing students and evaluate the influence of this program on nursing students' attitudes toward interprofessional education and knowledge about operating room nursing. Nursing students were randomly assigned to either the interprofessional simulation-based education or traditional course group. A before-and-after study of nursing students' attitudes toward the program was conducted using the Readiness for Interprofessional Learning Scale. Responses to an open-ended question were categorized using thematic content analysis. Nursing students' knowledge about operating room nursing was measured. Nursing students from the interprofessional simulation-based education group showed statistically different responses to four of the nineteen questions in the Readiness for Interprofessional Learning Scale, reflecting a more positive attitude toward interprofessional learning. This was also supported by thematic content analysis of the open-ended responses. Furthermore, nursing students in the simulation-based education group had a significant improvement in knowledge about operating room nursing. The integrated course with interprofessional education and simulation provided a positive impact on undergraduate nursing students' perceptions toward interprofessional learning and knowledge about operating room nursing. Our study demonstrated that this course may be a valuable elective option for undergraduate nursing students in operating room nursing education.
Improving the Aircraft Design Process Using Web-Based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)
2000-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
Improving the Aircraft Design Process Using Web-based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.
2003-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
THE VALUE OF NUDGING IN THE METEOROLOGY MODEL FOR RETROSPECTIVE CMAQ SIMULATIONS
Using a nudging-based data assimilation approach throughout a meteorology simulation (i.e., as a "dynamic analysis") is considered valuable because it can provide a better overall representation of the meteorology than a pure forecast. Dynamic analysis is often used in...
Ishikawa, Sohta A; Inagaki, Yuji; Hashimoto, Tetsuo
2012-01-01
In phylogenetic analyses of nucleotide sequences, 'homogeneous' substitution models, which assume the stationarity of base composition across a tree, are widely used, although individual sequences may bear distinctive base frequencies. In the worst-case scenario, a homogeneous model-based analysis can yield an artifactual union of two distantly related sequences that achieved similar base frequencies in parallel. Such potential difficulty can be countered by two approaches, 'RY-coding' and 'non-homogeneous' models. The former approach converts the four bases into purine and pyrimidine to normalize base frequencies across a tree, while the heterogeneity in base frequency is explicitly incorporated in the latter approach. The two approaches have been applied to real-world sequence data; however, their basic properties have not been fully examined by pioneering simulation studies. Here, we assessed the performances of maximum-likelihood analyses incorporating RY-coding and a non-homogeneous model (RY-coding and non-homogeneous analyses) on simulated data with parallel convergence to similar base composition. Both RY-coding and non-homogeneous analyses showed superior performance compared with homogeneous model-based analyses. Curiously, the performance of the RY-coding analysis appeared to be significantly affected by the setting of the substitution process for sequence simulation relative to that of the non-homogeneous analysis. The performance of the non-homogeneous analysis was also validated by analyzing a real-world sequence data set with significant base heterogeneity.
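RY-coding itself is a one-line recode, as the sketch below shows: purines map to R, pyrimidines to Y, and everything else (gaps, ambiguity codes) passes through.

```python
# Purines (A, G) -> R; pyrimidines (C, T/U) -> Y.
RY_MAP = {"A": "R", "G": "R", "C": "Y", "T": "Y", "U": "Y"}

def ry_encode(seq):
    """Recode a nucleotide sequence into purine/pyrimidine symbols,
    normalizing base composition across taxa."""
    return "".join(RY_MAP.get(base, base) for base in seq.upper())

print(ry_encode("ATGCCGTA-N"))  # -> RYRYYRYR-N
```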
Analysis of Factors Influencing Hydration Site Prediction Based on Molecular Dynamics Simulations
2015-01-01
Water contributes significantly to the binding of small molecules to proteins in biochemical systems. Molecular dynamics (MD) simulation based programs such as WaterMap and WATsite have been used to probe the locations and thermodynamic properties of hydration sites at the surface or in the binding site of proteins generating important information for structure-based drug design. However, questions associated with the influence of the simulation protocol on hydration site analysis remain. In this study, we use WATsite to investigate the influence of factors such as simulation length and variations in initial protein conformations on hydration site prediction. We find that 4 ns MD simulation is appropriate to obtain a reliable prediction of the locations and thermodynamic properties of hydration sites. In addition, hydration site prediction can be largely affected by the initial protein conformations used for MD simulations. Here, we provide a first quantification of this effect and further indicate that similar conformations of binding site residues (RMSD < 0.5 Å) are required to obtain consistent hydration site predictions. PMID:25252619
Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations
NASA Technical Reports Server (NTRS)
Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.
2017-01-01
A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
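The complex-variable approach mentioned above is the standard complex-step derivative: perturb the input along the imaginary axis and read the derivative from the imaginary part of the output, avoiding the subtractive cancellation of finite differences. A minimal scalar sketch:

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """df/dx = Im(f(x + i*h)) / h + O(h^2); h can be tiny because
    no difference of nearly equal numbers is ever taken."""
    return np.imag(f(x + 1j * h)) / h

f = lambda x: np.exp(x) * np.sin(x)  # any analytic function
x0 = 1.3
exact = np.exp(x0) * (np.sin(x0) + np.cos(x0))
print(complex_step_derivative(f, x0), exact)  # agree to machine precision
```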
Incorporating quality and safety education for nurses competencies in simulation scenario design.
Jarzemsky, Paula; McCarthy, Jane; Ellis, Nadege
2010-01-01
When planning a simulation scenario, even if adopting prepackaged simulation scenarios, faculty should first conduct a task analysis to guide development of learning objectives and cue critical events. The authors describe a strategy for systematic planning of simulation-based training that incorporates knowledge, skills, and attitudes as defined by the Quality and Safety Education for Nurses (QSEN) initiative. The strategy cues faculty to incorporate activities that target QSEN competencies (patient-centered care, teamwork and collaboration, evidence-based practice, quality improvement, informatics, and safety) before, during, and after simulation scenarios.
2009-06-01
simulation is the campaign-level Peace Support Operations Model (PSOM). This thesis provides a quantitative analysis of PSOM. The results are based ...multiple potential outcomes , further development and analysis is required before the model is used for large scale analysis . 15. NUMBER OF PAGES 159...multiple potential outcomes , further development and analysis is required before the model is used for large scale analysis . vi THIS PAGE
Next Generation Simulation Framework for Robotic and Human Space Missions
NASA Technical Reports Server (NTRS)
Cameron, Jonathan M.; Balaram, J.; Jain, Abhinandan; Kuo, Calvin; Lim, Christopher; Myint, Steven
2012-01-01
The Dartslab team at NASA's Jet Propulsion Laboratory (JPL) has a long history of developing physics-based simulations based on the Darts/Dshell simulation framework that have been used to simulate many planetary robotic missions, such as the Cassini spacecraft and the rovers that are currently driving on Mars. Recent collaboration efforts between the Dartslab team at JPL and the Mission Operations Directorate (MOD) at NASA Johnson Space Center (JSC) have led to significant enhancements to the Dartslab DSENDS (Dynamics Simulator for Entry, Descent and Surface landing) software framework. The new version of DSENDS is now being used for new planetary mission simulations at JPL. JSC is using DSENDS as the foundation for a suite of software known as COMPASS (Core Operations, Mission Planning, and Analysis Spacecraft Simulation) that is the basis for their new human space mission simulations and analysis. In this paper, we will describe the collaborative process with the JPL Dartslab and the JSC MOD team that resulted in the redesign and enhancement of the DSENDS software. We will outline the improvements in DSENDS that simplify creation of new high-fidelity robotic/spacecraft simulations. We will illustrate how DSENDS simulations are assembled and show results from several mission simulations.
Color visual simulation applications at the Defense Mapping Agency
NASA Astrophysics Data System (ADS)
Simley, J. D.
1984-09-01
The Defense Mapping Agency (DMA) produces the Digital Landmass System data base to provide culture and terrain data in support of numerous aircraft simulators. In order to conduct data base and simulation quality control and requirements analysis, DMA has developed the Sensor Image Simulator which can rapidly generate visual and radar static scene digital simulations. The use of color in visual simulation allows the clear portrayal of both landcover and terrain data, whereas the initial black and white capabilities were restricted in this role and thus found limited use. Color visual simulation has many uses in analysis to help determine the applicability of current and prototype data structures to better meet user requirements. Color visual simulation is also significant in quality control since anomalies can be more easily detected in natural appearing forms of the data. The realism and efficiency possible with advanced processing and display technology, along with accurate data, make color visual simulation a highly effective medium in the presentation of geographic information. As a result, digital visual simulation is finding increased potential as a special purpose cartographic product. These applications are discussed and related simulation examples are presented.
Development and validation of the simulation-based learning evaluation scale.
Hung, Chang-Chiao; Liu, Hsiu-Chen; Lin, Chun-Chih; Lee, Bih-O
2016-05-01
The instruments that evaluate students' perceptions of simulation-based training are English versions and have not been tested for reliability or validity. The aim of this study was to develop and validate a Chinese-version Simulation-Based Learning Evaluation Scale (SBLES). Four stages were conducted to develop and validate the SBLES. First, specific desired competencies were identified according to the National League for Nursing and Taiwan Nursing Accreditation Council core competencies. Next, an initial item pool of 50 items related to simulation was drawn from the literature on core competencies. Content validity was established by use of an expert panel. Finally, exploratory factor analysis and confirmatory factor analysis were conducted for construct validity, and Cronbach's coefficient alpha determined the scale's internal consistency reliability. Two hundred and fifty students who had experienced simulation-based learning were invited to participate in this study; two hundred and twenty-five completed and returned questionnaires (response rate = 90%). Six items were deleted from the initial item pool and one was added after the expert panel review. Exploratory factor analysis with varimax rotation revealed 37 items remaining in five factors, which accounted for 67% of the variance. The construct validity of the SBLES was substantiated by a confirmatory factor analysis that revealed a good fit of the hypothesized factor structure, and the findings meet the criteria of convergent and discriminant validity. Items were rated on a 5-point scale from 1 (strongly disagree) to 5 (strongly agree). The internal consistency of the five subscales ranged from .90 to .93. The results of this study indicate that the SBLES is valid and reliable. The authors recommend that the scale be applied in nursing schools to evaluate the effectiveness of simulation-based learning curricula. Copyright © 2016 Elsevier Ltd. All rights reserved.
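The internal-consistency figures quoted above come from Cronbach's coefficient alpha, which is easy to compute from an item-score matrix. A sketch with toy 5-point responses:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy responses (rows: students, columns: scale items on a 1-5 scale).
scores = np.array([[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2], [4, 4, 5]])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```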
POLICY ISSUES ASSOCIATED WITH USING SIMULATION TO ASSESS ENVIRONMENTAL IMPACTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uchitel, Kirsten; Tanana, Heather
This report examines the relationship between simulation-based science and judicial assessments of simulations or models supporting evaluations of environmental harms or risks, considering both how it exists currently and how it might be shaped in the future. This report considers the legal standards relevant to judicial assessments of simulation-based science and provides examples of the judicial application of those legal standards. Next, this report discusses the factors that inform whether there is a correlation between the sophistication of a challenged simulation and judicial support for that simulation. Finally, this report examines legal analysis of the broader issues that must be addressed for simulation-based science to be better understood and utilized in the context of judicial challenge and evaluation.
Using a virtual reality temporal bone simulator to assess otolaryngology trainees.
Zirkle, Molly; Roberson, David W; Leuwer, Rudolf; Dubrowski, Adam
2007-02-01
The objective of this study is to determine the feasibility of computerized evaluation of resident performance using hand motion analysis on a virtual reality temporal bone (VR TB) simulator. We hypothesized that both computerized analysis and expert ratings would discriminate the performance of novices from experienced trainees. We also hypothesized that performance on the virtual reality temporal bone simulator (VR TB) would differentiate based on previous drilling experience. The authors conducted a randomized, blind assessment study. Nineteen volunteers from the Otolaryngology-Head and Neck Surgery training program at the University of Toronto drilled both a cadaveric TB and a simulated VR TB. Expert reviewers were asked to assess operative readiness of the trainee based on a blind video review of their performance. Computerized hand motion analysis of each participant's performance was conducted. Expert raters were able to discriminate novices from experienced trainees (P < .05) on cadaveric temporal bones, and there was a trend toward discrimination on VR TB performance. Hand motion analysis showed that experienced trainees had better movement economy than novices (P < .05) on the VR TB. Performance, as measured by hand motion analysis on the VR TB simulator, reflects trainees' previous drilling experience. This study suggests that otolaryngology trainees could accomplish initial temporal bone training on a VR TB simulator, which can provide feedback to the trainee, and may reduce the need for constant faculty supervision and evaluation.
Analysis and numerical simulation research of the heating process in the oven
NASA Astrophysics Data System (ADS)
Chen, Yawei; Lei, Dingyou
2016-10-01
How to use an oven to bake food well is a central concern for both oven designers and users. To this end, this paper analyzes the heat distribution in the oven based on its basic operating principles and simulates the temperature distribution over the rack cross-section. A differential equation model of the temperature evolution in the pan during oven operation is constructed from heat radiation and heat conduction; following the idea of using a cellular automaton to simulate the heat transfer process, ANSYS software is used to perform numerical simulation analysis of rectangular, round-cornered rectangular, elliptical and circular pans, yielding the instantaneous temperature distribution for each pan shape. The temperature distributions of the rectangular and circular pans show that food overcooks easily at the corners and edges of a rectangular pan but not of a round pan.
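A minimal sketch of the cellular-automaton-style heat-transfer idea: an explicit finite-difference update on a toy rectangular grid whose edges are clamped at oven temperature. The grid size, diffusion coefficient and temperatures are illustrative assumptions, not values from the paper.

```python
import numpy as np

def step(T, alpha=0.2, T_oven=200.0):
    """One local update: each interior cell moves toward the average of
    its four neighbors; boundary cells are held at oven temperature."""
    T_new = T.copy()
    T_new[1:-1, 1:-1] = T[1:-1, 1:-1] + alpha * (
        T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
        - 4.0 * T[1:-1, 1:-1]
    )
    T_new[0, :] = T_new[-1, :] = T_new[:, 0] = T_new[:, -1] = T_oven
    return T_new

T = np.full((40, 60), 25.0)  # rectangular pan, initially at room temperature
for _ in range(500):
    T = step(T)
print(T[1, 1], T[20, 30])    # near-corner cell vs. center cell
```

The near-corner cell borders two hot edges and so heats fastest, reproducing the overcooked-corners effect reported for rectangular pans.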
Parallel computing in enterprise modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.
2008-08-01
This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class, where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.
Testability analysis on a hydraulic system in a certain equipment based on simulation model
NASA Astrophysics Data System (ADS)
Zhang, Rui; Cong, Hua; Liu, Yuanhong; Feng, Fuzhou
2018-03-01
To address the structural complexity of hydraulic systems and the shortage of fault statistics, a multi-value testability analysis method based on a simulation model is proposed. Using an AMESim simulation model, the method injects simulated faults and records the variation of test parameters, such as pressure and flow rate, at each test point relative to normal conditions. A multi-value fault-test dependency matrix is thus established, from which the fault detection rate (FDR) and fault isolation rate (FIR) are calculated. The system's testability and fault diagnosis capability are then analyzed and evaluated; they reach only 54% (FDR) and 23% (FIR). To improve the testability of the system, the number and position of the test points are optimized. Results show the proposed test placement scheme can address the difficulty, inefficiency and high cost of system maintenance.
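Both figures of merit fall directly out of the fault-test dependency matrix: a fault is detectable if any test responds to it, and isolable if its test signature is unique. A sketch with a toy binary matrix (the paper's matrix is multi-valued, and FIR definitions vary; here it is taken over all faults):

```python
import numpy as np

# Rows: faults; columns: test points. 1 means the test responds to the fault.
D = np.array([
    [1, 0, 0],
    [1, 0, 0],   # same signature as fault 0: detectable but not isolable
    [0, 1, 0],
    [0, 0, 0],   # no test responds: undetectable
])

detected = D.any(axis=1)
FDR = detected.mean()

signatures = [tuple(row) for row in D]
isolated = [d and signatures.count(s) == 1
            for d, s in zip(detected, signatures)]
FIR = np.mean(isolated)

print(f"FDR = {FDR:.0%}, FIR = {FIR:.0%}")  # 75% and 25% for this toy matrix
```

Adding or moving test points changes the columns of D, which is exactly the optimization the paper performs to raise its 54%/23% baseline.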
AGR-1 Thermocouple Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeff Einerson
2012-05-01
This report documents an effort to analyze measured and simulated data obtained in the Advanced Gas Reactor (AGR) fuel irradiation test program conducted in the INL's Advanced Test Reactor (ATR) to support the Next Generation Nuclear Plant (NGNP) R&D program. The work follows up on a previous study (Pham and Einerson, 2010), in which statistical analysis methods were applied for AGR-1 thermocouple data qualification. The present work exercises the idea that, while recognizing uncertainties inherent in physics and thermal simulations of the AGR-1 test, results of the numerical simulations can be used in combination with the statistical analysis methods to further improve qualification of measured data. Additionally, the combined analysis of measured and simulation data can generate insights about simulation model uncertainty that can be useful for model improvement. This report also describes an experimental control procedure to maintain fuel target temperature in the future AGR tests using regression relationships that include simulation results. The report is organized into four chapters. Chapter 1 introduces the AGR Fuel Development and Qualification program, the AGR-1 test configuration and test procedure, an overview of AGR-1 measured data, and an overview of the physics and thermal simulation, including modeling assumptions and uncertainties. A brief summary of statistical analysis methods developed in (Pham and Einerson 2010) for AGR-1 measured data qualification within the NGNP Data Management and Analysis System (NDMAS) is also included for completeness. Chapters 2-3 describe and discuss cases in which the combined use of experimental and simulation data is realized. A set of issues associated with measurement and modeling uncertainties resulting from the combined analysis is identified. This includes demonstration that such a combined analysis led to important insights for reducing uncertainty in presentation of AGR-1 measured data (Chapter 2) and interpretation of simulation results (Chapter 3). The statistics-based, simulation-aided experimental control procedure for the future AGR tests is developed and demonstrated in Chapter 4. The procedure for controlling the target fuel temperature (capsule peak or average) is based on regression functions of thermocouple readings and other relevant parameters, accounting for possible changes in both physical and thermal conditions and in instrument performance.
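The Chapter 4 control procedure rests on regression functions of thermocouple readings and other relevant parameters. A heavily simplified sketch, with entirely hypothetical readings and predictors, of how such a relationship could be fit and used to check a proposed control setting:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical history: thermocouple reading (C), burnup index, and a
# gas-mixture fraction as predictors of simulated peak fuel temperature (C).
X = np.array([[980, 3, 0.20], [1000, 5, 0.35], [1020, 8, 0.50],
              [1040, 11, 0.65], [1060, 15, 0.80]])
y = np.array([1095.0, 1110.0, 1128.0, 1146.0, 1160.0])

model = LinearRegression().fit(X, y)

# Check a proposed setting against the temperature target; in practice the
# controllable input (gas fraction) would be adjusted until the prediction
# matches the target.
proposed = np.array([[1050, 12, 0.70]])
print(f"predicted peak fuel temperature: {model.predict(proposed)[0]:.0f} C")
```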
Analysis, Simulation, and Verification of Knowledge-Based, Rule-Based, and Expert Systems
NASA Technical Reports Server (NTRS)
Hinchey, Mike; Rash, James; Erickson, John; Gracanin, Denis; Rouff, Chris
2010-01-01
Mathematically sound techniques are used to view a knowledge-based system (KBS) as a set of processes executing in parallel and being enabled in response to specific rules being fired. The set of processes can be manipulated, examined, analyzed, and used in a simulation. The tool that embodies this technology may warn developers of errors in their rules, but may also highlight rules (or sets of rules) in the system that are underspecified (or overspecified) and need to be corrected for the KBS to operate as intended. The rules embodied in a KBS specify the allowed situations, events, and/or results of the system they describe. In that sense, they provide a very abstract specification of a system. The system is implemented through the combination of the system specification together with an appropriate inference engine, independent of the algorithm used in that inference engine. Viewing the rule base as a major component of the specification, and choosing an appropriate specification notation to represent it, reveals how additional power can be derived from an approach to the knowledge-base system that involves analysis, simulation, and verification. This innovative approach requires no special knowledge of the rules, and allows a general approach where standardized analysis, verification, simulation, and model checking techniques can be applied to the KBS.
NASA Technical Reports Server (NTRS)
Kubat, Gregory
2016-01-01
This report addresses a deliverable to the UAS-in-the-NAS project for recommendations for integration of CNPC and ATC communications based on analysis results from modeled radio system and NAS-wide UA communication architecture simulations. For each recommendation, a brief explanation of the rationale for its consideration is provided with any supporting results obtained or observed in our simulation activity.
Research and analysis of head-directed area-of-interest visual system concepts
NASA Technical Reports Server (NTRS)
Sinacori, J. B.
1983-01-01
An analysis and survey, with conjecture supporting a preliminary data base design, is presented. The data base is intended for use in a Computer Image Generator visual subsystem for a rotorcraft flight simulator used for rotorcraft systems development, not training. The approach taken was to identify the visual perception strategies used during terrain flight, survey environmental and image generation factors, and meld these into a preliminary data base design. This design is directed at data base developers and is intended to stimulate and aid their efforts to evolve a data base that will support simulation of terrain flight operations.
Zhang, Shunqi; Yin, Tao; Ma, Ren; Liu, Zhipeng
2015-08-01
Functional imaging of biological electrical characteristics based on the magneto-acoustic effect provides valuable information about tissue for early tumor diagnosis; analysis of the time and frequency characteristics of the magneto-acoustic signal is therefore important for image reconstruction. This paper proposes a wave-summing method based on the Green function solution for the acoustic source of the magneto-acoustic effect. Simulations and analysis of the time and frequency characteristics of the magneto-acoustic signal were carried out under a quasi-1D transmission condition for models of different thickness. Simulation results for the magneto-acoustic signal were verified through experiments. The simulations with different thicknesses showed that the time-frequency characteristics of the magneto-acoustic signal reflect the thickness of the sample: a thin sample (less than one wavelength of the pulse) and a thick sample (larger than one wavelength) produce different summed waveforms and frequency characteristics, owing to the difference in summing thickness. Experimental results verified the theoretical analysis and simulation results. This research lays a foundation for acoustic source and conductivity reconstruction in media of different thickness in magneto-acoustic imaging.
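To make the wave-summing idea concrete, here is a minimal quasi-1D sketch: contributions from acoustic sources distributed across the sample thickness are superposed with their propagation delays, so that samples thinner or thicker than the pulse wavelength produce different summed waveforms. All numbers (sound speed, pulse frequency, thicknesses) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def summed_waveform(thickness, c=1500.0, f0=1e6, fs=50e6, n_layers=200):
    """Quasi-1D wave-summing sketch: superpose time-delayed source pulses
    emitted from n_layers depths across the sample thickness.
    (Illustrative only; the paper's Green-function solution is more detailed.)"""
    t = np.arange(0.0, 8e-6, 1.0 / fs)            # time axis [s]
    depths = np.linspace(0.0, thickness, n_layers)
    sig = np.zeros_like(t)
    for z in depths:
        tau = z / c                                # propagation delay to receiver
        arg = f0 * (t - tau)
        # one-cycle tone-burst source pulse, delayed by tau
        sig += np.sin(2 * np.pi * arg) * ((arg >= 0) & (arg <= 1)) / n_layers
    return t, sig

# thin sample (< one wavelength, lambda = c/f0 = 1.5 mm) vs thick sample (> lambda)
t, thin = summed_waveform(0.5e-3)
t, thick = summed_waveform(5e-3)
print(thin.max(), thick.max())   # thin sample sums nearly in phase -> larger peak
```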
General specifications for the development of a PC-based simulator of the NASA RECON system
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros
1984-01-01
The general specifications for the design and implementation of an IBM PC/XT-based simulator of the NASA RECON system, including record designs, file structure designs, command language analysis, program design issues, error recovery considerations, and usage monitoring facilities are discussed. Once implemented, such a simulator will be utilized to evaluate the effectiveness of simulated information system access in addition to actual system usage as part of the total educational programs being developed within the NASA contract.
HYDROLOGIC MODEL CALIBRATION AND UNCERTAINTY IN SCENARIO ANALYSIS
A systematic analysis of model performance during simulations based on observed land-cover/use change is used to quantify error associated with water-yield simulations for a series of known landscape conditions over a 24-year period with the goal of evaluatin...
Discrete Event Simulation of a Suppression of Enemy Air Defenses (SEAD) Mission
2008-03-01
...component-based DES developed in Java® using the Simkit simulation package. Analysis of ship self air defense system selection (Turan, 1999) is another... ...Institute of Technology, Wright-Patterson AFB OH, March 2003 (ADA445279). Turan, Bulent. A Comparative Analysis of Ship Self Air Defense (SSAD) Systems...
2016-06-01
...characteristics, experimental design techniques, and analysis methodologies that distinguish each phase of the MBSE MEASA. To ensure consistency... ...methodology. Experimental design selection, simulation analysis, and trade space analysis support the final two stages. Figure 27 segments the MBSE MEASA... ...rounding has the potential to increase the correlation between columns of the experimental design matrix. The design methodology presented in Vieira...
FEAMAC/CARES Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix Composites
NASA Technical Reports Server (NTRS)
Nemeth, Noel; Bednarcyk, Brett; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu; Bhatt, Ramakrishna
2016-01-01
Reported here is a coupling of two NASA-developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC (Micromechanics Analysis Code/Generalized Method of Cells) composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user-defined material). Here we describe the FEAMAC/CARES code and show an example problem, taken from the open literature, of a laminated CMC under off-axis loading. FEAMAC/CARES performs stochastic-strength-based damage simulation of the response of a CMC under multiaxial loading using elastic stiffness reduction of the failed elements.
Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix Composite
NASA Technical Reports Server (NTRS)
Nemeth, Noel; Bednarcyk, Brett; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu
2015-01-01
Reported here is a coupling of two NASA-developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC (Micromechanics Analysis Code/Generalized Method of Cells) composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user-defined material). Here we describe the FEAMAC/CARES code and show an example problem, taken from the open literature, of a laminated CMC under off-axis loading. FEAMAC/CARES performs stochastic-strength-based damage simulation of the response of a CMC under multiaxial loading using elastic stiffness reduction of the failed elements.
Reduced order model based on principal component analysis for process simulation and optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lang, Y.; Malacina, A.; Biegler, L.
2009-01-01
It is well-known that distributed parameter computational fluid dynamics (CFD) models provide more accurate results than conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. By considering process equipment items with complicated geometries and detailed thermodynamic property models, this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within a well-sampled input domain and that the CPU time is significantly reduced. Typically, it takes at most several CPU seconds to evaluate the ROM compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, are described and demonstrate the benefits of using our proposed ROM methodology for process simulation and optimization.
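As an illustration of the two-step idea (PCA compression of snapshots, then a response surface in the reduced coordinates), the following self-contained sketch uses a synthetic field in place of CFD snapshots; the sampling, basis size, and quadratic regression are all assumptions for demonstration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data standing in for CFD runs sampled over the input domain
n_samples, n_grid = 60, 500
X = rng.uniform(0.0, 1.0, size=(n_samples, 2))            # 2 operating inputs per run
grid = np.linspace(0.0, 1.0, n_grid)
# each "CFD" snapshot: a smooth field depending on the inputs
Y = np.array([np.sin(2*np.pi*(grid - x0)) * np.exp(-x1*grid) for x0, x1 in X])

# Step 1: PCA of the snapshot matrix (rows = runs)
Y_mean = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - Y_mean, full_matrices=False)
k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.999) + 1  # modes for 99.9% energy
modes = Vt[:k]                                             # principal components
coeffs = (Y - Y_mean) @ modes.T                            # per-run modal coefficients

# Step 2: regress modal coefficients on the inputs (quadratic response surface)
def features(X):
    x0, x1 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x0), x0, x1, x0*x1, x0**2, x1**2])

beta, *_ = np.linalg.lstsq(features(X), coeffs, rcond=None)

def rom_predict(x_new):
    """Fast surrogate: inputs -> modal coefficients -> reconstructed field."""
    return Y_mean + features(np.atleast_2d(x_new)) @ beta @ modes

print(rom_predict([0.3, 0.7]).shape)   # (1, 500): one reconstructed snapshot
```

Evaluating the surrogate is a handful of matrix products, which mirrors the seconds-versus-hours speedup the abstract reports.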
Virtual reality based surgery simulation for endoscopic gynaecology.
Székely, G; Bajka, M; Brechbühler, C; Dual, J; Enzler, R; Haller, U; Hug, J; Hutter, R; Ironmonger, N; Kauer, M; Meier, V; Niederer, P; Rhomberg, A; Schmid, P; Schweitzer, G; Thaler, M; Vuskovic, V; Tröster, G
1999-01-01
Virtual reality (VR) based surgical simulator systems offer very elegant possibilities to both enrich and enhance traditional education in endoscopic surgery. However, while a wide range of VR simulator systems have been proposed and realized in the past few years, most of these systems are far from able to provide a reasonably realistic surgical environment. We explore the basic approaches to the current limits of realism and ultimately seek to extend these based on our description and analysis of the most important components of a VR-based endoscopic simulator. The feasibility of the proposed techniques is demonstrated on a first modular prototype system implementing the basic algorithms for VR-training in gynaecologic laparoscopy.
Performance Analysis of an Actor-Based Distributed Simulation
NASA Technical Reports Server (NTRS)
Schoeffler, James D.
1998-01-01
Object-oriented design of simulation programs appears to be very attractive because of the natural association of components in the simulated system with objects. There is great potential in distributing the simulation across several computers for the purpose of parallel computation and its consequent handling of larger problems in less elapsed time. One approach to such a design is to use "actors", that is, active objects with their own thread of control. Because these objects execute concurrently, communication is via messages. This is in contrast to an object-oriented design using passive objects where communication between objects is via method calls (direct calls when they are in the same address space and remote procedure calls when they are in different address spaces or different machines). This paper describes a performance analysis program for the evaluation of a design for distributed simulations based upon actors.
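The distinction between actors and passive objects can be illustrated with a minimal sketch: each actor owns a thread of control and a mailbox, and peers interact only by enqueuing messages, never by direct method calls. This is a generic illustration of the actor style, not the paper's analysis program.

```python
import queue
import threading

class Actor(threading.Thread):
    """Active object with its own thread of control and a mailbox.
    Peers interact only via asynchronous messages."""
    def __init__(self, name, handler):
        super().__init__(daemon=True)
        self.name, self.handler, self.mailbox = name, handler, queue.Queue()

    def send(self, msg):
        self.mailbox.put(msg)                 # non-blocking message passing

    def run(self):
        while (msg := self.mailbox.get()) is not None:   # None = shutdown
            self.handler(self, msg)

# Two simulated components exchanging events (hypothetical example)
logger = Actor("logger", lambda self, msg: print(f"[{self.name}] {msg}"))
sensor = Actor("sensor", lambda self, msg: logger.send(f"sensor saw {msg}"))

for a in (logger, sensor):
    a.start()
for event in ("t=1: temp=20", "t=2: temp=21"):
    sensor.send(event)
for a in (sensor, logger):                    # drain sensor first, then logger
    a.send(None)
    a.join()
```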
Engineering uses of physics-based ground motion simulations
Baker, Jack W.; Luco, Nicolas; Abrahamson, Norman A.; Graves, Robert W.; Maechling, Phillip J.; Olsen, Kim B.
2014-01-01
This paper summarizes validation methodologies focused on enabling ground motion simulations to be used with confidence in engineering applications such as seismic hazard analysis and dynamic analysis of structural and geotechnical systems. Numerical simulation of ground motion from large earthquakes, utilizing physics-based models of earthquake rupture and wave propagation, is an area of active research in the earth science community. Refinement and validation of these models require collaboration between earthquake scientists and engineering users, along with testing and rating methodologies, so that simulated ground motions can be used with confidence in engineering applications. This paper provides an introduction to this field and an overview of current research activities being coordinated by the Southern California Earthquake Center (SCEC). These activities are related both to advancing the science and computational infrastructure needed to produce ground motion simulations and to engineering validation procedures. Current research areas and anticipated future achievements are also discussed.
Development of background-free tame fluorescent probes for intracellular live cell imaging
Alamudi, Samira Husen; Satapathy, Rudrakanta; Kim, Jihyo; Su, Dongdong; Ren, Haiyan; Das, Rajkumar; Hu, Lingna; Alvarado-Martínez, Enrique; Lee, Jung Yeol; Hoppmann, Christian; Peña-Cabrera, Eduardo; Ha, Hyung-Ho; Park, Hee-Sung; Wang, Lei; Chang, Young-Tae
2016-01-01
Fluorescence labelling of an intracellular biomolecule in native living cells is a powerful strategy to achieve in-depth understanding of the biomolecule's roles and functions. Besides being nontoxic and specific, desirable labelling probes should be highly cell permeable without nonspecific interactions with other cellular components to warrant high signal-to-noise ratio. While it is critical, rational design for such probes is tricky. Here we report the first predictive model for cell permeable background-free probe development through optimized lipophilicity, water solubility and charged van der Waals surface area. The model was developed by utilizing high-throughput screening in combination with cheminformatics. We demonstrate its reliability by developing CO-1 and AzG-1, a cyclooctyne- and azide-containing BODIPY probe, respectively, which specifically label intracellular target organelles and engineered proteins with minimum background. The results provide an efficient strategy for development of background-free probes, referred to as ‘tame' probes, and novel tools for live cell intracellular imaging. PMID:27321135
ChEMBL web services: streamlining access to drug discovery data and utilities.
Davies, Mark; Nowotka, Michał; Papadatos, George; Dedman, Nathan; Gaulton, Anna; Atkinson, Francis; Bellis, Louisa; Overington, John P
2015-07-01
ChEMBL is now a well-established resource in the fields of drug discovery and medicinal chemistry research. The ChEMBL database curates and stores standardized bioactivity, molecule, target and drug data extracted from multiple sources, including the primary medicinal chemistry literature. Programmatic access to ChEMBL data has been improved by a recent update to the ChEMBL web services (version 2.0.x, https://www.ebi.ac.uk/chembl/api/data/docs), which exposes significantly more data from the underlying database and introduces new functionality. To complement the data-focused services, a utility service (version 1.0.x, https://www.ebi.ac.uk/chembl/api/utils/docs), which provides RESTful access to commonly used cheminformatics methods, has also been concurrently developed. The ChEMBL web services can be used together or independently to build applications and data processing workflows relevant to drug discovery and chemical biology. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
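For example, retrieving a molecule record from the data service takes a single HTTP call. The endpoint pattern below follows the documented URL scheme cited above, but field names and paths should be verified against the live documentation before use.

```python
import requests

BASE = "https://www.ebi.ac.uk/chembl/api"

# Data service: fetch one molecule record as JSON (CHEMBL25 is aspirin).
mol = requests.get(f"{BASE}/data/molecule/CHEMBL25.json", timeout=30).json()
print(mol["pref_name"], mol["molecule_properties"]["full_mwt"])
```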
The Presynaptic Component of the Serotonergic System is Required for Clozapine's Efficacy
Yadav, Prem N; Abbas, Atheir I; Farrell, Martilias S; Setola, Vincent; Sciaky, Noah; Huang, Xi-Ping; Kroeze, Wesley K; Crawford, LaTasha K; Piel, David A; Keiser, Michael J; Irwin, John J; Shoichet, Brian K; Deneris, Evan S; Gingrich, Jay; Beck, Sheryl G; Roth, Bryan L
2011-01-01
Clozapine, by virtue of its absence of extrapyramidal side effects and greater efficacy, revolutionized the treatment of schizophrenia, although the mechanisms underlying this exceptional activity remain controversial. Combining an unbiased cheminformatics and physical screening approach, we evaluated clozapine's activity at >2350 distinct molecular targets. Clozapine, and the closely related atypical antipsychotic drug olanzapine, interacted potently with a unique spectrum of molecular targets. This distinct pattern, which was not shared with the typical antipsychotic drug haloperidol, suggested that the serotonergic neuronal system was a key determinant of clozapine's actions. To test this hypothesis, we used pet1−/− mice, which are deficient in serotonergic presynaptic markers. We discovered that the antipsychotic-like properties of the atypical antipsychotic drugs clozapine and olanzapine were abolished in a pharmacological model that mimics NMDA-receptor hypofunction in pet1−/− mice, whereas haloperidol's efficacy was unaffected. These results show that clozapine's ability to normalize NMDA-receptor hypofunction, which is characteristic of schizophrenia, depends on an intact presynaptic serotonergic neuronal system. PMID:21048700
Construction simulation analysis of 120m continuous rigid frame bridge based on Midas Civil
NASA Astrophysics Data System (ADS)
Shi, Jing-xian; Ran, Zhi-hong
2018-03-01
In this paper, a three-dimensional finite element model of a continuous rigid frame bridge with a main span of 120 m is established with the Midas Civil software. The deflection and stress of the main beam at each construction stage of the continuous beam bridge are simulated and analyzed, providing reliable technical support for safe construction of the bridge.
Predicting pedestrian flow: a methodology and a proof of concept based on real-life data.
Davidich, Maria; Köster, Gerta
2013-01-01
Building a reliable predictive model of pedestrian motion is very challenging: Ideally, such models should be based on observations made in both controlled experiments and in real-world environments. De facto, models are rarely based on real-world observations due to the lack of available data; instead, they are largely based on intuition and, at best, literature values and laboratory experiments. Such an approach is insufficient for reliable simulations of complex real-life scenarios: For instance, our analysis of pedestrian motion under natural conditions at a major German railway station reveals that the values for free-flow velocities and the flow-density relationship differ significantly from widely used literature values. It is thus necessary to calibrate and validate the model against relevant real-life data to make it capable of reproducing and predicting real-life scenarios. In this work we aim at constructing such realistic pedestrian stream simulation. Based on the analysis of real-life data, we present a methodology that identifies key parameters and interdependencies that enable us to properly calibrate the model. The success of the approach is demonstrated for a benchmark model, a cellular automaton. We show that the proposed approach significantly improves the reliability of the simulation and hence the potential prediction accuracy. The simulation is validated by comparing the local density evolution of the measured data to that of the simulated data. We find that for our model the most sensitive parameters are: the source-target distribution of the pedestrian trajectories, the schedule of pedestrian appearances in the scenario and the mean free-flow velocity. Our results emphasize the need for real-life data extraction and analysis to enable predictive simulations.
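For readers unfamiliar with the benchmark model class, the sketch below shows a toy cellular automaton with a greedy target-seeking movement rule; the grid size, entry points, and rule are assumptions standing in for the authors' calibrated model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy pedestrian CA: agents enter on the left and hop toward a target cell.
W, H, steps = 20, 10, 30
target = np.array([19, 5])
peds = [np.array([0, rng.integers(0, H)]) for _ in range(15)]   # source-side entries
occupied = {tuple(p) for p in peds}

for _ in range(steps):
    for i, p in enumerate(peds):
        moves = [p + d for d in ([1, 0], [0, 1], [0, -1], [1, 1], [1, -1])]
        moves = [m for m in moves
                 if 0 <= m[0] < W and 0 <= m[1] < H and tuple(m) not in occupied]
        if moves:
            # greedy choice: minimize distance to target (stand-in for a floor field)
            best = min(moves, key=lambda m: np.linalg.norm(target - m))
            occupied.discard(tuple(p))
            occupied.add(tuple(best))
            peds[i] = best

print(sum(1 for p in peds if p[0] == W - 1), "pedestrians reached the far side")
```

In the calibrated model the abstract describes, the sensitive quantities would enter here as the source-target distribution of trajectories, the entry schedule, and the free-flow velocity.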
Simulation-based optimization framework for reuse of agricultural drainage water in irrigation.
Allam, A; Tawfik, A; Yoshimura, C; Fleifle, A
2016-05-01
A simulation-based optimization framework for agricultural drainage water (ADW) reuse has been developed through the integration of a water quality model (QUAL2Kw) and a genetic algorithm. This framework was applied to the Gharbia drain in the Nile Delta, Egypt, in summer and winter 2012. First, the water quantity and quality of the drain was simulated using the QUAL2Kw model. Second, uncertainty analysis and sensitivity analysis based on Monte Carlo simulation were performed to assess QUAL2Kw's performance and to identify the most critical variables for determination of water quality, respectively. Finally, a genetic algorithm was applied to maximize the total reuse quantity from seven reuse locations with the condition not to violate the standards for using mixed water in irrigation. The water quality simulations showed that organic matter concentrations are critical management variables in the Gharbia drain. The uncertainty analysis showed the reliability of QUAL2Kw to simulate water quality and quantity along the drain. Furthermore, the sensitivity analysis showed that the 5-day biochemical oxygen demand, chemical oxygen demand, total dissolved solids, total nitrogen and total phosphorous are highly sensitive to point source flow and quality. Additionally, the optimization results revealed that the reuse quantities of ADW can reach 36.3% and 40.4% of the available ADW in the drain during summer and winter, respectively. These quantities meet 30.8% and 29.1% of the drainage basin requirements for fresh irrigation water in the respective seasons. Copyright © 2016 Elsevier Ltd. All rights reserved.
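A compact sketch of the optimization step is shown below; a toy linear water-quality response stands in for QUAL2Kw, and the bounds, penalty, and GA settings are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy GA: maximize total reuse over 7 locations subject to a stand-in
# water-quality constraint (the real framework couples the GA to QUAL2Kw).
n_sites, pop_size, n_gen = 7, 60, 200
max_reuse = rng.uniform(5.0, 20.0, n_sites)      # hypothetical upper bounds [m3/s]
impact = rng.uniform(0.5, 2.0, n_sites)          # hypothetical quality impact per unit

def fitness(x):
    quality_load = impact @ x                     # stand-in for a QUAL2Kw response
    penalty = max(0.0, quality_load - 60.0) * 100.0   # irrigation-standard constraint
    return x.sum() - penalty

pop = rng.uniform(0.0, 1.0, (pop_size, n_sites)) * max_reuse
for _ in range(n_gen):
    scores = np.array([fitness(x) for x in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]        # truncation selection
    kids = []
    while len(kids) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.random(n_sites) < 0.5, a, b)     # uniform crossover
        child += rng.normal(0.0, 0.5, n_sites)                # mutation
        kids.append(np.clip(child, 0.0, max_reuse))
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(x) for x in pop])]
print("reuse per site:", np.round(best, 2), "total:", round(best.sum(), 2))
```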
NASA Astrophysics Data System (ADS)
Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.
2010-07-01
Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of eddy flux datasets to improve model simulations and reduce variability among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine-based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default settings showed large deviations in model outputs from observations, with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Nevertheless, site history, analysis of model structure changes, and a more objective procedure for model calibration should be included in further analysis.
NASA Astrophysics Data System (ADS)
Lin, Pei-Chun; Yu, Chun-Chang; Chen, Charlie Chung-Ping
2015-01-01
As one of the critical stages of a very large scale integration fabrication process, postexposure bake (PEB) plays a crucial role in determining the final three-dimensional (3-D) profiles and lessening the standing wave effects. However, the full 3-D chemically amplified resist simulation is not widely adopted during the postlayout optimization due to the long run-time and huge memory usage. An efficient simulation method is proposed to simulate the PEB while considering standing wave effects and resolution enhancement techniques, such as source mask optimization and subresolution assist features based on the Sylvester equation and Abbe-principal component analysis method. Simulation results show that our algorithm is 20× faster than the conventional Gaussian convolution method.
DLTPulseGenerator: A library for the simulation of lifetime spectra based on detector-output pulses
NASA Astrophysics Data System (ADS)
Petschke, Danny; Staab, Torsten E. M.
2018-01-01
The quantitative analysis of lifetime spectra relevant in both the life and materials sciences is an ill-posed inverse problem and, hence, places the most stringent requirements on the hardware specifications and the analysis algorithms. Here we present DLTPulseGenerator, a library written in native C++ 11, which provides a simulation of lifetime spectra according to the measurement setup. The simulation is based on pairs of non-TTL detector output pulses. Those pulses require the constant fraction principle (CFD) for the determination of the exact timing signal and, thus, the calculation of the time difference, i.e. the lifetime. To verify the functionality, simulation results were compared to experimentally obtained data using Positron Annihilation Lifetime Spectroscopy (PALS) on pure tin.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubel, Oliver; Loring, Burlen; Vay, Jean -Luc
The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. Supplemental material at https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.
USER MANUAL FOR EXPRESS, THE EXAMS-PRZM EXPOSURE SIMULATION SHELL
The Environmental Fate and Effects Division (EFED) of EPA's Office of Pesticide Programs(OPP) uses a suite of ORD simulation models for the exposure analysis portion of regulatory risk assessments. These models (PRZM, EXAMS, AgDisp) are complex, process-based simulation codes tha...
Probabilistic load simulation: Code development status
NASA Astrophysics Data System (ADS)
Newell, J. F.; Ho, H.
1991-05-01
The objective of the Composite Load Spectra (CLS) project is to develop generic load models to simulate the composite load spectra that are included in space propulsion system components. The probabilistic loads thus generated are part of the probabilistic design analysis (PDA) of a space propulsion system that also includes probabilistic structural analyses, reliability, and risk evaluations. Probabilistic load simulation for space propulsion systems demands sophisticated probabilistic methodology and requires large amounts of load information and engineering data. The CLS approach is to implement a knowledge based system coupled with a probabilistic load simulation module. The knowledge base manages and furnishes load information and expertise and sets up the simulation runs. The load simulation module performs the numerical computation to generate the probabilistic loads with load information supplied from the CLS knowledge base.
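The flavor of such probabilistic load generation can be sketched by composing a few uncertain load contributions and reading off percentile design loads; the distributions and values below are illustrative stand-ins for the CLS knowledge base, not engine data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Monte Carlo composite-load sketch: sum uncertain steady, transient, and
# vibratory contributions per trial, then extract percentile design loads.
n_trials = 100_000
steady = rng.normal(loc=100.0, scale=5.0, size=n_trials)       # e.g. chamber-pressure load
transient = rng.lognormal(mean=2.0, sigma=0.3, size=n_trials)  # e.g. start-up spike
vibration = rng.rayleigh(scale=3.0, size=n_trials)             # e.g. random-vibration peak

composite = steady + transient + vibration
print("mean load:", round(composite.mean(), 2))
print("99.9th-percentile design load:", round(np.percentile(composite, 99.9), 2))
```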
A Component-Based Extension Framework for Large-Scale Parallel Simulations in NEURON
King, James G.; Hines, Michael; Hill, Sean; Goodman, Philip H.; Markram, Henry; Schürmann, Felix
2008-01-01
As neuronal simulations approach larger scales with increasing levels of detail, the neurosimulator software represents only a part of a chain of tools ranging from setup and simulation to interaction with virtual environments, analysis, and visualization. Previously published approaches to abstracting simulator engines have not received wide-spread acceptance, which in part may be due to the fact that they tried to address the challenge of solving the model specification problem. Here, we present an approach that uses a neurosimulator, in this case NEURON, to describe and instantiate the network model in the simulator's native model language, but then replaces the main integration loop with its own. Existing parallel network models are easily adapted to run in the presented framework. The presented approach is thus an extension to NEURON but uses a component-based architecture to allow for replaceable spike exchange components and pluggable components for monitoring, analysis, or control that can run in this framework alongside the simulation. PMID:19430597
Web-Based Model Visualization Tools to Aid in Model Optimization and Uncertainty Analysis
NASA Astrophysics Data System (ADS)
Alder, J.; van Griensven, A.; Meixner, T.
2003-12-01
Individuals applying hydrologic models need quick, easy-to-use visualization tools to assess and understand model performance. We present here the Interactive Hydrologic Modeling (IHM) visualization toolbox. The IHM utilizes high-speed Internet access, the portability of the web, and the increasing power of modern computers to provide an online toolbox for quick and easy visualization of model results. This visualization interface allows for the interpretation and analysis of Monte-Carlo and batch model simulation results. Often a given project will generate several thousand or even hundreds of thousands of simulations, and this large number of simulations creates a challenge for post-simulation analysis. IHM's goal is to solve this problem by loading all of the data into a database with a web interface that can dynamically generate graphs for the user according to their needs. IHM currently supports: a global sample statistics table (e.g. sum of squares error, sum of absolute differences, etc.), top-ten simulation tables and graphs, graphs of an individual simulation using time-step data, objective-based dotty plots, threshold-based parameter cumulative density function graphs (as used in the regional sensitivity analysis of Spear and Hornberger), and 2D error surface graphs of the parameter space. IHM is suitable for everything from the simplest bucket model to the largest set of Monte-Carlo model simulations with a multi-dimensional parameter and model output space. By using a web interface, IHM offers the user complete flexibility: they can be anywhere in the world using any operating system. IHM can be a time-saving and money-saving alternative to producing graphs or conducting analysis by hand, or to purchasing expensive proprietary software. IHM is a simple, free method of interpreting and analyzing batch model results, and is suitable for novice to expert hydrologic modelers.
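A dotty plot of the kind IHM generates dynamically is easy to sketch offline: one point per Monte Carlo run, plotting each parameter against the objective score. The parameters and error surface below are synthetic placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# One row per Monte Carlo run: two hypothetical parameters and an objective score
n_runs = 5000
params = rng.uniform([0.1, 10.0], [2.0, 500.0], size=(n_runs, 2))   # e.g. (K, storage)
sse = (params[:, 0] - 0.8)**2 * 50 + np.abs(params[:, 1] - 120) * 0.05 \
      + rng.normal(0, 1, n_runs)**2                                  # toy error surface

fig, axes = plt.subplots(1, 2, figsize=(8, 3), sharey=True)
for ax, j, name in zip(axes, (0, 1), ("K", "storage")):
    ax.plot(params[:, j], sse, ".", ms=2, alpha=0.3)   # the "dotty plot"
    ax.set_xlabel(name)
axes[0].set_ylabel("sum of squared errors")
plt.tight_layout()
plt.show()
```

A well-identified parameter shows a clear minimum in its dotty plot; a flat cloud signals insensitivity, which is exactly what these plots are used to diagnose.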
Planning to Execution Earned Value Risk Management Tool
2015-09-01
[Only front-matter fragments were indexed for this record: Experiments; Result Analysis; Stakeholder Analysis; Operational-Based Scenario; Simulation and Analysis Activity; Project Management Phase.]
Howard Evan Canfield; Vicente L. Lopes
2000-01-01
A process-based, simulation model for evaporation, soil water and streamflow (BROOK903) was used to estimate soil moisture change on a semiarid rangeland watershed in southeastern Arizona. A sensitivity analysis was performed to select parameters affecting ET and soil moisture for calibration. Automatic parameter calibration was performed using a procedure based on a...
A simulation model for probabilistic analysis of Space Shuttle abort modes
NASA Technical Reports Server (NTRS)
Hage, R. T.
1993-01-01
A simulation model which was developed to provide a probabilistic analysis tool to study the various space transportation system abort mode situations is presented. The simulation model is based on Monte Carlo simulation of an event-tree diagram which accounts for events during the space transportation system's ascent and its abort modes. The simulation model considers just the propulsion elements of the shuttle system (i.e., external tank, main engines, and solid boosters). The model was developed to provide a better understanding of the probability of occurrence and successful completion of abort modes during the vehicle's ascent. The results of the simulation runs discussed are for demonstration purposes only; they are not official NASA probability estimates.
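A minimal version of the event-tree Monte Carlo idea is sketched below; the branch structure and probabilities are arbitrary placeholders, consistent with the abstract's caveat that no official estimates are implied.

```python
import random

# Toy Monte Carlo event tree: sample ascent events and tally which abort mode
# is reached and whether it completes. All probabilities are invented.
P_ENGINE_OUT = 0.01           # hypothetical per-flight failure probability
P_RTLS_SUCCESS = 0.90         # hypothetical abort-completion probabilities
P_TAL_SUCCESS = 0.95

def one_flight(rng):
    if rng.random() > P_ENGINE_OUT:
        return "nominal"
    # branch on when the failure occurs: early ascent -> RTLS, late -> TAL
    if rng.random() < 0.5:
        return "RTLS-ok" if rng.random() < P_RTLS_SUCCESS else "RTLS-fail"
    return "TAL-ok" if rng.random() < P_TAL_SUCCESS else "TAL-fail"

rng = random.Random(7)
tally = {}
for _ in range(1_000_000):
    outcome = one_flight(rng)
    tally[outcome] = tally.get(outcome, 0) + 1
print(tally)   # empirical frequencies of each end state of the event tree
```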
DEVELOPMENT OF USER-FRIENDLY SIMULATION SYSTEM OF EARTHQUAKE INDUCED URBAN SPREADING FIRE
NASA Astrophysics Data System (ADS)
Tsujihara, Osamu; Gawa, Hidemi; Hayashi, Hirofumi
Simulation of earthquake-induced urban spreading fire requires production of an analytical model of the target area, analysis of the fire spread, and presentation of the results. To promote use of such simulation, it is important that the simulation system be non-intrusive and that the analysis results be demonstrated by realistic presentation. In this study, a simulation system is developed based on the Petri-net algorithm, in which the target area can be modeled through easy operations and the analytical results are presented as realistic 3-D animation.
Nonlinearity in Social Service Evaluation: A Primer on Agent-Based Modeling
ERIC Educational Resources Information Center
Israel, Nathaniel; Wolf-Branigin, Michael
2011-01-01
Measurement of nonlinearity in social service research and evaluation relies primarily on spatial analysis and, to a lesser extent, social network analysis. Recent advances in geographic methods and computing power, however, allow for the greater use of simulation methods. These advances now enable evaluators and researchers to simulate complex…
Estimating Uncertainty in N2O Emissions from US Cropland Soils
USDA-ARS?s Scientific Manuscript database
A Monte Carlo analysis was combined with an empirically-based approach to quantify uncertainties in soil N2O emissions from US croplands estimated with the DAYCENT simulation model. Only a subset of croplands was simulated in the Monte Carlo analysis which was used to infer uncertainties across the ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamm, L.L.
1998-10-07
This report is one of a series of reports that document normal operation and accident simulations for the Accelerator Production of Tritium (APT) blanket heat removal (HR) system. These simulations were performed for the Preliminary Safety Analysis Report.
A method for ensemble wildland fire simulation
Mark A. Finney; Isaac C. Grenfell; Charles W. McHugh; Robert C. Seli; Diane Trethewey; Richard D. Stratton; Stuart Brittain
2011-01-01
An ensemble simulation system that accounts for uncertainty in long-range weather conditions and two-dimensional wildland fire spread is described. Fuel moisture is expressed based on the energy release component, a US fire danger rating index, and its variation throughout the fire season is modeled using time series analysis of historical weather data. This analysis...
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Bednarcyk, Brett A.; Pineda, Evan; Arnold, Steven; Mital, Subodh; Murthy, Pappu; Walton, Owen
2015-01-01
Reported here is a coupling of two NASA-developed codes: CARES (Ceramics Analysis and Reliability Evaluation of Structures) with the MAC/GMC composite material analysis code. The resulting code is called FEAMAC/CARES and is constructed as an Abaqus finite element analysis UMAT (user-defined material). Here we describe the FEAMAC/CARES code and show an example problem, taken from the open literature, of a laminated CMC under off-axis loading. FEAMAC/CARES performs stochastic-strength-based damage simulation of the response of a CMC under multiaxial loading using elastic stiffness reduction of the failed elements.
Scenario Analysis of Soil and Water Conservation in Xiejia Watershed Based on Improved CSLE Model
NASA Astrophysics Data System (ADS)
Liu, Jieying; Yu, Ming; Wu, Yong; Huang, Yao; Nie, Yawen
2018-01-01
Based on existing research results and related data, the scenario analysis method is used to evaluate the effects of different soil and water conservation measures on soil erosion in a small watershed. Analysis of the soil erosion scenarios and model simulation budgets in the study area shows that the soil erosion rates simulated under all scenarios are lower than the observed soil erosion in 2013. Soil and water conservation measures were more effective in reducing soil erosion than soil and water conservation biological measures and soil and water conservation tillage measures.
An integrated modeling and design tool for advanced optical spacecraft
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
1992-01-01
Consideration is given to the design and status of the Integrated Modeling of Optical Systems (IMOS) tool and to critical design issues. A multidisciplinary spacecraft design and analysis tool with support for structural dynamics, controls, thermal analysis, and optics, IMOS provides rapid and accurate end-to-end performance analysis, simulations, and optimization of advanced space-based optical systems. The requirements for IMOS-supported numerical arrays, user defined data structures, and a hierarchical data base are outlined, and initial experience with the tool is summarized. A simulation of a flexible telescope illustrates the integrated nature of the tools.
Simulation System of Car Crash Test in C-NCAP Analysis Based on an Improved Apriori Algorithm
NASA Astrophysics Data System (ADS)
Xiang, LI
In order to analyze car crash tests in C-NCAP, an improved algorithm based on the Apriori algorithm is given in this paper. The new algorithm is implemented with a vertical data layout, breadth-first searching, and intersecting. It takes advantage of the efficiency of the vertical data layout and intersection operations, and prunes candidate frequent itemsets as Apriori does. Finally, the new algorithm is applied in a simulation of a car crash test analysis system. The results show that the discovered relations affect the C-NCAP test results and can provide a reference for automotive design.
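The vertical-layout-plus-intersection idea can be sketched in a few lines: each item maps to the set of transaction ids (tids) containing it, support is the size of the tidset intersection, and candidates grow level by level (breadth-first). The toy transactions are invented, and full Apriori subset pruning is omitted for brevity.

```python
from itertools import combinations

transactions = {
    1: {"airbag", "frontal", "pass"},
    2: {"airbag", "side", "pass"},
    3: {"frontal", "fail"},
    4: {"airbag", "frontal", "pass"},
}
min_support = 2

# vertical data layout: item -> set of transaction ids containing it
tidsets = {}
for tid, items in transactions.items():
    for item in items:
        tidsets.setdefault(item, set()).add(tid)

frequent = {frozenset([i]): t for i, t in tidsets.items() if len(t) >= min_support}
level = frequent
while level:
    candidates = {}
    for (a, ta), (b, tb) in combinations(level.items(), 2):
        union = a | b
        if len(union) == len(a) + 1:          # join step: extend by one item
            tids = ta & tb                    # support via tidset intersection
            if len(tids) >= min_support:
                candidates[union] = tids
    frequent.update(candidates)
    level = candidates                        # next breadth-first level

for itemset, tids in sorted(frequent.items(), key=lambda kv: len(kv[0])):
    print(set(itemset), "support =", len(tids))
```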
Pneumafil casing blower through moving reference frame (MRF) - A CFD simulation
NASA Astrophysics Data System (ADS)
Manivel, R.; Vijayanandh, R.; Babin, T.; Sriram, G.
2018-05-01
In this work, the ring frame Pneumafil casing blower of a textile mill, with a power rating of 5 kW, has been simulated using a Computational Fluid Dynamics (CFD) code. The CFD analysis of the blower is carried out in Ansys Workbench 16.2 with Fluent, using MRF solver settings. The simulation settings and boundary conditions are based on a literature study and acquired field data. The main objective of this work is to reduce the energy consumption of the blower. The flow analysis indicated that power consumption is influenced by the orientation of the deflector plate and by the deflector plate strip situated at the outlet casing of the blower. The energy losses in the blower are due to recirculation zones formed around the deflector plate strip. The deflector plate orientation was changed and optimized to reduce energy consumption. The proposed optimized model, based on the simulation results, had relatively lower power consumption than the existing and other cases. The energy losses in the Pneumafil casing blower are thus reduced through CFD analysis.
WholeCellSimDB: a hybrid relational/HDF database for whole-cell model predictions
Karr, Jonathan R.; Phillips, Nolan C.; Covert, Markus W.
2014-01-01
Mechanistic ‘whole-cell’ models are needed to develop a complete understanding of cell physiology. However, extracting biological insights from whole-cell models requires running and analyzing large numbers of simulations. We developed WholeCellSimDB, a database for organizing whole-cell simulations. WholeCellSimDB was designed to enable researchers to search simulation metadata to identify simulations for further analysis, and quickly slice and aggregate simulation results data. In addition, WholeCellSimDB enables users to share simulations with the broader research community. The database uses a hybrid relational/hierarchical data format architecture to efficiently store and retrieve both simulation setup metadata and results data. WholeCellSimDB provides a graphical Web-based interface to search, browse, plot and export simulations; a JavaScript Object Notation (JSON) Web service to retrieve data for Web-based visualizations; a command-line interface to deposit simulations; and a Python API to retrieve data for advanced analysis. Overall, we believe WholeCellSimDB will help researchers use whole-cell models to advance basic biological science and bioengineering. Database URL: http://www.wholecellsimdb.org Source code repository URL: http://github.com/CovertLab/WholeCellSimDB PMID:25231498
Training Knowledge Bots for Physics-Based Simulations Using Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Wong, Jay Ming
2014-01-01
Millions of complex physics-based simulations are required for design of an aerospace vehicle. These simulations are usually performed by highly trained and skilled analysts, who execute, monitor, and steer each simulation. Analysts rely heavily on their broad experience that may have taken 20-30 years to accumulate. In addition, the simulation software is complex in nature, requiring significant computational resources. Simulations of system of systems become even more complex and are beyond human capacity to effectively learn their behavior. IBM has developed machines that can learn and compete successfully with a chess grandmaster and most successful jeopardy contestants. These machines are capable of learning some complex problems much faster than humans can learn. In this paper, we propose using artificial neural network to train knowledge bots to identify the idiosyncrasies of simulation software and recognize patterns that can lead to successful simulations. We examine the use of knowledge bots for applications of computational fluid dynamics (CFD), trajectory analysis, commercial finite-element analysis software, and slosh propellant dynamics. We will show that machine learning algorithms can be used to learn the idiosyncrasies of computational simulations and identify regions of instability without including any additional information about their mathematical form or applied discretization approaches.
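As a toy of this proposal, the sketch below trains a small feed-forward network on a synthetic history of runs to predict whether a given (CFL number, mesh quality) setting yields a stable run; the features, labels, and architecture are assumptions for illustration, not the paper's bots.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic run history: (CFL number, mesh quality) -> 1 if the run was stable
X = rng.uniform([0.1, 0.2], [2.0, 1.0], size=(400, 2))
y = ((X[:, 0] < 1.0) & (X[:, 1] > 0.5)).astype(float)

# One hidden layer, trained by plain gradient descent on logistic loss
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, 8);      b2 = 0.0
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                      # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))      # predicted P(stable)
    g = (p - y) / len(y)                          # d(loss)/d(logit)
    W2 -= 0.5 * h.T @ g; b2 -= 0.5 * g.sum()
    gh = np.outer(g, W2) * (1 - h**2)             # backprop through tanh
    W1 -= 0.5 * X.T @ gh; b1 -= 0.5 * gh.sum(axis=0)

test = np.array([[0.5, 0.8], [1.8, 0.3]])
h = np.tanh(test @ W1 + b1)
print(1.0 / (1.0 + np.exp(-(h @ W2 + b2))))       # P(stable) for each setting
```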
NASA Technical Reports Server (NTRS)
Cross, Jon B.; Koontz, Steven L.; Lan, Esther H.
1993-01-01
The effects of atomic oxygen on boron nitride (BN), silicon nitride (Si3N4), Intelsat 6 solar cell interconnects, organic polymers, and MoS2 and WS2 dry lubricant, were studied in Low Earth Orbit (LEO) flight experiments and in a ground based simulation facility. Both the inflight and ground based experiments employed in situ electrical resistance measurements to detect penetration of atomic oxygen through materials and Electron Spectroscopy for Chemical Analysis (ESCA) analysis to measure chemical composition changes. Results are given. The ground based results on the materials studied to date show good qualitative correlation with the LEO flight results, thus validating the simulation fidelity of the ground based facility in terms of reproducing LEO flight results. In addition it was demonstrated that ground based simulation is capable of performing more detailed experiments than orbital exposures can presently perform. This allows the development of a fundamental understanding of the mechanisms involved in the LEO environment degradation of materials.
Hygrothermal Simulation: A Tool for Building Envelope Design Analysis
Samuel V. Glass; Anton TenWolde; Samuel L. Zelinka
2013-01-01
Is it possible to gauge the risk of moisture problems while designing the building envelope? This article provides a brief introduction to computer-based hygrothermal (heat and moisture) simulation, shows how simulation can be useful as a design tool, and points out a number of important considerations regarding model inputs and limitations. Hygrothermal simulation...
A Two-Step Approach to Uncertainty Quantification of Core Simulators
Yankov, Artem; Collins, Benjamin; Klein, Markus; ...
2012-01-01
For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
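The stochastic-sampling half of the comparison can be sketched with a toy model in place of a core simulator: sample cross-section perturbations from an assumed covariance, run the model per sample, and read statistics off the output ensemble. Every number here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy cross-section set and an assumed relative covariance (stand-ins only)
mean_xs = np.array([0.012, 0.10, 0.008])          # "nu-fission, absorption, leakage"
rel_cov = np.diag([0.02, 0.03, 0.05])**2

def k_eff(xs):
    """Toy response standing in for a core-simulator run."""
    nu_fission, absorption, leakage = xs
    return nu_fission / (absorption * 0.1 + leakage * 0.001)

# XSUSA-style stochastic sampling: perturb, propagate, collect statistics
samples = rng.multivariate_normal(np.zeros(3), rel_cov, size=1000)
ks = np.array([k_eff(mean_xs * (1.0 + s)) for s in samples])
print(f"k_eff = {ks.mean():.4f} +/- {ks.std(ddof=1):.4f}")
```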
Peterson, Steven M.; Flynn, Amanda T.; Vrabel, Joseph; Ryter, Derek W.
2015-08-12
The calibrated groundwater-flow model was used with the Groundwater-Management Process for the 2005 version of the U.S. Geological Survey modular three-dimensional groundwater model, MODFLOW–2005, to provide a tool for the NPNRD to better understand how water-management decisions could affect stream base flows of the North Platte River at Bridgeport, Nebr., streamgage in a future period from 2008 to 2019 under varying climatic conditions. The simulation-optimization model was constructed to analyze the maximum increase in simulated stream base flow that could be obtained with the minimum amount of reductions in groundwater withdrawals for irrigation. A second analysis extended the first to analyze the simulated base-flow benefit of groundwater withdrawals along with application of intentional recharge, that is, water from canals being released into rangeland areas with sandy soils. With optimized groundwater withdrawals and intentional recharge, the maximum simulated stream base flow was 15–23 cubic feet per second (ft3/s) greater than with no management at all, or 10–15 ft3/s larger than with managed groundwater withdrawals only. These results indicate not only the amount that simulated stream base flow can be increased by these management options, but also the locations where the management options provide the most or least benefit to the simulated stream base flow. For the analyses in this report, simulated base flow was best optimized by reductions in groundwater withdrawals north of the North Platte River and in the western half of the area. Intentional recharge sites selected by the optimization had a complex distribution but were more likely to be closer to the North Platte River or its tributaries. Future users of the simulation-optimization model will be able to modify the input files as to type, location, and timing of constraints, decision variables of groundwater withdrawals by zone, and other variables to explore other feasible management scenarios that may yield different increases in simulated future base flow of the North Platte River.
Simulations of multi-contrast x-ray imaging using near-field speckles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zdora, Marie-Christine; Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom and Department of Physics & Astronomy, University College London, London, WC1E 6BT; Thibault, Pierre
2016-01-28
X-ray dark-field and phase-contrast imaging using near-field speckles is a novel technique that overcomes limitations inherent in conventional absorption x-ray imaging, i.e. poor contrast for features with similar density. Speckle-based imaging yields a wealth of information with a simple setup tolerant to polychromatic and divergent beams, and simple data acquisition and analysis procedures. Here, we present simulation software used to model image formation with the speckle-based technique, and we compare simulated results on a phantom sample with experimental synchrotron data. Thorough simulation of a speckle-based imaging experiment will help in better understanding and optimizing the technique itself.
NASA Astrophysics Data System (ADS)
Nazzal, M. A.
2018-04-01
It is established that some superplastic materials undergo significant cavitation during deformation. In this work, a stability analysis for the superplastic copper-based alloy Coronze-638 at 550 °C, based on Hart's definition of stable plastic deformation, and finite element simulations for the balanced biaxial loading case are carried out to study the effects of hydrostatic pressure on cavitation evolution during superplastic forming. The finite element results show that imposing hydrostatic pressure yields a reduction in cavitation growth.
Design for dependability: A simulation-based approach. Ph.D. Thesis, 1993
NASA Technical Reports Server (NTRS)
Goswami, Kumar K.
1994-01-01
This research addresses issues in simulation-based system level dependability analysis of fault-tolerant computer systems. The issues and difficulties of providing a general simulation-based approach for system level analysis are discussed and a methodology that address and tackle these issues is presented. The proposed methodology is designed to permit the study of a wide variety of architectures under various fault conditions. It permits detailed functional modeling of architectural features such as sparing policies, repair schemes, routing algorithms as well as other fault-tolerant mechanisms, and it allows the execution of actual application software. One key benefit of this approach is that the behavior of a system under faults does not have to be pre-defined as it is normally done. Instead, a system can be simulated in detail and injected with faults to determine its failure modes. The thesis describes how object-oriented design is used to incorporate this methodology into a general purpose design and fault injection package called DEPEND. A software model is presented that uses abstractions of application programs to study the behavior and effect of software on hardware faults in the early design stage when actual code is not available. Finally, an acceleration technique that combines hierarchical simulation, time acceleration algorithms and hybrid simulation to reduce simulation time is introduced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Hoy, B.W.
1988-01-01
The measurement of ultra-low frequency vibration (0.01 to 1.0 Hz) in motion-based flight simulators was undertaken to quantify the energy and frequencies of motion present during operation. Methods of measurement, the selection of transducers, recorders, and analyzers, the development of a test plan, and types of analysis are discussed. Analysis of the data using a high-speed minicomputer and a comparison of the computer analysis with standard FFT analysis are also discussed. Measurement of simulator motion with the pilot included as part of the control dynamics had not been done up to this time. The data are being used to evaluate the effect of low frequency energy on the vestibular system of the air crew, and the incidence of simulator-induced sickness. 11 figs.
The Researches on Damage Detection Method for Truss Structures
NASA Astrophysics Data System (ADS)
Wang, Meng Hong; Cao, Xiao Nan
2018-06-01
This paper presents an effective method to detect damage in truss structures. Numerical simulation and experimental analysis were carried out on a damaged truss structure under instantaneous excitation. The ideal excitation point and appropriate hammering method were determined to extract time domain signals under two working conditions. The frequency response function and principal component analysis were used for data processing, and the angle between the frequency response function vectors was selected as a damage index to ascertain the location of a damaged bar in the truss structure. In the numerical simulation, the time domain signal of all nodes was extracted to determine the location of the damaged bar. In the experimental analysis, the time domain signal of a portion of the nodes was extracted on the basis of an optimal sensor placement method based on the node strain energy coefficient. The results of the numerical simulation and experimental analysis showed that the damage detection method based on the frequency response function and principal component analysis could locate the damaged bar accurately.
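A sketch of the damage index described above: project the reference and test FRF matrices onto the principal components of the reference (undamaged) data and take the angle between the resulting vectors. The synthetic FRFs, sensor count, and mode count are assumptions, not the paper's truss data.

```python
import numpy as np

def frf_angle_index(H_ref, H_test, n_components=5):
    """Damage index: angle between FRF vectors after projection onto the
    principal components of the reference FRF matrix.
    H_ref, H_test: (n_freq, n_sensors) FRF magnitude matrices."""
    Hc = H_ref - H_ref.mean(axis=0)
    _, _, Vt = np.linalg.svd(Hc, full_matrices=False)
    P = Vt[:n_components].T                     # projection basis

    a = (H_ref @ P).ravel()
    b = (H_test @ P).ravel()
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# synthetic check: the index grows when one sensor's FRF changes ("damage")
rng = np.random.default_rng(2)
H0 = rng.random((200, 8))                       # 200 frequency lines, 8 sensors
H1 = H0.copy()
H1[:, 3] *= 1.5                                 # hypothetical damaged member near sensor 3
print(frf_angle_index(H0, H0), frf_angle_index(H0, H1))   # ~0 vs. nonzero angle
```

Repeating the comparison sensor by sensor, the largest angle points at the region of the damaged bar, which is the localization logic the abstract describes.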
Antonopoulos, Markos; Stamatakos, Georgios
2015-01-01
Intensive glioma tumor infiltration into the surrounding normal brain tissues is one of the most critical causes of glioma treatment failure. To quantitatively understand and mathematically simulate this phenomenon, several diffusion-based mathematical models have appeared in the literature. The majority of them ignore the anisotropic character of diffusion of glioma cells since availability of pertinent truly exploitable tomographic imaging data is limited. Aiming at enriching the anisotropy-enhanced glioma model weaponry so as to increase the potential of exploiting available tomographic imaging data, we propose a Brownian motion-based mathematical analysis that could serve as the basis for a simulation model estimating the infiltration of glioblastoma cells into the surrounding brain tissue. The analysis is based on clinical observations and exploits diffusion tensor imaging (DTI) data. Numerical simulations and suggestions for further elaboration are provided.
The Role of Multiphysics Simulation in Multidisciplinary Analysis
NASA Technical Reports Server (NTRS)
Rifai, Steven M.; Ferencz, Robert M.; Wang, Wen-Ping; Spyropoulos, Evangelos T.; Lawrence, Charles; Melis, Matthew E.
1998-01-01
This article describes the applications of the Spectrum™ Solver in Multidisciplinary Analysis (MDA). Spectrum, multiphysics simulation software based on the finite element method, addresses compressible and incompressible fluid flow, structural, and thermal modeling as well as the interaction between these disciplines. Multiphysics simulation is based on a single computational framework for the modeling of multiple interacting physical phenomena. Interaction constraints are enforced in a fully-coupled manner using the augmented-Lagrangian method. Within the multiphysics framework, the finite element treatment of fluids is based on the Galerkin-Least-Squares (GLS) method with discontinuity capturing operators. The arbitrary-Lagrangian-Eulerian method is utilized to account for deformable fluid domains. The finite element treatment of solids and structures is based on the Hu-Washizu variational principle. The multiphysics architecture lends itself naturally to high-performance parallel computing. Aeroelastic, propulsion, thermal management and manufacturing applications are presented.
In situ and in-transit analysis of cosmological simulations
Friesen, Brian; Almgren, Ann; Lukic, Zarija; ...
2016-08-24
Modern cosmological simulations have reached the trillion-element scale, rendering data storage and subsequent analysis formidable tasks. To address this circumstance, we present a new MPI-parallel approach for analysis of simulation data while the simulation runs, as an alternative to the traditional workflow consisting of periodically saving large data sets to disk for subsequent ‘offline’ analysis. We demonstrate this approach in the compressible gasdynamics/N-body code Nyx, a hybrid MPI+OpenMP code based on the BoxLib framework, used for large-scale cosmological simulations. We have enabled on-the-fly workflows in two different ways: one is a straightforward approach consisting of all MPI processes periodically halting the main simulation and analyzing each component of data that they own (‘in situ’). The other consists of partitioning processes into disjoint MPI groups, with one performing the simulation and periodically sending data to the other ‘sidecar’ group, which post-processes it while the simulation continues (‘in-transit’). The two groups execute their tasks asynchronously, stopping only to synchronize when a new set of simulation data needs to be analyzed. For both the in situ and in-transit approaches, we experiment with two different analysis suites with distinct performance behavior: one which finds dark matter halos in the simulation using merge trees to calculate the mass contained within iso-density contours, and another which calculates probability distribution functions and power spectra of various fields in the simulation. Both are common analysis tasks for cosmology, and both result in summary statistics significantly smaller than the original data set. We study the behavior of each type of analysis in each workflow in order to determine the optimal configuration for the different data analysis algorithms.
3D Simulation of External Flooding Events for the RISMC Pathway
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prescott, Steven; Mandelli, Diego; Sampath, Ramprasad
2015-09-01
Incorporating 3D simulations as part of the Risk-Informed Safety Margin Characterization (RISMC) Toolkit allows analysts to obtain a more complete picture of complex system behavior for events including external plant hazards. External events such as flooding have become more important recently; they can be analyzed with existing and validated physics simulation toolkits. In this report, we describe these approaches specific to flooding-based analysis using a method called Smoothed Particle Hydrodynamics. The theory, validation, and example applications of the 3D flooding simulation are described. Integrating these 3D simulation methods into computational risk analysis adds a spatial/visual aspect to the design, improves the realism of results, and provides visual understanding that helps validate the flooding analysis.
Zhao, Huawei
2009-01-01
A ZEMAX model was constructed to simulate a clinical trial of intraocular lenses (IOLs) based on a clinically oriented Monte Carlo ensemble analysis using postoperative ocular parameters. The purpose of this model is to test the feasibility of streamlining and optimizing both the design process and the clinical testing of IOLs. This optical ensemble analysis (OEA) is also validated. Simulated pseudophakic eyes were generated by using the tolerancing and programming features of ZEMAX optical design software. OEA methodology was verified by demonstrating that the results of clinical performance simulations were consistent with previously published clinical performance data using the same types of IOLs. From these results we conclude that the OEA method can objectively simulate the potential clinical trial performance of IOLs.
MATLAB Simulation of Gradient-Based Neural Network for Online Matrix Inversion
NASA Astrophysics Data System (ADS)
Zhang, Yunong; Chen, Ke; Ma, Weimu; Li, Xiao-Dong
This paper investigates the simulation of a gradient-based recurrent neural network for online solution of the matrix-inverse problem. Several important techniques are employed to simulate such a neural system. 1) The Kronecker product of matrices is introduced to transform a matrix differential equation (MDE) into a vector differential equation (VDE); i.e., finally, a standard ordinary differential equation (ODE) is obtained. 2) The MATLAB routine "ode45" is introduced to solve the transformed initial-value ODE problem. 3) In addition to various implementation errors, different kinds of activation functions are simulated to show the characteristics of such a neural network. Simulation results substantiate the theoretical analysis and efficacy of the gradient-based neural network for online constant matrix inversion.
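The recipe in this abstract is concrete enough to sketch. The following Python analogue (not the authors' MATLAB code; the matrix A, the gain gamma, and the gradient-flow form X'(t) = -gamma * A^T (A X - I) are illustrative assumptions) vectorizes the matrix differential equation with a Kronecker product and integrates the resulting ODE with SciPy's RK45 solver, the counterpart of MATLAB's "ode45":

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma = 10.0                                 # network gain (assumed value)
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                   # example constant matrix to invert
n = A.shape[0]

# MDE -> VDE: with column-major vec(), vec(A^T A X) = (I kron A^T A) vec(X),
# so X' = -gamma * A^T (A X - I) becomes x' = M x + b with:
M = -gamma * np.kron(np.eye(n), A.T @ A)
b = gamma * A.T.ravel(order="F")             # vec(A^T)

sol = solve_ivp(lambda t, x: M @ x + b, (0.0, 5.0),
                np.zeros(n * n), method="RK45", rtol=1e-8, atol=1e-10)
X = sol.y[:, -1].reshape((n, n), order="F")  # fold the state back into a matrix
print(np.allclose(X, np.linalg.inv(A), atol=1e-5))  # True once converged
```

The equilibrium of this flow satisfies A^T A X = A^T, i.e. X = A^(-1), and since A^T A is positive definite the linear VDE converges for any positive gain.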
He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong
2016-02-01
A model based on vegetation ecophysiological processes contains many parameters, and reasonable parameter values will greatly improve its simulation ability. Sensitivity analysis, an important method for screening out the sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted a parameter sensitivity analysis of the BIOME-BGC model with a case study simulating the net primary productivity (NPP) of a Larix olgensis forest in Wangqing, Jilin Province. First, by comparing field measurement data with the simulation results, we tested the BIOME-BGC model's capability of simulating the NPP of the L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the parameters that had a strong influence on NPP. On this basis, we also quantitatively estimated the sensitivity of the screened parameters, calculating the global, first-order, and second-order sensitivity indices. The results showed that the BIOME-BGC model could simulate the NPP of the L. olgensis forest in the sample plot well. The Morris sensitivity method provided a reliable parameter sensitivity analysis result under the condition of a relatively small sample size. The EFAST sensitivity method could quantitatively measure the impact of a single parameter on the simulation result, as well as the interaction between parameters in the BIOME-BGC model. The most influential parameters for L. olgensis forest NPP were the new stem carbon to new leaf carbon allocation ratio and the leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than that of the other parameters' interactions.
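The core computation of Morris screening is small enough to sketch. The toy Python below (not the BIOME-BGC setup; the stand-in NPP function and the parameter bounds are invented) estimates the mean absolute elementary effect, the usual Morris screening index, from one-at-a-time perturbations at random base points:

```python
import numpy as np

def morris_mu_star(f, bounds, r=20, delta=0.1, seed=0):
    """Mean absolute elementary effect per parameter (crude Morris screening)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    ee = np.empty((r, len(bounds)))
    for t in range(r):
        x = rng.uniform(lo, hi - delta * (hi - lo))   # leave room for the step
        fx = f(x)
        for i in range(len(bounds)):
            xp = x.copy()
            xp[i] += delta * (hi[i] - lo[i])          # one-at-a-time perturbation
            ee[t, i] = (f(xp) - fx) / delta
    return np.abs(ee).mean(axis=0)

# toy stand-in for an NPP simulator, with an interaction term (assumed form)
npp = lambda p: 3.0 * p[0] + 0.5 * p[1] + 2.0 * p[0] * p[1]
print(morris_mu_star(npp, [(0.0, 1.0), (0.0, 1.0)]))  # parameter 0 dominates
```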
2006-10-01
The objective was to construct a bridge between existing and future microscopic simulation codes (kMC, MD, MC, BD, LB, etc.) and traditional, continuum... kinetic Monte Carlo (kMC), equilibrium MC, Lattice-Boltzmann (LB), Brownian Dynamics (BD), or general agent-based (AB) simulators. It also, fortuitously... cond-mat/0310460 at arXiv.org. 27. "Coarse Projective kMC Integration: Forward/Reverse Initial and Boundary Value Problems", R. Rico-Martinez, C. W...
NASA Astrophysics Data System (ADS)
Guo, Donglin; Wang, Huijun; Wang, Aihui
2017-11-01
Numerical simulation is of great importance to the investigation of changes in frozen ground on large spatial and long temporal scales. Previous studies have focused on the impacts of model improvements on the simulation of frozen ground. Here, the sensitivity of permafrost simulation to different atmospheric forcing data sets is examined using the Community Land Model, version 4.5 (CLM4.5), in combination with three newly developed, reanalysis-based atmospheric forcing data sets (NOAA Climate Forecast System Reanalysis (CFSR), European Centre for Medium-Range Weather Forecasts Re-Analysis Interim (ERA-I), and NASA Modern Era Retrospective-Analysis for Research and Applications (MERRA)). All three simulations were run from 1979 to 2009 at a resolution of 0.5° × 0.5° and validated against what are considered to be the best available permafrost observations (soil temperature, active layer thickness, and permafrost extent). Results show that the reanalysis-based atmospheric forcing data sets reproduce the variations in soil temperature and active layer thickness but produce evident biases in their climatologies. Overall, the simulations based on the CFSR and ERA-I data sets give more reasonable results than the simulation based on the MERRA data set, particularly for the present-day permafrost extent and the change in active layer thickness. The three simulations produce ranges for the present-day climatology (permafrost area: 11.31-13.57 × 10^6 km^2; active layer thickness: 1.10-1.26 m) and for recent changes (permafrost area: -5.8% to -9.0%; active layer thickness: 9.9%-20.2%). The differences in air temperature increase, snow depth, and permafrost thermal conditions in these simulations contribute to the differences in the simulated results.
NASA Astrophysics Data System (ADS)
Stepanov, Dmitry; Gusev, Anatoly; Diansky, Nikolay
2016-04-01
Based on numerical simulations, this study investigates the impact of atmospheric forcing on the heat content variability of the sub-surface layer of the Japan/East Sea (JES) over 1948-2009. We developed a model configuration based on the INMOM model and atmospheric forcing extracted from the CORE phase II experiment dataset (1948-2009), which makes it possible to isolate the impact of atmospheric forcing alone on the heat content variability of the sub-surface layer of the JES. An analysis of kinetic energy (KE) and total heat content (THC) in the JES obtained from our numerical simulations showed that the simulated circulation of the JES is in a quasi-steady state. The year-mean KE variations obtained from our numerical simulations are similar to those extracted from the SODA reanalysis, and comparison of the simulated THC with that extracted from the SODA reanalysis showed significant consistency between them. The simulated circulation structure is also very similar to that obtained from the PALACE floats in the intermediate and abyssal layers of the JES. Using empirical orthogonal function analysis, we studied the spatio-temporal variability of the heat content of the sub-surface layer in the JES. By comparing the simulated heat content variations with those obtained from observations, we assessed the impact of atmospheric forcing on the heat content variability. Using singular value decomposition analysis, we considered relationships between the heat content variability and the wind stress curl as well as the sensible heat flux in winter, establishing the major role of the sensible heat flux in the decadal variability of the heat content of the sub-surface layer in the JES. The research was supported by the Russian Foundation for Basic Research (grant N 14-05-00255) and the Council on the Russian Federation President Grants (grant N MK-3241.2015.5).
Kinematics Simulation Analysis of Packaging Robot with Joint Clearance
NASA Astrophysics Data System (ADS)
Zhang, Y. W.; Meng, W. J.; Wang, L. Q.; Cui, G. H.
2018-03-01
Considering the influence of joint clearance on motion error, repeated positioning accuracy, and the overall position of the machine, this paper presents a simulation analysis of a packaging robot, a 2-degree-of-freedom (DOF) planar parallel robot, based on the high-precision and high-speed characteristics of packaging equipment. The motion constraint equation of the mechanism is established, and the analysis and simulation of the motion error are carried out for the case of clearance in the revolute joint. The simulation results show that the size of the joint clearance affects the motion accuracy and packaging efficiency of the packaging robot. The analysis provides a reference point of view for packaging equipment design and selection criteria and has great significance for packaging industry automation.
Monte Carlo simulation: Its status and future
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murtha, J.A.
1997-04-01
Monte Carlo simulation is a statistics-based analysis tool that yields probability-vs.-value relationships for key parameters, including oil and gas reserves, capital exposure, and various economic yardsticks, such as net present value (NPV) and return on investment (ROI). Monte Carlo simulation is a part of risk analysis and is sometimes performed in conjunction with or as an alternative to decision [tree] analysis. The objectives are (1) to define Monte Carlo simulation in a more general context of risk and decision analysis; (2) to provide some specific applications, which can be interrelated; (3) to respond to some of the criticisms; (4) to offer some cautions about abuses of the method and recommend how to avoid the pitfalls; and (5) to predict what the future has in store.
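As a concrete illustration of the probability-vs.-value relationships the author describes, here is a minimal Python Monte Carlo sketch; the distributions for reserves, price, and capital exposure are invented toy inputs, not figures from the article:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000                                    # Monte Carlo trials

capex    = rng.triangular(80, 100, 140, n)     # capital exposure, $MM (toy)
reserves = rng.lognormal(np.log(4), 0.4, n)    # recoverable reserves, MMbbl (toy)
price    = rng.normal(60, 8, n)                # realized price, $/bbl (toy)

netback = reserves * price * (1 - 0.35)        # revenue less operating costs
npv = netback / (1 + 0.10) ** 5 - capex        # crude single-period discounting

for p in (10, 50, 90):                         # probability-vs.-value summary
    print(f"P{p}: NPV = {np.percentile(npv, p):8.1f} $MM")
print("P(NPV < 0) =", (npv < 0).mean())
```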
Self-reconfigurable ship fluid-network modeling for simulation-based design
NASA Astrophysics Data System (ADS)
Moon, Kyungjin
Our world is filled with large-scale engineering systems, which provide various services and conveniences in our daily life. A distinctive trend in the development of today's large-scale engineering systems is the extensive and aggressive adoption of automation and autonomy, which enable significant improvement of systems' robustness, efficiency, and performance with considerably reduced manning and maintenance costs; the U.S. Navy's DD(X), the next-generation destroyer program, is considered an extreme example of this trend. This thesis pursues a modeling solution for performing simulation-based analysis in the conceptual or preliminary design stage of an intelligent, self-reconfigurable ship fluid system, which is one of the concepts of DD(X) engineering plant development. Through investigation of the Navy's approach to designing a more survivable ship system, it is found that the current naval simulation-based analysis environment is limited by capability gaps in damage modeling, dynamic model reconfiguration, and the simulation speed of domain-specific models, especially fluid network models. As enablers for filling these gaps, two essential elements were identified in the formulation of the modeling method. The first is a graph-based topological modeling method, employed for rapid model reconstruction and damage modeling, and the second is a recurrent neural network-based, component-level surrogate modeling method, used to improve the affordability and efficiency of the modeling and simulation (M&S) computations. The integration of the two methods can deliver computationally efficient, flexible, and automation-friendly M&S, creating an environment for more rigorous damage analysis and exploration of design alternatives. As a demonstration and evaluation of the developed method, a simulation model of a notional ship fluid system was created and a damage analysis was performed. Next, models representing different design configurations of the fluid system were created, and damage analyses were performed with them in order to find an optimal design configuration for system survivability. Finally, the benefits and drawbacks of the developed method are discussed based on the results of the demonstration.
NASA Technical Reports Server (NTRS)
Bedrossian, Nazareth; Jang, Jiann-Woei; McCants, Edward; Omohundro, Zachary; Ring, Tom; Templeton, Jeremy; Zoss, Jeremy; Wallace, Jonathan; Ziegler, Philip
2011-01-01
Draper Station Analysis Tool (DSAT) is a computer program, built on commercially available software, for simulating and analyzing complex dynamic systems. Heretofore used in designing and verifying guidance, navigation, and control systems of the International Space Station, DSAT has a modular architecture that lends itself to modification for application to spacecraft or terrestrial systems. DSAT consists of user-interface, data-structures, simulation-generation, analysis, plotting, documentation, and help components. DSAT automates the construction of simulations and the process of analysis. DSAT provides a graphical user interface (GUI), plus a Web-enabled interface, similar to the GUI, that enables a remotely located user to gain access to the full capabilities of DSAT via the Internet and Web-browser software. Data structures are used to define the GUI, the Web-enabled interface, simulations, and analyses. Three data structures define the type of analysis to be performed: closed-loop simulation, frequency response, and/or stability margins. DSAT can be executed on almost any workstation, desktop, or laptop computer. DSAT provides better than an order of magnitude improvement in cost, schedule, and risk assessment for simulation-based design and verification of complex dynamic systems.
RRegrs: an R package for computer-aided model selection with multiple regression models.
Tsiliki, Georgia; Munteanu, Cristian R; Seoane, Jose A; Fernandez-Lozano, Carlos; Sarimveis, Haralambos; Willighagen, Egon L
2015-01-01
Predictive regression models can be created with many different modelling approaches. Choices need to be made for data set splitting, cross-validation methods, specific regression parameters and best-model criteria, as they all affect the accuracy and efficiency of the produced predictive models and therefore raise model reproducibility and comparison issues. Cheminformatics and bioinformatics use predictive modelling extensively and exhibit a need for standardization of these methodologies in order to assist model selection and speed up the process of predictive model development. A tool accessible to all users, irrespective of their statistical knowledge, would be valuable if it tested several simple and complex regression models and validation schemes, produced unified reports, and offered the option to be integrated into more extensive studies. Additionally, such a methodology should be implemented as a free programming package, in order to be continuously adapted and redistributed by others. We propose an integrated framework for creating multiple regression models, called RRegrs. The tool offers ten simple and complex regression methods combined with repeated 10-fold and leave-one-out cross-validation. Methods include Multiple Linear regression, Generalized Linear Model with Stepwise Feature Selection, Partial Least Squares regression, Lasso regression, and Support Vector Machines Recursive Feature Elimination. The new framework is an automated, fully validated procedure which produces standardized reports that allow the user to quickly oversee the impact of choices in modelling algorithms and to assess the model and cross-validation results. The methodology was implemented as an open-source R package, available at https://www.github.com/enanomapper/RRegrs, by reusing and extending the caret package. The universality of the new methodology is demonstrated using five standard data sets from different scientific fields. Its efficiency in cheminformatics and QSAR modelling is shown with three use cases: proteomics data for surface-modified gold nanoparticles, nano-metal oxide descriptor data, and molecular descriptors for acute aquatic toxicity data. The results show that for all data sets RRegrs reports models with equal or better performance for both training and test sets than those reported in the original publications. Its good performance, as well as its adaptability in terms of parameter optimization, could make RRegrs a popular framework to assist the initial exploration of predictive models, and with that, the design of more comprehensive in silico screening applications. Graphical abstract: RRegrs is a computer-aided model selection framework for R multiple regression models; it is a fully validated procedure with application to QSAR modelling.
Team-Based Simulations: Learning Ethical Conduct in Teacher Trainee Programs
ERIC Educational Resources Information Center
Shapira-Lishchinsky, Orly
2013-01-01
This study aimed to identify the learning aspects of team-based simulations (TBS) through the analysis of ethical incidents experienced by 50 teacher trainees. A four-dimensional model emerged: learning to make decisions in a "supportive-forgiving" environment; learning to develop standards of care; learning to reduce misconduct; and learning to…
Re'class'ification of 'quant'ified classical simulated annealing
NASA Astrophysics Data System (ADS)
Tanaka, Toshiyuki
2009-12-01
We discuss a classical reinterpretation, based on the quantum-classical correspondence, of the quantum-mechanics-based analysis of classical Markov chains with detailed balance. The classical reinterpretation is then used to demonstrate that it successfully reproduces a sufficient condition on the cooling schedule in classical simulated annealing, which has the inverse-logarithmic scaling.
Michael A. Larson; Frank R., III Thompson; Joshua J. Millspaugh; William D. Dijak; Stephen R. Shifley
2004-01-01
Methods for habitat modeling based on landscape simulations and population viability modeling based on habitat quality are well developed, but no published study of which we are aware has effectively joined them in a single, comprehensive analysis. We demonstrate the application of a population viability model for ovenbirds (Seiurus aurocapillus)...
Cotton, Cary C; Erim, Daniel; Eluri, Swathi; Palmer, Sarah H; Green, Daniel J; Wolf, W Asher; Runge, Thomas M; Wheeler, Stephanie; Shaheen, Nicholas J; Dellon, Evan S
2017-06-01
Topical corticosteroids or dietary elimination are recommended as first-line therapies for eosinophilic esophagitis, but data to directly compare these therapies are scant. We performed a cost utility comparison of topical corticosteroids and the 6-food elimination diet (SFED) in treatment of eosinophilic esophagitis, from the payer perspective. We used a modified Markov model based on current clinical guidelines, in which transition between states depended on histologic response simulated at the individual cohort-member level. Simulation parameters were defined by systematic review and meta-analysis to determine the base-case estimates and bounds of uncertainty for sensitivity analysis. Meta-regression models included adjustment for differences in study and cohort characteristics. In the base-case scenario, topical fluticasone was about as effective as SFED but more expensive at a 5-year time horizon ($9261.58 vs $5719.72 per person). SFED was more effective and less expensive than topical fluticasone and topical budesonide in the base-case scenario. Probabilistic sensitivity analysis revealed little uncertainty in relative treatment effectiveness. There was somewhat greater uncertainty in the relative cost of treatments; most simulations found SFED to be less expensive. In a cost utility analysis comparing topical corticosteroids and SFED for first-line treatment of eosinophilic esophagitis, the therapies were similar in effectiveness. SFED was on average less expensive, and more cost effective in most simulations, than topical budesonide and topical fluticasone, from a payer perspective and not accounting for patient-level costs or quality of life. Copyright © 2017 AGA Institute. Published by Elsevier Inc. All rights reserved.
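To make the modeling machinery concrete, here is a minimal cohort-level Markov sketch in Python in the spirit of the study; the two-state structure, transition probabilities, and per-cycle costs are invented placeholders, not the paper's meta-analysis-calibrated values:

```python
import numpy as np

# states: 0 = histologic remission, 1 = active disease
P = {                        # six-month transition matrices (assumed values)
    "SFED":        np.array([[0.80, 0.20], [0.55, 0.45]]),
    "fluticasone": np.array([[0.78, 0.22], [0.50, 0.50]]),
}
cost_per_cycle = {"SFED": 550.0, "fluticasone": 900.0}   # per person, assumed
cycles = 10                                              # 5-year horizon

for rx, T in P.items():
    state = np.array([0.0, 1.0])          # cohort starts in active disease
    cost = remission_years = 0.0
    for _ in range(cycles):
        state = state @ T                 # advance the cohort one cycle
        cost += cost_per_cycle[rx]
        remission_years += 0.5 * state[0] # half a year per cycle in remission
    print(f"{rx:12s} cost ${cost:7.0f}  remission-years {remission_years:.2f}")
```

The actual analysis simulated histologic response at the individual cohort-member level and propagated parameter uncertainty; this sketch shows only the deterministic cohort backbone.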
ERIC Educational Resources Information Center
Yamaguchi, Yusuke; Sakamoto, Wataru; Goto, Masashi; Staessen, Jan A.; Wang, Jiguang; Gueyffier, Francois; Riley, Richard D.
2014-01-01
When some trials provide individual patient data (IPD) and the others provide only aggregate data (AD), meta-analysis methods for combining IPD and AD are required. We propose a method that reconstructs the missing IPD for AD trials by a Bayesian sampling procedure and then applies an IPD meta-analysis model to the mixture of simulated IPD and…
Liebert, Cara A; Mazer, Laura; Bereknyei Merrell, Sylvia; Lin, Dana T; Lau, James N
2016-09-01
The flipped classroom, a blended learning paradigm that uses pre-session online videos reinforced with interactive sessions, has been proposed as an alternative to traditional lectures. This article investigates medical students' perceptions of a simulation-based, flipped classroom for the surgery clerkship and suggests best practices for implementation in this setting. A prospective cohort of students (n = 89), who were enrolled in the surgery clerkship during a 1-year period, was taught via a simulation-based, flipped classroom approach. Students completed an anonymous, end-of-clerkship survey regarding their perceptions of the curriculum. Quantitative analysis of Likert responses and qualitative analysis of narrative responses were performed. Students' perceptions of the curriculum were positive, with 90% rating it excellent or outstanding. The majority reported the curriculum should be continued (95%) and applied to other clerkships (84%). The component received most favorably by the students was the simulation-based skill sessions. Students rated the effectiveness of the Khan Academy-style videos the highest compared with other video formats (P < .001). Qualitative analysis identified 21 subthemes in 4 domains: general positive feedback, educational content, learning environment, and specific benefits to medical students. The students reported that the learning environment fostered accountability and self-directed learning. Specific perceived benefits included preparation for the clinical rotation and the National Board of Medical Examiners shelf exam, decreased class time, socialization with peers, and faculty interaction. Medical students' perceptions of a simulation-based, flipped classroom in the surgery clerkship were overwhelmingly positive. The flipped classroom approach can be applied successfully in a surgery clerkship setting and may offer additional benefits compared with traditional lecture-based curricula. Copyright © 2016 Elsevier Inc. All rights reserved.
[Simulation and data analysis of stereological modeling based on virtual slices].
Wang, Hao; Shen, Hong; Bai, Xiao-yan
2008-05-01
The aim was to establish a computer-assisted stereological model simulating the process of slice sectioning and to evaluate the relationship between the section surface and the estimated three-dimensional structure. The model was designed mathematically and implemented as Win32 software based on MFC, using Microsoft Visual Studio as the IDE, to simulate an effectively unlimited number of sections and to analyze the data derived from the model. The linearity of the fit of the model was evaluated by comparison with the traditional formula. The Win32 software based on this algorithm allowed random sectioning of particles distributed randomly in an ideal virtual cube. The stereological parameters showed very high rates (>94.5% and 92%) in homogeneity and independence tests. The density, shape, and size data of the sections were tested and conformed to a normal distribution. The output of the model and that from the image analysis system showed statistical correlation and consistency. The algorithm we describe can be used for evaluating the stereological parameters of the structure of tissue slices.
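The essence of such a virtual-slice simulation can be reproduced in a few lines. The Python sketch below (monodisperse spheres and a single section plane are simplifying assumptions, not the software's full model) sections randomly placed spheres and recovers a classical stereological identity:

```python
import numpy as np

rng = np.random.default_rng(42)
R, cube, n = 1.0, 100.0, 50_000   # particle radius, cube side, particle count

# random sphere centers; section the cube with the plane z = cube/2
z = rng.uniform(0.0, cube, n)
d = np.abs(z - cube / 2)                 # center-to-plane distances
hit = d < R                              # spheres actually cut by the plane
profile_r = np.sqrt(R**2 - d[hit]**2)    # radii of the circular profiles

# classical result: the mean profile radius of sectioned spheres is pi*R/4
print(profile_r.mean(), np.pi * R / 4)   # both ~0.785 for R = 1
```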
Dependability analysis of parallel systems using a simulation-based approach. M.S. Thesis
NASA Technical Reports Server (NTRS)
Sawyer, Darren Charles
1994-01-01
The analysis of dependability in large, complex, parallel systems executing real applications or workloads is examined in this thesis. To effectively demonstrate the wide range of dependability problems that can be analyzed through simulation, the analysis of three case studies is presented. For each case, the organization of the simulation model used is outlined, and the results from simulated fault injection experiments are explained, showing the usefulness of this method in dependability modeling of large parallel systems. The simulation models are constructed using DEPEND and C++. Where possible, methods to increase dependability are derived from the experimental results. Another interesting facet of all three cases is the presence of some kind of workload or application executing in the simulation while faults are injected. This provides a completely new dimension to this type of study, one that is not possible to model accurately with analytical approaches.
Enhancing Students' Employability through Business Simulation
ERIC Educational Resources Information Center
Avramenko, Alex
2012-01-01
Purpose: The purpose of this paper is to introduce an approach to business simulation with less dependence on business simulation software to provide innovative work experience within a programme of study, to boost students' confidence and employability. Design/methodology/approach: The paper is based on analysis of existing business simulation…
Effectiveness of Simulation in a Hybrid and Online Networking Course.
ERIC Educational Resources Information Center
Cameron, Brian H.
2003-01-01
Reports on a study that compares the performance of students enrolled in two sections of a Web-based computer networking course: one utilizing a simulation package and the second utilizing a static, graphical software package. Analysis shows statistically significant improvements in performance in the simulation group compared to the…
Simulation Higher Order Language Requirements Study.
ERIC Educational Resources Information Center
Goodenough, John B.; Braun, Christine L.
The definitions provided for high order language (HOL) requirements for programming flight training simulators are based on the analysis of programs written for a variety of simulators. Examples drawn from these programs are used to justify the need for certain HOL capabilities. A description of the general structure and organization of the…
Estimating School Efficiency: A Comparison of Methods Using Simulated Data.
ERIC Educational Resources Information Center
Bifulco, Robert; Bretschneider, Stuart
2001-01-01
Uses simulated data to assess the adequacy of two econometric and linear-programming techniques (data-envelopment analysis and corrected ordinary least squares) for measuring performance-based school reform. In complex data sets (simulated to contain measurement error and endogeneity), these methods are inadequate efficiency measures. (Contains 40…
Solution of AntiSeepage for Mengxi River Based on Numerical Simulation of Unsaturated Seepage
Ji, Youjun; Zhang, Linzhi; Yue, Jiannan
2014-01-01
Lessening the leakage of surface water can reduce the waste of water resources and groundwater pollution. To solve the problem that the Mengxi River could not store water enduringly, geological investigation, theoretical analysis, experimental research, and numerical simulation analysis were carried out. First, the seepage mathematical model was established based on unsaturated seepage theory; second, experimental equipment for testing the hydraulic conductivity of unsaturated soil was developed to obtain the two-phase flow curve. Numerical simulation of leakage under natural conditions confirmed the previous inference about the leakage mechanism of the river. Finally, the seepage control capacities of different impervious materials were compared by numerical simulation, and the impervious material was selected according to the actual engineering conditions. The impervious measure described in this paper has since been proven effective by hydrogeological monitoring. PMID:24707199
Schmidt, Paul; Schmid, Volker J; Gaser, Christian; Buck, Dorothea; Bührlen, Susanne; Förschler, Annette; Mühlau, Mark
2013-01-01
Aiming at iron-related T2-hypointensity, which is related to normal aging and neurodegenerative processes, we here present two practicable approaches, based on Bayesian inference, for preprocessing and statistical analysis of a complex set of structural MRI data. In particular, Markov Chain Monte Carlo methods were used to simulate posterior distributions. First, we rendered a segmentation algorithm that uses outlier detection based on model checking techniques within a Bayesian mixture model. Second, we rendered an analytical tool comprising a Bayesian regression model with smoothness priors (in the form of Gaussian Markov random fields) mitigating the necessity to smooth data prior to statistical analysis. For validation, we used simulated data and MRI data of 27 healthy controls (age: [Formula: see text]; range, [Formula: see text]). We first observed robust segmentation of both simulated T2-hypointensities and gray-matter regions known to be T2-hypointense. Second, simulated data and images of segmented T2-hypointensity were analyzed. We found not only robust identification of simulated effects but also a biologically plausible age-related increase of T2-hypointensity primarily within the dentate nucleus but also within the globus pallidus, substantia nigra, and red nucleus. Our results indicate that fully Bayesian inference can successfully be applied for preprocessing and statistical analysis of structural MRI data.
Overheating Anomalies during Flight Test Due to the Base Bleeding
NASA Technical Reports Server (NTRS)
Luchinsky, Dmitry; Hafiychuck, Halyna; Osipov, Slava; Ponizhovskaya, Ekaterina; Smelyanskiy, Vadim; Dagostino, Mark; Canabal, Francisco; Mobley, Brandon L.
2012-01-01
In this paper we present the results of analytical and numerical studies of the plume interaction with the base flow in the presence of base out-gassing. The physics-based analysis and CFD modeling of base heating for a single solid rocket motor performed in this research addressed the following questions: what are the key factors making the base flow so different from that in the Shuttle [1]; why does CFD analysis of this problem reveal small plume recirculation; what major factors influence the base temperature; and why was overheating initiated at a given time in the flight. To answer these questions, a topological analysis of the base flow was performed, and Korst theory was used to estimate the relative contributions of radiation, plume recirculation, and chemically reactive out-gassing to the base heating. It was shown that base bleeding and the small base volume are the key factors contributing to the overheating, while plume recirculation is effectively suppressed by the asymmetric configuration of the flow formed earlier in the flight. These findings are further verified using CFD simulations that include a multi-species gas environment both in the plume and in the base. Solid particles in the exhaust plume (Al2O3) and char particles in the base bleeding were also included in the simulations, and their relative contributions to the base temperature rise were estimated. The results of the simulations are in good agreement with the temperature and pressure in the base measured during the test.
Comas, Mercè; Arrospide, Arantzazu; Mar, Javier; Sala, Maria; Vilaprinyó, Ester; Hernández, Cristina; Cots, Francesc; Martínez, Juan; Castells, Xavier
2014-01-01
To assess the budgetary impact of switching from screen-film mammography to full-field digital mammography in a population-based breast cancer screening program. A discrete-event simulation model was built to reproduce the breast cancer screening process (biennial mammographic screening of women aged 50 to 69 years) combined with the natural history of breast cancer. The simulation started with 100,000 women and, during a 20-year simulation horizon, new women were dynamically entered according to the aging of the Spanish population. Data on screening were obtained from Spanish breast cancer screening programs. Data on the natural history of breast cancer were based on US data adapted to our population. A budget impact analysis comparing digital with screen-film screening mammography was performed in a sample of 2,000 simulation runs. A sensitivity analysis was performed for crucial screening-related parameters. Distinct scenarios for recall and detection rates were compared. Statistically significant savings were found for overall costs, treatment costs and the costs of additional tests in the long term. The overall cost saving was 1,115,857€ (95%CI from 932,147 to 1,299,567) in the 10th year and 2,866,124€ (95%CI from 2,492,610 to 3,239,638) in the 20th year, representing 4.5% and 8.1% of the overall cost associated with screen-film mammography. The sensitivity analysis showed net savings in the long term. Switching to digital mammography in a population-based breast cancer screening program saves long-term budget expense, in addition to providing technical advantages. Our results were consistent across distinct scenarios representing the different results obtained in European breast cancer screening programs.
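A compact sketch of the discrete-event flavor of such a model is shown below in Python; the per-exam costs, recall rates, and simplified event logic are invented placeholders and deliberately omit the natural-history component of the real model:

```python
import heapq, random

COST = {"digital": 40.0, "film": 32.0, "workup": 180.0}   # assumed unit costs
RECALL = {"digital": 0.04, "film": 0.05}                  # assumed recall rates

def screening_cost(tech, n_women=100_000, horizon=20.0):
    random.seed(7)                          # common random numbers across arms
    events = [(0.0, random.uniform(50, 69)) for _ in range(n_women)]
    heapq.heapify(events)                   # (time of next screen, age) pairs
    total = 0.0
    while events:
        t, age = heapq.heappop(events)
        if t > horizon or age > 69:         # past the horizon or age range
            continue
        total += COST[tech]                 # screening mammogram
        if random.random() < RECALL[tech]:
            total += COST["workup"]         # additional tests after recall
        heapq.heappush(events, (t + 2.0, age + 2.0))   # biennial re-invitation
    return total

print(screening_cost("film") - screening_cost("digital"))  # budget difference
```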
Ekins, Sean; Madrid, Peter B; Sarker, Malabika; Li, Shao-Gang; Mittal, Nisha; Kumar, Pradeep; Wang, Xin; Stratton, Thomas P; Zimmerman, Matthew; Talcott, Carolyn; Bourbon, Pauline; Travers, Mike; Yadav, Maneesh; Freundlich, Joel S
2015-01-01
Integrated computational approaches for Mycobacterium tuberculosis (Mtb) are useful to identify new molecules that could lead to future tuberculosis (TB) drugs. Our approach uses information derived from the TBCyc pathway and genome database, the Collaborative Drug Discovery TB database combined with 3D pharmacophores and dual event Bayesian models of whole-cell activity and lack of cytotoxicity. We have prioritized a large number of molecules that may act as mimics of substrates and metabolites in the TB metabolome. We computationally searched over 200,000 commercial molecules using 66 pharmacophores based on substrates and metabolites from Mtb and further filtering with Bayesian models. We ultimately tested 110 compounds in vitro that resulted in two compounds of interest, BAS 04912643 and BAS 00623753 (MIC of 2.5 and 5 μg/mL, respectively). These molecules were used as a starting point for hit-to-lead optimization. The most promising class proved to be the quinoxaline di-N-oxides, evidenced by transcriptional profiling to induce mRNA level perturbations most closely resembling known protonophores. One of these, SRI58 exhibited an MIC = 1.25 μg/mL versus Mtb and a CC50 in Vero cells of >40 μg/mL, while featuring fair Caco-2 A-B permeability (2.3 × 10^-6 cm/s), kinetic solubility (125 μM at pH 7.4 in PBS) and mouse metabolic stability (63.6% remaining after 1 h incubation with mouse liver microsomes). Despite demonstration of how a combined bioinformatics/cheminformatics approach afforded a small molecule with promising in vitro profiles, we found that SRI58 did not exhibit quantifiable blood levels in mice.
I PASS: an interactive policy analysis simulation system.
Doug Olson; Con Schallau; Wilbur Maki
1984-01-01
This paper describes an interactive policy analysis simulation system (IPASS) that can be used to analyze the long-term economic and demographic effects of alternative forest resource management policies. The IPASS model is a dynamic analytical tool that forecasts the growth and development of an economy. It allows the user to introduce changes in selected parameters based...
Planning for Retrospective Conversion: A Simulation of the OCLC TAPECON Service.
ERIC Educational Resources Information Center
Hanson, Heidi; Pronevitz, Gregory
1989-01-01
Describes a simulation of OCLC's TAPECON retrospective conversion service and its impact on an online catalog in a large university research library. The analysis includes results of Library of Congress Card Numbers, author/title, and title searches, and hit rates based on an analysis of OCLC and locally generated reports. (three references)…
Experimental analysis of computer system dependability
NASA Technical Reports Server (NTRS)
Iyer, Ravishankar, K.; Tang, Dong
1993-01-01
This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
Accuracy of lung nodule density on HRCT: analysis by PSF-based image simulation.
Ohno, Ken; Ohkubo, Masaki; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi
2012-11-08
A computed tomography (CT) image simulation technique based on the point spread function (PSF) was applied to analyze the accuracy of CT-based clinical evaluations of lung nodule density. The PSF of the CT system was measured and used to perform the lung nodule image simulation. Then, the simulated image was resampled at intervals equal to the pixel size and the slice interval found in clinical high-resolution CT (HRCT) images. On those images, the nodule density was measured by placing a region of interest (ROI) of the kind commonly used in routine clinical practice, and the measured value was compared with the true value (the known density of the object function used in the image simulation). It was quantitatively determined that the measured nodule density depended on the nodule diameter and the image reconstruction parameters (kernel and slice thickness). In addition, the measured density fluctuated depending on the offset between the nodule center and the image voxel center. This fluctuation was reduced by decreasing the slice interval (i.e., with the use of overlapping reconstruction), leading to a stable density evaluation. Our proposed method of PSF-based image simulation accompanied by resampling enables a quantitative analysis of the accuracy of CT-based evaluations of lung nodule density. These results could potentially reveal clinical misreadings in diagnosis and lead to more accurate and precise density evaluations. They would also be of value for determining the optimum scan and reconstruction parameters, such as image reconstruction kernels and slice thicknesses/intervals.
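The flavor of the method is easy to reproduce. In the sketch below (Python; a Gaussian in-plane PSF and all dimensions are assumptions, whereas the paper measured its scanner's actual PSF), a known-density nodule is blurred and then resampled at a clinical pixel size, showing how the measured ROI density shifts with the nodule-to-voxel-center offset:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

grid = 0.1                                    # simulation sampling, mm
x = np.arange(-20, 20, grid)
X, Y = np.meshgrid(x, x)
nodule = np.where(X**2 + Y**2 <= 4.0**2, 100.0, 0.0)   # 8 mm nodule, 100 HU

psf_sigma_mm = 0.8                            # assumed Gaussian PSF width
blurred = gaussian_filter(nodule, sigma=psf_sigma_mm / grid)

pixel = 0.6                                   # clinical pixel size, mm
step = int(round(pixel / grid))
for offset in range(step):                    # sub-pixel offsets, 0 to 0.5 mm
    img = blurred[offset::step, offset::step] # resample to the HRCT pixel grid
    roi = img[img > 50.0]                     # crude ROI over the nodule
    print(f"offset {offset * grid:.1f} mm -> density {roi.mean():.1f} HU")
```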
NASA Astrophysics Data System (ADS)
Zhang, Jun; Li, Ri Yi
2018-06-01
Building energy simulation is an important supporting tool for green building design and building energy consumption assessment. At present, building energy simulation software cannot meet the needs of energy consumption analysis and cabinet-level micro-environment control design for prefabricated buildings. A thermal physical model of prefabricated buildings is proposed in this paper; based on this model, the energy consumption calculation software for prefabricated cabin buildings (PCES) was developed. PCES supports building parameter setting, energy consumption simulation, and analysis of the building thermal process and energy consumption.
Three-Dimensional Numerical Simulation to Mud Turbine for LWD
NASA Astrophysics Data System (ADS)
Yao, Xiaojiang; Dong, Jingxin; Shang, Jie; Zhang, Guanqi
A hydraulic performance analysis is presented for a type of turbine-driven generator used for LWD. The simulation models were built with the CFD analysis software FINE/Turbo, and a full three-dimensional numerical simulation was carried out for the impeller group. The hydraulic parameters, such as power, speed, and pressure drop, were calculated for two kinds of media: water and mud. An experiment was set up in a water environment, and the error of the numerical simulation, verified by experiment, was less than 6%. Based on these results, rational proposals are given for choosing appropriate impellers, and rational design methods are explored.
NASA Astrophysics Data System (ADS)
Fedulov, Boris N.; Safonov, Alexander A.; Sergeichev, Ivan V.; Ushakov, Andrey E.; Klenin, Yuri G.; Makarenko, Irina V.
2016-10-01
The application of composites to the construction of subway brackets is a very effective approach to extending their lifetime. However, this approach involves the necessity of preventing process-induced distortions of the bracket due to thermal deformation and chemical shrinkage. In the present study, a process simulation has been carried out to support the design of the production tooling. The simulation was based on the application of a viscoelastic model for the resin. The simulation results were verified by comparison with the results of manufacturing experiments. To optimize the bracket structure, a strength analysis was carried out as well.
Ertl, Peter; Patiny, Luc; Sander, Thomas; Rufener, Christian; Zasso, Michaël
2015-01-01
Wikipedia, the world's largest and most popular encyclopedia, is an indispensable source of chemistry information. It contains, among others, entries for over 15,000 chemicals including metabolites, drugs, agrochemicals and industrial chemicals. To provide easy access to this wealth of information we decided to develop a substructure and similarity search tool for chemical structures referenced in Wikipedia. We extracted chemical structures from entries in Wikipedia and implemented a web system allowing structure and similarity searching on these data. The whole search as well as visualization system is written in JavaScript and therefore can run locally within a web page and does not require a central server. The Wikipedia Chemical Structure Explorer is accessible on-line at www.cheminfo.org/wikipedia and is also available as an open-source project from GitHub for local installation. The web-based Wikipedia Chemical Structure Explorer provides a useful resource for research as well as for chemical education, enabling both researchers and students easy and user-friendly chemistry searching and identification of relevant information in Wikipedia. The tool can also help to improve the quality of chemical entries in Wikipedia by providing potential contributors with a regularly updated list of entries with problematic structures. And last but not least, this search system is a nice example of how modern web technology can be applied in the field of cheminformatics. Graphical abstract: Wikipedia Chemical Structure Explorer allows substructure and similarity searches on molecules referenced in Wikipedia.
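For readers who want to experiment with the underlying operations, here is a hedged sketch using the open-source RDKit toolkit in Python, rather than the Explorer's own in-browser JavaScript engine; the three-molecule 'library' and the query stand in for the extracted Wikipedia structure set:

```python
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

library = {                                   # tiny stand-in structure set
    "aspirin":   "CC(=O)Oc1ccccc1C(=O)O",
    "ibuprofen": "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
    "caffeine":  "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
}
mols = {name: Chem.MolFromSmiles(s) for name, s in library.items()}

# substructure search: carboxylic acid, expressed as a SMARTS pattern
pattern = Chem.MolFromSmarts("C(=O)[OH]")
print([n for n, m in mols.items() if m.HasSubstructMatch(pattern)])

# similarity search against a query molecule, via Morgan fingerprints
query = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)OC")   # hypothetical query
qfp = AllChem.GetMorganFingerprintAsBitVect(query, 2, nBits=2048)
for name, m in mols.items():
    fp = AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048)
    print(name, round(DataStructs.TanimotoSimilarity(qfp, fp), 2))
```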
Jabor, A; Vlk, T; Boril, P
1996-04-15
We designed a simulation model for the assessment of the financial risks involved when a new diagnostic test is introduced in the laboratory. The model is based on a neural network consisting of ten neurons and assumes that input entities can be assigned an appropriate uncertainty. Simulations are done on a 1-day interval basis. Risk analysis completes the model, and the financial effects are evaluated over a selected time period. The basic output of the simulation consists of the total expenses and income during the simulation time, the net present value of the project at the end of the simulation, the total number of control samples during the simulation, the total number of patients evaluated, and the total number of kits used.
Gemignani, Jessica; Middell, Eike; Barbour, Randall L; Graber, Harry L; Blankertz, Benjamin
2018-04-04
The statistical analysis of functional near infrared spectroscopy (fNIRS) data based on the general linear model (GLM) is often made difficult by serial correlations, high inter-subject variability of the hemodynamic response, and the presence of motion artifacts. In this work we propose to extract information on the pattern of hemodynamic activations without using any a priori model for the data, by classifying the channels as 'active' or 'not active' with a multivariate classifier based on linear discriminant analysis (LDA). This work is developed in two steps. First we compared the performance of the two analyses, using a synthetic approach in which simulated hemodynamic activations were combined with either simulated or real resting-state fNIRS data. This procedure allowed for exact quantification of the classification accuracies of GLM and LDA. In the case of real resting-state data, the correlations between classification accuracy and demographic characteristics were investigated by means of a Linear Mixed Model. In the second step, to further characterize the reliability of the newly proposed analysis method, we conducted an experiment in which participants had to perform a simple motor task and data were analyzed with the LDA-based classifier as well as with the standard GLM analysis. The results of the simulation study show that the LDA-based method achieves higher classification accuracies than the GLM analysis, and that the LDA results are more uniform across different subjects and, in contrast to the accuracies achieved by the GLM analysis, have no significant correlations with any of the demographic characteristics. Findings from the real-data experiment are consistent with the results of the real-plus-simulation study, in that the GLM-analysis results show greater inter-subject variability than do the corresponding LDA results. The results obtained suggest that the outcome of GLM analysis is highly vulnerable to violations of theoretical assumptions, and that therefore a data-driven approach such as that provided by the proposed LDA-based method is to be favored.
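A minimal sketch of the channel-classification idea, using scikit-learn's LDA on synthetic per-channel features (the paper's feature extraction and validation scheme are more elaborate; everything below is an invented stand-in):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400   # examples per class

# synthetic per-channel features (e.g., mean HbO change, slope, task-rest
# difference); label 1 = 'active', label 0 = 'not active'
X_active = rng.normal(loc=[0.8, 0.5, 0.6], scale=0.5, size=(n, 3))
X_rest   = rng.normal(loc=[0.0, 0.0, 0.0], scale=0.5, size=(n, 3))
X = np.vstack([X_active, X_rest])
y = np.r_[np.ones(n), np.zeros(n)]

clf = LinearDiscriminantAnalysis()
acc = cross_val_score(clf, X, y, cv=5)   # cross-validated accuracy
print("mean CV accuracy:", acc.mean().round(3))
```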
NASA Astrophysics Data System (ADS)
Guy, N.; Seyedi, D. M.; Hild, F.
2018-06-01
The work presented herein aims at characterizing and modeling fracturing (i.e., initiation and propagation of cracks) in a clay-rich rock. The analysis is based on two experimental campaigns. The first one relies on a probabilistic analysis of crack initiation considering Brazilian and three-point flexural tests. The second one involves digital image correlation to characterize crack propagation. A nonlocal damage model based on stress regularization is used for the simulations. Two thresholds both based on regularized stress fields are considered. They are determined from the experimental campaigns performed on Lower Watrous rock. The results obtained with the proposed approach are favorably compared with the experimental results.
Specialty Payment Model Opportunities and Assessment: Oncology Simulation Report.
White, Chapin; Chan, Chris; Huckfeldt, Peter J; Kofner, Aaron; Mulcahy, Andrew W; Pollak, Julia; Popescu, Ioana; Timbie, Justin W; Hussey, Peter S
2015-07-15
This article describes the results of a simulation analysis of a payment model for specialty oncology services that is being developed for possible testing by the Center for Medicare and Medicaid Innovation at the Centers for Medicare & Medicaid Services (CMS). CMS asked MITRE and RAND to conduct simulation analyses to preview some of the possible impacts of the payment model and to inform design decisions related to the model. The simulation analysis used an episode-level dataset based on Medicare fee-for-service (FFS) claims for historical oncology episodes provided to Medicare FFS beneficiaries in 2010. Under the proposed model, participating practices would continue to receive FFS payments, would also receive per-beneficiary per-month care management payments for episodes lasting up to six months, and would be eligible for performance-based payments based on per-episode spending for attributed episodes relative to a per-episode spending target. The simulation offers several insights into the proposed payment model for oncology: (1) The care management payments used in the simulation analysis ($960 total per six-month episode) represent only 4 percent of projected average total spending per episode (around $27,000 in 2016), but they are large relative to the FFS revenues of participating oncology practices, which are projected to be around $2,000 per oncology episode. By themselves, the care management payments would increase physician practices' Medicare revenues by roughly 50 percent on average. This represents a substantial new outlay for the Medicare program and a substantial new source of revenues for oncology practices. (2) For the Medicare program to break even, participating oncology practices would have to reduce utilization and intensity by roughly 4 percent. (3) The break-even point can be reduced if the care management payments are reduced or if the performance-based payments are reduced.
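The abstract's headline percentages follow directly from its dollar figures; the small Python check below uses only numbers quoted above:

```python
care_mgmt     = 960.0     # care management payment per six-month episode
episode_total = 27_000.0  # projected average total spending per episode (2016)
practice_ffs  = 2_000.0   # practice FFS revenue per oncology episode

print(f"share of episode spending:  {care_mgmt / episode_total:.1%}")  # ~3.6%, 'only 4 percent'
print(f"practice revenue increase: {care_mgmt / practice_ffs:.0%}")    # 48%, 'roughly 50 percent'
# Medicare breaks even when utilization and intensity fall enough to offset
# the new outlay, i.e. by care_mgmt / episode_total, roughly 4 percent.
```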
The Co-simulation of Humanoid Robot Based on Solidworks, ADAMS and Simulink
NASA Astrophysics Data System (ADS)
Song, Dalei; Zheng, Lidan; Wang, Li; Qi, Weiwei; Li, Yanli
A simulation method for an adaptive controller of a humanoid robot system is proposed, based on co-simulation with Solidworks, ADAMS and Simulink. This method avoids a complex mathematical modeling process and fully exploits the real-time dynamic simulation capability of Simulink; it can also be generalized to other complicated control systems. The method is adopted to build and analyze the model of the humanoid robot, and the trajectory tracking and adaptive controller design proceed on this basis. The trajectory-tracking performance is evaluated by least-squares curve fitting. Comparative analysis shows that the anti-interference capability of the robot is considerably improved.
NASA Technical Reports Server (NTRS)
Morrissey, L. A.; Weinstock, K. J.; Mouat, D. A.; Card, D. H.
1984-01-01
This research evaluates Thematic Mapper Simulator (TMS) data for the geobotanical discrimination of rock types based on vegetative cover characteristics. A methodology for accomplishing this evaluation utilizing univariate and multivariate techniques is presented. TMS data acquired with a Daedalus DEI-1260 multispectral scanner were integrated with vegetation and geologic information for subsequent statistical analyses, which included a chi-square test, an analysis of variance, stepwise discriminant analysis, and Duncan's multiple range test. Results indicate that ultramafic rock types are spectrally separable from nonultramafics based on vegetative cover through the use of statistical analyses.
Magnetic field simulation and shimming analysis of 3.0T superconducting MRI system
NASA Astrophysics Data System (ADS)
Yue, Z. K.; Liu, Z. Z.; Tang, G. S.; Zhang, X. C.; Duan, L. J.; Liu, W. C.
2018-04-01
The 3.0T superconducting magnetic resonance imaging (MRI) system has become the mainstream of modern clinical MRI because of its high field intensity and its high degree of uniformity and stability, and it has broad prospects in scientific research and other fields. In this paper we analyze the principles of magnet design. We also perform the magnetic field simulation and shimming analysis of the first 3.0T/850 superconducting MRI system in the world using the Ansoft Maxwell simulation software. The production and optimization of the prototype are guided by the results of the simulation analysis, so that the magnetic field strength, uniformity and stability of the prototype achieve the expected targets.
NASA Astrophysics Data System (ADS)
Christian, Paul M.
2002-07-01
This paper presents a demonstrated approach to significantly reduce the cost and schedule of non-real-time modeling and simulation, real-time HWIL simulation, and embedded code development. The tool and the methodology presented capitalize on a paradigm that has become a standard operating procedure in the automotive industry. The tool described is known as the Aerospace Toolbox, and it is based on the MathWorks Matlab/Simulink framework, which is a COTS application. Extrapolation of automotive industry data and initial applications in the aerospace industry show that the use of the Aerospace Toolbox can make significant contributions in the quest by NASA and other government agencies to meet aggressive cost reduction goals in development programs. Part I of this paper provided a detailed description of the GUI-based Aerospace Toolbox and how it is used in every step of a development program, from quick prototyping of concept developments that leverage built-in point-of-departure simulations through to detailed design, analysis, and testing. Some of the attributes addressed included its versatility in modeling 3 to 6 degrees of freedom, its library of flight-test-validated models (including physics, environments, hardware, and error sources), and its built-in Monte Carlo capability. Other topics that were covered in part I included flight vehicle models and algorithms, and the covariance analysis package, Navigation System Covariance Analysis Tools (NavSCAT). Part II of this series will cover a more in-depth look at the analysis and simulation capability and provide an update on the toolbox enhancements. It will also address how the Toolbox can be used as a design hub for Internet-based collaborative engineering tools such as NASA's Intelligent Synthesis Environment (ISE) and Lockheed Martin's Interactive Missile Design Environment (IMD).
Simulating urban land cover changes at sub-pixel level in a coastal city
NASA Astrophysics Data System (ADS)
Zhao, Xiaofeng; Deng, Lei; Feng, Huihui; Zhao, Yanchuang
2014-10-01
The simulation of urban expansion or land cover change is a major theme in both geographic information science and landscape ecology. Yet until now, almost all previous studies have been based on grid computations at the pixel level. With the prevalence of spectral mixture analysis in urban land cover research, the simulation of urban land cover at the sub-pixel level is being put on the agenda. This study provides a new approach to land cover simulation at the sub-pixel level. Landsat TM/ETM+ images of Xiamen city, China from January 2002 and January 2007 were used to acquire land cover data through supervised classification. The two classified land cover maps were then used to extract the transformation rule between 2002 and 2007 using logistic regression. The transformation possibility of each land cover type in a given pixel was taken as its percent cover in that pixel after normalization, and cellular automata (CA) based grid computation was carried out to acquire the simulated land cover for 2007. The simulated 2007 sub-pixel land cover was verified against a validated sub-pixel land cover map obtained by spectral mixture analysis in our previous studies for the same date. Finally, the sub-pixel land cover for 2017 was simulated for urban planning and management. The results show that our method is useful for land cover simulation at the sub-pixel level. Although the simulation accuracy is not yet satisfactory for all land cover types, the approach provides an important idea and a good start for CA-based urban land cover simulation.
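A minimal sketch of the pipeline described above, with hypothetical inputs (the study's actual driver variables and CA neighbourhood rules are not given in the abstract):

```python
# Sketch: logistic-regression transition probabilities normalized into
# sub-pixel class fractions, followed by a schematic CA smoothing step.
import numpy as np
from sklearn.linear_model import LogisticRegression

n_pixels, n_classes = 10_000, 4
X = np.random.rand(n_pixels, 6)                 # stand-in per-pixel drivers
y = np.random.randint(0, n_classes, n_pixels)   # stand-in t1 class labels

# Step 1: learn the t1 -> t2 transformation rule (dummy data here).
model = LogisticRegression(max_iter=1000).fit(X, y)

# Step 2: per-pixel class probabilities, normalized to sum to 1 -- these
# "transformation possibilities" become the sub-pixel percent cover.
proba = model.predict_proba(X)
fractions = proba / proba.sum(axis=1, keepdims=True)

# Step 3 (CA step, schematic): blend each pixel's fractions with its
# 4-neighbour mean to mimic a cellular-automata contiguity effect.
grid = fractions.reshape(100, 100, n_classes)
neigh = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
         np.roll(grid, 1, 1) + np.roll(grid, -1, 1)) / 4.0
grid_next = 0.7 * grid + 0.3 * neigh            # weights illustrative only
```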
Modeling languages for biochemical network simulation: reaction vs equation based approaches.
Wiechert, Wolfgang; Noack, Stephan; Elsheikh, Atya
2010-01-01
Biochemical network modeling and simulation is an essential task in any systems biology project. The systems biology markup language (SBML) was established as a standardized model exchange language for mechanistic models. A specific strength of SBML is that numerous tools for formulating, processing, simulation and analysis of models are freely available. Interestingly, in the field of multidisciplinary simulation, the problem of model exchange between different simulation tools occurred much earlier. Several general modeling languages like Modelica have been developed in the 1990s. Modelica enables an equation based modular specification of arbitrary hierarchical differential algebraic equation models. Moreover, libraries for special application domains can be rapidly developed. This contribution compares the reaction based approach of SBML with the equation based approach of Modelica and explains the specific strengths of both tools. Several biological examples illustrating essential SBML and Modelica concepts are given. The chosen criteria for tool comparison are flexibility for constraint specification, different modeling flavors, hierarchical, modular and multidisciplinary modeling. Additionally, support for spatially distributed systems, event handling and network analysis features is discussed. As a major result it is shown that the choice of the modeling tool has a strong impact on the expressivity of the specified models but also strongly depends on the requirements of the application context.
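To make the contrast concrete, here is a small illustration in plain Python (not actual SBML or Modelica syntax): the same two-species model specified reaction-by-reaction, SBML-style, and then as explicit balance equations, Modelica-style. Both formulations integrate to the same trajectories.

```python
# Reaction-based vs. equation-based specification of A <-> B.
from scipy.integrate import solve_ivp

# Reaction-based (SBML-like): list reactions; derive rate equations.
reactions = [
    {"reactants": {"A": 1}, "products": {"B": 1}, "k": 0.5},  # A -> B
    {"reactants": {"B": 1}, "products": {"A": 1}, "k": 0.1},  # B -> A
]
species = ["A", "B"]

def rhs_from_reactions(t, y):
    conc = dict(zip(species, y))
    dydt = {s: 0.0 for s in species}
    for r in reactions:
        rate = r["k"]
        for s, n in r["reactants"].items():
            rate *= conc[s] ** n          # mass-action rate law
        for s, n in r["reactants"].items():
            dydt[s] -= n * rate
        for s, n in r["products"].items():
            dydt[s] += n * rate
    return [dydt[s] for s in species]

# Equation-based (Modelica-like): write the balance equations directly.
def rhs_equations(t, y):
    A, B = y
    return [-0.5 * A + 0.1 * B, 0.5 * A - 0.1 * B]

sol = solve_ivp(rhs_from_reactions, (0, 20), [1.0, 0.0])
```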
Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R
2014-01-01
Over the past two decades finite element (FE) analysis has become a popular tool for researchers seeking to simulate the biomechanics of the healthy and diabetic foot. The primary aims of these simulations have been to improve our understanding of the foot's complicated mechanical loading in health and disease and to inform interventions designed to prevent plantar ulceration, a major complication of diabetes. This article provides a systematic review and summary of the findings from FE analysis-based computational simulations of the diabetic foot. A systematic literature search was carried out and 31 relevant articles were identified covering three primary themes: methodological aspects relevant to modelling the diabetic foot; investigations of the pathomechanics of the diabetic foot; and simulation-based design of interventions to reduce ulceration risk. Methodological studies illustrated appropriate use of FE analysis for simulation of foot mechanics, incorporating nonlinear tissue mechanics, contact and rigid body movements. FE studies of pathomechanics have provided estimates of internal soft tissue stresses, and suggest that such stresses may often be considerably larger than those measured at the plantar surface and are proportionally greater in the diabetic foot compared to controls. FE analysis has also allowed evaluation of insole performance and the development of new insole designs, footwear and corrective surgery, effectively informing intervention strategies. The technique also presents the opportunity to simulate the effect of changes associated with the diabetic foot on non-mechanical factors such as blood supply to local tissues. While significant advancement in diabetic foot research has been made possible by the use of FE analysis, translational utility of this powerful tool for routine clinical care at the patient level requires adoption of cost-effective (both in terms of labour and computation) and reliable approaches with clear clinical validity for decision making.
NASA Astrophysics Data System (ADS)
Riah, Zoheir; Sommet, Raphael; Nallatamby, Jean C.; Prigent, Michel; Obregon, Juan
2004-05-01
We present in this paper a set of coherent tools for noise characterization and physics-based analysis of noise in semiconductor devices. This noise toolbox relies on a low-frequency noise measurement setup with special high-current capabilities, thanks to an accurate and original calibration. It also relies on a simulation tool based on the drift-diffusion equations and linear perturbation theory, associated with the Green's function technique. This physics-based noise simulator has been implemented successfully in the Scilab environment and is specifically dedicated to HBTs. Some results are given and compared to those existing in the literature.
Ostermeir, Katja; Zacharias, Martin
2014-12-01
Coarse-grained elastic network models (ENM) of proteins offer a low-resolution representation of protein dynamics and directions of global mobility. A Hamiltonian-replica exchange molecular dynamics (H-REMD) approach has been developed that combines information extracted from an ENM analysis with atomistic explicit solvent MD simulations. Based on a set of centers representing rigid segments (centroids) of a protein, a distance-dependent biasing potential is constructed by means of an ENM analysis to promote and guide centroid/domain rearrangements. The biasing potentials are added with different magnitude to the force field description of the MD simulation along the replicas with one reference replica under the control of the original force field. The magnitude and the form of the biasing potentials are adapted during the simulation based on the average sampled conformation to reach a near constant biasing in each replica after equilibration. This allows for canonical sampling of conformational states in each replica. The application of the methodology to a two-domain segment of the glycoprotein 130 and to the protein cyanovirin-N indicates significantly enhanced global domain motions and improved conformational sampling compared with conventional MD simulations. © 2014 Wiley Periodicals, Inc.
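Schematically (notation assumed here, not taken from the paper), replica $m$ runs under

$$H_m \;=\; H_{\mathrm{FF}} \;+\; \lambda_m\,V_{\mathrm{ENM}}, \qquad \lambda_1 = 0,$$

where $H_{\mathrm{FF}}$ is the unmodified force field and $V_{\mathrm{ENM}}$ the ENM-derived, distance-dependent biasing potential on the centroids; neighbouring replicas exchange configurations via the usual Metropolis criterion, so the reference replica ($\lambda_1 = 0$) retains canonical sampling under the original force field.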
WholeCellSimDB: a hybrid relational/HDF database for whole-cell model predictions.
Karr, Jonathan R; Phillips, Nolan C; Covert, Markus W
2014-01-01
Mechanistic 'whole-cell' models are needed to develop a complete understanding of cell physiology. However, extracting biological insights from whole-cell models requires running and analyzing large numbers of simulations. We developed WholeCellSimDB, a database for organizing whole-cell simulations. WholeCellSimDB was designed to enable researchers to search simulation metadata to identify simulations for further analysis, and quickly slice and aggregate simulation results data. In addition, WholeCellSimDB enables users to share simulations with the broader research community. The database uses a hybrid relational/hierarchical data format architecture to efficiently store and retrieve both simulation setup metadata and results data. WholeCellSimDB provides a graphical Web-based interface to search, browse, plot and export simulations; a JavaScript Object Notation (JSON) Web service to retrieve data for Web-based visualizations; a command-line interface to deposit simulations; and a Python API to retrieve data for advanced analysis. Overall, we believe WholeCellSimDB will help researchers use whole-cell models to advance basic biological science and bioengineering. http://www.wholecellsimdb.org SOURCE CODE REPOSITORY: URL: http://github.com/CovertLab/WholeCellSimDB. © The Author(s) 2014. Published by Oxford University Press.
NASA Astrophysics Data System (ADS)
Unni, Vineet; Sankara Narayanan, E. M.
2017-04-01
This is the first report on the numerical analysis of the performance of nanoscale vertical superjunction structures based on impurity doping and an innovative approach that utilizes the polarisation properties inherent in III-V nitride semiconductors. Such nanoscale vertical polarisation super junction structures can be realized by employing a combination of epitaxial growth along the non-polar crystallographic axes of Wurtzite GaN and nanolithography-based processing techniques. Detailed numerical simulations clearly highlight the limitations of a doping based approach and the advantages of the proposed solution for breaking the unipolar one-dimensional material limits of GaN by orders of magnitude.
Comparison of an Agent-based Model of Disease Propagation with the Generalised SIR Epidemic Model
2009-08-01
…has become a practical method for conducting epidemiological modelling. In the agent-based approach the whole township can be modelled as a system of… SIR system was initially developed based on a very simplified model of social interaction. For instance an assumption of uniform population mixing was… simulating the progress of a disease within a host and of transmission between hosts is based upon Transportation Analysis and Simulation System
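For reference, the generalised SIR system contrasted with the agent-based model reduces, under the uniform-mixing assumption, to three coupled ODEs; a minimal sketch with illustrative rates:

```python
# Classical SIR under uniform mixing (the simplification the report
# contrasts with agent-based simulation); rates are illustrative.
from scipy.integrate import solve_ivp

beta, gamma = 0.3, 0.1   # transmission and recovery rates
N = 10_000               # township population

def sir(t, y):
    S, I, R = y
    dS = -beta * S * I / N          # uniform mixing: all pairs equally likely
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

sol = solve_ivp(sir, (0, 200), [N - 1, 1, 0], dense_output=True)
```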
Notional Scoring for Technical Review Weighting As Applied to Simulation Credibility Assessment
NASA Technical Reports Server (NTRS)
Hale, Joseph Peter; Hartway, Bobby; Thomas, Danny
2008-01-01
NASA's Modeling and Simulation Standard requires a credibility assessment for critical engineering data produced by models and simulations. Credibility assessment is thus a "qualifying factor" in reporting results from simulation-based analysis. The degree to which assessors should be independent of the simulation developers, users and decision makers is a recurring question. This paper provides alternative "weighting algorithms" for calculating the value added by independence of the levels of technical review defined for the NASA Modeling and Simulation Standard.
Simulation-Based Training for Colonoscopy
Preisler, Louise; Svendsen, Morten Bo Søndergaard; Nerup, Nikolaj; Svendsen, Lars Bo; Konge, Lars
2015-01-01
The aim of this study was to create simulation-based tests with credible pass/fail standards for 2 different fidelities of colonoscopy models. Only competent practitioners should perform colonoscopy. Reliable and valid simulation-based tests could be used to establish basic competency in colonoscopy before practicing on patients. Twenty-five physicians (10 consultants with endoscopic experience and 15 fellows with very little endoscopic experience) were tested on 2 different simulator models: a virtual-reality simulator and a physical model. Tests were repeated twice on each simulator model. Metrics with discriminatory ability were identified for both modalities and reliability was determined. The contrasting-groups method was used to create pass/fail standards and the consequences of these were explored. The consultants performed significantly faster and scored significantly higher than the fellows on both models (P < 0.001). Reliability analysis showed Cronbach α = 0.80 and 0.87 for the virtual-reality and the physical model, respectively. The established pass/fail standards failed one of the consultants (virtual-reality simulator) and allowed one fellow to pass (physical model). The 2 tested simulation-based modalities provided reliable and valid assessments of competence in colonoscopy, and credible pass/fail standards were established for both tests. We propose to use these standards in simulation-based training programs before proceeding to supervised training on patients.
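A minimal sketch of the contrasting-groups step, assuming a normal approximation and illustrative scores (not the study's data): the pass/fail cutoff is placed where the two groups' score densities intersect.

```python
# Contrasting-groups standard setting: cutoff at the intersection of the
# experienced (consultant) and novice (fellow) score distributions.
import numpy as np

consultants = np.array([82, 88, 79, 91, 85, 87, 84, 90, 86, 83], float)  # made up
fellows = np.array([60, 55, 72, 64, 58, 69, 61, 66, 57, 63,
                    70, 62, 59, 65, 68], float)                          # made up

m1, s1 = consultants.mean(), consultants.std(ddof=1)
m0, s0 = fellows.mean(), fellows.std(ddof=1)

# Equating the two normal densities gives a quadratic in the score x.
a = 1 / s0**2 - 1 / s1**2
b = -2 * (m0 / s0**2 - m1 / s1**2)
c = m0**2 / s0**2 - m1**2 / s1**2 + 2 * np.log(s0 / s1)
roots = np.roots([a, b, c])
cutoff = roots[(roots > m0) & (roots < m1)][0]   # the root between the means
print(f"pass/fail standard ≈ {cutoff:.1f}")
```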
The role of simulation in the design of a neural network chip
NASA Technical Reports Server (NTRS)
Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.
1993-01-01
An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication process variations on the overall performance of the neural network chip.
Find novel dual-agonist drugs for treating type 2 diabetes by means of cheminformatics.
Liu, Lei; Ma, Ying; Wang, Run-Ling; Xu, Wei-Ren; Wang, Shu-Qing; Chou, Kuo-Chen
2013-01-01
The high prevalence of type 2 diabetes mellitus in the world, as well as the increasing reports of adverse side effects of existing diabetes treatment drugs, have made developing new and effective drugs against the disease a very high priority. In this study, we report ten novel compounds found by targeting peroxisome proliferator-activated receptors (PPARs) using virtual screening and core hopping approaches. PPARs have drawn increasing attention for developing novel drugs to treat diabetes due to their unique functions in regulating glucose, lipid, and cholesterol metabolism. The reported compounds are featured with dual functions, and hence belong to the category of dual agonists. Compared with single PPAR agonists, the dual PPAR agonists, formed by combining the lipid benefit of PPARα agonists (such as fibrates) and the glycemic advantages of PPARγ agonists (such as thiazolidinediones), are much more powerful in treating diabetes because they can enhance metabolic effects while minimizing side effects. Studies of molecular dynamics simulations, as well as of absorption, distribution, metabolism, and excretion, showed that these novel dual agonists not only possessed the same function as ragaglitazar (an investigational drug developed by Novo Nordisk for treating type 2 diabetes) in activating PPARα and PPARγ, but also had more favorable conformations for binding to the two receptors. Moreover, the residues involved in forming the binding pockets of PPARα and PPARγ among the top ten compounds are explicitly presented, which will be very useful for conducting in-depth mutagenesis experiments. It is anticipated that the ten compounds may become potential drug candidates, or at the very least, the findings reported here may stimulate new strategies or provide useful insights for designing new and more powerful dual-agonist drugs for treating type 2 diabetes.
Hydrodynamics Analysis and CFD Simulation of Portal Venous System by TIPS and LS.
Wang, Meng; Zhou, Hongyu; Huang, Yaozhen; Gong, Piyun; Peng, Bing; Zhou, Shichun
2015-06-01
In cirrhotic patients, portal hypertension is often associated with hyperdynamic circulatory changes. Transjugular intrahepatic portosystemic shunt (TIPS) and laparoscopic splenectomy are both treatments for liver cirrhosis with portal hypertension. However, the two interventions affect postoperative hemodynamics differently, and their likelihoods of triggering portal vein thrombosis (PVT) differ; how the hemodynamics of the portal venous system evolve after each operation remains unknown. Based on ultrasound data and established numerical methods, CFD techniques are applied to analyze hemodynamic changes after TIPS and laparoscopic splenectomy. In this paper, we applied two 3-D flow models to the hemodynamic analysis of two patients who received a TIPS and a laparoscopic splenectomy, respectively, both therapies for treating portal hypertension induced diseases. The computer simulations give a quantitative analysis of the interplay between hemodynamics and TIPS or splenectomy. In conclusion, the presented computational model can be used for the theoretical analysis of TIPS and laparoscopic splenectomy, and clinical decisions could be made based on simulation results tailored to the individual patient.
Keller, Trevor; Lindwall, Greta; Ghosh, Supriyo; Ma, Li; Lane, Brandon M; Zhang, Fan; Kattner, Ursula R; Lass, Eric A; Heigel, Jarred C; Idell, Yaakov; Williams, Maureen E; Allen, Andrew J; Guyer, Jonathan E; Levine, Lyle E
2017-10-15
Numerical simulations are used in this work to investigate aspects of microstructure and microsegregation during rapid solidification of a Ni-based superalloy in a laser powder bed fusion additive manufacturing process. Thermal modeling by finite element analysis simulates the laser melt pool, with surface temperatures in agreement with in situ thermographic measurements on Inconel 625. Geometric and thermal features of the simulated melt pools are extracted and used in subsequent mesoscale simulations. Solidification in the melt pool is simulated on two length scales. For the multicomponent alloy Inconel 625, microsegregation between dendrite arms is calculated using the Scheil-Gulliver solidification model and DICTRA software. Phase-field simulations, using Ni-Nb as a binary analogue to Inconel 625, produced microstructures with primary cellular/dendritic arm spacings in agreement with measured spacings in experimentally observed microstructures and a lesser extent of microsegregation than predicted by DICTRA simulations. The composition profiles are used to compare thermodynamic driving forces for nucleation against experimentally observed precipitates identified by electron and X-ray diffraction analyses. Our analysis lists the precipitates that may form from FCC phase of enriched interdendritic compositions and compares these against experimentally observed phases from 1 h heat treatments at two temperatures: stress relief at 1143 K (870 °C) or homogenization at 1423 K (1150 °C).
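For reference, the textbook binary form of the Scheil-Gulliver relation behind such microsegregation estimates (DICTRA applies a multicomponent generalization) gives the solid composition at solid fraction $f_s$ as

$$C_s \;=\; k\,C_0\,(1 - f_s)^{\,k-1},$$

where $k$ is the solute partition coefficient and $C_0$ the nominal alloy composition, under the assumptions of no diffusion in the solid and complete mixing in the liquid.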
Virtual reality simulation training for health professions trainees in gastrointestinal endoscopy.
Walsh, Catharine M; Sherlock, Mary E; Ling, Simon C; Carnahan, Heather
2012-06-13
Traditionally, training in gastrointestinal endoscopy has been based upon an apprenticeship model, with novice endoscopists learning basic skills under the supervision of experienced preceptors in the clinical setting. Over the last two decades, however, the growing awareness of the need for patient safety has brought the issue of simulation-based training to the forefront. While the use of simulation-based training may have important educational and societal advantages, the effectiveness of virtual reality gastrointestinal endoscopy simulators has yet to be clearly demonstrated. To determine whether virtual reality simulation training can supplement and/or replace early conventional endoscopy training (apprenticeship model) in diagnostic oesophagogastroduodenoscopy, colonoscopy and/or sigmoidoscopy for health professions trainees with limited or no prior endoscopic experience. Health professions, educational and computer databases were searched until November 2011 including The Cochrane Central Register of Controlled Trials, MEDLINE, EMBASE, Scopus, Web of Science, Biosis Previews, CINAHL, Allied and Complementary Medicine Database, ERIC, Education Full Text, CBCA Education, Career and Technical Education @ Scholars Portal, Education Abstracts @ Scholars Portal, Expanded Academic ASAP @ Scholars Portal, ACM Digital Library, IEEE Xplore, Abstracts in New Technologies and Engineering and Computer & Information Systems Abstracts. The grey literature until November 2011 was also searched. Randomised and quasi-randomised clinical trials comparing virtual reality endoscopy (oesophagogastroduodenoscopy, colonoscopy and sigmoidoscopy) simulation training versus any other method of endoscopy training including conventional patient-based training, in-job training, training using another form of endoscopy simulation (e.g. low-fidelity simulator), or no training (however defined by authors) were included. Trials comparing one method of virtual reality training versus another method of virtual reality training (e.g. comparison of two different virtual reality simulators) were also included. Only trials measuring outcomes on humans in the clinical setting (as opposed to animals or simulators) were included. Two authors (CMS, MES) independently assessed the eligibility and methodological quality of trials, and extracted data on the trial characteristics and outcomes. Due to significant clinical and methodological heterogeneity it was not possible to pool study data in order to perform a meta-analysis. Where data were available for each continuous outcome we calculated standardized mean difference with 95% confidence intervals based on intention-to-treat analysis. Where data were available for dichotomous outcomes we calculated relative risk with 95% confidence intervals based on intention-to-treat-analysis. Thirteen trials, with 278 participants, met the inclusion criteria. Four trials compared simulation-based training with conventional patient-based endoscopy training (apprenticeship model) whereas nine trials compared simulation-based training with no training. Only three trials were at low risk of bias. Simulation-based training, as compared with no training, generally appears to provide participants with some advantage over their untrained peers as measured by composite score of competency, independent procedure completion, performance time, independent insertion depth, overall rating of performance or competency error rate and mucosal visualization. 
In contrast, there was no conclusive evidence that simulation-based training was superior to conventional patient-based training, although data were limited. The results of this systematic review indicate that virtual reality endoscopy training can be used to effectively supplement early conventional endoscopy training (apprenticeship model) in diagnostic oesophagogastroduodenoscopy, colonoscopy and/or sigmoidoscopy for health professions trainees with limited or no prior endoscopic experience. However, there remains insufficient evidence to advise for or against the use of virtual reality simulation-based training as a replacement for early conventional endoscopy training (apprenticeship model) for health professions trainees with limited or no prior endoscopic experience. There is a great need for the development of a reliable and valid measure of endoscopic performance prior to the completion of further randomised clinical trials with high methodological quality.
Dziuda, Lukasz; Biernacki, Marcin P; Baran, Paulina M; Truszczyński, Olaf E
2014-05-01
In the study, we examined: 1) how the simulator test conditions affect the severity of simulator sickness symptoms; 2) how the severity of simulator sickness symptoms changes over time; and 3) whether the conditions of the simulator test affect the severity of these symptoms in different ways, depending on the time that has elapsed since performing the task in the simulator. We studied 12 men aged 24-33 years (M = 28.8, SD = 3.26) using a truck simulator. The SSQ questionnaire was used to assess the severity of the symptoms of simulator sickness. Each of the subjects performed three 30-minute tasks driving along the same route in a driving simulator. Each of these tasks was carried out in a different simulator configuration: A) fixed-base platform with poor visibility; B) fixed-base platform with good visibility; and C) motion-base platform with good visibility. The severity of the simulator sickness symptoms was measured at five consecutive intervals. The results of the analysis showed that the simulator test conditions affect the severity of the simulator sickness symptoms in different ways, depending on the time that has elapsed since performing the task on the simulator. The simulator sickness symptoms persisted at the highest level for the test conditions involving the motion-base platform. Also, when performing the tasks on the motion-base platform, the severity of the simulator sickness symptoms varied depending on the time that had elapsed since performing the task. Specifically, the addition of motion to the simulation increased the oculomotor and disorientation symptoms reported as well as the duration of the after-effects. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Analytical stability and simulation response study for a coupled two-body system
NASA Technical Reports Server (NTRS)
Tao, K. M.; Roberts, J. R.
1975-01-01
An analytical stability study and a digital simulation response study of two connected rigid bodies are documented. Relative rotation of the bodies at the connection is allowed, thereby providing a model suitable for studying system stability and response during a soft-dock regime. Provisions are made for a docking-port axis-alignment torque and a despin torque capability for encountering spinning payloads. Although the stability analysis is based on linearized equations, the digital simulation is based on nonlinear models.
Throughput and delay analysis of IEEE 802.15.6-based CSMA/CA protocol.
Ullah, Sana; Chen, Min; Kwak, Kyung Sup
2012-12-01
The IEEE 802.15.6 is a new communication standard on Wireless Body Area Networks (WBANs) that focuses on a variety of medical, Consumer Electronics (CE) and entertainment applications. In this paper, the throughput and delay performance of the IEEE 802.15.6 is presented. Numerical formulas are derived to determine the maximum throughput and minimum delay limits of the IEEE 802.15.6 for an ideal channel with no transmission errors. These limits are derived for different frequency bands and data rates. Our analysis is validated by extensive simulations using a custom C++ simulator. Based on analytical and simulation results, useful conclusions are derived for network provisioning and packet size optimization for different applications.
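The abstract does not reproduce the derived formulas; for an ideal, error-free channel such limit analyses typically take the generic form (notation assumed here, not taken from the paper)

$$S_{\max} \;=\; \frac{8\,L_{p}}{T_{\mathrm{DATA}} + T_{\mathrm{SIFS}} + T_{\mathrm{ACK}}}, \qquad D_{\min} \;=\; T_{\mathrm{DATA}} + T_{\mathrm{SIFS}} + T_{\mathrm{ACK}},$$

where $L_p$ is the payload in bytes, $T_{\mathrm{DATA}}$ the frame transmission time (preamble, header and payload at the chosen data rate), $T_{\mathrm{SIFS}}$ the interframe spacing and $T_{\mathrm{ACK}}$ the acknowledgment time; the frequency band and data rate enter through $T_{\mathrm{DATA}}$ and $T_{\mathrm{ACK}}$.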
Aeroservoelastic and Flight Dynamics Analysis Using Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Arena, Andrew S., Jr.
1999-01-01
This document is based in large part on the Master's thesis of Cole Stephens. It encompasses a variety of technical and practical issues involved in using the STARS codes for aeroservoelastic analysis of vehicles, covering in detail the technical issues and step-by-step procedures involved in the simulation of a system where aerodynamics, structures and controls are tightly coupled. Comparisons are made to a benchmark experimental program conducted at NASA Langley. One of the significant advantages of the methodology detailed is that, as a result of the technique used to accelerate the CFD-based simulation, a systems model is produced which is very useful for developing the control law strategy and subsequent high-speed simulations.
Simulation Research on Vehicle Active Suspension Controller Based on G1 Method
NASA Astrophysics Data System (ADS)
Li, Gen; Li, Hang; Zhang, Shuaiyang; Luo, Qiuhui
2017-09-01
Based on the order-relation analysis method (G1 method), an optimal linear controller for a vehicle active suspension is designed. First, the active and passive suspension system of a single-wheel vehicle model is built and the system input signal model is determined. Second, the state-space equations of motion are established from the system dynamics, and the optimal linear controller design is completed using optimal control theory, with the weighting coefficients of the performance index determined by the order-relation analysis method. Finally, the model is simulated in Simulink. The simulation results show that, for the given road conditions, the optimal weights determined by the G1 method improve body acceleration, suspension stroke and tire displacement, enhancing the comprehensive performance of the vehicle while keeping the active control within the stated requirements.
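A schematic of this design flow, assuming a standard quarter-car model and hand-picked weights (the paper's G1-derived weights and vehicle parameters are not given in the abstract):

```python
# LQR design for a quarter-car active suspension: G1-style weights enter
# the quadratic performance index; the optimal linear gain follows.
import numpy as np
from scipy.linalg import solve_continuous_are

ms, mu, ks, kt, cs = 320.0, 40.0, 18_000.0, 180_000.0, 1_000.0  # illustrative

# States: [body disp, body vel, wheel disp, wheel vel]; input: actuator force.
A = np.array([[0, 1, 0, 0],
              [-ks/ms, -cs/ms, ks/ms, cs/ms],
              [0, 0, 0, 1],
              [ks/mu, cs/mu, -(ks + kt)/mu, -cs/mu]])
B = np.array([[0], [1/ms], [0], [-1/mu]])

# The q_i would come from the G1 ranking of body acceleration, suspension
# stroke and tyre deflection; here they are set by hand for illustration.
Q = np.diag([1e4, 1.0, 1e3, 1.0])
R = np.array([[1e-4]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # u = -K x, the optimal linear controller
```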
Human swallowing simulation based on videofluorography images using Hamiltonian MPS method
NASA Astrophysics Data System (ADS)
Kikuchi, Takahiro; Michiwaki, Yukihiro; Kamiya, Tetsu; Toyama, Yoshio; Tamai, Tasuku; Koshizuka, Seiichi
2015-09-01
In developed nations, swallowing disorders and aspiration pneumonia have become serious problems. We developed a method to simulate the behavior of the organs involved in swallowing to clarify the mechanisms of swallowing and aspiration. The shape model is based on anatomically realistic geometry, and the motion model utilizes forced displacements based on realistic dynamic images to reflect the mechanisms of human swallowing. The soft tissue organs are modeled as nonlinear elastic material using the Hamiltonian MPS method. This method allows for stable simulation of the complex swallowing movement. A penalty method using metaballs is employed to simulate contact between organ walls and smooth sliding along the walls. We performed four numerical simulations under different analysis conditions to represent four cases of swallowing, including a healthy volunteer and a patient with a swallowing disorder. The simulation results were compared to examine the epiglottic downfolding mechanism, which strongly influences the risk of aspiration.
Development and validation of the Simulation Learning Effectiveness Inventory.
Chen, Shiah-Lian; Huang, Tsai-Wei; Liao, I-Chen; Liu, Chienchi
2015-10-01
To develop and psychometrically test the Simulation Learning Effectiveness Inventory. High-fidelity simulation helps students develop clinical skills and competencies, yet reliable instruments measuring learning outcomes are scant. A descriptive cross-sectional survey was used to validate the psychometric properties of the instrument measuring students' perception of simulation learning effectiveness. A purposive sample of 505 nursing students who had taken simulation courses was recruited from a department of nursing of a university in central Taiwan from January 2010 to June 2010. The study was conducted in two phases. In Phase I, question items were developed based on the literature review and the preliminary psychometric properties of the inventory were evaluated using exploratory factor analysis. Phase II was conducted to evaluate the reliability and validity of the finalized inventory using confirmatory factor analysis. The results of exploratory and confirmatory factor analyses revealed the instrument was composed of seven factors, named course arrangement, equipment resource, debriefing, clinical ability, problem-solving, confidence and collaboration. A further second-order analysis showed comparable fits between a three second-order factor model (preparation, process and outcome) and the seven first-order factor model. Internal consistency was supported by adequate Cronbach's alphas and composite reliability. Convergent and discriminant validity were also supported by confirmatory factor analysis. The study provides evidence that the Simulation Learning Effectiveness Inventory is reliable and valid for measuring student perception of learning effectiveness. The instrument is helpful in building evidence-based knowledge of the effect of simulation teaching on students' learning outcomes. © 2015 John Wiley & Sons Ltd.
Leming, Matthew; Steiner, Rachel; Styner, Martin
2016-02-27
Tract-based spatial statistics (TBSS) is a software pipeline widely employed in comparative analysis of white matter integrity from diffusion tensor imaging (DTI) datasets. In this study, we seek to evaluate the relationship between different methods of atlas registration for use with TBSS and different measurements of DTI (fractional anisotropy, FA; axial diffusivity, AD; radial diffusivity, RD; and mean diffusivity, MD). To do so, we have developed a novel tool that builds on existing diffusion atlas building software, integrating it into an adapted version of TBSS called DAB-TBSS (DTI Atlas Builder-Tract-Based Spatial Statistics) by using the advanced registration offered in DTI Atlas Builder. To compare the effectiveness of these two versions of TBSS, we also propose a framework for simulating population differences in diffusion tensor imaging data, providing a more substantive means of empirically comparing DTI group analysis programs such as TBSS. In this study, we used 33 diffusion tensor imaging datasets and simulated group-wise changes in these data by increasing, in three different simulations, the principal eigenvalue (directly altering AD), the second and third eigenvalues (RD), and all three eigenvalues (MD) in the genu, the right uncinate fasciculus, and the left IFO. Additionally, we assessed the benefits of comparing the tensors directly using a functional analysis of diffusion tensor tract statistics (FADTTS). Our results indicate comparable levels of FA-based detection between DAB-TBSS and TBSS, with standard TBSS registration reporting a higher rate of false positives in other measurements of DTI. Within the simulated changes investigated here, this study suggests that the use of DTI Atlas Builder's registration enhances TBSS group-based studies.
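A minimal sketch of the eigenvalue-scaling idea behind the simulated group differences (function and variable names are hypothetical; the study's regions and scaling factors are not given in the abstract):

```python
# Scale diffusion-tensor eigenvalues inside a target region to alter
# AD (principal eigenvalue), RD (second and third) or MD (all three).
import numpy as np

def perturb_tensors(tensors, mask, mode="AD", factor=1.10):
    """tensors: (..., 3, 3) symmetric diffusion tensors; mask: boolean region."""
    out = tensors.copy()
    vals, vecs = np.linalg.eigh(tensors[mask])   # eigenvalues in ascending order
    if mode == "AD":        # principal eigenvalue only
        vals[:, 2] *= factor
    elif mode == "RD":      # second and third eigenvalues
        vals[:, :2] *= factor
    elif mode == "MD":      # all three eigenvalues
        vals *= factor
    # Reconstruct D = V diag(lambda) V^T for the masked voxels.
    out[mask] = vecs @ (vals[..., None] * np.swapaxes(vecs, -1, -2))
    return out
```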
ERIC Educational Resources Information Center
Nugent, William R.
2017-01-01
Meta-analysis is a significant methodological advance that is increasingly important in research synthesis. Fundamental to meta-analysis is the presumption that effect sizes, such as the standardized mean difference (SMD), based on scores from different measures are comparable. It has been argued that population observed score SMDs based on scores…
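For reference, the SMD referred to above is, in its usual (Cohen's d) form,

$$d \;=\; \frac{\bar{x}_1 - \bar{x}_2}{s_p}, \qquad s_p \;=\; \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}},$$

so SMDs computed from different measures are comparable only to the extent that the measures' score standard deviations scale the same underlying construct.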
Field-Scale Evaluation of Infiltration Parameters From Soil Texture for Hydrologic Analysis
NASA Astrophysics Data System (ADS)
Springer, Everett P.; Cundy, Terrance W.
1987-02-01
Recent interest in predicting soil hydraulic properties from simple physical properties such as texture has major implications in the parameterization of physically based models of surface runoff. This study was undertaken to (1) compare, on a field scale, soil hydraulic parameters predicted from texture to those derived from field measurements and (2) compare simulated overland flow response using these two parameter sets. The parameters for the Green-Ampt infiltration equation were obtained from field measurements and using texture-based predictors for two agricultural fields, which were mapped as single soil units. Results of the analyses were that (1) the mean and variance of the field-based parameters were not preserved by the texture-based estimates, (2) spatial and cross correlations between parameters were induced by the texture-based estimation procedures, (3) the overland flow simulations using texture-based parameters were significantly different than those from field-based parameters, and (4) simulations using field-measured hydraulic conductivities and texture-based storage parameters were very close to simulations using only field-based parameters.
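For context, the Green-Ampt equation whose parameters the study estimates relates infiltration rate $f$ to cumulative infiltration $F$ in the textbook form

$$f \;=\; K\left(1 + \frac{\psi\,\Delta\theta}{F}\right),$$

where $K$ is the saturated hydraulic conductivity, $\psi$ the wetting-front suction head and $\Delta\theta$ the soil moisture deficit; the texture-based predictors supply $K$, $\psi$ and $\Delta\theta$ in place of field measurements.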
Turbulence flight director analysis and preliminary simulation
NASA Technical Reports Server (NTRS)
Johnson, D. E.; Klein, R. E.
1974-01-01
A control-column and throttle flight-director display system is synthesized for use during flight through severe turbulence. The column system is designed to minimize airspeed excursions without overdriving attitude. The throttle system is designed to augment the airspeed regulation and provide an indication of the trim thrust required for any desired flight path angle. Together they form an energy management system to provide harmonious display indications of current aircraft motions and required corrective action, minimize gust upset tendencies, minimize unsafe aircraft excursions, and maintain satisfactory ride qualities. A preliminary fixed-base piloted simulation verified the analysis and provided a shakedown for a more sophisticated moving-base simulation to be accomplished next. This preliminary simulation utilized a flight scenario concept combining piloting tasks, random turbulence, and discrete gusts to create a high but realistic pilot workload conducive to pilot error and potential upset. The turbulence director (energy management) system significantly reduced pilot workload and minimized unsafe aircraft excursions.
Heidari, Behzad Shiroud; Oliaei, Erfan; Shayesteh, Hadi; Davachi, Seyed Mohammad; Hejazi, Iman; Seyfi, Javad; Bahrami, Mozhgan; Rashedi, Hamid
2017-01-01
In this study, injection molding of three poly(lactic acid) (PLA) based bone screws was simulated and optimized by minimizing the shrinkage and warpage of the bone screws. The optimization was carried out by investigating process factors such as coolant temperature, mold temperature, melt temperature, packing time, injection time, and packing pressure. A response surface methodology (RSM), based on a central composite design (CCD), was used to determine the effects of the process factors on the PLA-based bone screws. Upon applying the method of maximizing the desirability function, optimization of the factors gave the lowest warpage and shrinkage for the nanocomposite PLA bone screw (PLA9). Moreover, PLA9 has the greatest desirability among the selected materials for bone screw injection molding. Meanwhile, a finite element (FE) analysis was also performed to determine the force values and concentration points which cause yielding of the screws under certain conditions. The von Mises stress distribution showed that the PLA9 screw is more resistant to the highest loads than the other ones. Finally, according to the results of the injection molding simulations, the design of experiments (DOE) and the structural analysis, the PLA9 screw is recommended as the best candidate for the production of biomedical materials among the three types of screws. Copyright © 2016 Elsevier Ltd. All rights reserved.
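A minimal sketch of the desirability-function step, assuming the common Derringer-Suich smaller-the-better transform (the study's actual response bounds and weights are not given in the abstract):

```python
# Each response (shrinkage, warpage: smaller-the-better) maps to d in [0, 1];
# the overall desirability D is their geometric mean, which is maximized.
import numpy as np

def d_smaller_is_better(y, y_best, y_worst, weight=1.0):
    d = (y_worst - y) / (y_worst - y_best)
    return np.clip(d, 0.0, 1.0) ** weight

shrinkage, warpage = 0.8, 0.12                 # candidate responses (made up)
d1 = d_smaller_is_better(shrinkage, 0.5, 1.5)  # bounds illustrative only
d2 = d_smaller_is_better(warpage, 0.05, 0.30)
D = (d1 * d2) ** 0.5                           # overall desirability
print(f"D = {D:.3f}")
```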
Physically-based modelling of high magnitude torrent events with uncertainty quantification
NASA Astrophysics Data System (ADS)
Wing-Yuen Chow, Candace; Ramirez, Jorge; Zimmermann, Markus; Keiler, Margreth
2017-04-01
High magnitude torrent events are associated with the rapid propagation of vast quantities of water and available sediment downslope, where human settlements may be established. Assessing the vulnerability of built structures to these events is a part of consequence analysis, where hazard intensity is related to the degree of loss sustained. The specific contribution of the presented work is a procedure to simulate these damaging events with physically-based modelling and to attach uncertainty information to the simulated results. This is a first step in the development of vulnerability curves based on several intensity parameters (i.e. maximum velocity, sediment deposition depth and impact pressure). The investigation process begins with the collection, organization and interpretation of detailed post-event documentation and photograph-based observations of affected structures at three sites that exemplify the impact of highly destructive mudflows and flood occurrences on settlements in Switzerland. Hazard intensity proxies are then simulated with the physically-based FLO-2D model (O'Brien et al., 1993). Prior to modelling, global sensitivity analysis is conducted to support a better understanding of model behaviour, parameterization and the quantification of uncertainties (Song et al., 2015). The inclusion of information describing the degree of confidence in the simulated results supports the credibility of vulnerability curves developed from the modelled data. First, key parameters are identified and selected based on a literature review, and truncated a priori ranges of parameter values are defined by expert solicitation. Local sensitivity analysis is then performed based on manual calibration to provide an understanding of the parameters relevant to the case studies of interest. Finally, automated parameter estimation is performed to search comprehensively for optimal parameter combinations and associated values, which are evaluated using the observed data collected in the first stage of the investigation. O'Brien, J.S., Julien, P.Y., Fullerton, W.T., 1993. Two-dimensional water flood and mudflow simulation. Journal of Hydraulic Engineering 119(2): 244-261. Song, X., Zhang, J., Zhan, C., Xuan, Y., Ye, M., Xu, C., 2015. Global sensitivity analysis in hydrological modeling: review of concepts, methods, theoretical frameworks. Journal of Hydrology 523: 739-757.
SNDR Limits of Oscillator-Based Sensor Readout Circuits.
Cardes, Fernando; Quintero, Andres; Gutierrez, Eric; Buffa, Cesare; Wiesbauer, Andreas; Hernandez, Luis
2018-02-03
This paper analyzes the influence of phase noise and distortion on the performance of oscillator-based sensor data acquisition systems. Circuit noise inherent to the oscillator circuit manifests as phase noise and limits the SNR. Moreover, oscillator nonlinearity generates distortion for large input signals. Phase noise analysis of oscillators is well known in the literature, but the relationship between phase noise and the SNR of an oscillator-based sensor is not straightforward. This paper proposes a model to estimate the influence of phase noise in the performance of an oscillator-based system by reflecting the phase noise to the oscillator input. The proposed model is based on periodic steady-state analysis tools to predict the SNR of the oscillator. The accuracy of this model has been validated by both simulation and experiment in a 130 nm CMOS prototype. We also propose a method to estimate the SNDR and the dynamic range of an oscillator-based readout circuit that improves by more than one order of magnitude the simulation time compared to standard time domain simulations. This speed up enables the optimization and verification of this kind of systems with iterative algorithms.
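The abstract does not reproduce the model; a standard way to reflect oscillator phase noise to the input of such a readout (notation assumed here, not taken from the paper) uses the tuning gain $K_{\mathrm{VCO}}$:

$$S_{v,\mathrm{in}}(f) \;=\; \frac{f^{2}\,S_{\phi}(f)}{K_{\mathrm{VCO}}^{2}},$$

where $S_{\phi}(f)$ is the phase-noise power spectral density and $f^{2}S_{\phi}(f)$ the equivalent frequency-noise PSD; integrating $S_{v,\mathrm{in}}$ over the signal band and comparing with the full-scale input then yields the phase-noise-limited SNR.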
A meta-analysis of outcomes from the use of computer-simulated experiments in science education
NASA Astrophysics Data System (ADS)
Lejeune, John Van
The purpose of this study was to synthesize the findings from existing research on the effects of computer-simulated experiments on students in science education. Results from 40 reports were integrated by the process of meta-analysis to examine the effect of computer-simulated experiments and interactive videodisc simulations on student achievement and attitudes. Findings indicated significant positive differences in both low-level and high-level achievement for students who used computer-simulated experiments and interactive videodisc simulations as compared to students who used more traditional learning activities. No significant differences in retention, student attitudes toward the subject, or attitudes toward the educational method were found. Based on the findings of this study, computer-simulated experiments and interactive videodisc simulations should be used to enhance students' learning in science, especially in cases where the use of traditional laboratory activities is expensive, dangerous, or impractical.
Analysis of the Space Shuttle main engine simulation
NASA Technical Reports Server (NTRS)
Deabreu-Garcia, J. Alex; Welch, John T.
1993-01-01
This is a final report on an analysis of the Space Shuttle Main Engine simulation program, a digital simulator code written in Fortran. The research was undertaken in ultimate support of future design studies of a shuttle life-extending Intelligent Control System (ICS). These studies are to be conducted by NASA Lewis Research Center. The primary purpose of the analysis was to define the means to achieve a faster running simulation, and to determine if additional hardware would be necessary for speeding up simulations for the ICS project. In particular, the analysis was to consider the use of custom integrators based on the Matrix Stability Region Placement (MSRP) method. In addition to speed of execution, other qualities of the software were to be examined. Among these are the accuracy of computations, the usability of the simulation system, and the maintainability of the program and data files. Accuracy involves control of the truncation error of the methods and the roundoff error induced by floating point operations. It also involves the requirement that the user be fully aware of the model that the simulator is implementing.
Exact hybrid particle/population simulation of rule-based models of biochemical systems.
Hogg, Justin S; Harris, Leonard A; Stover, Lori J; Nair, Niketh S; Faeder, James R
2014-04-01
Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings can be achieved using the new approach and a monetary cost analysis provides a practical measure of its utility.
NASA Technical Reports Server (NTRS)
Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.
2005-01-01
This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on the FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human-in-the-Loop (HITL) studies of SATS HVO and baseline operations.
NASA Astrophysics Data System (ADS)
Gherghel-Lascu, A.; Apel, W. D.; Arteaga-Velázquez, J. C.; Bekk, K.; Bertaina, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Engler, J.; Fuchs, B.; Fuhrmann, D.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huber, D.; Huege, T.; Kampert, K.-H.; Kang, D.; Klages, H. O.; Link, K.; Łuczak, P.; Mathes, H. J.; Mayer, H. J.; Milke, J.; Mitrica, B.; Morello, C.; Oehlschläger, J.; Ostapchenko, S.; Palmieri, N.; Petcu, M.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Ulrich, H.; Weindl, A.; Wochele, J.; Zabierowski, J.
2015-02-01
In previous studies of KASCADE-Grande data, a Monte Carlo simulation code based on the GEANT3 program has been developed to describe the energy deposited by EAS particles in the detector stations. In an attempt to decrease the simulation time and ensure compatibility with the geometry description in standard KASCADE-Grande analysis software, several structural elements have been neglected in the implementation of the Grande station geometry. To improve the agreement between experimental and simulated data, a more accurate simulation of the response of the KASCADE-Grande detector is necessary. A new simulation code has been developed based on the GEANT4 program, including a realistic geometry of the detector station with structural elements that have not been considered in previous studies. The new code is used to study the influence of a realistic detector geometry on the energy deposited in the Grande detector stations by particles from EAS events simulated by CORSIKA. Lateral Energy Correction Functions are determined and compared with previous results based on GEANT3.
Image based SAR product simulation for analysis
NASA Technical Reports Server (NTRS)
Domik, G.; Leberl, F.
1987-01-01
SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new method of product simulation is described that also employs a real SAR input image; this can be denoted 'image-based simulation'. Different methods to perform this SAR prediction are presented and their advantages and disadvantages discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit, and the results were compared to the available real imagery to verify that the prediction technique produces meaningful image data.
Performance evaluation of power control algorithms in wireless cellular networks
NASA Astrophysics Data System (ADS)
Temaneh-Nyah, C.; Iita, V.
2014-10-01
Power control in a mobile communication network aims to set the transmission power levels such that the required quality of service (QoS) for the users is guaranteed with the lowest possible transmission powers. Most studies of power control algorithms in the literature are based on simplified assumptions, which compromises the validity of the results when applied in a real environment. In this paper, a CDMA network was simulated. The real environment was accounted for by defining the analysis area; the network base stations and mobile stations are specified by their geographical coordinates, and the mobility of the mobile stations is taken into account. The simulation also allowed a number of network parameters, including the network traffic and the wireless channel models, to be modified. Finally, we present the simulation results of a convergence-speed-based comparative analysis of three uplink power control algorithms.
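The abstract does not name the three algorithms; as a representative example of the kind of uplink iteration such comparisons cover, here is a sketch of the classic Foschini-Miljanic distributed power-control update (all gains and targets illustrative):

```python
# Foschini-Miljanic update: p_i <- (target SINR / current SINR) * p_i.
# Converges to the minimal power vector whenever the targets are feasible.
import numpy as np

G = np.array([[1.0, 0.10, 0.05],   # link gains G[i, j]: BS of user i <- user j
              [0.10, 1.0, 0.10],
              [0.05, 0.10, 1.0]])
noise = 1e-3                       # receiver noise power
gamma_t = np.full(3, 5.0)          # target SINRs
p = np.full(3, 0.1)                # initial transmit powers

for _ in range(50):
    interference = G @ p - np.diag(G) * p + noise
    sinr = np.diag(G) * p / interference
    p = gamma_t / sinr * p
print(p)                           # converged minimal-power allocation
```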
NASA Technical Reports Server (NTRS)
Follen, G.; Naiman, C.; auBuchon, M.
2000-01-01
Within NASA's High Performance Computing and Communication (HPCC) program, NASA Glenn Research Center is developing an environment for the analysis/design of propulsion systems for aircraft and space vehicles called the Numerical Propulsion System Simulation (NPSS). The NPSS focuses on the integration of multiple disciplines such as aerodynamics, structures, and heat transfer, along with the concept of numerical zooming between 0-dimensional and 1-, 2-, and 3-dimensional component engine codes. The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. Current "state-of-the-art" engine simulations are 0-dimensional in that there is no axial, radial or circumferential resolution within a given component (e.g. a compressor or turbine has no internal station designations). In these 0-dimensional cycle simulations the individual component performance characteristics typically come from a table look-up (map) with adjustments for off-design effects such as variable geometry, Reynolds effects, and clearances. Zooming one or more of the engine components to a higher order, physics-based analysis means a higher order code is executed and the results from this analysis are used to adjust the 0-dimensional component performance characteristics within the system simulation. By drawing on the results from more predictive, physics-based higher order analysis codes, "cycle" simulations are refined to closely model and predict the complex physical processes inherent to engines. As part of the overall development of the NPSS, NASA and industry began the process of defining and implementing an object class structure that enables numerical zooming between the NPSS Version 1 (0-dimensional) and higher order 1-, 2- and 3-dimensional analysis codes. The NPSS Version 1 preserves the historical cycle engineering practices but also extends these classical practices into the area of numerical zooming for use within a company's design system. What follows here is a description of successfully zooming 1-dimensional (row-by-row) high pressure compressor results back to an NPSS engine 0-dimensional simulation and a discussion of the results illustrated using an advanced data visualization tool. This type of high fidelity system-level analysis, made possible by the zooming capability of the NPSS, will greatly improve the fidelity of the engine system simulation and enable the engine system to be "pre-validated" prior to commitment to engine hardware.
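A toy sketch of the zooming idea described above: a scale factor derived at the zoomed operating point is used to adjust the 0-dimensional component map. The function names and numbers are hypothetical and are not the NPSS API.

```python
# Hypothetical illustration of numerical zooming (not the NPSS API): a scale
# factor from the higher-order analysis is applied to the 0-D component map.
def zoom_scalar(map_value: float, high_order_value: float) -> float:
    """Ratio of the physics-based prediction to the table look-up value."""
    return high_order_value / map_value

# 0-D compressor pressure-ratio map over corrected speed (invented numbers).
pr_map = {0.9: 13.0, 1.0: 14.2, 1.1: 15.1}
s = zoom_scalar(pr_map[1.0], 13.8)     # 1-D row-by-row code ran at design speed
pr_adjusted = {speed: pr * s for speed, pr in pr_map.items()}
print(pr_adjusted)                      # map rescaled to match the 1-D result
```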
Robust Mediation Analysis Based on Median Regression
Yuan, Ying; MacKinnon, David P.
2014-01-01
Mediation analysis has many applications in psychology and the social sciences. The most prevalent methods typically assume that the error distribution is normal and homoscedastic. However, this assumption may rarely be met in practice, which can affect the validity of the mediation analysis. To address this problem, we propose robust mediation analysis based on median regression. Our approach is robust to various departures from the assumption of homoscedasticity and normality, including heavy-tailed, skewed, contaminated, and heteroscedastic distributions. Simulation studies show that under these circumstances, the proposed method is more efficient and powerful than standard mediation analysis. We further extend the proposed robust method to multilevel mediation analysis, and demonstrate through simulation studies that the new approach outperforms the standard multilevel mediation analysis. We illustrate the proposed method using data from a program designed to increase reemployment and enhance mental health of job seekers.
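A minimal sketch of the core idea, assuming the usual product-of-coefficients mediation estimator with both regressions fitted at the median via statsmodels; this is an illustration on synthetic heavy-tailed data, not the authors' implementation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.standard_t(df=3, size=n)       # mediator, heavy-tailed errors
y = 0.4 * m + 0.2 * x + rng.standard_t(df=3, size=n)

# a: effect of X on M; b: effect of M on Y controlling for X, both at q = 0.5.
a = sm.QuantReg(m, sm.add_constant(x)).fit(q=0.5).params[1]
b = sm.QuantReg(y, sm.add_constant(np.column_stack([x, m]))).fit(q=0.5).params[2]
print("mediated effect a*b ~", round(a * b, 3))   # close to 0.5 * 0.4 = 0.2
```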
Mooney, Barbara Logan; Corrales, L René; Clark, Aurora E
2012-03-30
This work discusses scripts for processing molecular simulations data written using the software package R: A Language and Environment for Statistical Computing. These scripts, named moleculaRnetworks, are intended for the geometric and solvent network analysis of aqueous solutes and can be extended to other H-bonded solvents. New algorithms, several of which are based on graph theory, that interrogate the solvent environment about a solute are presented and described. This includes a novel method for identifying the geometric shape adopted by the solvent in the immediate vicinity of the solute and an exploratory approach for describing H-bonding, both based on the PageRank algorithm of Google search fame. The moleculaRnetworks codes include a preprocessor, which distills simulation trajectories into physicochemical data arrays, and an interactive analysis script that enables statistical, trend, and correlation analysis, and other data mining. The goal of these scripts is to increase access to the wealth of structural and dynamical information that can be obtained from molecular simulations. Copyright © 2012 Wiley Periodicals, Inc.
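The published scripts are written in R; as a language-neutral illustration of the PageRank idea on a hydrogen-bond network, here is a sketch using networkx with an invented edge list, where highly ranked waters sit at hubs of the solvent network around the solute.

```python
import networkx as nx

# Hypothetical H-bond network: edges connect the solute and water molecules.
hbonds = [("solute", "w1"), ("solute", "w2"), ("w1", "w3"),
          ("w2", "w3"), ("w3", "w4"), ("w4", "w5")]
g = nx.Graph(hbonds)

pr = nx.pagerank(g, alpha=0.85)                    # the "Google search" ranking
for node, score in sorted(pr.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.3f}")                  # hubs of the solvent network
```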
Lui, Justin T; Hoy, Monica Y
2017-06-01
Background The increasing prevalence of virtual reality simulation in temporal bone surgery warrants an investigation to assess training effectiveness. Objectives To determine if temporal bone simulator use improves mastoidectomy performance. Data Sources Ovid Medline, Embase, and PubMed databases were systematically searched per the PRISMA guidelines. Review Methods Inclusion criteria were peer-reviewed publications that utilized quantitative data of mastoidectomy performance following the use of a temporal bone simulator. The search was restricted to human studies published in English. Studies were excluded if they were in non-peer-reviewed format, were descriptive in nature, or failed to provide surgical performance outcomes. Meta-analysis calculations were then performed. Results A meta-analysis based on the random-effects model revealed an improvement in overall mastoidectomy performance following training on the temporal bone simulator. A standardized mean difference of 0.87 (95% CI, 0.38-1.35) was generated in the setting of a heterogeneous study population (I² = 64.3%, P < .006). Conclusion In the context of a diverse population of virtual reality simulation temporal bone surgery studies, meta-analysis calculations demonstrate an improvement in trainee mastoidectomy performance with virtual simulation training.
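A sketch of the random-effects pooling reported above, in the DerSimonian-Laird form, with invented per-study effects and variances; the paper's actual inputs are the published mastoidectomy studies.

```python
import numpy as np

smd = np.array([0.4, 1.2, 0.9, 0.5, 1.5])     # per-study effect sizes (made up)
var = np.array([0.10, 0.12, 0.08, 0.15, 0.20])

w = 1 / var                                    # fixed-effect weights
q = np.sum(w * (smd - np.sum(w * smd) / w.sum()) ** 2)   # Cochran's Q
df = len(smd) - 1
c = w.sum() - np.sum(w**2) / w.sum()
tau2 = max(0.0, (q - df) / c)                  # between-study variance
i2 = max(0.0, (q - df) / q) * 100              # heterogeneity, percent

w_re = 1 / (var + tau2)                        # random-effects weights
pooled = np.sum(w_re * smd) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
print(f"SMD={pooled:.2f}, 95% CI=({pooled-1.96*se:.2f}, {pooled+1.96*se:.2f}), I2={i2:.0f}%")
```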
Predicting System Accidents with Model Analysis During Hybrid Simulation
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Fleming, Land D.; Throop, David R.
2002-01-01
Standard discrete event simulation is commonly used to identify system bottlenecks and starving and blocking conditions in resources and services. The CONFIG hybrid discrete/continuous simulation tool can simulate such conditions in combination with inputs external to the simulation. This provides a means for evaluating the vulnerability to system accidents of a system's design, operating procedures, and control software. System accidents are brought about by complex unexpected interactions among multiple system failures, faulty or misleading sensor data, and inappropriate responses of human operators or software. The flows of resource and product materials play a central role in the hazardous situations that may arise in fluid transport and processing systems. We describe the capabilities of CONFIG for simulation-time linear circuit analysis of fluid flows in the context of model-based hazard analysis. We focus on how CONFIG simulates the static stresses in flow systems. Unlike other flow-related properties, static stresses (or static potentials) cannot be represented by a set of state equations. The distribution of static stresses is dependent on the specific history of operations performed on a system. We discuss the use of this type of information in hazard analysis of system designs.
On designing multicore-aware simulators for systems biology endowed with OnLine statistics.
Aldinucci, Marco; Calcagno, Cristina; Coppo, Mario; Damiani, Ferruccio; Drocco, Maurizio; Sciacca, Eva; Spinella, Salvatore; Torquati, Massimo; Troina, Angelo
2014-01-01
This paper discusses enabling methodologies for the design of a fully parallel, online, interactive tool to support bioinformatics scientists. In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool that performs the modeling, tuning, and sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories, turning into big data that should be analysed by statistical and data-mining tools. In the considered approach the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage, which immediately produces a partial result. The simulation-analysis workflow is validated for performance and for effectiveness of the online analysis in capturing biological systems behavior, on a multicore platform and representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming, which provide key features to software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems, exhibiting multistable and oscillatory behavior, are used as a testbed.
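A language-neutral sketch of the pipelined simulation-analysis idea (the actual tool uses the C++ FastFlow framework): simulated trajectory points are streamed to an analysis stage that keeps online statistics, so partial results are available before all trajectories finish. The stochastic model below is a stand-in.

```python
import random

def trajectories(n_traj: int, n_steps: int):
    """Simulation stage: stream (step, value) samples as they are produced."""
    for _ in range(n_traj):
        x = 10.0
        for step in range(n_steps):
            x += random.gauss(0, 1)          # stand-in for a stochastic model
            yield step, x

# Analysis stage: Welford's online mean per time step, updated per sample.
count = [0] * 100
mean = [0.0] * 100
for step, value in trajectories(n_traj=1000, n_steps=100):
    count[step] += 1
    mean[step] += (value - mean[step]) / count[step]

print("online mean at step 99:", round(mean[99], 2))
```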
A cascading failure analysis tool for post processing TRANSCARE simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
This is a MATLAB-based tool to post-process simulation results from the EPRI software TRANSCARE for massive cascading failure analysis following severe disturbances. The tool provides several key modules: 1. automatically creating a contingency list for TRANSCARE simulations, including substation outages above a certain kV threshold, N-k (1, 2 or 3) generator outages, and branch outages; 2. reading in and analyzing a CKO file of PCG definitions, an initiating event list, and a CDN file; 3. post-processing all simulation results saved in a CDN file and performing critical event corridor analysis; 4. providing a summary of TRANSCARE simulations; 5. identifying the most frequently occurring event corridors in the system; and 6. ranking the contingencies using a user-defined security index that quantifies consequences in terms of total load loss, total number of cascades, etc.
Tutorial: Parallel Computing of Simulation Models for Risk Analysis.
Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D
2016-10-01
Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
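In the spirit of the tutorial (whose worked examples are in MATLAB and R), a minimal embarrassingly parallel Monte Carlo sketch in Python: independent replications are farmed out to worker processes, and the loss model is invented for illustration.

```python
import numpy as np
from multiprocessing import Pool

def one_replication(seed: int) -> float:
    """One independent simulation run: annual loss under a hypothetical model."""
    rng = np.random.default_rng(seed)
    n_events = rng.poisson(3)                      # number of loss events
    return float(rng.lognormal(10, 1, n_events).sum())

if __name__ == "__main__":
    with Pool() as pool:                           # replications are independent,
        losses = pool.map(one_replication, range(10_000))  # so they parallelize
    print("mean loss:", np.mean(losses), "99th pct:", np.percentile(losses, 99))
```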
Bucknall, Tracey K; Forbes, Helen; Phillips, Nicole M; Hewitt, Nicky A; Cooper, Simon; Bogossian, Fiona
2016-10-01
The aim of this study was to examine the decision-making of nursing students during team based simulations on patient deterioration to determine the sources of information, the types of decisions made and the influences underpinning their decisions. Missed, misinterpreted or mismanaged physiological signs of deterioration in hospitalized patients lead to costly serious adverse events. Not surprisingly, an increased focus on clinical education and graduate nurse work readiness has resulted. A descriptive exploratory design. Clinical simulation laboratories in three Australian universities were used to run team based simulations with a patient actor. A convenience sample of 97 final-year nursing students completed simulations, with three students forming a team. Four teams from each university were randomly selected for detailed analysis. Cued recall during video review of team based simulation exercises to elicit descriptions of individual and team based decision-making and reflections on performance were audio-recorded post simulation (2012) and transcribed. Students recalled 11 types of decisions, including: information seeking; patient assessment; diagnostic; intervention/treatment; evaluation; escalation; prediction; planning; collaboration; communication and reflective. Patient distress, uncertainty and a lack of knowledge were frequently recalled influences on decisions. Incomplete information, premature diagnosis and a failure to consider alternatives when caring for patients is likely to lead to poor quality decisions. All health professionals have a responsibility in recognizing and responding to clinical deterioration within their scope of practice. A typology of nursing students' decision-making in teams, in this context, highlights the importance of individual knowledge, leadership and communication. © 2016 John Wiley & Sons Ltd.
Cohen, Elaine R; Feinglass, Joe; Barsuk, Jeffrey H; Barnard, Cynthia; O'Donnell, Anna; McGaghie, William C; Wayne, Diane B
2010-04-01
Interventions to reduce preventable complications such as catheter-related bloodstream infections (CRBSI) can also decrease hospital costs. However, little is known about the cost-effectiveness of simulation-based education. The aim of this study was to estimate hospital cost savings related to a reduction in CRBSI after simulation training for residents. This was an intervention evaluation study estimating cost savings related to a simulation-based intervention in central venous catheter (CVC) insertion in the Medical Intensive Care Unit (MICU) at an urban teaching hospital. After residents completed a simulation-based mastery learning program in CVC insertion, CRBSI rates declined sharply. Case-control and regression analysis methods were used to estimate savings by comparing CRBSI rates in the year before and after the intervention. Annual savings from reduced CRBSIs were compared with the annual cost of simulation training. Approximately 9.95 CRBSIs were prevented among MICU patients with CVCs in the year after the intervention. Incremental costs attributed to each CRBSI were approximately $82,000 in 2008 dollars and 14 additional hospital days (including 12 MICU days). The annual cost of the simulation-based education was approximately $112,000. Net annual savings were thus greater than $700,000, a 7 to 1 rate of return on the simulation training intervention. A simulation-based educational intervention in CVC insertion was highly cost-effective. These results suggest that investment in simulation training can produce significant medical care cost savings.
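The reported figures can be checked directly; this short sketch reproduces the arithmetic behind the "net annual savings greater than $700,000" and the roughly 7-to-1 return.

```python
# Savings arithmetic from the study: 9.95 prevented CRBSIs at ~$82,000 each,
# against an annual simulation training cost of ~$112,000.
prevented = 9.95
cost_per_crbsi = 82_000
training_cost = 112_000

gross_savings = prevented * cost_per_crbsi        # ~ $815,900
net_savings = gross_savings - training_cost       # ~ $703,900 (> $700,000)
roi = gross_savings / training_cost               # ~ 7.3, the "7 to 1" return
print(round(net_savings), round(roi, 1))
```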
Judd, Belinda K; Scanlan, Justin N; Alison, Jennifer A; Waters, Donna; Gordon, Christopher J
2016-08-05
Despite the recent widespread adoption of simulation in clinical education in physiotherapy, there is a lack of validated tools for assessment in this setting. The Assessment of Physiotherapy Practice (APP) is a comprehensive tool used in clinical placement settings in Australia to measure professional competence of physiotherapy students. The aim of the study was to evaluate the validity of the APP for student assessment in simulation settings. A total of 1260 APPs were collected, 971 from students in simulation and 289 from students in clinical placements. Rasch analysis was used to examine the construct validity of the APP tool in three different simulation assessment formats: longitudinal assessment over 1 week of simulation; longitudinal assessment over 2 weeks; and a short-form (25 min) assessment of a single simulation scenario. Comparison with APPs from 5 week clinical placements in hospital and clinic-based settings were also conducted. The APP demonstrated acceptable fit to the expectations of the Rasch model for the 1 and 2 week clinical simulations, exhibiting unidimensional properties that were able to distinguish different levels of student performance. For the short-form simulation, nine of the 20 items recorded greater than 25 % of scores as 'not-assessed' by clinical educators which impacted on the suitability of the APP tool in this simulation format. The APP was a valid assessment tool when used in longitudinal simulation formats. A revised APP may be required for assessment in short-form simulation scenarios.
Finite element analysis of 2-station hip simulator
NASA Astrophysics Data System (ADS)
Fazli, M. I. M.; Yahya, A.; Shahrom, A.; Nawawi, S. W.; Zainudin, M. R.; Nazarudin, M. S.
2017-10-01
This paper presents an analysis of the materials and design architecture of a 2-station hip simulator, a machine used to conduct joint and wear tests of hip prostheses. In earlier work, the hip simulator was modified and improved using SolidWorks software. The simulator has 3 degrees of freedom, each controlled by a separate stepper motor, and a static load that is set manually at each station. In this work, finite element analysis (FEA) of the hip simulator was performed to analyse the structure of the design and the materials selected for the simulator components. The analysis covers two categories: safety factor and stress. Both the design drawing and the FEA were done using SolidWorks. The two analyses were performed by applying a peak load of up to 4000 N on the main frame fitted with a metal-on-metal hip prosthesis. From the FEA, the safety factor and the degree of stress formation were obtained. All components exceed a safety factor of 2, while the computed stresses in some regions exceed the yield strength of the material. These results identify the parts of the simulator most susceptible to failure, and can be used to improve the design and certify the stability of the hip simulator in real applications.
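A minimal sketch of the two checks described (safety factor and stress versus yield), with hypothetical material and stress values standing in for the actual FEA output.

```python
# Hypothetical numbers illustrating the two reported checks under the 4000 N
# peak load: factor of safety and a direct stress-versus-yield comparison.
yield_strength = 250e6        # Pa, e.g., a structural steel (assumed)
von_mises_peak = 95e6         # Pa, peak stress from the FEA (hypothetical)

safety_factor = yield_strength / von_mises_peak
print(f"FoS = {safety_factor:.1f} (design target: FoS >= 2)")
print("stress exceeds yield" if von_mises_peak > yield_strength
      else "stress within yield")
```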
The Impact of Grading on a Curve: Assessing the Results of Kulick and Wright's Simulation Analysis
ERIC Educational Resources Information Center
Bailey, Gary L.; Steed, Ronald C.
2012-01-01
Kulick and Wright concluded, based on theoretical mathematical simulations of hypothetical student exam scores, that assigning exam grades to students based on the relative position of their exam performance scores within a normal curve may be unfair, given the role that randomness plays in any given student's performance on any given exam.…
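A toy version of the kind of simulation at issue: scores are true ability plus exam-day noise, and grades are assigned from each score's position on the curve, so the noise alone can move a student across grade boundaries from one trial to the next. All distributions are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
ability = rng.normal(75, 10, 200)             # hypothetical class of 200
for trial in range(3):
    scores = ability + rng.normal(0, 8, 200)  # randomness in exam performance
    z = (scores - scores.mean()) / scores.std()
    grades = np.digitize(z, [-1.5, -0.5, 0.5, 1.5])   # F..A by curve position
    print("student 0 grade:", "FDCBA"[grades[0]])     # can vary across trials
```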
Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...
2015-12-04
Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
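As an illustration of one of the four SA approaches named above, standardized regression coefficients: regress the z-scored response on z-scored parameters and read the absolute coefficients as sensitivity measures. The data here are synthetic; the study itself uses Community Land Model simulations.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 256, 10                                 # samples x hydrologic parameters
params = rng.uniform(0, 1, size=(n, p))
# Synthetic response: parameters 0 and 3 actually matter, plus noise.
runoff = 3.0 * params[:, 0] - 1.5 * params[:, 3] + rng.normal(0, 0.2, n)

z = (params - params.mean(0)) / params.std(0)  # standardize inputs
y = (runoff - runoff.mean()) / runoff.std()    # standardize output
src, *_ = np.linalg.lstsq(z, y, rcond=None)    # standardized regression coeffs
print("sensitivity ranking:", np.argsort(-np.abs(src)))   # 0 and 3 dominate
```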
Circuit-based versus full-wave modelling of active microwave circuits
NASA Astrophysics Data System (ADS)
Bukvić, Branko; Ilić, Andjelija Ž.; Ilić, Milan M.
2018-03-01
Modern full-wave computational tools enable rigorous simulations of linear parts of complex microwave circuits within minutes, taking into account all physical electromagnetic (EM) phenomena. Non-linear components and other discrete elements of the hybrid microwave circuit are then easily added within the circuit simulator. This combined full-wave and circuit-based analysis is a must in the final stages of the circuit design, although initial designs and optimisations are still faster and more comfortably done completely in the circuit-based environment, which offers real-time solutions at the expense of accuracy. However, due to insufficient information and general lack of specific case studies, practitioners still struggle when choosing an appropriate analysis method, or a component model, because different choices lead to different solutions, often with uncertain accuracy and unexplained discrepancies arising between the simulations and measurements. We here design a reconfigurable power amplifier, as a case study, using both circuit-based solver and a full-wave EM solver. We compare numerical simulations with measurements on the manufactured prototypes, discussing the obtained differences, pointing out the importance of measured parameters de-embedding, appropriate modelling of discrete components and giving specific recipes for good modelling practices.
Hybrid modeling and empirical analysis of automobile supply chain network
NASA Astrophysics Data System (ADS)
Sun, Jun-yan; Tang, Jian-ming; Fu, Wei-ping; Wu, Bing-ying
2017-05-01
Based on a connection mechanism in which nodes automatically select upstream and downstream agents, a simulation model for the dynamic evolutionary process of a consumer-driven automobile supply chain is established by integrating ABM and discrete modeling in a GIS-based map. First, the model's rationality is demonstrated by analyzing the consistency of sales, and of changes in various agent parameters, between the simulation model and a real automobile supply chain. Second, through complex network theory, hierarchical structures of the model and relationships of networks at different levels are analyzed to calculate characteristic parameters such as mean distance, mean clustering coefficients, and degree distributions, verifying that the model is a typical scale-free, small-world network. Finally, the motion law of the model is analyzed from the perspective of complex self-adaptive systems. The chaotic state of the simulation system is verified, which suggests that the system has typical nonlinear characteristics. This model not only macroscopically illustrates the dynamic evolution of the complex network of an automobile supply chain but also microcosmically reflects the business process of each agent. Moreover, constructing and simulating the system by combining CAS theory and complex networks supplies a novel method for supply chain analysis, as well as a theoretical basis and practical experience for the supply chain analysis of auto companies.
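A sketch of the network-level characterization described (mean distance, clustering, degree distribution), run on a synthetic scale-free graph standing in for the supply chain topology.

```python
import networkx as nx

# Preferential-attachment graph as a stand-in for the supply chain network.
g = nx.barabasi_albert_graph(500, 2, seed=42)

print("mean distance:", nx.average_shortest_path_length(g))
print("mean clustering:", nx.average_clustering(g))
degrees = [d for _, d in g.degree()]
print("max degree:", max(degrees))     # heavy-tailed degrees hint at scale-free
```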
A Model for Simulating the Response of Aluminum Honeycomb Structure to Transverse Loading
NASA Technical Reports Server (NTRS)
Ratcliffe, James G.; Czabaj, Michael W.; Jackson, Wade C.
2012-01-01
A 1-dimensional material model was developed for simulating the transverse (thickness-direction) loading and unloading response of aluminum honeycomb structure. The model was implemented as a user-defined material subroutine (UMAT) in the commercial finite element analysis code, ABAQUS(Registered TradeMark)/Standard. The UMAT has been applied to analyses for simulating quasi-static indentation tests on aluminum honeycomb-based sandwich plates. Comparison of analysis results with data from these experiments shows overall good agreement. Specifically, analyses of quasi-static indentation tests yielded accurate global specimen responses. Predicted residual indentation was also in reasonable agreement with measured values. Overall, this simple model does not involve a significant computational burden, which makes it more tractable to simulate other damage mechanisms in the same analysis.
Improvements to information management systems simulator
NASA Technical Reports Server (NTRS)
Bilek, R. W.
1972-01-01
The performance of personnel in the augmentation and improvement of the interactive IMSIM information management simulation model is summarized. With this augmented model, NASA now has even greater capabilities for the simulation of computer system configurations, data processing loads imposed on these configurations, and executive software to control system operations. Through these simulations, NASA has an extremely cost effective capability for the design and analysis of computer-based data management systems.
NASA Technical Reports Server (NTRS)
Berchem, J.; Raeder, J.; Walker, R. J.; Ashour-Abdalla, M.
1995-01-01
We report on the development of an interactive system for visualizing and analyzing numerical simulation results. This system is based on visualization modules which use the Application Visualization System (AVS) and the NCAR graphics packages. Examples from recent simulations are presented to illustrate how these modules can be used for displaying and manipulating simulation results to facilitate their comparison with phenomenological model results and observations.
A Methodology for Evaluating the Fidelity of Ground-Based Flight Simulators
NASA Technical Reports Server (NTRS)
Zeyada, Y.; Hess, R. A.
1999-01-01
An analytical and experimental investigation was undertaken to model the manner in which pilots perceive and utilize visual, proprioceptive, and vestibular cues in a ground-based flight simulator. The study was part of a larger research effort which has the creation of a methodology for determining flight simulator fidelity requirements as its ultimate goal. The study utilized a closed-loop feedback structure of the pilot/simulator system which included the pilot, the cockpit inceptor, the dynamics of the simulated vehicle and the motion system. With the exception of time delays which accrued in visual scene production in the simulator, visual scene effects were not included in this study. The NASA Ames Vertical Motion Simulator was used in a simple, single-degree of freedom rotorcraft bob-up/down maneuver. Pilot/vehicle analysis and fuzzy-inference identification were employed to study the changes in fidelity which occurred as the characteristics of the motion system were varied over five configurations. The data from three of the five pilots that participated in the experimental study were analyzed in the fuzzy-inference identification. Results indicate that both the analytical pilot/vehicle analysis and the fuzzy-inference identification can be used to reflect changes in simulator fidelity for the task examined.
Modular Analytical Multicomponent Analysis in Gas Sensor Arrays
Chaiyboun, Ali; Traute, Rüdiger; Kiesewetter, Olaf; Ahlers, Simon; Müller, Gerhard; Doll, Theodor
2006-01-01
A multi-sensor system is a chemical sensor system which quantitatively and qualitatively records gases with a combination of cross-sensitive gas sensor arrays and pattern recognition software. This paper addresses the issue of data analysis for the identification of gases in a gas sensor array. We introduce a software tool for gas sensor array configuration and simulation: a modular software package for acquiring data from different sensors. A signal evaluation algorithm referred to as the matrix method was used for the software tool; it computes the gas concentrations from the signals of a sensor array. The software tool was used to simulate an array of five sensors and determine the gas concentrations of CH4, NH3, H2, CO and C2H5OH. The results for the simulated sensor array indicate that the software tool is capable of the following: (a) identifying a gas independently of its concentration; (b) estimating the concentration of the gas, even if the system was not previously exposed to this concentration; (c) telling when a gas concentration exceeds a certain value. A gas sensor database was built for configuring the software; with it, one can create, generate and manage scenarios and source files for the simulation. From the gas sensor database and the simulation software, an on-line Web-based version was developed with which the user can configure and simulate sensor arrays on-line.
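A minimal sketch of the matrix method as described: with a calibrated sensitivity matrix M giving each sensor's response to each gas, the concentration vector c is recovered from the signal vector s = M c by least squares. The matrix and mixture below are invented.

```python
import numpy as np

# Rows: 5 sensors; columns: CH4, NH3, H2, CO, C2H5OH (hypothetical calibration).
M = np.array([[0.9, 0.1, 0.2, 0.1, 0.0],
              [0.1, 0.8, 0.1, 0.0, 0.1],
              [0.2, 0.1, 0.7, 0.2, 0.1],
              [0.0, 0.1, 0.3, 0.9, 0.1],
              [0.1, 0.0, 0.1, 0.1, 0.8]])
c_true = np.array([2.0, 0.0, 1.0, 0.5, 0.0])    # ppm, hypothetical mixture
s = M @ c_true                                   # simulated array signals

c_est, *_ = np.linalg.lstsq(M, s, rcond=None)    # invert the calibration
print(np.round(c_est, 2))                        # recovers the concentrations
```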
Analysis of simulated image sequences from sensors for restricted-visibility operations
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar
1991-01-01
A real time model of the visible output from a 94 GHz sensor, based on a radiometric simulation of the sensor, was developed. A sequence of images as seen from an aircraft as it approaches for landing was simulated using this model. Thirty frames from this sequence of 200 x 200 pixel images were analyzed to identify and track objects in the image using the Cantata image processing package within the visual programming environment provided by the Khoros software system. The image analysis operations are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Honrubia-Escribano, A.; Jimenez-Buendia, F.; Molina-Garcia, A.
This paper presents the current status of simplified wind turbine models used for power system stability analysis. This work is based on the ongoing work being developed in IEC 61400-27. This international standard, for which a technical committee was convened in October 2009, is focused on defining generic (also known as simplified) simulation models for both wind turbines and wind power plants. The results of the paper provide an improved understanding of the usability of generic models to conduct power system simulations.
An Analysis of the Verdicts and Decision-Making Variables of Simulated Juries.
ERIC Educational Resources Information Center
Anapol, Malthon M.
In order to examine jury deliberations, researchers simulated and videotaped court proceedings and jury deliberations based upon an actual civil court case. Special care was taken to make the simulated trial as authentic as the original trial. College students and the general public provided the jurors, which were then divided into twelve separate…
An application of sedimentation simulation in Tahe oilfield
NASA Astrophysics Data System (ADS)
Tingting, He; Lei, Zhao; Xin, Tan; Dongxu, He
2017-12-01
A braided river delta developed in the Triassic lower oil formation of block 9 of the Tahe oilfield, but its sedimentary evolution process is unclear. Using sedimentation simulation technology, the sedimentation process and distribution of the braided river delta are studied based on geological parameters including sequence stratigraphic division, initial sedimentation environment, relative lake-level and accommodation change, source supply and sediment transport pattern. The simulation results show that the error between simulated and actual strata thickness is small, and the single-well analysis from the simulation is highly consistent with the actual analysis, indicating that the model is reliable. The study area followed a retrogradational braided river delta evolution, which provides a favorable basis for fine reservoir description and prediction.
ERIC Educational Resources Information Center
Flowers, Claudia P.; Raju, Nambury S.; Oshima, T. C.
Current interest in the assessment of measurement equivalence emphasizes two methods of analysis, linear, and nonlinear procedures. This study simulated data using the graded response model to examine the performance of linear (confirmatory factor analysis or CFA) and nonlinear (item-response-theory-based differential item function or IRT-Based…
Opportunities and pitfalls in clinical proof-of-concept: principles and examples.
Chen, Chao
2018-04-01
Clinical proof-of-concept trials crucially inform major resource deployment decisions. This paper discusses several mechanisms for enhancing their rigour and efficiency. The importance of careful consideration when using a surrogate endpoint is illustrated; situational effectiveness of run-in patient enrichment is explored; a versatile tool is introduced to ensure a strong pharmacological underpinning; the benefits of dose-titration are revealed by simulation; and the importance of adequately scheduled observations is shown. The general process of model-based trial design and analysis is described and several examples demonstrate the value in historical data, simulation-guided design, model-based analysis and trial adaptation informed by interim analysis. Copyright © 2018 Elsevier Ltd. All rights reserved.
Simulation-Based Probabilistic Tsunami Hazard Analysis: Empirical and Robust Hazard Predictions
NASA Astrophysics Data System (ADS)
De Risi, Raffaele; Goda, Katsuichiro
2017-08-01
Probabilistic tsunami hazard analysis (PTHA) is the prerequisite for rigorous risk assessment and thus for decision-making regarding risk mitigation strategies. This paper proposes a new simulation-based methodology for tsunami hazard assessment for a specific site of an engineering project along the coast, or, more broadly, for a wider tsunami-prone region. The methodology incorporates numerous uncertain parameters that are related to geophysical processes by adopting new scaling relationships for tsunamigenic seismic regions. Through the proposed methodology it is possible to obtain either a tsunami hazard curve for a single location, that is the representation of a tsunami intensity measure (such as inundation depth) versus its mean annual rate of occurrence, or tsunami hazard maps, representing the expected tsunami intensity measures within a geographical area, for a specific probability of occurrence in a given time window. In addition to the conventional tsunami hazard curve that is based on an empirical statistical representation of the simulation-based PTHA results, this study presents a robust tsunami hazard curve, which is based on a Bayesian fitting methodology. The robust approach allows a significant reduction of the number of simulations and, therefore, a reduction of the computational effort. Both methods produce a central estimate of the hazard as well as a confidence interval, facilitating the rigorous quantification of the hazard uncertainties.
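A minimal empirical hazard-curve sketch in the sense described: simulated intensity measures, each tied to an event with a mean annual rate, yield the annual rate at which each intensity level is exceeded. All inputs below are invented.

```python
import numpy as np

depths = np.array([0.2, 0.5, 0.8, 1.4, 2.3, 3.1])        # inundation depths (m)
rates = np.array([1e-2, 8e-3, 5e-3, 2e-3, 8e-4, 2e-4])   # event rates (1/yr)

order = np.argsort(depths)
d_sorted = depths[order]
# Rate of exceeding each depth: sum the rates of all events at least that deep.
exceed_rate = np.cumsum(rates[order][::-1])[::-1]
for d, r in zip(d_sorted, exceed_rate):
    print(f"depth >= {d:.1f} m at {r:.1e} /yr")
```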
Development and validation of the Simulation Learning Effectiveness Scale for nursing students.
Pai, Hsiang-Chu
2016-11-01
To develop and validate the Simulation Learning Effectiveness Scale, which is based on Bandura's social cognitive theory. A simulation programme is a significant teaching strategy for nursing students. Nevertheless, there are few evidence-based instruments that validate the effectiveness of simulation learning in Taiwan. This is a quantitative descriptive design. In Study 1, a nonprobability convenience sample of 151 student nurses completed the Simulation Learning Effectiveness Scale. Exploratory factor analysis was used to examine the factor structure of the instrument. In Study 2, which involved 365 student nurses, confirmatory factor analysis and structural equation modelling were used to analyse the construct validity of the Simulation Learning Effectiveness Scale. In Study 1, exploratory factor analysis yielded three components: self-regulation, self-efficacy and self-motivation. The three factors explained 29·09, 27·74 and 19·32% of the variance, respectively. The final 12-item instrument with the three factors explained 76·15% of variance. Cronbach's alpha was 0·94. In Study 2, confirmatory factor analysis identified a second-order factor termed Simulation Learning Effectiveness Scale. Goodness-of-fit indices showed an acceptable fit overall with the full model (χ²/df(51) = 3·54, comparative fit index = 0·96, Tucker-Lewis index = 0·95 and standardised root-mean-square residual = 0·035). In addition, teacher's competence in encouraging learning, and self-reflection and insight, were significantly and positively associated with the Simulation Learning Effectiveness Scale. Teacher's competence in encouraging learning also was significantly and positively associated with self-reflection and insight. Overall, these variables explained 21·9% of the variance in the student's learning effectiveness. The Simulation Learning Effectiveness Scale is a reliable and valid means to assess simulation learning effectiveness for nursing students. The Simulation Learning Effectiveness Scale can be used to examine nursing students' learning effectiveness and serve as a basis to improve student's learning efficiency through simulation programmes. Future implementation research that focuses on the relationship between learning effectiveness and nursing competence in nursing students is recommended. © 2016 John Wiley & Sons Ltd.
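A sketch of the first validation step (exploratory factor analysis), using scikit-learn on synthetic responses; the study itself analysed 12 scale items from 151 student nurses and extracted three factors.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
latent = rng.normal(size=(151, 3))                   # 3 hypothetical factors
loadings = rng.uniform(0.4, 0.9, size=(3, 12))
items = latent @ loadings + rng.normal(0, 0.5, (151, 12))  # 12 scale items

fa = FactorAnalysis(n_components=3, rotation="varimax").fit(items)
print(np.round(fa.components_, 2))                   # item loadings per factor
```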
McNeer, Richard R; Bennett, Christopher L; Dudaryk, Roman
2016-02-01
Operating rooms are identified as being one of the noisiest of clinical environments, and intraoperative noise is associated with adverse effects on staff and patient safety. Simulation-based experiments would offer controllable and safe venues for investigating this noise problem. However, realistic simulation of the clinical auditory environment is rare in current simulators. Therefore, we retrofitted our operating room simulator to be able to produce immersive auditory simulations with the use of typical sound sources encountered during surgeries. Then, we tested the hypothesis that anesthesia residents would perceive greater task load and fatigue while being given simulated lunch breaks in noisy environments rather than in quiet ones. As a secondary objective, we proposed and tested the plausibility of a novel psychometric instrument for the assessment of stress. In this simulation-based, randomized, repeated-measures, crossover study, 2 validated psychometric survey instruments, the NASA Task Load Index (NASA-TLX), composed of 6 items, and the Swedish Occupational Fatigue Inventory (SOFI), composed of 5 items, were used to assess perceived task load and fatigue, respectively, in first-year anesthesia residents. Residents completed the psychometric instruments after being given lunch breaks in quiet and noisy intraoperative environments (soundscapes). The effects of soundscape grouping on the psychometric instruments and their constituent items were analyzed with a split-plot analysis. A model for a new psychometric instrument for measuring stress that combines the NASA-TLX and SOFI instruments was proposed, and a factor analysis was performed on the collected data to determine the model's plausibility. Twenty residents participated in this study. Multivariate analysis of variance showed an effect of soundscape grouping on the combined NASA-TLX and SOFI instrument items (P = 0.003), and the univariate comparisons reached significance for the NASA Temporal Demand item (P = 0.0004) and the SOFI Lack of Energy item (P = 0.001). Factor analysis extracted 4 factors, which were assigned the following construct names for model development: Psychological Task Load, Psychological Fatigue, Acute Physical Load, and Performance-Chronic Physical Load. Six of the 7 fit tests used in the partial confirmatory factor analysis were positive when we fitted the data to the proposed model, suggesting that further validation is warranted. This study provides evidence that noise during surgery can increase feelings of stress, as measured by perceived task load and fatigue levels, in anesthesiologists, and adds to the growing literature pointing to an overall adverse impact of clinical noise on caregivers and patient safety. The psychometric model proposed in this study for assessing perceived stress is plausible based on factor analysis and will be useful for characterizing the impact of the clinical environment on subject stress levels in future investigations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrivastava, Manish; Zhao, Chun; Easter, Richard C.
We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx, 2 involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile SOA to non-volatile is on or off, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance. This study highlights the large sensitivity of SOA loadings to the particle-phase transformation of SOA volatility, which is neglected in most previous models.
SINERGIA laparoscopic virtual reality simulator: didactic design and technical development.
Lamata, Pablo; Gómez, Enrique J; Sánchez-Margallo, Francisco M; López, Oscar; Monserrat, Carlos; García, Verónica; Alberola, Carlos; Florido, Miguel Angel Rodríguez; Ruiz, Juan; Usón, Jesús
2007-03-01
VR laparoscopic simulators have demonstrated their validity in recent studies, and research should be directed towards high training effectiveness and efficacy. In this direction, an insight into simulators' didactic design and technical development is provided by describing the methodology followed in building the SINERGIA simulator. It departs from a clear analysis of training needs driven by a surgical training curriculum. Existing solutions and validation studies are an important reference for the definition of specifications, which are described with a suitable use of simulation technologies. Five new didactic exercises are proposed to train some of the basic laparoscopic skills. Simulator construction has required existing algorithms and the development of a particle-based biomechanical model, called PARSYS, and a collision handling solution based on a multi-point strategy. The resulting VR laparoscopic simulator includes new exercises and enhanced simulation technologies, and is finding very good acceptance among surgeons.
A power study of bivariate LOD score analysis of a complex trait and fear/discomfort with strangers.
Ji, Fei; Lee, Dayoung; Mendell, Nancy Role
2005-12-30
Complex diseases are often reported along with disease-related traits (DRT). Sometimes investigators consider both disease and DRT phenotypes separately and sometimes they consider individuals as affected if they have either the disease or the DRT, or both. We propose instead to consider the joint distribution of the disease and the DRT and do a linkage analysis assuming a pleiotropic model. We evaluated our results through analysis of the simulated datasets provided by Genetic Analysis Workshop 14. We first conducted univariate linkage analysis of the simulated disease, Kofendrerd Personality Disorder and one of its simulated associated traits, phenotype b (fear/discomfort with strangers). Subsequently, we considered the bivariate phenotype, which combined the information on Kofendrerd Personality Disorder and fear/discomfort with strangers. We developed a program to perform bivariate linkage analysis using an extension to the Elston-Stewart peeling method of likelihood calculation. Using this program we considered the microsatellites within 30 cM of the gene pleiotropic for this simulated disease and DRT. Based on 100 simulations of 300 families we observed excellent power to detect linkage within 10 cM of the disease locus using the DRT and the bivariate trait.
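For orientation, a minimal two-point LOD computation for phase-known meioses; the workshop analyses use the Elston-Stewart peeling likelihood over full pedigrees, so this is only the simplest instance of the score being maximized.

```python
import numpy as np

def lod(theta: float, r: int, n: int) -> float:
    """log10 likelihood ratio for linkage at recombination fraction theta,
    given r recombinants out of n phase-known meioses."""
    return (r * np.log10(theta) + (n - r) * np.log10(1 - theta)
            - n * np.log10(0.5))

thetas = np.linspace(0.01, 0.4, 40)
scores = [lod(t, r=2, n=20) for t in thetas]     # 2 recombinants in 20 meioses
best = thetas[int(np.argmax(scores))]
print(f"max LOD = {max(scores):.2f} at theta = {best:.2f}")
```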
NASA Technical Reports Server (NTRS)
Shackelford, John H.; Saugen, John D.; Wurst, Michael J.; Adler, James
1991-01-01
A generic planar 3 degree of freedom simulation was developed that supports hardware in the loop simulations, guidance and control analysis, and can directly generate flight software. This simulation was developed in a small amount of time utilizing rapid prototyping techniques. The approach taken to develop this simulation tool, the benefits seen using this approach to development, and on-going efforts to improve and extend this capability are described. The simulation is composed of 3 major elements: (1) Docker dynamics model, (2) Dockee dynamics model, and (3) Docker Control System. The docker and dockee models are based on simple planar orbital dynamics equations using a spherical earth gravity model. The docker control system is based on a phase plane approach to error correction.
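A sketch of a phase-plane error-correction rule of the kind named as element (3): thrust commands are chosen from the position in the (error, error-rate) plane, with a deadband to limit fuel use. The switching-line slope, deadband, and command convention are hypothetical.

```python
def phase_plane_command(err: float, err_rate: float,
                        deadband: float = 0.05, slope: float = 0.5) -> int:
    """Return -1, 0 or +1 thrust command from position in the phase plane."""
    switch = err + slope * err_rate      # value of the switching line
    if abs(switch) <= deadband:
        return 0                         # inside the deadband: coast
    return -1 if switch > 0 else 1       # fire to drive the state back

print(phase_plane_command(0.2, -0.1))    # -1: corrective burn
print(phase_plane_command(0.01, 0.02))   # 0: coasting inside the deadband
```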
Verification of Sulfate Attack Penetration Rates for Saltstone Disposal Unit Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G. P.
Recent Special Analysis modeling of Saltstone Disposal Units considers sulfate attack on concrete and utilizes degradation rates estimated from Cementitious Barriers Partnership software simulations. This study provides an independent verification of those simulation results using an alternative analysis method and an independent characterization data source. The sulfate penetration depths estimated herein are similar to the best-estimate values in SRNL-STI-2013-00118 Rev. 2 and well below the nominal values subsequently used to define Saltstone Special Analysis base cases.
Challenges of interprofessional team training: a qualitative analysis of residents' perceptions.
van Schaik, Sandrijn; Plant, Jennifer; O'Brien, Bridget
2015-01-01
Simulation-based interprofessional team training is thought to improve patient care. Participating teams often consist of both experienced providers and trainees, which likely impacts team dynamics, particularly when a resident leads the team. Although similar team composition is found in real-life, debriefing after simulations puts a spotlight on team interactions and in particular on residents in the role of team leader. The goal of the current study was to explore residents' perceptions of simulation-based interprofessional team training. This was a secondary analysis of a study of residents in the pediatric residency training program at the University of California, San Francisco (United States) leading interprofessional teams in simulated resuscitations, followed by facilitated debriefing. Residents participated in individual, semi-structured, audio-recorded interviews within one month of the simulation. The original study aimed to examine residents' self-assessment of leadership skills, and during analysis we encountered numerous comments regarding the interprofessional nature of the simulation training. We therefore performed a secondary analysis of the interview transcripts. We followed an iterative process to create a coding scheme, and used interprofessional learning and practice as sensitizing concepts to extract relevant themes. 16 residents participated in the study. Residents felt that simulated resuscitations were helpful but anxiety provoking, largely due to interprofessional dynamics. They embraced the interprofessional training opportunity and appreciated hearing other healthcare providers' perspectives, but questioned the value of interprofessional debriefing. They identified the need to maintain positive relationships with colleagues in light of the teams' complex hierarchy as a barrier to candid feedback. Pediatric residents in our study appreciated the opportunity to participate in interprofessional team training but were conflicted about the value of feedback and debriefing in this setting. These data indicate that the optimal approach to such interprofessional education activities deserves further study.
NASA Technical Reports Server (NTRS)
Stupl, Jan; Faber, Nicolas; Foster, Cyrus; Yang, Fan Yang; Nelson, Bron; Aziz, Jonathan; Nuttall, Andrew; Henze, Chris; Levit, Creon
2014-01-01
This paper provides an updated efficiency analysis of the LightForce space debris collision avoidance scheme. LightForce aims to prevent collisions on warning by utilizing photon pressure from ground based, commercial off the shelf lasers. Past research has shown that a few ground-based systems consisting of 10 kilowatt class lasers directed by 1.5 meter telescopes with adaptive optics could lower the expected number of collisions in Low Earth Orbit (LEO) by an order of magnitude. Our simulation approach utilizes the entire Two Line Element (TLE) catalogue in LEO for a given day as initial input. Least-squares fitting of a TLE time series is used for an improved orbit estimate. We then calculate the probability of collision for all LEO objects in the catalogue for a time step of the simulation. The conjunctions that exceed a threshold probability of collision are then engaged by a simulated network of laser ground stations. After those engagements, the perturbed orbits are used to re-assess the probability of collision and evaluate the efficiency of the system. This paper describes new simulations with three updated aspects: 1) By utilizing a highly parallel simulation approach employing hundreds of processors, we have extended our analysis to a much broader dataset. The simulation time is extended to one year. 2) We analyze not only the efficiency of LightForce on conjunctions that naturally occur, but also take into account conjunctions caused by orbit perturbations due to LightForce engagements. 3) We use a new simulation approach that is regularly updating the LightForce engagement strategy, as it would be during actual operations. In this paper we present our simulation approach to parallelize the efficiency analysis, its computational performance and the resulting expected efficiency of the LightForce collision avoidance system. Results indicate that utilizing a network of four LightForce stations with 20 kilowatt lasers, 85% of all conjunctions with a probability of collision Pc > 10⁻⁶ can be mitigated.
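A sketch of the screening step described, assuming a Gaussian relative miss distance in the encounter plane and a Monte Carlo estimate of the collision probability; the covariance, hard-body radius, and mean miss are illustrative, while the 10⁻⁶ threshold is the one quoted above. This is not the paper's Pc method, only a plausible stand-in.

```python
import numpy as np

rng = np.random.default_rng(3)
miss_mean = np.array([120.0, -40.0])           # m, mean miss in encounter plane
cov = np.array([[200.0**2, 0.0], [0.0, 80.0**2]])   # position uncertainty
hard_body_radius = 10.0                        # m, combined object radius

samples = rng.multivariate_normal(miss_mean, cov, size=1_000_000)
pc = np.mean(np.hypot(samples[:, 0], samples[:, 1]) < hard_body_radius)
if pc > 1e-6:
    print(f"Pc = {pc:.2e}: engage with ground-based laser")
```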
Comparative study on gene set and pathway topology-based enrichment methods.
Bayerlová, Michaela; Jung, Klaus; Kramer, Frank; Klemm, Florian; Bleckmann, Annalen; Beißbarth, Tim
2015-10-22
Enrichment analysis is a popular approach to identify pathways or sets of genes which are significantly enriched in the context of differentially expressed genes. The traditional gene set enrichment approach considers a pathway as a simple gene list, disregarding any knowledge of gene or protein interactions. In contrast, the newer group of so-called pathway topology-based methods integrates the topological structure of a pathway into the analysis. We comparatively investigated gene set and pathway topology-based enrichment approaches, considering three gene set and four topological methods. These methods were compared in two extensive simulation studies and on a benchmark of 36 real datasets, providing the same pathway input data for all methods. In the benchmark data analysis, both types of methods showed a comparable ability to detect enriched pathways. The first simulation study was conducted with KEGG pathways, which showed considerable gene overlaps between each other. In this study with original KEGG pathways, none of the topology-based methods outperformed the gene set approach. Therefore, a second simulation study was performed on non-overlapping pathways created from unique gene IDs. Here, methods accounting for pathway topology reached higher accuracy than the gene set methods; however, their sensitivity was lower. We conducted one of the first comprehensive comparative studies evaluating gene set against pathway topology-based enrichment methods. The topological methods showed better performance in the simulation scenarios with non-overlapping pathways, but were not conclusively better in the other scenarios. This suggests that a simple gene set approach might be sufficient to detect an enriched pathway under realistic circumstances. Nevertheless, more extensive studies and further benchmark data are needed to systematically evaluate these methods and to assess the gains and costs that pathway topology information introduces into enrichment analysis. Both types of methods for enrichment analysis require further improvements to deal with the problem of pathway overlaps.
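For readers unfamiliar with the "simple gene list" baseline compared here, the traditional over-representation flavor of gene set enrichment reduces to a one-sided hypergeometric test of the overlap between differentially expressed genes and a pathway's member list. The Python sketch below illustrates that computation only; gene identifiers and set sizes are made up, and none of the compared tools' interfaces are reproduced.

from scipy.stats import hypergeom

def gene_set_enrichment_p(universe, de_genes, pathway):
    # One-sided hypergeometric test: how surprising is the overlap
    # between the DE genes and the pathway, given the gene universe?
    universe, de, pw = set(universe), set(de_genes), set(pathway)
    overlap = len(de & pw)
    # P(X >= overlap) with population = |universe|,
    # successes = |pathway in universe|, draws = |DE genes|
    return hypergeom.sf(overlap - 1, len(universe),
                        len(pw & universe), len(de))

universe = [f"g{i}" for i in range(1000)]
de_genes = universe[:50]                      # 50 DE genes
pathway = universe[:10] + universe[500:520]   # 10 of 30 members are DE
print(f"p = {gene_set_enrichment_p(universe, de_genes, pathway):.3g}")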
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rizzi, Silvio; Hereld, Mark; Insley, Joseph
In this work we perform in-situ visualization of molecular dynamics simulations, which can help scientists visualize simulation output on-the-fly, without incurring storage overheads. We present a case study coupling LAMMPS, the large-scale molecular dynamics simulation code, with vl3, our parallel framework for large-scale visualization and analysis. Our motivation is to identify effective approaches for co-visualization and exploration of large-scale atomistic simulations at interactive frame rates. We propose a system of coupled libraries and describe its architecture, with an implementation that runs on GPU-based clusters. We present the results of strong and weak scalability experiments, as well as future research avenues based on our results.
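A driver-level version of this coupling pattern can be sketched with the LAMMPS Python module: advance the simulation in short bursts and hand particle positions to a visualization callback instead of writing them to disk. The lammps Python interface is real, but the input script name and the render stub standing in for vl3 are assumptions; the paper's implementation ships the arrays to GPU-based rendering nodes.

from lammps import lammps

def render(step, positions):
    # Stand-in for the vl3 side of the coupling; a real system would
    # pass these arrays to the parallel visualization framework.
    print(f"step {step}: rendering {len(positions) // 3} atoms")

lmp = lammps()
lmp.file("in.melt")  # hypothetical LAMMPS input script
for step in range(0, 1000, 100):
    lmp.command("run 100 pre no post no")     # advance 100 timesteps
    x = lmp.gather_atoms("x", 1, 3)           # flat array of positions
    render(step + 100, x)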
A Petri Net Approach Based Elementary Siphons Supervisor for Flexible Manufacturing Systems
NASA Astrophysics Data System (ADS)
Abdul-Hussin, Mowafak Hassan
2015-05-01
This paper presents an approach to constructing a class of S3PR nets for the modeling, simulation, and control of processes occurring in flexible manufacturing systems (FMS), based on the elementary siphons of a Petri net. Siphons are central to the analysis and control of deadlocks in FMS, and controlling them is a significant objective. Petri net models enable efficient structural analysis and utilization of FMSs, where different control policies can be implemented that lead to deadlock prevention. We present an effective deadlock-free policy for a special class of Petri nets called S3PR. Both structural analysis and reachability graph analysis are used for the analysis and control of the Petri nets. Petri nets have been successfully established as one of the most powerful tools for modeling FMS; using structural analysis, we show that liveness of such systems can be attributed to the absence of undermarked siphons.
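The structural object at the core of this approach is easy to state: a siphon is a set of places S whose preset is contained in its postset, i.e. every transition that feeds S also drains it, so once S empties it stays empty and can cause deadlock. A brute-force check on a toy net is sketched below in Python; the net and names are illustrative, and practical S3PR tools compute elementary siphons far more efficiently.

from itertools import combinations

# Toy net: arcs as (source, target) pairs; places p*, transitions t*.
arcs = {("p1", "t1"), ("t1", "p2"), ("p2", "t2"), ("t2", "p1")}

def is_siphon(places):
    # Preset of S: transitions with an output place in S.
    preset = {t for (t, p) in arcs if p in places}
    # Postset of S: transitions with an input place in S.
    postset = {t for (p, t) in arcs if p in places}
    return bool(preset) and preset <= postset

places = {"p1", "p2"}
siphons = [set(s) for r in range(1, len(places) + 1)
           for s in combinations(places, r) if is_siphon(set(s))]
print("siphons:", siphons)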
Golebiowski, Jérôme; Antonczak, Serge; Fernandez-Carmona, Juan; Condom, Roger; Cabrol-Bass, Daniel
2004-12-01
Nanosecond molecular dynamics simulations using the Ewald summation method have been performed to elucidate the structural and energetic role of the closing base pair in loop-loop RNA duplexes neutralized by Mg2+ counterions in aqueous phases. GA and CU mismatches and Watson-Crick GC base pairs have been considered for closing the loop of an RNA in complementary interaction with HIV-1 TAR. The simulations reveal that the GA mismatch base pair, mediated by a water molecule, leads to a complex that presents the best compromise between flexibility and energetic contributions. The CU mismatch base pair, in spite of the presence of an inserted water molecule, is too short to achieve a tight interaction at the closing-loop junction and seems to force TAR to reorganize upon binding. An energetic analysis has allowed us to quantify the strength of the interactions of the closing and the loop-loop pairs throughout the simulations. Although the water-mediated GA closing base pair presents an interaction energy similar to that found for the fully geometry-optimized structure, the interaction energy of the water-mediated CU closing base pair reaches less than half the optimal value.
UWB Bandpass Filter with Ultra-wide Stopband based on Ring Resonator
NASA Astrophysics Data System (ADS)
Kazemi, Maryam; Lotfi, Saeedeh; Siahkamari, Hesam; Mohammadpanah, Mahmood
2018-04-01
An ultra-wideband (UWB) bandpass filter with an ultra-wide stopband based on a rectangular ring resonator is presented. The filter is designed for the operational frequency band from 4.10 GHz to 10.80 GHz, with an ultra-wide stopband from 11.23 GHz to 40 GHz. Even- and odd-mode equivalent circuits are used to analyze the proposed filter's performance. To verify the design and analysis, the proposed bandpass filter is simulated using the full-wave EM simulator Advanced Design System and fabricated on a 20 mil thick Rogers RO4003 substrate with a relative permittivity of 3.38 and a loss tangent of 0.0021. The proposed filter's behavior is investigated, and the simulation results are in good agreement with the measured results.
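The even/odd-mode bookkeeping used in such an analysis follows a standard pattern: for a symmetric two-port, S11 and S21 are the half-sum and half-difference of the reflection coefficients of the even- and odd-mode half circuits. The Python sketch below shows only that conversion; the input impedances are hypothetical numbers, not the published filter's equivalent circuit.

Z0 = 50.0  # reference impedance in ohms

def gamma(z_in, z0=Z0):
    # Reflection coefficient of a one-port with input impedance z_in.
    return (z_in - z0) / (z_in + z0)

def s_params(z_even, z_odd):
    # Even/odd-mode decomposition of a symmetric two-port.
    ge, go = gamma(z_even), gamma(z_odd)
    s11 = (ge + go) / 2
    s21 = (ge - go) / 2
    return s11, s21

# Hypothetical even/odd-mode input impedances at one frequency point.
s11, s21 = s_params(z_even=75 + 30j, z_odd=30 - 10j)
print(f"|S11| = {abs(s11):.3f}, |S21| = {abs(s21):.3f}")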
Potter, Julie Elizabeth; Gatward, Jonathan J; Kelly, Michelle A; McKay, Leigh; McCann, Ellie; Elliott, Rosalind M; Perry, Lin
2017-12-01
The approach, communication skills, and confidence of clinicians responsible for raising the option of deceased organ donation may influence families' donation decisions. The aim of this study was to increase the preparedness and confidence of intensive care clinicians allocated to work in a "designated requester" role. We conducted a posttest evaluation of an innovative simulation-based training program. Simulation-based training enabled clinicians to rehearse the "balanced approach" to family donation conversations (FDCs) in the designated requester role. Professional actors played family members in simulated clinical settings using authentic scenarios, with video-assisted reflective debriefing. Participants completed an evaluation after the workshop. Simple descriptive statistical analysis and content analysis were performed. Between January 2013 and July 2015, 25 workshops were undertaken with 86 participants; 82 (95.3%) returned evaluations. Respondents were registered practicing clinicians; over half (44/82; 53.7%) were intensivists. Most attended a single workshop. Evaluations were overwhelmingly positive, with the majority rating the workshops as outstanding (64/80; 80%). Scenario fidelity, the competence of the actors, the opportunity to practice and receive feedback on performance, and feedback from the actors, both in and out of character, were particularly valued. Most respondents (76/78; 97.4%) reported feeling more confident about their designated requester role. Simulation-based communication training for the designated requester role in FDCs increased the knowledge and confidence of clinicians to raise the topic of donation.
Hyper-X Stage Separation Trajectory Validation Studies
NASA Technical Reports Server (NTRS)
Tartabini, Paul V.; Bose, David M.; McMinn, John D.; Martin, John G.; Strovers, Brian K.
2003-01-01
An independent twelve-degree-of-freedom simulation of the X-43A separation trajectory was created with the Program to Optimize Simulated Trajectories (POST II). This simulation modeled the multi-body dynamics of the X-43A and its booster and included the effect of two pyrotechnically actuated pistons used to push the vehicles apart, as well as aerodynamic interaction forces and moments between the two vehicles. The simulation was developed to validate trajectory studies conducted with a fourteen-degree-of-freedom simulation created early in the program using the Automatic Dynamic Analysis of Mechanical Systems (ADAMS) simulation software. The POST simulation was less detailed than the official ADAMS-based simulation used by the Project, but was simpler, more concise, and ran faster, while providing similar results. The increase in speed provided by the POST simulation gave the Project an alternate analysis tool, ideal for performing separation control logic trade studies that required running numerous Monte Carlo trajectories.
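The kind of Monte Carlo trade study the faster POST simulation enabled can be sketched as follows: perturb uncertain separation parameters, run the trajectory model, and count cases meeting a clearance criterion. The dynamics stub and all dispersions in this Python fragment are illustrative assumptions, not the X-43A models.

import random

def separation_clearance(piston_impulse, interaction_moment):
    # Toy stand-in for a 12-DOF separation run: returns relative
    # clearance after separation (positive = clean separation).
    return 0.5 * piston_impulse - 0.8 * abs(interaction_moment)

random.seed(0)
N = 10_000
successes = 0
for _ in range(N):
    impulse = random.gauss(1.0, 0.05)  # assumed piston dispersion
    moment = random.gauss(0.0, 0.2)    # assumed aero-interaction scatter
    if separation_clearance(impulse, moment) > 0.2:
        successes += 1
print(f"clean separations: {successes / N:.1%}")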
Survey of factors influencing learner engagement with simulation debriefing among nursing students.
Roh, Young Sook; Jang, Kie In
2017-12-01
Simulation-based education has expanded worldwide, yet few studies have rigorously explored predictors of learner engagement with simulation debriefing. The purpose of this cross-sectional, descriptive survey was to identify factors that determine learner engagement with simulation debriefing among nursing students. A convenience sample of 296 Korean nursing students enrolled in a simulation-based course completed the survey. A total of five instruments were used: (i) Characteristics of Debriefing; (ii) Debriefing Assessment for Simulation in Healthcare - Student Version; (iii) the Korean version of the Simulation Design Scale; (iv) Communication Skills Scale; and (v) Clinical-Based Stress Scale. Multiple regression analysis was performed to investigate the influencing factors. The results indicated that the factors influencing learner engagement with simulation debriefing were simulation design, confidentiality, stress, and number of students, with simulation design the most important factor. Video-assisted debriefing was not a significant factor affecting learner engagement. Educators should organize and conduct debriefing activities with these factors in mind to effectively induce learner engagement. Further study is needed to identify the effects on learning outcomes of debriefing sessions that target learners' needs and consider situational factors.
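The multiple regression reported here has the usual ordinary least squares form. The sketch below fits such a model on synthetic stand-in data with the study's sample size; the coefficients and predictor scales are invented for illustration, not the survey's actual responses.

import numpy as np

rng = np.random.default_rng(1)
n = 296  # sample size reported in the abstract
design = rng.normal(0, 1, n)
confidentiality = rng.normal(0, 1, n)
stress = rng.normal(0, 1, n)
n_students = rng.normal(0, 1, n)
# Synthetic outcome: engagement driven most strongly by design.
engagement = (0.5 * design + 0.2 * confidentiality
              - 0.15 * stress - 0.1 * n_students + rng.normal(0, 1, n))

X = np.column_stack([np.ones(n), design, confidentiality, stress, n_students])
beta, *_ = np.linalg.lstsq(X, engagement, rcond=None)
for name, b in zip(["intercept", "design", "confidentiality",
                    "stress", "n_students"], beta):
    print(f"{name:16s} {b:+.3f}")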
The effects of changing land cover on streamflow simulation in Puerto Rico
Van Beusekom, Ashley E.; Hay, Lauren E.; Viger, Roland; Gould, William A.; Collazo, Jaime; Henareh Khalyani, Azad
2014-01-01
This study quantitatively explores whether land cover changes have a substantive impact on simulated streamflow within the tropical island setting of Puerto Rico. The Precipitation Runoff Modeling System (PRMS) was used to compare streamflow simulations based on five static parameterizations of land cover with those based on dynamically varying parameters derived from four land cover scenes for the period 1953-2012. The PRMS simulations based on static land cover illustrated consistent differences in simulated streamflow across the island. It was determined that the scale of the analysis makes a difference: large regions with localized areas that have undergone dramatic land cover change may show negligible difference in total streamflow, but streamflow simulations using dynamic land cover parameters for a highly altered subwatershed clearly demonstrate the effects of changing land cover on simulated streamflow. Incorporating dynamic parameterization in these highly altered watersheds can reduce the predictive uncertainty in simulations of streamflow using PRMS. Hydrologic models that do not consider the projected changes in land cover may be inadequate for water resource management planning for future conditions.
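The static-versus-dynamic parameterization contrast can be pictured with a single land-cover-derived parameter: held fixed at its first-scene value versus interpolated between the observed scenes. The scene years match the study period, but the parameter values in this Python sketch are illustrative, not PRMS inputs.

import numpy as np

scene_years = np.array([1953, 1977, 1991, 2012])   # four land cover scenes
cover_param = np.array([0.80, 0.65, 0.55, 0.40])   # e.g., forest fraction

years = np.arange(1953, 2013)
dynamic = np.interp(years, scene_years, cover_param)  # varies each year
static = np.full_like(dynamic, cover_param[0])        # fixed at 1953 value

print(f"2012 parameter: static {static[-1]:.2f} vs dynamic {dynamic[-1]:.2f}")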
NASA Technical Reports Server (NTRS)
Appleby, M. H.; Golightly, M. J.; Hardy, A. C.
1993-01-01
Major improvements have been completed in the approach to the analysis and simulation of spacecraft radiation shielding and exposure. A computer-aided design (CAD)-based system has been developed for determining the amount of shielding provided by a spacecraft and simulating transmission of an incident radiation environment to any point within or external to the vehicle. Shielding analysis is performed using a customized ray-tracing subroutine contained within a standard engineering modeling software package. This improved shielding analysis technique has been used in several vehicle design programs, such as a Mars transfer habitat, a pressurized lunar rover, and the redesigned International Space Station. Results of the analysis performed for the Space Station astronaut exposure assessment are provided to demonstrate the applicability and versatility of the system.
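The ray-tracing idea reduces to casting many rays from a dose point, summing the areal density each ray crosses, and building a shielding distribution for the subsequent transport calculation. The Python sketch below uses a toy shield model in place of a CAD ray-trace; the geometry and numbers are assumptions.

import math, random

def areal_density(direction):
    # Toy shield model: thicker shielding toward the vehicle floor.
    # Stands in for a CAD ray-trace returning g/cm^2 along `direction`.
    _, _, z = direction
    return 5.0 + 3.0 * max(0.0, -z)  # illustrative numbers

def shield_distribution(n_rays=1000, seed=0):
    rng = random.Random(seed)
    densities = []
    for _ in range(n_rays):
        # Sample a uniform direction on the unit sphere.
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        direction = (r * math.cos(phi), r * math.sin(phi), z)
        densities.append(areal_density(direction))
    return densities

d = shield_distribution()
print(f"mean shielding: {sum(d) / len(d):.2f} g/cm^2")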
Effectiveness of patient simulation in nursing education: meta-analysis.
Shin, Sujin; Park, Jin-Hwa; Kim, Jung-Hee
2015-01-01
The use of simulation as an educational tool is becoming increasingly prevalent in nursing education, and a variety of simulators are utilized; nursing facilitators must find ways to promote effective learning among students in clinical practice and classrooms. The aim of this study was to identify the best available evidence about the effects of patient simulation in nursing education through a meta-analysis. This study explored quantitative evidence published in the electronic databases EBSCO, Medline, ScienceDirect, and ERIC. Using a search strategy, we identified 2503 potentially relevant articles; twenty studies were included in the final analysis. We found significant post-intervention improvements in various domains for participants who received simulation education compared to the control groups, with a pooled random-effects standardized mean difference of 0.71, a medium-to-large effect size. In the subgroup analysis, we found that simulation education in nursing had benefits, in terms of effect sizes, when the effects were evaluated through performance, the evaluation outcome was psychomotor skills, the subject of learning was clinical, the learners were clinical nurses or senior undergraduate nursing students, and the simulators were high fidelity. These results indicate that simulation education demonstrated medium to large effect sizes and can guide nurse educators with regard to the conditions under which patient simulation is more effective than traditional learning methods.
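The pooled random-effects estimate quoted above is typically computed with DerSimonian-Laird weighting. The Python sketch below shows that computation on synthetic effect sizes; the five studies and their variances are invented, not the twenty included studies.

import math

def dersimonian_laird(effects, variances):
    # Fixed-effect weights, then the DerSimonian-Laird estimate of
    # between-study variance (tau^2), then random-effects pooling.
    w = [1.0 / v for v in variances]
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * ei for wi, ei in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se

effects = [0.9, 0.5, 0.8, 0.4, 0.7]      # synthetic SMDs
variances = [0.04, 0.02, 0.05, 0.03, 0.04]
pooled, se = dersimonian_laird(effects, variances)
print(f"pooled SMD = {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f})")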