Sample records for computational database screening

  1. DockScreen: A database of in silico biomolecular interactions to support computational toxicology

    EPA Science Inventory

    We have developed DockScreen, a database of in silico biomolecular interactions designed to enable rational molecular toxicological insight within a computational toxicology framework. This database is composed of chemical/target (receptor and enzyme) binding scores calculated by...

  2. Building a medical image processing algorithm verification database

    NASA Astrophysics Data System (ADS)

    Brown, C. Wayne

    2000-06-01

    The design of a database containing head Computed Tomography (CT) studies is presented, along with a justification for the database's composition. The database will be used to validate software algorithms that screen normal head CT studies from studies that contain pathology. The database is designed to have the following major properties: (1) a size sufficient for statistical viability, (2) inclusion of both normal (no pathology) and abnormal scans, (3) inclusion of scans affected by equipment malfunction, technologist error, and uncooperative patients, (4) inclusion of data sets from multiple scanner manufacturers, (5) inclusion of data sets from different gender and age groups, and (6) three independent diagnoses of each data set. Designed correctly, the database will provide a partial basis for FDA (United States Food and Drug Administration) approval of image processing algorithms for clinical use. Our goal for the database is to prove the viability of screening head CTs for normal anatomy using computer algorithms. To put this work into context, a classification scheme for 'computer aided diagnosis' systems is proposed.

  3. Creating and virtually screening databases of fluorescently-labelled compounds for the discovery of target-specific molecular probes

    NASA Astrophysics Data System (ADS)

    Kamstra, Rhiannon L.; Dadgar, Saedeh; Wigg, John; Chowdhury, Morshed A.; Phenix, Christopher P.; Floriano, Wely B.

    2014-11-01

    Our group has recently demonstrated that virtual screening is a useful technique for the identification of target-specific molecular probes. In this paper, we discuss some of our proof-of-concept results involving two biologically relevant target proteins, and report the development of a computational script to generate large databases of fluorescently-labelled compounds for computer-assisted molecular design. The virtual screening of a small library of 1,153 fluorescently-labelled compounds against two targets and the experimental testing of selected hits reveal that this approach is efficient at identifying molecular probes, and that the screening of a labelled library is preferred over the screening of base compounds followed by conjugation of confirmed hits. The automated script for library generation explores the known reactivity of commercially available dyes, such as NHS-esters, to create large virtual databases of fluorescence-tagged small molecules that can be easily synthesized in a laboratory. A database of 14,862 compounds, each tagged with the ATTO680 fluorophore, was generated with the automated script reported here. This library is available for download and is suitable for virtual ligand screening aimed at identifying target-specific fluorescent molecular probes.
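
The authors' generation script is not reproduced in the record; as a rough illustration of the idea, a library generator can be sketched as a function that applies a fixed coupling rule to every base compound. All names, SMILES fragments, and the linker below are hypothetical placeholders, not the actual ATTO680 chemistry handled by the published script.

```python
# Hypothetical sketch of automated dye-conjugate library generation.
# The linker fragment and naive string concatenation are illustrative only;
# a real generator would use a cheminformatics toolkit (e.g. RDKit) to
# perform the virtual reaction on matched functional groups.

AMIDE_LINKER = "C(=O)N"  # stands in for the NHS-ester/amine coupling product


def tag_with_dye(base_smiles, dye_smiles, linker=AMIDE_LINKER):
    """Naively join SMILES fragments to represent a dye conjugate."""
    return base_smiles + linker + dye_smiles


def generate_library(base_compounds, dye_smiles):
    """Tag every base compound with the same dye fragment."""
    return [tag_with_dye(s, dye_smiles) for s in base_compounds]


# two invented base compounds, one invented dye fragment
library = generate_library(["NCCc1ccccc1", "NCC(=O)O"], "c1ccc2ccccc2c1")
```

One conjugate is produced per base compound, mirroring how the published script tagged each of the 14,862 compounds with a single fluorophore.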

  4. iScreen: world's first cloud-computing web server for virtual screening and de novo drug design based on TCM database@Taiwan

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Ying; Chang, Kai-Wei; Chen, Calvin Yu-Chian

    2011-06-01

    Rapidly advancing research on traditional Chinese medicine (TCM) has greatly intrigued pharmaceutical industries worldwide. To take the initiative in the next generation of drug development, we constructed a cloud-computing system for TCM intelligent screening (iScreen) based on TCM Database@Taiwan. iScreen is a compact web server for TCM docking followed by customized de novo drug design. We further implemented a protein preparation tool that both extracts the protein of interest from a raw input file and estimates the size of the ligand binding site. In addition, iScreen is designed with a user-friendly graphical interface for users who have less experience with command-line systems. For customized docking, multiple docking services, including standard, in-water, pH-environment, and flexible docking modes, are implemented. Users can download the first 200 TCM compounds from the best docking results. For TCM de novo drug design, iScreen provides multiple molecular descriptors for a user's interest. iScreen is the world's first web server that employs the world's largest TCM database for virtual screening and de novo drug design. We believe our web server can lead TCM research into a new era of drug development. The TCM docking and screening server is available at http://iScreen.cmu.edu.tw/.

  5. iScreen: world's first cloud-computing web server for virtual screening and de novo drug design based on TCM database@Taiwan.

    PubMed

    Tsai, Tsung-Ying; Chang, Kai-Wei; Chen, Calvin Yu-Chian

    2011-06-01

    Rapidly advancing research on traditional Chinese medicine (TCM) has greatly intrigued pharmaceutical industries worldwide. To take the initiative in the next generation of drug development, we constructed a cloud-computing system for TCM intelligent screening (iScreen) based on TCM Database@Taiwan. iScreen is a compact web server for TCM docking followed by customized de novo drug design. We further implemented a protein preparation tool that both extracts the protein of interest from a raw input file and estimates the size of the ligand binding site. In addition, iScreen is designed with a user-friendly graphical interface for users who have less experience with command-line systems. For customized docking, multiple docking services, including standard, in-water, pH-environment, and flexible docking modes, are implemented. Users can download the first 200 TCM compounds from the best docking results. For TCM de novo drug design, iScreen provides multiple molecular descriptors for a user's interest. iScreen is the world's first web server that employs the world's largest TCM database for virtual screening and de novo drug design. We believe our web server can lead TCM research into a new era of drug development. The TCM docking and screening server is available at http://iScreen.cmu.edu.tw/.

  6. Safeguarding Databases Basic Concepts Revisited.

    ERIC Educational Resources Information Center

    Cardinali, Richard

    1995-01-01

    Discusses issues of database security and integrity, including computer crime and vandalism, human error, computer viruses, employee and user access, and personnel policies. Suggests some precautions to minimize system vulnerability such as careful personnel screening, audit systems, passwords, and building and software security systems. (JKP)

  7. Reverse screening methods to search for the protein targets of chemopreventive compounds

    NASA Astrophysics Data System (ADS)

    Huang, Hongbin; Zhang, Guigui; Zhou, Yuquan; Lin, Chenru; Chen, Suling; Lin, Yutong; Mai, Shangkang; Huang, Zunnan

    2018-05-01

    This article is a systematic review of reverse screening methods used to search for the protein targets of chemopreventive compounds or drugs. Typical chemopreventive compounds include components of traditional Chinese medicine, natural compounds and Food and Drug Administration (FDA)-approved drugs. Such compounds are somewhat selective but are predisposed to bind multiple protein targets distributed throughout diverse signaling pathways in human cells. In contrast to conventional virtual screening, which identifies the ligands of a targeted protein from a compound database, reverse screening is used to identify the potential targets or unintended targets of a given compound from a large number of receptors by examining their known ligands or crystal structures. This method, also known as in silico or computational target fishing, is highly valuable for discovering the target receptors of query molecules from terrestrial or marine natural products, exploring the molecular mechanisms of chemopreventive compounds, finding alternative indications of existing drugs by drug repositioning, and detecting adverse drug reactions and drug toxicity. Reverse screening can be divided into three major groups: shape screening, pharmacophore screening and reverse docking. Several large software packages, such as Schrödinger and Discovery Studio; typical software/network services such as ChemMapper, PharmMapper, idTarget and INVDOCK; and practical databases of known target ligands and receptor crystal structures, such as ChEMBL, BindingDB and the Protein Data Bank (PDB), are available for use in these computational methods. Different programs, online services and databases have different applications and constraints. Here, we conducted a systematic analysis and multilevel classification of the computational programs, online services and compound libraries available for shape screening, pharmacophore screening and reverse docking to enable non-specialist users to quickly learn and grasp the types of calculations used in protein target fishing. In addition, we review the main features of these methods, programs and databases and provide a variety of examples illustrating the application of one or a combination of reverse screening methods for accurate target prediction.
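
Of the three method groups, reverse docking is the most direct to sketch: score one query compound against a panel of receptors and rank the candidate targets. In the toy sketch below, `mock_dock` stands in for a real docking engine, and the receptor names and scores are invented for illustration.

```python
# Minimal sketch of reverse docking ("target fishing"): one compound,
# many receptors, rank targets by predicted binding strength.
# `mock_dock` is a placeholder for a real docking engine.

def mock_dock(compound, receptor):
    """A real engine would return an estimated binding energy (kcal/mol);
    here we just look up a precomputed mock score."""
    return receptor["scores"].get(compound, 0.0)


def reverse_dock(compound, receptors, top_n=3):
    """Return the top_n receptors ranked by ascending (mock) score;
    more negative = stronger predicted binding."""
    ranked = sorted(
        ((r["name"], mock_dock(compound, r)) for r in receptors),
        key=lambda pair: pair[1],
    )
    return ranked[:top_n]


# invented receptor panel and scores
panel = [
    {"name": "COX-2", "scores": {"curcumin": -9.1}},
    {"name": "EGFR", "scores": {"curcumin": -7.4}},
    {"name": "HSA", "scores": {}},
]
targets = reverse_dock("curcumin", panel, top_n=2)
```

Shape and pharmacophore screening follow the same loop structure, with the scoring function replaced by a shape-overlap or feature-match measure.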

  8. Reverse Screening Methods to Search for the Protein Targets of Chemopreventive Compounds.

    PubMed

    Huang, Hongbin; Zhang, Guigui; Zhou, Yuquan; Lin, Chenru; Chen, Suling; Lin, Yutong; Mai, Shangkang; Huang, Zunnan

    2018-01-01

    This article is a systematic review of reverse screening methods used to search for the protein targets of chemopreventive compounds or drugs. Typical chemopreventive compounds include components of traditional Chinese medicine, natural compounds and Food and Drug Administration (FDA)-approved drugs. Such compounds are somewhat selective but are predisposed to bind multiple protein targets distributed throughout diverse signaling pathways in human cells. In contrast to conventional virtual screening, which identifies the ligands of a targeted protein from a compound database, reverse screening is used to identify the potential targets or unintended targets of a given compound from a large number of receptors by examining their known ligands or crystal structures. This method, also known as in silico or computational target fishing, is highly valuable for discovering the target receptors of query molecules from terrestrial or marine natural products, exploring the molecular mechanisms of chemopreventive compounds, finding alternative indications of existing drugs by drug repositioning, and detecting adverse drug reactions and drug toxicity. Reverse screening can be divided into three major groups: shape screening, pharmacophore screening and reverse docking. Several large software packages, such as Schrödinger and Discovery Studio; typical software/network services such as ChemMapper, PharmMapper, idTarget, and INVDOCK; and practical databases of known target ligands and receptor crystal structures, such as ChEMBL, BindingDB, and the Protein Data Bank (PDB), are available for use in these computational methods. Different programs, online services and databases have different applications and constraints. Here, we conducted a systematic analysis and multilevel classification of the computational programs, online services and compound libraries available for shape screening, pharmacophore screening and reverse docking to enable non-specialist users to quickly learn and grasp the types of calculations used in protein target fishing. In addition, we review the main features of these methods, programs and databases and provide a variety of examples illustrating the application of one or a combination of reverse screening methods for accurate target prediction.

  9. Reverse Screening Methods to Search for the Protein Targets of Chemopreventive Compounds

    PubMed Central

    Huang, Hongbin; Zhang, Guigui; Zhou, Yuquan; Lin, Chenru; Chen, Suling; Lin, Yutong; Mai, Shangkang; Huang, Zunnan

    2018-01-01

    This article is a systematic review of reverse screening methods used to search for the protein targets of chemopreventive compounds or drugs. Typical chemopreventive compounds include components of traditional Chinese medicine, natural compounds and Food and Drug Administration (FDA)-approved drugs. Such compounds are somewhat selective but are predisposed to bind multiple protein targets distributed throughout diverse signaling pathways in human cells. In contrast to conventional virtual screening, which identifies the ligands of a targeted protein from a compound database, reverse screening is used to identify the potential targets or unintended targets of a given compound from a large number of receptors by examining their known ligands or crystal structures. This method, also known as in silico or computational target fishing, is highly valuable for discovering the target receptors of query molecules from terrestrial or marine natural products, exploring the molecular mechanisms of chemopreventive compounds, finding alternative indications of existing drugs by drug repositioning, and detecting adverse drug reactions and drug toxicity. Reverse screening can be divided into three major groups: shape screening, pharmacophore screening and reverse docking. Several large software packages, such as Schrödinger and Discovery Studio; typical software/network services such as ChemMapper, PharmMapper, idTarget, and INVDOCK; and practical databases of known target ligands and receptor crystal structures, such as ChEMBL, BindingDB, and the Protein Data Bank (PDB), are available for use in these computational methods. Different programs, online services and databases have different applications and constraints. Here, we conducted a systematic analysis and multilevel classification of the computational programs, online services and compound libraries available for shape screening, pharmacophore screening and reverse docking to enable non-specialist users to quickly learn and grasp the types of calculations used in protein target fishing. In addition, we review the main features of these methods, programs and databases and provide a variety of examples illustrating the application of one or a combination of reverse screening methods for accurate target prediction. PMID:29868550

  10. Building Parts Inventory Files Using the AppleWorks Data Base Subprogram and Apple IIe or GS Computers.

    ERIC Educational Resources Information Center

    Schlenker, Richard M.

    This manual is a "how to" training device for building database files using the AppleWorks program with an Apple IIe or Apple IIGS Computer with Duodisk or two disk drives and an 80-column card. The manual provides step-by-step directions, and includes 25 figures depicting the computer screen at the various stages of the database file…

  11. Innovations: clinical computing: an audio computer-assisted self-interviewing system for research and screening in public mental health settings.

    PubMed

    Bertollo, David N; Alexander, Mary Jane; Shinn, Marybeth; Aybar, Jalila B

    2007-06-01

    This column describes the nonproprietary software Talker, used to adapt screening instruments to audio computer-assisted self-interviewing (ACASI) systems for low-literacy and other populations. Talker supports ease of programming, multiple languages, on-site scoring, and the ability to update a central research database. Key features include highly readable text display, audio presentation of questions and audio prompting of answers, and optional touch screen input. The scripting language for adapting instruments is briefly described, as are two studies in which respondents provided positive feedback on its use.

  12. Computer-aided diagnosis workstation and database system for chest diagnosis based on multi-helical CT images

    NASA Astrophysics Data System (ADS)

    Satoh, Hitoshi; Niki, Noboru; Mori, Kiyoshi; Eguchi, Kenji; Kaneko, Masahiro; Kakinuma, Ryutarou; Moriyama, Noriyuki; Ohmatsu, Hironobu; Masuda, Hideo; Machida, Suguru; Sasagawa, Michizou

    2006-03-01

    Multi-helical CT scanners have remarkably increased the speed at which chest CT images are acquired for mass screening. Mass screening based on multi-helical CT images requires a considerable number of images to be read. It is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images and a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification. We have also developed an electronic medical recording system and a prototype internet system for community health across two or more regions, using a Virtual Private Network router together with biometric fingerprint and face authentication systems to safeguard medical information. Based on these diagnostic assistance methods, we have now developed a new computer-aided workstation and database that can display suspected lesions three-dimensionally in a short time. This paper describes basic studies that have been conducted to evaluate this new system. The results of this study indicate that our computer-aided diagnosis workstation and network system can increase diagnostic speed, diagnostic accuracy, and the safety of medical information.

  13. Developing science gateways for drug discovery in a grid environment.

    PubMed

    Pérez-Sánchez, Horacio; Rezaei, Vahid; Mezhuyev, Vitaliy; Man, Duhu; Peña-García, Jorge; den-Haan, Helena; Gesing, Sandra

    2016-01-01

    Methods for in silico screening of large databases of molecules increasingly complement and replace experimental techniques to discover novel compounds to combat diseases. As these techniques become more complex and computationally costly, we face a growing challenge in providing the life sciences research community with a convenient tool for high-throughput virtual screening on distributed computing resources. To this end, we recently integrated the biophysics-based drug-screening program FlexScreen into a service, applicable for large-scale parallel screening and reusable in the context of scientific workflows. Our implementation is based on Pipeline Pilot and the Simple Object Access Protocol and provides an easy-to-use graphical user interface to construct complex workflows, which can be executed on distributed computing resources, thus accelerating the throughput by several orders of magnitude.

  14. Needs assessment for next generation computer-aided mammography reference image databases and evaluation studies.

    PubMed

    Horsch, Alexander; Hapfelmeier, Alexander; Elter, Matthias

    2011-11-01

    Breast cancer is globally a major threat for women's health. Screening and adequate follow-up can significantly reduce the mortality from breast cancer. Human second reading of screening mammograms can increase breast cancer detection rates, whereas this has not been proven for current computer-aided detection systems as "second reader". Critical factors include the detection accuracy of the systems and the screening experience and training of the radiologist with the system. When assessing the performance of systems and system components, the choice of evaluation methods is particularly critical. Core assets herein are reference image databases and statistical methods. We have analyzed characteristics and usage of the currently largest publicly available mammography database, the Digital Database for Screening Mammography (DDSM) from the University of South Florida, in literature indexed in Medline, IEEE Xplore, SpringerLink, and SPIE, with respect to type of computer-aided diagnosis (CAD) (detection, CADe, or diagnostics, CADx), selection of database subsets, choice of evaluation method, and quality of descriptions. 59 publications presenting 106 evaluation studies met our selection criteria. In 54 studies (50.9%), the selection of test items (cases, images, regions of interest) extracted from the DDSM was not reproducible. Only 2 CADx studies, not any CADe studies, used the entire DDSM. The number of test items varies from 100 to 6000. Different statistical evaluation methods are chosen. Most common are train/test (34.9% of the studies), leave-one-out (23.6%), and N-fold cross-validation (18.9%). Database-related terminology tends to be imprecise or ambiguous, especially regarding the term "case". Overall, both the use of the DDSM as data source for evaluation of mammography CAD systems, and the application of statistical evaluation methods were found highly diverse. Results reported from different studies are therefore hardly comparable. 
    Drawbacks of the DDSM (e.g. varying quality of lesion annotations) may contribute to the reasons, but a larger bias seems to be caused by authors' own decisions on study design. For future evaluation studies, we derive a set of 13 recommendations concerning the construction and usage of a test database, as well as the application of statistical evaluation methods.
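
The evaluation protocols tallied in the study (train/test, leave-one-out, N-fold cross-validation) differ only in how test items are partitioned. As a plain-Python illustration on invented case identifiers, an N-fold partition can be sketched as:

```python
# Sketch of an N-fold cross-validation partition over case identifiers.
# No statistics library is used; the case IDs are invented. Leave-one-out
# is the special case n == len(items).

def n_fold_splits(items, n):
    """Yield (train, test) partitions for N-fold cross-validation."""
    folds = [items[i::n] for i in range(n)]  # round-robin assignment
    for i, test in enumerate(folds):
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, test


cases = [f"case{i:03d}" for i in range(10)]
splits = list(n_fold_splits(cases, 5))
```

Reporting the exact partition (or the seed used to produce it) is precisely the kind of detail whose absence made half of the surveyed studies non-reproducible.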

  15. Database Dictionary for Ethiopian National Ground-Water DAtabase (ENGDA) Data Fields

    USGS Publications Warehouse

    Kuniansky, Eve L.; Litke, David W.; Tucci, Patrick

    2007-01-01

    Introduction This document describes the data fields that are used for both field forms and the Ethiopian National Ground-water Database (ENGDA) tables associated with information stored about production wells, springs, test holes, test wells, and water level or water-quality observation wells. Several different words are used in this database dictionary and in the ENGDA database to describe a narrow shaft constructed in the ground. The most general term is borehole, which is applicable to any type of hole. A well is a borehole specifically constructed to extract water from the ground; however, for this data dictionary and for the ENGDA database, the words well and borehole are used interchangeably. A production well is defined as any well used for water supply and includes hand-dug wells, small-diameter bored wells equipped with hand pumps, or large-diameter bored wells equipped with large-capacity motorized pumps. Test holes are borings made to collect information about the subsurface with continuous core or non-continuous core and/or where geophysical logs are collected. Test holes are not converted into wells. A test well is a well constructed for hydraulic testing of an aquifer in order to plan a larger ground-water production system. A water-level or water-quality observation well is a well that is used to collect information about an aquifer and not used for water supply. A spring is any naturally flowing, local, ground-water discharge site. The database dictionary is designed to help define all fields on both field data collection forms (provided in attachment 2 of this report) and for the ENGDA software screen entry forms (described in Litke, 2007). The data entered into each screen entry field are stored in relational database tables within the computer database. The organization of the database dictionary is designed based on field data collection and the field forms, because this is what the majority of people will use. 
After each field, however, the ENGDA database field name and relational database table are designated, along with the ENGDA screen entry form(s) and the ENGDA field form (attachment 2). The database dictionary is separated into sections. The first section, Basic Site Data Fields, describes the basic site information that is similar for all of the different types of sites. The remaining sections may be applicable to only one type of site; for example, the Well Drilling and Construction Data Fields and Lithologic Description Data Fields are applicable to boreholes and not to springs. Attachment 1 contains a table for conversion from English to metric units. Attachment 2 contains selected field forms used in conjunction with ENGDA. A separate document, 'Users Reference Manual for the Ethiopian National Ground-Water DAtabase (ENGDA),' by David W. Litke, was developed as a users guide for the computer database and screen entry. This database dictionary serves as a reference for both the field forms and the computer database. Every effort has been made to have identical field names between the field forms and the screen entry forms in order to avoid confusion.

  16. Public Databases Supporting Computational Toxicology

    EPA Science Inventory

    A major goal of the emerging field of computational toxicology is the development of screening-level models that predict potential toxicity of chemicals from a combination of mechanistic in vitro assay data and chemical structure descriptors. In order to build these models, resea...

  17. How to benchmark methods for structure-based virtual screening of large compound libraries.

    PubMed

    Christofferson, Andrew J; Huang, Niu

    2012-01-01

    Structure-based virtual screening is a useful computational technique for ligand discovery. To systematically evaluate different docking approaches, it is important to have a consistent benchmarking protocol that is both relevant and unbiased. Here, we describe the design of a benchmarking data set for docking screen assessment, a standard docking screening process, and the analysis and presentation of the enrichment of annotated ligands among a background decoy database.
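
The enrichment analysis mentioned here has a standard form: how over-represented are annotated ligands in the top fraction of a ranked ligand+decoy list, relative to random selection. A minimal sketch (with invented compound IDs) is:

```python
# Minimal enrichment-factor calculation for a docking screen benchmark.
# ranked_ids: compound IDs sorted best-score-first; actives: the set of
# annotated ligands seeded among the decoys. IDs below are illustrative.

def enrichment_factor(ranked_ids, actives, top_frac=0.01):
    """EF = (hit rate in the top fraction) / (hit rate in the whole list)."""
    n_top = max(1, int(len(ranked_ids) * top_frac))
    hits = sum(1 for cid in ranked_ids[:n_top] if cid in actives)
    return (hits / n_top) / (len(actives) / len(ranked_ids))


# 2 actives ranked on top of 98 decoys: perfect early enrichment
ranked = ["lig1", "lig2"] + [f"decoy{i}" for i in range(98)]
ef = enrichment_factor(ranked, {"lig1", "lig2"}, top_frac=0.02)
```

An EF of 1 means the docking ranks actives no better than chance; benchmark papers typically report EF at 1% or 2% of the database.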

  18. U.S. EPA computational toxicology programs: Central role of chemical-annotation efforts and molecular databases

    EPA Science Inventory

    EPA’s National Center for Computational Toxicology is engaged in high-profile research efforts to improve the ability to more efficiently and effectively prioritize and screen thousands of environmental chemicals for potential toxicity. A central component of these efforts invol...

  19. Distributed databases for materials study of thermo-kinetic properties

    NASA Astrophysics Data System (ADS)

    Toher, Cormac

    2015-03-01

    High-throughput computational materials science provides researchers with the opportunity to rapidly generate large databases of materials properties. To rapidly add thermal properties to the AFLOWLIB consortium and Materials Project repositories, we have implemented an automated quasi-harmonic Debye model, the Automatic GIBBS Library (AGL). This enables us to screen thousands of materials for thermal conductivity, bulk modulus, thermal expansion and related properties. The search and sort functions of the online database can then be used to identify suitable materials for more in-depth study using more precise computational or experimental techniques. The AFLOW-AGL source code is public domain and will soon be released under the GNU GPL license.
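
The "search and sort" step described here amounts to filtering computed entries by a property bound and ranking the survivors. The sketch below illustrates the pattern; the field names and values are invented, not the actual AFLOWLIB/AGL schema.

```python
# Sketch of property-based screening over a computed-materials table:
# keep entries within a bound, then sort for follow-up study.
# Field names and values are illustrative placeholders.

def screen(entries, prop, max_value):
    """Return entries with entry[prop] <= max_value, ranked ascending."""
    hits = [e for e in entries if e[prop] <= max_value]
    return sorted(hits, key=lambda e: e[prop])


materials = [
    {"formula": "Si", "thermal_conductivity": 150.0},
    {"formula": "SnSe", "thermal_conductivity": 0.7},
    {"formula": "PbTe", "thermal_conductivity": 2.2},
]
low_kappa = screen(materials, "thermal_conductivity", 5.0)
```

In practice the same filter runs server-side over thousands of AGL entries, and the short list is handed off to more precise calculations or experiments.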

  20. Search for β2 Adrenergic Receptor Ligands by Virtual Screening via Grid Computing and Investigation of Binding Modes by Docking and Molecular Dynamics Simulations

    PubMed Central

    Bai, Qifeng; Shao, Yonghua; Pan, Dabo; Zhang, Yang; Liu, Huanxiang; Yao, Xiaojun

    2014-01-01

    We designed a program called MolGridCal that can be used to screen small-molecule databases via grid computing based on the JPPF grid environment. Based on MolGridCal, we proposed an integrated strategy for virtual screening and binding mode investigation by combining molecular docking, molecular dynamics (MD) simulations and free energy calculations. To test the effectiveness of MolGridCal, we screened potential ligands for the β2 adrenergic receptor (β2AR) from a database containing 50,000 small molecules. MolGridCal can not only send tasks to the grid server automatically, but can also distribute tasks using the screensaver function. As for the results of virtual screening, the known agonist BI-167107 of β2AR is ranked among the top 2% of the screened candidates, indicating that MolGridCal can give reasonable results. To further study the binding mode and refine the results of MolGridCal, more accurate docking and scoring methods are used to estimate the binding affinity for the top three molecules (agonist BI-167107, neutral antagonist alprenolol and inverse agonist ICI 118,551). The results indicate that agonist BI-167107 has the best binding affinity. MD simulations and free energy calculations are employed to investigate the dynamic interaction mechanism between the ligands and β2AR. The results show that the agonist BI-167107 also has the lowest binding free energy. This study can provide a new way to perform virtual screening effectively by integrating molecular docking based on grid computing, MD simulations and free energy calculations. The source codes of MolGridCal are freely available at http://molgridcal.codeplex.com. PMID:25229694
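
The sanity check reported here (the known agonist landing in the top 2% of the ranked list) is a percentile-rank computation. A small sketch, with invented compound IDs and scores standing in for the real 50,000-molecule screen:

```python
# Sketch of a percentile-rank sanity check for a scored screening list:
# where does a known ligand fall when candidates are ranked by docking
# score? IDs and scores below are illustrative, not the study's data.

def rank_percentile(scored, compound_id):
    """Percentile position (0-100, lower is better) of compound_id in a
    list of (id, score) pairs, ranking by ascending (more negative) score."""
    ordered = sorted(scored, key=lambda pair: pair[1])
    position = [cid for cid, _ in ordered].index(compound_id) + 1
    return 100.0 * position / len(ordered)


# one strongly scoring known ligand among 49 invented candidates
scored = [("known_agonist", -12.0)] + [(f"cmpd{i}", -i / 10) for i in range(49)]
pct = rank_percentile(scored, "known_agonist")
```

A low percentile for a known binder suggests the scoring pipeline ranks true ligands early, the same argument the authors make for BI-167107.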

  1. Building a virtual ligand screening pipeline using free software: a survey.

    PubMed

    Glaab, Enrico

    2016-03-01

    Virtual screening, the search for bioactive compounds via computational methods, provides a wide range of opportunities to speed up drug development and reduce the associated risks and costs. While virtual screening is already a standard practice in pharmaceutical companies, its applications in preclinical academic research still remain under-exploited, in spite of an increasing availability of dedicated free databases and software tools. In this survey, an overview of recent developments in this field is presented, focusing on free software and data repositories for screening as alternatives to their commercial counterparts, and outlining how available resources can be interlinked into a comprehensive virtual screening pipeline using typical academic computing facilities. Finally, to facilitate the set-up of corresponding pipelines, a downloadable software system is provided, using platform virtualization to integrate pre-installed screening tools and scripts for reproducible application across different operating systems. © The Author 2015. Published by Oxford University Press.

  2. Building a virtual ligand screening pipeline using free software: a survey

    PubMed Central

    2016-01-01

    Virtual screening, the search for bioactive compounds via computational methods, provides a wide range of opportunities to speed up drug development and reduce the associated risks and costs. While virtual screening is already a standard practice in pharmaceutical companies, its applications in preclinical academic research still remain under-exploited, in spite of an increasing availability of dedicated free databases and software tools. In this survey, an overview of recent developments in this field is presented, focusing on free software and data repositories for screening as alternatives to their commercial counterparts, and outlining how available resources can be interlinked into a comprehensive virtual screening pipeline using typical academic computing facilities. Finally, to facilitate the set-up of corresponding pipelines, a downloadable software system is provided, using platform virtualization to integrate pre-installed screening tools and scripts for reproducible application across different operating systems. PMID:26094053

  3. Privacy-preserving search for chemical compound databases.

    PubMed

    Shimizu, Kana; Nuida, Koji; Arai, Hiromi; Mitsunari, Shigeo; Attrapadung, Nuttapong; Hamada, Michiaki; Tsuda, Koji; Hirokawa, Takatsugu; Sakuma, Jun; Hanaoka, Goichiro; Asai, Kiyoshi

    2015-01-01

    Searching for similar compounds in a database is the most important process for in-silico drug screening. Since a query compound is an important starting point for the new drug, a query holder, who is afraid of the query being monitored by the database server, usually downloads all the records in the database and uses them in a closed network. However, a serious dilemma arises when the database holder also wants to output no information except for the search results, and such a dilemma prevents the use of many important data resources. In order to overcome this dilemma, we developed a novel cryptographic protocol that enables database searching while keeping both the query holder's privacy and database holder's privacy. Generally, the application of cryptographic techniques to practical problems is difficult because versatile techniques are computationally expensive while computationally inexpensive techniques can perform only trivial computation tasks. In this study, our protocol is successfully built only from an additive-homomorphic cryptosystem, which allows only addition performed on encrypted values but is computationally efficient compared with versatile techniques such as general purpose multi-party computation. In an experiment searching ChEMBL, which consists of more than 1,200,000 compounds, the proposed method was 36,900 times faster in CPU time and 12,000 times as efficient in communication size compared with general purpose multi-party computation. We proposed a novel privacy-preserving protocol for searching chemical compound databases. The proposed method, easily scaling for large-scale databases, may help to accelerate drug discovery research by making full use of unused but valuable data that includes sensitive information.
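The additive-homomorphic property the protocol relies on can be illustrated with a toy Paillier-style cryptosystem. The sketch below is not the authors' protocol and uses deliberately tiny key sizes that are insecure for real use; it only shows how multiplying ciphertexts adds the underlying plaintexts, so a server can combine encrypted values without seeing them.

```python
# Minimal Paillier-style additive-homomorphic sketch (toy key sizes,
# NOT the authors' protocol and NOT secure for real use).
import math
import random

def keygen(p=1789, q=1861):
    # Toy primes; a real deployment would use >= 2048-bit moduli.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                      # standard simple choice of generator
    mu = pow(lam, -1, n)           # modular inverse of lambda mod n
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    while True:
        r = random.randrange(2, n)         # random blinding factor
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    u = pow(c, lam, n * n)
    return ((u - 1) // n) * mu % n

def add_encrypted(pub, c1, c2):
    # Multiplying ciphertexts adds the underlying plaintexts.
    n, _ = pub
    return (c1 * c2) % (n * n)

pub, priv = keygen()
c = add_encrypted(pub, encrypt(pub, 12), encrypt(pub, 30))
print(decrypt(pub, priv, c))  # 42: the server never saw 12 or 30
```

Because encryption is randomized, the same plaintext yields different ciphertexts each time, which is part of what lets the protocol hide the query from the database server.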

  4. Privacy-preserving search for chemical compound databases

    PubMed Central

    2015-01-01

    Background Searching for similar compounds in a database is the most important process for in-silico drug screening. Since a query compound is an important starting point for the new drug, a query holder, who is afraid of the query being monitored by the database server, usually downloads all the records in the database and uses them in a closed network. However, a serious dilemma arises when the database holder also wants to output no information except for the search results, and such a dilemma prevents the use of many important data resources. Results In order to overcome this dilemma, we developed a novel cryptographic protocol that enables database searching while keeping both the query holder's privacy and database holder's privacy. Generally, the application of cryptographic techniques to practical problems is difficult because versatile techniques are computationally expensive while computationally inexpensive techniques can perform only trivial computation tasks. In this study, our protocol is successfully built only from an additive-homomorphic cryptosystem, which allows only addition performed on encrypted values but is computationally efficient compared with versatile techniques such as general purpose multi-party computation. In an experiment searching ChEMBL, which consists of more than 1,200,000 compounds, the proposed method was 36,900 times faster in CPU time and 12,000 times as efficient in communication size compared with general purpose multi-party computation. Conclusion We proposed a novel privacy-preserving protocol for searching chemical compound databases. The proposed method, easily scaling for large-scale databases, may help to accelerate drug discovery research by making full use of unused but valuable data that includes sensitive information. PMID:26678650

  5. CamMedNP: building the Cameroonian 3D structural natural products database for virtual screening.

    PubMed

    Ntie-Kang, Fidele; Mbah, James A; Mbaze, Luc Meva'a; Lifongo, Lydia L; Scharfe, Michael; Hanna, Joelle Ngo; Cho-Ngwa, Fidelis; Onguéné, Pascal Amoa; Owono Owono, Luc C; Megnassan, Eugene; Sippl, Wolfgang; Efange, Simon M N

    2013-04-16

Computer-aided drug design (CADD) often involves virtual screening (VS) of large compound datasets, whose availability is vital for drug discovery protocols. We present CamMedNP, a new database beginning with more than 2,500 compounds of natural origin, along with some of their derivatives obtained through hemisynthesis. These are pure compounds that have been previously isolated, characterized using modern spectroscopic methods, and published by several research teams spread across Cameroon. In the present study, 224 distinct medicinal plant species belonging to 55 plant families from the Cameroonian flora have been considered. About 80% of these have been previously published and/or referenced in internationally recognized journals. For each compound, the optimized 3D structure, drug-like properties, plant source, collection site and currently known biological activities are given, as well as literature references. We have evaluated the "drug-likeness" of this database using Lipinski's "Rule of Five". A diversity analysis has been carried out in comparison with the ChemBridge diverse database. CamMedNP could be highly useful for database screening and natural product lead generation programs.
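Lipinski's "Rule of Five" used for the drug-likeness evaluation above can be sketched as a simple filter over pre-computed descriptors. The property values below are invented for illustration and are not actual CamMedNP records.

```python
# Hedged sketch: applying Lipinski's "Rule of Five" to pre-computed
# molecular descriptors; the library entries are hypothetical.
def passes_rule_of_five(mol):
    violations = sum([
        mol["mol_weight"] > 500,     # molecular weight <= 500 Da
        mol["logp"] > 5,             # octanol-water logP <= 5
        mol["h_donors"] > 5,         # <= 5 hydrogen-bond donors
        mol["h_acceptors"] > 10,     # <= 10 hydrogen-bond acceptors
    ])
    return violations <= 1           # one violation is commonly tolerated

library = [
    {"name": "cmpd_A", "mol_weight": 312.4, "logp": 2.1,
     "h_donors": 2, "h_acceptors": 5},
    {"name": "cmpd_B", "mol_weight": 689.8, "logp": 6.3,
     "h_donors": 7, "h_acceptors": 12},
]
drug_like = [m["name"] for m in library if passes_rule_of_five(m)]
print(drug_like)  # ['cmpd_A']
```

In practice the descriptors would be computed by a cheminformatics toolkit from each structure rather than entered by hand.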

  6. Exploiting PubChem for Virtual Screening

    PubMed Central

    Xie, Xiang-Qun

    2011-01-01

Importance of the field PubChem is a public molecular information repository, a scientific showcase of the NIH Roadmap Initiative. The PubChem database holds over 27 million records of unique chemical structures of compounds (CID) derived from nearly 70 million substance depositions (SID), and contains more than 449,000 bioassay records, covering thousands of established in vitro biochemical and cell-based screening bioassays that target more than 7,000 proteins and genes and link to over 1.8 million substances. Areas covered in this review This review builds on recent PubChem-related computational chemistry research reported by other authors while providing readers with an overview of the PubChem database, focusing on its increasing role in cheminformatics, virtual screening and toxicity prediction modeling. What the reader will gain These publicly available datasets in PubChem provide great opportunities for scientists to perform cheminformatics and virtual screening research for computer-aided drug design. However, the high volume and complexity of the datasets, in particular the bioassay-associated false positives/negatives and the highly imbalanced datasets in PubChem, also create major challenges. Several approaches to modeling PubChem datasets and developing virtual screening models for bioactivity and toxicity predictions are also reviewed. Take home message Novel data-mining cheminformatics tools and virtual screening algorithms are being developed and used to retrieve, annotate and analyze the large-scale and highly complex PubChem biological screening data for drug design. PMID:21691435

  7. Molecular scaffold analysis of natural products databases in the public domain.

    PubMed

    Yongye, Austin B; Waddell, Jacob; Medina-Franco, José L

    2012-11-01

    Natural products represent important sources of bioactive compounds in drug discovery efforts. In this work, we compiled five natural products databases available in the public domain and performed a comprehensive chemoinformatic analysis focused on the content and diversity of the scaffolds with an overview of the diversity based on molecular fingerprints. The natural products databases were compared with each other and with a set of molecules obtained from in-house combinatorial libraries, and with a general screening commercial library. It was found that publicly available natural products databases have different scaffold diversity. In contrast to the common concept that larger libraries have the largest scaffold diversity, the largest natural products collection analyzed in this work was not the most diverse. The general screening library showed, overall, the highest scaffold diversity. However, considering the most frequent scaffolds, the general reference library was the least diverse. In general, natural products databases in the public domain showed low molecule overlap. In addition to benzene and acyclic compounds, flavones, coumarins, and flavanones were identified as the most frequent molecular scaffolds across the different natural products collections. The results of this work have direct implications in the computational and experimental screening of natural product databases for drug discovery. © 2012 John Wiley & Sons A/S.
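The scaffold-frequency and diversity analysis described above can be sketched with standard-library counting. The scaffold SMILES strings below are hypothetical stand-ins for Bemis-Murcko scaffolds that would be computed elsewhere by a cheminformatics toolkit.

```python
# Illustrative scaffold-frequency analysis; the scaffold strings are
# invented examples, not data from the study above.
from collections import Counter

def scaffold_diversity(scaffolds):
    """Fraction of unique scaffolds per molecule: 1.0 = maximally diverse."""
    return len(set(scaffolds)) / len(scaffolds)

# One scaffold string per molecule in a hypothetical collection.
db = ["c1ccccc1", "c1ccccc1", "c1ccc2occc2c1", "C1CCCCC1", "c1ccccc1"]

print(Counter(db).most_common(1))        # [('c1ccccc1', 3)]: benzene dominates
print(round(scaffold_diversity(db), 2))  # 0.6
```

Comparing this per-molecule diversity ratio across collections mirrors the paper's observation that the largest library need not be the most scaffold-diverse.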

  8. Smartphone home monitoring of ECG

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Hsu, Charles; Moon, Gyu; Landa, Joseph; Nakajima, Hiroshi; Hata, Yutaka

    2012-06-01

Ambulatory Holter electrocardiography (ECG) monitoring systems that record and transmit heartbeat data over the Internet are already commercially available. However, they enjoy only qualified confidence and thus limited market penetration. Our system instead targets aging global villagers with growing biomedical wellness (BMW) home-care needs, rather than hospital-related biomedical illness (BMI). It was designed within SWaP-C (Size, Weight, Power, and Cost) constraints using three innovative modules: (i) Smart Electrode (low-power mixed-signal circuitry embedding modern compressive sensing and nanotechnology to improve the electrodes' contact impedance); (ii) Learnable Database (adaptive wavelet-transform QRST feature extraction and an SQL relational database that makes home-care monitoring records retrievable for aided target recognition); (iii) Smartphone (touch-screen interface, powerful computation capability, and caretaker reporting with GPS, ID, and a patient panic button triggering a programmable emergency procedure). The system can provide supplementary home screening for pre- or post-diagnosis care, with a built-in database searchable by the time, the place, and the degree of urgency of each event, using in-situ screening.

  9. HyperCard to SPSS: improving data integrity.

    PubMed

    Gostel, R

    1993-01-01

    This article describes a database design that captures responses in a HyperCard stack and moves the data to SPSS for the Macintosh without the need to rekey data. Pregnant women used an interactive computer application with a touch screen to answer questions and receive educational information about fetal alcohol syndrome. A database design was created to capture survey responses through interaction with a computer by a sample of prenatal women during formative evaluation trials. The author does not compare this method of data collection to other methods. This article simply describes the method of data collection as a useful research tool.

  10. Information management and analysis system for groundwater data in Thailand

    NASA Astrophysics Data System (ADS)

    Gill, D.; Luckananurung, P.

    1992-01-01

    The Ground Water Division of the Thai Department of Mineral Resources maintains a large archive of groundwater data with information on some 50,000 water wells. Each well file contains information on well location, well completion, borehole geology, water levels, water quality, and pumping tests. In order to enable efficient use of this information a computer-based system for information management and analysis was created. The project was sponsored by the United Nations Development Program and the Thai Department of Mineral Resources. The system was designed to serve users who lack prior training in automated data processing. Access is through a friendly user/system dialogue. Tasks are segmented into a number of logical steps, each of which is managed by a separate screen. Selective retrieval is possible by four different methods of area definition and by compliance with user-specified constraints on any combination of database variables. The main types of outputs are: (1) files of retrieved data, screened according to users' specifications; (2) an assortment of pre-formatted reports; (3) computed geochemical parameters and various diagrams of water chemistry derived therefrom; (4) bivariate scatter diagrams and linear regression analysis; (5) posting of data and computed results on maps; and (6) hydraulic aquifer characteristics as computed from pumping tests. Data are entered directly from formatted screens. Most records can be copied directly from hand-written documents. The database-management program performs data integrity checks in real time, enabling corrections at the time of input. 
The system software can be grouped into: (1) database administration and maintenance—these functions are carried out by the SIR/DBMS software package; (2) user communication interface for task definition and execution control—the interface is written in the operating system command language (VMS/DCL) and in FORTRAN 77; and (3) scientific data-processing programs, written in FORTRAN 77. The system was implemented on a DEC MicroVAX II computer.
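The selective retrieval by user-specified constraints described above can be illustrated with a modern analogue. The actual system used SIR/DBMS and FORTRAN 77; the schema, well records, and constraint values below are invented for illustration only.

```python
# Sketch of constraint-based selective retrieval over a well archive,
# analogous in spirit to the system described above (hypothetical schema
# and data; the real system used SIR/DBMS, not SQLite).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE wells (
    well_id TEXT, province TEXT, depth_m REAL, tds_mg_l REAL)""")
conn.executemany("INSERT INTO wells VALUES (?, ?, ?, ?)", [
    ("W-001", "Chiang Mai", 45.0, 320.0),
    ("W-002", "Chiang Mai", 120.0, 1450.0),
    ("W-003", "Khon Kaen", 60.0, 800.0),
])

# User-specified constraints: an area plus a water-quality threshold.
rows = conn.execute(
    "SELECT well_id FROM wells WHERE province = ? AND tds_mg_l < ?",
    ("Chiang Mai", 1000.0),
).fetchall()
print([r[0] for r in rows])  # ['W-001']
```

Integrity checks at input time, as the original system performed, would correspond to constraints and validation before the INSERT statements.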

  11. Ligand.Info small-molecule Meta-Database.

    PubMed

    von Grotthuss, Marcin; Koczyk, Grzegorz; Pas, Jakub; Wyrwicz, Lucjan S; Rychlewski, Leszek

    2004-12-01

Ligand.Info is a compilation of various publicly available databases of small molecules. The total size of the Meta-Database is over 1 million entries. The compound records contain calculated three-dimensional coordinates and, in some cases, information about biological activity. Some molecules carry information about FDA drug-approval status or anti-HIV activity. The Meta-Database can be downloaded from the http://Ligand.Info web page. The database can also be screened using a Java-based tool. The tool can interactively cluster sets of molecules on the user side and automatically download similar molecules from the server. The application requires the Java Runtime Environment 1.4 or higher, which can be automatically downloaded from Sun Microsystems or Apple Computer and installed during the first use of Ligand.Info on desktop systems that support Java (MS Windows, Mac OS, Solaris, and Linux). The Ligand.Info Meta-Database can be used for virtual high-throughput screening of new potential drugs. The presented examples showed that, using a known antiviral drug as the query, the system was able to find other antiviral drugs and inhibitors.

  12. Integrative data mining of high-throughput in vitro screens, in vivo data, and disease information to identify Adverse Outcome Pathway (AOP) signatures:ToxCast high-throughput screening data and Comparative Toxicogenomics Database (CTD) as a case study.

    EPA Science Inventory

    The Adverse Outcome Pathway (AOP) framework provides a systematic way to describe linkages between molecular and cellular processes and organism or population level effects. The current AOP assembly methods however, are inefficient. Our goal is to generate computationally-pr...

  13. PubChem BioAssay: A Decade's Development toward Open High-Throughput Screening Data Sharing.

    PubMed

    Wang, Yanli; Cheng, Tiejun; Bryant, Stephen H

    2017-07-01

    High-throughput screening (HTS) is now routinely conducted for drug discovery by both pharmaceutical companies and screening centers at academic institutions and universities. Rapid advance in assay development, robot automation, and computer technology has led to the generation of terabytes of data in screening laboratories. Despite the technology development toward HTS productivity, fewer efforts were devoted to HTS data integration and sharing. As a result, the huge amount of HTS data was rarely made available to the public. To fill this gap, the PubChem BioAssay database ( https://www.ncbi.nlm.nih.gov/pcassay/ ) was set up in 2004 to provide open access to the screening results tested on chemicals and RNAi reagents. With more than 10 years' development and contributions from the community, PubChem has now become the largest public repository for chemical structures and biological data, which provides an information platform to worldwide researchers supporting drug development, medicinal chemistry study, and chemical biology research. This work presents a review of the HTS data content in the PubChem BioAssay database and the progress of data deposition to stimulate knowledge discovery and data sharing. It also provides a description of the database's data standard and basic utilities facilitating information access and use for new users.

  14. Designing Second Generation Anti-Alzheimer Compounds as Inhibitors of Human Acetylcholinesterase: Computational Screening of Synthetic Molecules and Dietary Phytochemicals

    PubMed Central

    Amat-ur-Rasool, Hafsa; Ahmed, Mehboob

    2015-01-01

Alzheimer's disease (AD), a major cause of memory loss, is a progressive neurodegenerative disorder. The disease leads to irreversible loss of neurons that results in a reduced level of the neurotransmitter acetylcholine (ACh). The reduction of the ACh level impairs brain functioning. One aspect of AD therapy is to maintain the ACh level up to a safe limit by blocking acetylcholinesterase (AChE), the enzyme that is naturally responsible for its degradation. This research presents in-silico screening and design of hAChE inhibitors as potential anti-Alzheimer drugs. Molecular docking results of database-retrieved (synthetic chemicals and dietary phytochemicals) and self-drawn ligands were compared with Food and Drug Administration (FDA) approved drugs against AD as controls. Furthermore, computational ADME studies were performed on the hits to assess their safety. Human AChE was found to be the most appropriate target site, as compared to the commonly used Torpedo AChE. Among the tested dietary phytochemicals, berberastine, berberine, yohimbine, sanguinarine, elemol and naringenin are the most noteworthy potential anti-Alzheimer drugs. The synthetic leads were mostly dual-binding-site inhibitors with two binding subunits linked by a carbon chain, i.e., second-generation AD drugs. Fifteen new heterodimers were designed that were computationally more efficient inhibitors than previously reported compounds. Using computational methods, compounds present in online chemical databases can be screened to design more efficient and safer drugs against the cognitive symptoms of AD. PMID:26325402

  15. Designing Second Generation Anti-Alzheimer Compounds as Inhibitors of Human Acetylcholinesterase: Computational Screening of Synthetic Molecules and Dietary Phytochemicals.

    PubMed

    Amat-Ur-Rasool, Hafsa; Ahmed, Mehboob

    2015-01-01

Alzheimer's disease (AD), a major cause of memory loss, is a progressive neurodegenerative disorder. The disease leads to irreversible loss of neurons that results in a reduced level of the neurotransmitter acetylcholine (ACh). The reduction of the ACh level impairs brain functioning. One aspect of AD therapy is to maintain the ACh level up to a safe limit by blocking acetylcholinesterase (AChE), the enzyme that is naturally responsible for its degradation. This research presents in-silico screening and design of hAChE inhibitors as potential anti-Alzheimer drugs. Molecular docking results of database-retrieved (synthetic chemicals and dietary phytochemicals) and self-drawn ligands were compared with Food and Drug Administration (FDA) approved drugs against AD as controls. Furthermore, computational ADME studies were performed on the hits to assess their safety. Human AChE was found to be the most appropriate target site, as compared to the commonly used Torpedo AChE. Among the tested dietary phytochemicals, berberastine, berberine, yohimbine, sanguinarine, elemol and naringenin are the most noteworthy potential anti-Alzheimer drugs. The synthetic leads were mostly dual-binding-site inhibitors with two binding subunits linked by a carbon chain, i.e., second-generation AD drugs. Fifteen new heterodimers were designed that were computationally more efficient inhibitors than previously reported compounds. Using computational methods, compounds present in online chemical databases can be screened to design more efficient and safer drugs against the cognitive symptoms of AD.
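Comparing docking results of candidate ligands against approved-drug controls, as done above, amounts to ranking by predicted binding energy. The scores below (kcal/mol, more negative = tighter predicted binding) are made up for illustration and do not come from the study.

```python
# Illustrative ranking of docking candidates against approved-drug
# controls; all score values are hypothetical.
controls = {"donepezil": -9.1, "galantamine": -8.2}
candidates = {"heterodimer_1": -10.4, "phyto_A": -8.8, "phyto_B": -7.0}

best_control = min(controls.values())   # strongest (most negative) control
better = sorted(name for name, score in candidates.items()
                if score < best_control)
print(better)  # ['heterodimer_1']: outperforms both control drugs
```

A real workflow would also filter these hits through ADME property predictions, as the abstract describes.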

  16. Assessment methodologies and statistical issues for computer-aided diagnosis of lung nodules in computed tomography: contemporary research topics relevant to the lung image database consortium.

    PubMed

    Dodd, Lori E; Wagner, Robert F; Armato, Samuel G; McNitt-Gray, Michael F; Beiden, Sergey; Chan, Heang-Ping; Gur, David; McLennan, Geoffrey; Metz, Charles E; Petrick, Nicholas; Sahiner, Berkman; Sayre, Jim

    2004-04-01

    Cancer of the lung and bronchus is the leading fatal malignancy in the United States. Five-year survival is low, but treatment of early stage disease considerably improves chances of survival. Advances in multidetector-row computed tomography technology provide detection of smaller lung nodules and offer a potentially effective screening tool. The large number of images per exam, however, requires considerable radiologist time for interpretation and is an impediment to clinical throughput. Thus, computer-aided diagnosis (CAD) methods are needed to assist radiologists with their decision making. To promote the development of CAD methods, the National Cancer Institute formed the Lung Image Database Consortium (LIDC). The LIDC is charged with developing the consensus and standards necessary to create an image database of multidetector-row computed tomography lung images as a resource for CAD researchers. To develop such a prospective database, its potential uses must be anticipated. The ultimate applications will influence the information that must be included along with the images, the relevant measures of algorithm performance, and the number of required images. In this article we outline assessment methodologies and statistical issues as they relate to several potential uses of the LIDC database. We review methods for performance assessment and discuss issues of defining "truth" as well as the complications that arise when truth information is not available. We also discuss issues about sizing and populating a database.
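The basic performance measures at issue in CAD assessment can be computed from (truth, prediction) pairs. The toy labels below are invented; they only show how sensitivity and specificity fall out of a confusion matrix.

```python
# Sketch of sensitivity/specificity from a toy set of
# (has_nodule, cad_flagged) pairs; the labels are hypothetical.
def sens_spec(pairs):
    tp = sum(t and p for t, p in pairs)          # true positives
    tn = sum(not t and not p for t, p in pairs)  # true negatives
    fn = sum(t and not p for t, p in pairs)      # missed nodules
    fp = sum(not t and p for t, p in pairs)      # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# Ten hypothetical scans: 5 with nodules, 5 without.
results = ([(True, True)] * 4 + [(True, False)] * 1 +
           [(False, False)] * 4 + [(False, True)] * 1)
sensitivity, specificity = sens_spec(results)
print(sensitivity, specificity)  # 0.8 0.8
```

The article's harder questions, such as what counts as "truth" when no gold standard exists, concern how the first element of each pair is established in the first place.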

  17. Miscellaneous Topics in Computer-Aided Drug Design: Synthetic Accessibility and GPU Computing, and Other Topics.

    PubMed

    Fukunishi, Yoshifumi; Mashimo, Tadaaki; Misoo, Kiyotaka; Wakabayashi, Yoshinori; Miyaki, Toshiaki; Ohta, Seiji; Nakamura, Mayu; Ikeda, Kazuyoshi

    2016-01-01

Computer-aided drug design is still a state-of-the-art process in medicinal chemistry, and the main topics in this field have been extensively studied and well reviewed. These topics include compound databases, ligand-binding pocket prediction, protein-compound docking, virtual screening, target/off-target prediction, physical property prediction, molecular simulation and pharmacokinetics/pharmacodynamics (PK/PD) prediction. However, there are also a number of secondary or miscellaneous topics that have been less well covered. For example, methods for synthesizing and predicting the synthetic accessibility (SA) of designed compounds are important in practical drug development, and hardware/software resources for performing the computations in computer-aided drug design are crucial. Cloud computing and general purpose graphics processing unit (GPGPU) computing have been used in virtual screening and molecular dynamics simulations. Not surprisingly, there is a growing demand for computer systems which combine these resources. In the present review, we summarize and discuss these various topics of drug design.

  18. Miscellaneous Topics in Computer-Aided Drug Design: Synthetic Accessibility and GPU Computing, and Other Topics

    PubMed Central

    Fukunishi, Yoshifumi; Mashimo, Tadaaki; Misoo, Kiyotaka; Wakabayashi, Yoshinori; Miyaki, Toshiaki; Ohta, Seiji; Nakamura, Mayu; Ikeda, Kazuyoshi

    2016-01-01

Background: Computer-aided drug design is still a state-of-the-art process in medicinal chemistry, and the main topics in this field have been extensively studied and well reviewed. These topics include compound databases, ligand-binding pocket prediction, protein-compound docking, virtual screening, target/off-target prediction, physical property prediction, molecular simulation and pharmacokinetics/pharmacodynamics (PK/PD) prediction. Message and Conclusion: However, there are also a number of secondary or miscellaneous topics that have been less well covered. For example, methods for synthesizing and predicting the synthetic accessibility (SA) of designed compounds are important in practical drug development, and hardware/software resources for performing the computations in computer-aided drug design are crucial. Cloud computing and general purpose graphics processing unit (GPGPU) computing have been used in virtual screening and molecular dynamics simulations. Not surprisingly, there is a growing demand for computer systems which combine these resources. In the present review, we summarize and discuss these various topics of drug design. PMID:27075578

  19. Health literacy screening instruments for eHealth applications: a systematic review.

    PubMed

    Collins, Sarah A; Currie, Leanne M; Bakken, Suzanne; Vawdrey, David K; Stone, Patricia W

    2012-06-01

    To systematically review current health literacy (HL) instruments for use in consumer-facing and mobile health information technology screening and evaluation tools. The databases, PubMed, OVID, Google Scholar, Cochrane Library and Science Citation Index, were searched for health literacy assessment instruments using the terms "health", "literacy", "computer-based," and "psychometrics". All instruments identified by this method were critically appraised according to their reported psychometric properties and clinical feasibility. Eleven different health literacy instruments were found. Screening questions, such as asking a patient about his/her need for assistance in navigating health information, were evaluated in seven different studies and are promising for use as a valid, reliable, and feasible computer-based approach to identify patients that struggle with low health literacy. However, there was a lack of consistency in the types of screening questions proposed. There is also a lack of information regarding the psychometric properties of computer-based health literacy instruments. Only English language health literacy assessment instruments were reviewed and analyzed. Current health literacy screening tools demonstrate varying benefits depending on the context of their use. In many cases, it seems that a single screening question may be a reliable, valid, and feasible means for establishing health literacy. A combination of screening questions that assess health literacy and technological literacy may enable tailoring eHealth applications to user needs. Further research should determine the best screening question(s) and the best synthesis of various instruments' content and methodologies for computer-based health literacy screening and assessment. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. Health Literacy Screening Instruments for eHealth Applications: A Systematic Review

    PubMed Central

    Collins, Sarah A.; Currie, Leanne M.; Bakken, Suzanne; Vawdrey, David K.; Stone, Patricia W.

    2012-01-01

    Objective To systematically review current health literacy (HL) instruments for use in consumer-facing and mobile health information technology screening and evaluation tools. Design The databases, PubMed, OVID, Google Scholar, Cochrane Library and Science Citation Index, were searched for health literacy assessment instruments using the terms “health”, “literacy”, “computer-based,” and “psychometrics”. All instruments identified by this method were critically appraised according to their reported psychometric properties and clinical feasibility. Results Eleven different health literacy instruments were found. Screening questions, such as asking a patient about his/her need for assistance in navigating health information, were evaluated in 7 different studies and are promising for use as a valid, reliable, and feasible computer-based approach to identify patients that struggle with low health literacy. However, there was a lack of consistency in the types of screening questions proposed. There is also a lack of information regarding the psychometric properties of computer-based health literacy instruments. Limitations Only English language health literacy assessment instruments were reviewed and analyzed. Conclusions Current health literacy screening tools demonstrate varying benefits depending on the context of their use. In many cases, it seems that a single screening question may be a reliable, valid, and feasible means for establishing health literacy. A combination of screening questions that assess health literacy and technological literacy may enable tailoring eHealth applications to user needs. Further research should determine the best screening question(s) and the best synthesis of various instruments’ content and methodologies for computer-based health literacy screening and assessment. PMID:22521719

  1. Use of a secure Internet Web site for collaborative medical research.

    PubMed

    Marshall, W W; Haley, R W

    2000-10-11

    Researchers who collaborate on clinical research studies from diffuse locations need a convenient, inexpensive, secure way to record and manage data. The Internet, with its World Wide Web, provides a vast network that enables researchers with diverse types of computers and operating systems anywhere in the world to log data through a common interface. Development of a Web site for scientific data collection can be organized into 10 steps, including planning the scientific database, choosing a database management software system, setting up database tables for each collaborator's variables, developing the Web site's screen layout, choosing a middleware software system to tie the database software to the Web site interface, embedding data editing and calculation routines, setting up the database on the central server computer, obtaining a unique Internet address and name for the Web site, applying security measures to the site, and training staff who enter data. Ensuring the security of an Internet database requires limiting the number of people who have access to the server, setting up the server on a stand-alone computer, requiring user-name and password authentication for server and Web site access, installing a firewall computer to prevent break-ins and block bogus information from reaching the server, verifying the identity of the server and client computers with certification from a certificate authority, encrypting information sent between server and client computers to avoid eavesdropping, establishing audit trails to record all accesses into the Web site, and educating Web site users about security techniques. When these measures are carefully undertaken, in our experience, information for scientific studies can be collected and maintained on Internet databases more efficiently and securely than through conventional systems of paper records protected by filing cabinets and locked doors. JAMA. 2000;284:1843-1849.
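The user-name and password authentication step listed above is typically implemented today with salted key stretching rather than stored plaintext. The sketch below is a generic illustration, not the cited article's implementation; the function names and iteration count are assumptions.

```python
# Hedged sketch of password verification with salted key stretching
# (generic practice, not the article's actual implementation).
import hashlib
import hmac
import os

def make_record(password):
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return {"salt": salt, "digest": digest}

def verify(record, attempt):
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(),
                                    record["salt"], 100_000)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, record["digest"])

rec = make_record("correct horse")
print(verify(rec, "correct horse"), verify(rec, "guess"))  # True False
```

Only the salt and digest are stored server-side, so a database breach does not directly reveal passwords.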

  2. Chemical Space: Big Data Challenge for Molecular Diversity.

    PubMed

    Awale, Mahendra; Visini, Ricardo; Probst, Daniel; Arús-Pous, Josep; Reymond, Jean-Louis

    2017-10-25

    Chemical space describes all possible molecules as well as multi-dimensional conceptual spaces representing the structural diversity of these molecules. Part of this chemical space is available in public databases ranging from thousands to billions of compounds. Exploiting these databases for drug discovery represents a typical big data problem limited by computational power, data storage and data access capacity. Here we review recent developments of our laboratory, including progress in the chemical universe databases (GDB) and the fragment subset FDB-17, tools for ligand-based virtual screening by nearest neighbor searches, such as our multi-fingerprint browser for the ZINC database to select purchasable screening compounds, and their application to discover potent and selective inhibitors for calcium channel TRPV6 and Aurora A kinase, the polypharmacology browser (PPB) for predicting off-target effects, and finally interactive 3D-chemical space visualization using our online tools WebDrugCS and WebMolCS. All resources described in this paper are available for public use at www.gdb.unibe.ch.

  3. LASSO-ing Potential Nuclear Receptor Agonists and Antagonists: A New Computational Method for Database Screening

    EPA Science Inventory

    Nuclear receptors (NRs) are important biological macromolecular transcription factors that are implicated in multiple biological pathways and may interact with other xenobiotics that are endocrine disruptors present in the environment. Examples of important NRs include the androg...

  4. Laboratory testing for cytomegalovirus among pregnant women in the United States: a retrospective study using administrative claims data

    PubMed Central

    2012-01-01

    Background Routine cytomegalovirus (CMV) screening during pregnancy is not recommended in the United States and the extent to which it is performed is unknown. Using a medical claims database, we computed rates of CMV-specific testing among pregnant women. Methods We used medical claims from the 2009 Truven Health MarketScan® Commercial databases. We computed CMV-specific testing rates using CPT codes. Results We identified 77,773 pregnant women, of whom 1,668 (2%) had a claim for CMV-specific testing. CMV-specific testing was significantly associated with older age, Northeast or urban residence, and a diagnostic code for mononucleosis. We identified 44 women with a diagnostic code for mononucleosis, of whom 14% had CMV-specific testing. Conclusions Few pregnant women had CMV-specific testing, suggesting that screening for CMV infection during pregnancy is not commonly performed. In the absence of national surveillance for CMV infections during pregnancy, healthcare claims are a potential source for monitoring practices of CMV-specific testing. PMID:23198949

  5. Automatic detection of anomalies in screening mammograms

    PubMed Central

    2013-01-01

    Background Diagnostic performance in breast screening programs may be influenced by the prior probability of disease. Since breast cancer incidence is roughly half a percent in the general population, there is a large probability that the screening exam will be normal. That factor may contribute to false negatives. Screening programs typically exhibit about 83% sensitivity and 91% specificity. This investigation was undertaken to determine if a system could be developed to pre-sort screening images into normal and suspicious bins based on their likelihood to contain disease. Wavelets were investigated as a method to parse the image data, potentially removing confounding information. The development of a classification system based on features extracted from wavelet-transformed mammograms is reported. Methods In the multi-step procedure, images were processed using 2D discrete wavelet transforms to create a set of maps at different size scales. Next, statistical features were computed from each map, and a subset of these features was the input for a set of naïve Bayesian classifiers. The classifier network was constructed to calculate the probability that the parent mammography image contained an abnormality. The abnormalities were not identified, nor were they regionalized. The algorithm was tested on two publicly available databases: the Digital Database for Screening Mammography (DDSM) and the Mammographic Image Analysis Society (MIAS) database. These databases contain radiologist-verified images featuring common abnormalities, including spiculations, masses, geometric deformations, and fibroid tissues. Results The classifier-network designs tested achieved sensitivities and specificities sufficient to be potentially useful in a clinical setting. This first series of tests identified networks with 100% sensitivity and up to 79% specificity for abnormalities. This performance significantly exceeds the mean sensitivity reported in the literature for the unaided human expert. Conclusions Classifiers based on wavelet-derived features proved to be highly sensitive to a range of pathologies; as a result, Type II errors were nearly eliminated. Pre-sorting the images changed the prior probability in the sorted database from 37% to 74%. PMID:24330643
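
    The feature-extraction step described in this record can be sketched in a few lines. The sketch below is an illustration only, not the study's implementation: it assumes a one-level-per-scale 2-D Haar transform (the record does not name the wavelet family) and simple per-map statistics.

    ```python
    import numpy as np

    def haar2d(img):
        """One level of a 2-D Haar discrete wavelet transform.

        Returns the approximation (LL) and three detail maps (LH, HL, HH)
        for an image with even dimensions; the transform is orthonormal,
        so total energy is preserved across the four maps.
        """
        a = img[0::2, 0::2]; b = img[0::2, 1::2]
        c = img[1::2, 0::2]; d = img[1::2, 1::2]
        ll = (a + b + c + d) / 2.0
        lh = (a - b + c - d) / 2.0
        hl = (a + b - c - d) / 2.0
        hh = (a - b - c + d) / 2.0
        return ll, lh, hl, hh

    def wavelet_features(img, levels=3):
        """Mean/std/energy statistics from each detail map at each scale."""
        feats = []
        ll = img.astype(float)
        for _ in range(levels):
            ll, lh, hl, hh = haar2d(ll)
            for m in (lh, hl, hh):
                feats += [m.mean(), m.std(), np.mean(m ** 2)]
        return np.array(feats)
    ```

    A vector like this (3 statistics × 3 detail maps × number of scales) would then feed a classifier such as the naïve Bayesian network described in the abstract.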

  6. The EPA Comptox Chemistry Dashboard: A Web-Based Data ...

    EPA Pesticide Factsheets

    The U.S. Environmental Protection Agency (EPA) Computational Toxicology Program integrates advances in biology, chemistry, and computer science to help prioritize chemicals for further research based on potential human health risks. This work involves computational and data driven approaches that integrate chemistry, exposure and biological data. As an outcome of these efforts the National Center for Computational Toxicology (NCCT) has measured, assembled and delivered an enormous quantity and diversity of data for the environmental sciences including high-throughput in vitro screening data, in vivo and functional use data, exposure models and chemical databases with associated properties. A series of software applications and databases have been produced over the past decade to deliver these data but recent developments have focused on the development of a new software architecture that assembles the resources into a single platform. A new web application, the CompTox Chemistry Dashboard provides access to data associated with ~720,000 chemical substances. These data include experimental and predicted physicochemical property data, bioassay screening data associated with the ToxCast program, product and functional use information and a myriad of related data of value to environmental scientists. The dashboard provides chemical-based searching based on chemical names, synonyms and CAS Registry Numbers. Flexible search capabilities allow for chemical identificati

  7. The EPA CompTox Chemistry Dashboard - an online resource ...

    EPA Pesticide Factsheets

    The U.S. Environmental Protection Agency (EPA) Computational Toxicology Program integrates advances in biology, chemistry, and computer science to help prioritize chemicals for further research based on potential human health risks. This work involves computational and data driven approaches that integrate chemistry, exposure and biological data. As an outcome of these efforts the National Center for Computational Toxicology (NCCT) has measured, assembled and delivered an enormous quantity and diversity of data for the environmental sciences including high-throughput in vitro screening data, in vivo and functional use data, exposure models and chemical databases with associated properties. A series of software applications and databases have been produced over the past decade to deliver these data. Recent work has focused on the development of a new architecture that assembles the resources into a single platform. With a focus on delivering access to Open Data streams, web service integration accessibility and a user-friendly web application the CompTox Dashboard provides access to data associated with ~720,000 chemical substances. These data include research data in the form of bioassay screening data associated with the ToxCast program, experimental and predicted physicochemical properties, product and functional use information and related data of value to environmental scientists. This presentation will provide an overview of the CompTox Dashboard and its va

  8. ACToR-Aggregated Computational Resource | Science ...

    EPA Pesticide Factsheets

    ACToR (Aggregated Computational Toxicology Resource) is a database and set of software applications that bring into one central location many types and sources of data on environmental chemicals. Currently, the ACToR chemical database contains information on chemical structure, in vitro bioassays and in vivo toxicology assays derived from more than 150 sources including the U.S. Environmental Protection Agency (EPA), Centers for Disease Control (CDC), U.S. Food & Drug Administration (FDA), National Institutes of Health (NIH), state agencies, corresponding government agencies in Canada, Europe and Japan, universities, the World Health Organization (WHO) and non-governmental organizations (NGOs). At the EPA National Center for Computational Toxicology, ACToR helps manage large data sets being used in a high throughput environmental chemical screening and prioritization program called ToxCast(TM).

  9. Decision support methods for the detection of adverse events in post-marketing data.

    PubMed

    Hauben, M; Bate, A

    2009-04-01

    Spontaneous reporting is a crucial component of post-marketing drug safety surveillance despite its significant limitations. The size and complexity of some spontaneous reporting system databases represent a challenge for drug safety professionals who traditionally have relied heavily on the scientific and clinical acumen of the prepared mind. Computer algorithms that calculate statistical measures of reporting frequency for huge numbers of drug-event combinations are increasingly used to support pharmacovigilance analysts screening large spontaneous reporting system databases. After an overview of pharmacovigilance and spontaneous reporting systems, we discuss the theory and application of contemporary computer algorithms in regular use, those under development, and the practical considerations involved in the implementation of computer algorithms within a comprehensive and holistic drug safety signal detection program.
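
    The reporting-frequency measures mentioned here are typically disproportionality statistics. One common example is the proportional reporting ratio (PRR); the following is a generic illustration, not the algorithm of any particular surveillance system.

    ```python
    def prr(a, b, c, d):
        """Proportional reporting ratio from a 2x2 contingency table.

        a: reports mentioning both the drug and the event
        b: reports with the drug but other events
        c: reports with other drugs and the event
        d: reports with other drugs and other events
        """
        rate_drug = a / (a + b)     # event share among the drug's reports
        rate_other = c / (c + d)    # event share among all other reports
        return rate_drug / rate_other

    # e.g. 10 of 100 reports for the drug mention the event, versus
    # 100 of 10 000 reports for all other drugs:
    signal = prr(10, 90, 100, 9900)   # -> 10.0
    ```

    In practice a drug-event pair is often flagged only when the PRR exceeds some threshold (for example PRR ≥ 2 with a minimum number of reports), though exact criteria vary between systems.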

  10. ACToR - Aggregated Computational Toxicology Resource

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judson, Richard; Richard, Ann; Dix, David

    2008-11-15

    ACToR (Aggregated Computational Toxicology Resource) is a database and set of software applications that bring into one central location many types and sources of data on environmental chemicals. Currently, the ACToR chemical database contains information on chemical structure, in vitro bioassays and in vivo toxicology assays derived from more than 150 sources including the U.S. Environmental Protection Agency (EPA), Centers for Disease Control (CDC), U.S. Food and Drug Administration (FDA), National Institutes of Health (NIH), state agencies, corresponding government agencies in Canada, Europe and Japan, universities, the World Health Organization (WHO) and non-governmental organizations (NGOs). At the EPA National Center for Computational Toxicology, ACToR helps manage large data sets being used in a high-throughput environmental chemical screening and prioritization program called ToxCast™.

  11. Virtual Screening Approaches towards the Discovery of Toll-Like Receptor Modulators

    PubMed Central

    Pérez-Regidor, Lucía; Zarioh, Malik; Ortega, Laura; Martín-Santamaría, Sonsoles

    2016-01-01

    This review aims to summarize the latest efforts performed in the search for novel chemical entities such as Toll-like receptor (TLR) modulators by means of virtual screening techniques. This is an emergent research field with only very recent (and successful) contributions. Identification of drug-like molecules with potential therapeutic applications for the treatment of a variety of TLR-regulated diseases has attracted considerable interest due to the clinical potential. Additionally, the virtual screening databases and computational tools employed have been overviewed in a descriptive way, widening the scope for researchers interested in the field. PMID:27618029

  12. Logistical Consideration in Computer-Based Screening of Astronaut Applicants

    NASA Technical Reports Server (NTRS)

    Galarza, Laura

    2000-01-01

    This presentation reviews the logistical, ergonomic, and psychometric issues and data related to the development and operational use of a computer-based system for the psychological screening of astronaut applicants. The Behavioral Health and Performance Group (BHPG) at the Johnson Space Center upgraded its astronaut psychological screening and selection procedures for the 1999 astronaut applicants and subsequent astronaut selection cycles. The questionnaires, tests, and inventories were upgraded from a paper-and-pencil system to a computer-based system. Members of the BHPG and a computer programmer designed and developed the needed interfaces (screens, buttons, etc.) and programs for the astronaut psychological assessment system. This intranet-based system included user-friendly computer-based administration of tests, test scoring, generation of reports, the integration of test administration and test output into a single system, and a complete database for past, present, and future selection data. Upon completion of the system development phase, four beta and usability tests were conducted with the newly developed system. The first three tests included 1 to 3 participants each. The final system test was conducted with 23 participants tested simultaneously. Usability and ergonomic data were collected from the system (beta) test participants and from 1999 astronaut applicants who volunteered the information in exchange for anonymity. Beta and usability test data were analyzed to examine operational, ergonomic, programming, test administration, and scoring issues related to computer-based testing. Results showed a preference for computer-based testing over paper-and-pencil procedures. The data also reflected specific ergonomic, usability, psychometric, and logistical concerns that should be taken into account in future selection cycles. Conclusion: Psychological, psychometric, human, and logistical factors must be examined and considered carefully when developing and using a computer-based system for psychological screening and selection.

  13. Chloroplast 2010: A Database for Large-Scale Phenotypic Screening of Arabidopsis Mutants

    PubMed Central

    Lu, Yan; Savage, Linda J.; Larson, Matthew D.; Wilkerson, Curtis G.; Last, Robert L.

    2011-01-01

    Large-scale phenotypic screening presents challenges and opportunities not encountered in typical forward or reverse genetics projects. We describe a modular database and laboratory information management system that was implemented in support of the Chloroplast 2010 Project, an Arabidopsis (Arabidopsis thaliana) reverse genetics phenotypic screen of more than 5,000 mutants (http://bioinfo.bch.msu.edu/2010_LIMS; www.plastid.msu.edu). The software and laboratory work environment were designed to minimize operator error and detect systematic process errors. The database uses Ruby on Rails and Flash technologies to present complex quantitative and qualitative data and pedigree information in a flexible user interface. Examples are presented where the database was used to find opportunities for process changes that improved data quality. We also describe the use of the data-analysis tools to discover mutants defective in enzymes of leucine catabolism (heteromeric mitochondrial 3-methylcrotonyl-coenzyme A carboxylase [At1g03090 and At4g34030] and putative hydroxymethylglutaryl-coenzyme A lyase [At2g26800]) based upon a syndrome of pleiotropic seed amino acid phenotypes that resembles previously described isovaleryl coenzyme A dehydrogenase (At3g45300) mutants. In vitro assay results support the computational annotation of At2g26800 as hydroxymethylglutaryl-coenzyme A lyase. PMID:21224340

  14. Role of Chemical Reactivity and Transition State Modeling for Virtual Screening.

    PubMed

    Karthikeyan, Muthukumarasamy; Vyas, Renu; Tambe, Sanjeev S; Radhamohan, Deepthi; Kulkarni, Bhaskar D

    2015-01-01

    Every drug discovery research program involves the synthesis of a novel and potential drug molecule utilizing atom-efficient, economical, and environmentally friendly synthetic strategies. The current work focuses on the role of reactivity-based fingerprints of compounds as filters for virtual screening using the tool ChemScore. A reactant-like (RLS) and a product-like (PLS) score can be predicted for a given compound using the binary fingerprints derived from numerous known organic reactions, which capture molecule-molecule interactions in the form of addition, substitution, rearrangement, elimination and isomerization reactions. The reaction fingerprints were applied to large databases in biology and chemistry, namely ChEMBL, KEGG, HMDB, DSSTox, and the DrugBank database. A large network of 1113 synthetic reactions was constructed to visualize and ascertain the reactant-product mappings in the chemical reaction space. The cumulative reaction fingerprints were computed for 4000 molecules belonging to 29 therapeutic classes of compounds, and these were found capable of discriminating between cognition-disorder-related and anti-allergy compounds with a reasonable accuracy of 75% and an AUC of 0.8. In this study, transition state based fingerprints were also developed and used effectively for virtual screening in drug-related databases. The methodology presented here provides an efficient handle for the rapid scoring of molecular libraries for virtual screening.
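
    Binary fingerprints such as the reaction fingerprints described in this record are usually compared with the Tanimoto coefficient when used as screening filters. A minimal generic sketch (not the ChemScore implementation):

    ```python
    def tanimoto(fp1, fp2):
        """Tanimoto similarity between two equal-length binary fingerprints."""
        common = sum(x & y for x, y in zip(fp1, fp2))  # bits set in both
        total = sum(fp1) + sum(fp2) - common           # bits set in either
        return common / total

    # two toy 8-bit fingerprints sharing 2 of their set bits
    sim = tanimoto([1, 1, 0, 1, 0, 0, 0, 0],
                   [1, 0, 0, 1, 1, 0, 0, 0])   # -> 0.5
    ```

    A screening filter would keep only database compounds whose similarity to a query (or whose reactivity-derived score) clears a chosen cutoff.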

  15. shRNA target prediction informed by comprehensive enquiry (SPICE): a supporting system for high-throughput screening of shRNA library.

    PubMed

    Kamatuka, Kenta; Hattori, Masahiro; Sugiyama, Tomoyasu

    2016-12-01

    RNA interference (RNAi) screening is extensively used in the field of reverse genetics. RNAi libraries constructed using random oligonucleotides have made this technology affordable. However, the new methodology requires exploration of the RNAi target gene information after screening because the RNAi library includes non-natural sequences that are not found in genes. Here, we developed a web-based tool to support RNAi screening. The system performs short hairpin RNA (shRNA) target prediction that is informed by comprehensive enquiry (SPICE). SPICE automates several tasks that are laborious but indispensable to evaluate the shRNAs obtained by RNAi screening. SPICE has four main functions: (i) sequence identification of shRNA in the input sequence (the sequence might be obtained by sequencing clones in the RNAi library), (ii) searching the target genes in the database, (iii) demonstrating biological information obtained from the database, and (iv) preparation of search result files that can be utilized in a local personal computer (PC). Using this system, we demonstrated that genes targeted by random oligonucleotide-derived shRNAs were not different from those targeted by organism-specific shRNA. The system facilitates RNAi screening, which requires sequence analysis after screening. The SPICE web application is available at http://www.spice.sugysun.org/.

  16. Pharmacophore modeling, docking, and principal component analysis based clustering: combined computer-assisted approaches to identify new inhibitors of the human rhinovirus coat protein.

    PubMed

    Steindl, Theodora M; Crump, Carolyn E; Hayden, Frederick G; Langer, Thierry

    2005-10-06

    The development and application of a sophisticated virtual screening and selection protocol to identify potential, novel inhibitors of the human rhinovirus coat protein employing various computer-assisted strategies are described. A large commercially available database of compounds was screened using a highly selective, structure-based pharmacophore model generated with the program Catalyst. A docking study and a principal component analysis were carried out within the software package Cerius and served to validate and further refine the obtained results. These combined efforts led to the selection of six candidate structures, for which in vitro anti-rhinoviral activity could be shown in a biological assay.

  17. Predictive Toxicology and Computer Simulation of Male Reproductive Development (Duke U KURe and PMRC research day)

    EPA Science Inventory

    The reproductive tract is a complex, integrated organ system with diverse embryology and unique sensitivity to prenatal environmental exposures that disrupt morphoregulatory processes and endocrine signaling. U.S. EPA’s in vitro high-throughput screening (HTS) database (ToxCastDB...

  18. EVALUATION OF DNA CHIPS (MICROARRAYS) FOR DETERMINING VIRULENCE FACTOR ACTIVITY RELATIONSHIPS (VFARS)

    EPA Science Inventory

    Computational toxicology is a rapid approach to screening for toxic effects and looking for common outcomes that can result in predictive models. The long term project will result in the development of a database of mRNA responses to known water-borne pathogens. An understanding...

  19. Nurse-computer performance. Considerations for the nurse administrator.

    PubMed

    Mills, M E; Staggers, N

    1994-11-01

    Regulatory reporting requirements and economic pressures to create a unified healthcare database are leading to the development of a fully computerized patient record. Nursing staff members will be responsible increasingly for using this technology, yet little is known about the interaction effect of staff characteristics and computer screen design on on-line accuracy and speed. In examining these issues, new considerations are raised for nurse administrators interested in facilitating staff use of clinical information systems.

  20. Computer-Assisted Telephone Screening: A New System for Patient Evaluation and Recruitment

    PubMed Central

    Radcliffe, Jeanne M.; Latham, Georgia S.; Sunderland, Trey; Lawlor, Brian A.

    1990-01-01

    Recruitment of subjects for research studies is a time consuming process for any research coordinator. This paper introduces three computerized databases designed to help screen potential research candidates by telephone. The three programs discussed are designed to evaluate specific populations: geriatric patients (i.e. Alzheimer's patients), patients with affective disorders and normal volunteers. The interview content, software development, and the utility of these programs is discussed with particular focus on how they can be helpful in the research setting.

  1. A prediction model-based algorithm for computer-assisted database screening of adverse drug reactions in the Netherlands.

    PubMed

    Scholl, Joep H G; van Hunsel, Florence P A M; Hak, Eelko; van Puijenbroek, Eugène P

    2018-02-01

    The statistical screening of pharmacovigilance databases containing spontaneously reported adverse drug reactions (ADRs) is mainly based on disproportionality analysis. The aim of this study was to improve the efficiency of full database screening using a prediction model-based approach. A logistic regression-based prediction model containing 5 candidate predictors was developed and internally validated using the Summary of Product Characteristics as the gold standard for the outcome. All drug-ADR associations, with the exception of those related to vaccines, with a minimum of 3 reports formed the training data for the model. Performance was based on the area under the receiver operating characteristic curve (AUC). Results were compared with the current method of database screening based on the number of previously analyzed associations. A total of 25 026 unique drug-ADR associations formed the training data for the model. The final model contained all 5 candidate predictors (number of reports, disproportionality, reports from healthcare professionals, reports from marketing authorization holders, Naranjo score). The AUC for the full model was 0.740 (95% CI; 0.734-0.747). The internal validity was good based on the calibration curve and bootstrapping analysis (AUC after bootstrapping = 0.739). Compared with the old method, the AUC increased from 0.649 to 0.740, and the proportion of potential signals increased by approximately 50% (from 12.3% to 19.4%). A prediction model-based approach can be a useful tool to create priority-based listings for signal detection in databases consisting of spontaneous ADRs. © 2017 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.
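
    The AUC values reported above have a direct probabilistic reading: the chance that a randomly chosen true association receives a higher score than a randomly chosen non-association. A small illustrative computation (not the study's code), using the Mann-Whitney pairwise-comparison form:

    ```python
    def auc(scores_pos, scores_neg):
        """AUC as the probability that a positive outscores a negative,
        counting ties as half a win (Mann-Whitney U / (n_pos * n_neg))."""
        wins = 0.0
        for p in scores_pos:
            for n in scores_neg:
                if p > n:
                    wins += 1.0
                elif p == n:
                    wins += 0.5
        return wins / (len(scores_pos) * len(scores_neg))
    ```

    Under this reading, the reported increase from 0.649 to 0.740 means the model became noticeably better at ranking genuine drug-ADR associations above spurious ones.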

  2. Identifying potential selective fluorescent probes for cancer-associated protein carbonic anhydrase IX using a computational approach.

    PubMed

    Kamstra, Rhiannon L; Floriano, Wely B

    2014-11-01

    Carbonic anhydrase IX (CAIX) is a biomarker for tumor hypoxia. Fluorescent inhibitors of CAIX have been used to study hypoxic tumor cell lines. However, these inhibitor-based fluorescent probes may have a therapeutic effect that is not appropriate for monitoring treatment efficacy. In the search for novel fluorescent probes that are not based on known inhibitors, a database of 20,860 fluorescent compounds was virtually screened against CAIX using hierarchical virtual ligand screening (HierVLS). The screening database contained 14,862 compounds tagged with the ATTO680 fluorophore plus an additional 5998 intrinsically fluorescent compounds. Overall ranking of compounds to identify hit molecular probe candidates utilized a principal component analysis (PCA) approach. Four potential binding sites, including the catalytic site, were identified within the structure of the protein and targeted for virtual screening. Available sequence information for 23 carbonic anhydrase isoforms was used to prioritize the four sites based on the estimated "uniqueness" of each site in CAIX relative to the other isoforms. A database of 32 known inhibitors and 478 decoy compounds was used to validate the methodology. A receiver-operating characteristic (ROC) analysis using the first principal component (PC1) as predictive score for the validation database yielded an area under the curve (AUC) of 0.92. AUC is interpreted as the probability that a binder will have a better score than a non-binder. The use of first component analysis of binding energies for multiple sites is a novel approach for hit selection. The very high prediction power for this approach increases confidence in the outcome from the fluorescent library screening. Ten of the top scoring candidates for isoform-selective putative binding sites are suggested for future testing as fluorescent molecular probe candidates. Copyright © 2014 Elsevier Inc. All rights reserved.
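
    Ranking compounds by the first principal component of their per-site binding energies, as described in this record, can be sketched with a plain SVD. This is an illustrative reconstruction under that assumption, not the HierVLS pipeline itself.

    ```python
    import numpy as np

    def pc1_scores(X):
        """Project each row of X (compounds x per-site binding energies)
        onto the first principal component of the centred data."""
        Xc = X - X.mean(axis=0)                      # centre each column
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ vt[0]                            # PC1 projection per compound
    ```

    Compounds are then sorted by this composite score, so that a single number summarizes binding across all four targeted sites; the ROC analysis in the abstract validates exactly this kind of one-dimensional summary.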

  3. An automated tuberculosis screening strategy combining X-ray-based computer-aided detection and clinical information

    NASA Astrophysics Data System (ADS)

    Melendez, Jaime; Sánchez, Clara I.; Philipsen, Rick H. H. M.; Maduskar, Pragnya; Dawson, Rodney; Theron, Grant; Dheda, Keertan; van Ginneken, Bram

    2016-04-01

    Lack of human resources and radiological interpretation expertise impair tuberculosis (TB) screening programmes in TB-endemic countries. Computer-aided detection (CAD) constitutes a viable alternative for chest radiograph (CXR) reading. However, no automated techniques that exploit the additional clinical information typically available during screening exist. To address this issue and optimally exploit this information, a machine learning-based combination framework is introduced. We have evaluated this framework on a database containing 392 patient records from suspected TB subjects prospectively recruited in Cape Town, South Africa. Each record comprised a CAD score, automatically computed from a CXR, and 12 clinical features. Comparisons with strategies relying on either CAD scores or clinical information alone were performed. Our results indicate that the combination framework outperforms the individual strategies in terms of the area under the receiver operating characteristic curve (0.84 versus 0.78 and 0.72), specificity at 95% sensitivity (49% versus 24% and 31%) and negative predictive value (98% versus 95% and 96%). Thus, it is believed that combining CAD and clinical information to estimate the risk of active disease is a promising tool for TB screening.
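
    One of the metrics above, specificity at 95% sensitivity, fixes an operating point on the ROC curve. The sketch below shows one generic way such a number can be read off from classifier scores (illustrative only, not the authors' evaluation code):

    ```python
    def specificity_at_sensitivity(pos_scores, neg_scores, target=0.95):
        """Best specificity attainable while keeping sensitivity >= target.

        Scores are classifier outputs where higher means 'more likely
        diseased'; each candidate threshold t classifies score >= t as
        positive.
        """
        best = 0.0
        for t in sorted(set(pos_scores) | set(neg_scores)):
            sens = sum(s >= t for s in pos_scores) / len(pos_scores)
            spec = sum(s < t for s in neg_scores) / len(neg_scores)
            if sens >= target:
                best = max(best, spec)
        return best
    ```

    Reporting specificity at a high fixed sensitivity reflects the screening setting, where missing an active TB case is far more costly than a false referral.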

  4. High-throughput screening for thermoelectric sulphides by using crystal structure features as descriptors

    NASA Astrophysics Data System (ADS)

    Zhang, Ruizhi; Du, Baoli; Chen, Kan; Reece, Mike; Materials Research Institute Team

    With increasing computational power and reliable databases, high-throughput screening is playing an increasingly important role in the search for new thermoelectric materials. Rather than the well-established density functional theory (DFT) calculation-based methods, we propose an alternative approach to screen for new TE materials: using crystal structure features as 'descriptors'. We show that a non-distorted transition metal sulphide polyhedral network can be a good descriptor for a high power factor according to crystal field theory. Using Cu/S-containing compounds as an example, 1600+ Cu/S-containing entries in the Inorganic Crystal Structure Database (ICSD) were screened, and 84 of those phases were identified as promising thermoelectric materials. The screening results are validated by both electronic structure calculations and experimental results from the literature. We also fabricated some new compounds to test our screening results. Another advantage of using crystal structure features as descriptors is that structural relationships between the identified phases can easily be established. Based on this, two material design approaches are discussed: 1) high-pressure synthesis of metastable phases; 2) in-situ two-phase composites with coherent interfaces. This work was supported by a Marie Curie International Incoming Fellowship of the European Community Human Potential Program.
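
    Descriptor-based screening of this kind amounts to applying boolean filters over database records. The sketch below uses hypothetical, hand-made records; the field names and entries are invented for illustration and are not taken from the ICSD.

    ```python
    def screen(entries, predicates):
        """Keep database entries satisfying every structural-descriptor test."""
        return [e for e in entries if all(p(e) for p in predicates)]

    # hypothetical ICSD-style records (fields invented for this example)
    db = [
        {"formula": "Cu2S",   "polyhedra_distorted": False, "contains": {"Cu", "S"}},
        {"formula": "CuFeS2", "polyhedra_distorted": True,  "contains": {"Cu", "Fe", "S"}},
    ]
    hits = screen(db, [
        lambda e: {"Cu", "S"} <= e["contains"],     # Cu/S-containing compound
        lambda e: not e["polyhedra_distorted"],     # non-distorted polyhedral network
    ])
    ```

    Each descriptor is a cheap structural test, which is what lets this approach scan 1600+ entries far faster than running a DFT calculation per candidate.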

  5. The Case for Lung Cancer Screening: What Nurses Need to Know.

    PubMed

    Sorrie, Kerrin; Cates, Lisa; Hill, Alethea

    2016-06-01

    Lung cancer screening with low-dose helical computed tomography (LDCT) can improve high-risk individuals' chances of being diagnosed at an earlier stage and increase survival. The aims of this article are to present the risk factors associated with the development of lung cancer, identify patients at high risk for lung cancer qualifying for LDCT screening, and understand the importance of early lung cancer detection through the use of LDCT screening. PubMed and CINAHL® databases were searched with key words lung cancer screening to identify full-text academic articles from 2004-2014. This resulted in 529 articles from PubMed and 195 from CINAHL. PubMed offered suggestions for additional relevant journal articles. The National Comprehensive Cancer Network guidelines also provided substantial evidence-based information. Nurses need to provide support, education, and resources for patients undergoing lung cancer screening.

  6. Inverse Band Structure Design via Materials Database Screening: Application to Square Planar Thermoelectrics

    DOE PAGES

    Isaacs, Eric B.; Wolverton, Chris

    2018-02-26

    Electronic band structure contains a wealth of information on the electronic properties of a solid and is routinely computed. However, the more difficult problem of designing a solid with a desired band structure is an outstanding challenge. In order to address this inverse band structure design problem, we devise an approach using materials database screening with materials attributes based on the constituent elements, nominal electron count, crystal structure, and thermodynamics. Our strategy is tested in the context of thermoelectric materials, for which a targeted band structure containing both flat and dispersive components with respect to crystal momentum is highly desirable. We screen for thermodynamically stable or metastable compounds containing d8 transition metals coordinated by anions in a square planar geometry in order to mimic the properties of recently identified oxide thermoelectrics with such a band structure. In doing so, we identify 157 compounds out of a total of over half a million candidates. After further screening based on electronic band gap and structural anisotropy, we explicitly compute the band structures for several of the candidates in order to validate the approach. We successfully find two new oxide systems that achieve the targeted band structure. Electronic transport calculations on these two compounds, Ba2PdO3 and La4PdO7, confirm promising thermoelectric power factor behavior for the compounds. This methodology is easily adapted to other targeted band structures and should be widely applicable to a variety of design problems.

  7. Inverse Band Structure Design via Materials Database Screening: Application to Square Planar Thermoelectrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isaacs, Eric B.; Wolverton, Chris

Electronic band structure contains a wealth of information on the electronic properties of a solid and is routinely computed. However, the more difficult problem of designing a solid with a desired band structure is an outstanding challenge. To address this inverse band structure design problem, we devise an approach using materials database screening with materials attributes based on the constituent elements, nominal electron count, crystal structure, and thermodynamics. Our strategy is tested in the context of thermoelectric materials, for which a targeted band structure containing both flat and dispersive components with respect to crystal momentum is highly desirable. We screen for thermodynamically stable or metastable compounds containing d8 transition metals coordinated by anions in a square planar geometry in order to mimic the properties of recently identified oxide thermoelectrics with such a band structure. In doing so, we identify 157 compounds out of a total of over half a million candidates. After further screening based on electronic band gap and structural anisotropy, we explicitly compute the band structures for several of the candidates in order to validate the approach. We successfully find two new oxide systems that achieve the targeted band structure. Electronic transport calculations on these two compounds, Ba2PdO3 and La4PdO7, confirm promising thermoelectric power factor behavior. This methodology is easily adapted to other targeted band structures and should be widely applicable to a variety of design problems.

  8. Solubility prediction, solvate and cocrystal screening as tools for rational crystal engineering.

    PubMed

    Loschen, Christoph; Klamt, Andreas

    2015-06-01

The fact that novel drug candidates are becoming increasingly insoluble is a major problem of current drug development. Computational tools may address this issue by screening for suitable solvents or by identifying potential novel cocrystal formers that increase bioavailability. In contrast to other more specialized methods, the fluid phase thermodynamics approach COSMO-RS (conductor-like screening model for real solvents) allows for a comprehensive treatment of drug solubility, solvate and cocrystal formation, and many other thermodynamic properties in liquids. This article gives an overview of recent COSMO-RS developments that are of interest for drug development and contains several new application examples for solubility prediction and solvate/cocrystal screening. COSMO-RS has been used for all property predictions. The basic concept of COSMO-RS consists of using the screening charge density, as computed from first principles calculations, in combination with fast statistical thermodynamics to compute the chemical potential of a compound in solution. The fast and accurate assessment of drug solubility and the identification of suitable solvents, solvate or cocrystal formers is nowadays possible and may be used to complement modern drug development. Efficiency is increased by avoiding costly quantum-chemical computations through a database of previously computed molecular fragments. COSMO-RS theory can be applied to a range of physico-chemical properties that are of interest in rational crystal engineering. Most notably, in combination with experimental reference data, accurate quantitative solubility predictions in any solvent or solvent mixture are possible. Additionally, COSMO-RS can be extended to the prediction of cocrystal formation, which results in considerable predictive accuracy concerning coformer screening. In a recent variant, costly quantum-chemical calculations are avoided, resulting in a significant speed-up and ease of use.
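The core step described above, turning a computed chemical potential into a solubility ranking, can be illustrated with a back-of-envelope sketch. The relation below neglects the fusion (melting) contribution that a full COSMO-RS treatment includes, and the chemical potential values are purely hypothetical:

```python
# Rank solvents by predicted relative mole-fraction solubility of a solute,
# using x ~ exp(-(mu_solution - mu_pure) / RT). This is a simplified sketch:
# a real COSMO-RS calculation supplies the chemical potentials and also
# accounts for the solid's free energy of fusion.
import math

R = 8.314e-3   # gas constant, kJ/(mol*K)
T = 298.15     # temperature, K

def rel_solubility(mu_solution, mu_pure):
    """Relative solubility from the chemical-potential difference (kJ/mol)."""
    return math.exp(-(mu_solution - mu_pure) / (R * T))

mu_pure = -42.0  # hypothetical pseudo-chemical potential of the pure solute
solvents = {"ethanol": -41.0, "water": -35.0, "acetone": -40.5}  # hypothetical

ranking = sorted(solvents,
                 key=lambda s: rel_solubility(solvents[s], mu_pure),
                 reverse=True)
```

The smaller the chemical-potential penalty of moving the solute into a solvent, the higher the predicted solubility, so solvent screening reduces to sorting on this quantity.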
© 2015 Royal Pharmaceutical Society.

  9. Intelligent Interfaces for Mining Large-Scale RNAi-HCS Image Databases

    PubMed Central

    Lin, Chen; Mak, Wayne; Hong, Pengyu; Sepp, Katharine; Perrimon, Norbert

    2010-01-01

Recently, high-content screening (HCS) has been combined with RNA interference (RNAi) to become an essential image-based high-throughput method for studying genes and biological networks through RNAi-induced cellular phenotype analyses. However, a genome-wide RNAi-HCS screen typically generates tens of thousands of images, most of which remain uncategorized due to the inadequacies of existing HCS image analysis tools. Until now, it has taken highly trained scientists browsing a prohibitively large RNAi-HCS image database to produce even a handful of qualitative results regarding cellular morphological phenotypes. For this reason we have developed intelligent interfaces to facilitate the application of HCS technology in biomedical research. Our new interfaces empower biologists with computational power not only to explore large-scale RNAi-HCS image databases effectively and efficiently, but also to apply their knowledge and experience to the interactive mining of cellular phenotypes using Content-Based Image Retrieval (CBIR) with Relevance Feedback (RF) techniques. PMID:21278820
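The CBIR-with-relevance-feedback loop mentioned above is classically implemented by nudging the query's feature vector toward images the user marks relevant and away from those marked non-relevant. A minimal Rocchio-style sketch follows; the weights are conventional textbook defaults, not values from the paper:

```python
# Rocchio-style relevance feedback: update a query feature vector from
# user-labeled relevant and non-relevant examples. Weights alpha/beta/gamma
# are common defaults, not taken from the RNAi-HCS system described above.
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query toward relevant vectors and away from non-relevant ones."""
    dim = len(query)

    def centroid(vectors):
        if not vectors:
            return [0.0] * dim
        return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

    r, n = centroid(relevant), centroid(nonrelevant)
    return [alpha * query[i] + beta * r[i] - gamma * n[i] for i in range(dim)]

# Toy 2-D feature space: user likes images resembling [0, 1], dislikes [1, 0].
q = rocchio([1.0, 0.0], relevant=[[0.0, 1.0]], nonrelevant=[[1.0, 0.0]])
```

After the update the query sits between its original position and the relevant examples, so the next retrieval round returns images closer to what the user actually wants.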

  10. Application of the ToxMiner Database: Network Analysis of ...

    EPA Pesticide Factsheets

The US EPA ToxCast program is using in vitro high-throughput screening (HTS) methods to profile and model the bioactivity of environmental chemicals. The main goals of the ToxCast program are to generate predictive signatures of toxicity and, ultimately, to provide rapid and cost-effective alternatives to animal testing. The chemicals selected for Phase I are composed largely of a diverse set of pesticide active ingredients, which had sufficient supporting in vivo data included as part of their registration process with the EPA. Other miscellaneous chemicals of environmental concern were also included. Application of HTS to environmental toxicants is a novel approach to predictive toxicology and health risk assessment, and differs from drug efficacy screening in that the biochemical interactions of environmental chemicals are sometimes weaker than those seen between drugs and their intended targets. Additionally, the chemical space covered by environmental chemicals is much broader than that of pharmaceuticals. The ToxMiner database has been created and added to the EPA’s ACToR (Aggregated Computational Toxicology Resource) chemical database. One purpose of the ToxMiner database is to link biological, metabolic, and cellular pathway data to genes and in vitro assay data for the initial subset of chemicals screened in the ToxCast Phase I HTS assays. Also included in ToxMiner is human disease information, which correlates with ToxCast assays that tar

  11. Application of the ToxMiner Database: Network Analysis ...

    EPA Pesticide Factsheets

The US EPA ToxCast program is using in vitro high-throughput screening (HTS) methods to profile and model the bioactivity of environmental chemicals. The main goals of the ToxCast program are to generate predictive signatures of toxicity and, ultimately, to provide rapid and cost-effective alternatives to animal testing. The chemicals selected for Phase I are composed largely of a diverse set of pesticide active ingredients, which had sufficient supporting in vivo data included as part of their registration process with the EPA. Other miscellaneous chemicals of environmental concern were also included. Application of HTS to environmental toxicants is a novel approach to predictive toxicology and health risk assessment, and differs from drug efficacy screening in that the biochemical interactions of environmental chemicals are sometimes weaker than those seen between drugs and their intended targets. Additionally, the chemical space covered by environmental chemicals is much broader than that of pharmaceuticals. The ToxMiner database has been created and added to the EPA’s ACToR (Aggregated Computational Toxicology Resource) chemical database. One purpose of the ToxMiner database is to link biological, metabolic, and cellular pathway data to genes and in vitro assay data for the initial subset of chemicals screened in the ToxCast Phase I HTS assays. Also included in ToxMiner is human disease information, which correlates with ToxCast assays that ta

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

The system is developed to collect, process, store, and present the information provided by radio frequency identification (RFID) devices. The system contains three parts: the application software, the database, and the web page. The application software manages multiple RFID devices, such as readers and portals, simultaneously. It communicates with the devices through the application programming interface (API) provided by the device vendor. The application software converts data collected by the RFID readers and portals into readable information. It is capable of encrypting data using the 256-bit Advanced Encryption Standard (AES). The application software has a graphical user interface (GUI). The GUI mimics the configurations of the nuclear material storage sites or transport vehicles and gives the user and system administrator an intuitive way to read the information and/or configure the devices. The application software is capable of sending the information to a remote, dedicated, and secured web and database server. Two captured screen samples, one each for storage and transport, are attached. The database is constructed to handle a large number of RFID tag readers and portals. A SQL server is employed for this purpose. An XML script is used to update the database once the information is sent from the application software. The design of the web page imitates the design of the application software. The web page retrieves data from the database and presents it in different panels. The user needs a user name combined with a password to access the web page. The web page is capable of sending e-mail and text messages based on preset criteria, such as when alarm thresholds are exceeded. A captured screen sample is attached. The application software is designed to be installed on a local computer. The local computer is directly connected to the RFID devices and can be controlled locally or remotely. There are multiple local computers managing different sites or transport vehicles. Control from remote sites, and transmission of information to the central database server, is carried out over a secured Internet connection. The information stored in the central database server is shown on the web page, which users can view over the Internet. A dedicated and secured web and database server (HTTPS) is used to provide information security.

  13. Aerothermal Testing for Project Orion Crew Exploration Vehicle

    NASA Technical Reports Server (NTRS)

    Berry, Scott A.; Horvath, Thomas J.; Lillard, Randolph P.; Kirk, Benjamin S.; Fischer-Cassady, Amy

    2009-01-01

The Project Orion Crew Exploration Vehicle aerothermodynamic experimentation strategy, as it relates to flight database development, is reviewed. Experimental data have been obtained both to validate the computational predictions utilized as part of the database and to support the development of engineering models for issues not adequately addressed with computations. An outline is provided of the working groups formed to address the key deficiencies in data and knowledge for blunt reentry vehicles. The facilities utilized to address these deficiencies are reviewed, along with some of the important results obtained thus far. For smooth-wall comparisons of computational convective heating predictions against experimental data from several facilities, confidence was gained in the use of algebraic turbulence model solutions as part of the database. For cavities and protuberances, experimental data are being used to screen various designs and to support the development of engineering models. With the reaction-control system testing, experimental data were acquired on the surface in combination with off-body flow visualization of the jet plumes and interactions. These results are being compared against predictions for improved understanding of aftbody thermal environments and uncertainties.

  14. Computer-aided diagnosis workstation and network system for chest diagnosis based on multislice CT images

    NASA Astrophysics Data System (ADS)

    Satoh, Hitoshi; Niki, Noboru; Mori, Kiyoshi; Eguchi, Kenji; Kaneko, Masahiro; Kakinuma, Ryutarou; Moriyama, Noriyuki; Ohmatsu, Hironobu; Masuda, Hideo; Machida, Suguru

    2007-03-01

The multislice CT scanner has advanced remarkably in the speed at which chest CT images can be acquired for mass screening. Mass screening based on multislice CT images requires a considerable number of images to be read, and it is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images and a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification. Moreover, we have provided diagnostic assistance by building the lung cancer screening algorithm into a mobile helical CT scanner for lung cancer mass screening in regions without hospitals. We have also developed an electronic medical recording system and a prototype Internet system for community health across two or more regions, using a Virtual Private Network router together with biometric fingerprint and face authentication systems for the safety of medical information. Based on these diagnostic assistance methods, we have now developed a new computer-aided workstation and database that can display suspected lesions three-dimensionally in a short time. This paper describes basic studies that have been conducted to evaluate this new system.

  15. Role of Open Source Tools and Resources in Virtual Screening for Drug Discovery.

    PubMed

    Karthikeyan, Muthukumarasamy; Vyas, Renu

    2015-01-01

Advancement in chemoinformatics research, in parallel with the availability of high-performance computing platforms, has made handling of large-scale multi-dimensional scientific data for high-throughput drug discovery easier. In this study we have explored publicly available molecular databases with the help of open-source-based, integrated in-house molecular informatics tools for virtual screening. The virtual screening literature of the past decade has been extensively investigated and thoroughly analyzed to reveal interesting patterns with respect to the drug, target, scaffold, and disease space. The review also focuses on integrated chemoinformatics tools that are capable of harvesting chemical data from textual literature information and transforming it into truly computable chemical structures, identifying unique fragments and scaffolds from a class of compounds, automatically generating focused virtual libraries, computing molecular descriptors for structure-activity relationship studies, and applying conventional filters used in lead discovery along with in-house developed exhaustive PTC (Pharmacophore, Toxicophore, and Chemophore) filters and machine learning tools for the design of potential disease-specific inhibitors. A case study on kinase inhibitors is provided as an example.

  16. Contributions of computational chemistry and biophysical techniques to fragment-based drug discovery.

    PubMed

    Gozalbes, Rafael; Carbajo, Rodrigo J; Pineda-Lucena, Antonio

    2010-01-01

In the last decade, fragment-based drug discovery (FBDD) has evolved from a novel approach in the search for new hits to a valuable alternative to the high-throughput screening (HTS) campaigns of many pharmaceutical companies. The increasing relevance of FBDD in the drug discovery universe has been concomitant with an implementation of the biophysical techniques used for the detection of weak inhibitors, e.g. NMR, X-ray crystallography, or surface plasmon resonance (SPR). At the same time, computational approaches have also been progressively incorporated into the FBDD process, and nowadays several computational tools are available. These range from the filtering of huge chemical databases to build fragment-focused libraries comprising compounds with adequate physicochemical properties, to more evolved models based on different in silico methods such as docking, pharmacophore modelling, QSAR, and virtual screening. In this paper we review the parallel evolution and complementarity of biophysical techniques and computational methods, providing some representative examples of drug discovery success stories using FBDD.

  17. Coordinating Center: Molecular and Cellular Findings of Screen-Detected Lesions | Division of Cancer Prevention

    Cancer.gov

    The Molecular and Cellular Characterization of Screen‐Detected Lesions ‐ Coordinating Center and Data Management Group will provide support for the participating studies responding to RFA CA14‐10. The coordinating center supports three main domains: network coordination, statistical support and computational analysis and protocol development and database support. Support for

  18. The CEBAF Element Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theodore Larrieu, Christopher Slominski, Michele Joyce

    2011-03-01

With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting control computers to building controls screens. A requirement influencing the CED design is that it provide access to not only present, but also future and past configurations of the accelerator. To accomplish this, an introspective database schema was designed that allows new elements, types, and properties to be defined on the fly with no changes to table structure. Used in conjunction with Oracle Workspace Manager, it allows users to query data from any time in the database history with the same tools used to query the present configuration. Users can also check out workspaces to use as staging areas for upcoming machine configurations. All access to the CED is through a well-documented application programming interface (API) that is translated automatically from the original C++ source code into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous.

  19. Targeted mutation screening panels expose systematic population bias in detection of cystic fibrosis risk.

    PubMed

    Lim, Regine M; Silver, Ari J; Silver, Maxwell J; Borroto, Carlos; Spurrier, Brett; Petrossian, Tanya C; Larson, Jessica L; Silver, Lee M

    2016-02-01

Carrier screening for mutations contributing to cystic fibrosis (CF) is typically accomplished with panels composed of variants clinically validated primarily in patients of European descent. This approach has created a static genetic and phenotypic profile for CF. An opportunity now exists to reevaluate the disease profile of CFTR at a global population level. CFTR allele and genotype frequencies were obtained from a nonpatient cohort of more than 60,000 unrelated personal genomes collected by the Exome Aggregation Consortium. Likely disease-contributing mutations were identified with the use of public database annotations and computational tools. We identified 131 previously described and likely pathogenic variants and another 210 untested variants with a high probability of causing protein damage. None of the current genetic screening panels or existing CFTR mutation databases covered a majority of deleterious variants in any geographical population outside of Europe. Both clinical annotation and mutation coverage by commercially available targeted screening panels for CF are strongly biased toward detection of reproductive risk in persons of European descent. South and East Asian populations are severely underrepresented, in part because of a definition of disease that privileges the phenotype associated with European-typical CFTR alleles.
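The coverage bias described above comes down to simple set arithmetic: for each population, what fraction of its observed pathogenic alleles does a fixed panel actually detect? A toy sketch with hypothetical allele sets (the variant names below are illustrative, not the study's data):

```python
# Fraction of a population's pathogenic CFTR variants detectable by a fixed
# screening panel. Allele sets are toy examples, not the study's cohorts.
def panel_coverage(panel, population_variants):
    """Return the detected fraction of the population's pathogenic variants."""
    return len(panel & population_variants) / len(population_variants)

panel = {"F508del", "G542X", "W1282X"}                 # European-weighted panel
european = {"F508del", "G542X", "W1282X", "N1303K"}    # hypothetical spectrum
east_asian = {"variantA", "variantB", "F508del"}       # hypothetical spectrum

cov_eu = panel_coverage(panel, european)
cov_ea = panel_coverage(panel, east_asian)
```

Because the panel was assembled from the European mutation spectrum, coverage is high for the European set and drops sharply for the other population, which is precisely the systematic bias the study quantifies at scale.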

  20. Congestion game scheduling for virtual drug screening optimization

    NASA Astrophysics Data System (ADS)

    Nikitina, Natalia; Ivashko, Evgeny; Tchernykh, Andrei

    2018-02-01

In virtual drug screening, the chemical diversity of hits is an important factor, along with their predicted activity. Moreover, interim results are of interest for directing further research, and their diversity is also desirable. In this paper, we consider the problem of obtaining a diverse set of virtual screening hits in a short time. To this end, we propose a mathematical model of task scheduling for virtual drug screening in high-performance computational systems as a congestion game between computational nodes, finding the equilibrium solutions that best balance the number of interim hits with their chemical diversity. The model considers a heterogeneous environment with workload uncertainty, processing time uncertainty, and limited knowledge about the input dataset structure. We perform computational experiments and evaluate the performance of the developed approach on the organic molecule database GDB-9. This set of molecules is rich enough to demonstrate the feasibility and practicability of the proposed solutions. We compare the algorithm with two known heuristics used in practice and observe that game-based scheduling outperforms them in hit discovery rate and chemical diversity at earlier steps. Based on these results, we use a social utility metric to assess the efficiency of our equilibrium solutions and show that they reach the greatest values.
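The diversity objective at the heart of this record (not the congestion-game scheduler itself) is commonly operationalized as greedy max-min selection over a molecular-fingerprint distance. A sketch with toy fingerprints, using Jaccard distance on feature sets:

```python
# Greedy max-min selection of chemically diverse hits. This illustrates the
# diversity objective only; the paper's actual contribution is a congestion-
# game scheduling model, not this selection heuristic. Fingerprints are toy.
def jaccard_distance(a, b):
    """1 - Tanimoto similarity between two fingerprint feature sets."""
    return 1.0 - len(a & b) / len(a | b)

def select_diverse(fingerprints, k):
    """Pick k hits, each maximizing its minimum distance to those chosen."""
    names = list(fingerprints)
    chosen = [names[0]]
    while len(chosen) < k:
        best = max(
            (n for n in names if n not in chosen),
            key=lambda n: min(jaccard_distance(fingerprints[n], fingerprints[c])
                              for c in chosen),
        )
        chosen.append(best)
    return chosen

fps = {
    "hit1": {1, 2, 3},
    "hit2": {1, 2, 4},   # structurally close to hit1
    "hit3": {7, 8, 9},   # structurally distant
}
picked = select_diverse(fps, 2)
```

Given one seed hit, the heuristic prefers the distant scaffold over the near-duplicate, which is why diversity-aware interim results surface more distinct chemotypes early in a screen.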

  1. Selection of examples in case-based computer-aided decision systems

    PubMed Central

    Mazurowski, Maciej A.; Zurada, Jacek M.; Tourassi, Georgia D.

    2013-01-01

    Case-based computer-aided decision (CB-CAD) systems rely on a database of previously stored, known examples when classifying new, incoming queries. Such systems can be particularly useful since they do not need retraining every time a new example is deposited in the case base. The adaptive nature of case-based systems is well suited to the current trend of continuously expanding digital databases in the medical domain. To maintain efficiency, however, such systems need sophisticated strategies to effectively manage the available evidence database. In this paper, we discuss the general problem of building an evidence database by selecting the most useful examples to store while satisfying existing storage requirements. We evaluate three intelligent techniques for this purpose: genetic algorithm-based selection, greedy selection and random mutation hill climbing. These techniques are compared to a random selection strategy used as the baseline. The study is performed with a previously presented CB-CAD system applied for false positive reduction in screening mammograms. The experimental evaluation shows that when the development goal is to maximize the system’s diagnostic performance, the intelligent techniques are able to reduce the size of the evidence database to 37% of the original database by eliminating superfluous and/or detrimental examples while at the same time significantly improving the CAD system’s performance. Furthermore, if the case-base size is a main concern, the total number of examples stored in the system can be reduced to only 2–4% of the original database without a decrease in the diagnostic performance. Comparison of the techniques shows that random mutation hill climbing provides the best balance between the diagnostic performance and computational efficiency when building the evidence database of the CB-CAD system. PMID:18854606
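Random mutation hill climbing, the strategy this study found to balance diagnostic performance and computational cost best, operates on a membership mask over the case base: flip one case in or out, keep the flip if the evaluation improves. A minimal sketch with a stand-in fitness function (the real system scores the CAD classifier's diagnostic performance, which is far more expensive):

```python
# Random-mutation hill climbing over a case-base membership mask.
# The toy fitness below stands in for the CAD system's diagnostic score:
# cases 0-4 are useful, cases 5-9 are superfluous/detrimental.
import random

def rmhc(n_cases, fitness, iterations=500, seed=0):
    """Flip one membership bit at a time; keep the flip only if it helps."""
    rng = random.Random(seed)
    mask = [rng.random() < 0.5 for _ in range(n_cases)]
    best = fitness(mask)
    for _ in range(iterations):
        i = rng.randrange(n_cases)
        mask[i] = not mask[i]
        score = fitness(mask)
        if score > best:
            best = score
        else:
            mask[i] = not mask[i]   # revert the harmful (or neutral) mutation
    return mask, best

def toy_fitness(mask):
    return sum(mask[:5]) - sum(mask[5:])

mask, score = rmhc(10, toy_fitness)
```

Each accepted mutation either admits a helpful case or evicts a harmful one, so the mask converges toward the useful subset without ever retraining from scratch, which is exactly the property that makes case selection cheap for continuously growing databases.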

  2. Efficient method for high-throughput virtual screening based on flexible docking: discovery of novel acetylcholinesterase inhibitors.

    PubMed

    Mizutani, Miho Yamada; Itai, Akiko

    2004-09-23

A method of easily finding ligands, with a variety of core structures, for a given target macromolecule would greatly contribute to the rapid identification of novel lead compounds for drug development. We have developed an efficient method for discovering ligand candidates from a number of flexible compounds included in databases, when the three-dimensional (3D) structure of the drug target is available. The method, named ADAM&EVE, makes use of our automated docking method ADAM, which has already been reported. Like ADAM, ADAM&EVE takes account of the flexibility of each molecule in databases, by exploring the conformational space fully and continuously. Database screening has been made much faster than with ADAM through the tuning of parameters, so that computational screening of several hundred thousand compounds is possible in a practical time. Promising ligand candidates can be selected according to various criteria based on the docking results and characteristics of compounds. Furthermore, we have developed a new tool, EVE-MAKE, for automatically preparing the additional compound data necessary for flexible docking calculation, prior to 3D database screening. Among several successful cases of lead discovery by ADAM&EVE, the finding of novel acetylcholinesterase (AChE) inhibitors is presented here. We performed a virtual screening of about 160 000 commercially available compounds against the X-ray crystallographic structure of AChE. Among 114 compounds that could be purchased and assayed, 35 molecules with various core structures showed inhibitory activities with IC50 values less than 100 μM. Thirteen compounds had IC50 values between 0.5 and 10 μM, and almost all their core structures are very different from those of known inhibitors. The results demonstrate the effectiveness and validity of the ADAM&EVE approach and provide a starting point for development of novel drugs to treat Alzheimer's disease.

  3. Database for CO2 Separation Performances of MOFs Based on Computational Materials Screening.

    PubMed

    Altintas, Cigdem; Avci, Gokay; Daglar, Hilal; Nemati Vesali Azar, Ayda; Velioglu, Sadiye; Erucar, Ilknur; Keskin, Seda

    2018-05-23

Metal-organic frameworks (MOFs) are potential adsorbents for CO2 capture. Because thousands of MOFs exist, computational studies become very useful in identifying the top performing materials for target applications in a time-effective manner. In this study, molecular simulations were performed to screen the MOF database to identify the best materials for CO2 separation from flue gas (CO2/N2) and landfill gas (CO2/CH4) under realistic operating conditions. We validated the accuracy of our computational approach by comparing the simulation results for the CO2 uptakes and CO2/N2 and CO2/CH4 selectivities of various types of MOFs with the available experimental data. Binary CO2/N2 and CO2/CH4 mixture adsorption data were then calculated for the entire MOF database. These data were then used to predict selectivity, working capacity, regenerability, and separation potential of MOFs. The top performing MOF adsorbents that can separate CO2/N2 and CO2/CH4 with high performance were identified. Molecular simulations for the adsorption of a ternary CO2/N2/CH4 mixture were performed for these top materials to provide a more realistic performance assessment of MOF adsorbents. The structure-performance analysis showed that MOFs with ΔQst0 > 30 kJ/mol, 3.8 Å < pore-limiting diameter < 5 Å, 5 Å < largest cavity diameter < 7.5 Å, 0.5 < ϕ < 0.75, surface area < 1000 m2/g, and ρ > 1 g/cm3 are the best candidates for selective separation of CO2 from flue gas and landfill gas. This information will be very useful to design novel MOFs exhibiting high CO2 separation potentials. Finally, an online, freely accessible database, https://cosmoserc.ku.edu.tr, was established, for the first time in the literature, which reports all of the computed adsorbent metrics of 3816 MOFs for CO2/N2, CO2/CH4, and CO2/N2/CH4 separations in addition to various structural properties of MOFs.
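The structure-performance windows quoted in this record translate directly into a screening predicate over precomputed MOF descriptors. The threshold values below are the ones stated in the abstract; the field names and example entries are illustrative:

```python
# Apply the abstract's structure-performance windows as a filter over
# precomputed MOF descriptors. Field names and sample records are
# hypothetical; the numeric windows are those quoted above.
def is_promising(mof):
    """Check the Qst0, pore-size, porosity, surface-area and density windows."""
    return (
        mof["dQst0_kJ_mol"] > 30
        and 3.8 < mof["pld_A"] < 5.0        # pore-limiting diameter (angstrom)
        and 5.0 < mof["lcd_A"] < 7.5        # largest cavity diameter (angstrom)
        and 0.5 < mof["porosity"] < 0.75
        and mof["surface_area_m2_g"] < 1000
        and mof["density_g_cm3"] > 1.0
    )

mofs = [
    {"name": "MOF-A", "dQst0_kJ_mol": 35, "pld_A": 4.2, "lcd_A": 6.0,
     "porosity": 0.6, "surface_area_m2_g": 800, "density_g_cm3": 1.3},
    {"name": "MOF-B", "dQst0_kJ_mol": 20, "pld_A": 4.2, "lcd_A": 6.0,
     "porosity": 0.6, "surface_area_m2_g": 800, "density_g_cm3": 1.3},
]
promising = [m["name"] for m in mofs if is_promising(m)]
```

In the real workflow each descriptor comes from a molecular simulation or a geometric analysis of the crystal structure; the filter itself, as here, is a cheap final step over the 3816-MOF metrics table.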

  4. Improved Classification of Lung Cancer Using Radial Basis Function Neural Network with Affine Transforms of Voss Representation.

    PubMed

    Adetiba, Emmanuel; Olugbara, Oludayo O

    2015-01-01

Lung cancer is responsible for a large number of cancer-related deaths worldwide. The recommended standard for screening and early detection of lung cancer is low-dose computed tomography. However, many patients diagnosed this way die within one year, which makes it essential to find alternative approaches for screening and early detection of lung cancer. We present computational methods that can be implemented in a functional multi-genomic system for classification, screening, and early detection of lung cancer. Samples of the top ten biomarker genes previously reported to have the highest frequency of lung cancer mutations, and sequences of normal biomarker genes, were collected from the COSMIC and NCBI databases, respectively, to validate the computational methods. Experiments were performed on combinations of Z-curve and tetrahedron affine transforms, Histogram of Oriented Gradients (HOG), multilayer perceptron, and Gaussian Radial Basis Function (RBF) neural networks to find a combination of computational methods that achieves improved classification of lung cancer biomarker genes. Results show that a combination of affine transforms of the Voss representation, HOG genomic features, and a Gaussian RBF neural network perceptibly improves the classification accuracy, specificity, and sensitivity for lung cancer biomarker genes, while achieving low mean square error.

  5. Red Lesion Detection Using Dynamic Shape Features for Diabetic Retinopathy Screening.

    PubMed

    Seoud, Lama; Hurtut, Thomas; Chelbi, Jihed; Cheriet, Farida; Langlois, J M Pierre

    2016-04-01

The development of an automatic telemedicine system for computer-aided screening and grading of diabetic retinopathy depends on reliable detection of retinal lesions in fundus images. In this paper, a novel method for automatic detection of both microaneurysms and hemorrhages in color fundus images is described and validated. The main contribution is a new set of shape features, called Dynamic Shape Features, that do not require precise segmentation of the regions to be classified. These features represent the evolution of the shape during image flooding and allow discrimination between lesions and vessel segments. The method is validated per-lesion and per-image using six databases, four of which are publicly available. It proves to be robust with respect to variability in image resolution, quality, and acquisition system. On the Retinopathy Online Challenge's database, the method achieves a FROC score of 0.420, which ranks it fourth. On the Messidor database, when detecting images with diabetic retinopathy, the proposed method achieves an area under the ROC curve of 0.899, comparable to the score of human experts, and it outperforms state-of-the-art approaches.

  6. FilTer BaSe: A web accessible chemical database for small compound libraries.

    PubMed

    Kolte, Baban S; Londhe, Sanjay R; Solanki, Bhushan R; Gacche, Rajesh N; Meshram, Rohan J

    2018-03-01

    Finding novel chemical agents that act on disease-associated drug targets often requires screening large numbers of new chemical libraries. In silico methods are generally implemented at the initial stages of virtual screening. Such compound libraries are filtered on physicochemical and substructure grounds to eliminate compounds with undesired chemical properties. The filtering procedure is redundant and time consuming, and it requires efficient bioinformatics/computing manpower along with high-end software involving a large capital investment, which forms a major obstacle for drug discovery projects in an academic setup. We present an open-source resource, FilTer BaSe, a chemoinformatics platform (http://bioinfo.net.in/filterbase/) that hosts fully filtered, ready-to-use compound libraries of workable size. The resource also hosts a database that enables efficient searching of the chemical space of around 348,000 compounds on the basis of physicochemical and substructure properties. The ready-to-use compound libraries and database presented here are expected to lend a helping hand to new drug developers and medicinal chemists. Copyright © 2017 Elsevier Inc. All rights reserved.
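
    A physicochemical pre-filter of this kind reduces to checking each compound's computed properties against cutoff ranges. The sketch below uses Lipinski-like rules as an assumed example; the property names, cutoffs and compounds are illustrative, not FilTer BaSe's actual rules or data.

```python
# Assumed Lipinski-style cutoffs: (lower bound, upper bound) per property.
RULES = {
    "mol_weight": (0, 500),    # daltons
    "logp": (-5, 5),           # octanol-water partition coefficient
    "h_donors": (0, 5),        # hydrogen-bond donors
    "h_acceptors": (0, 10),    # hydrogen-bond acceptors
}

def passes_filter(compound, rules=RULES):
    """True if every property lies inside its allowed range."""
    return all(lo <= compound[prop] <= hi for prop, (lo, hi) in rules.items())

# hypothetical library entries with precomputed descriptors
library = [
    {"id": "C1", "mol_weight": 320.4, "logp": 2.1, "h_donors": 2, "h_acceptors": 5},
    {"id": "C2", "mol_weight": 812.9, "logp": 6.3, "h_donors": 7, "h_acceptors": 14},
]
filtered = [c["id"] for c in library if passes_filter(c)]
```

    In practice the descriptors would come from a cheminformatics toolkit rather than being typed in by hand; the point is that the filter itself is a cheap range check once the properties exist.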

  7. Structure-based virtual screening and characterization of a novel IL-6 antagonistic compound from synthetic compound database.

    PubMed

    Wang, Jing; Qiao, Chunxia; Xiao, He; Lin, Zhou; Li, Yan; Zhang, Jiyan; Shen, Beifen; Fu, Tinghuan; Feng, Jiannan

    2016-01-01

    Based on the three-dimensional (3D) structure of the (hIL-6⋅hIL-6R⋅gp130)2 complex and the binding orientation of hIL-6, three compounds with high affinity for hIL-6R and the bioactivity to block hIL-6 in vitro were screened theoretically from chemical databases, including the 3D-Available Chemicals Directory (ACD) and the MDL Drug Data Report (MDDR), by means of a computer-guided virtual screening method. Using distance geometry, molecular modeling and molecular dynamics trajectory analysis, the binding mode and binding energy of the three compounds were evaluated theoretically. Enzyme-linked immunosorbent assay analysis demonstrated that all three compounds could specifically block IL-6 binding to IL-6R. However, only compound 1 could effectively antagonize the function of hIL-6 and inhibit the proliferation of XG-7 cells in a dose-dependent manner, while showing no cytotoxicity to SP2/0 or L929 cells. These data demonstrate that compound 1 could be a promising hIL-6 antagonist candidate.

  8. Aero/fluids database system

    NASA Technical Reports Server (NTRS)

    Reardon, John E.; Violett, Duane L., Jr.

    1991-01-01

    The AFAS Database System was developed to provide the basic structure of a comprehensive database system for the Marshall Space Flight Center (MSFC) Structures and Dynamics Laboratory Aerophysics Division. The system is intended to handle all of the Aerophysics Division Test Facilities as well as data from other sources. The system was written for the DEC VAX family of computers in FORTRAN-77 and utilizes the VMS indexed file system and screen management routines. Various aspects of the system are covered, including a description of the user interface, lists of all code structure elements, descriptions of the file structures, a description of the security system operation, a detailed description of the data retrieval tasks, a description of the session log, and a description of the archival system.

  9. [Adverse Effect Predictions Based on Computational Toxicology Techniques and Large-scale Databases].

    PubMed

    Uesawa, Yoshihiro

    2018-01-01

    Understanding the features of chemical structures related to the adverse effects of drugs is useful for identifying potential adverse effects of new drugs. Such an understanding can draw on the limited information available from post-marketing surveillance, assessment of the potential toxicities of metabolites and illegal drugs with unclear characteristics, screening of lead compounds at the drug discovery stage, and identification of leads for the discovery of new pharmacological mechanisms. The present paper describes techniques used in computational toxicology to investigate the content of large-scale spontaneous report databases of adverse effects, illustrated with examples. Furthermore, volcano plotting, a new visualization method for clarifying the relationships between drugs and adverse effects via comprehensive analyses, is introduced. These analyses may produce a large amount of data that can be applied to drug repositioning.
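
    A volcano plot over a spontaneous-report database typically places an effect size on the x-axis and a significance measure on the y-axis for each drug-event pair. A common construction (an assumption here, not necessarily the paper's exact statistics) uses the log reporting odds ratio (ROR) from a 2x2 contingency table of reports, with the log-ROR divided by its standard error as a significance proxy:

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """ROR from a 2x2 table of spontaneous reports:
    a = reports with drug and event, b = drug without the event,
    c = event without the drug,      d = neither."""
    return (a * d) / (b * c)

def volcano_coordinates(a, b, c, d):
    """(x, y) point for one drug-event pair on a volcano plot:
    x = log2(ROR), y = |log(ROR)| / SE(log ROR) as a significance proxy."""
    ror = reporting_odds_ratio(a, b, c, d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Woolf's standard error
    return math.log2(ror), abs(math.log(ror)) / se

# invented counts for one drug-event pair
x, y = volcano_coordinates(10, 90, 20, 880)
```

    Pairs far to the right and high up (large, significant ROR) are the disproportionality signals a comprehensive analysis would flag.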

  10. Drug search for leishmaniasis: a virtual screening approach by grid computing

    NASA Astrophysics Data System (ADS)

    Ochoa, Rodrigo; Watowich, Stanley J.; Flórez, Andrés; Mesa, Carol V.; Robledo, Sara M.; Muskus, Carlos

    2016-07-01

    The trypanosomatid protozoa Leishmania is endemic in 100 countries, with infections causing 2 million new cases of leishmaniasis annually. Disease symptoms can include severe skin and mucosal ulcers, fever, anemia, splenomegaly, and death. Unfortunately, therapeutics approved to treat leishmaniasis are associated with potentially severe side effects, including death. Furthermore, drug-resistant Leishmania parasites have developed in most endemic countries. To address an urgent need for new, safe and inexpensive anti-leishmanial drugs, we utilized the IBM World Community Grid to complete computer-based drug discovery screens (Drug Search for Leishmaniasis) using unique leishmanial proteins and a database of 600,000 drug-like small molecules. Protein structures from different Leishmania species were selected for molecular dynamics (MD) simulations, and a series of conformational "snapshots" were chosen from each MD trajectory to simulate the protein's flexibility. A Relaxed Complex Scheme methodology was used to screen 2000 MD conformations against the small molecule database, producing >1 billion protein-ligand structures. For each protein target, a binding spectrum was calculated to identify compounds predicted to bind with highest average affinity to all protein conformations. Significantly, four different Leishmania protein targets were predicted to strongly bind small molecules, with the strongest binding interactions predicted to occur for dihydroorotate dehydrogenase (LmDHODH; PDB:3MJY). A number of predicted tight-binding LmDHODH inhibitors were tested in vitro and potent selective inhibitors of Leishmania panamensis were identified. These promising small molecules are suitable for further development using iterative structure-based optimization and in vitro/in vivo validation assays.
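
    The "binding spectrum" ranking step described above reduces to averaging each compound's docking score over all MD conformations of a target and sorting. The sketch below uses invented compound names and scores (more negative = tighter predicted binding); it illustrates the averaging idea only, not the Relaxed Complex Scheme implementation itself.

```python
def binding_spectrum(scores_by_compound):
    """Map compound -> mean docking score across MD conformations."""
    return {cpd: sum(s) / len(s) for cpd, s in scores_by_compound.items()}

def rank_compounds(scores_by_compound):
    """Compounds ordered by average affinity, tightest binder first."""
    spectrum = binding_spectrum(scores_by_compound)
    return sorted(spectrum, key=spectrum.get)

# hypothetical docking scores (kcal/mol) against three target conformations
docking = {
    "cpd_A": [-9.1, -8.7, -9.4],
    "cpd_B": [-6.2, -6.0, -6.5],
    "cpd_C": [-8.0, -8.2, -7.9],
}
ranking = rank_compounds(docking)
```

    Averaging over an ensemble of conformations rewards compounds that bind well to the flexible protein as a whole rather than to one snapshot.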

  11. Drug search for leishmaniasis: a virtual screening approach by grid computing.

    PubMed

    Ochoa, Rodrigo; Watowich, Stanley J; Flórez, Andrés; Mesa, Carol V; Robledo, Sara M; Muskus, Carlos

    2016-07-01

    The trypanosomatid protozoa Leishmania is endemic in ~100 countries, with infections causing ~2 million new cases of leishmaniasis annually. Disease symptoms can include severe skin and mucosal ulcers, fever, anemia, splenomegaly, and death. Unfortunately, therapeutics approved to treat leishmaniasis are associated with potentially severe side effects, including death. Furthermore, drug-resistant Leishmania parasites have developed in most endemic countries. To address an urgent need for new, safe and inexpensive anti-leishmanial drugs, we utilized the IBM World Community Grid to complete computer-based drug discovery screens (Drug Search for Leishmaniasis) using unique leishmanial proteins and a database of 600,000 drug-like small molecules. Protein structures from different Leishmania species were selected for molecular dynamics (MD) simulations, and a series of conformational "snapshots" were chosen from each MD trajectory to simulate the protein's flexibility. A Relaxed Complex Scheme methodology was used to screen ~2000 MD conformations against the small molecule database, producing >1 billion protein-ligand structures. For each protein target, a binding spectrum was calculated to identify compounds predicted to bind with highest average affinity to all protein conformations. Significantly, four different Leishmania protein targets were predicted to strongly bind small molecules, with the strongest binding interactions predicted to occur for dihydroorotate dehydrogenase (LmDHODH; PDB:3MJY). A number of predicted tight-binding LmDHODH inhibitors were tested in vitro and potent selective inhibitors of Leishmania panamensis were identified. These promising small molecules are suitable for further development using iterative structure-based optimization and in vitro/in vivo validation assays.

  12. The use of a personal digital assistant for wireless entry of data into a database via the Internet.

    PubMed

    Fowler, D L; Hogle, N J; Martini, F; Roh, M S

    2002-01-01

    Researchers typically record data on a worksheet and at some later time enter it into the database. Wireless data entry and retrieval using a personal digital assistant (PDA) at the site of patient contact can simplify this process and improve efficiency. A surgeon and a nurse coordinator provided the content for the database. The computer programmer created the database, placed the pages of the database on the PDA screen, and researched and installed security measures. Designing the database took 6 months. Meeting Health Insurance Portability and Accountability Act of 1996 (HIPAA) requirements for patient confidentiality, satisfying institutional Information Services requirements, and ensuring connectivity required an additional 8 months before the functional system was complete. It is now possible to achieve wireless entry and retrieval of data using a PDA. Potential advantages include collection and entry of data at the same time, easy entry of data from multiple sites, and retrieval of data at the patient's bedside.

  13. Computational Study on New Natural Compound Inhibitors of Pyruvate Dehydrogenase Kinases

    PubMed Central

    Zhou, Xiaoli; Yu, Shanshan; Su, Jing; Sun, Liankun

    2016-01-01

    Pyruvate dehydrogenase kinases (PDKs) are key enzymes in glucose metabolism that negatively regulate pyruvate dehydrogenase complex (PDC) activity through phosphorylation. Inhibiting PDKs could upregulate PDC activity and drive cells into more aerobic metabolism. Therefore, PDKs are potential targets for metabolism-related diseases such as cancers and diabetes. In this study, a series of computer-aided virtual screening techniques were utilized to discover potential inhibitors of PDKs. Structure-based screening using LibDock was carried out, followed by ADME (absorption, distribution, metabolism, excretion) and toxicity prediction. Molecular docking was used to analyze the binding mechanism between these compounds and PDKs, and molecular dynamics simulation was utilized to confirm the stability of potential compound binding. From the computational results, two novel natural coumarin compounds (ZINC12296427 and ZINC12389251) from the ZINC database were found to bind PDKs with favorable interaction energies and were predicted to be non-toxic. Our study provides valuable information on PDK-coumarin binding mechanisms for PDK inhibitor-based drug discovery. PMID:26959013

  14. Computational Study on New Natural Compound Inhibitors of Pyruvate Dehydrogenase Kinases.

    PubMed

    Zhou, Xiaoli; Yu, Shanshan; Su, Jing; Sun, Liankun

    2016-03-04

    Pyruvate dehydrogenase kinases (PDKs) are key enzymes in glucose metabolism that negatively regulate pyruvate dehydrogenase complex (PDC) activity through phosphorylation. Inhibiting PDKs could upregulate PDC activity and drive cells into more aerobic metabolism. Therefore, PDKs are potential targets for metabolism-related diseases such as cancers and diabetes. In this study, a series of computer-aided virtual screening techniques were utilized to discover potential inhibitors of PDKs. Structure-based screening using LibDock was carried out, followed by ADME (absorption, distribution, metabolism, excretion) and toxicity prediction. Molecular docking was used to analyze the binding mechanism between these compounds and PDKs, and molecular dynamics simulation was utilized to confirm the stability of potential compound binding. From the computational results, two novel natural coumarin compounds (ZINC12296427 and ZINC12389251) from the ZINC database were found to bind PDKs with favorable interaction energies and were predicted to be non-toxic. Our study provides valuable information on PDK-coumarin binding mechanisms for PDK inhibitor-based drug discovery.

  15. Enabling the hypothesis-driven prioritization of ligand candidates in big databases: Screenlamp and its application to GPCR inhibitor discovery for invasive species control

    NASA Astrophysics Data System (ADS)

    Raschka, Sebastian; Scott, Anne M.; Liu, Nan; Gunturu, Santosh; Huertas, Mar; Li, Weiming; Kuhn, Leslie A.

    2018-03-01

    While the advantage of screening vast databases of molecules to cover greater molecular diversity is often mentioned, in reality, only a few studies have been published demonstrating inhibitor discovery by screening more than a million compounds for features that mimic a known three-dimensional (3D) ligand. Two factors contribute: the general difficulty of discovering potent inhibitors, and the lack of free, user-friendly software to incorporate project-specific knowledge and user hypotheses into 3D ligand-based screening. The Screenlamp modular toolkit presented here was developed with these needs in mind. We show Screenlamp's ability to screen more than 12 million commercially available molecules and identify potent in vivo inhibitors of a G protein-coupled bile acid receptor within the first year of a discovery project. This pheromone receptor governs sea lamprey reproductive behavior, and to our knowledge, this project is the first to establish the efficacy of computational screening in discovering lead compounds for aquatic invasive species control. Significant enhancement in activity came from selecting compounds based on one of the hypotheses: that matching two distal oxygen groups in the 3D structure of the pheromone is crucial for activity. Six of the 15 most active compounds met these criteria. A second hypothesis—that presence of an alkyl sulfate side chain results in high activity—identified another 6 compounds in the top 10, demonstrating the significant benefits of hypothesis-driven screening.
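
    The distal-oxygen hypothesis amounts to a geometric filter: keep a molecule if some pair of its oxygen atoms sits at least a threshold distance apart in 3D. The sketch below is a generic illustration of that filter, not Screenlamp's actual API; the coordinates and the 13.0 Å cutoff are invented for the example.

```python
import math

def has_distal_oxygens(oxygen_coords, min_dist=13.0):
    """True if any pair of oxygen atoms is at least min_dist angstroms apart.
    The cutoff is an assumed value for illustration."""
    for i in range(len(oxygen_coords)):
        for j in range(i + 1, len(oxygen_coords)):
            if math.dist(oxygen_coords[i], oxygen_coords[j]) >= min_dist:
                return True
    return False

# hypothetical molecules, represented only by their oxygen coordinates (x, y, z)
mol_hit = {"O": [(0.0, 0.0, 0.0), (14.2, 0.5, 1.1)]}   # distal oxygens
mol_miss = {"O": [(0.0, 0.0, 0.0), (3.1, 0.2, 0.4)]}   # oxygens too close
```

    A second hypothesis filter, such as the presence of an alkyl sulfate side chain, would be combined with this one conjunctively or disjunctively depending on what the user wants to test.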

  16. Ligand- and structure-based in silico studies to identify kinesin spindle protein (KSP) inhibitors as potential anticancer agents.

    PubMed

    Balakumar, Chandrasekaran; Ramesh, Muthusamy; Tham, Chuin Lean; Khathi, Samukelisiwe Pretty; Kozielski, Frank; Srinivasulu, Cherukupalli; Hampannavar, Girish A; Sayyad, Nisar; Soliman, Mahmoud E; Karpoormath, Rajshekhar

    2017-11-29

    Kinesin spindle protein (KSP) belongs to the kinesin superfamily of microtubule-based motor proteins. KSP is responsible for establishing the bipolar mitotic spindle, which mediates cell division. Inhibition of KSP blocks the normal cell cycle during mitosis through the generation of monoastral MT arrays that finally cause apoptotic cell death. As KSP is highly expressed in proliferating/cancer cells, it has gained considerable attention as a potential drug target for cancer chemotherapy. Therefore, this study aimed to design novel KSP inhibitors by employing computational techniques such as pharmacophore modelling, virtual database screening, molecular docking and molecular dynamics. Initially, pharmacophore models were generated from a data set of highly potent KSP inhibitors and validated against in-house test-set ligands. The validated pharmacophore model was then used for database screening (Maybridge and ChemBridge) to yield hits, which were further filtered for drug-likeness. The potential hits retrieved from virtual database screening were docked using CDOCKER to identify the ligand binding landscape. The top-ranked hits from molecular docking were progressed to molecular dynamics (AMBER) simulations to deduce ligand binding affinity. This study identified MB-41570 and CB-10358 as potential hits and evaluated them experimentally using in vitro KSP ATPase inhibition assays.

  17. Computational Toxicology at the US EPA | Science Inventory ...

    EPA Pesticide Factsheets

    Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, EPA is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce and the contaminant mixtures found in America’s air, water, and hazardous-waste sites. The ORD Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the EPA Science to Achieve Results (STAR) program. Key intramural projects of the CTRP include digitizing legacy toxicity testing information into the toxicity reference database (ToxRefDB), predicting toxicity (ToxCast™) and exposure (ExpoCast™), and creating virtual liver (v-Liver™) and virtual embryo (v-Embryo™) systems models. The models and underlying data are being made publicly available through the Aggregated Computational Toxicology Resource (ACToR) and other EPA websites.

  18. Toward a standard reference database for computer-aided mammography

    NASA Astrophysics Data System (ADS)

    Oliveira, Júlia E. E.; Gueld, Mark O.; de A. Araújo, Arnaldo; Ott, Bastian; Deserno, Thomas M.

    2008-03-01

    The development of robust systems for computer-aided diagnosis is hampered by the lack of mammography databases with a large number of codified images and identified characteristics such as pathology, breast tissue type, and abnormality. Integrated into the Image Retrieval in Medical Applications (IRMA) project, we present an available mammography database built from the union of the Mammographic Image Analysis Society Digital Mammogram Database (MIAS), the Digital Database for Screening Mammography (DDSM), the Lawrence Livermore National Laboratory (LLNL) database, and routine images from the Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen. Using the IRMA code, standardized coding of tissue type, tumor staging, and lesion description was developed according to the American College of Radiology (ACR) tissue codes and the ACR breast imaging reporting and data system (BI-RADS). The import was done automatically using scripts for image download, file format conversion, file naming, and web page and information file browsing. Disregarding resolution, this resulted in a total of 10,509 reference images, of which 6,767 are associated with an IRMA contour information feature file. In accordance with the respective license agreements, the database will be made freely available for research purposes and may be used for image-based evaluation campaigns such as the Cross Language Evaluation Forum (CLEF). We have also shown that it can be extended easily with further cases imported from a picture archiving and communication system (PACS).

  19. Holistic computational structure screening of more than 12,000 candidates for solid lithium-ion conductor materials

    NASA Astrophysics Data System (ADS)

    Sendek, Austin D.; Yang, Qian; Cubuk, Ekin D.; Duerloo, Karel-Alexander N.; Cui, Yi; Reed, Evan J.

    We present a new type of large-scale computational screening approach for identifying promising candidate materials for solid-state electrolytes for lithium-ion batteries that is capable of screening all known lithium-containing solids. To predict the likelihood of a candidate material exhibiting high lithium-ion conductivity, we leverage machine learning techniques to train an ionic conductivity classification model using logistic regression based on experimental measurements reported in the literature. This model, which is built on easily calculable atomistic descriptors, provides new insight into the structure-property relationship for superionic behavior in solids and is approximately one million times faster to evaluate than DFT-based approaches to calculating diffusion coefficients or migration barriers. We couple this model with several other technologically motivated heuristics to reduce the list of candidate materials from the more than 12,000 known lithium-containing solids to 21 structures that show promise as electrolytes, few of which have been examined experimentally. Our screening utilizes structures and electronic information contained in the Materials Project database. This work is supported by an Office of Technology Licensing Fellowship through the Stanford Graduate Fellowship Program and a seed grant from the TomKat Center for Sustainable Energy at Stanford.
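
    A logistic-regression screen over atomistic descriptors can be sketched compactly. The descriptors, toy data and hyperparameters below are invented for illustration (the published model's descriptors and coefficients are not reproduced); the point is that once trained, classifying a candidate is a single dot product and sigmoid, which is why such a model can screen thousands of structures cheaply.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    """Plain stochastic gradient descent for logistic regression."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability that a candidate structure is a fast ionic conductor."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# hypothetical descriptors per structure: (Li site fraction, avg. bond ionicity)
X = [(0.6, 0.8), (0.5, 0.9), (0.1, 0.2), (0.2, 0.1)]
y = [1, 1, 0, 0]  # invented superionic labels
w, b = train(X, y)
```

    In the real workflow, the structures and descriptor inputs would be pulled from the Materials Project rather than hand-coded.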

  20. When drug discovery meets web search: Learning to Rank for ligand-based virtual screening.

    PubMed

    Zhang, Wei; Ji, Lijuan; Chen, Yanan; Tang, Kailin; Wang, Haiping; Zhu, Ruixin; Jia, Wei; Cao, Zhiwei; Liu, Qi

    2015-01-01

    The rapid increase in the emergence of novel chemical substances creates a substantial demand for more sophisticated computational methodologies for drug discovery. In this study, the Learning to Rank idea from web search was applied to drug virtual screening, which offers two unique capabilities: (1) identifying compounds for novel targets when there is not enough training data available for those targets, and (2) integrating heterogeneous data when compound affinities are measured on different platforms. A standard pipeline was designed to carry out Learning to Rank in virtual screening. Six Learning to Rank algorithms were investigated based on two public datasets collected from the Binding Database and the newly published Community Structure-Activity Resource benchmark dataset. The results demonstrate that Learning to Rank is an efficient computational strategy for drug virtual screening, particularly due to its novel use in cross-target virtual screening and heterogeneous data integration. To the best of our knowledge, this is the first application of Learning to Rank in virtual screening. The experimental workflow and algorithm assessment designed in this study provide a standard protocol for other similar studies. All the datasets as well as the implementations of the Learning to Rank algorithms are available at http://www.tongji.edu.cn/~qiliu/lor_vs.html. Graphical Abstract: The analogy between web search and ligand-based drug discovery.
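
    The key move in pairwise Learning to Rank is to train on ordered pairs ("compound A binds better than compound B") rather than on absolute affinities, which is exactly what makes heterogeneous measurement platforms comparable. The toy perceptron-style ranker below illustrates that idea; the features, data and update rule are assumptions for the sketch, not one of the six published algorithms.

```python
def score(w, x):
    """Linear ranking score for a compound's feature vector."""
    return sum(wi * xi for wi, xi in zip(w, x))

def train_pairwise(pairs, n_features, lr=0.1, epochs=50):
    """Learn weights from (better, worse) feature-vector pairs: whenever a
    pair is mis-ranked, nudge the weights toward the better compound."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for better, worse in pairs:
            if score(w, better) <= score(w, worse):  # mis-ranked pair
                w = [wi + lr * (bi - oi) for wi, bi, oi in zip(w, better, worse)]
    return w

# invented features per compound: (shape similarity, pharmacophore match)
pairs = [
    ((0.9, 0.8), (0.2, 0.3)),  # first compound measured more potent
    ((0.7, 0.9), (0.4, 0.1)),
]
w = train_pairwise(pairs, 2)
```

    Because only the ordering within each pair matters, pairs from different assay platforms can be pooled without calibrating their absolute scales against each other.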

  1. Computational toxicology as implemented by the U.S. EPA: providing high throughput decision support tools for screening and assessing chemical exposure, hazard and risk.

    PubMed

    Kavlock, Robert; Dix, David

    2010-02-01

    Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, the U.S. Environmental Protection Agency (EPA) is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce, and contaminant mixtures found in air, water, and hazardous-waste sites. The Office of Research and Development (ORD) Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the U.S. EPA Science to Achieve Results (STAR) program. Together these elements form the key components in the implementation of both the initial strategy, A Framework for a Computational Toxicology Research Program (U.S. EPA, 2003), and the newly released The U.S. Environmental Protection Agency's Strategic Plan for Evaluating the Toxicity of Chemicals (U.S. EPA, 2009a). Key intramural projects of the CTRP include digitizing legacy toxicity testing information into the toxicity reference database (ToxRefDB), predicting toxicity (ToxCast) and exposure (ExpoCast), and creating virtual liver (v-Liver) and virtual embryo (v-Embryo) systems models. U.S. EPA-funded STAR centers are also providing bioinformatics, computational toxicology data and models, and developmental toxicity data and models.
The models and underlying data are being made publicly available through the Aggregated Computational Toxicology Resource (ACToR), the Distributed Structure-Searchable Toxicity (DSSTox) Database Network, and other U.S. EPA websites. While initially focused on improving the hazard identification process, the CTRP is placing increasing emphasis on using high-throughput bioactivity profiling data in systems modeling to support quantitative risk assessments, and in developing complementary higher throughput exposure models. This integrated approach will enable analysis of life-stage susceptibility, and understanding of the exposures, pathways, and key events by which chemicals exert their toxicity in developing systems (e.g., endocrine-related pathways). The CTRP will be a critical component in next-generation risk assessments utilizing quantitative high-throughput data and providing a much higher capacity for assessing chemical toxicity than is currently available.

  2. Breast Imaging in the Era of Big Data: Structured Reporting and Data Mining.

    PubMed

    Margolies, Laurie R; Pandey, Gaurav; Horowitz, Eliot R; Mendelson, David S

    2016-02-01

    The purpose of this article is to describe structured reporting and the development of large databases for use in data mining in breast imaging. The results of millions of breast imaging examinations are reported with structured tools based on the BI-RADS lexicon. Most of these data are stored in accessible media. Robust computing power creates a great opportunity for data scientists and breast imagers to collaborate to improve breast cancer detection and optimize screening algorithms. Data mining can create knowledge, but the questions asked and their complexity require extremely powerful and agile databases. New data technologies can facilitate outcomes research and precision medicine.

  3. A proposed computer diagnostic system for malignant melanoma (CDSMM).

    PubMed

    Shao, S; Grams, R R

    1994-04-01

    This paper describes a computer diagnostic system for malignant melanoma. The diagnostic system is a rule-based system built on image analyses and runs under the PC Windows environment. It consists of several modules: an I/O module, a patient/clinic database, an image processing module, a classification module, a rule base module and a system control module. In the system, the image analyses are carried out automatically, and database management is efficient and fast. Both final clinical results and intermediate results from the various modules, such as measured features, feature pictures and history records of the lesion, can be presented on screen or printed from each corresponding module or from the I/O module. The system can also work as a doctor's office-based tool to aid dermatologists with details not perceivable by the human eye. Since the system operates on a general-purpose PC, it can be made portable if the I/O module is disconnected.
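
    The abstract does not publish CDSMM's actual rules, but a melanoma rule base of this kind is often built on the classic dermoscopic ABCD rule (total dermoscopy score). The sketch below uses the standard ABCD weights as an assumed example of what such a rule base module might compute; treat the feature values and thresholds as illustrative.

```python
# Classic ABCD-rule weights (Asymmetry, Border, Colors, Differential structures).
WEIGHTS = {"asymmetry": 1.3, "border": 0.1, "colors": 0.5, "structures": 0.5}

def total_dermoscopy_score(features):
    """Weighted sum of the measured ABCD features for one lesion."""
    return sum(WEIGHTS[k] * features[k] for k in WEIGHTS)

def classify(features):
    """Map the score to a verdict using the conventional ABCD cutoffs."""
    tds = total_dermoscopy_score(features)
    if tds < 4.75:
        return "benign"
    if tds <= 5.45:
        return "suspicious"
    return "malignant"

# hypothetical feature values produced by the image-processing module
lesion = {"asymmetry": 2, "border": 6, "colors": 5, "structures": 4}
```

    In a full system the feature values would come from the image processing and classification modules, with the rule base module supplying only this final decision layer.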

  4. Identification of Transthyretin Fibril Formation Inhibitors Using Structure-Based Virtual Screening.

    PubMed

    Ortore, Gabriella; Martinelli, Adriano

    2017-08-22

    Transthyretin (TTR) is the primary carrier for thyroxine (T4) in cerebrospinal fluid and a secondary carrier in blood. TTR is a stable homotetramer, but certain genetic or environmental factors can promote its degradation to form amyloid fibrils. A docking study using crystal structures of wild-type TTR was planned; our aim was to design new ligands that are able to inhibit TTR fibril formation. The computational protocol was designed to overcome the multiple binding modes of the ligands induced by the peculiarity of the TTR binding site and by the pseudosymmetry of the site pockets, which generally weaken such structure-based studies. Two docking steps, a very fast one and a subsequent, more accurate one, were used to screen the Aldrich Market Select database. Five compounds were selected, and their ability to inhibit TTR fibril formation was assessed. Three compounds were found to be active: two have the same potency as the positive control, and the third is a promising lead compound. These results validate a computational protocol that is able to capture information on the key interactions between database compounds and TTR, which is valuable for supporting further studies. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. High-throughput screening of inorganic compounds for the discovery of novel dielectric and optical materials

    DOE PAGES

    Petousis, Ioannis; Mrdjenovich, David; Ballouz, Eric; ...

    2017-01-31

    Dielectrics are an important class of materials that are ubiquitous in modern electronic applications. Even though their properties are important for the performance of devices, the number of compounds with known dielectric constant is on the order of a few hundred. Here, we use Density Functional Perturbation Theory as a way to screen for the dielectric constant and refractive index of materials in a fast and computationally efficient way. Our results constitute the largest dielectric tensors database to date, containing 1,056 compounds. Details regarding the computational methodology and technical validation are presented along with the format of our publicly available data. In addition, we integrate our dataset with the Materials Project allowing users easy access to material properties. Finally, we explain how our dataset and calculation methodology can be used in the search for novel dielectric compounds.

  6. High-throughput screening of inorganic compounds for the discovery of novel dielectric and optical materials

    PubMed Central

    Petousis, Ioannis; Mrdjenovich, David; Ballouz, Eric; Liu, Miao; Winston, Donald; Chen, Wei; Graf, Tanja; Schladt, Thomas D.; Persson, Kristin A.; Prinz, Fritz B.

    2017-01-01

    Dielectrics are an important class of materials that are ubiquitous in modern electronic applications. Even though their properties are important for the performance of devices, the number of compounds with known dielectric constant is on the order of a few hundred. Here, we use Density Functional Perturbation Theory as a way to screen for the dielectric constant and refractive index of materials in a fast and computationally efficient way. Our results constitute the largest dielectric tensors database to date, containing 1,056 compounds. Details regarding the computational methodology and technical validation are presented along with the format of our publicly available data. In addition, we integrate our dataset with the Materials Project allowing users easy access to material properties. Finally, we explain how our dataset and calculation methodology can be used in the search for novel dielectric compounds. PMID:28140408

  7. High-throughput screening of inorganic compounds for the discovery of novel dielectric and optical materials.

    PubMed

    Petousis, Ioannis; Mrdjenovich, David; Ballouz, Eric; Liu, Miao; Winston, Donald; Chen, Wei; Graf, Tanja; Schladt, Thomas D; Persson, Kristin A; Prinz, Fritz B

    2017-01-31

    Dielectrics are an important class of materials that are ubiquitous in modern electronic applications. Even though their properties are important for the performance of devices, the number of compounds with known dielectric constant is on the order of a few hundred. Here, we use Density Functional Perturbation Theory as a way to screen for the dielectric constant and refractive index of materials in a fast and computationally efficient way. Our results constitute the largest dielectric tensors database to date, containing 1,056 compounds. Details regarding the computational methodology and technical validation are presented along with the format of our publicly available data. In addition, we integrate our dataset with the Materials Project allowing users easy access to material properties. Finally, we explain how our dataset and calculation methodology can be used in the search for novel dielectric compounds.

  8. 76 FR 39315 - Privacy Act of 1974: Implementation of Exemptions; Department of Homeland Security/ALL-030 Use of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-06

    ... Terrorist Screening Database System of Records AGENCY: Privacy Office, DHS. ACTION: Notice of proposed... Use of the Terrorist Screening Database System of Records'' and this proposed rulemaking. In this... Use of the Terrorist Screening Database (TSDB) System of Records.'' DHS is maintaining a mirror copy...

  9. Version 1.00 programmer's tools used in constructing the INEL RML/analytical radiochemistry sample tracking database and its user interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Femec, D.A.

    This report describes two code-generating tools used to speed design and implementation of relational databases and user interfaces: CREATE-SCHEMA and BUILD-SCREEN. CREATE-SCHEMA produces the SQL commands that actually create and define the database. BUILD-SCREEN takes templates for data entry screens and generates the screen management system routine calls to display the desired screen. Both tools also generate the related FORTRAN declaration statements and precompiled SQL calls. Included with this report is the source code for a number of FORTRAN routines and functions used by the user interface. This code is broadly applicable to a number of different databases.
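The CREATE-SCHEMA/BUILD-SCREEN source is not reproduced in this record, so as an illustration only, a minimal Python sketch of the CREATE-SCHEMA idea (a table specification in, SQL DDL out) might look like the following; the table name, column spec format, and helper name are invented, and the FORTRAN declaration output of the original tool is omitted:

```python
# Hypothetical sketch of a CREATE-SCHEMA-style generator: a table
# specification goes in, SQL DDL comes out. Names and the spec format
# are invented for illustration.

def create_schema(table, columns):
    """Render a CREATE TABLE statement from (name, sql_type, nullable) tuples."""
    lines = []
    for name, sql_type, nullable in columns:
        null_clause = "" if nullable else " NOT NULL"
        lines.append(f"    {name} {sql_type}{null_clause}")
    body = ",\n".join(lines)
    return f"CREATE TABLE {table} (\n{body}\n);"

spec = [
    ("sample_id",   "INTEGER",     False),
    ("analyte",     "VARCHAR(32)", False),
    ("received_on", "DATE",        True),
]
ddl = create_schema("rml_sample", spec)
print(ddl)
```

Generating both the DDL and the matching host-language declarations from one spec, as the report describes, keeps the database and the user interface from drifting apart.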

  10. Benchmarking Ligand-Based Virtual High-Throughput Screening with the PubChem Database

    PubMed Central

    Butkiewicz, Mariusz; Lowe, Edward W.; Mueller, Ralf; Mendenhall, Jeffrey L.; Teixeira, Pedro L.; Weaver, C. David; Meiler, Jens

    2013-01-01

    With the rapidly increasing availability of High-Throughput Screening (HTS) data in the public domain, such as the PubChem database, methods for ligand-based computer-aided drug discovery (LB-CADD) have the potential to accelerate and reduce the cost of probe development and drug discovery efforts in academia. We assemble nine data sets from realistic HTS campaigns representing major families of drug target proteins for benchmarking LB-CADD methods. Each data set is public domain through PubChem and carefully collated through confirmation screens validating active compounds. These data sets provide the foundation for benchmarking a new cheminformatics framework BCL::ChemInfo, which is freely available for non-commercial use. Quantitative structure activity relationship (QSAR) models are built using Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), Decision Trees (DTs), and Kohonen networks (KNs). Problem-specific descriptor optimization protocols are assessed including Sequential Feature Forward Selection (SFFS) and various information content measures. Measures of predictive power and confidence are evaluated through cross-validation, and a consensus prediction scheme is tested that combines orthogonal machine learning algorithms into a single predictor. Enrichments ranging from 15 to 101 for a TPR cutoff of 25% are observed. PMID:23299552
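The enrichment figures quoted above (15 to 101 at a 25% TPR cutoff) follow the usual virtual-screening pattern: rank compounds by predicted score, walk down the list until the cutoff fraction of true actives is recovered, and compare the hit rate in that top slice with the hit rate of the whole library. The exact definition used by BCL::ChemInfo may differ; this is one common formulation, sketched here on a toy library:

```python
# Enrichment at a TPR cutoff: (actives found / compounds inspected)
# divided by (total actives / library size). One common definition,
# assumed for illustration.

def enrichment_at_tpr(scores, labels, tpr_cutoff=0.25):
    total = len(labels)
    actives = sum(labels)
    needed = tpr_cutoff * actives
    ranked = sorted(zip(scores, labels), key=lambda p: -p[0])
    found = 0
    for i, (_, label) in enumerate(ranked, start=1):
        found += label
        if found >= needed:
            return (found / i) / (actives / total)
    return 1.0

# Toy library: 4 actives among 20 compounds, actives scored highest.
scores = [0.9, 0.8, 0.7, 0.6] + [0.1] * 16
labels = [1, 1, 1, 1] + [0] * 16
print(enrichment_at_tpr(scores, labels))  # 5.0: top slice is 5x enriched
```

A perfect ranking of this toy library can do no better than 5.0, since only 1 in 5 compounds is active overall.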

  11. Case retrieval in medical databases by fusing heterogeneous information.

    PubMed

    Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Roux, Christian; Cochener, Béatrice

    2011-01-01

    A novel content-based heterogeneous information retrieval framework, particularly well suited to browse medical databases and support new generation computer aided diagnosis (CADx) systems, is presented in this paper. It was designed to retrieve possibly incomplete documents, consisting of several images and semantic information, from a database; more complex data types such as videos can also be included in the framework. The proposed retrieval method relies on image processing, in order to characterize each individual image in a document by their digital content, and information fusion. Once the available images in a query document are characterized, a degree of match, between the query document and each reference document stored in the database, is defined for each attribute (an image feature or a metadata). A Bayesian network is used to recover missing information if need be. Finally, two novel information fusion methods are proposed to combine these degrees of match, in order to rank the reference documents by decreasing relevance for the query. In the first method, the degrees of match are fused by the Bayesian network itself. In the second method, they are fused by the Dezert-Smarandache theory: the second approach lets us model our confidence in each source of information (i.e., each attribute) and take it into account in the fusion process for a better retrieval performance. The proposed methods were applied to two heterogeneous medical databases, a diabetic retinopathy database and a mammography screening database, for computer aided diagnosis. Precisions at five of 0.809 ± 0.158 and 0.821 ± 0.177, respectively, were obtained for these two databases, which is very promising.
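The fusion step described above assigns each reference document one degree of match per attribute and then combines them, weighting sources by confidence. The following is an illustrative stand-in only, not the paper's Bayesian-network or Dezert-Smarandache operators: a weighted geometric mean over invented attribute names and confidence weights shows the shape of the computation:

```python
# Illustrative fusion of per-attribute degrees of match. A weighted
# geometric mean stands in for the paper's fusion operators; attribute
# names, weights, and degrees are invented.
import math

def fuse(degrees, weights):
    """Weighted geometric mean of per-attribute degrees of match in (0, 1]."""
    total_w = sum(weights.values())
    log_sum = sum(w * math.log(max(degrees[a], 1e-9)) for a, w in weights.items())
    return math.exp(log_sum / total_w)

weights = {"image_feature": 2.0, "age": 1.0, "exam_type": 0.5}  # trust images most
references = {
    "case_A": {"image_feature": 0.9, "age": 0.6, "exam_type": 1.0},
    "case_B": {"image_feature": 0.4, "age": 0.9, "exam_type": 1.0},
}
ranking = sorted(references, key=lambda r: fuse(references[r], weights), reverse=True)
print(ranking)  # case_A first: its strong image match dominates
```

Raising the weight on an attribute makes its degree of match dominate the fused score, which is the role confidence modelling plays in the second method of the paper.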

  12. Prevalence of incidental pulmonary nodules on computed tomography of the thorax in trauma patients.

    PubMed

    Hammerschlag, G; Cao, J; Gumm, K; Irving, L; Steinfort, D

    2015-06-01

    Lung cancer is the third leading cause of death in high-income countries. Early detection leads to improved clinical outcomes, with evidence showing that lung cancer screening reduces lung cancer mortality. Knowledge of the population prevalence of pulmonary nodules affects the efficacy and cost-effectiveness of a local screening programme. We performed a retrospective review of our trauma database looking for the prevalence of incidental pulmonary nodules on computed tomography of the thorax. Prevalence of nodules and follow up according to the Fleischner Guidelines were reviewed. Two hundred and forty-eight patients underwent computed tomography of the thorax as part of their trauma assessment. 8.5% (21/248) had incidental pulmonary nodules. Eighty-one per cent of these (17/21) required follow up according to the Fleischner Society Guidelines. One was subsequently diagnosed with primary lung cancer, one with metastatic sigmoid cancer and one with invasive aspergillosis. Incidental pulmonary nodules are common in the general population. This has implications for possible lung cancer screening recommendations in the Australian population. Referral and/or review systems are essential to ensure adequate follow up of incidental findings, as it is likely some patients are not receiving adequate follow up at present. © 2015 Royal Australasian College of Physicians.

  13. A Mobile Health Data Collection System for Remote Areas to Monitor Women Participating in a Cervical Cancer Screening Campaign.

    PubMed

    Quercia, Kelly; Tran, Phuong Lien; Jinoro, Jéromine; Herniainasolo, Joséa Lea; Viviano, Manuela; Vassilakos, Pierre; Benski, Caroline; Petignat, Patrick

    2018-04-01

    Barriers to efficient cervical cancer screening in low- and medium-income countries include the lack of systematic monitoring of the participants' data. The aim of this study was to assess the feasibility of a mobile health (m-Health) data collection system to facilitate monitoring of women participating in a cervical cancer screening campaign. Women aged 30-65 years, participating in a cervical cancer screening campaign in Ambanja, Madagascar, were invited to participate in the study. Cervical Cancer Prevention System, an m-Health application, allows the registration of clinical data while women are undergoing cervical cancer screening. All data registered in the smartphone were transmitted to a secure, Web-based platform over an Internet connection. Healthcare providers had access to the central database and could use it for the follow-up visits. Quality of data was assessed by computing the percentage of key data missing. A total of 151 women were recruited in the study. Mean age of participants was 41.8 years. The percentage of missing data for the key variables was less than 0.02%, corresponding to one woman's medical history data, which was not sent to the central database. Technical problems, including transmission of photos, human papillomavirus test results, and pelvic examination data, have subsequently been solved through a system update. The quality of the data was satisfactory and allowed monitoring of cervical cancer screening data of participants. Larger studies evaluating the efficacy of the system for the women's follow-up are needed in order to confirm its efficiency over the long term.

  14. Classification and virtual screening of androgen receptor antagonists.

    PubMed

    Li, Jiazhong; Gramatica, Paola

    2010-05-24

    Computational tools, such as quantitative structure-activity relationship (QSAR) models, are highly useful as screening support for prioritization of substances of very high concern (SVHC). From the practical point of view, QSAR models should be effective at picking out active rather than inactive compounds, expressed as sensitivity in classification studies. This research investigates the classification of a large data set of endocrine-disrupting chemicals (EDCs)-androgen receptor (AR) antagonists, mainly aiming to improve the external sensitivity and to screen for potential AR binders. The kNN, lazy IB1, and ADTree methods and the consensus approach were used to build different models, which improve the sensitivity on external chemicals from 57.1% (literature) to 76.4%. Additionally, the models' predictive abilities were further validated on a blind collected data set (sensitivity: 85.7%). Then the proposed classifiers were used: (i) to distinguish a set of AR binders into antagonists and agonists; (ii) to screen a combined estrogen receptor binder database to find out possible chemicals that can bind to both AR and ER; and (iii) to virtually screen our in-house environmental chemical database. The in silico screening results suggest that: (i) some compounds can affect the normal endocrine system through a complex mechanism, binding both to ER and AR; (ii) new EDCs, which are non-ER binders but can in silico bind to AR, are recognized; and (iii) about 20% of compounds in a large data set of environmental chemicals are predicted as new AR antagonists. These should be prioritized for experimental testing of their binding activity with AR.
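The consensus approach mentioned above can be sketched as simple majority voting over the three base classifiers, with sensitivity (true-positive rate) measured on the consensus predictions. The prediction lists below are invented stand-ins, not actual kNN/IB1/ADTree models:

```python
# Majority-vote consensus over three classifiers, then sensitivity
# (TP / all positives) on the fused predictions. Prediction lists are
# invented for illustration.

def consensus(*prediction_lists):
    return [1 if sum(votes) >= 2 else 0 for votes in zip(*prediction_lists)]

def sensitivity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return tp / sum(y_true)

y_true      = [1, 1, 1, 1, 0, 0, 0, 0]
knn_like    = [1, 1, 0, 1, 0, 1, 0, 0]
ib1_like    = [1, 0, 1, 1, 0, 0, 0, 1]
adtree_like = [1, 1, 1, 0, 1, 0, 0, 0]

y_cons = consensus(knn_like, ib1_like, adtree_like)
print(sensitivity(y_true, y_cons))  # 1.0, vs 0.75 for each base classifier
```

In this toy case each base classifier misses one of the four actives, but no active is missed by two classifiers at once, so the consensus recovers them all, which is the effect the abstract reports when sensitivity rises from 57.1% to 76.4%.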

  15. New drug candidates for liposomal delivery identified by computer modeling of liposomes' remote loading and leakage.

    PubMed

    Cern, Ahuva; Marcus, David; Tropsha, Alexander; Barenholz, Yechezkel; Goldblum, Amiram

    2017-04-28

    Remote drug loading into nano-liposomes is in most cases the best method for achieving high concentrations of active pharmaceutical ingredients (API) per nano-liposome that enable therapeutically viable API-loaded nano-liposomes, referred to as nano-drugs. This approach also enables controlled drug release. Recently, we constructed computational models to identify APIs that can achieve the desired high concentrations in nano-liposomes by remote loading. While those previous models included a broad spectrum of experimental conditions and dealt only with loading, here we reduced the scope to the molecular characteristics alone. We model and predict API suitability for nano-liposomal delivery by fixing the main experimental conditions: liposome lipid composition and size to be similar to those of Doxil® liposomes. On that basis, we add a prediction of drug leakage from the nano-liposomes during storage. The latter is critical for having pharmaceutically viable nano-drugs. The "load and leak" models were used to screen two large molecular databases in search of candidate APIs for delivery by nano-liposomes. The distribution of positive instances in both loading and leakage models was similar in the two databases screened. The screening process identified 667 molecules that were positives by both loading and leakage models (i.e., both high-loading and stable). Among them, 318 molecules received a high score in both properties and of these, 67 are FDA-approved drugs. This group of molecules, having diverse pharmacological activities, may be the basis for future liposomal drug development. Copyright © 2017 Elsevier B.V. All rights reserved.
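The "load and leak" screen described above is a conjunction of two model predictions: a molecule advances only if the loading model predicts high remote loading and the leakage model predicts storage stability. The sketch below uses placeholder score functions over invented descriptors and illustrative thresholds, not the paper's actual models:

```python
# Two-filter screen: pass only molecules predicted both high-loading
# and leakage-stable. Models, descriptors, and cutoffs are invented
# placeholders for illustration.

def passes_screen(mol, load_model, leak_model, load_cut=0.5, leak_cut=0.5):
    return load_model(mol) >= load_cut and leak_model(mol) >= leak_cut

# Toy descriptors: (amphipathic weak base?, logD). Real models used many more.
load_model = lambda mol: 0.9 if mol["weak_base"] else 0.2
leak_model = lambda mol: 0.8 if mol["logd"] > 1.0 else 0.3

library = {
    "drug_1": {"weak_base": True,  "logd": 2.1},
    "drug_2": {"weak_base": True,  "logd": 0.4},
    "drug_3": {"weak_base": False, "logd": 3.0},
}
hits = [name for name, mol in library.items()
        if passes_screen(mol, load_model, leak_model)]
print(hits)  # ['drug_1']: loads well AND is predicted stable
```

Requiring both predictions to pass is what shrinks the 667 dual positives down from the much larger sets each model flags alone.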

  16. [Computational chemistry in structure-based drug design].

    PubMed

    Cao, Ran; Li, Wei; Sun, Han-Zi; Zhou, Yu; Huang, Niu

    2013-07-01

    Today, the understanding of the sequence and structure of biologically relevant targets is growing rapidly and researchers from many disciplines, physics and computational science in particular, are making significant contributions to modern biology and drug discovery. However, it remains challenging to rationally design small molecular ligands with desired biological characteristics based on the structural information of the drug targets, which demands more accurate calculation of ligand binding free-energy. With the rapid advances in computer power and extensive efforts in algorithm development, physics-based computational chemistry approaches have played more important roles in structure-based drug design. Here we reviewed the newly developed computational chemistry methods in structure-based drug design as well as the elegant applications, including binding-site druggability assessment, large scale virtual screening of chemical database, and lead compound optimization. Importantly, here we address the current bottlenecks and propose practical solutions.

  17. Identification of antipsychotic drug fluspirilene as a potential p53-MDM2 inhibitor: a combined computational and experimental study

    NASA Astrophysics Data System (ADS)

    Patil, Sachin P.; Pacitti, Michael F.; Gilroy, Kevin S.; Ruggiero, John C.; Griffin, Jonathan D.; Butera, Joseph J.; Notarfrancesco, Joseph M.; Tran, Shawn; Stoddart, John W.

    2015-02-01

    The inhibition of tumor suppressor p53 protein due to its direct interaction with oncogenic murine double minute 2 (MDM2) protein, plays a central role in almost 50 % of all human tumor cells. Therefore, pharmacological inhibition of the p53-binding pocket on MDM2, leading to p53 activation, presents an important therapeutic target against these cancers expressing wild-type p53. In this context, the present study utilized an integrated virtual and experimental screening approach to screen a database of approved drugs for potential p53-MDM2 interaction inhibitors. Specifically, using an ensemble rigid-receptor docking approach with four MDM2 protein crystal structures, six drug molecules were identified as possible p53-MDM2 inhibitors. These drug molecules were then subjected to further molecular modeling investigation through flexible-receptor docking followed by Prime/MM-GBSA binding energy analysis. These studies identified fluspirilene, an approved antipsychotic drug, as a top hit with MDM2 binding mode and energy similar to that of a native MDM2 crystal ligand. The molecular dynamics simulations suggested stable binding of fluspirilene to the p53-binding pocket on MDM2 protein. The experimental testing of fluspirilene showed significant growth inhibition of human colon tumor cells in a p53-dependent manner. Fluspirilene also inhibited growth of several other human tumor cell lines in the NCI60 cell line panel. Taken together, these computational and experimental data suggest a potentially novel role of fluspirilene in inhibiting the p53-MDM2 interaction. It is noteworthy here that fluspirilene has a long history of safe human use, thus presenting immediate clinical potential as a cancer therapeutic. Furthermore, fluspirilene could also serve as a structurally-novel lead molecule for the development of more potent, small-molecule p53-MDM2 inhibitors against several types of cancer. 
Importantly, the combined computational and experimental screening protocol presented in this study may also prove useful for screening other commercially-available compound databases for identification of novel, small molecule p53-MDM2 inhibitors.

  18. ACToR Chemical Structure processing using Open Source ...

    EPA Pesticide Factsheets

    ACToR (Aggregated Computational Toxicology Resource) is a centralized database repository developed by the National Center for Computational Toxicology (NCCT) at the U.S. Environmental Protection Agency (EPA). Free and open source tools were used to compile toxicity data from over 1,950 public sources. ACToR contains chemical structure information and toxicological data for over 558,000 unique chemicals. The database primarily includes data from NCCT research programs, in vivo toxicity data from ToxRef, human exposure data from ExpoCast, high-throughput screening data from ToxCast and high quality chemical structure information from the EPA DSSTox program. The DSSTox database is a chemical structure inventory for the NCCT programs and currently has about 16,000 unique structures. Included are also data from PubChem, ChemSpider, USDA, FDA, NIH and several other public data sources. ACToR has been a resource to various international and national research groups. Most of our recent efforts on ACToR are focused on improving the structural identifiers and Physico-Chemical properties of the chemicals in the database. Organizing this huge collection of data and improving the chemical structure quality of the database has posed some major challenges. Workflows have been developed to process structures, calculate chemical properties and identify relationships between CAS numbers. The Structure processing workflow integrates web services (PubChem and NIH NCI Cactus) to d

  19. The application of knowledge discovery in databases to post-marketing drug safety: example of the WHO database.

    PubMed

    Bate, A; Lindquist, M; Edwards, I R

    2008-04-01

    After market launch, new information on adverse effects of medicinal products is almost exclusively first highlighted by spontaneous reporting. As data sets of spontaneous reports have become larger, and computational capability has increased, quantitative methods have been increasingly applied to such data sets. The screening of such data sets is an application of knowledge discovery in databases (KDD). Effective KDD is an iterative and interactive process made up of the following steps: developing an understanding of an application domain, creating a target data set, data cleaning and pre-processing, data reduction and projection, choosing the data mining task, choosing the data mining algorithm, data mining, interpretation of results and consolidating and using acquired knowledge. The process of KDD as it applies to the analysis of spontaneous reports can be exemplified by its routine use on the 3.5 million suspected adverse drug reaction (ADR) reports in the WHO ADR database. Examples of new adverse effects first highlighted by the KDD process on WHO data include topiramate glaucoma, infliximab vasculitis and the association of selective serotonin reuptake inhibitors (SSRIs) and neonatal convulsions. The KDD process has already improved our ability to highlight previously unsuspected ADRs for clinical review in spontaneous reporting, and we anticipate that such techniques will be increasingly used in the successful screening of other healthcare data sets such as patient records in the future.
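One widely used screening statistic for spontaneous-report data sets is the proportional reporting ratio (PRR); the WHO programme itself uses a Bayesian disproportionality measure (the information component), so the PRR below is a simpler stand-in to show the shape of the computation, with invented counts:

```python
# Proportional reporting ratio from a 2x2 contingency table of
# spontaneous reports. Counts below are invented for illustration;
# this is a stand-in for the WHO programme's Bayesian measure.

def prr(a, b, c, d):
    """PRR from a 2x2 table:
    a = reports with drug AND reaction    b = drug, other reactions
    c = other drugs, same reaction        d = other drugs, other reactions
    """
    return (a / (a + b)) / (c / (c + d))

# Invented counts: 40 reports of the reaction among 1,000 for the drug,
# versus 200 among 100,000 for all other drugs.
signal = prr(a=40, b=960, c=200, d=99800)
print(round(signal, 1))  # 20.0: the reaction is reported ~20x more often
```

A PRR well above 1 flags a drug-reaction pair for the clinical review step of the KDD process; it is a hypothesis generator, not proof of causation.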

  20. Expression and Purification of a Novel Computationally Designed Antigen for Simultaneous Detection of HTLV-1 and HBV Antibodies.

    PubMed

    Heydari Zarnagh, Hafez; Ravanshad, Mehrdad; Pourfatollah, Ali Akbar; Rasaee, Mohammad Javad

    2015-04-01

    Computational tools are reliable alternatives to laborious work in chimeric protein design. In this study, a chimeric antigen was designed using computational techniques for simultaneous detection of anti-HTLV-I and anti-HBV in infected sera. Databases were searched for amino acid sequences of HBV/HTLV-I diagnostic antigens. The immunodominant fragments were selected based on propensity scales. The diagnostic antigen was designed using these fragments. Secondary and tertiary structures were predicted and the B-cell epitopes were mapped on the surface of the built model. The synthetic DNA encoding the antigen was sub-cloned into the pGS21a expression vector. SDS-PAGE analysis showed that the glutathione-fused antigen was highly expressed in E. coli BL21 (DE3) cells. The recombinant antigen was purified by nickel affinity chromatography. ELISA results showed that the soluble antigen could specifically react with HTLV-I- and HBV-infected sera. This antigen could be used as a suitable reagent for antibody-antigen-based screening tests, helping clinicians perform quick and precise screening for HBV and HTLV-I infections.

  1. An original imputation technique of missing data for assessing exposure of newborns to perchlorate in drinking water.

    PubMed

    Caron, Alexandre; Clement, Guillaume; Heyman, Christophe; Aernout, Eva; Chazard, Emmanuel; Le Tertre, Alain

    2015-01-01

    Incompleteness of epidemiological databases is a major drawback when it comes to analyzing data. We conceived an epidemiological study to assess the association between newborn thyroid function and the exposure to perchlorates found in the tap water of the mother's home. Perchlorate exposure was known for only 9% of newborns. The aim of our study was to design, test and evaluate an original method for imputing the perchlorate exposure of newborns based on their maternity of birth. In a first database, an exhaustive collection of newborn thyroid function measurements from a systematic neonatal screening was collected. In this database, the municipality of residence of the newborn's mother was only available for 2012. Between 2004 and 2011, the closest data available was the municipality of the maternity of birth. Exposure was assessed using a second database which contained the perchlorate levels for each municipality. We computed the catchment area of every maternity ward based on the French nationwide exhaustive database of inpatient stays. The municipality, and consequently the perchlorate exposure, was imputed by a weighted draw in the catchment area. Missing values for the remaining covariates were imputed by chained equations. A linear mixture model was computed on each imputed dataset. We compared odds ratios (ORs) and 95% confidence intervals (95% CI) estimated on real versus imputed 2012 data. The same model was then carried out for the whole imputed database. The ORs estimated on 36,695 observations by our multiple imputation method are comparable to those from the real 2012 data. On the 394,979 observations of the whole database, the ORs remain stable but the 95% CIs tighten considerably. The model estimates computed on imputed data are similar to those calculated on real data. The main advantage of multiple imputation is to provide unbiased estimates of the ORs while maintaining their variances. Thus, our method will be used to increase the statistical power of future studies by including all 394,979 newborns.
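The imputation step above, drawing a municipality from the maternity ward's catchment area, can be sketched as a weighted random draw. The catchment shares and exposure values below are invented for illustration:

```python
# Weighted-draw imputation of a missing municipality from a maternity
# ward's catchment area. Shares and perchlorate levels are invented.
import random

def impute_municipality(catchment, rng):
    """Draw a municipality from a {name: share} catchment area."""
    names = list(catchment)
    return rng.choices(names, weights=[catchment[n] for n in names], k=1)[0]

catchment = {"town_A": 0.6, "town_B": 0.3, "town_C": 0.1}
perchlorate_ug_per_l = {"town_A": 4.0, "town_B": 15.0, "town_C": 1.0}

rng = random.Random(0)  # seeded so repeated imputations are reproducible
draws = [impute_municipality(catchment, rng) for _ in range(10000)]
exposure = [perchlorate_ug_per_l[m] for m in draws]
print(sum(exposure) / len(exposure))  # close to the catchment-weighted mean, 7.0
```

Repeating the draw across multiple imputed datasets, as in the study, propagates the uncertainty about the true municipality into the variance of the final odds ratios.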

  2. A PATO-compliant zebrafish screening database (MODB): management of morpholino knockdown screen information.

    PubMed

    Knowlton, Michelle N; Li, Tongbin; Ren, Yongliang; Bill, Brent R; Ellis, Lynda Bm; Ekker, Stephen C

    2008-01-07

    The zebrafish is a powerful model vertebrate amenable to high throughput in vivo genetic analyses. Examples include reverse genetic screens using morpholino knockdown, expression-based screening using enhancer trapping and forward genetic screening using transposon insertional mutagenesis. We have created a database to facilitate web-based distribution of data from such genetic studies. The MOrpholino DataBase is a MySQL relational database with an online, PHP interface. Multiple quality control levels allow differential access to data in raw and finished formats. MODBv1 includes sequence information relating to almost 800 morpholinos and their targets and phenotypic data regarding the dose effect of each morpholino (mortality, toxicity and defects). To improve the searchability of this database, we have incorporated a fixed-vocabulary defect ontology that allows for the organization of morpholino effects based on anatomical structure affected and defect produced. This also allows comparison between species utilizing Phenotypic Attribute Trait Ontology (PATO) designated terminology. MODB is also cross-linked with ZFIN, allowing full searches between the two databases. MODB offers users the ability to retrieve morpholino data by sequence of morpholino or target, name of target, anatomical structure affected and defect produced. MODB data can be used for functional genomic analysis of morpholino design to maximize efficacy and minimize toxicity. MODB also serves as a template for future sequence-based functional genetic screen databases, and it is currently being used as a model for the creation of a mutagenic insertional transposon database.

  3. AUTOCASK (AUTOmatic Generation of 3-D CASK models). A microcomputer based system for shipping cask design review analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard, M.A.; Sommer, S.C.

    1995-04-01

    AUTOCASK (AUTOmatic Generation of 3-D CASK models) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for the structural analysis of shipping casks for radioactive material. Model specification is performed on the microcomputer, and the analyses are performed on an engineering workstation or mainframe computer. AUTOCASK is based on 80386/80486 compatible microcomputers. The system is composed of a series of menus, input programs, display programs, a mesh generation program, and archive programs. All data is entered through fill-in-the-blank input screens that contain descriptive data requests.

  4. Open PHACTS computational protocols for in silico target validation of cellular phenotypic screens: knowing the knowns

    PubMed Central

    Zdrazil, B.; Neefs, J.-M.; Van Vlijmen, H.; Herhaus, C.; Caracoti, A.; Brea, J.; Roibás, B.; Loza, M. I.; Queralt-Rosinach, N.; Furlong, L. I.; Gaulton, A.; Bartek, L.; Senger, S.; Chichester, C.; Engkvist, O.; Evelo, C. T.; Franklin, N. I.; Marren, D.; Ecker, G. F.

    2016-01-01

    Phenotypic screening is in a renaissance phase and is expected by many academic and industry leaders to accelerate the discovery of new drugs for new biology. Given that phenotypic screening is by definition target agnostic, the emphasis of in silico and in vitro follow-up work is on the exploration of possible molecular mechanisms and efficacy targets underlying the biological processes interrogated by the phenotypic screening experiments. Herein, we present six exemplar computational protocols for the interpretation of cellular phenotypic screens based on the integration of compound, target, pathway, and disease data established by the IMI Open PHACTS project. The protocols annotate phenotypic hit lists and allow follow-up experiments and mechanistic conclusions. The annotations included are from ChEMBL, ChEBI, GO, WikiPathways and DisGeNET. Also provided are protocols which select selective compounds from the IUPHAR/BPS Guide to PHARMACOLOGY interaction file to probe potential targets, and a correlation robot which systematically aims to identify an overlap of active compounds in both the phenotypic assay and any kinase assay. The protocols are applied to a phenotypic pre-lamin A/C splicing assay selected from the ChEMBL database to illustrate the process. The computational protocols make use of the Open PHACTS API and data and are built within the Pipeline Pilot and KNIME workflow tools. PMID:27774140

  5. The Protein Disease Database of human body fluids: II. Computer methods and data issues.

    PubMed

    Lemkin, P F; Orr, G A; Goldstein, M P; Creed, G J; Myrick, J E; Merril, C R

    1995-01-01

    The Protein Disease Database (PDD) is a relational database of proteins and diseases. With this database it is possible to screen for quantitative protein abnormalities associated with disease states. These quantitative relationships use data drawn from the peer-reviewed biomedical literature. Assays may also include those observed in high-resolution electrophoretic gels that offer the potential to quantitate many proteins in a single test as well as data gathered by enzymatic or immunologic assays. We are using the Internet World Wide Web (WWW) and the Web browser paradigm as an access method for wide distribution and querying of the Protein Disease Database. The WWW hypertext transfer protocol and its Common Gateway Interface make it possible to build powerful graphical user interfaces that can support easy-to-use data retrieval using query specification forms or images. The details of these interactions are totally transparent to the users of these forms. Using a client-server SQL relational database, user query access, initial data entry and database maintenance are all performed over the Internet with a Web browser. We discuss the underlying design issues, mapping mechanisms and assumptions that we used in constructing the system, data entry, access to the database server, security, and synthesis of derived two-dimensional gel image maps and hypertext documents resulting from SQL database searches.

  6. Computer-aided diagnosis of breast cancer via Gabor wavelet bank and binary-class SVM in mammographic images

    NASA Astrophysics Data System (ADS)

    Torrents-Barrena, Jordina; Puig, Domenec; Melendez, Jaime; Valls, Aida

    2016-03-01

    Breast cancer is one of the most dangerous diseases affecting women in their 40s worldwide. It is estimated that one in eight women will develop a malignant carcinoma during her lifetime. In addition, failure to undergo regular screening is an important contributor to mortality. However, computer-aided diagnosis systems attempt to enhance the quality of mammograms as well as the detection of early signs related to the disease. In this paper we propose a bank of Gabor filters to calculate the mean, standard deviation, skewness and kurtosis features over evaluation windows of four sizes. An active strategy is then used to select the most relevant pixels. Finally, a supervised classification stage using two-class support vector machines is applied, with accurate estimation of the kernel parameters. To demonstrate our methodology for mammographic image analysis, two main experiments are performed: abnormal/normal breast tissue classification and detection of the different breast cancer types. Moreover, the public screen-film mini-MIAS database is compared with a digitised breast cancer database to evaluate the method's robustness. The area under the receiver operating characteristic curve is used to measure the performance of the method. Furthermore, both the confusion matrix and accuracy are calculated to assess the results of the proposed algorithm.
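The four per-window features named above (mean, standard deviation, skewness, kurtosis) can be computed from any filter-response window. In this sketch the Gabor filtering itself is omitted and a short list of response values stands in for a window of filtered pixels:

```python
# Per-window statistical features as used in texture-based CAD systems:
# mean, standard deviation, skewness, excess kurtosis. The input list
# is a toy stand-in for a window of Gabor-filtered pixel responses.
import math

def window_features(values):
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = math.sqrt(var)
    if std == 0:
        return mean, 0.0, 0.0, 0.0
    skew = sum(((v - mean) / std) ** 3 for v in values) / n
    kurt = sum(((v - mean) / std) ** 4 for v in values) / n - 3.0  # excess kurtosis
    return mean, std, skew, kurt

window = [0.1, 0.2, 0.2, 0.3, 0.9]  # toy response window with one bright outlier
mean, std, skew, kurt = window_features(window)
print(mean, round(std, 3), round(skew, 3), round(kurt, 3))
```

Computing these four statistics at four window sizes, per filter in the Gabor bank, yields the feature vector that the SVM stage then classifies.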

  7. Evaluation of a New Ensemble Learning Framework for Mass Classification in Mammograms.

    PubMed

    Rahmani Seryasat, Omid; Haddadnia, Javad

    2018-06-01

    Mammography is the most common screening method for diagnosis of breast cancer. In this study, a computer-aided system for classifying masses as benign or malignant was implemented for mammogram images. The system first reduces the noise in the mammograms using an effective noise-removal technique. After noise removal, the mass in the region of interest is segmented using a deformable model. A number of features are then extracted from the segmented mass, including shape and border features, tissue properties, and the fractal dimension. From this large feature set, a proper subset is chosen using a new method based on a genetic algorithm. After determining the proper features, a classifier is trained. To classify the samples, a new architecture for combining classifiers is proposed, in which easy and difficult samples are identified and trained using different classifiers. Finally, the proposed mass diagnosis system was tested on the mini-Mammographic Image Analysis Society (mini-MIAS) and Digital Database for Screening Mammography (DDSM) databases. The obtained results indicate that the proposed system can compete with the state-of-the-art methods in terms of accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.
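    The genetic-algorithm feature-selection step can be sketched as follows; the fitness function here is a toy stand-in (a real CAD system would score a trained classifier on validation data), and every parameter is illustrative:

```python
import random

random.seed(0)

def fitness(mask, relevance):
    """Toy fitness: reward relevant features, penalize subset size."""
    gain = sum(r for bit, r in zip(mask, relevance) if bit)
    return gain - 0.2 * sum(mask)

def ga_select(n_features, relevance, pop_size=20, generations=40):
    """Evolve bitmasks over features; 1 = feature kept."""
    pop = [[random.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, relevance), reverse=True)
        survivors = pop[: pop_size // 2]        # keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_features)   # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(n_features)        # point mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda m: fitness(m, relevance))

# Features 0-2 are informative (high relevance); 3-7 are noise.
relevance = [1.0, 0.9, 0.8, 0.05, 0.05, 0.05, 0.05, 0.05]
best = ga_select(8, relevance)
print(best)
```

    The size penalty plays the role the paper assigns to selecting a "proper subset": it pushes the search away from keeping every feature.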

  8. DG-AMMOS: a new tool to generate 3d conformation of small molecules using distance geometry and automated molecular mechanics optimization for in silico screening.

    PubMed

    Lagorce, David; Pencheva, Tania; Villoutreix, Bruno O; Miteva, Maria A

    2009-11-13

    Discovery of new bioactive molecules that could enter drug discovery programs or serve as chemical probes is a very complex and costly endeavor. Structure-based and ligand-based in silico screening approaches are nowadays extensively used to complement experimental screening, to increase the effectiveness of the process and to facilitate the screening of thousands or millions of small molecules against a biomolecular target. Both in silico screening methods require a suitable chemical compound collection as input, and most often the 3D structures of the small molecules have to be generated, since compounds are usually delivered in 1D SMILES or CANSMILES, or in 2D SDF formats. Here, we describe the new open-source program DG-AMMOS, which generates 3D conformations of small molecules using Distance Geometry and minimizes their energy via Automated Molecular Mechanics Optimization. The program is validated on the Astex dataset, the ChemBridge Diversity database and a number of small molecules with known crystal structures extracted from the Cambridge Structural Database. A comparison with the free program Balloon and the well-known commercial program Omega, which also generate 3D structures of small molecules, is carried out. The results show that the new free program DG-AMMOS is a very efficient 3D structure generation engine. DG-AMMOS provides fast, automated and reliable access to 3D conformations of small molecules and facilitates the preparation of a compound collection prior to high-throughput virtual screening computations. The validation of DG-AMMOS on several different datasets shows that the generated structures are generally of equal quality to, and sometimes better than, structures obtained by the other tested methods.

  9. Performance Studies on Distributed Virtual Screening

    PubMed Central

    Krüger, Jens; de la Garza, Luis; Kohlbacher, Oliver; Nagel, Wolfgang E.

    2014-01-01

    Virtual high-throughput screening (vHTS) is an invaluable method in modern drug discovery. It permits screening large datasets or databases of chemical structures for structures that may bind to a drug target. Virtual screening is typically performed by docking code, which often runs sequentially. Processing of huge vHTS datasets can be parallelized by chunking the data, because individual docking runs are independent of each other. The goal of this work is to find an optimal splitting that maximizes the speedup while considering overhead and available cores on Distributed Computing Infrastructures (DCIs). We have conducted thorough performance studies accounting not only for the runtime of the docking itself, but also for structure preparation. Performance studies were conducted via the workflow-enabled science gateway MoSGrid (Molecular Simulation Grid). As input we used benchmark datasets for protein kinases. Our performance studies show that docking workflows can be made to scale almost linearly up to 500 concurrent processes distributed even over large DCIs, thus accelerating vHTS campaigns significantly. PMID:25032219
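    The chunking strategy exploits the independence of docking runs. A minimal sketch, with a dummy scoring function standing in for the docking code and threads standing in for DCI jobs:

```python
from concurrent.futures import ThreadPoolExecutor

def dock(ligand):
    """Placeholder for one independent docking run; a real campaign
    would invoke a docking code here. Returns (ligand, score)."""
    return ligand, -0.1 * len(ligand)   # dummy score

def chunked(seq, n_chunks):
    """Split a dataset into n_chunks nearly equal parts."""
    k, r = divmod(len(seq), n_chunks)
    out, start = [], 0
    for i in range(n_chunks):
        end = start + k + (1 if i < r else 0)
        out.append(seq[start:end])
        start = end
    return [c for c in out if c]

ligands = [f"lig{i:04d}" for i in range(1000)]
chunks = chunked(ligands, 8)   # e.g. one chunk per available worker

def dock_chunk(chunk):
    return [dock(lig) for lig in chunk]

# Each chunk is an independent job; threads stand in for grid jobs.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = [r for part in pool.map(dock_chunk, chunks) for r in part]

print(len(results))  # 1000
```

    The paper's question is then empirical: how large can the chunk count grow before per-job overhead (submission, structure preparation, staging) outweighs the added concurrency.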

  10. Effect of intervention programs in schools to reduce screen time: a meta-analysis.

    PubMed

    Friedrich, Roberta Roggia; Polet, Jéssica Pinto; Schuch, Ilaine; Wagner, Mário Bernardes

    2014-01-01

    To evaluate the effects of intervention program strategies on the time schoolchildren spend watching television, playing videogames, and using the computer, a search for randomized controlled trials was performed in the following electronic databases: PubMed, Lilacs, Embase, Scopus, Web of Science, and Cochrane Library, using the keywords randomized controlled trial, intervention studies, sedentary lifestyle, screen time, and school. A summary measure based on the standardized mean difference was used with a 95% confidence interval. A total of 1,552 studies were identified, of which 16 were included in the meta-analysis. The interventions in the randomized controlled trials (n=8,785) showed a significant effect in reducing screen time, with a standardized mean difference (random effects) of -0.25 (-0.37, -0.13), p<0.01. The interventions demonstrated positive effects in decreasing screen time among schoolchildren. Copyright © 2014 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
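    Random-effects pooling of standardized mean differences can be illustrated with a DerSimonian-Laird estimator; the per-trial values below are invented for the example, not the studies in this meta-analysis:

```python
import math

def random_effects_smd(effects, variances):
    """DerSimonian-Laird random-effects pooling of standardized mean
    differences. Returns (pooled SMD, 95% CI low, 95% CI high)."""
    w = [1.0 / v for v in variances]
    y_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Illustrative per-trial SMDs (negative = screen time reduced).
smd, lo, hi = random_effects_smd(
    effects=[-0.30, -0.10, -0.45, -0.20],
    variances=[0.02, 0.03, 0.05, 0.01],
)
print(round(smd, 2), round(lo, 2), round(hi, 2))
```

    A confidence interval that excludes zero, as in the reported -0.25 (-0.37, -0.13), is what justifies calling the pooled effect significant.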

  11. Application of the 4D Fingerprint Method with a Robust Scoring Function for Scaffold-Hopping and Drug Repurposing Strategies

    PubMed Central

    2015-01-01

    Two factors contribute to the inefficiency associated with screening pharmaceutical library collections as a means of identifying new drugs: [1] the limited success of virtual screening (VS) methods in identifying new scaffolds; [2] the limited accuracy of computational methods in predicting off-target effects. We recently introduced a 3D shape-based similarity algorithm of the SABRE program, which encodes a consensus molecular shape pattern of a set of active ligands into a 4D fingerprint descriptor. Here, we report a mathematical model for shape similarity comparisons and ligand database filtering using this 4D fingerprint method and benchmarked the scoring function HWK (Hamza–Wei–Korotkov), using the 81 targets of the DEKOIS database. Subsequently, we applied our combined 4D fingerprint and HWK scoring function VS approach in scaffold-hopping and drug repurposing using the National Cancer Institute (NCI) and Food and Drug Administration (FDA) databases, and we identified new inhibitors with different scaffolds of MycP1 protease from the mycobacterial ESX-1 secretion system. Experimental evaluation of nine compounds from the NCI database and three from the FDA database displayed IC50 values ranging from 70 to 100 μM against MycP1 and possessed high structural diversity, which provides departure points for further structure–activity relationship (SAR) optimization. In addition, this study demonstrates that the combination of our 4D fingerprint algorithm and the HWK scoring function may provide a means for identifying repurposed drugs for the treatment of infectious diseases and may be used in the drug-target profile strategy. PMID:25229183

  12. Application of the 4D fingerprint method with a robust scoring function for scaffold-hopping and drug repurposing strategies.

    PubMed

    Hamza, Adel; Wagner, Jonathan M; Wei, Ning-Ning; Kwiatkowski, Stefan; Zhan, Chang-Guo; Watt, David S; Korotkov, Konstantin V

    2014-10-27

    Two factors contribute to the inefficiency associated with screening pharmaceutical library collections as a means of identifying new drugs: [1] the limited success of virtual screening (VS) methods in identifying new scaffolds; [2] the limited accuracy of computational methods in predicting off-target effects. We recently introduced a 3D shape-based similarity algorithm of the SABRE program, which encodes a consensus molecular shape pattern of a set of active ligands into a 4D fingerprint descriptor. Here, we report a mathematical model for shape similarity comparisons and ligand database filtering using this 4D fingerprint method and benchmarked the scoring function HWK (Hamza-Wei-Korotkov), using the 81 targets of the DEKOIS database. Subsequently, we applied our combined 4D fingerprint and HWK scoring function VS approach in scaffold-hopping and drug repurposing using the National Cancer Institute (NCI) and Food and Drug Administration (FDA) databases, and we identified new inhibitors with different scaffolds of MycP1 protease from the mycobacterial ESX-1 secretion system. Experimental evaluation of nine compounds from the NCI database and three from the FDA database displayed IC50 values ranging from 70 to 100 μM against MycP1 and possessed high structural diversity, which provides departure points for further structure-activity relationship (SAR) optimization. In addition, this study demonstrates that the combination of our 4D fingerprint algorithm and the HWK scoring function may provide a means for identifying repurposed drugs for the treatment of infectious diseases and may be used in the drug-target profile strategy.

  13. Low-dose chest computed tomography for lung cancer screening among Hodgkin lymphoma survivors: a cost-effectiveness analysis.

    PubMed

    Wattson, Daniel A; Hunink, M G Myriam; DiPiro, Pamela J; Das, Prajnan; Hodgson, David C; Mauch, Peter M; Ng, Andrea K

    2014-10-01

    Hodgkin lymphoma (HL) survivors face an increased risk of treatment-related lung cancer. Screening with low-dose computed tomography (LDCT) may allow detection of early stage, resectable cancers. We developed a Markov decision-analytic and cost-effectiveness model to estimate the merits of annual LDCT screening among HL survivors. Population databases and HL-specific literature informed key model parameters, including lung cancer rates and stage distribution, cause-specific survival estimates, and utilities. Relative risks accounted for radiation therapy (RT) technique, smoking status (>10 pack-years or current smokers vs not), age at HL diagnosis, time from HL treatment, and excess radiation from LDCTs. LDCT assumptions, including expected stage-shift, false-positive rates, and likely additional workup were derived from the National Lung Screening Trial and preliminary results from an internal phase 2 protocol that performed annual LDCTs in 53 HL survivors. We assumed a 3% discount rate and a willingness-to-pay (WTP) threshold of $50,000 per quality-adjusted life year (QALY). Annual LDCT screening was cost effective for all smokers. A male smoker treated with mantle RT at age 25 achieved maximum QALYs by initiating screening 12 years post-HL, with a life expectancy benefit of 2.1 months and an incremental cost of $34,841/QALY. Among nonsmokers, annual screening produced a QALY benefit in some cases, but the incremental cost was not below the WTP threshold for any patient subsets. As age at HL diagnosis increased, earlier initiation of screening improved outcomes. Sensitivity analyses revealed that the model was most sensitive to the lung cancer incidence and mortality rates and expected stage-shift from screening. HL survivors are an important high-risk population that may benefit from screening, especially those treated in the past with large radiation fields including mantle or involved-field RT. 
Screening may be cost effective for all smokers but possibly not for nonsmokers despite a small life expectancy benefit. Copyright © 2014 Elsevier Inc. All rights reserved.
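    The decision rule in such an analysis reduces to comparing an incremental cost-effectiveness ratio (ICER) against the willingness-to-pay threshold. A minimal sketch with invented numbers, not the paper's model inputs:

```python
def icer(cost_new, qaly_new, cost_base, qaly_base):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_base) / (qaly_new - qaly_base)

# Illustrative values only: screening vs. no screening.
ratio = icer(cost_new=12000.0, qaly_new=14.10,
             cost_base=8000.0, qaly_base=13.95)
wtp = 50000.0   # willingness-to-pay threshold, $/QALY
print(ratio, ratio <= wtp)   # cost effective if the ratio is below WTP
```

    The paper's Markov model produces the cost and QALY inputs by simulating lung-cancer incidence, stage shift, and survival over the survivor's remaining lifetime.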

  14. Modeling and Prediction of Solvent Effect on Human Skin Permeability using Support Vector Regression and Random Forest.

    PubMed

    Baba, Hiromi; Takahara, Jun-ichi; Yamashita, Fumiyoshi; Hashida, Mitsuru

    2015-11-01

    The solvent effect on skin permeability is important for assessing the effectiveness and toxicological risk of new dermatological formulations in pharmaceuticals and cosmetics development. The solvent effect occurs by diverse mechanisms, which could be elucidated by efficient and reliable prediction models. However, such prediction models have been hampered by the small variety of permeants and mixture components archived in databases and by low predictive performance. Here, we propose a solution to both problems. We first compiled a novel large database of 412 samples from 261 structurally diverse permeants and 31 solvents reported in the literature. The data were carefully screened to ensure their collection under consistent experimental conditions. To construct a high-performance predictive model, we then applied support vector regression (SVR) and random forest (RF) with greedy stepwise descriptor selection to our database. The models were internally and externally validated. The SVR achieved higher performance statistics than RF. The (externally validated) determination coefficient, root mean square error, and mean absolute error of SVR were 0.899, 0.351, and 0.268, respectively. Moreover, because all descriptors are fully computational, our method can predict as-yet unsynthesized compounds. Our high-performance prediction model offers an attractive alternative to permeability experiments for pharmaceutical and cosmetic candidate screening and optimizing skin-permeable topical formulations.

  15. Computational prediction of new auxetic materials.

    PubMed

    Dagdelen, John; Montoya, Joseph; de Jong, Maarten; Persson, Kristin

    2017-08-22

    Auxetics comprise a rare family of materials that manifest negative Poisson's ratio, which causes an expansion instead of contraction under tension. Most known homogeneously auxetic materials are porous foams or artificial macrostructures, and there are few examples of inorganic materials that exhibit this behavior as polycrystalline solids. It is now possible to accelerate the discovery of materials with target properties, such as auxetics, using high-throughput computations, open databases, and efficient search algorithms. Candidates exhibiting features correlating with auxetic behavior were chosen from the set of more than 67,000 materials in the Materials Project database. Poisson's ratios were derived from the calculated elastic tensor of each material in this reduced set of compounds. We report that this strategy results in the prediction of three previously unidentified homogeneously auxetic materials as well as a number of compounds with a near-zero homogeneous Poisson's ratio, which are here denoted "anepirretic materials". There are very few inorganic materials with an auxetic homogeneous Poisson's ratio in polycrystalline form. Here the authors develop an approach to screening materials databases for target properties such as negative Poisson's ratio, using stability and structural motifs to predict new instances of homogeneous auxetic behavior as well as a number of materials with near-zero Poisson's ratio.
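    For an isotropic polycrystalline aggregate, the homogeneous Poisson's ratio follows from the bulk modulus K and shear modulus G as ν = (3K − 2G)/(6K + 2G). A minimal screening sketch with illustrative moduli (not Materials Project data):

```python
def poisson_ratio(bulk_k, shear_g):
    """Homogeneous (isotropic-aggregate) Poisson's ratio from the bulk
    modulus K and shear modulus G: nu = (3K - 2G) / (6K + 2G)."""
    return (3 * bulk_k - 2 * shear_g) / (6 * bulk_k + 2 * shear_g)

def classify(nu, tol=0.05):
    """Bucket a material by its homogeneous Poisson's ratio."""
    if nu < -tol:
        return "auxetic"
    if nu <= tol:
        return "near-zero (anepirretic)"
    return "conventional"

# Illustrative (K, G) pairs in GPa; a real screen would pull K and G
# from each entry's calculated elastic tensor.
for name, k, g in [("A", 100.0, 40.0), ("B", 10.0, 30.0), ("C", 20.0, 28.0)]:
    nu = poisson_ratio(k, g)
    print(name, round(nu, 3), classify(nu))
```

    A high shear modulus relative to the bulk modulus (G > 1.5 K) is exactly what drives ν negative, which is why stability and structural motifs that favor that ratio make useful pre-filters.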

  16. Getting the Most out of PubChem for Virtual Screening

    PubMed Central

    Kim, Sunghwan

    2016-01-01

    Introduction With the emergence of the “big data” era, the biomedical research community has great interest in exploiting publicly available chemical information for drug discovery. PubChem is an example of public databases that provide a large amount of chemical information free of charge. Areas covered This article provides an overview of how PubChem’s data, tools, and services can be used for virtual screening and reviews recent publications that discuss important aspects of exploiting PubChem for drug discovery. Expert opinion PubChem offers comprehensive chemical information useful for drug discovery. It also provides multiple programmatic access routes, which are essential to build automated virtual screening pipelines that exploit PubChem data. In addition, PubChemRDF allows users to download PubChem data and load them into a local computing facility, facilitating data integration between PubChem and other resources. PubChem resources have been used in many studies for developing bioactivity and toxicity prediction models, discovering polypharmacologic (multi-target) ligands, and identifying new macromolecule targets of compounds (for drug-repurposing or off-target side effect prediction). These studies demonstrate the usefulness of PubChem as a key resource for computer-aided drug discovery and related area. PMID:27454129
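    One of PubChem's programmatic access routes is the PUG REST interface. A minimal sketch of building a compound-property request (the URL pattern follows PubChem's documented PUG REST layout; the fetch itself is omitted to keep the example offline):

```python
from urllib.parse import quote

PUG_BASE = "https://pubchem.ncbi.nlm.nih.gov/rest/pug"

def property_url(name, properties):
    """Build a PUG REST request for compound properties by name."""
    return (f"{PUG_BASE}/compound/name/{quote(name)}"
            f"/property/{','.join(properties)}/JSON")

url = property_url("aspirin", ["MolecularFormula", "CanonicalSMILES", "XLogP"])
print(url)
# A pipeline would fetch this with urllib.request.urlopen and parse
# the returned JSON; batching by CID list is preferred for large screens.
```

    Automated pipelines built on such requests are what the "programmatic access routes" in the abstract refer to.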

  17. Bi-model processing for early detection of breast tumor in CAD system

    NASA Astrophysics Data System (ADS)

    Mughal, Bushra; Sharif, Muhammad; Muhammad, Nazeer

    2017-06-01

    Early screening of suspicious masses in mammograms may reduce the mortality rate among women. This rate can be further reduced by developing computer-aided diagnosis systems that decrease false findings in medical informatics. This paper presents a method for early tumor detection in digitized mammograms. To improve performance, a novel bi-model processing algorithm is introduced. It divides the region of interest into two parts: the pre-segmented region (breast parenchyma) and the post-segmented region (suspicious region). The system follows a preprocessing scheme of contrast enhancement that can be utilized to segment and extract the desired features of the given mammogram. In the next phase, a hybrid feature block is presented to show the effective performance of computer-aided diagnosis. To assess the effectiveness of the proposed method, a database provided by the Mammographic Image Analysis Society is tested. Our experimental outcomes on this database exhibit the usefulness and robustness of the proposed method.

  18. In silico discovery and in vitro activity of inhibitors against Mycobacterium tuberculosis 7,8-diaminopelargonic acid synthase (Mtb BioA).

    PubMed

    Billones, Junie B; Carrillo, Maria Constancia O; Organo, Voltaire G; Sy, Jamie Bernadette A; Clavio, Nina Abigail B; Macalino, Stephani Joy Y; Emnacen, Inno A; Lee, Alexandra P; Ko, Paul Kenny L; Concepcion, Gisela P

    2017-01-01

    Computer-aided drug discovery and development approaches such as virtual screening, molecular docking, and in silico drug property calculations have been utilized in this effort to discover new lead compounds against tuberculosis. The enzyme 7,8-diaminopelargonic acid aminotransferase (BioA) in Mycobacterium tuberculosis (Mtb), primarily involved in the lipid biosynthesis pathway, was chosen as the drug target due to the fact that humans are not capable of synthesizing biotin endogenously. The computational screening of 4.5 million compounds from the Enamine REAL database has ultimately yielded 45 high-scoring, high-affinity compounds with desirable in silico absorption, distribution, metabolism, excretion, and toxicity properties. Seventeen of the 45 compounds were subjected to bioactivity validation using the resazurin microtiter assay. Among the 4 actives, compound 7 ((Z)-N-(2-isopropoxyphenyl)-2-oxo-2-((3-(trifluoromethyl)cyclohexyl)amino)acetimidic acid) displayed inhibitory activity up to 83% at 10 μg/mL concentration against the growth of the Mtb H37Ra strain.

  19. In silico discovery and in vitro activity of inhibitors against Mycobacterium tuberculosis 7,8-diaminopelargonic acid synthase (Mtb BioA)

    PubMed Central

    Billones, Junie B; Carrillo, Maria Constancia O; Organo, Voltaire G; Sy, Jamie Bernadette A; Clavio, Nina Abigail B; Macalino, Stephani Joy Y; Emnacen, Inno A; Lee, Alexandra P; Ko, Paul Kenny L; Concepcion, Gisela P

    2017-01-01

    Computer-aided drug discovery and development approaches such as virtual screening, molecular docking, and in silico drug property calculations have been utilized in this effort to discover new lead compounds against tuberculosis. The enzyme 7,8-diaminopelargonic acid aminotransferase (BioA) in Mycobacterium tuberculosis (Mtb), primarily involved in the lipid biosynthesis pathway, was chosen as the drug target due to the fact that humans are not capable of synthesizing biotin endogenously. The computational screening of 4.5 million compounds from the Enamine REAL database has ultimately yielded 45 high-scoring, high-affinity compounds with desirable in silico absorption, distribution, metabolism, excretion, and toxicity properties. Seventeen of the 45 compounds were subjected to bioactivity validation using the resazurin microtiter assay. Among the 4 actives, compound 7 ((Z)-N-(2-isopropoxyphenyl)-2-oxo-2-((3-(trifluoromethyl)cyclohexyl)amino)acetimidic acid) displayed inhibitory activity up to 83% at 10 μg/mL concentration against the growth of the Mtb H37Ra strain. PMID:28280303

  20. Optic cup segmentation from fundus images for glaucoma diagnosis.

    PubMed

    Hu, Man; Zhu, Chenghao; Li, Xiaoxing; Xu, Yongli

    2017-01-02

    Glaucoma is a serious disease that can cause complete, permanent blindness, and its early diagnosis is very difficult. In recent years, computer-aided screening and diagnosis of glaucoma has made considerable progress. The optic cup segmentation from fundus images is an extremely important part for the computer-aided screening and diagnosis of glaucoma. This paper presented an automatic optic cup segmentation method that used both color difference information and vessel bends information from fundus images to determine the optic cup boundary. During the implementation of this algorithm, not only were the locations of the 2 types of information points used, but also the confidences of the information points were evaluated. In this way, the information points with higher confidence levels contributed more to the determination of the final cup boundary. The proposed method was evaluated using a public database for fundus images. The experimental results demonstrated that the cup boundaries obtained by the proposed method were more consistent than existing methods with the results obtained by ophthalmologists.

  1. Optic cup segmentation from fundus images for glaucoma diagnosis

    PubMed Central

    Hu, Man; Zhu, Chenghao; Li, Xiaoxing; Xu, Yongli

    2017-01-01

    Glaucoma is a serious disease that can cause complete, permanent blindness, and its early diagnosis is very difficult. In recent years, computer-aided screening and diagnosis of glaucoma has made considerable progress. The optic cup segmentation from fundus images is an extremely important part for the computer-aided screening and diagnosis of glaucoma. This paper presented an automatic optic cup segmentation method that used both color difference information and vessel bends information from fundus images to determine the optic cup boundary. During the implementation of this algorithm, not only were the locations of the 2 types of information points used, but also the confidences of the information points were evaluated. In this way, the information points with higher confidence levels contributed more to the determination of the final cup boundary. The proposed method was evaluated using a public database for fundus images. The experimental results demonstrated that the cup boundaries obtained by the proposed method were more consistent than existing methods with the results obtained by ophthalmologists. PMID:27764542

  2. Design and Implementation of the CEBAF Element Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theodore Larrieu, Christopher Slominski, Michele Joyce

    2011-10-01

    With inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a first step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting front-end computers to building controls screens. A particular requirement influencing the CED design is that it must provide consistent access to not only present, but also future, and eventually past, configurations of the CEBAF accelerator. To accomplish this, an introspective database schema was designed that allows new elements, element types, and element properties to be defined on-the-fly without changing table structure. When used in conjunction with the Oracle Workspace Manager, it allows users to seamlessly query data from any time in the database history with the exact same tools as they use for querying the present configuration. Users can also check out workspaces and use them as staging areas for upcoming machine configurations. All access to the CED is through a well-documented API that is translated automatically from original C++ into native libraries for script languages such as perl, php, and TCL, making access to the CED easy and ubiquitous. Notice: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177. The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce this manuscript for U.S. Government purposes.

  3. The CSB Incident Screening Database: description, summary statistics and uses.

    PubMed

    Gomez, Manuel R; Casper, Susan; Smith, E Allen

    2008-11-15

    This paper briefly describes the Chemical Incident Screening Database currently used by the CSB to identify and evaluate chemical incidents for possible investigations, and summarizes descriptive statistics from this database that can potentially help to estimate the number, character, and consequences of chemical incidents in the US. The report compares some of the information in the CSB database to roughly similar information available from databases operated by EPA and the Agency for Toxic Substances and Disease Registry (ATSDR), and explores the possible implications of these comparisons with regard to the dimension of the chemical incident problem. Finally, the report explores in a preliminary way whether a system modeled after the existing CSB screening database could be developed to serve as a national surveillance tool for chemical incidents.

  4. CrossCheck: an open-source web tool for high-throughput screen data analysis.

    PubMed

    Najafov, Jamil; Najafov, Ayaz

    2017-07-19

    Modern high-throughput screening methods allow researchers to generate large datasets that potentially contain important biological information. However, oftentimes, picking relevant hits from such screens and generating testable hypotheses requires training in bioinformatics and the skills to efficiently perform database mining. There are currently no tools available to the general public that allow users to cross-reference their screen datasets with published screen datasets. To this end, we developed CrossCheck, an online platform for high-throughput screen data analysis. CrossCheck is a centralized database that allows effortless comparison of the user-entered list of gene symbols with 16,231 published datasets. These datasets include published data from genome-wide RNAi and CRISPR screens, interactome proteomics and phosphoproteomics screens, cancer mutation databases, low-throughput studies of major cell signaling mediators, such as kinases, E3 ubiquitin ligases and phosphatases, and gene ontological information. Moreover, CrossCheck includes a novel database of predicted protein kinase substrates, which was developed using proteome-wide consensus motif searches. CrossCheck dramatically simplifies high-throughput screen data analysis and enables researchers to dig deep into the published literature and streamline data-driven hypothesis generation. CrossCheck is freely accessible as a web-based application at http://proteinguru.com/crosscheck.

  5. Reference manual for data base on Nevada water-rights permits

    USGS Publications Warehouse

    Cartier, K.D.; Bauer, E.M.; Farnham, J.L.

    1995-01-01

    The U.S. Geological Survey and Nevada Division of Water Resources have cooperatively developed and implemented a data-base system for managing water-rights permit information for the State of Nevada. The Water-Rights Permit data base is part of an integrated system of computer data bases using the Ingres Relational Data-Base Management System, which allows efficient storage and access to water information from the State Engineer's office. The data base contains a main table, three ancillary tables, and five lookup tables, as well as a menu-driven system for entering, updating, and reporting on the data. This reference guide outlines the general functions of the system and provides a brief description of data tables and data-entry screens.

  6. Predictive framework for shape-selective separations in three-dimensional zeolites and metal-organic frameworks.

    PubMed

    First, Eric L; Gounaris, Chrysanthos E; Floudas, Christodoulos A

    2013-05-07

    With the growing number of zeolites and metal-organic frameworks (MOFs) available, computational methods are needed to screen databases of structures to identify those most suitable for applications of interest. We have developed novel methods based on mathematical optimization to predict the shape selectivity of zeolites and MOFs in three dimensions by considering the energy costs of transport through possible pathways. Our approach is applied to databases of over 1800 microporous materials including zeolites, MOFs, zeolitic imidazolate frameworks, and hypothetical MOFs. New materials are identified for applications in gas separations (CO2/N2, CO2/CH4, and CO2/H2), air separation (O2/N2), and chemicals (propane/propylene, ethane/ethylene, styrene/ethylbenzene, and xylenes).

  7. Improving compound-protein interaction prediction by building up highly credible negative samples.

    PubMed

    Liu, Hui; Sun, Jianjiang; Guan, Jihong; Zheng, Jie; Zhou, Shuigeng

    2015-06-15

    Computational prediction of compound-protein interactions (CPIs) is of great importance for drug design and development, as genome-scale experimental validation of CPIs is not only time-consuming but also prohibitively expensive. With the availability of an increasing number of validated interactions, the performance of computational prediction approaches is severely impeded by the lack of reliable negative CPI samples. A systematic method of screening reliable negative samples becomes critical to improving the performance of in silico prediction methods. This article aims at building up a set of highly credible negative samples of CPIs via an in silico screening method. As most existing computational models assume that similar compounds are likely to interact with similar target proteins and achieve remarkable performance, it is rational to identify potential negative samples based on the converse negative proposition that the proteins dissimilar to every known/predicted target of a compound are unlikely to be targeted by the compound, and vice versa. We integrated various resources, including chemical structures, chemical expression profiles and side effects of compounds, amino acid sequences, protein-protein interaction network and functional annotations of proteins, into a systematic screening framework. We first tested the screened negative samples on six classical classifiers, and all these classifiers achieved remarkably higher performance on our negative samples than on randomly generated negative samples for both human and Caenorhabditis elegans. We then verified the negative samples on three existing prediction models, including bipartite local model, Gaussian kernel profile and Bayesian matrix factorization, and found that the performances of these models are also significantly improved on the screened negative samples. Moreover, we validated the screened negative samples on a drug bioactivity dataset. 
Finally, we derived two sets of new interactions by training a support vector machine classifier on the positive interactions annotated in DrugBank and our screened negative interactions. The screened negative samples and the predicted interactions provide the research community with a useful resource for identifying new drug targets and a helpful supplement to currently curated compound-protein databases. Supplementary files are available at: http://admis.fudan.edu.cn/negative-cpi/. © The Author 2015. Published by Oxford University Press.
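The core screening rule this record describes, keeping a protein as a negative example for a compound only when it is dissimilar to every known target of that compound, can be sketched in a few lines. The function names, the similarity callback, and the 0.3 threshold below are illustrative assumptions, not details taken from the paper:

```python
def screen_negatives(known_targets, similarity, compounds, proteins, threshold=0.3):
    """Return (compound, protein) pairs usable as credible negative samples.

    known_targets: dict mapping compound -> set of its known target proteins
    similarity:    callable (protein, protein) -> float in [0, 1]
    """
    negatives = []
    for c in compounds:
        targets = known_targets.get(c, set())
        if not targets:
            continue  # no evidence to screen against; skip such compounds
        for p in proteins:
            if p in targets:
                continue
            # keep p only if it is dissimilar to *every* known target of c
            if all(similarity(p, t) < threshold for t in targets):
                negatives.append((c, p))
    return negatives
```

With a toy similarity table where P2 resembles the known target P1 (0.9) but P3 does not (0.1), only (C1, P3) survives as a credible negative; (C1, P2) is rejected under the "similar proteins share targets" assumption.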

  8. VSDMIP: virtual screening data management on an integrated platform

    NASA Astrophysics Data System (ADS)

    Gil-Redondo, Rubén; Estrada, Jorge; Morreale, Antonio; Herranz, Fernando; Sancho, Javier; Ortiz, Ángel R.

    2009-03-01

    A novel software package (VSDMIP) for the virtual screening (VS) of chemical libraries, integrated within a MySQL relational database, is presented. Two main features make VSDMIP clearly distinguishable from other existing computational tools: (i) its database, which stores not only ligand information but also the results from every step in the VS process, and (ii) its modular and pluggable architecture, which allows customization of the VS stages (such as the programs used for conformer generation or docking) through the definition of a detailed workflow employing user-configurable XML files. VSDMIP therefore facilitates the storage and retrieval of VS results, easily adapts to the specific requirements of each method and tool used in the experiments, and allows the comparison of different VS methodologies. To validate the usefulness of VSDMIP as an automated tool for carrying out VS, several experiments were run on six protein targets (acetylcholinesterase, cyclin-dependent kinase 2, coagulation factor Xa, estrogen receptor alpha, p38 MAP kinase, and neuraminidase) using nine binary (actives/inactives) test sets. The performance of several VS configurations was evaluated by means of enrichment factors and receiver operating characteristic plots.

  9. Organization and evolution of organized cervical cytology screening in Thailand.

    PubMed

    Khuhaprema, Thiravud; Attasara, Pattarawin; Srivatanakul, Petcharin; Sangrajrang, Suleeporn; Muwonge, Richard; Sauvaget, Catherine; Sankaranarayanan, Rengaswamy

    2012-08-01

    To describe phase 1 of an organized cytology screening project initiated in Thailand by the Ministry of Public Health and the National Health Security Office. Women aged 35-60 years were encouraged to undergo cervical screening in primary care units and hospitals through awareness programs. Papanicolaou smears were processed and reported at district or provincial cytology laboratories. Women with normal test results were advised to undergo repeat screening after 5 years, while those with precancerous and cancerous lesions were referred for colposcopy, biopsy, and treatment. Information on screening, referral, investigations, and therapy was logged in a computer database. Between 2005 and 2009, 69.2% of the 4,030,833 targeted women were screened. In all, 20,991 women had inadequate smears; 27,253 had low-grade squamous intraepithelial lesions; 15,706 had high-grade squamous intraepithelial lesions; and 2,920 had invasive cancers. Information on the management of precancerous lesions was available for only 17.4% of women referred for colposcopy. Although follow-up data on women with positive test results were inadequately documented, the present findings indicate that provision of cytology services through the existing healthcare system is feasible. Copyright © 2012 International Federation of Gynecology and Obstetrics. Published by Elsevier Ireland Ltd. All rights reserved.

  10. Inhibitors of Helicobacter pylori Protease HtrA Found by ‘Virtual Ligand’ Screening Combat Bacterial Invasion of Epithelia

    PubMed Central

    Schneider, Petra; Hoy, Benjamin; Wessler, Silja; Schneider, Gisbert

    2011-01-01

    Background The human pathogen Helicobacter pylori (H. pylori) is a main cause of gastric inflammation and cancer. Increasing bacterial resistance against antibiotics demands innovative strategies for therapeutic intervention. Methodology/Principal Findings We present a method for structure-based virtual screening that is based on the comprehensive prediction of ligand binding sites on a protein model and automated construction of a ligand-receptor interaction map. Pharmacophoric features of the map are clustered and transformed into a correlation vector (‘virtual ligand’) for rapid virtual screening of compound databases. This computer-based technique was validated on 18 different targets of pharmaceutical interest in a retrospective screening experiment. Prospective screening for inhibitory agents was performed for the protease HtrA from the human pathogen H. pylori using a homology model of the target protein. Among the 22 tested compounds, six block E-cadherin cleavage by HtrA in vitro and result in reduced scattering and wound healing of gastric epithelial cells, thereby preventing bacterial infiltration of the epithelium. Conclusions/Significance This study demonstrates that receptor-based virtual screening with a permissive (‘fuzzy’) pharmacophore model can help identify small bioactive agents for combating bacterial infection. PMID:21483848

  11. "On-screen" writing and composing: two years experience with Manuscript Manager, Apple II and IBM-PC versions.

    PubMed

    Offerhaus, L

    1989-06-01

    The problems of the direct composition of a biomedical manuscript on a personal computer are discussed. Most word processing software is unsuitable because literature references, once stored, cannot be rearranged if major changes are necessary. These obstacles have been overcome in Manuscript Manager, a combination of word processing and database software. As it follows Council of Biology Editors and Vancouver rules, the printouts should be technically acceptable to most leading biomedical journals.

  12. Assessment of two mammographic density related features in predicting near-term breast cancer risk

    NASA Astrophysics Data System (ADS)

    Zheng, Bin; Sumkin, Jules H.; Zuley, Margarita L.; Wang, Xingwei; Klym, Amy H.; Gur, David

    2012-02-01

    In order to establish a personalized breast cancer screening program, it is important to develop risk models with high discriminatory power in predicting the likelihood of a woman developing an imaging-detectable breast cancer in the near term (e.g., <3 years after a negative examination). In epidemiology-based breast cancer risk models, mammographic density is considered the second highest breast cancer risk factor (second only to woman's age). In this study we explored a new feature, bilateral mammographic density asymmetry, and investigated its feasibility for predicting near-term screening outcome. The database consisted of 343 negative examinations, of which 187 were followed by cancers detected during the subsequent screening examination and 155 remained negative. We computed the average pixel value of the segmented breast areas depicted on each cranio-caudal view of the initial negative examinations. We then computed the mean and difference in mammographic density for paired bilateral images. Using woman's age, subjectively rated density (BIRADS), and the computed mammographic density related features, we compared classification performance in estimating the likelihood of detecting cancer during the subsequent examination using areas under the ROC curves (AUC). The AUCs were 0.63+/-0.03, 0.54+/-0.04, 0.57+/-0.03, and 0.68+/-0.03 when using woman's age, BIRADS rating, computed mean density, and difference in computed bilateral mammographic density, respectively. Performance increased to 0.62+/-0.03 and 0.72+/-0.03 when we fused mean and difference in density with woman's age. The results suggest that, in this study, bilateral mammographic density asymmetry was a significantly stronger (p<0.01) risk indicator than both woman's age and mean breast density.
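For a single feature such as the density difference, an AUC like those reported above can be computed directly from the raw scores with the rank (Mann-Whitney) formulation, with no model fitting. A minimal sketch (the brute-force pairwise form, fine for small case-control sets; the data below are illustrative):

```python
def auc(scores_pos, scores_neg):
    """AUC = probability that a randomly chosen positive case outscores
    a randomly chosen negative case; ties count one half."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

A perfectly separating feature gives 1.0, an uninformative one 0.5; for study-sized data one would normally use a rank-based O(n log n) implementation or a library routine instead of the double loop.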

  13. Network-based reading system for lung cancer screening CT

    NASA Astrophysics Data System (ADS)

    Fujino, Yuichi; Fujimura, Kaori; Nomura, Shin-ichiro; Kawashima, Harumi; Tsuchikawa, Megumu; Matsumoto, Toru; Nagao, Kei-ichi; Uruma, Takahiro; Yamamoto, Shinji; Takizawa, Hotaka; Kuroda, Chikazumi; Nakayama, Tomio

    2006-03-01

    This research aims to support chest computed tomography (CT) medical checkups to decrease the death rate from lung cancer. We have developed a remote cooperative reading system for lung cancer screening over the Internet, a secure transmission function, and a cooperative reading environment, together called the Network-based Reading System. A telemedicine system involves many issues, such as network costs and data security, if used over the Internet, which is an open network. In Japan, broadband access is widespread and its cost is the lowest in the world. We developed our system with the human-machine interface and security in mind. It consists of data entry terminals, a database server, a computer-aided diagnosis (CAD) system, and reading terminals. It uses a secure Digital Imaging and Communications in Medicine (DICOM) encrypting method and Public Key Infrastructure (PKI) based secure DICOM image data distribution. We carried out an experimental trial over the Japan Gigabit Network (JGN), the testbed for the Japanese next-generation network, and conducted verification experiments on secure screening image distribution, several kinds of data addition, and remote cooperative reading. We found that a network bandwidth of about 1.5 Mbps enabled distribution of screening images and cooperative reading, and that the encryption and image distribution methods we proposed were applicable to the encryption and distribution of general DICOM images via the Internet.

  14. Toward virtual anatomy: a stereoscopic 3-D interactive multimedia computer program for cranial osteology.

    PubMed

    Trelease, R B

    1996-01-01

    Advances in computer visualization and user interface technologies have enabled development of "virtual reality" programs that allow users to perceive and to interact with objects in artificial three-dimensional environments. Such technologies were used to create an image database and program for studying the human skull, a specimen that has become increasingly expensive and scarce. Stereoscopic image pairs of a museum-quality skull were digitized from multiple views. For each view, the stereo pairs were interlaced into a single, field-sequential stereoscopic picture using an image processing program. The resulting interlaced image files are organized in an interactive multimedia program. At run-time, gray-scale 3-D images are displayed on a large-screen computer monitor and observed through liquid-crystal shutter goggles. Users can then control the program and change views with a mouse and cursor to point-and-click on screen-level control words ("buttons"). For each view of the skull, an ID control button can be used to overlay pointers and captions for important structures. Pointing and clicking on "hidden buttons" overlying certain structures triggers digitized audio spoken word descriptions or mini lectures.

  15. A Public-Use, Full-Screen Interface for SPIRES Databases.

    ERIC Educational Resources Information Center

    Kriz, Harry M.

    This paper describes the techniques for implementing a full-screen, custom SPIRES interface for a public-use library database. The database-independent protocol that controls the system is described in detail. Source code for an entire working application using this interface is included. The protocol, with less than 170 lines of procedural code,…

  16. WISDOM-II: screening against multiple targets implicated in malaria using computational grid infrastructures.

    PubMed

    Kasam, Vinod; Salzemann, Jean; Botha, Marli; Dacosta, Ana; Degliesposti, Gianluca; Isea, Raul; Kim, Doman; Maass, Astrid; Kenyon, Colin; Rastelli, Giulio; Hofmann-Apitius, Martin; Breton, Vincent

    2009-05-01

    Despite continuous efforts of the international community to reduce the impact of malaria on developing countries, no significant progress has been made in recent years, and the discovery of new drugs is needed more than ever. Of the many proteins involved in the metabolic activities of the Plasmodium parasite, some are promising targets for rational drug discovery. Recent years have witnessed the emergence of grids, highly distributed computing infrastructures particularly well suited to embarrassingly parallel computations such as docking. In 2005, a first attempt at using grids for large-scale virtual screening focused on plasmepsins and resulted in the identification of previously unknown scaffolds, which were confirmed in vitro to be active plasmepsin inhibitors. Following this success, a second deployment took place in the fall of 2006, focusing on one well-known target, dihydrofolate reductase (DHFR), and on a new promising one, glutathione-S-transferase. In silico drug design, especially vHTS, is a widely accepted technology in lead identification and lead optimization. This approach therefore builds upon progress in computational chemistry, to achieve more accurate in silico docking, and in information technology, to design and operate large-scale grid infrastructures. On the computational side, a sustained infrastructure has been developed: docking at large scale, different strategies for result analysis, on-the-fly storage of results in MySQL databases, and molecular dynamics refinement with MM-PBSA and MM-GBSA rescoring. The modeling results obtained are very promising, and in vitro assays are underway for all the targets screened. 
The present paper describes this rational drug discovery activity at large scale, in particular molecular docking with the FlexX software on computational grids to find hits against three targets (PfGST, PfDHFR, and PvDHFR, wild-type and mutant forms) implicated in malaria. The grid-enabled virtual screening approach is proposed as a way to produce focused compound libraries for other biological targets relevant to fighting the infectious diseases of the developing world.

  17. Identification of Novel Potential β-N-Acetyl-D-Hexosaminidase Inhibitors by Virtual Screening, Molecular Dynamics Simulation and MM-PBSA Calculations

    PubMed Central

    Liu, Jianling; Liu, Mengmeng; Yao, Yao; Wang, Jinan; Li, Yan; Li, Guohui; Wang, Yonghua

    2012-01-01

    Chitinolytic β-N-acetyl-d-hexosaminidases, a class of chitin hydrolysis enzymes in insects, are a potential species-specific target for developing environmentally friendly pesticides. Until now, pesticides targeting chitinolytic β-N-acetyl-d-hexosaminidase have not been developed. This study demonstrates a combination of different theoretical methods for investigating the key structural features of this enzyme responsible for pesticide inhibition, thus allowing for the discovery of novel small-molecule inhibitors. First, based on the currently reported crystal structure of this protein (OfHex1.pdb), we conducted a pre-screening of a drug-like compound database of 8 × 10⁶ compounds using expanded pesticide-likeness criteria, followed by docking-based screening, obtaining the 5 top-ranked compounds with favorable docking conformations in OfHex1. Second, molecular docking and molecular dynamics simulations performed for the five complexes demonstrate that one main hydrophobic pocket, formed by residues Trp424, Trp448 and Trp524, is significant for stabilization of the ligand–receptor complex, and that key residues Asp477 and Trp490 are responsible for hydrogen-bonding and π–π stacking interactions with the ligands, respectively. Finally, molecular mechanics Poisson–Boltzmann surface area (MM-PBSA) analysis indicates that van der Waals interactions are the main driving force for inhibitor binding, in agreement with the fact that the binding pocket of OfHex1 is mainly composed of hydrophobic residues. These results suggest that screening the ZINC database can maximize the identification of potential OfHex1 inhibitors, and the computational protocol will be valuable for screening potential inhibitors sharing this binding mode, useful for the future rational design of novel, potent OfHex1-specific pesticides. PMID:22605995

  18. An Automated System Combining Safety Signal Detection and Prioritization from Healthcare Databases: A Pilot Study.

    PubMed

    Arnaud, Mickael; Bégaud, Bernard; Thiessard, Frantz; Jarrion, Quentin; Bezin, Julien; Pariente, Antoine; Salvo, Francesco

    2018-04-01

    Signal detection from healthcare databases is possible, but is not yet used for routine surveillance of drug safety. One challenge is to develop methods for selecting the signals that should be assessed with priority. The aim of this study was to develop an automated system combining safety signal detection and prioritization from healthcare databases, applicable to drugs used in chronic diseases. Patients present in the French EGB healthcare database for at least 1 year between 2005 and 2015 were considered. Noninsulin glucose-lowering drugs (NIGLDs) were selected as a case study, and hospitalization data were used to select important medical events (IMEs). Signal detection was performed quarterly from 2008 to 2015 using sequence symmetry analysis. NIGLD/IME associations were screened if at least one exposed case was identified in the quarter and at least three exposed cases were identified in the population at the date of screening. Detected signals were prioritized using the Longitudinal-SNIP (L-SNIP) algorithm, based on strength (S), novelty (N), potential impact of the signal (I), and pattern of drug use (P). Signals scored in the top 10% were considered high priority. A reference set was built from NIGLD summaries of product characteristics (SPCs) to compute the performance of the developed system. A total of 815 associations were screened and 241 (29.6%) were detected as signals; among these, 58 (24.1%) were prioritized. The performance for signal detection was: sensitivity = 47%; specificity = 80%; positive predictive value (PPV) = 33%; negative predictive value = 82%. The use of the L-SNIP algorithm increased the early identification of positive controls, when restricted to those mentioned in the SPCs after 2008: PPV = 100% versus PPV = 14% with its non-use. The system revealed a strong new signal for dipeptidylpeptidase-4 inhibitors and venous thromboembolism. 
The developed system seems promising for the routine use of healthcare data for safety surveillance of drugs used in chronic diseases.
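The sequence symmetry analysis used above rests on a simple count: among patients who have both the drug and the event in their records, how often does drug dispensing precede the event versus follow it. A minimal sketch of the crude sequence ratio (the published method also divides by a null-effect ratio to adjust for prescription and incidence trends, which is omitted here):

```python
def sequence_ratio(pairs):
    """Crude sequence ratio for sequence symmetry analysis.

    pairs: iterable of (drug_date, event_date) tuples, one per patient
    who has both exposures; any comparable date representation works.
    A ratio well above 1 suggests the drug tends to precede the event.
    """
    drug_first = sum(1 for d, e in pairs if d < e)
    event_first = sum(1 for d, e in pairs if e < d)
    if event_first == 0:
        return float("inf")  # no event-first patients; ratio is unbounded
    return drug_first / event_first
```

With three patients, two drug-first and one event-first, the crude ratio is 2.0; in practice a confidence interval around the ratio decides whether the association is flagged as a signal.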

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gragg, Evan James; Middleton, Richard Stephen

    This report describes the benefits of the BECCUS screening tools. The goals of this project are to utilize the NATCARB database for site screening; enhance the NATCARB database; and run CO2-EOR simulations and economic models using updated reservoir data sets (SCO2T-EOR).

  20. Computer-aided diagnosis of malignant mammograms using Zernike moments and SVM.

    PubMed

    Sharma, Shubhi; Khanna, Pritee

    2015-02-01

    This work is directed toward the development of a computer-aided diagnosis (CAD) system to detect abnormalities or suspicious areas in digital mammograms and classify them as malignant or nonmalignant. The original mammogram is preprocessed to separate the breast region from its background. To work on the suspicious area of the breast, region of interest (ROI) patches of a fixed size of 128×128 are extracted from the original large-sized digital mammograms. For training, patches are extracted manually from a preprocessed mammogram; for testing, patches are extracted from a highly dense area identified by a clustering technique. For all extracted patches corresponding to a mammogram, Zernike moments of different orders are computed and stored as a feature vector. A support vector machine (SVM) is used to classify the extracted ROI patches. The experimental study shows that the use of Zernike moments of order 20 with an SVM classifier gives better results than other reported approaches. The proposed system is tested on the Image Retrieval In Medical Applications (IRMA) reference dataset and the Digital Database for Screening Mammography (DDSM). On the IRMA reference dataset it attains 99% sensitivity and 99% specificity, and on the DDSM database it attains 97% sensitivity and 96% specificity. To verify the applicability of Zernike moments as a suitable texture descriptor, the performance of the proposed CAD system is compared with two other well-known texture descriptors, namely the gray-level co-occurrence matrix (GLCM) and the discrete cosine transform (DCT).
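Zernike moments make good texture descriptors for ROI patches because their magnitudes are rotation invariant. A simplified sketch of a single moment magnitude |A_nm| is below; it maps a square patch onto the unit disk and evaluates the standard radial polynomial (this is an illustrative implementation, not the paper's code, and it requires n ≥ |m| with n − |m| even; production implementations typically center on the image's mass and normalize the radius differently):

```python
import math

import numpy as np


def zernike_moment(img, n, m):
    """Magnitude |A_nm| of the Zernike moment of a square grayscale image."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x = (2.0 * xs - (w - 1)) / (w - 1)   # map pixel centers to [-1, 1]
    y = (2.0 * ys - (h - 1)) / (h - 1)
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    mask = rho <= 1.0                    # keep only pixels inside the unit disk
    # radial polynomial R_nm(rho)
    R = np.zeros_like(rho)
    for k in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** k * math.factorial(n - k)
             / (math.factorial(k)
                * math.factorial((n + abs(m)) // 2 - k)
                * math.factorial((n - abs(m)) // 2 - k)))
        R += c * rho ** (n - 2 * k)
    V = R * np.exp(1j * m * theta)       # Zernike basis function V_nm
    A = (n + 1) / np.pi * np.sum(img[mask] * np.conj(V[mask]))
    return float(abs(A))
```

Because only the phase of A_nm changes under rotation, |A_nm| of a patch and of its 90-degree rotation agree to floating-point precision, which is what makes a feature vector of these magnitudes orientation independent.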

  1. Application of Quantitative Structure–Activity Relationship Models of 5-HT1A Receptor Binding to Virtual Screening Identifies Novel and Potent 5-HT1A Ligands

    PubMed Central

    2015-01-01

    The 5-hydroxytryptamine 1A (5-HT1A) serotonin receptor has been an attractive target for treating mood and anxiety disorders as well as schizophrenia. We have developed binary classification quantitative structure–activity relationship (QSAR) models of 5-HT1A receptor binding activity using data retrieved from the PDSP Ki database. The prediction accuracy of these models was estimated by external 5-fold cross-validation as well as using an additional validation set comprising 66 structurally distinct compounds from the World of Molecular Bioactivity database. These validated models were then used to mine three major types of chemical screening libraries, i.e., drug-like libraries, GPCR-targeted libraries, and diversity libraries, to identify novel computational hits. The five best hits from each class of libraries were chosen for further experimental testing in radioligand binding assays, and nine of the 15 hits were confirmed to be active experimentally, with binding affinities better than 10 μM. The most active compound, lysergol, from the diversity library showed a very high binding affinity (Ki) of 2.3 nM against the 5-HT1A receptor. The novel 5-HT1A actives identified with the QSAR-based virtual screening approach could potentially be developed as novel anxiolytic or antischizophrenic drugs. PMID:24410373

  2. Large-Scale Chemical Similarity Networks for Target Profiling of Compounds Identified in Cell-Based Chemical Screens

    PubMed Central

    Lo, Yu-Chen; Senese, Silvia; Li, Chien-Ming; Hu, Qiyang; Huang, Yong; Damoiseaux, Robert; Torres, Jorge Z.

    2015-01-01

    Target identification is one of the most critical steps following cell-based phenotypic chemical screens aimed at identifying compounds with potential uses in cell biology and for developing novel disease therapies. Current in silico target identification methods, including chemical similarity database searches, are limited to single or sequential ligand analysis and have limited capability for accurate deconvolution of a large number of compounds with diverse chemical structures. Here, we present CSNAP (Chemical Similarity Network Analysis Pulldown), a new computational target identification method that utilizes chemical similarity networks for large-scale chemotype (consensus chemical pattern) recognition and drug target profiling. Our benchmark study showed that CSNAP can achieve an overall higher accuracy (>80%) of target prediction with respect to representative chemotypes in large (>200) compound sets, compared to the SEA approach (60–70%). Additionally, CSNAP is capable of integrating with biological knowledge-based databases (UniProt, GO) and high-throughput biology platforms (proteomics, genetics, etc.) for system-wide drug target validation. To demonstrate the utility of the CSNAP approach, we combined CSNAP's target prediction with experimental ligand evaluation to identify the major mitotic targets of hit compounds from a cell-based chemical screen, and we highlight novel compounds targeting microtubules, an important cancer therapeutic target. The CSNAP method is freely available and can be accessed from the CSNAP web server (http://services.mbi.ucla.edu/CSNAP/). PMID:25826798
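The backbone of a chemical similarity network like the one this record describes is simple: compute pairwise Tanimoto similarity on fingerprints, draw an edge above a threshold, and read chemotype clusters off the connected components. A minimal sketch with fingerprints as sets of "on" bits (the 0.5 threshold and toy data are illustrative; CSNAP's actual fingerprints, thresholds, and downstream target annotation are more elaborate):

```python
def tanimoto(a, b):
    """Tanimoto coefficient of two fingerprints given as sets of on-bits."""
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter) if (a or b) else 0.0


def chemotype_clusters(fps, threshold=0.5):
    """Connected components of the similarity graph (edge iff Tanimoto >= threshold).

    fps: dict mapping compound name -> set of fingerprint bits.
    """
    names = list(fps)
    adj = {n: [] for n in names}
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            if tanimoto(fps[u], fps[v]) >= threshold:
                adj[u].append(v)
                adj[v].append(u)
    seen, clusters = set(), []
    for n in names:                      # depth-first traversal per component
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            comp.add(cur)
            stack.extend(adj[cur])
        clusters.append(comp)
    return clusters
```

Each resulting cluster groups compounds sharing a consensus chemical pattern, which is the unit to which target annotations are then propagated.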

  3. An SPSS implementation of the nonrecursive outlier deletion procedure with shifting z score criterion (Van Selst & Jolicoeur, 1994).

    PubMed

    Thompson, Glenn L

    2006-05-01

    Sophisticated univariate outlier screening procedures are not yet available in widely used statistical packages such as SPSS. However, SPSS can accept user-supplied programs for executing these procedures. Failing this, researchers tend to rely on simplistic alternatives that can distort data because they do not adjust to cell-specific characteristics. Despite their popularity, these simple procedures may be especially ill suited for some applications (e.g., data from reaction time experiments). A user-friendly SPSS Production Facility implementation of the shifting z score criterion procedure (Van Selst & Jolicoeur, 1994) is presented to make the procedure easier to use. In addition to outlier screening, optional syntax modules can be added to perform tedious database management tasks (e.g., restructuring or computing means).
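The key idea of the shifting criterion is that the z cutoff is not fixed but grows with cell size, so small cells are not over-trimmed. A sketch of the nonrecursive variant is below; the threshold values in the table are illustrative placeholders, not the published Van Selst and Jolicoeur (1994) table, which should be consulted for real analyses:

```python
import statistics

# Illustrative sample-size-dependent z criteria (NOT the published table):
# small cells get a permissive low cutoff, large cells approach ~2.5.
CRITERION = {4: 1.46, 5: 1.68, 10: 2.17, 15: 2.33, 20: 2.39, 35: 2.45, 100: 2.50}


def z_criterion(n):
    """Criterion for cell size n: largest tabled size not exceeding n."""
    usable = [k for k in CRITERION if k <= n]
    return CRITERION[max(usable)] if usable else CRITERION[min(CRITERION)]


def screen(rts):
    """Nonrecursive screening: drop values beyond criterion * SD of the mean."""
    mean = statistics.fmean(rts)
    sd = statistics.stdev(rts)
    c = z_criterion(len(rts))
    return [x for x in rts if abs(x - mean) <= c * sd]
```

For a six-trial cell with one 2000 ms response among ~510 ms responses, the slow trial falls outside the cell-specific cutoff and is removed while the rest survive; the recursive ("moving criterion") variant repeats this, re-deriving the criterion from the shrinking cell until no value is excluded.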

  4. SuperNatural: a searchable database of available natural compounds

    PubMed Central

    Dunkel, Mathias; Fullbeck, Melanie; Neumann, Stefanie; Preissner, Robert

    2006-01-01

    Although tremendous effort has been put into synthetic libraries, most drugs on the market are still natural compounds or derivatives thereof. There are encyclopaedias of natural compounds, but the availability of these compounds is often unclear and catalogues from numerous suppliers have to be checked. To overcome these problems we have compiled a database of ∼50 000 natural compounds from different suppliers. To enable efficient identification of the desired compounds, we have implemented substructure searches with typical templates. Starting points for in silico screenings are about 2500 well-known and classified natural compounds from a compendium that we have added. Possible medical applications can be ascertained via automatic searches for similar drugs in a free conformational drug database containing WHO indications. Furthermore, we have computed about three million conformers, which are deployed to account for the flexibility of the compounds when the 3D superposition algorithm that we have developed is used. The SuperNatural Database is publicly available at . Viewing requires the free Chime-plugin from MDL (Chime) or Java2 Runtime Environment (MView), which is also necessary for using the Marvin application for chemical drawing. PMID:16381957

  5. Discovery and Development of ATP-Competitive mTOR Inhibitors Using Computational Approaches.

    PubMed

    Luo, Yao; Wang, Ling

    2017-11-16

    The mammalian target of rapamycin (mTOR) is a central controller of cell growth, proliferation, metabolism, and angiogenesis, and an attractive target for new anticancer drug development. Significant progress has been made in hit discovery, lead optimization, drug candidate development, and determination of the three-dimensional (3D) structure of mTOR. Computational methods have been applied to accelerate the discovery and development of mTOR inhibitors: modeling the structure of mTOR, screening compound databases, uncovering structure-activity relationships (SAR), optimizing hits, mining privileged fragments, and designing focused libraries. Computational approaches have also been applied to study protein-ligand interaction mechanisms and in natural-product-driven drug discovery. Herein, we survey the most recent progress in the application of computational approaches to advance the discovery and development of compounds targeting mTOR. Future directions in the discovery of new mTOR inhibitors using computational methods are also discussed. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  6. 3D Pharmacophore-Based Virtual Screening and Docking Approaches toward the Discovery of Novel HPPD Inhibitors.

    PubMed

    Fu, Ying; Sun, Yi-Na; Yi, Ke-Han; Li, Ming-Qiang; Cao, Hai-Feng; Li, Jia-Zhong; Ye, Fei

    2017-06-09

    p-Hydroxyphenylpyruvate dioxygenase (HPPD) is not only a useful molecular target for treating life-threatening tyrosinemia type I, but also an important target for chemical herbicides. Combined in silico structure-based pharmacophore modeling and molecular docking-based virtual screening were performed to identify novel potential HPPD inhibitors. The complex-based pharmacophore model (CBP), with an ROC of 0.721, showed remarkable ability to retrieve known active ligands from among decoy molecules. The ChemDiv database was screened using CBP-Hypo2 as a 3D query, and the best-fit hits were subjected to molecular docking with the LibDock and CDOCKER methods in Accelrys Discovery Studio 2.5 (DS 2.5) to discern interactions with key residues at the active site of HPPD. Four compounds ranking at the top in the HipHop model and showing well-characterized binding modes were finally chosen as lead compounds with potential inhibitory effects on the active site of the target. The results provide powerful insight into the development of novel HPPD-inhibitor herbicides using computational techniques.

  7. Quantum probability ranking principle for ligand-based virtual screening.

    PubMed

    Al-Dabbagh, Mohammed Mumtaz; Salim, Naomie; Himmat, Mubarak; Ahmed, Ali; Saeed, Faisal

    2017-04-01

    Chemical libraries contain thousands of compounds that need screening, which increases the need for computational methods that can rank or prioritize compounds. Virtual screening tools are widely exploited to enhance the cost-effectiveness of lead discovery programs by ranking chemical compound databases in decreasing probability of biological activity, based upon the probability ranking principle (PRP). In this paper, we develop a novel ranking approach for molecular compounds inspired by quantum mechanics, called the quantum probability ranking principle (QPRP). The QPRP ranking criteria draw an analogy between physical experiments and the ranking of molecular structures by 2D fingerprints in ligand-based virtual screening (LBVS). The development of the QPRP criteria employs quantum concepts at three levels: first, at the representation level, the model establishes a new framework for molecular representation by connecting molecular compounds with a mathematical quantum space; second, similarity between chemical libraries and reference structures is estimated with a quantum-based similarity searching method; finally, molecules are ranked using the QPRP approach. Simulated virtual screening experiments with MDL Drug Data Report (MDDR) data sets showed that QPRP outperformed the classical probability ranking principle (PRP) for molecular chemical compounds.

  8. Quantum probability ranking principle for ligand-based virtual screening

    NASA Astrophysics Data System (ADS)

    Al-Dabbagh, Mohammed Mumtaz; Salim, Naomie; Himmat, Mubarak; Ahmed, Ali; Saeed, Faisal

    2017-04-01

    Chemical libraries contain thousands of compounds that need screening, which increases the need for computational methods that can rank or prioritize compounds. Virtual screening tools are widely exploited to enhance the cost-effectiveness of lead discovery programs by ranking chemical compound databases in decreasing probability of biological activity, based upon the probability ranking principle (PRP). In this paper, we develop a novel ranking approach for molecular compounds inspired by quantum mechanics, called the quantum probability ranking principle (QPRP). The QPRP ranking criteria draw an analogy between physical experiments and the ranking of molecular structures by 2D fingerprints in ligand-based virtual screening (LBVS). The development of the QPRP criteria employs quantum concepts at three levels: first, at the representation level, the model establishes a new framework for molecular representation by connecting molecular compounds with a mathematical quantum space; second, similarity between chemical libraries and reference structures is estimated with a quantum-based similarity searching method; finally, molecules are ranked using the QPRP approach. Simulated virtual screening experiments with MDL Drug Data Report (MDDR) data sets showed that QPRP outperformed the classical probability ranking principle (PRP) for molecular chemical compounds.

  9. Data integration and warehousing: coordination between newborn screening and related public health programs.

    PubMed

    Therrell, Bradford L

    2003-01-01

    At birth, patient demographic and health information begin to accumulate in varied databases, and there are often multiple sources of the same or similar data. New public health programs are often created without considering data linkages. Recently, newborn hearing screening (NHS) programs and immunization programs have virtually ignored the existence of newborn dried blood spot (DBS) screening databases containing similar demographic data, creating data duplication in their 'new' systems. Some progressive public health departments are developing data warehouses of basic, recurrent patient information and linking these to other health program databases where programs and services can benefit from such linkages. Demographic data warehousing saves time (and money) by eliminating duplicative data entry and reducing the chance of data errors. While newborn screening data are usually the first data available, they should not be the only data source considered for early data linkage or for populating a data warehouse. Birth certificate information should also be considered, along with other data sources, for infants who may not have received newborn screening or who may have been born outside of the jurisdiction and lack locally available birth certificate information. The newborn screening serial number provides a convenient identification number for use in the DBS program and for linking with other systems. At a minimum, data linkages should exist between newborn dried blood spot screening, newborn hearing screening, immunizations, birth certificates and birth defect registries.

  10. In silico genotoxicity of coumarins: application of the Phenol-Explorer food database to functional food science.

    PubMed

    Guardado Yordi, E; Matos, M J; Pérez Martínez, A; Tornes, A C; Santana, L; Molina, E; Uriarte, E

    2017-08-01

    Coumarins are a group of phytochemicals that may be beneficial or harmful to health depending on their type and dosage and the matrix that contains them. Some of these compounds have been proven to display pro-oxidant and clastogenic activities. Therefore, in the current work, we have studied the coumarins that are present in food sources extracted from the Phenol-Explorer database in order to predict their clastogenic activity and identify the structure-activity relationships and genotoxic structural alerts using alternative methods in the field of computational toxicology. It was necessary to compile information on the type and amount of coumarins in different food sources through the analysis of databases of food composition available online. A virtual screening using a clastogenic model and different software, such as MODESLAB, ChemDraw and STATISTIC, was performed. As a result, a table of food composition was prepared and qualitative information from this data was extracted. The virtual screening showed that the esterified substituents inactivate molecules, while the methoxyl and hydroxyl substituents contribute to their activity and constitute, together with the basic structures of the studied subclasses, clastogenic structural alerts. Chemical subclasses of simple coumarins and furocoumarins were classified as active (xanthotoxin, isopimpinellin, esculin, scopoletin, scopolin and bergapten). In silico genotoxicity was mainly predicted for coumarins found in beer, sherry, dried parsley, fresh parsley and raw celery stalks. The results obtained can be interesting for the future design of functional foods and dietary supplements. These studies constitute a reference for the genotoxic chemoinformatic analysis of bioactive compounds present in databases of food composition.

  11. SABRE: ligand/structure-based virtual screening approach using consensus molecular-shape pattern recognition.

    PubMed

    Wei, Ning-Ning; Hamza, Adel

    2014-01-27

    We present an efficient and rational ligand/structure shape-based virtual screening approach combining our previous ligand shape-based similarity SABRE (shape-approach-based routines enhanced) and the 3D shape of the receptor binding site. Our approach exploits the pharmacological preferences of a number of known active ligands to take advantage of their structural diversities and chemical similarities, using a linear combination of weighted molecular shape density. Furthermore, the algorithm generates a consensus molecular-shape pattern recognition that is used to filter and place the candidate structure into the binding pocket. The descriptor pool used to construct the consensus molecular-shape pattern consists of four-dimensional (4D) fingerprints generated from the distribution of conformer states available to a molecule and the 3D shapes of a set of active ligands computed using the SABRE software. The virtual screening efficiency of SABRE was validated using the Database of Useful Decoys (DUD) and the filtered version (WOMBAT) of 10 DUD targets. The ligand/structure shape-based similarity SABRE algorithm outperforms several other widely used virtual screening methods that use data fusion of multiple screening tools (2D and 3D fingerprints) and demonstrates a superior early retrieval rate of active compounds (EF(0.1%) = 69.0% and EF(1%) = 98.7%) from a large ligand database (∼95,000 structures). Therefore, our developed similarity approach can be of particular use for identifying active compounds that are similar to reference molecules and predicting activity against other targets (chemogenomics). An academic license of the SABRE program is available on request.
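
    The EF(0.1%) and EF(1%) figures quoted above are enrichment factors, a standard virtual-screening metric. A minimal sketch of the computation (the seeded active/decoy list below is invented for illustration):

```python
def enrichment_factor(ranked_labels, fraction):
    """Enrichment factor at a given fraction of the ranked database:
    hit rate in the top slice divided by the overall hit rate.
    ranked_labels: 1 (active) / 0 (decoy), sorted by decreasing score."""
    n_top = max(1, int(len(ranked_labels) * fraction))
    top_rate = sum(ranked_labels[:n_top]) / n_top
    overall_rate = sum(ranked_labels) / len(ranked_labels)
    return top_rate / overall_rate

# 5 actives ranked at the top of a 100-compound screen
ranked = [1] * 5 + [0] * 95
ef_1pct = enrichment_factor(ranked, 0.01)  # perfect early retrieval
```

    An EF of 1.0 means the method does no better than random selection; the maximum possible EF at a fraction f of a database with hit rate r is min(1/f, 1/r).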

  12. Access to digital library databases in higher education: design problems and infrastructural gaps.

    PubMed

    Oswal, Sushil K

    2014-01-01

    After defining accessibility and usability, the author offers a broad survey of the research studies on digital content databases, which have thus far primarily depended on data drawn from studies conducted by sighted researchers with non-disabled users employing screen readers and low vision devices. This article aims at producing a detailed description of the difficulties confronted by blind screen reader users with online library databases, which now hold most of the academic, peer-reviewed journal and periodical content essential for research and teaching in higher education. The approach taken here is borrowed from descriptive ethnography, which allows the author to create a complete picture of the accessibility and usability problems faced by an experienced academic user of digital library databases and screen readers. The author provides a detailed analysis of the different aspects of accessibility issues in digital databases under several headers, with a special focus on full-text PDF files. The author emphasizes that long-term studies with actual blind screen reader users, employing both qualitative and computerized research tools, can yield meaningful data for designers and developers to improve these databases to a level at which they begin to provide equal access to blind users.

  13. Comparing the outcomes of two strategies for colorectal tumor detection: policy-promoted screening program versus health promotion service.

    PubMed

    Wu, Ping-Hsiu; Lin, Yu-Min; Liao, Chao-Sheng; Chang, Hung-Chuen; Chen, Yu-Hung; Yang, Kuo-Ching; Shih, Chia-Hui

    2013-06-01

    The Taiwanese government has proposed a population-based colorectal tumor detection program for the average-risk population. This study's objectives were to understand the outcomes of these screening policies and to evaluate the effectiveness of the program. We compared two databases compiled in one medical center. The "policy-promoted cancer screening" (PPS) database was built on the basis of the policy of the Taiwan Bureau of National Health Insurance for cancer screening. The "health promotion service" (HPS) database was built to provide health check-ups for self-paid volunteers. Both the PPS and HPS databases employ the immunochemical fecal occult blood test (iFOBT) and colonoscopy for colorectal tumor screening, using different strategies. A comparison of outcomes between the PPS and HPS included: (1) quality indicators-compliance rate, cecum reaching rate, and tumor detection rate; and (2) validity indicators-sensitivity, specificity, positive, and negative predictive values for detecting colorectal neoplasms. A total of 10,563 and 1481 individuals were enrolled in PPS and HPS, respectively. Among quality indicators, there was no statistically significant difference in the cecum reaching rate between PPS and HPS. The compliance rates were 56.1% for PPS and 91.8% for HPS (p < 0.001). The advanced adenoma detection rates of PPS and HPS were 1.0% and 3.6%, respectively (p < 0.01). The carcinoma detection rates were 0.3% and 0.4%, respectively (p = 0.59). For validity indicators, PPS provides only a positive predictive value for colorectal tumor detection. HPS provides additional validity indicators, including sensitivity, specificity, positive predictive value, and negative predictive value, for colorectal tumor screening. In comparison with the outcomes of the HPS database, the screening efficacy of the PPS database is comparable for detecting colorectal carcinoma but is limited in detecting advanced adenoma. HPS may provide comprehensive validity indicators and will be helpful in adjusting current policies for improving screening performance. Copyright © 2013. Published by Elsevier B.V.
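
    The four validity indicators named above follow directly from the 2x2 screening table (true/false positives and negatives). A minimal sketch, with invented counts for illustration:

```python
def validity_indicators(tp, fp, fn, tn):
    """Standard screening validity indicators from a 2x2 confusion table:
    tp/fp = true/false positives, fn/tn = false/true negatives."""
    return {
        "sensitivity": tp / (tp + fn),  # fraction of diseased correctly flagged
        "specificity": tn / (tn + fp),  # fraction of healthy correctly cleared
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# hypothetical counts from a screening round
indicators = validity_indicators(tp=40, fp=10, fn=10, tn=40)
```

    Note that PPV and NPV depend on disease prevalence in the screened population, which is why the self-selected HPS volunteers and the policy-driven PPS population are not directly comparable on predictive values alone.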

  14. Low-Dose Chest Computed Tomography for Lung Cancer Screening Among Hodgkin Lymphoma Survivors: A Cost-Effectiveness Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wattson, Daniel A., E-mail: dwattson@partners.org; Hunink, M.G. Myriam; DiPiro, Pamela J.

    2014-10-01

    Purpose: Hodgkin lymphoma (HL) survivors face an increased risk of treatment-related lung cancer. Screening with low-dose computed tomography (LDCT) may allow detection of early stage, resectable cancers. We developed a Markov decision-analytic and cost-effectiveness model to estimate the merits of annual LDCT screening among HL survivors. Methods and Materials: Population databases and HL-specific literature informed key model parameters, including lung cancer rates and stage distribution, cause-specific survival estimates, and utilities. Relative risks accounted for radiation therapy (RT) technique, smoking status (>10 pack-years or current smokers vs not), age at HL diagnosis, time from HL treatment, and excess radiation from LDCTs. LDCT assumptions, including expected stage-shift, false-positive rates, and likely additional workup were derived from the National Lung Screening Trial and preliminary results from an internal phase 2 protocol that performed annual LDCTs in 53 HL survivors. We assumed a 3% discount rate and a willingness-to-pay (WTP) threshold of $50,000 per quality-adjusted life year (QALY). Results: Annual LDCT screening was cost effective for all smokers. A male smoker treated with mantle RT at age 25 achieved maximum QALYs by initiating screening 12 years post-HL, with a life expectancy benefit of 2.1 months and an incremental cost of $34,841/QALY. Among nonsmokers, annual screening produced a QALY benefit in some cases, but the incremental cost was not below the WTP threshold for any patient subsets. As age at HL diagnosis increased, earlier initiation of screening improved outcomes. Sensitivity analyses revealed that the model was most sensitive to the lung cancer incidence and mortality rates and expected stage-shift from screening. Conclusions: HL survivors are an important high-risk population that may benefit from screening, especially those treated in the past with large radiation fields including mantle or involved-field RT. Screening may be cost effective for all smokers but possibly not for nonsmokers despite a small life expectancy benefit.
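
    The headline numbers above ($34,841/QALY against a $50,000/QALY threshold) come from standard cost-effectiveness arithmetic. A minimal sketch of the incremental cost-effectiveness ratio (ICER) and QALY discounting, with invented cost/QALY figures:

```python
def discounted_qalys(yearly_utilities, rate=0.03):
    """Sum per-year utility values discounted at `rate` (year 0 undiscounted),
    mirroring the 3% discount rate assumed in the model."""
    return sum(u / (1 + rate) ** t for t, u in enumerate(yearly_utilities))

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY
    of the new strategy (screening) over the comparator (no screening)."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

def cost_effective(icer_value, wtp=50_000):
    """A strategy is cost effective if its ICER is at or below the
    willingness-to-pay threshold."""
    return icer_value <= wtp

# hypothetical strategies: screening costs more but adds 0.2 QALYs
ratio = icer(cost_new=12_000.0, qaly_new=10.3, cost_old=5_000.0, qaly_old=10.1)
```

    With these invented inputs the ICER is $35,000/QALY, which would fall under the $50,000 threshold; real model inputs come from the Markov state-transition structure described in the abstract.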

  15. A personal digital assistant application (MobilDent) for dental fieldwork data collection, information management and database handling.

    PubMed

    Forsell, M; Häggström, M; Johansson, O; Sjögren, P

    2008-11-08

    To develop a personal digital assistant (PDA) application for oral health assessment fieldwork, including back-office and database systems (MobilDent). System design, construction and implementation of PDA, back-office and database systems. System requirements for MobilDent were collected, analysed and translated into system functions. User interfaces were implemented and system architecture was outlined. MobilDent was based on a platform with .NET (Microsoft) components, using an SQL Server 2005 (Microsoft) for data storage with the Windows Mobile (Microsoft) operating system. The PDA devices were Dell Axim. System functions and user interfaces were specified for MobilDent. User interfaces for PDA, back-office and database systems were based on .NET programming. The PDA user interface was based on Windows suitable to a PDA display, whereas the back-office interface was designed for a normal-sized computer screen. A synchronisation module (MS ActiveSync, Microsoft) was used to enable download of field data from PDA to the database. MobilDent is a feasible application for oral health assessment fieldwork, and the oral health assessment database may prove a valuable source for care planning, educational and research purposes. Further development of the MobilDent system will include wireless connectivity with download-on-demand technology.

  16. Computational databases, pathway and cheminformatics tools for tuberculosis drug discovery

    PubMed Central

    Ekins, Sean; Freundlich, Joel S.; Choi, Inhee; Sarker, Malabika; Talcott, Carolyn

    2010-01-01

    We are witnessing the growing menace of both increasing cases of drug-sensitive and drug-resistant Mycobacterium tuberculosis strains and the challenge to produce the first new tuberculosis (TB) drug in well over 40 years. The TB community, having invested in extensive high-throughput screening efforts, is faced with the question of how to optimally leverage this data in order to move from a hit to a lead to a clinical candidate and potentially a new drug. Complementing this approach, yet conducted on a much smaller scale, cheminformatic techniques have been leveraged and are herein reviewed. We suggest these computational approaches should be more optimally integrated in a workflow with experimental approaches to accelerate TB drug discovery. PMID:21129975

  17. Confidentiality breach.

    PubMed

    1997-08-22

    A former Pinellas County, FL public health worker, [name removed], is charged with using a government AIDS surveillance database for his own personal dating scheme. He kept the county health department records on his own laptop computer and used the information to screen potential dates for himself and his friends. [Name removed] filed a pretrial free speech argument contending that his First Amendment rights were being violated. The Pinellas County judge dismissed that argument, clearing the way for a September trial. [Name removed] could face a year in prison on a first-degree misdemeanor charge.

  18. Fingerprint-Based Structure Retrieval Using Electron Density

    PubMed Central

    Yin, Shuangye; Dokholyan, Nikolay V.

    2010-01-01

    We present a computational approach that can quickly search a large protein structural database to identify structures that fit a given electron density, such as determined by cryo-electron microscopy. We use geometric invariants (fingerprints) constructed using 3D Zernike moments to describe the electron density, and reduce the problem of fitting of the structure to the electron density to simple fingerprint comparison. Using this approach, we are able to screen the entire Protein Data Bank and identify structures that fit two experimental electron densities determined by cryo-electron microscopy. PMID:21287628

  19. Fingerprint-based structure retrieval using electron density.

    PubMed

    Yin, Shuangye; Dokholyan, Nikolay V

    2011-03-01

    We present a computational approach that can quickly search a large protein structural database to identify structures that fit a given electron density, such as determined by cryo-electron microscopy. We use geometric invariants (fingerprints) constructed using 3D Zernike moments to describe the electron density, and reduce the problem of fitting of the structure to the electron density to simple fingerprint comparison. Using this approach, we are able to screen the entire Protein Data Bank and identify structures that fit two experimental electron densities determined by cryo-electron microscopy. Copyright © 2010 Wiley-Liss, Inc.
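
    The key move in both versions of this work is reducing density fitting to fingerprint comparison: once each structure's electron density is summarized as a vector of rotation-invariant moments, retrieval is just a nearest-neighbor search. A minimal sketch with invented PDB identifiers and made-up low-dimensional descriptor vectors (real 3D Zernike fingerprints have many more components):

```python
import math

def fingerprint_distance(f1, f2):
    """Euclidean distance between two rotation-invariant descriptor vectors;
    smaller distance means a better fit to the query density."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

def retrieve(query_fp, database, top_k=2):
    """Return the top_k database entries closest to the query fingerprint."""
    scored = sorted(database, key=lambda item: fingerprint_distance(query_fp, item[1]))
    return [name for name, _ in scored[:top_k]]

# hypothetical database of precomputed structure fingerprints
db = [
    ("1abc", [0.90, 0.10, 0.30]),
    ("2xyz", [0.00, 0.80, 0.80]),
    ("3pqr", [0.85, 0.15, 0.25]),
]
hits = retrieve([0.90, 0.10, 0.30], db)
```

    Because the invariants are precomputed once per structure, screening the entire Protein Data Bank reduces to millions of cheap vector comparisons rather than expensive density-fitting optimizations.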

  20. Mapping the patent landscape of synthetic biology for fine chemical production pathways.

    PubMed

    Carbonell, Pablo; Gök, Abdullah; Shapira, Philip; Faulon, Jean-Loup

    2016-09-01

    A goal of synthetic biology bio-foundries is to innovate through an iterative design/build/test/learn pipeline. In assessing the value of new chemical production routes, the intellectual property (IP) novelty of the pathway is important. Exploratory studies can be carried using knowledge of the patent/IP landscape for synthetic biology and metabolic engineering. In this paper, we perform an assessment of pathways as potential targets for chemical production across the full catalogue of reachable chemicals in the extended metabolic space of chassis organisms, as computed by the retrosynthesis-based algorithm RetroPath. Our database for reactions processed by sequences in heterologous pathways was screened against the PatSeq database, a comprehensive collection of more than 150M sequences present in patent grants and applications. We also examine related patent families using Derwent Innovations. This large-scale computational study provides useful insights into the IP landscape of synthetic biology for fine and specialty chemicals production. © 2016 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.

  1. An effective method to screen sodium-based layered materials for sodium ion batteries

    NASA Astrophysics Data System (ADS)

    Zhang, Xu; Zhang, Zihe; Yao, Sai; Chen, An; Zhao, Xudong; Zhou, Zhen

    2018-03-01

    Due to the high cost and insufficient resources of lithium, sodium-ion batteries are widely investigated for large-scale applications. Typically, insertion-type materials possess better cyclic stability than alloy-type and conversion-type ones. Therefore, in this work, we proposed a facile and effective method to screen sodium-based layered materials, based on the Materials Project database, as potential candidate insertion-type materials for sodium-ion batteries. The obtained Na-based layered materials span 38 space groups, which shows that the credibility of our screening approach is not affected by the space group. Then, some important indexes of the representative materials, including the average voltage, volume change and sodium-ion mobility, were further studied by means of density functional theory computations. Some materials with extremely low volume changes and Na diffusion barriers are promising candidates for sodium-ion batteries. We believe that our classification algorithm could also be used to search for other alkali and multivalent ion-based layered materials, to accelerate the development of battery materials.
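
    Two of the screening indexes above, average voltage and volume change, are simple functions of DFT total energies and cell volumes. A minimal sketch using the standard intercalation-voltage relation V = -[E(host+nNa) - E(host) - n·E(Na, bulk)]/(n·e); all energies below are invented placeholder values in eV, not results from the paper:

```python
def average_voltage(e_sodiated, e_host, n_na, e_na_bulk):
    """Average intercalation voltage (V) from DFT total energies (eV).
    With energies in eV and charge per Na of 1e, the voltage in volts is
    the negative reaction energy per inserted Na."""
    return -(e_sodiated - e_host - n_na * e_na_bulk) / n_na

def volume_change_percent(v_sodiated, v_host):
    """Relative cell-volume change on sodiation (%); small values suggest
    good cyclic stability for insertion-type electrodes."""
    return 100.0 * (v_sodiated - v_host) / v_host

# hypothetical layered host accepting 2 Na per formula unit
voltage = average_voltage(e_sodiated=-105.0, e_host=-100.0, n_na=2,
                          e_na_bulk=-1.3)
swelling = volume_change_percent(v_sodiated=110.0, v_host=100.0)
```

    Candidates would then be ranked by combining such indexes, e.g. keeping materials with a voltage in a useful window and volume change below a few percent.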

  2. Array comparative genomic hybridization and computational genome annotation in constitutional cytogenetics: suggesting candidate genes for novel submicroscopic chromosomal imbalance syndromes.

    PubMed

    Van Vooren, Steven; Coessens, Bert; De Moor, Bart; Moreau, Yves; Vermeesch, Joris R

    2007-09-01

    Genome-wide array comparative genomic hybridization screening is uncovering pathogenic submicroscopic chromosomal imbalances in patients with developmental disorders. In those patients, imbalances appear now to be scattered across the whole genome, and most patients carry different chromosomal anomalies. Screening patients with developmental disorders can be considered a forward functional genome screen. The imbalances pinpoint the location of genes that are involved in human development. Because most imbalances encompass regions harboring multiple genes, the challenge is to (1) identify those genes responsible for the specific phenotype and (2) disentangle the role of the different genes located in an imbalanced region. In this review, we discuss novel tools and relevant databases that have recently been developed to aid this gene discovery process. Identification of the functional relevance of genes will not only deepen our understanding of human development but will, in addition, aid in the data interpretation and improve genetic counseling.

  3. 48 CFR 352.227-14 - Rights in Data-Exceptional Circumstances.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....] Computer database or database means a collection of recorded information in a form capable of, and for the... databases or computer software documentation. Computer software documentation means owner's manuals, user's... nature (including computer databases and computer software documentation). This term does not include...

  4. RigFit: a new approach to superimposing ligand molecules.

    PubMed

    Lemmen, C; Hiller, C; Lengauer, T

    1998-09-01

    If structural knowledge of a receptor under consideration is lacking, drug design approaches focus on similarity or dissimilarity analysis of putative ligands. In this context, mutual ligand superposition is of utmost importance. Methods that are rapid enough to facilitate interactive usage, that allow processing of sets of conformers and that enable database screening are of special interest here. The ability to superpose molecular fragments instead of entire molecules has proven helpful too. The RIGFIT approach meets these requirements and has several additional advantages. In three distinct test applications, we evaluated how closely we can approximate the observed relative orientation for a set of known crystal structures, employed RIGFIT as a fragment placement procedure, and performed a fragment-based database screening. The run time of RIGFIT can be traded off against its accuracy. To be competitive in accuracy with another state-of-the-art alignment tool, with which we compare our method explicitly, computing times of about 6 s per superposition on a standard workstation are required. If longer run times can be afforded, the accuracy increases significantly. RIGFIT is part of the flexible superposition software FLEXS, which can be accessed on the WWW [http://cartan.gmd.de/FlexS].

  5. The "GeneTrustee": a universal identification system that ensures privacy and confidentiality for human genetic databases.

    PubMed

    Burnett, Leslie; Barlow-Stewart, Kris; Proos, Anné L; Aizenberg, Harry

    2003-05-01

    This article describes a generic model for access to samples and information in human genetic databases. The model utilises a "GeneTrustee", a third-party intermediary independent of the subjects and of the investigators or database custodians. The GeneTrustee model has been implemented successfully in various community genetics screening programs and has facilitated research access to genetic databases while protecting the privacy and confidentiality of research subjects. The GeneTrustee model could also be applied to various types of non-conventional genetic databases, including neonatal screening Guthrie card collections, and to forensic DNA samples.
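
    The GeneTrustee pattern is essentially third-party pseudonymization: only the trustee holds the identity-to-code mapping, so researchers see codes and custodians never see research queries. A minimal sketch of the data structure (class name, code format and example identities are all invented for illustration):

```python
class GeneTrustee:
    """Third-party intermediary holding the only link between subject
    identities and the opaque codes released to researchers."""

    def __init__(self):
        self._forward = {}  # identity -> code (held only by the trustee)
        self._reverse = {}  # code -> identity

    def register(self, identity):
        """Issue a new opaque code for a subject; researchers receive
        only this code alongside the de-identified sample data."""
        code = f"SUBJ-{len(self._forward) + 1:05d}"
        self._forward[identity] = code
        self._reverse[code] = identity
        return code

    def recontact(self, code):
        """Only the trustee can reverse a code, e.g. when ethics approval
        permits re-contacting a subject about a clinically relevant finding."""
        return self._reverse[code]

trustee = GeneTrustee()
code = trustee.register("Jane Doe|1970-01-01")
```

    In practice the mapping would live in an access-controlled store rather than in memory, but the separation of roles is the same: neither the investigators nor the database custodians ever hold both halves of the link.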

  6. Novel Hybrid Virtual Screening Protocol Based on Molecular Docking and Structure-Based Pharmacophore for Discovery of Methionyl-tRNA Synthetase Inhibitors as Antibacterial Agents

    PubMed Central

    Liu, Chi; He, Gu; Jiang, Qinglin; Han, Bo; Peng, Cheng

    2013-01-01

    Methionyl-tRNA synthetase (MetRS) is an essential enzyme involved in protein biosynthesis in all living organisms and is a potential antibacterial target. In the current study, the structure-based pharmacophore (SBP)-guided method has been suggested to generate a comprehensive pharmacophore of MetRS based on fourteen crystal structures of MetRS-inhibitor complexes. In this investigation, a hybrid virtual screening protocol, comprising pharmacophore model-based virtual screening (PBVS) and rigid and flexible docking-based virtual screening (DBVS), is used for retrieving new MetRS inhibitors from commercially available chemical databases. This hybrid virtual screening approach was then applied to screen the Specs database (202,408 compounds), a structurally diverse chemical database. Fifteen hit compounds were selected from the final hits and shifted to experimental studies. These results may provide important information for further research on novel MetRS inhibitors as antibacterial agents. PMID:23839093

  7. Fragment virtual screening based on Bayesian categorization for discovering novel VEGFR-2 scaffolds.

    PubMed

    Zhang, Yanmin; Jiao, Yu; Xiong, Xiao; Liu, Haichun; Ran, Ting; Xu, Jinxing; Lu, Shuai; Xu, Anyang; Pan, Jing; Qiao, Xin; Shi, Zhihao; Lu, Tao; Chen, Yadong

    2015-11-01

    The discovery of novel scaffolds against a specific target has long been one of the most significant yet challenging goals in discovering lead compounds. A scaffold that binds in important regions of the active pocket is more favorable as a starting point because scaffolds generally possess greater optimization possibilities. However, due to the lack of sufficient chemical space diversity in the databases and the ineffectiveness of the screening methods, it remains a great challenge to discover novel active scaffolds. Given the respective strengths and weaknesses of fragment-based drug design and traditional virtual screening (VS), we propose a fragment VS concept based on Bayesian categorization for the discovery of novel scaffolds. This work investigated the proposal through an application to the VEGFR-2 target. First, the scaffold and structural diversity of the chemical space of 10 compound databases were explicitly evaluated. Simultaneously, a robust Bayesian classification model was constructed for screening not only compound databases but also their corresponding fragment databases. Although analysis of the scaffold diversity demonstrated a very uneven distribution of scaffolds over molecules, the results showed that our Bayesian model behaved better in screening fragments than molecules. Through a retrospective literature search, several generated fragments with relatively high Bayesian scores indeed exhibit VEGFR-2 biological activity, which strongly supports the effectiveness of fragment VS based on Bayesian categorization models. This investigation of Bayesian-based fragment VS further emphasizes the need to enrich the compound databases employed in lead discovery by amplifying their diversity with novel structures.
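
    Bayesian categorization over binary fingerprints is commonly implemented as Laplace-smoothed per-bit log-odds, summed over the bits a molecule (or fragment) sets. A minimal sketch of that general scheme, with tiny invented training fingerprints (not the paper's actual model or descriptors):

```python
import math

def train_bayes(actives, inactives, n_bits):
    """Per-bit log-odds scores with Laplace (+1) smoothing: bits enriched
    in actives get positive scores, bits enriched in inactives negative."""
    scores = []
    for b in range(n_bits):
        a = sum(fp[b] for fp in actives) + 1
        i = sum(fp[b] for fp in inactives) + 1
        scores.append(math.log((a / (len(actives) + 2)) /
                               (i / (len(inactives) + 2))))
    return scores

def bayes_score(fp, scores):
    """Score a fingerprint by summing the log-odds of its set bits;
    higher scores suggest the compound or fragment resembles the actives."""
    return sum(s for bit, s in zip(fp, scores) if bit)

# toy 3-bit fingerprints: bit 0 marks actives, bit 1 marks inactives
actives = [[1, 0, 1], [1, 0, 0]]
inactives = [[0, 1, 0], [0, 1, 1]]
scores = train_bayes(actives, inactives, n_bits=3)
```

    The same trained model can score whole molecules or their constituent fragments, which is the core of the fragment VS idea above: fragments carrying active-enriched bits surface even when no full molecule in the library does.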

  8. Applications of computer-aided approaches in the development of hepatitis C antiviral agents.

    PubMed

    Ganesan, Aravindhan; Barakat, Khaled

    2017-04-01

    Hepatitis C virus (HCV) is a global health problem that causes several chronic life-threatening liver diseases. The numbers of people affected by HCV are rising annually. Since 2011, the FDA has approved several anti-HCV drugs, while many other promising HCV drugs are currently in late clinical trials. Areas covered: This review discusses the applications of different computational approaches in HCV drug design. Expert opinion: Molecular docking and virtual screening approaches have emerged as a low-cost tool to screen large databases and identify potential small-molecule hits against HCV targets. Ligand-based approaches are useful for filtering out compounds with favorable physicochemical properties for inhibiting HCV targets. Molecular dynamics (MD) remains a useful tool for optimizing ligand-protein complexes and understanding ligand binding modes and drug resistance mechanisms in HCV. Despite their varied roles, the application of in-silico approaches in HCV drug design is still in its infancy. A more mature application should aim at modelling the whole HCV replicon in its active form and help to identify new effective druggable sites within the replicon system. With more technological advancements, the roles of computer-aided methods are only going to increase several fold in the development of next-generation HCV drugs.

  9. Managing, profiling and analyzing a library of 2.6 million compounds gathered from 32 chemical providers.

    PubMed

    Monge, Aurélien; Arrault, Alban; Marot, Christophe; Morin-Allory, Luc

    2006-08-01

    The data for 3.8 million compounds from the structural databases of 32 providers were gathered and stored in a single chemical database. Duplicates are removed using the IUPAC International Chemical Identifier. After this, 2.6 million compounds remain. Each database, and the final one, were studied in terms of uniqueness, diversity, frameworks, and 'drug-like' and 'lead-like' properties. This study also shows that there are more than 87,000 frameworks in the database. It contains 2.1 million 'drug-like' molecules, of which more than one million are 'lead-like'. This study has been carried out using 'ScreeningAssistant', a software package dedicated to chemical database management and screening set generation. Compounds are stored in a MySQL database and all operations on this database are carried out by Java code. Druglikeness and leadlikeness are estimated with 'in-house' scores using functions to assess conformance to properties; uniqueness using the InChI code; and diversity using molecular frameworks and fingerprints. The software has been conceived in order to facilitate updates of the database. 'ScreeningAssistant' is freely available under the GPL license.
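
    The deduplication step above hinges on using a canonical identifier (the InChI) as a merge key. A minimal sketch of the idea; computing real InChIs requires a chemistry toolkit, so the records below carry precomputed identifier strings, and the vendor names are invented:

```python
def deduplicate(records):
    """Keep one record per canonical identifier (e.g. an InChI string).
    records: (provider, inchi, name) tuples; the first provider seen wins."""
    seen = {}
    for provider, inchi, name in records:
        seen.setdefault(inchi, (provider, name))
    return seen

records = [
    ("VendorA", "InChI=1S/C6H6/c1-2-4-6-5-3-1/h1-6H", "benzene"),
    ("VendorB", "InChI=1S/C6H6/c1-2-4-6-5-3-1/h1-6H", "benzol"),  # duplicate
    ("VendorB", "InChI=1S/CH4/h1H4", "methane"),
]
unique = deduplicate(records)
```

    Because InChI is canonical, the same molecule drawn or named differently by two vendors still collapses to one key, which is how 3.8 million catalog entries reduce to 2.6 million unique compounds.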

  10. DEEP: Database of Energy Efficiency Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Piette, Mary; Lee, Sang Hoon

    A database of energy efficiency performance (DEEP) is a presimulated database that enables quick and accurate assessment of energy retrofits of commercial buildings. DEEP was compiled from the results of about 10 million EnergyPlus simulations. DEEP provides energy savings for screening and evaluation of retrofit measures targeting small and medium-sized office and retail buildings in California. The prototype building models were developed for a comprehensive assessment of building energy performance, based on the DOE commercial reference buildings and the California DEER prototype buildings. The prototype buildings represent seven building types across six construction vintages and 16 California climate zones. DEEP uses these prototypes to evaluate the energy performance of about 100 energy conservation measures covering envelope, lighting, heating, ventilation, air conditioning, plug loads, and domestic hot water. DEEP contains energy simulation results for individual retrofit measures as well as packages of measures, to account for interactive effects between multiple measures. The large-scale EnergyPlus simulations were conducted on the supercomputers at the National Energy Research Scientific Computing Center (NERSC) of Lawrence Berkeley National Laboratory. The presimulated database is part of a CEC PIER project to develop a web-based retrofit toolkit for small and medium-sized commercial buildings in California, which provides real-time energy retrofit feedback by querying DEEP with recommended measures, estimated energy savings and financial payback periods based on users' decision criteria of maximizing energy savings, energy cost savings, carbon reduction, or payback of investment. The presimulated database and associated comprehensive measure analysis enhance the ability to assess retrofits that reduce energy use in small and medium buildings, whose owners typically do not have the resources to conduct costly building energy audits.

  11. A computer-assisted data collection system for use in a multicenter study of American Indians and Alaska Natives: SCAPES.

    PubMed

    Edwards, Roger L; Edwards, Sandra L; Bryner, James; Cunningham, Kelly; Rogers, Amy; Slattery, Martha L

    2008-04-01

    We describe a computer-assisted data collection system developed for a multicenter cohort study of American Indian and Alaska Native people. The study computer-assisted participant evaluation system, or SCAPES, is built around a central database server that controls a small private network of touch-screen workstations. SCAPES encompasses self-administered questionnaires, keyboard-based stations for interviewer-administered questionnaires, a system for inputting medical measurements, and administrative functions such as data export, backup, and management. Elements of the SCAPES hardware/network design, data storage, programming language, software choices, and questionnaire programming, including questionnaires administered using audio computer-assisted self-interviewing (ACASI), as well as the participant identification/data security system, are presented. Unique features of SCAPES are that data are promptly made available to participants in the form of health feedback; data can be quickly summarized for tribes for health monitoring and planning at the community level; and data are available to study investigators for analyses and scientific evaluation.

  12. A reliable computational workflow for the selection of optimal screening libraries.

    PubMed

    Gilad, Yocheved; Nadassy, Katalin; Senderowitz, Hanoch

    2015-01-01

    The experimental screening of compound collections is a common starting point in many drug discovery projects. The success of such screening campaigns depends critically on the quality of the screened library. Many libraries are currently available from different vendors, yet selecting the optimal screening library for a specific project is challenging. We have devised a novel workflow for the rational selection of project-specific screening libraries. The workflow accepts as input a set of virtual candidate libraries and applies the following steps to each library: (1) data curation; (2) assessment of ADME/T profile; (3) assessment of the number of promiscuous binders/frequent HTS hitters; (4) assessment of internal diversity; (5) assessment of similarity to known active compound(s) (optional); (6) assessment of similarity to in-house or otherwise accessible compound collections (optional). For ADME/T profiling, Lipinski's and Veber's rule-based filters were implemented, and a new blood-brain barrier permeation model was developed and validated (85% and 74% success rates for the training and test sets, respectively). Diversity and similarity descriptors that performed best at selecting either diverse or focused sets of compounds from three databases (DrugBank, CMC and ChEMBL) were identified and used for the diversity and similarity assessments. The workflow was used to analyze nine common screening libraries available from six vendors; the results of this analysis are reported for each library, providing an assessment of its quality. Furthermore, a consensus approach was developed to combine the results of these analyses into a single score for selecting the optimal library under different scenarios. 
The workflow was implemented using Pipeline Pilot software, yet because it is built from generic components it can be easily adapted and reproduced by computational groups interested in the rational selection of screening libraries, and it can readily be modified to include additional components. It has been used routinely in our laboratory for library selection in multiple projects and consistently selects libraries that are well balanced across multiple parameters.
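    The consensus step described above can be sketched as a weighted mean of per-criterion scores. The criterion names, scores, and weights below are illustrative assumptions, not the paper's actual scheme.

    ```python
    def consensus_score(metrics, weights):
        """Weighted mean of per-criterion scores, each normalized to [0, 1]."""
        total = sum(weights.values())
        return sum(metrics[name] * w for name, w in weights.items()) / total

    # Hypothetical per-library scores (higher is better; e.g. a promiscuity
    # score of 0.8 means few frequent hitters) with weights favouring ADME/T.
    libraries = {
        "library_A": {"admet": 0.9, "diversity": 0.6, "promiscuity": 0.8},
        "library_B": {"admet": 0.7, "diversity": 0.9, "promiscuity": 0.6},
    }
    weights = {"admet": 2.0, "diversity": 1.0, "promiscuity": 1.0}

    ranked = sorted(libraries,
                    key=lambda name: consensus_score(libraries[name], weights),
                    reverse=True)
    print(ranked[0])  # library_A
    ```

    Changing the weights models the "different scenarios" the authors mention, e.g. up-weighting diversity for a primary screen.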

  13. DoD Identity Matching Engine for Security and Analysis (IMESA) Access to Criminal Justice Information (CJI) and Terrorist Screening Databases (TSDB)

    DTIC Science & Technology

    2016-05-04

    IMESA) Access to Criminal Justice Information (CJI) and Terrorist Screening Databases (TSDB) References: See Enclosure 1 1. PURPOSE. In...CJI database mirror image files. (3) Memorandums of understanding with the FBI CJIS as the data broker for DoD organizations that need access ...not for access determinations. (3) Legal restrictions established by the Sex Offender Registration and Notification Act (SORNA) jurisdictions on

  14. Countermeasure Evaluation and Validation Project (CEVP) Database Requirement Documentation

    NASA Technical Reports Server (NTRS)

    Shin, Sung Y.

    2003-01-01

    The initial focus of the project by the JSC laboratories will be to develop, test, and implement a standardized complement of integrated physiological tests (the Integrated Testing Regimen, ITR) that will examine both system and intersystem function and will be used to validate and certify candidate countermeasures. The ITR will consist of medical requirements (MRs), non-MR core ITR tests, and countermeasure-specific testing. Non-MR and countermeasure-specific test data will be archived in a database specific to the CEVP. Development of a CEVP database will be critical to documenting the progress of candidate countermeasures. The goal of this work is a fully functional software system that will integrate computer-based data collection and storage with secure, efficient, and practical distribution of that data over the Internet. This system will provide the foundation for a new level of interagency and international cooperation for scientific experimentation and research, supporting intramural, international, and extramural collaboration through management and distribution of the CEVP data. The research performed this summer comprised the first two phases of the project. The first phase was a requirements analysis, identifying the expected behavior of the system under normal and abnormal conditions, the conditions that could affect the system's ability to produce this behavior, and the internal features needed to reduce the risk of unexpected or unwanted behaviors. The second phase was the design of data entry and data retrieval screens for a working model of the Ground Data Database. The final report presents the requirements for the CEVP system in a variety of ways, so that both the development team and JSC technical management have a thorough understanding of how the system is expected to behave.

  15. Primary prevention of sudden cardiac death of the young athlete: the controversy about the screening electrocardiogram and its innovative artificial intelligence solution.

    PubMed

    Chang, Anthony C

    2012-03-01

    The preparticipation screening for athlete participation in sports typically entails a comprehensive medical and family history and a complete physical examination. A 12-lead electrocardiogram (ECG) can increase the likelihood of detecting cardiac diagnoses such as hypertrophic cardiomyopathy, but this diagnostic test as part of the screening process has engendered considerable controversy. The pro position is supported by argument that international screening protocols support its use, positive diagnosis has multiple benefits, history and physical examination are inadequate, primary prevention is essential, and the cost effectiveness is justified. Although the aforementioned myriad of justifications for routine ECG screening of young athletes can be persuasive, several valid contentions oppose supporting such a policy, namely, that the sudden death incidence is very (too) low, the ECG screening will be too costly, the false-positive rate is too high, resources will be allocated away from other diseases, and manpower is insufficient for its execution. Clinicians, including pediatric cardiologists, have an understandable proclivity for avoiding this prodigious national endeavor. The controversy, however, should not be focused on whether an inexpensive, noninvasive test such as an ECG should be mandated but should instead be directed at just how these tests for young athletes can be performed in the clinical imbroglio of these disease states (with variable genetic penetrance and phenotypic expression) with concomitant fiscal accountability and logistical expediency in this era of economic restraint. This monumental endeavor in any city or region requires two crucial elements well known to business scholars: implementation and execution. The eventual solution for the screening ECG dilemma requires a truly innovative and systematic approach that will liberate us from inadequate conventional solutions. 
Artificial intelligence, specifically the process termed "machine learning" and "neural networking," involves complex algorithms that allow computers to improve the decision-making process based on repeated input of empirical data (e.g., databases and ECGs). These elements all can be improved with a national database, evidence-based medicine, and in the near future, innovation that entails a Kurzweilian artificial intelligence infrastructure with machine learning and neural networking that will construct the ultimate clinical decision-making algorithm.

  16. A hybrid CNN feature model for pulmonary nodule malignancy risk differentiation.

    PubMed

    Wang, Huafeng; Zhao, Tingting; Li, Lihong Connie; Pan, Haixia; Liu, Wanquan; Gao, Haoqi; Han, Fangfang; Wang, Yuehai; Qi, Yifan; Liang, Zhengrong

    2018-01-01

    The malignancy risk differentiation of pulmonary nodules is one of the most challenging tasks of computer-aided diagnosis (CADx). Most recently reported CADx methods or schemes based on texture and shape estimation have shown relatively satisfactory performance in differentiating the malignancy risk of nodules detected in lung cancer screening. However, the existing CADx schemes tend to detect and analyze the characteristics of pulmonary nodules from a statistical perspective according to local features only. Inspired by the currently prevailing learning ability of convolutional neural networks (CNNs), which simulate the human neural network for target recognition, and by our previous research on texture features, we present a hybrid model that takes both global and local features into consideration for pulmonary nodule differentiation, using the largest public database, founded by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). By comparing three types of CNN models, two of which we newly propose, we observed that the multi-channel CNN model yielded the best capacity for differentiating the malignancy risk of the nodules based on the projection of distributions of extracted features. Moreover, the CADx scheme using the new multi-channel CNN model outperformed our previously developed CADx scheme based on 3D texture feature analysis, increasing the computed area under the receiver operating characteristic curve (AUC) from 0.9441 to 0.9702.
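    The AUC figures quoted above can be computed directly from classifier scores via the Mann-Whitney formulation; a minimal sketch with made-up scores (not the LIDC-IDRI results):

    ```python
    def roc_auc(pos_scores, neg_scores):
        """Area under the ROC curve as the Mann-Whitney U statistic:
        the probability that a positive case scores above a negative
        one, with ties counting half."""
        wins = 0.0
        for p in pos_scores:
            for n in neg_scores:
                if p > n:
                    wins += 1.0
                elif p == n:
                    wins += 0.5
        return wins / (len(pos_scores) * len(neg_scores))

    # Illustrative malignancy scores for malignant (pos) and benign (neg) nodules.
    print(roc_auc([0.9, 0.8, 0.4], [0.5, 0.3, 0.2]))  # 0.888...
    ```

    This O(n·m) form is fine for illustration; production code would sort the scores once and use rank sums.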

  17. A gene expression biomarker accurately predicts estrogen ...

    EPA Pesticide Factsheets

    The EPA’s vision for the Endocrine Disruptor Screening Program (EDSP) in the 21st Century (EDSP21) includes utilization of high-throughput screening (HTS) assays coupled with computational modeling to prioritize chemicals with the goal of eventually replacing current Tier 1 screening tests. The ToxCast program currently includes 18 HTS in vitro assays that evaluate the ability of chemicals to modulate estrogen receptor α (ERα), an important endocrine target. We propose microarray-based gene expression profiling as a complementary approach to predict ERα modulation and have developed computational methods to identify ERα modulators in an existing database of whole-genome microarray data. The ERα biomarker consisted of 46 ERα-regulated genes with consistent expression patterns across 7 known ER agonists and 3 known ER antagonists. The biomarker was evaluated as a predictive tool using the fold-change rank-based Running Fisher algorithm by comparison to annotated gene expression data sets from experiments in MCF-7 cells. Using 141 comparisons from chemical- and hormone-treated cells, the biomarker gave a balanced accuracy for prediction of ERα activation or suppression of 94% or 93%, respectively. The biomarker was able to correctly classify 18 out of 21 (86%) OECD ER reference chemicals including “very weak” agonists and replicated predictions based on 18 in vitro ER-associated HTS assays. For 114 chemicals present in both the HTS data and the MCF-7 c
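    The Running Fisher algorithm used above is a rank-based variant of a classical idea. As a hedged illustration of that underlying idea only, the classic Fisher combined-probability test can be written with the standard library alone, since the chi-square survival function has a closed form for even degrees of freedom:

    ```python
    import math

    def fisher_combined_pvalue(pvalues):
        """Fisher's combined probability test: X = -2 * sum(ln p_i) follows
        a chi-square distribution with 2k degrees of freedom under the null.
        For even degrees of freedom, P(X >= x) = exp(-x/2) * sum_{i<k} (x/2)^i / i!,
        so no scipy is required."""
        k = len(pvalues)
        x = -2.0 * sum(math.log(p) for p in pvalues)
        half = x / 2.0
        return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))

    # Three weakly significant gene-level p-values combine into stronger evidence.
    print(fisher_combined_pvalue([0.04, 0.10, 0.30]))
    ```

    With a single p-value the function returns that p-value unchanged, a convenient sanity check.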

  18. cnvScan: a CNV screening and annotation tool to improve the clinical utility of computational CNV prediction from exome sequencing data.

    PubMed

    Samarakoon, Pubudu Saneth; Sorte, Hanne Sørmo; Stray-Pedersen, Asbjørg; Rødningen, Olaug Kristin; Rognes, Torbjørn; Lyle, Robert

    2016-01-14

    With advances in next generation sequencing technology and analysis methods, single nucleotide variants (SNVs) and indels can be detected with high sensitivity and specificity in exome sequencing data. Recent studies have demonstrated the ability to detect disease-causing copy number variants (CNVs) in exome sequencing data. However, exonic CNV prediction programs have shown high false positive CNV counts, which is the major limiting factor for the applicability of these programs in clinical studies. We have developed a tool (cnvScan) to improve the clinical utility of computational CNV prediction in exome data. cnvScan can accept input from any CNV prediction program. cnvScan consists of two steps: CNV screening and CNV annotation. CNV screening evaluates CNV prediction using quality scores and refines this using an in-house CNV database, which greatly reduces the false positive rate. The annotation step provides functionally and clinically relevant information using multiple source datasets. We assessed the performance of cnvScan on CNV predictions from five different prediction programs using 64 exomes from Primary Immunodeficiency (PIDD) patients, and identified PIDD-causing CNVs in three individuals from two different families. In summary, cnvScan reduces the time and effort required to detect disease-causing CNVs by reducing the false positive count and providing annotation. This improves the clinical utility of CNV detection in exome data.
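    A minimal sketch of the screening step, assuming hypothetical field names for CNV calls (chrom/start/end/quality) and treating overlap with a recurrent in-house call as grounds for exclusion; cnvScan's actual filtering logic is more involved than this:

    ```python
    def overlaps(a, b):
        """Half-open interval overlap on the same chromosome."""
        return a["chrom"] == b["chrom"] and a["start"] < b["end"] and b["start"] < a["end"]

    def screen_cnvs(calls, min_quality, inhouse_db):
        """Drop low-quality calls, then drop calls overlapping the in-house
        database of recurrent (likely artifactual or common) CNVs."""
        kept = []
        for cnv in calls:
            if cnv["quality"] < min_quality:
                continue
            if any(overlaps(cnv, known) for known in inhouse_db):
                continue
            kept.append(cnv)
        return kept

    calls = [
        {"chrom": "1", "start": 100, "end": 500, "quality": 60},
        {"chrom": "1", "start": 1000, "end": 2000, "quality": 10},  # low quality
        {"chrom": "2", "start": 300, "end": 800, "quality": 75},    # recurrent
    ]
    inhouse_db = [{"chrom": "2", "start": 250, "end": 900}]
    print(len(screen_cnvs(calls, 30, inhouse_db)))  # 1
    ```

    The surviving calls would then proceed to the annotation step, where clinical datasets are joined on the same coordinates.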

  19. Magnetic Resonance Imaging as an Adjunct to Mammography for Breast Cancer Screening in Women at Less Than High Risk for Breast Cancer: A Health Technology Assessment

    PubMed Central

    Nikitovic-Jokic, Milica; Holubowich, Corinne

    2016-01-01

    Background Screening with mammography can detect breast cancer early, before clinical symptoms appear. Some cancers, however, are not captured with mammography screening alone. Among women at high risk for breast cancer, magnetic resonance imaging (MRI) has been suggested as a safe adjunct (supplemental) screening tool that can detect breast cancers missed on screening mammography, potentially reducing the number of deaths associated with the disease. However, the use of adjunct screening tests may also increase the number of false-positive test results, which may lead to unnecessary follow-up testing, as well as patient stress and anxiety. We investigated the benefits and harms of MRI as an adjunct to mammography compared with mammography alone for screening women at less than high risk (average or higher than average risk) for breast cancer. Methods We searched Ovid MEDLINE, Ovid Embase, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects (DARE), Centre for Reviews and Dissemination (CRD) Health Technology Assessment Database, and National Health Service (NHS) Economic Evaluation Database, from January 2002 to January 2016, for evidence of effectiveness, harms, and diagnostic accuracy. Only studies evaluating the use of screening breast MRI as an adjunct to mammography in the specified populations were included. Results No studies in women at less than high risk for breast cancer met our inclusion criteria. Conclusions It remains uncertain if the use of adjunct screening breast MRI in women at less than high risk (average or higher than average risk) for breast cancer will reduce breast cancer–related mortality without significant increases in unnecessary follow-up testing and treatment. PMID:27990198

  20. 48 CFR 52.227-14 - Rights in Data-General.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... database or database means a collection of recorded information in a form capable of, and for the purpose... enable the computer program to be produced, created, or compiled. (2) Does not include computer databases... databases and computer software documentation). This term does not include computer software or financial...

  1. 48 CFR 52.227-14 - Rights in Data-General.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... database or database means a collection of recorded information in a form capable of, and for the purpose... enable the computer program to be produced, created, or compiled. (2) Does not include computer databases... databases and computer software documentation). This term does not include computer software or financial...

  2. 48 CFR 52.227-14 - Rights in Data-General.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... database or database means a collection of recorded information in a form capable of, and for the purpose... enable the computer program to be produced, created, or compiled. (2) Does not include computer databases... databases and computer software documentation). This term does not include computer software or financial...

  3. 48 CFR 52.227-14 - Rights in Data-General.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... database or database means a collection of recorded information in a form capable of, and for the purpose... enable the computer program to be produced, created, or compiled. (2) Does not include computer databases... databases and computer software documentation). This term does not include computer software or financial...

  4. Neighborhood Structural Similarity Mapping for the Classification of Masses in Mammograms.

    PubMed

    Rabidas, Rinku; Midya, Abhishek; Chakraborty, Jayasree

    2018-05-01

    In this paper, two novel feature extraction methods based on neighborhood structural similarity (NSS) are proposed for characterizing mammographic masses as benign or malignant. Since the gray-level distribution of pixels differs between benign and malignant masses, with more regular and homogeneous patterns visible in benign masses, the proposed methods exploit the similarity between neighboring regions of a mass through two new features, NSS-I and NSS-II, which capture global similarity at different scales. Complementary to these global features, uniform local binary patterns are computed to enhance classification efficiency when combined with the proposed features. The performance of the features is evaluated using images from the mini-mammographic image analysis society (mini-MIAS) and digital database for screening mammography (DDSM) databases, where a tenfold cross-validation technique is incorporated with Fisher linear discriminant analysis after selecting the optimal set of features using a stepwise logistic regression method. The best area under the receiver operating characteristic curve achieved is 0.98 with the mini-MIAS database and 0.93 with the DDSM database.

  5. SuperNatural: a searchable database of available natural compounds.

    PubMed

    Dunkel, Mathias; Fullbeck, Melanie; Neumann, Stefanie; Preissner, Robert

    2006-01-01

    Although tremendous effort has been put into synthetic libraries, most drugs on the market are still natural compounds or derivatives thereof. There are encyclopaedias of natural compounds, but the availability of these compounds is often unclear and catalogues from numerous suppliers have to be checked. To overcome these problems we have compiled a database of approximately 50,000 natural compounds from different suppliers. To enable efficient identification of the desired compounds, we have implemented substructure searches with typical templates. Starting points for in silico screenings are about 2500 well-known and classified natural compounds from a compendium that we have added. Possible medical applications can be ascertained via automatic searches for similar drugs in a free conformational drug database containing WHO indications. Furthermore, we have computed about three million conformers, which are deployed to account for the flexibility of the compounds when the 3D superposition algorithm that we have developed is used. The SuperNatural Database is publicly available at http://bioinformatics.charite.de/supernatural. Viewing requires the free Chime plug-in from MDL (Chime) or the Java 2 Runtime Environment (MView), the latter also being necessary for using the Marvin application for chemical drawing.

  6. Development and Validation of a Qualitative Method for Target Screening of 448 Pesticide Residues in Fruits and Vegetables Using UHPLC/ESI Q-Orbitrap Based on Data-Independent Acquisition and Compound Database.

    PubMed

    Wang, Jian; Chow, Willis; Chang, James; Wong, Jon W

    2017-01-18

    A semiautomated qualitative method for target screening of 448 pesticide residues in fruits and vegetables was developed and validated using ultrahigh-performance liquid chromatography coupled with electrospray ionization quadrupole Orbitrap high-resolution mass spectrometry (UHPLC/ESI Q-Orbitrap). The Q-Orbitrap Full MS/dd-MS2 (data-dependent acquisition) mode was used to acquire product-ion spectra of individual pesticides to build a compound database, or MS library, while the Full MS/DIA (data-independent acquisition) mode was used to acquire sample data from fruit and vegetable matrices fortified with pesticides at 10 and 100 μg/kg for target screening purposes. Accurate mass, retention time, and response threshold were the three key parameters in the compound database used to detect incurred pesticide residues in samples. The concepts and practical aspects of in-spectrum mass correction or solvent-background lock-mass correction, retention time alignment, and response threshold adjustment are discussed in the context of building a functional, working compound database for target screening. The validated method is capable of screening at least 94% and 99% of the 448 pesticides at 10 and 100 μg/kg, respectively, in fruits and vegetables without having to evaluate every compound manually during data processing, which significantly reduces the workload in routine practice.
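    The three database parameters translate naturally into a tolerance-based lookup. A sketch assuming a parts-per-million window on accurate mass and a retention-time window in minutes; the tolerance values and the retention times below are illustrative, not the paper's:

    ```python
    def ppm_error(observed, theoretical):
        """Mass error in parts per million."""
        return (observed - theoretical) / theoretical * 1e6

    def match_peak(peak, database, ppm_tol=5.0, rt_tol=0.5):
        """Return names of database entries whose theoretical m/z lies within
        ppm_tol of the observed peak and whose retention time is within
        rt_tol minutes (response threshold is omitted for brevity)."""
        return [entry["name"] for entry in database
                if abs(ppm_error(peak["mz"], entry["mz"])) <= ppm_tol
                and abs(peak["rt"] - entry["rt"]) <= rt_tol]

    # [M+H]+ monoisotopic masses; retention times are illustrative.
    database = [
        {"name": "carbendazim", "mz": 192.0768, "rt": 5.2},
        {"name": "thiabendazole", "mz": 202.0433, "rt": 6.8},
    ]
    peak = {"mz": 192.0770, "rt": 5.3}
    print(match_peak(peak, database))  # ['carbendazim']
    ```

    In practice the database would be sorted by m/z so candidate entries can be found by binary search instead of a full scan.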

  7. Computational oncology.

    PubMed

    Lefor, Alan T

    2011-08-01

    Oncology research has traditionally been conducted using techniques from the biological sciences. The new field of computational oncology has forged a new relationship between the physical sciences and oncology to further advance research. By applying physics and mathematics to oncologic problems, new insights will emerge into the pathogenesis and treatment of malignancies. One major area of investigation in computational oncology centers around the acquisition and analysis of data, using improved computing hardware and software. Large databases of cellular pathways are being analyzed to understand the interrelationship among complex biological processes. Computer-aided detection is being applied to the analysis of routine imaging data including mammography and chest imaging to improve the accuracy and detection rate for population screening. The second major area of investigation uses computers to construct sophisticated mathematical models of individual cancer cells as well as larger systems using partial differential equations. These models are further refined with clinically available information to more accurately reflect living systems. One of the major obstacles in the partnership between physical scientists and the oncology community is communications. Standard ways to convey information must be developed. Future progress in computational oncology will depend on close collaboration between clinicians and investigators to further the understanding of cancer using these new approaches.

  8. Screen-detected versus interval cancers: Effect of imaging modality and breast density in the Flemish Breast Cancer Screening Programme.

    PubMed

    Timmermans, Lore; Bleyen, Luc; Bacher, Klaus; Van Herck, Koen; Lemmens, Kim; Van Ongeval, Chantal; Van Steen, Andre; Martens, Patrick; De Brabander, Isabel; Goossens, Mathieu; Thierens, Hubert

    2017-09-01

    To investigate if direct radiography (DR) performs better than screen-film mammography (SF) and computed radiography (CR) in dense breasts in a decentralized organised Breast Cancer Screening Programme. To this end, screen-detected versus interval cancers were studied in different BI-RADS density classes for these imaging modalities. The study cohort consisted of 351,532 women who participated in the Flemish Breast Cancer Screening Programme in 2009 and 2010. Information on screen-detected and interval cancers, breast density scores of radiologist second readers, and imaging modality was obtained by linkage of the databases of the Centre of Cancer Detection and the Belgian Cancer Registry. Overall, 67% of occurring breast cancers are screen detected and 33% are interval cancers, with DR performing better than SF and CR. The interval cancer rate increases gradually with breast density, regardless of modality. In the high-density class, the interval cancer rate exceeds the cancer detection rate for SF and CR, but not for DR. DR is superior to SF and CR with respect to cancer detection rates for high-density breasts. To reduce the high interval cancer rate in dense breasts, use of an additional imaging technique in screening can be taken into consideration. • Interval cancer rate increases gradually with breast density, regardless of modality. • Cancer detection rate in high-density breasts is superior in DR. • IC rate exceeds CDR for SF and CR in high-density breasts. • DR performs better in high-density breasts for third readings and false-positives.

  9. Informatics applied to cytology

    PubMed Central

    Hornish, Maryanne; Goulart, Robert A.

    2008-01-01

    Automation and emerging information technologies are being adopted by cytology laboratories to augment Pap test screening and improve diagnostic accuracy. As a result, informatics, the application of computers and information systems to information management, has become essential for the successful operation of the cytopathology laboratory. This review describes how laboratory information management systems can be used to achieve an automated and seamless workflow process. The utilization of software, electronic databases and spreadsheets to perform necessary quality control measures are discussed, as well as a Lean production system and Six Sigma approach, to reduce errors in the cytopathology laboratory. PMID:19495402

  10. System and methods for predicting transmembrane domains in membrane proteins and mining the genome for recognizing G-protein coupled receptors

    DOEpatents

    Trabanino, Rene J; Vaidehi, Nagarajan; Hall, Spencer E; Goddard, William A; Floriano, Wely

    2013-02-05

    The invention provides computer-implemented methods and apparatus implementing a hierarchical protocol using multiscale molecular dynamics and molecular modeling methods to predict the presence of transmembrane regions in proteins, such as G-Protein Coupled Receptors (GPCR), and protein structural models generated according to the protocol. The protocol features a coarse grain sampling method, such as hydrophobicity analysis, to provide a fast and accurate procedure for predicting transmembrane regions. Methods and apparatus of the invention are useful to screen protein or polynucleotide databases for encoded proteins with transmembrane regions, such as GPCRs.
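    A coarse-grain hydrophobicity analysis of the kind the protocol describes is commonly implemented as a Kyte-Doolittle sliding-window average. A minimal sketch; the window size and threshold are conventional choices, not necessarily those of the patent:

    ```python
    # Kyte-Doolittle hydropathy scale (published values).
    KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
          "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
          "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
          "Y": -1.3, "V": 4.2}

    def hydropathy_profile(seq, window=19):
        """Mean hydropathy over a sliding window along the sequence."""
        return [sum(KD[a] for a in seq[i:i + window]) / window
                for i in range(len(seq) - window + 1)]

    def candidate_tm_windows(seq, window=19, threshold=1.6):
        """Window start positions whose mean hydropathy suggests a
        transmembrane helix (threshold ~1.6 is a common convention)."""
        return [i for i, h in enumerate(hydropathy_profile(seq, window))
                if h >= threshold]

    print(candidate_tm_windows("L" * 19))  # [0]  -- a strongly hydrophobic stretch
    print(candidate_tm_windows("D" * 25))  # []   -- a polar stretch
    ```

    The patent's full protocol refines such coarse predictions with multiscale molecular dynamics; this sketch covers only the fast first pass.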

  11. Simultaneous real-time data collection methods

    NASA Technical Reports Server (NTRS)

    Klincsek, Thomas

    1992-01-01

    This paper describes the development of electronic test equipment that executes, supervises, and reports on various tests. The validation process uses computers to analyze test results and report conclusions. The test equipment consists of an electronics component and a data collection and reporting unit. The PC software, display screens, and real-time database are described, and pass/fail procedures and data replay are discussed. The OS/2 operating system and Presentation Manager user interface were used to create a highly interactive automated system. The system outputs are hardcopy printouts and MS-DOS format files that may be used as input to other PC programs.

  12. Position of document holder and work related risk factors for neck pain among computer users: a narrative review.

    PubMed

    Ambusam, S; Baharudin, O; Roslizawati, N; Leonard, J

    2015-01-01

    A document holder is used as a remedy for occupational neck pain among computer users. The effects of the document holder, together with other work-related risk factors present at computer workstations, require attention. This article reviews current knowledge on the optimal location of the document holder during computer use and the associated work-related factors that may contribute to neck pain. A literature search was conducted for articles published from January 1990 to January 2014 in both the Science Direct and PubMed databases. Medical Subject Headings (MeSH) keywords for the search were neck muscle OR head posture OR muscle tension OR muscle activity OR work related disorders OR neck pain AND/OR document location OR document holder OR source document OR copy screen holder. A document holder placed lateral to the screen was most preferred for reducing neck discomfort among occupational typists, whereas a document placed flat on the surface without a holder was least preferred. Changes in head posture and increases in muscle activity occur when the document is placed flat on the surface rather than on a document holder. Work-related factors such as static posture, repetitive movements, prolonged sitting, and awkward positions were risk factors for chronic neck pain. This review highlights the optimal location of the document holder for computer users to reduce neck pain, and emphasizes the importance of work-related risk factors for neck pain in occupational typists for clinical management.

  13. Patient navigation for lung cancer screening in an urban safety-net system: Protocol for a pragmatic randomized clinical trial.

    PubMed

    Gerber, David E; Hamann, Heidi A; Santini, Noel O; Abbara, Suhny; Chiu, Hsienchang; McGuire, Molly; Quirk, Lisa; Zhu, Hong; Lee, Simon J Craddock

    2017-09-01

    The National Lung Screening Trial demonstrated improved lung cancer mortality with annual low-dose computed tomography (CT) screening, leading to endorsement of lung cancer screening by the United States Preventive Services Task Force and coverage by the Centers for Medicare and Medicaid Services. Adherence to annual CT screens in that trial was 95%, which may not be representative of real-world, particularly medically underserved, populations. This pragmatic trial will determine the effect of patient-focused, telephone-based patient navigation on adherence to CT-based lung cancer screening in an urban safety-net population. 340 adults who meet standard eligibility criteria for lung cancer screening (age 55-77 years, smoking history ≥30 pack-years, quit within 15 years if former smokers) are referred through an electronic medical record-based order by physicians in community- and hospital-based primary care settings within the Parkland Health and Hospital System in Dallas County, Texas. Eligible patients are randomized to usual care or patient navigation, which addresses adherence, patient-reported barriers, smoking cessation, and psychosocial concerns related to screening completion. Patients complete surveys and semi-structured interviews at baseline and at 6-month and 18-month follow-ups to assess attitudes toward screening. The primary endpoint of this pragmatic trial is adherence to three sequential, prospectively defined steps in the screening protocol; secondary endpoints include self-reported tobacco use and other patient-reported outcomes. Results will provide real-world insight into the impact of patient navigation on adherence to CT-based lung cancer screening in a medically underserved population. This study was registered with the NIH ClinicalTrials.gov database (NCT02758054) on April 26, 2016.

  14. A web-based platform for virtual screening.

    PubMed

    Watson, Paul; Verdonk, Marcel; Hartshorn, Michael J

    2003-09-01

    A fully integrated, web-based virtual screening platform has been developed to allow rapid virtual screening of large numbers of compounds. ORACLE is used to store information at all stages of the process. The system includes ATLAS, a large database of historical compounds from high-throughput screening (HTS) and chemical suppliers, containing over 3.1 million unique compounds with their associated physicochemical properties (ClogP, MW, etc.). The database can be screened using a web-based interface to produce compound subsets for virtual screening or virtual library (VL) enumeration. To carry out the latter task within ORACLE, a reaction data cartridge has been developed. Virtual libraries can be enumerated rapidly using the web-based interface to the cartridge. The compound subsets can be seamlessly submitted for virtual screening experiments, and the results can be viewed via another web-based interface allowing ad hoc querying of the virtual screening data stored in ORACLE.
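
A minimal sketch of the property-based subset selection such an interface might expose; the compound identifiers and property values below are invented, not drawn from ATLAS:

```python
# Hypothetical sketch: filter a compound table on physicochemical ranges
# (e.g. ClogP and molecular weight) to build a virtual-screening subset.
# All compound data here are invented for illustration.

compounds = [
    {"id": "ATL-1", "clogp": 1.8, "mw": 310.4},
    {"id": "ATL-2", "clogp": 5.9, "mw": 480.2},
    {"id": "ATL-3", "clogp": 3.2, "mw": 655.0},
]

def select_subset(rows, max_clogp=5.0, max_mw=500.0):
    """Return the ids of compounds within the requested property ranges."""
    return [r["id"] for r in rows if r["clogp"] <= max_clogp and r["mw"] <= max_mw]

print(select_subset(compounds))  # ['ATL-1']
```

In a database-backed system this filter would of course be a query rather than an in-memory scan; the sketch only shows the selection logic.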

  15. Feasibility and impact of a computer-guided consultation on guideline-based management of COPD in general practice.

    PubMed

    Angus, Robert M; Thompson, Elizabeth B; Davies, Lisa; Trusdale, Ann; Hodgson, Chris; McKnight, Eddie; Davies, Andrew; Pearson, Mike G

    2012-12-01

    Applying guidelines is a universal challenge that is often not met. Intelligent software systems that facilitate real-time management during a clinical interaction may offer a solution. To determine if the use of a computer-guided consultation that facilitates the National Institute for Health and Clinical Excellence-based chronic obstructive pulmonary disease (COPD) guidance and prompts clinical decision-making is feasible in primary care and to assess its impact on diagnosis and management in reviews of COPD patients. Practice nurses, one-third of whom had no specific respiratory training, undertook a computer-guided review in the usual consulting room setting using a laptop computer with the screen visible to them and to the patient. A total of 293 patients (mean (SD) age 69.7 (10.1) years, 163 (55.6%) male) with a diagnosis of COPD were randomly selected from GP databases in 16 practices and assessed. Of 236 patients who had spirometry, 45 (19%) did not have airflow obstruction and the guided clinical history changed the primary diagnosis from COPD in a further 24 patients. In the 191 patients with confirmed COPD, the consultations prompted management changes including 169 recommendations for altered prescribing of inhalers (addition or discontinuation, inhaler dose or device). In addition, 47% of the 55 current smokers were referred for smoking cessation support, 12 (6%) for oxygen assessment, and 47 (24%) for pulmonary rehabilitation. Computer-guided consultations are practicable in general practice. Primary care COPD databases were confirmed to contain a significant proportion of incorrectly assigned patients. The consultations resulted in interventions and the rationalisation of prescribing in line with recommendations. Only in 22 (12%) of those fully assessed was no management change suggested. The introduction of a computer-guided consultation offers the prospect of comprehensive guideline quality management.

  16. In silico design and screening of hypothetical MOF-74 analogs and their experimental synthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witman, Matthew; Ling, Sanliang; Anderson, Samantha

    Here, we present the in silico design of metal-organic frameworks (MOFs) exhibiting 1-dimensional rod topologies. We then introduce an algorithm for construction of this family of MOF topologies, and illustrate its application for enumerating MOF-74-type analogs. Furthermore, we perform a broad search for new linkers that satisfy the topological requirements of MOF-74 and consider the largest database of known chemical space for organic compounds, the PubChem database. Our in silico crystal assembly, when combined with dispersion-corrected density functional theory (DFT) calculations, is demonstrated to generate a hypothetical library of open-metal-site-containing MOF-74 analogs in the 1-D rod topology from which we can simulate the adsorption behavior of CO2. We conclude that these hypothetical structures have synthesizable potential through computational identification and experimental validation of a novel MOF-74 analog, Mg2(olsalazine).

  17. In silico design and screening of hypothetical MOF-74 analogs and their experimental synthesis

    DOE PAGES

    Witman, Matthew; Ling, Sanliang; Anderson, Samantha; ...

    2016-06-21

    Here, we present the in silico design of metal-organic frameworks (MOFs) exhibiting 1-dimensional rod topologies. We then introduce an algorithm for construction of this family of MOF topologies, and illustrate its application for enumerating MOF-74-type analogs. Furthermore, we perform a broad search for new linkers that satisfy the topological requirements of MOF-74 and consider the largest database of known chemical space for organic compounds, the PubChem database. Our in silico crystal assembly, when combined with dispersion-corrected density functional theory (DFT) calculations, is demonstrated to generate a hypothetical library of open-metal-site-containing MOF-74 analogs in the 1-D rod topology from which we can simulate the adsorption behavior of CO2. We conclude that these hypothetical structures have synthesizable potential through computational identification and experimental validation of a novel MOF-74 analog, Mg2(olsalazine).

  18. Estimates of long-term mean-annual nutrient loads considered for use in SPARROW models of the Midcontinental region of Canada and the United States, 2002 base year

    USGS Publications Warehouse

    Saad, David A.; Benoy, Glenn A.; Robertson, Dale M.

    2018-05-11

    Streamflow and nutrient concentration data needed to compute nitrogen and phosphorus loads were compiled from Federal, State, Provincial, and local agency databases and also from selected university databases. The nitrogen and phosphorus loads are necessary inputs to Spatially Referenced Regressions on Watershed Attributes (SPARROW) models. SPARROW models are a way to estimate the distribution, sources, and transport of nutrients in streams throughout the Midcontinental region of Canada and the United States. After screening the data, approximately 1,500 sites sampled by 34 agencies were identified as having suitable data for calculating the long-term mean-annual nutrient loads required for SPARROW model calibration. These final sites represent a wide range in watershed sizes, types of nutrient sources, and land-use and watershed characteristics in the Midcontinental region of Canada and the United States.
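
The core flux calculation behind such load compilations can be sketched as follows; note that SPARROW calibration in practice relies on regression-based load estimation from many years of paired samples, so this is only the underlying flux idea, and all sample values are invented:

```python
# Hedged sketch: instantaneous nutrient flux from a concentration and a
# streamflow measurement, converted to a daily load. The unit factor turns
# (mg/L x m^3/s) into kg/day. Sample values are invented.

def daily_load_kg(conc_mg_per_l, flow_m3_per_s):
    # 1 m^3 = 1000 L; 86400 s/day; 1e-6 kg/mg
    return conc_mg_per_l * flow_m3_per_s * 1000 * 86400 * 1e-6

samples = [(2.5, 10.0), (1.8, 14.0)]  # (total N in mg/L, flow in m^3/s)
mean_daily = sum(daily_load_kg(c, q) for c, q in samples) / len(samples)
print(round(mean_daily, 1))  # mean daily load in kg/day for the two samples
```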

  19. A Virtual Screening Approach For Identifying Plants with Anti H5N1 Neuraminidase Activity

    PubMed Central

    2016-01-01

    Recent outbreaks of highly pathogenic and occasional drug-resistant influenza strains have highlighted the need to develop novel anti-influenza therapeutics. Here, we report computational and experimental efforts to identify influenza neuraminidase inhibitors from among the 3000 natural compounds in the Malaysian-Plants Natural-Product (NADI) database. These 3000 compounds were first docked into the neuraminidase active site. The five plants with the largest number of top predicted ligands were selected for experimental evaluation. Twelve specific compounds isolated from these five plants were shown to inhibit neuraminidase, including two compounds with IC50 values less than 92 μM. Furthermore, four of the 12 isolated compounds had also been identified in the top 100 compounds from the virtual screen. Together, these results suggest an effective new approach for identifying bioactive plant species that will further the identification of new pharmacologically active compounds from diverse natural-product resources. PMID:25555059
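
The plant-selection step described above can be sketched as follows, assuming a dock-scored compound library annotated with source plants; the compound names, plant names, and dock scores are invented:

```python
from collections import Counter

# Illustrative sketch of ranking docked compounds and counting top hits per
# source plant, in the spirit of the selection strategy described above.
# More negative dock score = better predicted binding. All data invented.

def top_plants(scored_compounds, top_n=3, n_plants=2):
    """scored_compounds: list of (compound, plant, dock_score) tuples."""
    ranked = sorted(scored_compounds, key=lambda x: x[2])[:top_n]
    counts = Counter(plant for _, plant, _ in ranked)
    return [plant for plant, _ in counts.most_common(n_plants)]

library = [
    ("cpd1", "Garcinia", -9.2),
    ("cpd2", "Garcinia", -8.7),
    ("cpd3", "Ficus", -8.9),
    ("cpd4", "Ficus", -6.1),
    ("cpd5", "Piper", -5.4),
]
print(top_plants(library))  # plants contributing the most top-3 ligands
```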

  20. Screening for High Conductivity/Low Viscosity Ionic Liquids Using Product Descriptors.

    PubMed

    Martin, Shawn; Pratt, Harry D; Anderson, Travis M

    2017-07-01

    We seek to optimize ionic liquids (ILs) for application to redox flow batteries. As part of this effort, we have developed a computational method for suggesting ILs with high conductivity and low viscosity. Since ILs consist of cation-anion pairs, we consider a method for treating ILs as pairs using product descriptors for QSPRs, a concept borrowed from the prediction of protein-protein interactions in bioinformatics. We demonstrate the method by predicting electrical conductivity, viscosity, and melting point on a dataset taken from the ILThermo database on June 18th, 2014. The dataset consists of 4,329 measurements taken from 165 ILs made up of 72 cations and 34 anions. We benchmark our QSPRs on the known values in the dataset, then extend our predictions to screen all 2,448 possible cation-anion pairs in the dataset. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
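
The product-descriptor idea can be sketched in a few lines: the pair descriptor is the flattened outer product of the per-ion descriptor vectors. The ion names and descriptor values below are invented, not taken from the ILThermo dataset:

```python
# Illustrative sketch of "product descriptors" for cation-anion pairs.
# Each ion gets its own feature vector; the pair is described by all
# pairwise products of those features (a flattened outer product).

def product_descriptor(cation_desc, anion_desc):
    """All pairwise products of cation and anion features."""
    return [c * a for c in cation_desc for a in anion_desc]

# Two toy ions, each described by two invented features
emim_like_cation = [111.2, 0.42]
bf4_like_anion = [86.8, 0.31]

pair = product_descriptor(emim_like_cation, bf4_like_anion)
print(len(pair))  # 2 x 2 = 4 pair features
```

The appeal of this construction is that a QSPR trained on measured pairs can then score any unmeasured cation-anion combination, which is how all 2,448 candidate pairs can be screened from only 165 measured ILs.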

  1. The drug discovery portal: a computational platform for identifying drug leads from academia.

    PubMed

    Clark, Rachel L; Johnston, Blair F; Mackay, Simon P; Breslin, Catherine J; Robertson, Murray N; Sutcliffe, Oliver B; Dufton, Mark J; Harvey, Alan L

    2010-05-01

    The Drug Discovery Portal (DDP) is a research initiative based at the University of Strathclyde in Glasgow, Scotland. It was initiated in 2007 by a group of researchers with expertise in virtual screening. Academic research groups in the university working on drug discovery programmes estimated there was a historical collection of physical compounds, going back 50 years, that had never been adequately catalogued. This invaluable resource has been harnessed to form the basis of the DDP library and has attracted high uptake from universities and research groups internationally. Its unique attributes include the diversity of the academic database, sourced from synthetic, medicinal and phytochemists working in academic laboratories, and the ability to link biologists with appropriate chemical expertise through a target-matching virtual screening approach; this has resulted in seven emerging hit development programmes between international contributors.

  2. Screening for High Conductivity/Low Viscosity Ionic Liquids Using Product Descriptors

    DOE PAGES

    Martin, Shawn; Pratt, III, Harry D.; Anderson, Travis M.

    2017-02-21

    We seek to optimize ionic liquids (ILs) for application to redox flow batteries. As part of this effort, we have developed a computational method for suggesting ILs with high conductivity and low viscosity. Since ILs consist of cation-anion pairs, we consider a method for treating ILs as pairs using product descriptors for QSPRs, a concept borrowed from the prediction of protein-protein interactions in bioinformatics. We demonstrate the method by predicting electrical conductivity, viscosity, and melting point on a dataset taken from the ILThermo database on June 18th, 2014. The dataset consists of 4,329 measurements taken from 165 ILs made up of 72 cations and 34 anions. In conclusion, we benchmark our QSPRs on the known values in the dataset, then extend our predictions to screen all 2,448 possible cation-anion pairs in the dataset.

  3. Screening for High Conductivity/Low Viscosity Ionic Liquids Using Product Descriptors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Shawn; Pratt, III, Harry D.; Anderson, Travis M.

    We seek to optimize ionic liquids (ILs) for application to redox flow batteries. As part of this effort, we have developed a computational method for suggesting ILs with high conductivity and low viscosity. Since ILs consist of cation-anion pairs, we consider a method for treating ILs as pairs using product descriptors for QSPRs, a concept borrowed from the prediction of protein-protein interactions in bioinformatics. We demonstrate the method by predicting electrical conductivity, viscosity, and melting point on a dataset taken from the ILThermo database on June 18th, 2014. The dataset consists of 4,329 measurements taken from 165 ILs made up of 72 cations and 34 anions. In conclusion, we benchmark our QSPRs on the known values in the dataset, then extend our predictions to screen all 2,448 possible cation-anion pairs in the dataset.

  4. Infrared thermography based on artificial intelligence as a screening method for carpal tunnel syndrome diagnosis.

    PubMed

    Jesensek Papez, B; Palfy, M; Mertik, M; Turk, Z

    2009-01-01

    This study further evaluated a computer-based infrared thermography (IRT) system, which employs artificial neural networks for the diagnosis of carpal tunnel syndrome (CTS), using a large database of 502 thermal images of the dorsal and palmar sides of 132 healthy and 119 pathological hands. It confirmed the hypothesis that the dorsal side of the hand is of greater importance than the palmar side when diagnosing CTS thermographically. Using this method it was possible to correctly classify 72.2% of all hands (healthy and pathological) based on dorsal images, and >80% of hands when only severely affected and healthy hands were considered. Compared with the gold-standard electromyographic diagnosis of CTS, IRT cannot be recommended as an adequate diagnostic tool when an exact severity-level diagnosis is required; however, we conclude that IRT could be used as a screening tool for severe cases in populations with high ergonomic risk factors for CTS.

  5. FlyRNAi.org—the database of the Drosophila RNAi screening center and transgenic RNAi project: 2017 update

    PubMed Central

    Hu, Yanhui; Comjean, Aram; Roesel, Charles; Vinayagam, Arunachalam; Flockhart, Ian; Zirin, Jonathan; Perkins, Lizabeth; Perrimon, Norbert; Mohr, Stephanie E.

    2017-01-01

    The FlyRNAi database of the Drosophila RNAi Screening Center (DRSC) and Transgenic RNAi Project (TRiP) at Harvard Medical School and associated DRSC/TRiP Functional Genomics Resources website (http://fgr.hms.harvard.edu) serve as a reagent production tracking system, screen data repository, and portal to the community. Through this portal, we make available protocols, online tools, and other resources useful to researchers at all stages of high-throughput functional genomics screening, from assay design and reagent identification to data analysis and interpretation. In this update, we describe recent changes and additions to our website, database and suite of online tools. Recent changes reflect a shift in our focus from a single technology (RNAi) and model species (Drosophila) to the application of additional technologies (e.g. CRISPR) and support of integrated, cross-species approaches to uncovering gene function using functional genomics and other approaches. PMID:27924039

  6. Mammography usage with relevant factors among women with mental disabilities in Taiwan: a nationwide population-based study.

    PubMed

    Yen, Suh-May; Kung, Pei-Tseng; Tsai, Wen-Chen

    2015-02-01

    Women with mental illness are at increased risk of developing and dying from breast cancer and are thus in urgent need of breast cancer preventive care. This study examined the use of screening mammography by Taiwanese women with mental disabilities and analyzed factors affecting this use. A total of 17,243 Taiwanese women with mental disabilities aged 50-69 years were retrospectively included as study subjects. Linked patient data were obtained from three national databases in Taiwan (the 2008 database of physically and mentally disabled persons, the Health Promotion Administration's 2007-2008 mammography screening data, and claims data from the National Health Insurance Research Database). Besides descriptive statistics and bivariate analysis, logistic regression analysis was performed to examine factors affecting screening mammography use. The 2007-2008 mammography screening rate for Taiwanese women with mental disabilities was 8.79% (n=1515). Variables that significantly influenced screening use were income, education, presence of catastrophic illness/injury, severity of mental disability, and usage of other preventive care services. Screening was positively correlated with income and education. Those with catastrophic illness/injury were more likely to be screened (odds ratio [OR], 1.40; 95% CI=1.15-1.72). Severity of disability was negatively correlated with screening, with very severe, severe, and moderate disability being associated with 0.34-0.69 times the odds of screening as mild disability. In Taiwan, women with mental disabilities receive far less mammography screening than women in general. Copyright © 2014 Elsevier Ltd. All rights reserved.
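
As a brief aside on the reported statistics, an odds ratio from a logistic regression is the exponential of the fitted coefficient, with a Wald-style confidence interval from its standard error. The coefficient and standard error below are illustrative values chosen only to roughly reproduce the reported OR of 1.40, not the study's actual fit:

```python
import math

# Sketch of how an odds ratio and its 95% CI follow from a logistic
# regression coefficient beta and standard error se: OR = exp(beta),
# CI = exp(beta +/- 1.96 * se). Values below are illustrative.

def odds_ratio_ci(beta, se, z=1.96):
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

or_, lo, hi = odds_ratio_ci(0.336, 0.103)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```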

  7. Grid-based Molecular Footprint Comparison Method for Docking and De Novo Design: Application to HIVgp41

    PubMed Central

    Mukherjee, Sudipto; Rizzo, Robert C.

    2014-01-01

    Scoring functions are a critically important component of computer-aided screening methods for the identification of lead compounds during early stages of drug discovery. Here, we present a new multi-grid implementation of the footprint similarity (FPS) scoring function, recently developed in our laboratory, which has proven useful for identifying compounds that bind to a protein on a per-residue basis in a way that resembles a known reference. The grid-based FPS method is much faster than its Cartesian-space counterpart, which makes it computationally tractable for on-the-fly docking, virtual screening, or de novo design. In this work, we establish that: (i) relatively few grids can be used to accurately approximate Cartesian-space footprint similarity, (ii) the method yields improved success over the standard DOCK energy function for pose identification across a large test set of experimental co-crystal structures, for cross-docking, and for database enrichment, and (iii) grid-based FPS scoring can be used to tailor construction of new molecules to have specific properties, as demonstrated in a series of test cases targeting the viral protein HIVgp41. The method will be made available in the program DOCK6. PMID:23436713
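
The per-residue footprint comparison at the heart of FPS can be sketched as below. This toy version uses a plain Euclidean distance between per-residue interaction-energy vectors and invented residue labels and energies; it is not the exact DOCK functional form:

```python
import math

# Illustrative sketch of comparing a candidate pose's per-residue
# interaction "footprint" against a known reference footprint.
# Lower distance = more reference-like binding. All values invented.

def footprint_similarity(fp_ref, fp_query):
    """Euclidean distance between per-residue energies (keys must match)."""
    return math.sqrt(sum((fp_ref[r] - fp_query[r]) ** 2 for r in fp_ref))

# Hypothetical van der Waals footprints (kcal/mol) over three residues
reference = {"TRP117": -2.1, "LYS63": -0.8, "ASP121": -1.5}
candidate = {"TRP117": -1.9, "LYS63": -0.7, "ASP121": -1.6}

print(round(footprint_similarity(reference, candidate), 3))  # 0.245
```

The grid-based variant described in the abstract precomputes these per-residue energies on a small number of grids so the comparison can run inside a docking loop.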

  8. New Toxico-Cheminformatics & Computational Toxicology ...

    EPA Pesticide Factsheets

    EPA’s National Center for Computational Toxicology is building capabilities to support a new paradigm for toxicity screening and prediction. The DSSTox project is improving public access to quality structure-annotated chemical toxicity information in less summarized forms than traditionally employed in SAR modeling, and in ways that facilitate data-mining and data read-across. The DSSTox Structure-Browser provides structure searchability across all published DSSTox toxicity-related inventory and is enabling linkages between previously isolated toxicity data resources. As of early March 2008, the public DSSTox inventory has been integrated into PubChem, allowing a user to take full advantage of PubChem structure-activity and bioassay clustering features. The most recent DSSTox version of the Carcinogenic Potency Database file (CPDBAS) illustrates ways in which various summary definitions of carcinogenic activity can be employed in modeling and data mining. Phase I of the ToxCast™ project is generating high-throughput screening data from several hundred biochemical and cell-based assays for a set of 320 chemicals, mostly pesticide actives, with rich toxicology profiles. Incorporating and expanding traditional SAR concepts into this new high-throughput and data-rich world pose conceptual and practical challenges, but also hold great promise for improving predictive capabilities.

  9. Advances in Toxico-Cheminformatics: Supporting a New ...

    EPA Pesticide Factsheets

    EPA’s National Center for Computational Toxicology is building capabilities to support a new paradigm for toxicity screening and prediction through the harnessing of legacy toxicity data, creation of data linkages, and generation of new high-throughput screening (HTS) data. The DSSTox project is working to improve public access to quality structure-annotated chemical toxicity information in less summarized forms than traditionally employed in SAR modeling, and in ways that facilitate both data-mining and read-across. Both DSSTox Structure-Files and the dedicated on-line DSSTox Structure-Browser are enabling seamless structure-based searching and linkages to and from previously isolated, chemically indexed public toxicity data resources (e.g., NTP, EPA IRIS, CPDB). Most recently, structure-enabled search capabilities have been extended to chemical exposure-related microarray experiments in the public EBI Array Express database, additionally linking this resource to the NIEHS CEBS toxicogenomics database. The public DSSTox chemical and bioassay inventory has been recently integrated into PubChem, allowing a user to take full advantage of PubChem structure-activity and bioassay clustering features. The DSSTox project is providing cheminformatics support for EPA’s ToxCast™ project, as well as supporting collaborations with the National Toxicology Program (NTP) HTS and the NIH Chemical Genomics Center (NCGC). Phase I of the ToxCast™ project is generating HT

  10. Computational Exploration for Lead Compounds That Can Reverse the Nuclear Morphology in Progeria

    PubMed Central

    Baek, Ayoung; Son, Minky; Zeb, Amir; Park, Chanin; Kumar, Raj; Lee, Gihwan; Kim, Donghwan; Choi, Yeonuk; Cho, Yeongrae; Park, Yohan

    2017-01-01

    Progeria is a rare genetic disorder, observed globally, that is characterized by premature aging and eventually leads to death. Despite these alarming features, the disease lacks effective medications; however, the farnesyltransferase inhibitors (FTIs) offer a hope in the dark. Therefore, the objective of the present article is to identify new compounds from the databases employing pharmacophore-based virtual screening. Utilizing nine training-set compounds along with lonafarnib, a common-feature pharmacophore was constructed consisting of four features. The validated Hypo1 was subsequently used to screen the Maybridge, Chembridge, and Asinex databases to retrieve novel lead candidates, which were then subjected to Lipinski's rule of 5 and ADMET filters for drug-likeness assessment. The resulting 3,372 compounds were forwarded to docking simulations and manually examined for key interactions with the crucial residues. Two compounds that demonstrated a higher dock score than the reference compounds and showed interactions with the crucial residues were subjected to MD simulations and binding free energy calculations to assess the stability of the docked conformation and to investigate the binding interactions in detail. This study suggests that these hits may be more effective against progeria; further DFT studies were executed to understand their orbital energies. PMID:29226142
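
The Lipinski rule-of-5 filter mentioned in the workflow is concrete enough to sketch; the candidate names and property values below are invented:

```python
# Sketch of the classic Lipinski rule-of-5 triage used for drug-likeness:
# MW <= 500 Da, logP <= 5, H-bond donors <= 5, H-bond acceptors <= 10.
# The two toy candidates and their properties are invented.

def passes_lipinski(mw, logp, hbd, hba):
    return mw <= 500 and logp <= 5 and hbd <= 5 and hba <= 10

candidates = {
    "hit_A": (342.4, 2.1, 2, 5),
    "hit_B": (612.7, 6.3, 4, 11),  # fails on MW, logP and acceptor count
}
drug_like = [name for name, props in candidates.items() if passes_lipinski(*props)]
print(drug_like)  # ['hit_A']
```

In practice such a filter would sit between pharmacophore screening and docking, cutting the library down before the expensive simulation steps.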

  11. Discovery and study of novel protein tyrosine phosphatase 1B inhibitors

    NASA Astrophysics Data System (ADS)

    Zhang, Qian; Chen, Xi; Feng, Changgen

    2017-10-01

    Protein tyrosine phosphatase 1B (PTP1B) is considered to be a target for therapy of type II diabetes and obesity, so it is of great significance to exploit a computer-aided drug design protocol involving structure-based virtual screening with docking simulations for fast searching of small-molecule PTP1B inhibitors. Based on the optimized complex structure of PTP1B bound with the specific inhibitor IX1, structure-based virtual screening against a library of natural products containing 35,308 molecules, constructed from the Traditional Chinese Medicine Database@Taiwan (TCM Database@Taiwan), was conducted to identify PTP1B inhibitors using the LibDock and CDOCKER modules from the Discovery Studio 3.1 software package. The results were further filtered by predictive ADME and toxicity simulations. As a result, two good drug-like molecules, namely para-benzoquinone compound 1 and Clavepictine analogue 2, were ultimately identified using the dock score of the original inhibitor (IX1) with the receptor as a threshold. Binding-mode analyses revealed that these two candidate compounds have good interactions with PTP1B. The PTP1B inhibitory activity of compound 2 has not been reported before. The optimized compound 2 has higher scores and deserves further study.

  12. Analysis of framelets for breast cancer diagnosis.

    PubMed

    Thivya, K S; Sakthivel, P; Venkata Sai, P M

    2016-01-01

    Breast cancer is the second most threatening tumor among women. The most effective way of reducing breast cancer mortality is early detection, which improves the diagnostic process. Digital mammography plays a significant role in mammogram screening at an early stage of breast carcinoma. Even so, it is very difficult for radiologists to identify abnormalities accurately in routine screening; the possibility of precise breast cancer screening is improved by predicting the type of abnormality through computer-aided diagnosis (CAD) systems. The two most important indicators of breast malignancy are microcalcifications and masses. In this study, the framelet transform, a multiresolution analysis, is investigated for the classification of these two indicators. Statistical and co-occurrence features are extracted from the framelet-decomposed mammograms at different resolution levels, and a support vector machine is employed for classification with k-fold cross-validation. The system achieves 94.82% and 100% accuracy in normal/abnormal classification (stage I) and benign/malignant classification (stage II) of the mass classification system, and 98.57% and 100% for the microcalcification system, when using the MIAS database.
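
The k-fold cross-validation protocol used for evaluation can be sketched as a minimal index-splitting routine (this shows only the fold logic, not the paper's feature-extraction or SVM pipeline):

```python
# Minimal sketch of k-fold cross-validation: every sample appears in the
# test fold exactly once, and trains the model in the other k-1 folds.

def k_fold_indices(n_samples, k):
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

splits = list(k_fold_indices(10, 5))
print(len(splits), sorted(splits[0][1]))  # 5 folds; first test fold is [0, 5]
```

Reported accuracies are then averaged over the k test folds, which is why the protocol gives a less optimistic estimate than a single train/test split.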

  13. A Prospective Virtual Screening Study: Enriching Hit Rates and Designing Focus Libraries To Find Inhibitors of PI3Kδ and PI3Kγ.

    PubMed

    Damm-Ganamet, Kelly L; Bembenek, Scott D; Venable, Jennifer W; Castro, Glenda G; Mangelschots, Lieve; Peeters, Daniëlle C G; Mcallister, Heather M; Edwards, James P; Disepio, Daniel; Mirzadegan, Taraneh

    2016-05-12

    Here, we report a high-throughput virtual screening (HTVS) study using phosphoinositide 3-kinase (both PI3Kγ and PI3Kδ). Our initial HTVS of the Janssen corporate database identified small focused libraries with hit rates at 50% inhibition showing a 50-fold increase over those from an HTS (high-throughput screen). Further, applying constraints based on "chemically intuitive" hydrogen bonds and/or positional requirements resulted in a substantial improvement in the hit rates (versus no constraints) and reduced docking time. While we find that docking scoring functions are not capable of providing a reliable relative ranking of a set of compounds, a prioritization of groups of compounds (e.g., low, medium, and high) does emerge, which allows chemistry efforts to be quickly focused on the most viable candidates. This illustrates that it is not always necessary to have a high correlation between a computational score and the experimental data to impact the drug discovery process.
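
The enrichment claim above amounts to a simple ratio of hit rates, sketched here with invented counts (the 25/100 vs. 5/1000 figures are illustrative, not the paper's data):

```python
# Sketch of fold enrichment: the virtual-screening hit rate divided by the
# baseline HTS hit rate. Counts are invented for illustration.

def fold_enrichment(vs_hits, vs_tested, hts_hits, hts_tested):
    # Rearranged to keep the arithmetic exact for integer counts
    return (vs_hits * hts_tested) / (vs_tested * hts_hits)

# e.g. 25 hits among 100 docked picks vs. 5 hits among 1000 HTS compounds
print(fold_enrichment(25, 100, 5, 1000))  # 50.0-fold enrichment
```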

  14. Application of kernel functions for accurate similarity search in large chemical databases.

    PubMed

    Wang, Xiaohong; Huan, Jun; Smalter, Aaron; Lushington, Gerald H

    2010-04-29

    Similarity search in chemical structure databases is an important problem with many applications in chemical genomics, drug design, and efficient chemical probe screening, among others. It is widely believed that structure-based methods provide an efficient way to perform such queries. Recently, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models, graph kernel functions cannot be applied to large chemical compound databases due to their high computational complexity and the difficulty of indexing similarity search for large databases. To bridge graph kernel functions and similarity search in chemical databases, we applied a novel kernel-based similarity measurement, developed in our team, to measure the similarity of graph-represented chemicals. In our method, we utilize a hash table to support the new graph kernel function definition, efficient storage, and fast search. We have applied our method, named G-hash, to large chemical databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest-neighbor (k-NN) classification. Moreover, the similarity measurement and the index structure are scalable to large chemical databases, with a smaller index size and faster query processing time compared to state-of-the-art indexing methods such as Daylight fingerprints, C-tree and GraphGrep. Efficient similarity query processing for large chemical databases is challenging, since we need to balance running-time efficiency and similarity search accuracy. Our similarity search method, G-hash, provides a new way to perform similarity search in chemical databases, and experimental study validates its utility.
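
The idea of hashing local graph features to support kernel-style similarity can be sketched in a toy form. This is explicitly not the actual G-hash algorithm: here each atom is hashed by its element plus the sorted elements of its neighbors, and two molecules are compared via the overlap of their hash-key multisets:

```python
from collections import Counter

# Toy sketch of hash-based graph similarity (not the real G-hash):
# hash each atom's local neighborhood to a string key, then score two
# molecules by the fraction of shared keys.

def atom_hashes(atoms, bonds):
    """atoms: {atom_id: element}; bonds: iterable of (atom_id, atom_id)."""
    neighbors = {i: [] for i in atoms}
    for a, b in bonds:
        neighbors[a].append(atoms[b])
        neighbors[b].append(atoms[a])
    return Counter(atoms[i] + "|" + "".join(sorted(neighbors[i])) for i in atoms)

def kernel_similarity(h1, h2):
    """Multiset intersection of hash keys over the larger atom count."""
    shared = sum((h1 & h2).values())
    return shared / max(sum(h1.values()), sum(h2.values()))

ethanol_like = atom_hashes({1: "C", 2: "C", 3: "O"}, [(1, 2), (2, 3)])
methanol_like = atom_hashes({1: "C", 2: "O"}, [(1, 2)])
print(kernel_similarity(ethanol_like, methanol_like))
```

Because the per-molecule hash tables can be precomputed, a query only needs cheap multiset intersections at search time; that is the indexing intuition, even though the real method's kernel and hash design differ.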

  15. Development and Validation of a Computational Model for Androgen Receptor Activity

    PubMed Central

    2016-01-01

    Testing thousands of chemicals to identify potential androgen receptor (AR) agonists or antagonists would cost millions of dollars and take decades to complete using current validated methods. High-throughput in vitro screening (HTS) and computational toxicology approaches can more rapidly and inexpensively identify potential androgen-active chemicals. We integrated 11 HTS ToxCast/Tox21 in vitro assays into a computational network model to distinguish true AR pathway activity from technology-specific assay interference. The in vitro HTS assays probed perturbations of the AR pathway at multiple points (receptor binding, coregulator recruitment, gene transcription, and protein production) and multiple cell types. Confirmatory in vitro antagonist assay data and cytotoxicity information were used as additional flags for potential nonspecific activity. Validating such alternative testing strategies requires high-quality reference data. We compiled 158 putative androgen-active and -inactive chemicals from a combination of international test method validation efforts and semiautomated systematic literature reviews. Detailed in vitro assay information and results were compiled into a single database using a standardized ontology. Reference chemical concentrations that activated or inhibited AR pathway activity were identified to establish a range of potencies with reproducible reference chemical results. Comparison with existing Tier 1 AR binding data from the U.S. EPA Endocrine Disruptor Screening Program revealed that the model identified binders at relevant test concentrations (<100 μM) and was more sensitive to antagonist activity. The AR pathway model based on the ToxCast/Tox21 assays had balanced accuracies of 95.2% for agonist (n = 29) and 97.5% for antagonist (n = 28) reference chemicals. Out of 1855 chemicals screened in the AR pathway model, 220 chemicals demonstrated AR agonist or antagonist activity and an additional 174 chemicals were predicted to have potential weak AR pathway activity. PMID:27933809
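
The balanced accuracy reported above is the mean of sensitivity and specificity, which avoids inflated scores on unbalanced reference sets; a sketch with illustrative counts (not the paper's exact confusion matrix):

```python
# Balanced accuracy = (sensitivity + specificity) / 2, computed from a
# confusion matrix. Counts below are illustrative only.

def balanced_accuracy(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # true-positive rate on active chemicals
    specificity = tn / (tn + fp)  # true-negative rate on inactive chemicals
    return (sensitivity + specificity) / 2

# e.g. 27 of 29 actives and 28 of 28 inactives called correctly
print(round(balanced_accuracy(27, 2, 28, 0), 4))  # 0.9655
```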

  16. Correlates of mobile screen media use among children aged 0-8: protocol for a systematic review.

    PubMed

    Paudel, Susan; Leavy, Justine; Jancey, Jonine

    2016-06-03

    Childhood is a crucial period for shaping healthy behaviours; however, it currently appears to be dominated by screen time. A large proportion of young children do not adhere to screen time recommendations, and the use of mobile screen devices is becoming more common than fixed screens. Existing systematic reviews of correlates of screen time have focused largely on traditional fixed screen devices such as television; reviews specifically focused on mobile screen media are almost non-existent. This paper describes the protocol for conducting a systematic review of papers published between 2009 and 2015 to identify the correlates of mobile screen media use among children aged 0-8 years. A systematic literature search of electronic databases will be carried out using different combinations of keywords for papers published in English between January 2009 and December 2015. Additionally, a manual search of reference lists and citations will be conducted. Papers that have examined correlates of screen time among children aged 0-8 will be included in the review. Studies must include at least one type of mobile screen media (mobile phones, electronic tablets or handheld computers) to be eligible for inclusion. This study will identify correlates of mobile screen-viewing among children in five categories: (i) child biological and demographic correlates, (ii) behavioural correlates, (iii) family biological and demographic correlates, (iv) family structure-related correlates and (v) socio-cultural and environmental correlates. The PRISMA statement will be used to ensure transparent and scientific reporting of the results. By systematically reviewing published peer-reviewed papers, this study will identify the correlates associated with increased mobile screen media use among young children, contributing to addressing the knowledge gap in this area. The results will provide an evidence base to better understand correlates of mobile screen media use and potentially inform the development of recommendations to reduce screen time among those aged 0-8 years. PROSPERO CRD42015028028.

  17. Database for High Throughput Screening Hits (dHITS): a simple tool to retrieve gene specific phenotypes from systematic screens done in yeast.

    PubMed

    Chuartzman, Silvia G; Schuldiner, Maya

    2018-03-25

    In the last decade several collections of Saccharomyces cerevisiae yeast strains have been created. In these collections every gene is modified in a similar manner, such as by a deletion or the addition of a protein tag. Such libraries have enabled a diversity of systematic screens, giving rise to large amounts of information regarding gene functions. However, papers describing such screens often focus on a single gene or a small set of genes, and all other loci affecting the phenotype of choice ('hits') are mentioned only in tables provided as supplementary material, which are often hard to retrieve or search. To help unify such data and make it accessible, we have created the Database of High Throughput Screening Hits (dHITS). The dHITS database enables information to be obtained about the screens in which genes of interest were found, as well as the other genes that came up in those screens - all in a readily accessible and downloadable format. The ability to query large lists of genes at the same time provides a platform to easily analyse hits obtained from transcriptional analyses or other screens. We hope that this platform will serve as a tool to facilitate the investigation of protein functions for the yeast community. © 2018 The Authors. Yeast published by John Wiley & Sons Ltd.

  18. BALLIST: A computer program to empirically predict the bumper thickness required to prevent perforation of the Space Station by orbital debris

    NASA Technical Reports Server (NTRS)

    Rule, William Keith

    1991-01-01

    A computer program called BALLIST, intended as a design tool for engineers, is described. BALLIST empirically predicts the bumper thickness required to prevent perforation of the Space Station pressure wall by a projectile (such as orbital debris) as a function of the projectile's velocity. 'Ballistic' limit curves (bumper thickness vs. projectile velocity) are calculated and displayed on the screen as well as stored in an ASCII file. A Whipple style of spacecraft wall configuration is assumed. The predictions are based on a database of impact test results. NASA/Marshall Space Flight Center currently has the capability to generate such test results. Numerical simulation results for impact conditions that cannot be tested (high velocities or large particles) can also be used for predictions.

  19. RADER: a RApid DEcoy Retriever to facilitate decoy based assessment of virtual screening.

    PubMed

    Wang, Ling; Pang, Xiaoqian; Li, Yecheng; Zhang, Ziying; Tan, Wen

    2017-04-15

    Evaluation of the capacity to separate actives from challenging decoys is a crucial performance metric for molecular docking or a virtual screening workflow. The Directory of Useful Decoys (DUD) and its enhanced version (DUD-E) provide a benchmark for molecular docking, although they contain only a limited set of decoys for a limited number of targets. DecoyFinder was released to compensate for the limitations of DUD and DUD-E in building target-specific decoy sets. However, desirable query template design, generation of multiple decoy sets of similar quality, and computational speed remain bottlenecks, particularly when the numbers of queried actives and retrieved decoys increase to hundreds or more. Here, we developed a program suite called RApid DEcoy Retriever (RADER) to facilitate the decoy-based assessment of virtual screening. This program adopts a novel database-management regime that supports rapid and large-scale retrieval of decoys, enables high portability of databases, and provides multifaceted options for designing initial query templates from a large number of active ligands and generating subtle decoy sets. RADER provides two operational modes: as a command-line tool and on a web server. Validation of the performance and efficiency of RADER was also conducted and is described. The RADER web server and a local version are freely available at http://rcidm.org/rader/ . lingwang@scut.edu.cn or went@scut.edu.cn . Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
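Property-matched decoy selection of the kind DUD-E, DecoyFinder and RADER automate can be illustrated with a toy sketch. This is not RADER's algorithm: the molecule names, property values (molecular weight, logP) and tolerances below are invented, and real tools additionally enforce structural dissimilarity between decoys and actives.

```python
# Toy sketch of property-matched decoy selection: decoys should resemble
# actives in simple physicochemical properties. All names and values here
# are hypothetical placeholders.

def select_decoys(actives, pool, mw_tol=25.0, logp_tol=1.0, per_active=2):
    """Return up to `per_active` decoys per active whose molecular weight
    and logP fall within the given tolerances of that active."""
    decoys = {}
    for name, (mw, logp) in actives.items():
        matches = [cand for cand, (cmw, clogp) in pool.items()
                   if abs(cmw - mw) <= mw_tol and abs(clogp - logp) <= logp_tol]
        decoys[name] = matches[:per_active]
    return decoys

actives = {"act1": (310.0, 2.1)}
pool = {"d1": (305.0, 2.4), "d2": (480.0, 5.0), "d3": (322.0, 1.5)}
print(select_decoys(actives, pool))  # {'act1': ['d1', 'd3']}
```

In a real setting the candidate pool would be a large library such as ZINC, and the property filter would be followed by a fingerprint-dissimilarity check so the decoys remain challenging but inactive.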

  20. A graph-based approach to construct target-focused libraries for virtual screening.

    PubMed

    Naderi, Misagh; Alvin, Chris; Ding, Yun; Mukhopadhyay, Supratik; Brylinski, Michal

    2016-01-01

    Due to exorbitant costs of high-throughput screening, many drug discovery projects commonly employ inexpensive virtual screening to support experimental efforts. However, the vast majority of compounds in widely used screening libraries, such as the ZINC database, will have a very low probability to exhibit the desired bioactivity for a given protein. Although combinatorial chemistry methods can be used to augment existing compound libraries with novel drug-like compounds, the broad chemical space is often too large to be explored. Consequently, the trend in library design has shifted to produce screening collections specifically tailored to modulate the function of a particular target or a protein family. Assuming that organic compounds are composed of sets of rigid fragments connected by flexible linkers, a molecule can be decomposed into its building blocks tracking their atomic connectivity. On this account, we developed eSynth, an exhaustive graph-based search algorithm to computationally synthesize new compounds by reconnecting these building blocks following their connectivity patterns. We conducted a series of benchmarking calculations against the Directory of Useful Decoys, Enhanced database. First, in a self-benchmarking test, the correctness of the algorithm is validated with the objective to recover a molecule from its building blocks. Encouragingly, eSynth can efficiently rebuild more than 80 % of active molecules from their fragment components. Next, the capability to discover novel scaffolds is assessed in a cross-benchmarking test, where eSynth successfully reconstructed 40 % of the target molecules using fragments extracted from chemically distinct compounds. Despite an enormous chemical space to be explored, eSynth is computationally efficient; half of the molecules are rebuilt in less than a second, whereas 90 % take only about a minute to be generated. 
eSynth can successfully reconstruct chemically feasible molecules from molecular fragments. Furthermore, in a procedure mimicking the real application, where one expects to discover novel compounds based on a small set of already developed bioactives, eSynth is capable of generating diverse collections of molecules with the desired activity profiles. Thus, we are very optimistic that our effort will contribute to targeted drug discovery. eSynth is freely available to the academic community at www.brylinski.org/content/molecular-synthesis. Graphical abstract: Assuming that organic compounds are composed of sets of rigid fragments connected by flexible linkers, a molecule can be decomposed into its building blocks by tracking their atomic connectivity. Here, we developed eSynth, an automated method to synthesize new compounds by reconnecting these building blocks following their connectivity patterns via an exhaustive graph-based search algorithm. eSynth opens up the possibility to rapidly construct virtual screening libraries for targeted drug discovery.
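The core eSynth idea, exhaustively reconnecting building blocks whose attachment points are compatible, can be illustrated with a toy enumerator. The fragment names and attachment-type labels below are invented placeholders; the real program operates on the atomic connectivity of molecular fragments, not on strings.

```python
from itertools import permutations

# Minimal sketch of exhaustive fragment reconnection: fragments carry a
# typed attachment point ('*'), and candidate products are enumerated by
# joining every core/substituent pair whose attachment types match.
# Fragment names and type labels are hypothetical.

FRAGMENTS = {
    "benzene*": "aryl",   # trailing '*' marks an open attachment point
    "*OH": "aryl",        # leading '*' marks a substituent that attaches
    "*NH2": "aryl",
    "*CH3": "alkyl",      # incompatible with the aryl core above
}

def enumerate_pairs(fragments):
    """Join every (core, substituent) pair with matching attachment types."""
    products = []
    for a, b in permutations(fragments, 2):
        if a.endswith("*") and b.startswith("*") and fragments[a] == fragments[b]:
            products.append(a[:-1] + b[1:])
    return products

print(enumerate_pairs(FRAGMENTS))  # ['benzeneOH', 'benzeneNH2']
```

A real implementation recurses until all attachment points are consumed, which is where the exhaustive graph search (and its combinatorial cost) comes from.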

  1. Using computer-aided drug design and medicinal chemistry strategies in the fight against diabetes.

    PubMed

    Semighini, Evandro P; Resende, Jonathan A; de Andrade, Peterson; Morais, Pedro A B; Carvalho, Ivone; Taft, Carlton A; Silva, Carlos H T P

    2011-04-01

    The aim of this work is to present a simple, practical and efficient protocol for drug design, applied here to diabetes, which includes selection of the disease, careful choice of a target and a bioactive ligand, and then the use of various computer-aided drug design and medicinal chemistry tools to design novel potential drug candidates. We have selected the validated target dipeptidyl peptidase IV (DPP-IV), whose inhibition contributes to reduced glucose levels in type 2 diabetes patients. The most active inhibitor with a reported X-ray complex structure was initially extracted from the BindingDB database. By using molecular modification strategies widely employed in medicinal chemistry, together with current state-of-the-art tools in drug design (including flexible docking, virtual screening, molecular interaction fields, molecular dynamics, and ADME and toxicity predictions), we have proposed 4 novel potential DPP-IV inhibitors with drug-like properties for diabetes control, supported and validated by all the computational tools used herein.

  2. ChemoPy: freely available python package for computational biology and chemoinformatics.

    PubMed

    Cao, Dong-Sheng; Xu, Qing-Song; Hu, Qian-Nan; Liang, Yi-Zeng

    2013-04-15

    Molecular representation for small molecules has been routinely used in QSAR/SAR, virtual screening, database search, ranking, drug ADME/T prediction and other drug discovery processes. To facilitate extensive studies of drug molecules, we developed a freely available, open-source python package called chemoinformatics in python (ChemoPy) for calculating the commonly used structural and physicochemical features. It computes 16 drug feature groups composed of 19 descriptors that include 1135 descriptor values. In addition, it provides seven types of molecular fingerprint systems for drug molecules, including topological fingerprints, electro-topological state (E-state) fingerprints, MACCS keys, FP4 keys, atom pairs fingerprints, topological torsion fingerprints and Morgan/circular fingerprints. By applying a semi-empirical quantum chemistry program MOPAC, ChemoPy can also compute a large number of 3D molecular descriptors conveniently. The python package, ChemoPy, is freely available via http://code.google.com/p/pychem/downloads/list, and it runs on Linux and MS-Windows. Supplementary data are available at Bioinformatics online.
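The topological fingerprints this package computes can be sketched in plain Python: enumerate simple atom-label paths in a hydrogen-suppressed molecular graph and hash each path into a fixed-length bit vector. This is a minimal sketch of the general path-fingerprint technique, not ChemoPy's implementation; the graph encoding is invented for illustration.

```python
import hashlib

# Sketch of a path-based topological fingerprint: every simple path of
# atom labels (up to max_len atoms) is hashed to a bit position.
# graph maps node -> (atom label, list of neighbour nodes).

def path_fingerprint(graph, n_bits=16, max_len=3):
    bits = [0] * n_bits
    def walk(node, visited, labels):
        key = "-".join(labels)                     # e.g. "C-C-O"
        idx = int(hashlib.md5(key.encode()).hexdigest(), 16) % n_bits
        bits[idx] = 1
        if len(labels) < max_len:
            for nbr in graph[node][1]:
                if nbr not in visited:
                    walk(nbr, visited | {nbr}, labels + [graph[nbr][0]])
    for start in graph:
        walk(start, {start}, [graph[start][0]])
    return bits

# hydrogen-suppressed ethanol: C-C-O
ethanol = {0: ("C", [1]), 1: ("C", [0, 2]), 2: ("O", [1])}
fp = path_fingerprint(ethanol)
print(fp)
```

Production fingerprints (e.g. the E-state, MACCS or Morgan variants listed above) use chemically richer invariants, but the enumerate-and-hash pattern is the same.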

  3. MouseNet database: digital management of a large-scale mutagenesis project.

    PubMed

    Pargent, W; Heffner, S; Schäble, K F; Soewarto, D; Fuchs, H; Hrabé de Angelis, M

    2000-07-01

    The Munich ENU Mouse Mutagenesis Screen is a large-scale mutant production, phenotyping, and mapping project. It encompasses two animal breeding facilities and a number of screening groups located in the general area of Munich. A central database is required to manage and process the immense amount of data generated by the mutagenesis project. This database, which we named MouseNet(c), runs on a Sybase platform and will ultimately store and process all data from the entire project. In addition, the system comprises a portfolio of functions needed to support the workflow management of the core facility and the screening groups. MouseNet(c) will make all of the data available to the participating screening groups, and later to the international scientific community. MouseNet(c) will consist of three major software components: the Animal Management System (AMS), the Sample Tracking System (STS), and the Result Documentation System (RDS). MouseNet(c) provides the following major advantages: it is accessible from different client platforms via the Internet; it is a full-featured multi-user system (including access restriction and data locking mechanisms); it relies on a professional RDBMS (relational database management system) running on a UNIX server platform; and it supplies workflow functions and a variety of plausibility checks.

  4. Screen Time at Home and School among Low-Income Children Attending Head Start

    PubMed Central

    Fletcher, Erica N.; Whitaker, Robert C.; Marino, Alexis J.; Anderson, Sarah E.

    2013-01-01

    Objective To describe the patterns of screen viewing at home and school among low-income preschool-aged children attending Head Start and identify factors associated with high home screen time in this population. Few studies have examined both home and classroom screen time, or included computer use as a component of screen viewing. Methods Participants were 2221 low-income preschool-aged children in the United States studied in the Head Start Family and Child Experiences Survey (FACES) in spring 2007. For 5 categories of screen viewing (television, video/DVD, video games, computer games, other computer use), we assessed children’s typical weekday home (parent-reported) and classroom (teacher-reported) screen viewing in relation to having a television in the child’s bedroom and sociodemographic factors. Results Over half of children (55.7%) had a television in their bedroom, and 12.5% had high home screen time (>4 hours/weekday). Television was the most common category of home screen time, but 56.6% of children had access to a computer at home and 37.5% had used it on the last typical weekday. After adjusting for sociodemographic characteristics, children with a television in their bedroom were more likely to have high home screen time [odds ratio=2.57 (95% confidence interval: 1.80–3.68)]. Classroom screen time consisted almost entirely of computer use; 49.4% of children used a classroom computer for ≥1 hour/week, and 14.2% played computer games at school ≥5 hours/week. Conclusions In 2007, one in eight low-income children attending Head Start had >4 hours/weekday of home screen time, which was associated with having a television in the bedroom. In the Head Start classroom, television and video viewing were uncommon but computer use was common. PMID:24891924

  5. CASKS (Computer Analysis of Storage casKS): A microcomputer-based analysis system for storage cask design review. User's manual to Version 1b (including program reference)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, T.F.; Gerhard, M.A.; Trummer, D.J.

    CASKS (Computer Analysis of Storage casKS) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for evaluating safety analysis reports on spent-fuel storage casks. The bulk of the complete program and this user's manual are based upon the SCANS (Shipping Cask ANalysis System) program previously developed at LLNL. A number of enhancements and improvements were added to the original SCANS program to meet requirements unique to storage casks. CASKS is an easy-to-use system that calculates the global response of storage casks to impact loads, pressure loads and thermal conditions, providing reviewers with a tool for an independent check on analyses submitted by licensees. CASKS runs on microcomputers compatible with the IBM-PC family of computers. The system is composed of a series of menus, input programs, cask analysis programs, and output display programs. All data are entered through fill-in-the-blank input screens that contain descriptive data requests.

  6. Computational designing and screening of solid materials for CO2 capture

    NASA Astrophysics Data System (ADS)

    Duan, Yuhua

    In this presentation, we will update our progress on the computational design and screening of solid materials for CO2 capture. By combining thermodynamic database mining with first-principles density functional theory and phonon lattice dynamics calculations, a theoretical screening methodology to identify the most promising CO2 sorbent candidates from the vast array of possible solid materials has been proposed and validated at NETL. The advantage of this method is that it identifies the thermodynamic properties of the CO2 capture reaction as a function of temperature and pressure without any experimental input beyond crystallographic structural information of the solid phases involved. The calculated thermodynamic properties of different classes of solid materials versus temperature and pressure were further used to evaluate the equilibrium properties of the CO2 adsorption/desorption cycles. According to the requirements imposed by the pre- and post-combustion technologies, and based on our calculated thermodynamic properties for the CO2 capture reactions by the solids of interest, we were able to identify only those solid materials for which lower capture energy costs are expected at the desired working conditions. In addition, we present a simulation scheme to raise or lower the turnover temperature (Tt) of a solid's CO2 capture reaction by mixing in other solids. Our results also show that some solid sorbents can serve as bi-functional materials: CO2 sorbent and CO oxidation catalyst. Such dual functionality could be used for removing both CO and CO2 after the water-gas shift to obtain pure H2.
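The screening criterion described above can be sketched with the standard approximation dG(T, p) = dH - T*dS + R*T*ln(p0/p_CO2) for a capture reaction consuming one mole of CO2, and the turnover temperature Tt found as the zero of dG. The dH and dS values below are illustrative placeholders in the range of CaO + CO2 -> CaCO3, not NETL data.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

# dG(T, p_CO2) for  sorbent + CO2 -> product, per mole of CO2 captured.
# dH (J/mol) and dS (J/(mol*K)) are treated as temperature-independent
# placeholders; p0 is the reference pressure (bar).
def delta_g(T, p_co2, dH=-178e3, dS=-160.0, p0=1.0):
    return dH - T * dS + R * T * math.log(p0 / p_co2)

def turnover_temperature(p_co2, lo=200.0, hi=2000.0, **kw):
    """Bisect for the temperature where delta_g crosses zero
    (dG < 0 below Tt: capture favorable; dG > 0 above: release)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if delta_g(mid, p_co2, **kw) < 0:
            lo = mid   # capture still favorable -> look hotter
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(turnover_temperature(1.0), 1))   # ~1112.5 K at 1 bar CO2
print(round(turnover_temperature(0.1), 1))   # lower Tt at 0.1 bar CO2
```

This reproduces the qualitative point of the abstract: the partial pressure imposed by pre- versus post-combustion conditions shifts Tt, so the same dH/dS data rank sorbents differently at different working conditions.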

  7. Hierarchical virtual screening for the discovery of new molecular scaffolds in antibacterial hit identification

    PubMed Central

    Ballester, Pedro J.; Mangold, Martina; Howard, Nigel I.; Robinson, Richard L. Marchese; Abell, Chris; Blumberger, Jochen; Mitchell, John B. O.

    2012-01-01

    One of the initial steps of modern drug discovery is the identification of small organic molecules able to inhibit a target macromolecule of therapeutic interest. A small proportion of these hits are further developed into lead compounds, which in turn may ultimately lead to a marketed drug. A commonly used screening protocol used for this task is high-throughput screening (HTS). However, the performance of HTS against antibacterial targets has generally been unsatisfactory, with high costs and low rates of hit identification. Here, we present a novel computational methodology that is able to identify a high proportion of structurally diverse inhibitors by searching unusually large molecular databases in a time-, cost- and resource-efficient manner. This virtual screening methodology was tested prospectively on two versions of an antibacterial target (type II dehydroquinase from Mycobacterium tuberculosis and Streptomyces coelicolor), for which HTS has not provided satisfactory results and consequently practically all known inhibitors are derivatives of the same core scaffold. Overall, our protocols identified 100 new inhibitors, with calculated Ki ranging from 4 to 250 μM (confirmed hit rates are 60% and 62% against each version of the target). Most importantly, over 50 new active molecular scaffolds were discovered that underscore the benefits that a wide application of prospectively validated in silico screening tools is likely to bring to antibacterial hit identification. PMID:22933186

  8. Hierarchical virtual screening for the discovery of new molecular scaffolds in antibacterial hit identification.

    PubMed

    Ballester, Pedro J; Mangold, Martina; Howard, Nigel I; Robinson, Richard L Marchese; Abell, Chris; Blumberger, Jochen; Mitchell, John B O

    2012-12-07

    One of the initial steps of modern drug discovery is the identification of small organic molecules able to inhibit a target macromolecule of therapeutic interest. A small proportion of these hits are further developed into lead compounds, which in turn may ultimately lead to a marketed drug. A commonly used screening protocol used for this task is high-throughput screening (HTS). However, the performance of HTS against antibacterial targets has generally been unsatisfactory, with high costs and low rates of hit identification. Here, we present a novel computational methodology that is able to identify a high proportion of structurally diverse inhibitors by searching unusually large molecular databases in a time-, cost- and resource-efficient manner. This virtual screening methodology was tested prospectively on two versions of an antibacterial target (type II dehydroquinase from Mycobacterium tuberculosis and Streptomyces coelicolor), for which HTS has not provided satisfactory results and consequently practically all known inhibitors are derivatives of the same core scaffold. Overall, our protocols identified 100 new inhibitors, with calculated K(i) ranging from 4 to 250 μM (confirmed hit rates are 60% and 62% against each version of the target). Most importantly, over 50 new active molecular scaffolds were discovered that underscore the benefits that a wide application of prospectively validated in silico screening tools is likely to bring to antibacterial hit identification.

  9. A new version of the RDP (Ribosomal Database Project)

    NASA Technical Reports Server (NTRS)

    Maidak, B. L.; Cole, J. R.; Parker, C. T. Jr; Garrity, G. M.; Larsen, N.; Li, B.; Lilburn, T. G.; McCaughey, M. J.; Olsen, G. J.; Overbeek, R.

    1999-01-01

    The Ribosomal Database Project (RDP-II), previously described by Maidak et al. [Nucleic Acids Res. (1997), 25, 109-111], is now hosted by the Center for Microbial Ecology at Michigan State University. RDP-II is a curated database that offers ribosomal RNA (rRNA) nucleotide sequence data in aligned and unaligned forms, analysis services, and associated computer programs. During the past two years, data alignments have been updated and now include >9700 small subunit rRNA sequences. The recent development of an ObjectStore database will provide more rapid updating of data, better data accuracy and increased user access. RDP-II includes phylogenetically ordered alignments of rRNA sequences, derived phylogenetic trees, rRNA secondary structure diagrams, and various software programs for handling, analyzing and displaying alignments and trees. The data are available via anonymous ftp (ftp.cme.msu.edu) and WWW (http://www.cme.msu.edu/RDP). The WWW server provides ribosomal probe checking, approximate phylogenetic placement of user-submitted sequences, screening for possible chimeric rRNA sequences, automated alignment, and a suggested placement of an unknown sequence on an existing phylogenetic tree. Additional utilities also exist at RDP-II, including distance matrix, T-RFLP, and a Java-based viewer of the phylogenetic trees that can be used to create subtrees.

  10. Mobile application MDDCS for modeling the expansion dynamics of a dislocation loop in FCC metals

    NASA Astrophysics Data System (ADS)

    Kirilyuk, Vasiliy; Petelin, Alexander; Eliseev, Andrey

    2017-11-01

    A mobile version of the software package Dynamic Dislocation of Crystallographic Slip (MDDCS) designed for modeling the expansion dynamics of dislocation loops and formation of a crystallographic slip zone in FCC-metals is examined. The paper describes the possibilities for using MDDCS, the application interface, and the database scheme. The software has a simple and intuitive interface and does not require special training. The user can set the initial parameters of the experiment, carry out computational experiments, export parameters and results of the experiment into separate text files, and display the experiment results on the device screen.

  11. Bacterial contamination of computer touch screens.

    PubMed

    Gerba, Charles P; Wuollet, Adam L; Raisanen, Peter; Lopez, Gerardo U

    2016-03-01

    The goal of this study was to determine the occurrence of opportunistic bacterial pathogens on the surfaces of computer touch screens used in hospitals and grocery stores. Opportunistic pathogenic bacteria were isolated from touch screens in hospitals (Clostridium difficile and vancomycin-resistant Enterococcus) and in grocery stores (methicillin-resistant Staphylococcus aureus). Enteric bacteria were more common on grocery store touch screens than on hospital computer touch screens. Published by Elsevier Inc.

  12. S2RSLDB: a comprehensive manually curated, internet-accessible database of the sigma-2 receptor selective ligands.

    PubMed

    Nastasi, Giovanni; Miceli, Carla; Pittalà, Valeria; Modica, Maria N; Prezzavento, Orazio; Romeo, Giuseppe; Rescifina, Antonio; Marrazzo, Agostino; Amata, Emanuele

    2017-01-01

    Sigma (σ) receptors are accepted as a particular receptor class consisting of two subtypes: sigma-1 (σ1) and sigma-2 (σ2). The two receptor subtypes have specific drug actions, pharmacological profiles and molecular characteristics. The σ2 receptor is overexpressed in several tumor cell lines, and its ligands are currently under investigation for their role in tumor diagnosis and treatment. The σ2 receptor structure has not been disclosed, and researchers rely on σ2 receptor radioligand binding assays to understand the receptor's pharmacological behavior and design new lead compounds. Here we present the sigma-2 Receptor Selective Ligands Database (S2RSLDB), a manually curated database of σ2 receptor selective ligands containing more than 650 compounds. The database is built with chemical structure information, radioligand binding affinity data, computed physicochemical properties, and experimental radioligand binding procedures. The S2RSLDB is freely available online without account login, and its powerful search engine allows the user to build complex queries, sort tabulated results, generate color-coded 2D and 3D graphs, and download the data for additional screening. The collection reported here is extremely useful for the development of new ligands endowed with σ2 receptor affinity, selectivity, and appropriate physicochemical properties. The database will be updated yearly, and in the near future an online submission form will be made available to help keep the database widely disseminated in the research community and continually updated. The database is available at http://www.researchdsf.unict.it/S2RSLDB.

  13. An image database management system for conducting CAD research

    NASA Astrophysics Data System (ADS)

    Gruszauskas, Nicholas; Drukker, Karen; Giger, Maryellen L.

    2007-03-01

    The development of image databases for CAD research is not a trivial task. The collection and management of images and their related metadata from multiple sources is a time-consuming but necessary process. By standardizing and centralizing the methods in which these data are maintained, one can generate subsets of a larger database that match the specific criteria needed for a particular research project in a quick and efficient manner. A research-oriented management system of this type is highly desirable in a multi-modality CAD research environment. An online, web-based database system for the storage and management of research-specific medical image metadata was designed for use with four modalities of breast imaging: screen-film mammography, full-field digital mammography, breast ultrasound and breast MRI. The system was designed to consolidate data from multiple clinical sources and provide the user with the ability to anonymize the data. Input concerning the type of data to be stored as well as desired searchable parameters was solicited from researchers in each modality. The backbone of the database was created using MySQL. A robust and easy-to-use interface for entering, removing, modifying and searching information in the database was created using HTML and PHP. This standardized system can be accessed using any modern web-browsing software and is fundamental for our various research projects on computer-aided detection, diagnosis, cancer risk assessment, multimodality lesion assessment, and prognosis. Our CAD database system stores large amounts of research-related metadata and successfully generates subsets of cases that match the user's desired search criteria.
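The subset-generation workflow described above can be sketched with Python's built-in sqlite3 in place of MySQL. The table fields and values below are illustrative placeholders, not the authors' actual schema.

```python
import sqlite3

# Minimal sketch of a research image-metadata store: insert anonymized
# records, then generate a subset matching specific search criteria.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE images (
    id INTEGER PRIMARY KEY,
    modality TEXT,       -- e.g. 'FFDM', 'ultrasound', 'MRI' (hypothetical)
    anonymized_id TEXT,  -- patient identifier after anonymization
    finding TEXT)""")
rows = [("FFDM", "anon-001", "mass"),
        ("ultrasound", "anon-002", "cyst"),
        ("FFDM", "anon-003", "calcification")]
conn.executemany(
    "INSERT INTO images (modality, anonymized_id, finding) VALUES (?, ?, ?)",
    rows)

# Parameterized query: the research subset for one project's criteria.
subset = conn.execute(
    "SELECT anonymized_id FROM images WHERE modality = ? AND finding = ?",
    ("FFDM", "mass")).fetchall()
print(subset)  # [('anon-001',)]
```

The parameterized `?` placeholders are the same pattern a PHP/MySQL front end would use to keep user-supplied search criteria safe.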

  14. Chemical graphs, molecular matrices and topological indices in chemoinformatics and quantitative structure-activity relationships.

    PubMed

    Ivanciuc, Ovidiu

    2013-06-01

    Chemical and molecular graphs have fundamental applications in chemoinformatics, quantitative structure-property relationships (QSPR), quantitative structure-activity relationships (QSAR), virtual screening of chemical libraries, and computational drug design. Chemoinformatics applications of graphs include chemical structure representation and coding, database search and retrieval, and physicochemical property prediction. QSPR, QSAR and virtual screening are based on the structure-property principle, which states that the physicochemical and biological properties of chemical compounds can be predicted from their chemical structure. Such structure-property correlations are usually developed from topological indices and fingerprints computed from the molecular graph and from molecular descriptors computed from the three-dimensional chemical structure. We present here a selection of the most important graph descriptors and topological indices, including molecular matrices, graph spectra, spectral moments, graph polynomials, and vertex topological indices. These graph descriptors are used to define several topological indices based on molecular connectivity, graph distance, reciprocal distance, distance-degree, distance-valency, spectra, polynomials, and information theory concepts. The molecular descriptors and topological indices can be developed with a more general approach, based on molecular graph operators, which define a family of graph indices related by a common formula. Graph descriptors and topological indices for molecules containing heteroatoms and multiple bonds are computed with weighting schemes based on atomic properties, such as the atomic number, covalent radius, or electronegativity. The correlation in QSPR and QSAR models can be improved by optimizing some parameters in the formula of topological indices, as demonstrated for structural descriptors based on atomic connectivity and graph distance.
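As a concrete example of the distance-based topological indices discussed above, the Wiener index sums shortest-path distances over all vertex pairs of the hydrogen-suppressed molecular graph. A minimal BFS-based sketch (graph encoding invented for illustration):

```python
from collections import deque

# Wiener index: sum of shortest-path (bond-count) distances over all
# unordered pairs of atoms in a hydrogen-suppressed molecular graph,
# given as an adjacency-list dict {atom: [neighbours]}.
def wiener_index(adj):
    total = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:                       # unweighted BFS from src
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total // 2                  # each pair was counted twice

# n-butane carbon skeleton: a path of 4 atoms
butane = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(wiener_index(butane))  # 10
```

The branched isomer isobutane (a star on 4 atoms) gives 9, illustrating how the index discriminates constitutional isomers: branching shortens average distances.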

  15. Computer algorithms and applications used to assist the evaluation and treatment of adolescent idiopathic scoliosis: a review of published articles 2000-2009.

    PubMed

    Phan, Philippe; Mezghani, Neila; Aubin, Carl-Éric; de Guise, Jacques A; Labelle, Hubert

    2011-07-01

    Adolescent idiopathic scoliosis (AIS) is a complex spinal deformity whose assessment and treatment present many challenges. Computer applications have been developed to assist clinicians. A literature review of computer applications used in AIS evaluation and treatment has been undertaken. The algorithms used, their accuracy and their clinical usability were analyzed. Computer applications have been used to create new classifications for AIS based on 2D and 3D features, to assess scoliosis severity or risk of progression, and to assist bracing and surgical treatment. It was found that classification accuracy could be improved using computer algorithms, that AIS patient follow-up and screening could be done using surface topography (thereby limiting radiation exposure), and that bracing and surgical treatment could be optimized using simulations. Yet few computer applications are routinely used in clinics. With the development of 3D imaging and databases, huge amounts of clinical and geometrical data need to be taken into consideration when researching and managing AIS. Computer applications based on advanced algorithms will be able to handle tasks that could otherwise not be done, which may improve the management of AIS patients. Clinically oriented applications, and evidence that they can improve current care, will be required for their integration in the clinical setting.

  16. Delivering The Benefits of Chemical-Biological Integration in ...

    EPA Pesticide Factsheets

    Abstract: Researchers at the EPA’s National Center for Computational Toxicology integrate advances in biology, chemistry, and computer science to examine the toxicity of chemicals and help prioritize chemicals for further research based on potential human health risks. The intention of this research program is to quickly evaluate thousands of chemicals for potential risk but at much reduced cost relative to historical approaches. This work involves computational and data-driven approaches including high-throughput screening, modeling, text-mining and the integration of chemistry, exposure and biological data. We have developed a number of databases and applications that are delivering on the vision of developing a deeper understanding of chemicals and their effects on exposure and biological processes, supporting a large community of scientists in their research efforts. This presentation will provide an overview of our work to bring together diverse large-scale data from the chemical and biological domains, our approaches to integrate and disseminate these data, and the delivery of models supporting computational toxicology. This abstract does not reflect U.S. EPA policy. Presentation at ACS TOXI session on Computational Chemistry and Toxicology in Chemical Discovery and Assessment (QSARs).

  17. Modeling resident error-making patterns in detection of mammographic masses using computer-extracted image features: preliminary experiments

    NASA Astrophysics Data System (ADS)

    Mazurowski, Maciej A.; Zhang, Jing; Lo, Joseph Y.; Kuzmiak, Cherie M.; Ghate, Sujata V.; Yoon, Sora

    2014-03-01

    Providing high-quality mammography education to radiology trainees is essential, as good interpretation skills potentially ensure the highest benefit of screening mammography for patients. We have previously proposed a computer-aided education system that utilizes trainee models, which relate human-assessed image characteristics to interpretation error. We proposed that these models be used to identify the most difficult, and therefore the most educationally useful, cases for each trainee. In this study, as a next step in our research, we propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms. To predict error, we used logistic regression, which accepts imaging features as input and returns error as output. Reader data from 3 experts and 3 trainees were used. Receiver operating characteristic analysis was applied to evaluate the proposed trainee models. Our experiments showed that, for the three trainees, our models were able to predict error better than chance. This is an important step in the development of adaptive computer-aided education systems, since computer-extracted features will allow faster and more extensive searches of imaging databases to identify the most educationally beneficial cases.
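
The trainee model described above pairs computer-extracted image features with a logistic-regression error predictor evaluated by ROC analysis. A minimal sketch of that pipeline, using synthetic stand-in features and labels rather than the study's real mammography data (all values here are fabricated for illustration):

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Fit a logistic-regression error model by gradient descent on log-loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted error probability
        grad = p - y                              # gradient of the log-loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def roc_auc(scores, labels):
    """ROC AUC via the rank-sum (Mann-Whitney) formulation."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Synthetic stand-in: two computer-extracted mass features vs. miss/hit labels
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(float)
w, b = fit_logistic(X, y)
auc = roc_auc(X @ w + b, y)   # better than chance (0.5) on this toy data
```

The same AUC-above-chance check is the criterion the abstract reports for the real trainee models.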

  18. DR HAGIS-a fundus image database for the automatic extraction of retinal surface vessels from diabetic patients.

    PubMed

    Holm, Sven; Russell, Greg; Nourrit, Vincent; McLoughlin, Niall

    2017-01-01

    A database of retinal fundus images, the DR HAGIS database, is presented. This database consists of 39 high-resolution color fundus images obtained from a diabetic retinopathy screening program in the UK. The NHS screening program uses service providers that employ different fundus and digital cameras. This results in a range of different image sizes and resolutions. Furthermore, patients enrolled in such programs often display other comorbidities in addition to diabetes. Therefore, in an effort to replicate the normal range of images examined by grading experts during screening, the DR HAGIS database consists of images of varying sizes and resolutions and four comorbidity subgroups, collectively defined as the diabetic retinopathy, hypertension, age-related macular degeneration, and glaucoma image set (DR HAGIS). For each image, the vasculature has been manually segmented to provide a realistic set of images on which to test automatic vessel extraction algorithms. Modified versions of two previously published vessel extraction algorithms were applied to this database to provide some baseline measurements. A method based purely on the intensity of image pixels resulted in a mean segmentation accuracy of 95.83% ([Formula: see text]), whereas an algorithm based on Gabor filters generated an accuracy of 95.71% ([Formula: see text]).
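
As a toy illustration of the baseline intensity-based approach and the pixel-wise accuracy metric reported above, here is a sketch on a synthetic image; the image, threshold, and "vessel" are invented for illustration, not taken from DR HAGIS:

```python
import numpy as np

def intensity_segment(image, threshold):
    """Naive intensity-based vessel mask: dark pixels are flagged as vessel."""
    return image < threshold

def segmentation_accuracy(pred, truth):
    """Fraction of pixels where the predicted mask matches the manual one."""
    return np.mean(pred == truth)

# Toy 'fundus' image: a dark diagonal band stands in for a vessel
img = np.full((50, 50), 200, dtype=np.uint8)     # bright background
truth = np.zeros((50, 50), dtype=bool)           # manual segmentation
for i in range(50):
    img[i, max(0, i - 1):i + 2] = 40             # dark vessel pixels
    truth[i, max(0, i - 1):i + 2] = True

pred = intensity_segment(img, 100)
acc = segmentation_accuracy(pred, truth)         # 1.0 on this clean toy image
```

On real fundus images, noise, lesions, and uneven illumination pull this number down to the ~96% range the abstract reports.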

  19. The impact of computer self-efficacy, computer anxiety, and perceived usability and acceptability on the efficacy of a decision support tool for colorectal cancer screening

    PubMed Central

    Lindblom, Katrina; Gregory, Tess; Flight, Ingrid H K; Zajac, Ian

    2011-01-01

    Objective This study investigated the efficacy of an internet-based personalized decision support (PDS) tool designed to aid in the decision to screen for colorectal cancer (CRC) using a fecal occult blood test. We tested whether the efficacy of the tool in influencing attitudes to screening was mediated by perceived usability and acceptability, and considered the role of computer self-efficacy and computer anxiety in these relationships. Methods Eighty-one participants aged 50–76 years worked through the on-line PDS tool and completed questionnaires on computer self-efficacy, computer anxiety, attitudes to and beliefs about CRC screening before and after exposure to the PDS, and perceived usability and acceptability of the tool. Results Repeated measures ANOVA found that PDS exposure led to a significant increase in knowledge about CRC and screening, and more positive attitudes to CRC screening as measured by factors from the Preventive Health Model. Perceived usability and acceptability of the PDS mediated changes in attitudes toward CRC screening (but not CRC knowledge), and computer self-efficacy and computer anxiety were significant predictors of individuals' perceptions of the tool. Conclusion Interventions designed to decrease computer anxiety, such as computer courses and internet training, may improve the acceptability of new health information technologies including internet-based decision support tools, increasing their impact on behavior change. PMID:21857024

  20. DEEP: A Database of Energy Efficiency Performance to Accelerate Energy Retrofitting of Commercial Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoon Lee, Sang; Hong, Tianzhen; Sawaya, Geof

    The paper presents a method and process to establish a database of energy efficiency performance (DEEP) to enable quick and accurate assessment of energy retrofits of commercial buildings. DEEP was compiled from the results of about 35 million EnergyPlus simulations. DEEP provides energy savings for screening and evaluation of retrofit measures targeting small and medium-sized office and retail buildings in California. The prototype building models were developed for a comprehensive assessment of building energy performance based on the DOE commercial reference buildings and the California DEER prototype buildings. The prototype buildings represent seven building types across six construction vintages and 16 California climate zones. DEEP uses these prototypes to evaluate the energy performance of about 100 energy conservation measures covering envelope, lighting, heating, ventilation, air-conditioning, plug loads, and domestic hot water. DEEP consists of the energy simulation results for individual retrofit measures as well as packages of measures, to account for interactive effects between multiple measures. The large-scale EnergyPlus simulations are being conducted on the supercomputers at the National Energy Research Scientific Computing Center of Lawrence Berkeley National Laboratory. The pre-simulated database is part of an on-going project to develop a web-based retrofit toolkit for small and medium-sized commercial buildings in California, which provides real-time energy retrofit feedback by querying DEEP with recommended measures, estimated energy savings, and financial payback period based on users' decision criteria of maximizing energy savings, energy cost savings, carbon reduction, or payback of investment.
    The pre-simulated database and associated comprehensive measure analysis enhance the ability to perform assessments of retrofits to reduce energy use for small and medium buildings and business owners who typically do not have the resources to conduct costly building energy audits. DEEP will be migrated into DEnCity - DOE's Energy City, which integrates large-scale energy data into a multi-purpose, open, and dynamic database leveraging diverse sources of existing simulation data.
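
The toolkit workflow described above, querying a pre-simulated database and ranking measures by a user's decision criterion, can be sketched as follows; the measures, savings figures, and energy price are hypothetical placeholders, not values from DEEP:

```python
# Hypothetical miniature stand-in for querying DEEP-style pre-simulated results
measures = [
    {"measure": "LED lighting retrofit", "kwh_saved": 12000, "cost": 8000},
    {"measure": "Cool roof",             "kwh_saved": 4000,  "cost": 6000},
    {"measure": "HVAC economizer",       "kwh_saved": 9000,  "cost": 5000},
]

def rank_by_payback(measures, price_per_kwh=0.18):
    """Rank measures by simple payback = cost / annual energy-cost savings."""
    for m in measures:
        m["payback_years"] = m["cost"] / (m["kwh_saved"] * price_per_kwh)
    return sorted(measures, key=lambda m: m["payback_years"])

ranked = rank_by_payback(measures)   # shortest payback first
```

Swapping the sort key (kWh saved, cost savings, carbon reduction) reproduces the other decision criteria the toolkit supports.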

  1. 8. DETAIL OF COMPUTER SCREEN AND CONTROL BOARDS: LEFT SCREEN ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    8. DETAIL OF COMPUTER SCREEN AND CONTROL BOARDS: LEFT SCREEN TRACKS RESIDUAL CHLORINE; INDICATES AMOUNT OF SUNLIGHT WHICH ENABLES OPERATOR TO ESTIMATE NEEDED CHLORINE; CENTER SCREEN SHOWS TURNOUT STRUCTURES; RIGHT SCREEN SHOWS INDICATORS OF ALUMINUM SULFATE TANK FARM. - F. E. Weymouth Filtration Plant, 700 North Moreno Avenue, La Verne, Los Angeles County, CA

  2. The LAILAPS search engine: a feature model for relevance ranking in life science databases.

    PubMed

    Lange, Matthias; Spies, Karl; Colmsee, Christian; Flemming, Steffen; Klapperstück, Matthias; Scholz, Uwe

    2010-03-25

    Efficient and effective information retrieval in life sciences is one of the most pressing challenges in bioinformatics. The incredible growth of life science databases into a vast network of interconnected information systems is to the same extent a big challenge and a great chance for life science research. The knowledge found on the Web, in particular in life-science databases, is a valuable resource. In order to bring it to the scientist's desktop, it is essential to have well-performing search engines. Here, neither the response time nor the number of results is the decisive factor: for millions of query results, the most crucial factor is relevance ranking. In this paper, we present a feature model for relevance ranking in life science databases and its implementation in the LAILAPS search engine. Motivated by the observation of user behavior during inspection of search engine results, we condensed a set of 9 relevance-discriminating features. These features are intuitively used by scientists who briefly screen database entries for potential relevance. The features are both sufficient to estimate the potential relevance and efficiently quantifiable. The derivation of a relevance prediction function that computes the relevance from these features constitutes a regression problem. To solve this problem, we used artificial neural networks trained with a reference set of relevant database entries for 19 protein queries. Supporting a flexible text index and a simple data import format, these concepts are implemented in the LAILAPS search engine. It can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases. LAILAPS is publicly available for SWISSPROT data at http://lailaps.ipk-gatersleben.de.
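
The paper's core idea, regressing a relevance score from a small set of per-entry features with a neural network, can be sketched with a tiny one-hidden-layer network on synthetic data (the 9 features and relevance targets below are fabricated for illustration; LAILAPS's actual features and training set differ):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training set: 9 relevance features per entry, target relevance in [0, 1]
X = rng.random((100, 9))
true_w = rng.random(9)
y = (X @ true_w) / true_w.sum()          # synthetic 'expert' relevance labels

# One-hidden-layer network trained by gradient descent on squared error
W1 = rng.normal(scale=0.3, size=(9, 6)); b1 = np.zeros(6)
W2 = rng.normal(scale=0.3, size=6);      b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)             # hidden layer
    return h, h @ W2 + b2                # predicted relevance

_, pred0 = forward(X)
mse0 = np.mean((pred0 - y) ** 2)         # error before training
lr = 0.05
for _ in range(2000):
    h, pred = forward(X)
    err = pred - y                       # dLoss/dPred (up to a constant)
    W2 -= lr * h.T @ err / len(y); b2 -= lr * err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)  # backprop through tanh
    W1 -= lr * X.T @ dh / len(y); b1 -= lr * dh.mean(axis=0)
_, pred1 = forward(X)
mse1 = np.mean((pred1 - y) ** 2)         # training error shrinks
```

Entries would then be ranked by the network's predicted relevance rather than by raw hit counts.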

  3. Usefulness of Canadian Public Health Insurance Administrative Databases to Assess Breast and Ovarian Cancer Screening Imaging Technologies for BRCA1/2 Mutation Carriers.

    PubMed

    Larouche, Geneviève; Chiquette, Jocelyne; Plante, Marie; Pelletier, Sylvie; Simard, Jacques; Dorval, Michel

    2016-11-01

    In Canada, recommendations for clinical management of hereditary breast and ovarian cancer among individuals carrying a deleterious BRCA1 or BRCA2 mutation have been available since 2007. Eight years later, very little is known about the uptake of screening and risk-reduction measures in this population. Because Canada's public health care system falls under provincial jurisdictions, using provincial health care administrative databases appears to be a valuable option to assess management of BRCA1/2 mutation carriers. The objective was to explore the usefulness of public health insurance administrative databases in British Columbia, Ontario, and Quebec to assess management after BRCA1/2 genetic testing. Official public health insurance documents were considered potentially useful if they had specific procedure codes and pertained to procedures performed in the public and private health care systems. All 3 administrative databases have specific procedure codes for mammography and breast ultrasounds. Only Quebec and Ontario have a specific procedure code for breast magnetic resonance imaging. It is impossible to assess, on an individual basis, the frequency of other screening exams, with the exception of CA-125 testing in British Columbia. Screenings done in private practice are excluded from the administrative databases unless covered by special agreements for reimbursement, such as all breast imaging exams in Ontario and mammograms in British Columbia and Quebec. There are no specific procedure codes for risk-reduction surgeries for breast and ovarian cancer. Population-based assessment of breast and ovarian cancer risk management strategies other than mammographic screening, using only administrative data, is currently challenging in the 3 Canadian provinces studied. Copyright © 2016 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.

  4. NALDB: nucleic acid ligand database for small molecules targeting nucleic acid

    PubMed Central

    Kumar Mishra, Subodh; Kumar, Amit

    2016-01-01

    Nucleic acid ligand database (NALDB) is a unique database that provides detailed information about the experimental data of small molecules that were reported to target several types of nucleic acid structures. NALDB is the first ligand database that contains ligand information for all types of nucleic acids. NALDB contains more than 3500 ligand entries with detailed pharmacokinetic and pharmacodynamic information such as target name, target sequence, ligand 2D/3D structure, SMILES, molecular formula, molecular weight, net formal charge, AlogP, number of rings, number of hydrogen bond donors and acceptors, and potential energy, along with their Ki, Kd, and IC50 values. All these details on a single platform should help in the development and improvement of novel ligands targeting nucleic acids, which could serve as potential targets in different diseases including cancers and neurological disorders. With a maximum of 255 conformers per ligand entry, our database is a multi-conformer database and can facilitate the virtual screening process. NALDB provides powerful web-based search tools that make database searching efficient and simple, with options for both text and structure queries. NALDB also provides an advanced multi-dimensional search tool that can screen the database molecules on the basis of molecular properties supplied by database users. A 3D structure visualization tool has also been included for 3D structure representation of ligands. NALDB offers inclusive pharmacological information and a structurally flexible set of small molecules with their three-dimensional conformers, which can accelerate virtual screening and other modeling processes and eventually complement nucleic acid-based drug discovery research. NALDB is routinely updated and freely available at bsbe.iiti.ac.in/bsbe/naldb/HOME.php. Database URL: http://bsbe.iiti.ac.in/bsbe/naldb/HOME.php PMID:26896846
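
The advanced multi-dimensional search described above, screening molecules by user-supplied property ranges, might look like this in miniature (the ligand rows and property values are hypothetical, not actual NALDB records):

```python
# Hypothetical mini ligand table mimicking NALDB-style property fields
ligands = [
    {"name": "L1", "mol_weight": 312.4, "alogp": 2.1, "hbd": 2, "hba": 5},
    {"name": "L2", "mol_weight": 589.7, "alogp": 5.8, "hbd": 6, "hba": 11},
    {"name": "L3", "mol_weight": 421.0, "alogp": 3.4, "hbd": 1, "hba": 7},
]

def property_search(ligands, **ranges):
    """Keep ligands whose properties fall inside every (lo, hi) range given."""
    hits = []
    for lig in ligands:
        if all(lo <= lig[key] <= hi for key, (lo, hi) in ranges.items()):
            hits.append(lig["name"])
    return hits

# E.g. drug-like window on molecular weight and AlogP
hits = property_search(ligands, mol_weight=(250, 500), alogp=(0, 4))
```

Each hit would then be expanded to its stored conformers for the downstream virtual screening step.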

  5. Annual Screening Strategies in BRCA1 and BRCA2 Gene Mutation Carriers: A Comparative Effectiveness Analysis

    PubMed Central

    Lowry, Kathryn P.; Lee, Janie M.; Kong, Chung Y.; McMahon, Pamela M.; Gilmore, Michael E.; Cott Chubiz, Jessica E.; Pisano, Etta D.; Gatsonis, Constantine; Ryan, Paula D.; Ozanne, Elissa M.; Gazelle, G. Scott

    2011-01-01

    Background While breast cancer screening with mammography and MRI is recommended for BRCA mutation carriers, there is no current consensus on the optimal screening regimen. Methods We used a computer simulation model to compare six annual screening strategies [film mammography (FM), digital mammography (DM), FM and magnetic resonance imaging (MRI) or DM and MRI contemporaneously, and alternating FM/MRI or DM/MRI at six-month intervals] beginning at ages 25, 30, 35, and 40, and two strategies of annual MRI with delayed alternating DM/FM to clinical surveillance alone. Strategies were evaluated without and with mammography-induced breast cancer risk, using two models of excess relative risk. Input parameters were obtained from the medical literature, publicly available databases, and calibration. Results Without radiation risk effects, alternating DM/MRI starting at age 25 provided the highest life expectancy (BRCA1: 72.52 years, BRCA2: 77.63 years). When radiation risk was included, a small proportion of diagnosed cancers were attributable to radiation exposure (BRCA1: <2%, BRCA2: <4%). With radiation risk, alternating DM/MRI at age 25 or annual MRI at age 25/delayed alternating DM at age 30 were most effective, depending on the radiation risk model used. Alternating DM/MRI starting at age 25 also had the highest number of false-positive screens/person (BRCA1: 4.5, BRCA2: 8.1). Conclusions Annual MRI at 25/delayed alternating DM at age 30 is likely the most effective screening strategy in BRCA mutation carriers. Screening benefits, associated risks and personal acceptance of false-positive results, should be considered in choosing the optimal screening strategy for individual women. PMID:21935911

  6. Development of a consumer product ingredient database for chemical exposure screening and prioritization.

    PubMed

    Goldsmith, M-R; Grulke, C M; Brooks, R D; Transue, T R; Tan, Y M; Frame, A; Egeghy, P P; Edwards, R; Chang, D T; Tornero-Velez, R; Isaacs, K; Wang, A; Johnson, J; Holm, K; Reich, M; Mitchell, J; Vallero, D A; Phillips, L; Phillips, M; Wambaugh, J F; Judson, R S; Buckley, T J; Dary, C C

    2014-03-01

    Consumer products are a primary source of chemical exposures, yet little structured information is available on the chemical ingredients of these products and the concentrations at which ingredients are present. To address this data gap, we created a database of chemicals in consumer products using product Material Safety Data Sheets (MSDSs) publicly provided by a large retailer. The resulting database represents 1797 unique chemicals mapped to 8921 consumer products and a hierarchy of 353 consumer product "use categories" within a total of 15 top-level categories. We examine the utility of this database and discuss ways in which it will support (i) exposure screening and prioritization, (ii) generic or framework formulations for several indoor/consumer product exposure modeling initiatives, (iii) candidate chemical selection for monitoring near field exposure from proximal sources, and (iv) as activity tracers or ubiquitous exposure sources using "chemical space" map analyses. Chemicals present at high concentrations and across multiple consumer products and use categories that hold high exposure potential are identified. Our database is publicly available to serve regulators, retailers, manufacturers, and the public for predictive screening of chemicals in new and existing consumer products on the basis of exposure and risk. Published by Elsevier Ltd.
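
A minimal sketch of the kind of exposure-prioritization query such a chemical-to-product mapping enables, ranking chemicals by how many products they appear in (the rows below are invented examples, not entries from the actual database):

```python
from collections import Counter

# Hypothetical MSDS-derived rows: (chemical, product, use category)
rows = [
    ("limonene",  "citrus cleaner", "household cleaners"),
    ("limonene",  "air freshener",  "air care"),
    ("limonene",  "dish soap",      "household cleaners"),
    ("triclosan", "hand soap",      "personal care"),
]

def exposure_rank(rows):
    """Rank chemicals by product count; more products suggests broader
    near-field exposure potential, all else being equal."""
    return Counter(chem for chem, _, _ in rows).most_common()

ranked = exposure_rank(rows)
```

In the real database the same query runs over 1797 chemicals and 8921 products, and concentration data would weight the ranking further.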

  7. GenomeRNAi: a database for cell-based RNAi phenotypes.

    PubMed

    Horn, Thomas; Arziman, Zeynep; Berger, Juerg; Boutros, Michael

    2007-01-01

    RNA interference (RNAi) has emerged as a powerful tool to generate loss-of-function phenotypes in a variety of organisms. Combined with the sequence information of almost completely annotated genomes, RNAi technologies have opened new avenues to conduct systematic genetic screens for every annotated gene in the genome. As increasing large datasets of RNAi-induced phenotypes become available, an important challenge remains the systematic integration and annotation of functional information. Genome-wide RNAi screens have been performed both in Caenorhabditis elegans and Drosophila for a variety of phenotypes and several RNAi libraries have become available to assess phenotypes for almost every gene in the genome. These screens were performed using different types of assays from visible phenotypes to focused transcriptional readouts and provide a rich data source for functional annotation across different species. The GenomeRNAi database provides access to published RNAi phenotypes obtained from cell-based screens and maps them to their genomic locus, including possible non-specific regions. The database also gives access to sequence information of RNAi probes used in various screens. It can be searched by phenotype, by gene, by RNAi probe or by sequence and is accessible at http://rnai.dkfz.de.

  8. GenomeRNAi: a database for cell-based RNAi phenotypes

    PubMed Central

    Horn, Thomas; Arziman, Zeynep; Berger, Juerg; Boutros, Michael

    2007-01-01

    RNA interference (RNAi) has emerged as a powerful tool to generate loss-of-function phenotypes in a variety of organisms. Combined with the sequence information of almost completely annotated genomes, RNAi technologies have opened new avenues to conduct systematic genetic screens for every annotated gene in the genome. As increasing large datasets of RNAi-induced phenotypes become available, an important challenge remains the systematic integration and annotation of functional information. Genome-wide RNAi screens have been performed both in Caenorhabditis elegans and Drosophila for a variety of phenotypes and several RNAi libraries have become available to assess phenotypes for almost every gene in the genome. These screens were performed using different types of assays from visible phenotypes to focused transcriptional readouts and provide a rich data source for functional annotation across different species. The GenomeRNAi database provides access to published RNAi phenotypes obtained from cell-based screens and maps them to their genomic locus, including possible non-specific regions. The database also gives access to sequence information of RNAi probes used in various screens. It can be searched by phenotype, by gene, by RNAi probe or by sequence and is accessible at http://rnai.dkfz.de. PMID:17135194

  9. In silico discovery of metal-organic frameworks for precombustion CO2 capture using a genetic algorithm

    PubMed Central

    Chung, Yongchul G.; Gómez-Gualdrón, Diego A.; Li, Peng; Leperi, Karson T.; Deria, Pravas; Zhang, Hongda; Vermeulen, Nicolaas A.; Stoddart, J. Fraser; You, Fengqi; Hupp, Joseph T.; Farha, Omar K.; Snurr, Randall Q.

    2016-01-01

    Discovery of new adsorbent materials with a high CO2 working capacity could help reduce CO2 emissions from newly commissioned power plants using precombustion carbon capture. High-throughput computational screening efforts can accelerate the discovery of new adsorbents but sometimes require significant computational resources to explore the large space of possible materials. We report the in silico discovery of high-performing adsorbents for precombustion CO2 capture by applying a genetic algorithm to efficiently search a large database of metal-organic frameworks (MOFs) for top candidates. High-performing MOFs identified from the in silico search were synthesized and activated and show a high CO2 working capacity and a high CO2/H2 selectivity. One of the synthesized MOFs shows a higher CO2 working capacity than any MOF reported in the literature under the operating conditions investigated here. PMID:27757420
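
The genetic-algorithm search described above, which avoids exhaustively simulating every MOF in the database, can be sketched as follows; the two-gene genome and the analytic fitness function are stand-ins for real building-block choices and simulated CO2 working capacities:

```python
import random

random.seed(7)

def fitness(genome):
    """Black-box stand-in for a simulated CO2 working capacity; in the real
    workflow this would be a molecular simulation of the candidate MOF."""
    a, b = genome
    return -((a - 7) ** 2 + (b - 3) ** 2)   # combination (7, 3) is optimal

def mutate(genome, n_blocks, rate=0.2):
    """Randomly resample each gene (building-block index) with some probability."""
    return tuple(random.randrange(n_blocks) if random.random() < rate else g
                 for g in genome)

def evolve(pop_size=30, generations=40, n_blocks=20):
    pop = [(random.randrange(n_blocks), random.randrange(n_blocks))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]             # truncation selection + elitism
        children = []
        while len(parents) + len(children) < pop_size:
            p1, p2 = random.sample(parents, 2)
            children.append(mutate((p1[0], p2[1]), n_blocks))  # crossover
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()   # evaluates far fewer genomes than the full 20x20 grid
```

The efficiency argument is the same as in the paper: selection concentrates expensive fitness evaluations on promising regions of the candidate space.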

  10. Computer capillaroscopy as a new cardiological diagnostics method

    NASA Astrophysics Data System (ADS)

    Gurfinkel, Youri I.; Korol, Oleg A.; Kufal, George E.

    1998-04-01

    The blood flow in capillary vessels plays an important role in sustaining the vital activity of the human organism. The computerized capillaroscope is used for the investigations of nailfold (eponychium) capillary blood flow. An important advantage of the instrument is the possibility of performing non-invasive investigations, i.e., without damage to skin or vessels and causing no pain or unpleasant sensations. The high-class equipment and software allow direct observation of capillary blood flow dynamics on a computer screen at a 700 - 1300 times magnification. For the first time in the clinical practice, it has become possible to precisely measure the speed of capillary blood flow, as well as the frequency of aggregate formation (glued together in clots of blood particles). In addition, provision is made for automatic measurement of capillary size and wall thickness and automatic recording of blood aggregate images for further visual study, documentation, and electronic database management.

  11. Computational Chemistry Comparison and Benchmark Database

    National Institute of Standards and Technology Data Gateway

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  12. Deep Convolutional Neural Networks for breast cancer screening.

    PubMed

    Chougrad, Hiba; Zouaki, Hamid; Alheyane, Omar

    2018-04-01

    Radiologists often have a hard time classifying mammography mass lesions, which leads to unnecessary breast biopsies to resolve suspicion; this adds exorbitant expense to an already burdened patient and health care system. In this paper we developed a Computer-aided Diagnosis (CAD) system based on deep Convolutional Neural Networks (CNN) that aims to help the radiologist classify mammography mass lesions. Deep learning usually requires large datasets to train networks of a certain depth from scratch. Transfer learning is an effective method to deal with relatively small datasets, as in the case of medical images, although it can be tricky as we can easily start overfitting. In this work, we explore the importance of transfer learning and we experimentally determine the best fine-tuning strategy to adopt when training a CNN model. We were able to successfully fine-tune some of the recent, most powerful CNNs and achieved better results compared to other state-of-the-art methods which classified the same public datasets. For instance we achieved 97.35% accuracy and 0.98 AUC on the DDSM database, 95.50% accuracy and 0.97 AUC on the INbreast database and 96.67% accuracy and 0.96 AUC on the BCDR database. Furthermore, after pre-processing and normalizing all the extracted Regions of Interest (ROIs) from the full mammograms, we merged all the datasets to build one large set of images and used it to fine-tune our CNNs. The CNN model which achieved the best results, a 98.94% accuracy, was used as a baseline to build the Breast Cancer Screening Framework. To evaluate the proposed CAD system and its efficiency in classifying new images, we tested it on an independent database (MIAS) and got 98.23% accuracy and 0.99 AUC. The results obtained demonstrate that the proposed framework is performant and can indeed be used to predict whether mass lesions are benign or malignant. Copyright © 2018 Elsevier B.V. All rights reserved.
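
One fine-tuning strategy weighed in such experiments, freezing the pretrained feature extractor and training only a new classification head, can be illustrated with a toy numpy stand-in (a frozen random projection replaces the pretrained convolutional layers; no real CNN or mammography data is involved):

```python
import numpy as np

rng = np.random.default_rng(3)

# 'Pretrained' feature extractor: a frozen random projection + ReLU stands in
# for convolutional layers whose weights are left untouched during fine-tuning.
W_frozen = rng.normal(size=(8, 40))

def features(X):
    return np.maximum(X @ W_frozen, 0.0)   # frozen layers: never updated

# Toy binary task; only the new classification head is trained
X = rng.normal(size=(300, 8))
y = (X[:, 0] - X[:, 1] > 0).astype(float)
F = features(X)

w = np.zeros(40); b = 0.0                  # logistic-regression head
for _ in range(1500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    w -= 0.05 * F.T @ (p - y) / len(y)
    b -= 0.05 * (p - y).mean()

acc = np.mean(((F @ w + b) > 0) == (y == 1))   # training accuracy of the head
```

Unfreezing deeper layers (the costlier strategies compared in the paper) corresponds to also updating `W_frozen` with a small learning rate.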

  13. LS-align: an atom-level, flexible ligand structural alignment algorithm for high-throughput virtual screening.

    PubMed

    Hu, Jun; Liu, Zi; Yu, Dong-Jun; Zhang, Yang

    2018-02-15

    Sequence-order-independent structural comparison, also called structural alignment, of small ligand molecules is often needed for computer-aided virtual drug screening. Although many ligand structure alignment programs have been proposed, most of them build alignments based on rigid-body shape comparison, which can neither provide atom-specific alignment information nor allow structural variation; both abilities are critical to efficient high-throughput virtual screening. We propose a novel ligand comparison algorithm, LS-align, to generate fast and accurate atom-level structural alignments of ligand molecules through an iterative heuristic search of a target function that combines inter-atom distance with mass and chemical-bond comparisons. LS-align contains two modules, Rigid-LS-align and Flexi-LS-align, designed for rigid-body and flexible alignments, respectively, where a ligand-size-independent, statistics-based scoring function is developed to evaluate the similarity of ligand molecules relative to random ligand pairs. Large-scale benchmark tests were performed on prioritizing chemical ligands of 102 protein targets involving 1,415,871 candidate compounds from the DUD-E (Database of Useful Decoys: Enhanced) database, where LS-align achieves an average enrichment factor (EF) of 22.0 at the 1% cutoff and an AUC score of 0.75, both significantly higher than other state-of-the-art methods. Detailed data analyses show that the advanced performance is mainly attributable to the design of the target function, which combines structural and chemical information to enhance the sensitivity of recognizing subtle differences between ligand molecules, and to the introduction of structural flexibility, which helps capture the conformational changes induced by ligand-receptor binding interactions. These data demonstrate a new avenue to improve virtual screening efficiency through the development of sensitive ligand structural alignments.
    Available at http://zhanglab.ccmb.med.umich.edu/LS-align/. Contact: njyudj@njust.edu.cn or zhng@umich.edu. Supplementary data are available at Bioinformatics online.
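
The enrichment factor reported above measures how strongly a ranking concentrates known actives in its top fraction relative to random ordering. A small sketch with a fabricated, perfectly ranked screen:

```python
def enrichment_factor(scores, labels, fraction=0.01):
    """EF at a cutoff: active rate in the top fraction divided by the
    active rate in the whole library (1.0 = no better than random)."""
    ranked = sorted(zip(scores, labels), key=lambda t: t[0], reverse=True)
    n_top = max(1, int(len(ranked) * fraction))
    hits_top = sum(lab for _, lab in ranked[:n_top])
    total_hits = sum(lab for _, lab in ranked)
    return (hits_top / n_top) / (total_hits / len(ranked))

# Fabricated screen: 1000 compounds, 10 actives, actives given the top scores
scores = list(range(1000, 0, -1))
labels = [1] * 10 + [0] * 990
ef1 = enrichment_factor(scores, labels, 0.01)   # perfect ranking -> EF = 100
```

With 1% of the library containing all 10 actives, EF tops out at 100 here; LS-align's reported EF of 22.0 on DUD-E sits well above the random baseline of 1.0.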

  14. An Integrated In Silico Method to Discover Novel Rock1 Inhibitors: Multi- Complex-Based Pharmacophore, Molecular Dynamics Simulation and Hybrid Protocol Virtual Screening.

    PubMed

    Chen, Haining; Li, Sijia; Hu, Yajiao; Chen, Guo; Jiang, Qinglin; Tong, Rongsheng; Zang, Zhihe; Cai, Lulu

    2016-01-01

    Rho-associated, coiled-coil containing protein kinase 1 (ROCK1) is an important regulator of focal adhesion, actomyosin contraction and cell motility. In this manuscript, a combination of multi-complex-based pharmacophore (MCBP) modeling, molecular dynamics simulation, and a hybrid virtual screening protocol, comprising multi-pharmacophore-based virtual screening (PBVS) and ensemble docking-based virtual screening (DBVS), was used for retrieving novel ROCK1 inhibitors from the natural products database embedded in the ZINC database. Ten hit compounds were selected, and five of them were tested experimentally. These results may provide valuable information for the further discovery of novel ROCK1 inhibitors.

  15. The use of high-throughput screening techniques to evaluate mitochondrial toxicity.

    PubMed

    Wills, Lauren P

    2017-11-01

    Toxicologists and chemical regulators depend on accurate and effective methods to evaluate and predict the toxicity of thousands of current and future compounds. Robust high-throughput screening (HTS) experiments have the potential to efficiently test large numbers of chemical compounds for effects on biological pathways. HTS assays can be utilized to examine chemical toxicity across multiple mechanisms of action, experimental models, concentrations, and lengths of exposure. Many agricultural, industrial, and pharmaceutical chemicals classified as harmful to human and environmental health exert their effects through the mechanism of mitochondrial toxicity. Mitochondrial toxicants are compounds that cause a decrease in the number of mitochondria within a cell, and/or decrease the ability of mitochondria to perform normal functions including producing adenosine triphosphate (ATP) and maintaining cellular homeostasis. Mitochondrial dysfunction can lead to apoptosis, necrosis, altered metabolism, muscle weakness, neurodegeneration, decreased organ function, and eventually disease or death of the whole organism. The development of HTS techniques to identify mitochondrial toxicants will provide extensive databases with essential connections between mechanistic mitochondrial toxicity and chemical structure. Computational and bioinformatics approaches can be used to evaluate compound databases for specific chemical structures associated with toxicity, with the goal of developing quantitative structure-activity relationship (QSAR) models and mitochondrial toxicophores. Ultimately these predictive models will facilitate the identification of mitochondrial liabilities in consumer products, industrial compounds, pharmaceuticals and environmental hazards. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. A unified approach to the design of clinical reporting systems.

    PubMed

    Gouveia-Oliveira, A; Salgado, N C; Azevedo, A P; Lopes, L; Raposo, V D; Almeida, I; de Melo, F G

    1994-12-01

    Computer-based Clinical Reporting Systems (CRS) for diagnostic departments that use structured data entry have a number of functional and structural affinities suggesting that a common software architecture for CRS may be defined. Such an architecture should allow easy expandability and reusability of a CRS. We report the development methodology and the architecture of SISCOPE, a CRS originally designed for gastrointestinal endoscopy that is expandable and reusable. Its main components are a patient database, a knowledge base, a reports base, and screen and reporting engines. The knowledge base contains the description of the controlled vocabulary and all the information necessary to control the menu system, and is easily accessed and modified with a conventional text editor. The structure of the controlled vocabulary is formally presented as an entity-relationship diagram. The screen engine drives a dynamic user interface and the reporting engine automatically creates a medical report; both engines operate by following a set of rules and the information contained in the knowledge base. Clinical experience has shown this architecture to be highly flexible and to allow frequent modifications of both the vocabulary and the menu system. This structure provided increased collaboration among development teams, insulating the domain expert from the details of the database, and enabling him to modify the system as necessary and to test the changes immediately. The system has also been reused in several different domains.
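    The architecture above (knowledge base driving both a menu system and a reporting engine) can be illustrated with a minimal sketch. The field names, vocabulary, and phrase templates below are invented for illustration; SISCOPE's actual knowledge-base format, which lived in an editable text file, is not reproduced.

```python
# Sketch of a knowledge-base-driven reporting engine: the knowledge
# base describes the controlled vocabulary and a report phrase per
# field; the engine validates selections and assembles the report.
KNOWLEDGE_BASE = {
    "organ":   {"options": ["esophagus", "stomach"],
                "phrase": "Examination of the {value}"},
    "finding": {"options": ["normal mucosa", "ulcer"],
                "phrase": "revealed {value}."},
}

def report(selections):
    """Reporting engine: turn menu selections into report text."""
    parts = []
    for field, spec in KNOWLEDGE_BASE.items():
        value = selections[field]
        if value not in spec["options"]:
            raise ValueError(f"{value!r} not in controlled vocabulary for {field}")
        parts.append(spec["phrase"].format(value=value))
    return " ".join(parts)

print(report({"organ": "stomach", "finding": "ulcer"}))
```

    The point of the design is visible even at this scale: adding a field or a vocabulary term means editing the knowledge base, not the engine code.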

  17. First-principles data-driven discovery of transition metal oxides for artificial photosynthesis

    NASA Astrophysics Data System (ADS)

    Yan, Qimin

    We develop a first-principles data-driven approach for rapid identification of transition metal oxide (TMO) light absorbers and photocatalysts for artificial photosynthesis using the Materials Project. Initially focusing on Cr, V, and Mn-based ternary TMOs in the database, we design a broadly-applicable multiple-layer screening workflow automating density functional theory (DFT) and hybrid functional calculations of bulk and surface electronic and magnetic structures. We further assess the electrochemical stability of TMOs in aqueous environments from computed Pourbaix diagrams. Several promising earth-abundant low band-gap TMO compounds with desirable band edge energies and electrochemical stability are identified by our computational efforts and then synergistically evaluated using high-throughput synthesis and photoelectrochemical screening techniques by our experimental collaborators at Caltech. Our joint theory-experiment effort has successfully identified new earth-abundant copper and manganese vanadate complex oxides that meet highly demanding requirements for photoanodes, substantially expanding the known space of such materials. By integrating theory and experiment, we validate our approach and develop important new insights into structure-property relationships for TMOs as oxygen evolution photocatalysts, paving the way for the use of first-principles data-driven techniques in future applications. This work is supported by the Materials Project Predictive Modeling Center and the Joint Center for Artificial Photosynthesis through the U.S. Department of Energy, Office of Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract No. DE-AC02-05CH11231. Computational resources were also provided by the Department of Energy through the National Energy Research Scientific Computing Center (NERSC).
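    A multiple-layer screening workflow of this kind amounts to successive filters over candidate materials. The sketch below uses toy property values (not Materials Project data) and commonly cited photoanode criteria (visible-light band gap, band edges straddling the water redox levels, aqueous stability) purely as an illustration of the filtering structure.

```python
# Sketch of a multiple-layer screening workflow over candidate oxides.
# All property values below are invented for illustration.
candidates = [
    {"formula": "CuV2O6", "gap_eV": 1.9, "cbm_eV": -0.2, "vbm_eV": 1.7, "stable_pH7": True},
    {"formula": "MnO2",   "gap_eV": 0.3, "cbm_eV":  0.1, "vbm_eV": 0.4, "stable_pH7": True},
    {"formula": "CrVO4",  "gap_eV": 2.1, "cbm_eV":  0.3, "vbm_eV": 2.4, "stable_pH7": False},
]

def passes(c):
    # Layer 1: visible-light band gap window.
    # Layer 2: band edges straddle the water redox levels
    #          (0 V and 1.23 V vs. RHE).
    # Layer 3: electrochemical (Pourbaix) stability at operating pH.
    return (1.5 <= c["gap_eV"] <= 2.8
            and c["cbm_eV"] < 0.0 and c["vbm_eV"] > 1.23
            and c["stable_pH7"])

print([c["formula"] for c in candidates if passes(c)])
```

    In the real workflow each "layer" is a DFT or hybrid-functional calculation rather than a stored number, so cheaper filters are applied first to keep the expensive calculations for the survivors.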

  18. GENPLOT: A formula-based Pascal program for data manipulation and plotting

    NASA Astrophysics Data System (ADS)

    Kramer, Matthew J.

    Geochemical processes involving alteration, differentiation, fractionation, or migration of elements may be elucidated by a number of discrimination or variation diagrams (e.g., AFM, Harker, Pearce, and many others). The construction of these diagrams involves arithmetic combination of selected elements (major, minor, or trace). GENPLOT utilizes a formula-based algorithm (an expression parser) that enables the program to manipulate multiparameter databases and plot XY, ternary, tetrahedron, and REE type plots without changing the source code or rearranging databases. Formulae may be any quadratic expression whose variables are the column headings of the data matrix. A full-screen editor with limited equation and arithmetic functions (spreadsheet) has been incorporated into the program to aid data entry and editing. Data are stored as ASCII files to facilitate interchange of data between other programs and computers. GENPLOT was developed in Turbo Pascal for IBM and compatible computers but is also available in Apple Pascal for the Apple IIe and III. Because the source code is too extensive to list here (about 5200 lines of Pascal code), the expression-parsing routine, which is central to GENPLOT's flexibility, is incorporated into a smaller demonstration program named SOLVE. The following paper includes a discussion of how the expression parser works and a detailed description of GENPLOT's capabilities.
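    The central idea, formulas whose variables are the column headings of the data matrix, can be sketched briefly in Python. GENPLOT's Pascal parser is not reproduced; Python's own expression evaluation (restricted to the row's columns, with builtins disabled) stands in for it, and the element data are toy numbers.

```python
# Sketch: evaluate a user formula row by row, with column headings
# of the data matrix serving as the formula's variables.
rows = [
    {"SiO2": 60.0, "Na2O": 3.0, "K2O": 2.0},   # toy major-element data (wt%)
    {"SiO2": 48.0, "Na2O": 2.0, "K2O": 1.0},
]

def evaluate(formula, row):
    """Evaluate a formula whose variables are column headings."""
    return eval(formula, {"__builtins__": {}}, dict(row))

# e.g. a total-alkali sum as used in TAS-style variation diagrams
alkali = [evaluate("Na2O + K2O", r) for r in rows]
print(alkali)
```

    A production parser would validate the expression rather than call `eval`, but the data-driven flexibility is the same: new plots need a new formula string, not new code.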

  19. TOXCAST, A TOOL FOR CATEGORIZATION AND ...

    EPA Pesticide Factsheets

    Across several EPA Program Offices (e.g., OPPTS, OW, OAR), there is a clear need to develop strategies and methods to screen large numbers of chemicals for potential toxicity, and to use the resulting information to prioritize the use of testing resources towards those entities and endpoints that present the greatest likelihood of risk to human health and the environment. This need could be addressed using the experience of the pharmaceutical industry in the use of advanced modern molecular biology and computational chemistry tools for the development of new drugs, with appropriate adjustment to the needs and desires of environmental toxicology. A conceptual approach named ToxCast has been developed to address the needs of EPA Program Offices in the area of prioritization and screening. Modern computational chemistry and molecular biology tools bring enabling technologies forward that can provide information about the physical and biological properties of large numbers of chemicals. The essence of the proposal is to conduct a demonstration project based upon a rich toxicological database (e.g., registered pesticides, or the chemicals tested in the NTP bioassay program), select a fairly large number (50-100 or more chemicals) representative of a number of differing structural classes and phenotypic outcomes (e.g., carcinogens, reproductive toxicants, neurotoxicants), and evaluate them across a broad spectrum of information domains that modern technology has pro

  20. EPA Project Updates: DSSTox and ToxCast Generating New ...

    EPA Pesticide Factsheets

    EPA's National Center for Computational Toxicology is building capabilities to support a new paradigm for toxicity screening and prediction. The DSSTox project is improving public access to quality structure-annotated chemical toxicity information in less summarized forms than traditionally employed in SAR modeling, and in ways that facilitate data-mining and data read-across. The DSSTox Structure-Browser, launched in September 2007, provides structure searchability across the published DSSTox toxicity-related inventory, and is enabling linkages between previously isolated toxicity data resources. As of early March 2008, the public DSSTox inventory has been integrated into PubChem, allowing a user to take full advantage of PubChem structure-activity and bioassay clustering features. The most recent DSSTox version of the Carcinogenic Potency Database file (CPDBAS) illustrates ways in which various summary definitions of carcinogenic activity can be employed in modeling and data mining. Phase I of the ToxCast project is generating high-throughput screening data from several hundred biochemical and cell-based assays for a set of 320 chemicals, mostly pesticide actives, with rich toxicology profiles. Incorporating and expanding traditional SAR concepts into this new high-throughput, data-rich world poses conceptual and practical challenges, but also holds great promise for improving predictive capabilities.

  1. Correlates of mobile screen media use among children aged 0–8: a systematic review

    PubMed Central

    Paudel, Susan; Jancey, Jonine; Subedi, Narayan; Leavy, Justine

    2017-01-01

    Objective This study is a systematic review of the peer-reviewed literature to identify the correlates of mobile screen media use among children aged 8 years and less. Setting Home or community-based studies were included in this review while child care or school-based studies were excluded. Participants Children aged 8 years or less were the study population. Studies that included larger age groups without subgroup analysis specific to the 0–8 years category were excluded. Eight electronic databases were searched for peer-reviewed English language primary research articles published or in press between January 2009 and March 2017 that have studied correlates of mobile screen media use in this age group. Outcome measure Mobile screen media use was the primary outcome measure. Mobile screen media use refers to children’s use of mobile screens, such as mobile phones, electronic tablets, handheld computers or personal digital assistants. Results Thirteen studies meeting the inclusion criteria were identified of which a total of 36 correlates were examined. Older children, children better skilled in using mobile screen media devices, those having greater access to such devices at home and whose parents had high mobile screen media use were more likely to have higher use of mobile screen media devices. No association existed with parent’s age, sex and education. Conclusion Limited research has been undertaken into young children’s mobile screen media use and most of the variables have been studied too infrequently for robust conclusions to be reached. Future studies with objective assessment of mobile screen media use and frequent examination of the potential correlates across multiple studies and settings are recommended. Trial registration number This review is registered with PROSPERO International Prospective Register of Ongoing Systematic Reviews (registration number: CRD42015028028). PMID:29070636

  2. Correlates of mobile screen media use among children aged 0-8: a systematic review.

    PubMed

    Paudel, Susan; Jancey, Jonine; Subedi, Narayan; Leavy, Justine

    2017-10-24

    This study is a systematic review of the peer-reviewed literature to identify the correlates of mobile screen media use among children aged 8 years and less. Home or community-based studies were included in this review while child care or school-based studies were excluded. Children aged 8 years or less were the study population. Studies that included larger age groups without subgroup analysis specific to the 0-8 years category were excluded. Eight electronic databases were searched for peer-reviewed English language primary research articles published or in press between January 2009 and March 2017 that have studied correlates of mobile screen media use in this age group. Mobile screen media use was the primary outcome measure. Mobile screen media use refers to children's use of mobile screens, such as mobile phones, electronic tablets, handheld computers or personal digital assistants. Thirteen studies meeting the inclusion criteria were identified of which a total of 36 correlates were examined. Older children, children better skilled in using mobile screen media devices, those having greater access to such devices at home and whose parents had high mobile screen media use were more likely to have higher use of mobile screen media devices. No association existed with parent's age, sex and education. Limited research has been undertaken into young children's mobile screen media use and most of the variables have been studied too infrequently for robust conclusions to be reached. Future studies with objective assessment of mobile screen media use and frequent examination of the potential correlates across multiple studies and settings are recommended. This review is registered with PROSPERO International Prospective Register of Ongoing Systematic Reviews (registration number: CRD42015028028). © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  3. Construction of a robust, large-scale, collaborative database for raw data in computational chemistry: the Collaborative Chemistry Database Tool (CCDBT).

    PubMed

    Chen, Mingyang; Stott, Amanda C; Li, Shenggang; Dixon, David A

    2012-04-01

    A robust metadata database called the Collaborative Chemistry Database Tool (CCDBT) for massive amounts of computational chemistry raw data has been designed and implemented. It performs data synchronization and simultaneously extracts the metadata. Computational chemistry data in various formats from different computing sources, software packages, and users can be parsed into uniform metadata for storage in a MySQL database. Parsing is performed by a parsing pyramid, including parsers written for different levels of data types and sets created by the parser loader after loading parser engines and configurations. Copyright © 2011 Elsevier Inc. All rights reserved.
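    The "parsing pyramid" idea, a loader that registers format-specific parser engines which all emit uniform metadata, can be sketched as a small dispatch table. The format name, the log line being parsed, and the metadata fields below are illustrative and do not reflect CCDBT's actual schema.

```python
# Sketch of a parser registry: each parser engine handles one raw
# data format and returns a uniform metadata record for storage.
PARSERS = {}

def parser(fmt):
    """Parser loader: register a parser engine for a format name."""
    def register(fn):
        PARSERS[fmt] = fn
        return fn
    return register

@parser("gaussian-log")
def parse_gaussian(text):
    # Toy extraction: take the last "SCF Done: E = <value>" line.
    energy = [float(line.split("=")[1].split()[0])
              for line in text.splitlines() if "SCF Done" in line][-1]
    return {"package": "Gaussian", "energy_au": energy}

def to_metadata(fmt, raw):
    """Parse raw output into a uniform metadata record."""
    return PARSERS[fmt](raw)

print(to_metadata("gaussian-log", "SCF Done: E = -76.4089 a.u."))
```

    In the full system these uniform records would then be written to the MySQL store, while the raw files are synchronized separately.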

  4. Implementation of a computer database testing and analysis program.

    PubMed

    Rouse, Deborah P

    2007-01-01

    The author is the coordinator of a computer software database testing and analysis program implemented in an associate degree nursing program. Computer software database programs help support the testing development and analysis process. Critical thinking is measurable and promoted with their use. The reader of this article will learn what is involved in procuring and implementing a computer database testing and analysis program in an academic nursing program. The use of the computerized database for testing and analysis will be approached as a method to promote and evaluate the nursing student's critical thinking skills and to prepare the nursing student for the National Council Licensure Examination.

  5. A Web-based telemedicine system for diabetic retinopathy screening using digital fundus photography.

    PubMed

    Wei, Jack C; Valentino, Daniel J; Bell, Douglas S; Baker, Richard S

    2006-02-01

    The purpose was to design and implement a Web-based telemedicine system for diabetic retinopathy screening using digital fundus cameras and to make the software publicly available through Open Source release. The process of retinal imaging and case reviewing was modeled to optimize workflow and the use of the computer system. The Web-based system was built on Java Servlet and Java Server Pages (JSP) technologies. Apache Tomcat was chosen as the JSP engine, while MySQL was used as the main database and the Laboratory of Neuro Imaging (LONI) Image Storage Architecture, from LONI at UCLA, as the platform for image storage. For security, all data transmissions were carried over encrypted Internet connections such as Secure Socket Layer (SSL) and HyperText Transfer Protocol over SSL (HTTPS). User logins were required and access to patient data was logged for auditing. The system was deployed at Hubert H. Humphrey Comprehensive Health Center and Martin Luther King/Drew Medical Center of Los Angeles County Department of Health Services. Within 4 months, 1500 images of more than 650 patients were taken at Humphrey's Eye Clinic and successfully transferred to King/Drew's Department of Ophthalmology. This study demonstrates an effective architecture for remote diabetic retinopathy screening.

  6. Computer-aided identification, synthesis, and biological evaluation of novel inhibitors for botulinum neurotoxin serotype A

    DOE PAGES

    Teng, Y. G.; Berger, W. T.; Nesbitt, N. M.; ...

    2015-07-27

    Botulinum neurotoxins (BoNTs) are among the most potent biological toxins known to humans, and are classified as Category A bioterrorism agents by the Centers for Disease Control and Prevention (CDC). There are seven known BoNT serotypes (A-G) identified thus far in the literature. BoNTs have been shown to block neurotransmitter release by cleaving proteins of the soluble NSF attachment protein receptor (SNARE) complex. Disruption of the SNARE complex produces motor neuron failure, which ultimately results in flaccid paralysis in humans and animals. Currently, there are no effective therapeutic treatments against the neurotoxin light chain (LC) after translocation into the cytosols of motor neurons. In this work, high-throughput virtual screening was employed to screen a library of commercially available compounds from the ZINC database against BoNT/A-LC. Among the hit compounds from the in-silico screening, two lead compounds were identified and found to have potent inhibitory activity against BoNT/A-LC in vitro, as well as in Neuro-2a cells. A few analogues of the lead compounds were synthesized and their potency examined. One of these analogues showed enhanced activity relative to the lead compounds.

  7. A Practical Standardized Composite Nutrition Score Based on Lean Tissue Index: Application in Nutrition Screening and Prediction of Outcome in Hemodialysis Population.

    PubMed

    Chen, Huan-Sheng; Cheng, Chun-Ting; Hou, Chun-Cheng; Liou, Hung-Hsiang; Chang, Cheng-Tsung; Lin, Chun-Ju; Wu, Tsai-Kun; Chen, Chang-Hsu; Lim, Paik-Seong

    2017-07-01

    Rapid screening and monitoring of nutritional status is mandatory in the hemodialysis population because of increasingly encountered nutritional problems. Considering the limitations of previous composite nutrition scores applied in this population, we tried to develop a standardized composite nutrition score (SCNS) using low lean tissue index as a marker of protein wasting to facilitate clinical screening and monitoring and to predict outcome. This retrospective cohort study used 2 databases of dialysis populations from Taiwan between 2011 and 2014. The first database, consisting of data from 629 maintenance hemodialysis patients, was used to develop the SCNS, and the second database, containing data from 297 maintenance hemodialysis patients, was used to validate the developed score. The SCNS, containing albumin, creatinine, potassium, and body mass index, was developed from the first database using low lean tissue index as a marker of protein wasting. When applying this score in the original database, significantly higher risk of developing protein wasting was found for patients with lower SCNS (odds ratio 1.38 [middle tertile vs highest tertile, P < .0001] and 2.40 [lowest tertile vs middle tertile, P < .0001]). The risk of death was also shown to be higher for patients with lower SCNS (hazard ratio 4.45 [below median level vs above median level, P < .0001]). These results were validated in the second database. We developed an SCNS consisting of 4 easily available biochemical parameters. This kind of scoring system can be easily applied in different dialysis facilities for screening and monitoring of protein wasting. The wide application of the body composition monitor in the dialysis population will also facilitate the development of specific nutrition scoring models for individual facilities. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
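    One generic way to build a composite score from heterogeneous parameters is to z-standardize each against the cohort and sum the z-scores. This sketch only illustrates that "composite of 4 parameters" idea with made-up patient values; the published SCNS derivation and weighting are not reproduced here.

```python
# Sketch: a standardized composite score over albumin, creatinine,
# potassium, and BMI. Each parameter is z-scored against the cohort
# and the z-scores are summed. All values are invented.
from statistics import mean, pstdev

def composite_scores(cohort, fields=("albumin", "creatinine", "potassium", "bmi")):
    stats = {f: (mean(p[f] for p in cohort), pstdev(p[f] for p in cohort))
             for f in fields}
    return [sum((p[f] - stats[f][0]) / stats[f][1] for f in fields)
            for p in cohort]

cohort = [
    {"albumin": 4.0, "creatinine": 10.0, "potassium": 4.5, "bmi": 23.0},
    {"albumin": 3.2, "creatinine": 6.0,  "potassium": 3.8, "bmi": 19.0},
]
print(composite_scores(cohort))  # lower score suggests higher wasting risk
```

    In practice the reference means and standard deviations would come from the development database, not recomputed per cohort, so that scores are comparable across facilities.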

  8. Computational Modeling of Mixed Solids for CO2 Capture Sorbents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Yuhua

    2015-01-01

    Since current technologies for capturing CO2 to fight global climate change are still too energy intensive, there is a critical need for development of new materials that can capture CO2 reversibly with acceptable energy costs. Accordingly, solid sorbents have been proposed for CO2 capture applications through a reversible chemical transformation. By combining thermodynamic database mining with first-principles density functional theory and phonon lattice dynamics calculations, a theoretical screening methodology to identify the most promising CO2 sorbent candidates from the vast array of possible solid materials has been proposed and validated. The calculated thermodynamic properties of different classes of solid materials versus temperature and pressure changes were further used to evaluate the equilibrium properties for the CO2 adsorption/desorption cycles. According to the requirements imposed by the pre- and post-combustion technologies, and based on our calculated thermodynamic properties for the CO2 capture reactions by the solids of interest, we were able to screen only those solid materials for which lower capture energy costs are expected at the desired pressure and temperature conditions. Only those selected CO2 sorbent candidates were further considered for experimental validation. The ab initio thermodynamic technique has the advantage of identifying thermodynamic properties of CO2 capture reactions without any experimental input beyond crystallographic structural information of the solid phases involved. Such a methodology can not only be used to search for good candidates in existing databases of solid materials, but can also provide guidelines for synthesizing new materials. In this presentation, we apply our screening methodology to mixed solid systems to adjust the turnover temperature, helping to develop CO2 capture technologies.
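    The screening criterion can be made concrete with a simple approximation: for a capture reaction Solid + CO2 -> Product, dG(T) ~ dH - T*dS, and the turnover temperature is where dG crosses zero. The sorbent names, dH/dS values, and the operating window below are toy numbers for illustration, not results from this work.

```python
# Sketch: screen sorbents by turnover temperature, using dH and dS
# as they would come from DFT plus phonon calculations. All numbers
# below are invented.
candidates = {
    # name: (dH in kJ per mol CO2, dS in J/(mol K))
    "sorbent-A": (-178.0, -160.0),
    "sorbent-B": (-90.0, -160.0),
}

def turnover_temperature(dH_kJ, dS_J):
    """Temperature (K) where dG = dH - T*dS crosses zero."""
    return dH_kJ * 1000.0 / dS_J

# Keep only sorbents whose turnover temperature falls in a desired
# operating window (here 400-700 K, an arbitrary example range).
selected = [name for name, (dH, dS) in candidates.items()
            if 400.0 <= turnover_temperature(dH, dS) <= 700.0]
print(selected)
```

    Mixing solid phases shifts the effective dH and dS of the capture reaction, which is how the turnover temperature can be tuned into the window required by a given pre- or post-combustion process.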

  9. Large-scale annotation of small-molecule libraries using public databases.

    PubMed

    Zhou, Yingyao; Zhou, Bin; Chen, Kaisheng; Yan, S Frank; King, Frederick J; Jiang, Shumei; Winzeler, Elizabeth A

    2007-01-01

    While many large publicly accessible databases provide excellent annotation for biological macromolecules, the same is not true for small chemical compounds. Commercial data sources also fail to provide an annotation interface for large numbers of compounds and tend to be too cost-prohibitive to be widely available to biomedical researchers. Therefore, using annotation information for the selection of lead compounds from a modern-day high-throughput screening (HTS) campaign presently occurs only on a very limited scale. The recent rapid expansion of the NIH PubChem database provides an opportunity to link existing biological databases with compound catalogs and provides relevant information that could potentially improve the information garnered from large-scale screening efforts. Using the 2.5 million compound collection at the Genomics Institute of the Novartis Research Foundation (GNF) as a model, we determined that approximately 4% of the library contained compounds with potential annotation in databases such as PubChem and the World Drug Index (WDI) as well as related databases such as the Kyoto Encyclopedia of Genes and Genomes (KEGG) and ChemIDplus. Furthermore, exact structure match analysis showed that 32% of GNF compounds can be linked to third-party databases via PubChem. We also showed that annotations such as MeSH (medical subject headings) terms can be applied to in-house HTS databases to identify signature biological inhibition profiles of interest as well as to expedite the assay validation process. The automated annotation of thousands of screening hits in batch is becoming feasible and has the potential to play an essential role in the hit-to-lead decision-making process.
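    At its core, this kind of batch annotation is a join between an in-house library and public catalogs on an exact-structure key. The sketch below uses an InChIKey-like string as that key and entirely made-up records; the real pipeline linked compounds through PubChem identifiers.

```python
# Sketch: annotate an in-house library against a public catalog by
# exact-structure match. All identifiers and annotations are toy data.
library = {
    "LIB-0001": "RYYVLZVUVIJVGH-UHFFFAOYSA-N",   # hypothetical structure key
    "LIB-0002": "AAAAAAAAAAAAAA-UHFFFAOYSA-N",   # no public match
}
public_annotations = {
    "RYYVLZVUVIJVGH-UHFFFAOYSA-N": {"mesh": ["Central Nervous System Stimulants"]},
}

annotated = {cid: public_annotations[key]
             for cid, key in library.items() if key in public_annotations}
print(annotated)                       # 1 of 2 compounds gains annotation
print(len(annotated) / len(library))   # annotation coverage of the library
```

    The coverage figure is the analogue of the "approximately 4%" and "32%" linkage rates reported for the GNF collection.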

  10. VecScreen_plus_taxonomy: imposing a tax(onomy) increase on vector contamination screening.

    PubMed

    Schäffer, Alejandro A; Nawrocki, Eric P; Choi, Yoon; Kitts, Paul A; Karsch-Mizrachi, Ilene; McVeigh, Richard

    2018-03-01

    Nucleic acid sequences in public databases should not contain vector contamination, but many sequences in GenBank do (or did) contain vectors. The National Center for Biotechnology Information uses the program VecScreen to screen submitted sequences for contamination. Additional tools are needed to distinguish true-positive (contamination) from false-positive (not contamination) VecScreen matches. A principal reason for false-positive VecScreen matches is that the sequence and the matching vector subsequence originate from closely related or identical organisms (for example, both originate in Escherichia coli). We collected information on the taxonomy of sources of vector segments in the UniVec database used by VecScreen. We used that information in two overlapping software pipelines for retrospective analysis of contamination in GenBank and for prospective analysis of contamination in new sequence submissions. Using the retrospective pipeline, we identified and corrected over 8000 contaminated sequences in the nonredundant nucleotide database. The prospective analysis pipeline has been in production use since April 2017 to evaluate some new GenBank submissions. Data on the sources of UniVec entries were included in release 10.0 (ftp://ftp.ncbi.nih.gov/pub/UniVec/). The main software is freely available at https://github.com/aaschaffer/vecscreen_plus_taxonomy. aschaffe@helix.nih.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work was written by US Government employees and is in the public domain in the US.
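    The taxonomic reasoning behind the tool can be sketched as a lineage-overlap test: a vector match is more plausibly a false positive when the screened sequence and the vector segment's biological source share a taxon. The lineages below are abbreviated toy data, and this is only an illustration of the principle, not the pipeline's actual logic.

```python
# Sketch: call a VecScreen-style match a likely false positive when
# the query organism and the vector segment's source organism share
# lineage. Lineages here are short toy lists, not full NCBI taxonomy.
LINEAGE = {
    "Escherichia coli": ["Bacteria", "Proteobacteria", "Escherichia coli"],
    "Homo sapiens": ["Eukaryota", "Chordata", "Homo sapiens"],
}

def likely_false_positive(query_taxon, vector_segment_source):
    """True when query and vector-segment source lineages overlap."""
    shared = set(LINEAGE[query_taxon]) & set(LINEAGE[vector_segment_source])
    return bool(shared)

# E. coli submission matching an E. coli-derived vector backbone:
print(likely_false_positive("Escherichia coli", "Escherichia coli"))
# Human sequence matching the same segment: treat as real contamination.
print(likely_false_positive("Homo sapiens", "Escherichia coli"))
```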

  11. Incidental renal tumours on low-dose CT lung cancer screening exams.

    PubMed

    Pinsky, Paul F; Dunn, Barbara; Gierada, David; Nath, P Hrudaya; Munden, Reginald; Berland, Lincoln; Kramer, Barnett S

    2017-06-01

    Introduction Renal cancer incidence has increased markedly in the United States in recent decades, largely due to incidentally detected tumours from computed tomography imaging. Here, we analyze the potential for low-dose computed tomography lung cancer screening to detect renal cancer. Methods The National Lung Screening Trial randomized subjects to three annual screens with either low-dose computed tomography or chest X-ray. Eligibility criteria included 30+ pack-years, current smoking or quit within 15 years, and age 55-74. Subjects were followed for seven years. Low-dose computed tomography screening forms collected information on lung cancer and non-lung cancer abnormalities, including abnormalities below the diaphragm. A reader study was performed on a sample of National Lung Screening Trial low-dose computed tomography images assessing the presence of abnormalities below the diaphragm and abnormalities suspicious for renal cancer. Results There were 26,722 and 26,732 subjects enrolled in the low-dose computed tomography and chest X-ray arms, respectively, and there were 104 and 85 renal cancer cases diagnosed, respectively (relative risk = 1.22, 95% CI: 0.9-1.5). From 75,126 low-dose computed tomography screens, there were 46 renal cancer diagnoses within one year. Rates of abnormalities below the diaphragm were 39.1% in screens with renal cancer versus 4.1% in screens without (P < 0.001). Cases with abnormalities below the diaphragm had a shorter median time to diagnosis than those without (71 vs. 160 days, P = 0.004). In the reader study, 64% of renal cancer cases versus 13% of non-cases had abnormalities below the diaphragm; 55% of cases and 0.8% of non-cases had a finding suspicious for renal cancer (P < 0.001). Conclusion Low-dose computed tomography screens can potentially detect renal cancers. The benefit-to-harm tradeoff of incidental detection of renal tumours on low-dose computed tomography is unknown.
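    The reported relative risk can be checked directly from the counts given above (104 cases among 26,722 LDCT subjects versus 85 among 26,732 chest X-ray subjects):

```python
# Worked check of the reported relative risk using the abstract's
# case counts and arm sizes.
ldct_cases, ldct_n = 104, 26722
cxr_cases, cxr_n = 85, 26732
rr = (ldct_cases / ldct_n) / (cxr_cases / cxr_n)
print(round(rr, 2))  # -> 1.22, matching the reported value
```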

  12. Aerosol Robotic Network (AERONET) Version 3 Aerosol Optical Depth and Inversion Products

    NASA Astrophysics Data System (ADS)

    Giles, D. M.; Holben, B. N.; Eck, T. F.; Smirnov, A.; Sinyuk, A.; Schafer, J.; Sorokin, M. G.; Slutsker, I.

    2017-12-01

    The Aerosol Robotic Network (AERONET) surface-based aerosol optical depth (AOD) database has been a principal component of many Earth science remote sensing applications and modelling for more than two decades. During this time, the AERONET AOD database had utilized a semiautomatic quality assurance approach (Smirnov et al., 2000). Data quality automation developed for AERONET Version 3 (V3) was achieved by augmenting and improving upon the combination of Version 2 (V2) automatic and manual procedures to provide a more refined near real time (NRT) and historical worldwide database of AOD. The combined effect of these new changes provides a historical V3 AOD Level 2.0 data set comparable to V2 Level 2.0 AOD. The recently released V3 Level 2.0 AOD product uses Level 1.5 data with automated cloud screening and quality controls and applies pre-field and post-field calibrations and wavelength-dependent temperature characterizations. For V3, the AERONET aerosol retrieval code inverts AOD and almucantar sky radiances using a full vector radiative transfer code called Successive ORDers of scattering (SORD; Korkin et al., 2017). The full vector code allows for potentially improving the real part of the complex index of refraction and the sphericity parameter and computing the radiation field in the UV (e.g., 380 nm) and the degree of linear depolarization. Effective lidar ratio and depolarization ratio products are also available with the V3 inversion release. Inputs to the inversion code were updated to accommodate H2O, O3, and NO2 absorption to be consistent with the computation of V3 AOD. All of the inversion products are associated with estimated uncertainties that include the random error plus biases due to the uncertainty in measured AOD, absolute sky radiance calibration, and retrieved MODIS BRDF for snow-free and snow-covered surfaces. The V3 inversion products use the same data quality assurance criteria as V2 inversions (Holben et al. 2006).
The entire AERONET V3 almucantar inversion database was computed using the NASA High End Computing resources at NASA Ames Research Center and NASA Goddard Space Flight Center. In addition to a description of the data products, this presentation will provide a comparison of the V3 Level 2.0 and V2 Level 2.0 AOD and inversion climatologies for sites with varying aerosol types.

  13. Unified Database for Rejected Image Analysis Across Multiple Vendors in Radiography.

    PubMed

    Little, Kevin J; Reiser, Ingrid; Liu, Lili; Kinsey, Tiffany; Sánchez, Adrian A; Haas, Kateland; Mallory, Florence; Froman, Carmen; Lu, Zheng Feng

    2017-02-01

    Reject rate analysis has been part of radiography departments' quality control since the days of screen-film radiography. In the era of digital radiography, one might expect that reject rate analysis is easily facilitated because of readily available information produced by the modality during the examination procedure. Unfortunately, this is not always the case. The lack of an industry standard and the wide variety of system log entries and formats have made it difficult to implement a robust multivendor reject analysis program, and logs do not always include all relevant information. The increased use of digital detectors exacerbates this problem because of higher reject rates associated with digital radiography compared with computed radiography. In this article, the authors report on the development of a unified database for vendor-neutral reject analysis across multiple sites within an academic institution and share their experience from a team-based approach to reduce reject rates. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  14. Mean curvature and texture constrained composite weighted random walk algorithm for optic disc segmentation towards glaucoma screening.

    PubMed

    Panda, Rashmi; Puhan, N B; Panda, Ganapati

    2018-02-01

    Accurate optic disc (OD) segmentation is an important step in obtaining cup-to-disc ratio-based glaucoma screening using fundus imaging. It is a challenging task because of the subtle OD boundary, blood vessel occlusion and intensity inhomogeneity. In this Letter, the authors propose an improved version of the random walk algorithm for OD segmentation to tackle such challenges. The algorithm incorporates the mean curvature and Gabor texture energy features to define the new composite weight function to compute the edge weights. Unlike the deformable model-based OD segmentation techniques, the proposed algorithm remains unaffected by curve initialisation and local energy minima problem. The effectiveness of the proposed method is verified with DRIVE, DIARETDB1, DRISHTI-GS and MESSIDOR database images using the performance measures such as mean absolute distance, overlapping ratio, dice coefficient, sensitivity, specificity and precision. The obtained OD segmentation results and quantitative performance measures show robustness and superiority of the proposed algorithm in handling the complex challenges in OD segmentation.
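    The composite weight idea, random-walk edge weights computed from a feature difference combining mean curvature and Gabor texture energy, can be illustrated with a common exponential weighting. The weight function, the beta value, and the per-pixel feature values below are illustrative assumptions; the Letter's exact composite weight function is not reproduced.

```python
# Sketch: edge weights from a composite feature distance, so that
# pixels with similar curvature/texture are strongly connected and
# the walk resists crossing the optic disc boundary.
import math

def edge_weight(fi, fj, beta=10.0):
    """fi, fj: (mean_curvature, texture_energy) feature pairs."""
    diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(fi, fj)))
    return math.exp(-beta * diff)

inside = (0.10, 0.20)          # two pixels inside the optic disc (toy values)
inside2 = (0.11, 0.21)
boundary = (0.60, 0.90)        # pixel across the OD boundary (toy values)
print(edge_weight(inside, inside2) > edge_weight(inside, boundary))
```

    Combining two cues in one distance is what lets the weights stay high across low-contrast disc interiors while still dropping sharply at the true boundary.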

  15. Development of a screening tool for staging of diabetic retinopathy in fundus images

    NASA Astrophysics Data System (ADS)

    Dhara, Ashis Kumar; Mukhopadhyay, Sudipta; Bency, Mayur Joseph; Rangayyan, Rangaraj M.; Bansal, Reema; Gupta, Amod

    2015-03-01

    Diabetic retinopathy is a condition of the eye of diabetic patients in which the retina is damaged by long-term diabetes. The condition deteriorates towards irreversible blindness in extreme cases. Hence, early detection of diabetic retinopathy is important to prevent blindness, and regular screening of fundus images of diabetic patients could help achieve this. In this paper, we propose techniques for staging of diabetic retinopathy in fundus images using several shape and texture features computed from detected microaneurysms, exudates, and hemorrhages. The classification accuracy is reported in terms of the area (Az) under the receiver operating characteristic curve using 200 fundus images from the MESSIDOR database. The value of Az for classifying normal images versus mild, moderate, and severe nonproliferative diabetic retinopathy (NPDR) is 0.9106. The value of Az for classification of mild NPDR versus moderate and severe NPDR is 0.8372. The Az value for classification of moderate NPDR versus severe NPDR is 0.9750.
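    The reported Az values are areas under the ROC curve. This quantity can be computed directly from classifier scores via the Mann-Whitney interpretation (a generic sketch of the metric, not the authors' code):

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case scores higher
    than a randomly chosen negative case (ties count as 0.5)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

    Perfect separation gives 1.0, chance performance gives 0.5, matching the interpretation of the Az values quoted above.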

  16. Applying a new mammographic imaging marker to predict breast cancer risk

    NASA Astrophysics Data System (ADS)

    Aghaei, Faranak; Danala, Gopichandh; Hollingsworth, Alan B.; Stoug, Rebecca G.; Pearce, Melanie; Liu, Hong; Zheng, Bin

    2018-02-01

    Identifying and developing new mammographic imaging markers to assist in predicting breast cancer risk has recently attracted extensive research interest. Although mammographic density is considered an important breast cancer risk factor, its discriminatory power is lower for predicting short-term breast cancer risk, which is a prerequisite for establishing a more effective personalized breast cancer screening paradigm. In this study, we presented a new interactive computer-aided detection (CAD) scheme that generates a quantitative mammographic imaging marker based on bilateral mammographic tissue density asymmetry to predict the risk of cancer detection at the next sequential mammography screening. An image database involving 1,397 women was retrospectively assembled and tested. Each woman had two digital mammography screenings, namely a "current" and a "prior" screening, separated by a time interval of 365 to 600 days. All "prior" images were originally interpreted as negative. Based on the "current" screenings, the cases were divided into 3 groups of 402 positive, 643 negative, and 352 biopsy-proven benign cases, respectively. There was no significant difference in BI-RADS-based mammographic density ratings between the 3 case groups (p < 0.6). When applying the CAD-generated imaging marker and risk model to classify between the 402 positive and 643 negative cases using the "prior" negative mammograms, the area under the ROC curve was 0.70 ± 0.02, and the adjusted odds ratios showed an increasing trend from 1.0 to 8.13 for predicting the risk of cancer detection at the "current" screening. The study demonstrated that this new imaging marker has the potential to yield significantly higher discriminatory power for predicting short-term breast cancer risk.

  17. Ensemble pharmacophore meets ensemble docking: a novel screening strategy for the identification of RIPK1 inhibitors

    NASA Astrophysics Data System (ADS)

    Fayaz, S. M.; Rajanikant, G. K.

    2014-07-01

    Programmed cell death has been a fascinating area of research since it throws new challenges and questions in spite of the tremendous ongoing research in this field. Recently, necroptosis, a programmed form of necrotic cell death, has been implicated in many diseases including neurological disorders. Receptor interacting serine/threonine protein kinase 1 (RIPK1) is an important regulatory protein involved in the necroptosis and inhibition of this protein is essential to stop necroptotic process and eventually cell death. Current structure-based virtual screening methods involve a wide range of strategies and recently, considering the multiple protein structures for pharmacophore extraction has been emphasized as a way to improve the outcome. However, using the pharmacophoric information completely during docking is very important. Further, in such methods, using the appropriate protein structures for docking is desirable. If not, potential compound hits, obtained through pharmacophore-based screening, may not have correct ranks and scores after docking. Therefore, a comprehensive integration of different ensemble methods is essential, which may provide better virtual screening results. In this study, dual ensemble screening, a novel computational strategy was used to identify diverse and potent inhibitors against RIPK1. All the pharmacophore features present in the binding site were captured using both the apo and holo protein structures and an ensemble pharmacophore was built by combining these features. This ensemble pharmacophore was employed in pharmacophore-based screening of ZINC database. The compound hits, thus obtained, were subjected to ensemble docking. The leads acquired through docking were further validated through feature evaluation and molecular dynamics simulation.

  18. Predicting the performance of fingerprint similarity searching.

    PubMed

    Vogt, Martin; Bajorath, Jürgen

    2011-01-01

    Fingerprints are bit string representations of molecular structure that typically encode structural fragments, topological features, or pharmacophore patterns. Various fingerprint designs are utilized in virtual screening, and their search performance essentially depends on three parameters: the nature of the fingerprint, the active compounds serving as reference molecules, and the composition of the screening database. It is of considerable interest and practical relevance to predict the performance of fingerprint similarity searching. A quantitative assessment of the potential that a fingerprint search might successfully retrieve active compounds, if available in the screening database, would substantially help to select the type of fingerprint most suitable for a given search problem. The method presented herein utilizes concepts from information theory to relate the fingerprint feature distributions of reference compounds to those of screening libraries. If these feature distributions do not sufficiently differ, active database compounds that are similar to reference molecules cannot be retrieved because they disappear in the "background." By quantifying the difference in feature distributions using the Kullback-Leibler divergence and relating the divergence to compound recovery rates obtained for different benchmark classes, fingerprint search performance can be quantitatively predicted.
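    The core idea, comparing per-bit feature distributions with the Kullback-Leibler divergence, can be sketched in a few lines (a simplified illustration, not the authors' exact formulation; treating each fingerprint bit as an independent Bernoulli variable is an assumption of this sketch):

```python
import math

def bit_frequencies(fps):
    """Per-bit set frequencies over a collection of equal-length bit lists."""
    n = len(fps)
    return [sum(fp[i] for fp in fps) / n for i in range(len(fps[0]))]

def kl_divergence_bits(p_ref, q_db, eps=1e-6):
    """Sum over bits of the KL divergence between the Bernoulli distributions
    of each fingerprint feature (reference set vs. screening database).
    eps clamps frequencies away from 0 and 1 to avoid log(0)."""
    total = 0.0
    for p, q in zip(p_ref, q_db):
        p = min(max(p, eps), 1 - eps)
        q = min(max(q, eps), 1 - eps)
        total += p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))
    return total
```

    A divergence near zero means the reference compounds are indistinguishable from the database "background", predicting poor retrieval; larger divergences predict better recovery rates.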

  19. In silico identification of anthropogenic chemicals as ligands of zebrafish sex hormone binding globulin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thorsteinson, Nels; Ban, Fuqiang; Santos-Filho, Osvaldo

    2009-01-01

    Anthropogenic compounds with the capacity to interact with the steroid-binding site of sex hormone binding globulin (SHBG) pose health risks to humans and other vertebrates, including fish. Building on studies of human SHBG, we have applied in silico drug discovery methods to identify potential binders for SHBG in zebrafish (Danio rerio) as a model aquatic organism. Computational methods, including homology modeling, molecular dynamics simulations, virtual screening, and 3D QSAR analysis, successfully identified 6 non-steroidal substances from the ZINC chemical database that bind to zebrafish SHBG (zfSHBG) with low-micromolar to nanomolar affinities, as determined by a competitive ligand-binding assay. We also screened 80,000 commercial substances listed by the European Chemicals Bureau and Environment Canada, and 6 non-steroidal hits from this in silico screen were tested experimentally for zfSHBG binding. All 6 of these compounds displaced the [³H]5α-dihydrotestosterone used as the labeled ligand in the zfSHBG screening assay when tested at a 33 μM concentration, and 3 of them (hexestrol, 4-tert-octylcatechol, and dihydrobenzo(a)pyren-7(8H)-one) bind to zfSHBG in the micromolar range. The study demonstrates the feasibility of large-scale in silico screening of anthropogenic compounds that may disrupt or hijack functionally important protein:ligand interactions. Such studies could increase awareness of the hazards posed by existing commercial chemicals at relatively low cost.

  20. Training system for digital mammographic diagnoses of breast cancer

    NASA Astrophysics Data System (ADS)

    Thomaz, R. L.; Nirschl Crozara, M. G.; Patrocinio, A. C.

    2013-03-01

    As technology evolves, analog mammography systems are being replaced by digital systems. Digital systems display mammographic images on video monitors instead of the screen-film and negatoscope used for analog images. This change in how mammographic images are visualized may require a different approach to training health care professionals to diagnose breast cancer with digital mammography. Thus, this paper presents a computational approach for training health care professionals that provides a smooth transition between analog and digital technology, including training in the use of digital image processing tools for diagnosing breast cancer. The approach consists of software in which it is possible to open, process, and diagnose a full mammogram case from a database containing the digital images of each of the mammographic views. The software communicates with a gold-standard database of digital mammogram cases. This database contains the digital images in Tagged Image File Format (TIFF) and the respective diagnoses according to BI-RADS™; these files are read by the software and shown to the user as needed. Digital image processing tools are also provided for better visualization of each image. The software was built on a minimalist, user-friendly interface concept intended to ease the transition. It also has an interface through which the professional being trained inputs diagnoses and receives result feedback. The system has been completed but has not yet been applied to professional training.

  1. MOLA: a bootable, self-configuring system for virtual screening using AutoDock4/Vina on computer clusters.

    PubMed

    Abreu, Rui Mv; Froufe, Hugo Jc; Queiroz, Maria João Rp; Ferreira, Isabel Cfr

    2010-10-28

    Virtual screening of small molecules using molecular docking has become an important tool in drug discovery. However, large-scale virtual screening is time demanding and usually requires dedicated computer clusters. There are a number of software tools that perform virtual screening using AutoDock4, but they require access to dedicated Linux computer clusters, and no software is available for performing virtual screening with Vina on computer clusters. In this paper we present MOLA, an easy-to-use graphical user interface tool that automates parallel virtual screening using AutoDock4 and/or Vina on bootable, non-dedicated computer clusters. MOLA automates several tasks, including ligand preparation, distribution of parallel AutoDock4/Vina jobs, and result analysis. When the virtual screening project finishes, an OpenOffice spreadsheet file opens with the ligands ranked by binding energy and distance to the active site. All result files can automatically be recorded on a USB flash drive or on the hard-disk drive using VirtualBox. MOLA works inside a customized Live CD GNU/Linux operating system, developed by us, that bypasses the operating system installed on the computers used in the cluster. This operating system boots from a CD on the master node and then clusters other computers as slave nodes via ethernet connections. MOLA is an ideal virtual screening tool for non-experienced users with a limited number of multi-platform heterogeneous computers available and no access to dedicated Linux computer clusters. When a virtual screening project finishes, the computers can simply be restarted to their original operating system. The originality of MOLA lies in the fact that any platform-independent computer available can be added to the cluster, without ever using the computer's hard-disk drive and without interfering with the installed operating system. With a cluster of 10 processors, and a potential maximum speed-up of 10×, the parallel algorithm of MOLA achieved a speed-up of 8.64× using AutoDock4 and 8.60× using Vina.

  2. COMDECOM: predicting the lifetime of screening compounds in DMSO solution.

    PubMed

    Zitha-Bovens, Emrin; Maas, Peter; Wife, Dick; Tijhuis, Johan; Hu, Qian-Nan; Kleinöder, Thomas; Gasteiger, Johann

    2009-06-01

    The technological evolution of the 1990s in both combinatorial chemistry and high-throughput screening created the demand for rapid access to the compound deck to support the screening process. The common strategy within the pharmaceutical industry is to store the screening library in DMSO solution. Several studies have shown that a percentage of these compounds decompose in solution, varying from a few percent of the total to a substantial part of the library. In the COMDECOM (COMpound DECOMposition) project, the stability of screening compounds in DMSO solution is monitored in an accelerated thermal, hydrolytic, and oxidative decomposition program. A large database of stability data is being collected, and from this database a predictive model is being developed. The aim of the program is to build an algorithm that can flag compounds that are likely to decompose; this information is considered to be of utmost importance, e.g., in the compound acquisition process, when evaluating screening results of library compounds, and in determining optimal storage conditions.
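    As a rough illustration of the kind of decomposition kinetics such a program monitors, first-order decay gives the fraction of a compound remaining in solution after a storage period (the first-order assumption and any half-life value here are illustrative, not COMDECOM results):

```python
import math

def fraction_remaining(t_days, half_life_days):
    """Fraction of compound left after t_days in DMSO, assuming simple
    first-order decomposition with the given half-life."""
    k = math.log(2) / half_life_days   # first-order rate constant
    return math.exp(-k * t_days)
```

    A compound flagged with, say, an accelerated-stability half-life of 10 days would retain only half its original amount after 10 days, which is the kind of signal a predictive flagging model could be trained on.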

  3. A Novel Approach for Efficient Pharmacophore-based Virtual Screening: Method and Applications

    PubMed Central

    Dror, Oranit; Schneidman-Duhovny, Dina; Inbar, Yuval; Nussinov, Ruth; Wolfson, Haim J.

    2009-01-01

    Virtual screening is emerging as a productive and cost-effective technology in rational drug design for the identification of novel lead compounds. An important model for virtual screening is the pharmacophore: the spatial configuration of essential features that enable a ligand molecule to interact with a specific target receptor. In the absence of a known receptor structure, a pharmacophore can be identified from a set of ligands that have been observed to interact with the target receptor. Here, we present a novel computational method for pharmacophore detection and virtual screening. The pharmacophore detection module is able to: (i) align multiple flexible ligands in a deterministic manner without exhaustive enumeration of the conformational space, (ii) detect subsets of input ligands that may bind to different binding sites or have different binding modes, (iii) address cases where the input ligands have different affinities by defining weighted pharmacophores based on the number of ligands that share them, and (iv) automatically select the most appropriate pharmacophore candidates for virtual screening. The algorithm is highly efficient, allowing fast exploration of the chemical space through virtual screening of huge compound databases. The performance of PharmaGist was successfully evaluated on a commonly used dataset for the G-protein-coupled receptor alpha1A. Additionally, a large-scale evaluation using the DUD (directory of useful decoys) dataset was performed. DUD contains 2950 active ligands for 40 different receptors, with 36 decoy compounds for each active ligand. PharmaGist enrichment rates are comparable with other state-of-the-art tools for virtual screening. Availability: The software is available for download. A user-friendly web interface for pharmacophore detection is available at http://bioinfo3d.cs.tau.ac.il/PharmaGist. PMID:19803502

  4. Flexible ligand docking using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Oshiro, C. M.; Kuntz, I. D.; Dixon, J. Scott

    1995-04-01

    Two computational techniques have been developed to explore the orientational and conformational space of a flexible ligand within an enzyme. Both methods use the genetic algorithm (GA) to generate conformationally flexible ligands, in conjunction with algorithms from the DOCK suite of programs to characterize the receptor site. The methods are applied to three enzyme-ligand complexes: dihydrofolate reductase-methotrexate, thymidylate synthase-phenolphthalein, and HIV protease-thioketal haloperidol. Conformations and orientations close to the crystallographically determined structures are obtained, as well as alternative structures with low energy. The potential for the GA method to screen a database of compounds is also examined: a collection of ligands is evaluated simultaneously, rather than docking the ligands individually into the enzyme.
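    The GA machinery described above can be sketched minimally: truncation selection, one-point crossover and point mutation over a real-valued chromosome (e.g. encoded torsion angles and rigid-body placement), minimising a user-supplied scoring function. All parameter values are illustrative, and the actual DOCK-based energy evaluation is not reproduced; `score` here stands in for it:

```python
import random

def ga_optimize(score, n_genes, pop_size=30, generations=40,
                mut_rate=0.2, seed=1):
    """Minimal genetic algorithm over real-valued genes in [0, 1].
    `score` is the energy-like function to MINIMISE; n_genes >= 2."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score)                  # best (lowest energy) first
        elite = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_genes)  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mut_rate:      # point mutation
                child[rng.randrange(n_genes)] = rng.random()
            children.append(child)
        pop = elite + children               # elitism: best is never lost
    return min(pop, key=score)
```

    With elitism, the best individual found so far is never discarded, so the population's best score decreases monotonically across generations.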

  5. Intelligent system for topic survey in MEDLINE by keyword recommendation and learning text characteristics.

    PubMed

    Tanaka, M; Nakazono, S; Matsuno, H; Tsujimoto, H; Kitamura, Y; Miyano, S

    2000-01-01

    We have implemented a system for assisting experts in selecting MEDLINE records for database construction purposes. This system has two specific features: The first is a learning mechanism which extracts characteristics in the abstracts of MEDLINE records of interest as patterns. These patterns reflect selection decisions by experts and are used for screening the records. The second is a keyword recommendation system which assists and supplements experts' knowledge in unexpected cases. Combined with a conventional keyword-based information retrieval system, this system may provide an efficient and comfortable environment for MEDLINE record selection by experts. Some computational experiments are provided to prove that this idea is useful.

  6. The new Cloud Dynamics and Radiation Database algorithms for AMSR2 and GMI: exploitation of the GPM observational database for operational applications

    NASA Astrophysics Data System (ADS)

    Cinzia Marra, Anna; Casella, Daniele; Martins Costa do Amaral, Lia; Sanò, Paolo; Dietrich, Stefano; Panegrossi, Giulia

    2017-04-01

    Two new precipitation retrieval algorithms, for the Advanced Microwave Scanning Radiometer 2 (AMSR2) and for the GPM Microwave Imager (GMI), are presented. The algorithms are based on the Cloud Dynamics and Radiation Database (CDRD) Bayesian approach and represent an evolution of the previous version applied to Special Sensor Microwave Imager/Sounder (SSMIS) observations and used operationally within the EUMETSAT Satellite Application Facility on Support to Operational Hydrology and Water Management (H-SAF). The main innovation of these new products is the use of an entirely empirical extended database, derived from coincident radar and radiometer observations from the NASA/JAXA Global Precipitation Measurement Core Observatory (GPM-CO) (Dual-frequency Precipitation Radar (DPR) and GMI). The other new aspects are: 1) a new rain/no-rain screening approach; 2) the use of Empirical Orthogonal Functions (EOF) and Canonical Correlation Analysis (CCA) both in the screening approach and in the Bayesian algorithm; 3) the use of new meteorological and environmental ancillary variables to categorize the database and mitigate the problem of non-uniqueness of the retrieval solution; and 4) the development and implementation of specific modules to minimize computation time. The CDRD algorithms for AMSR2 and GMI are able to handle the extremely large observational database available from the GPM-CO and provide rainfall estimates with minimum latency, making them suitable for near-real-time hydrological and operational applications. For CDRD applied to AMSR2, a verification study has been carried out over Italy using ground-based radar data and over the MSG full-disk area using coincident GPM-CO/AMSR2 observations. Results show remarkable AMSR2 capabilities for rainfall rate (RR) retrieval over ocean (for RR > 0.25 mm/h) and good capabilities over vegetated land (for RR > 1 mm/h), while for coastal areas the results are less certain. Comparisons with NASA GPM products, and with ground-based radar data, show that CDRD for AMSR2 depicts the areas of high precipitation very well over all surface types. Similarly, preliminary results of the application of CDRD to GMI are shown and discussed, highlighting the advantage of the availability of high-frequency channels (> 90 GHz) for precipitation retrieval over land and coastal areas.
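    The Bayesian database retrieval at the heart of CDRD-style algorithms can be sketched as a weighted average over database entries, where each entry's weight measures how close its brightness-temperature vector is to the observation. This is a textbook-style sketch under a diagonal Gaussian error model; the real algorithm's EOF/CCA transforms, ancillary-variable categorisation and full error covariances are omitted:

```python
import math

def bayesian_retrieval(obs_tb, db_tb, db_rr, sigma=2.0):
    """Bayesian database retrieval sketch: the rain-rate estimate is the
    weighted mean of database rain rates db_rr, with Gaussian weights on
    the distance between the observed brightness temperatures obs_tb and
    each database vector in db_tb. sigma (K) lumps observation and
    modelling error into a single diagonal term."""
    weights = []
    for tb in db_tb:
        d2 = sum((o - t) ** 2 for o, t in zip(obs_tb, tb))
        weights.append(math.exp(-0.5 * d2 / sigma ** 2))
    z = sum(weights)
    return sum(w * r for w, r in zip(weights, db_rr)) / z
```

    Entries far from the observation contribute negligibly, so the estimate is dominated by radiometrically similar database profiles; this is why the size and empirical realism of the GPM-CO database matter so much.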

  7. Virtual fragment preparation for computational fragment-based drug design.

    PubMed

    Ludington, Jennifer L

    2015-01-01

    Fragment-based drug design (FBDD) has become an important component of the drug discovery process. The use of fragments can accelerate both the search for a hit molecule and the development of that hit into a lead molecule for clinical testing. In addition to experimental methodologies for FBDD such as NMR and X-ray crystallography screens, computational techniques are playing an increasingly important role. The success of the computational simulations is due in large part to how the database of virtual fragments is prepared. In order to prepare the fragments appropriately, it is necessary to understand how FBDD differs from other approaches and the issues inherent in building up molecules from smaller fragment pieces. The ultimate goal of these calculations is to link two or more simulated fragments into a molecule that has an experimental binding affinity consistent with the additive predicted binding affinities of the virtual fragments. Computationally predicting binding affinities is a complex process with many opportunities for introducing error; therefore, care should be taken with the fragment preparation procedure to avoid introducing additional inaccuracies. This chapter is focused on the preparation process used to create a virtual fragment database. Several key issues of fragment preparation that affect the accuracy of binding affinity predictions are discussed. The first issue is the selection of the two-dimensional atomic structure of the virtual fragment. Although the particular usage of the fragment can affect this choice (i.e., whether the fragment will be used for calibration, binding site characterization, hit identification, or lead optimization), general factors such as synthetic accessibility, size, and flexibility are major considerations in selecting the 2D structure. Other aspects of preparing the virtual fragments for simulation are the generation of three-dimensional conformations and the assignment of the associated atomic point charges.

  8. Mathematical modeling and computational prediction of cancer drug resistance.

    PubMed

    Sun, Xiaoqiang; Hu, Bin

    2017-06-23

    Diverse forms of resistance to anticancer drugs can lead to the failure of chemotherapy. Drug resistance is one of the most intractable issues for successfully treating cancer in current clinical practice. Effective clinical approaches that could counter drug resistance by restoring the sensitivity of tumors to the targeted agents are urgently needed. As numerous experimental results on resistance mechanisms have been obtained and a mass of high-throughput data has been accumulated, mathematical modeling and computational predictions using systematic and quantitative approaches have become increasingly important, as they can potentially provide deeper insights into resistance mechanisms, generate novel hypotheses or suggest promising treatment strategies for future testing. In this review, we first briefly summarize the current progress of experimentally revealed resistance mechanisms of targeted therapy, including genetic mechanisms, epigenetic mechanisms, posttranslational mechanisms, cellular mechanisms, microenvironmental mechanisms and pharmacokinetic mechanisms. Subsequently, we list several currently available databases and Web-based tools related to drug sensitivity and resistance. Then, we focus primarily on introducing some state-of-the-art computational methods used in drug resistance studies, including mechanism-based mathematical modeling approaches (e.g. molecular dynamics simulation, kinetic model of molecular networks, ordinary differential equation model of cellular dynamics, stochastic model, partial differential equation model, agent-based model, pharmacokinetic-pharmacodynamic model, etc.) and data-driven prediction methods (e.g. omics data-based conventional screening approach for node biomarkers, static network approach for edge biomarkers and module biomarkers, dynamic network approach for dynamic network biomarkers and dynamic module network biomarkers, etc.). Finally, we discuss several further questions and future directions for the use of computational methods for studying drug resistance, including inferring drug-induced signaling networks, multiscale modeling, drug combinations and precision medicine. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
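    One of the approaches listed above, an ordinary differential equation model of cellular dynamics under treatment, can be sketched with a two-population model of drug-sensitive and drug-resistant cells (all parameter values here are purely illustrative, not taken from the review):

```python
def simulate_resistance(days, dt=0.01, gs=0.3, gr=0.25, kill=0.5, mu=1e-4):
    """Euler integration of a two-population drug-resistance sketch:
        dS/dt = (gs - kill - mu) * S     # sensitive: grow, get killed, mutate
        dR/dt = gr * R + mu * S          # resistant: grow, seeded by mutation
    S(0) = 1 (normalised tumor burden), R(0) = 0."""
    s, r = 1.0, 0.0
    for _ in range(int(days / dt)):
        ds = (gs - kill - mu) * s
        dr = gr * r + mu * s
        s += ds * dt
        r += dr * dt
    return s, r
```

    Because the drug kill rate exceeds the sensitive growth rate, S decays while the mutation-seeded resistant clone expands, reproducing the classic relapse pattern such models are used to study.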

  9. Direct access midwifery booking for prenatal care and its role in Down syndrome screening.

    PubMed

    Nawaz, Tariq S; Tringham, Gillian M; Holding, Stephen; McFarlane, Jane; Lindow, Stephen W

    2011-10-01

    To compare the uptake of Down syndrome screening by women following referral by direct access and by general practitioner (GP) modes. The method of referral, either GP or direct access, for women who booked into prenatal care in Hull and East Yorkshire in 2010 was analysed using data collected from the Protos database at the Women and Children's Hospital, Hull. Subsequently, the uptake of first- and second-trimester screening for Down syndrome was reviewed by linking the Protos database to the screening data collected by the Clinical Biochemistry Laboratory at Hull Royal Infirmary, Hull. Women booked into prenatal care significantly earlier when referred by a GP than by direct access, and screening uptake differed significantly between the two modes (49.5 and 42.7%, respectively). The ratio of uptake between first- and second-trimester screening was not significantly different. Further research on the new direct access method of referral is required, as it may have a role in the uptake of prenatal screening for Down syndrome; more time is needed to show a definitive effect. Copyright © 2011 John Wiley & Sons, Ltd.

  10. Reduction of false-positive recalls using a computerized mammographic image feature analysis scheme

    NASA Astrophysics Data System (ADS)

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-08-01

    The high false-positive recall rate is one of the major dilemmas that significantly reduce the efficacy of screening mammography; it harms a large fraction of women and increases healthcare cost. This study aims to investigate the feasibility of reducing false-positive recalls by developing a new computer-aided diagnosis (CAD) scheme based on the analysis of global mammographic texture and density features computed from four-view images. Our database includes full-field digital mammography (FFDM) images acquired from 1052 recalled women (669 positive for cancer and 383 benign). Each case has four images: two craniocaudal (CC) and two mediolateral oblique (MLO) views. Our CAD scheme first computed global texture features related to the mammographic density distribution on the segmented breast regions of the four images. Second, the computed features were given to two artificial neural network (ANN) classifiers that were separately trained and tested in a ten-fold cross-validation scheme on CC and MLO view images, respectively. Finally, the two ANN classification scores were combined using a new adaptive scoring fusion method that automatically determined the optimal weights to assign to both views. CAD performance was tested using the area under a receiver operating characteristic curve (AUC). An AUC of 0.793 ± 0.026 was obtained for this four-view CAD scheme, which was significantly higher at the 5% significance level than the AUCs achieved when using only CC (p = 0.025) or MLO (p = 0.0004) view images, respectively. This study demonstrates that a quantitative assessment of global mammographic image texture and density features could provide useful and/or supplementary information for classifying between malignant and benign cases among recalled cases, which may eventually help reduce the false-positive recall rate in screening mammography.
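    The adaptive scoring fusion step, choosing how to weight the CC- and MLO-view classifier scores, can be sketched as a grid search for the weight that maximises training AUC. This is a hedged sketch only: the abstract does not detail how the authors determine the optimal weights, and this illustration assumes both classes are present in the training labels:

```python
def auc(pos, neg):
    """AUC via the Mann-Whitney statistic (ties count 0.5)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def fuse_scores(cc, mlo, labels, steps=100):
    """Pick the weight w in [0, 1] maximising the AUC of the fused score
    w*CC + (1-w)*MLO on a training set. cc, mlo are per-case classifier
    scores; labels are 0/1 and must contain both classes."""
    best_w, best_auc = 0.0, -1.0
    for k in range(steps + 1):
        w = k / steps
        fused = [w * c + (1 - w) * m for c, m in zip(cc, mlo)]
        pos = [f for f, y in zip(fused, labels) if y == 1]
        neg = [f for f, y in zip(fused, labels) if y == 0]
        a = auc(pos, neg)
        if a > best_auc:
            best_w, best_auc = w, a
    return best_w, best_auc
```

    When one view's classifier is much more informative, the search naturally pushes the weight toward that view, which is the behaviour an adaptive fusion scheme is after.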

  11. Clinical Decision Support Tools for Selecting Interventions for Patients with Disabling Musculoskeletal Disorders: A Scoping Review.

    PubMed

    Gross, Douglas P; Armijo-Olivo, Susan; Shaw, William S; Williams-Whitt, Kelly; Shaw, Nicola T; Hartvigsen, Jan; Qin, Ziling; Ha, Christine; Woodhouse, Linda J; Steenstra, Ivan A

    2016-09-01

    Purpose: We aimed to identify and inventory clinical decision support (CDS) tools for helping front-line staff select interventions for patients with musculoskeletal (MSK) disorders. Methods: We used Arksey and O'Malley's scoping review framework, which progresses through five stages: (1) identifying the research question; (2) identifying relevant studies; (3) selecting studies for analysis; (4) charting the data; and (5) collating, summarizing and reporting results. We considered computer-based and other available tools, such as algorithms, care pathways, rules and models. Since this research crosses multiple disciplines, we searched health care, computing science and business databases. Results: Our search resulted in 4605 manuscripts. Titles and abstracts were screened for relevance. The reliability of the screening process was high, with an average percentage of agreement of 92.3%. Of the located articles, 123 were considered relevant. Within this literature, 43 CDS tools were located. These were classified into 3 main areas: computer-based tools/questionnaires (n = 8, 19%), treatment algorithms/models (n = 14, 33%), and clinical prediction rules/classification systems (n = 21, 49%). Each of these areas and the associated evidence are described. The state of evidentiary support for CDS tools is still preliminary and lacks external validation, head-to-head comparisons, or evidence of generalizability across different populations and settings. Conclusions: CDS tools, especially those employing rapidly advancing computer technologies, are under development and of potential interest to health care providers, case management organizations and funders of care. Based on the results of this scoping review, we conclude that these tools, models and systems should be subjected to further validation before they can be recommended for large-scale implementation for managing patients with MSK disorders.

  12. THE ECOTOX DATABASE AND ECOLOGICAL SOIL SCREENING LEVEL (ECO-SSL) WEB SITES

    EPA Science Inventory

    The EPA's ECOTOX database (http://www.epa.gov/ecotox/) provides a web browser search interface for locating aquatic and terrestrial toxic effects information. Data on more than 8100 chemicals and 5700 terrestrial and aquatic species are included in the database. Information is ...

  13. Six Online Periodical Databases: A Librarian's View.

    ERIC Educational Resources Information Center

    Willems, Harry

    1999-01-01

    Compares the following World Wide Web-based periodical databases, focusing on their usefulness in K-12 school libraries: EBSCO, Electric Library, Facts on File, SIRS, Wilson, and UMI. Search interfaces, display options, help screens, printing, home access, copyright restrictions, database administration, and making a decision are discussed. A…

  14. Ubiquitous Accessibility for People with Visual Impairments: Are We There Yet?

    PubMed Central

    Billah, Syed Masum; Ashok, Vikas; Porter, Donald E.; Ramakrishnan, IV

    2017-01-01

    Ubiquitous access is an increasingly common vision of computing, wherein users can interact with any computing device or service from anywhere, at any time. In the era of personal computing, users with visual impairments required special-purpose, assistive technologies, such as screen readers, to interact with computers. This paper investigates whether technologies like screen readers have kept pace with, or have created a barrier to, the trend toward ubiquitous access, with a specific focus on desktop computing as this is still the primary way computers are used in education and employment. Towards that, the paper presents a user study with 21 visually-impaired participants, specifically involving the switching of screen readers within and across different computing platforms, and the use of screen readers in remote access scenarios. Among the findings, the study shows that, even for remote desktop access—an early forerunner of true ubiquitous access—screen readers are too limited, if not unusable. The study also identifies several accessibility needs, such as uniformity of navigational experience across devices, and recommends potential solutions. In summary, assistive technologies have not made the jump into the era of ubiquitous access, and multiple, inconsistent screen readers create new practical problems for users with visual impairments. PMID:28782061

  15. Ubiquitous Accessibility for People with Visual Impairments: Are We There Yet?

    PubMed

    Billah, Syed Masum; Ashok, Vikas; Porter, Donald E; Ramakrishnan, I V

    2017-05-01

    Ubiquitous access is an increasingly common vision of computing, wherein users can interact with any computing device or service from anywhere, at any time. In the era of personal computing, users with visual impairments required special-purpose, assistive technologies, such as screen readers, to interact with computers. This paper investigates whether technologies like screen readers have kept pace with, or have created a barrier to, the trend toward ubiquitous access, with a specific focus on desktop computing as this is still the primary way computers are used in education and employment. Towards that, the paper presents a user study with 21 visually-impaired participants, specifically involving the switching of screen readers within and across different computing platforms, and the use of screen readers in remote access scenarios. Among the findings, the study shows that, even for remote desktop access, an early forerunner of true ubiquitous access, screen readers are too limited, if not unusable. The study also identifies several accessibility needs, such as uniformity of navigational experience across devices, and recommends potential solutions. In summary, assistive technologies have not made the jump into the era of ubiquitous access, and multiple, inconsistent screen readers create new practical problems for users with visual impairments.

  16. Virtual screening applications: a study of ligand-based methods and different structure representations in four different scenarios.

    PubMed

    Hristozov, Dimitar P; Oprea, Tudor I; Gasteiger, Johann

    2007-01-01

    Four different ligand-based virtual screening scenarios are studied: (1) prioritizing compounds for subsequent high-throughput screening (HTS); (2) selecting a predefined (small) number of potentially active compounds from a large chemical database; (3) assessing the probability that a given structure will exhibit a given activity; (4) selecting the most active structure(s) for a biological assay. Each of the four scenarios is exemplified by performing retrospective ligand-based virtual screening for eight different biological targets using two large databases, MDDR and WOMBAT. A comparison between the chemical spaces covered by these two databases is presented. The performance of two techniques for ligand-based virtual screening, similarity search with subsequent data fusion (SSDF) and novelty detection with Self-Organizing Maps (ndSOM), is investigated. Three different structure representations are compared: 2,048-dimensional Daylight fingerprints, topological autocorrelation weighted by atomic physicochemical properties (sigma electronegativity, polarizability, partial charge, and identity), and radial distribution functions weighted by the same atomic physicochemical properties. Both methods were found applicable in scenario one. The similarity search was found to perform slightly better in scenario two, while the SOM novelty detection is preferred in scenario three. No method/descriptor combination achieved significant success in scenario four.
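
The similarity-search-with-data-fusion technique named in this record can be sketched in a few lines. The fingerprint representation (sets of "on" bits), the Tanimoto coefficient, and the max-fusion rule below are common illustrative choices, not necessarily the paper's exact implementation; all compound names are invented.

```python
# Sketch of similarity search with data fusion over binary fingerprints.
# Fingerprints are modelled as Python sets of "on" bit positions; the
# fusion rule (max similarity over all query actives) is one common choice.

def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two sets of on-bits."""
    inter = len(fp_a & fp_b)
    union = len(fp_a | fp_b)
    return inter / union if union else 0.0

def fused_ranking(actives, database):
    """Rank database entries by their best similarity to any query active."""
    scores = {}
    for name, fp in database.items():
        scores[name] = max(tanimoto(fp, a) for a in actives)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Invented query actives and database compounds.
actives = [{1, 2, 3, 4}, {2, 3, 5}]
database = {"mol_x": {1, 2, 3}, "mol_y": {7, 8}, "mol_z": {2, 3, 5}}
ranking = fused_ranking(actives, database)
print(ranking[0][0])  # most similar database compound
```

In a real screen the same ranking step would run over millions of fingerprints, and the top of the fused list would be prioritized for assay.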

  17. Incorporating Virtual Reactions into a Logic-based Ligand-based Virtual Screening Method to Discover New Leads

    PubMed Central

    Reynolds, Christopher R; Muggleton, Stephen H; Sternberg, Michael J E

    2015-01-01

    The use of virtual screening has become increasingly central to the drug development pipeline, with ligand-based virtual screening used to screen databases of compounds to predict their bioactivity against a target. These databases can only represent a small fraction of chemical space, and this paper describes a method of exploring synthetic space by applying virtual reactions to promising compounds within a database, and generating focussed libraries of predicted derivatives. A ligand-based virtual screening tool, Investigational Novel Drug Discovery by Example (INDDEx), is used as the basis for a system of virtual reactions. The use of virtual reactions is estimated to open up a space of 1.21×10^12 potential molecules. A de novo design algorithm known as Partial Logical-Rule Reactant Selection (PLoRRS) is introduced and incorporated into the INDDEx methodology. PLoRRS uses logical rules from the INDDEx model to select reactants for the de novo generation of potentially active products. The PLoRRS method is found to significantly increase the likelihood of retrieving molecules similar to known actives (p = 0.016). Case studies demonstrate that the virtual reactions produce molecules highly similar to known actives, including known blockbuster drugs. PMID:26583052

  18. Automated assessment of bilateral breast volume asymmetry as a breast cancer biomarker during mammographic screening

    NASA Astrophysics Data System (ADS)

    Williams, Alex C.; Hitt, Austin; Voisin, Sophie; Tourassi, Georgia

    2013-03-01

    The biological concept of bilateral symmetry as a marker of developmental stability and good health is well established. Although most individuals deviate slightly from perfect symmetry, humans are essentially considered bilaterally symmetrical. Consequently, increased fluctuating asymmetry of paired structures could be an indicator of disease. There are several published studies linking bilateral breast size asymmetry with increased breast cancer risk. These studies were based on radiologists' manual measurements of breast size from mammographic images. We aim to develop a computerized technique to assess fluctuating breast volume asymmetry in screening mammograms and investigate whether it correlates with the presence of breast cancer. Using a large database of screening mammograms with known ground truth, we applied automated breast region segmentation and automated breast size measurements in CC and MLO views using three well-established methods. All three methods confirmed that patients with breast cancer indeed have statistically significantly higher fluctuating asymmetry of their breast volumes. However, a statistically significant difference between patients with cancer and benign lesions was observed only for the MLO views. The study suggests that automated assessment of global bilateral asymmetry could serve as a breast cancer risk biomarker for women undergoing mammographic screening. Such a biomarker could be used to alert radiologists or computer-assisted detection (CAD) systems to exercise increased vigilance if higher than normal cancer risk is suspected.
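
A minimal sketch of the kind of fluctuating-asymmetry measure such a study might compute: the normalised index |L − R| / mean(L, R) below is an illustrative assumption, not the study's published measure, and the volumes are made-up numbers.

```python
# Sketch: fluctuating-asymmetry index for paired breast-volume measurements.
# The normalised index |L - R| / mean(L, R) is an illustrative choice, not
# necessarily the measure used in the study.

def asymmetry_index(left_volume, right_volume):
    mean = (left_volume + right_volume) / 2.0
    return abs(left_volume - right_volume) / mean if mean else 0.0

# Hypothetical volumes (arbitrary units) for two screening cases.
symmetric_case = asymmetry_index(610.0, 598.0)
asymmetric_case = asymmetry_index(540.0, 700.0)
print(symmetric_case < asymmetric_case)  # True
```

Dividing by the mean makes the index scale-free, so large and small breasts can be compared on the same footing before testing for a group difference.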

  19. Confirmed detection of Cyclospora cayetanensis, Encephalitozoon intestinalis and Cryptosporidium parvum in water used for drinking.

    PubMed

    Dowd, Scot E; John, David; Eliopolus, James; Gerba, Charles P; Naranjo, Jaime; Klein, Robert; López, Beatriz; de Mejía, Maricruz; Mendoza, Carlos E; Pepper, Ian L

    2003-09-01

    Human enteropathogenic microsporidia (HEM), Cryptosporidium parvum, Cyclospora cayetanensis, and Giardia lamblia are associated with gastrointestinal disease in humans. To date, the mode of transmission and environmental occurrence of HEM (Encephalitozoon intestinalis and Enterocytozoon bieneusi) and Cyclospora cayetanensis have not been fully elucidated, due to the lack of sensitive and specific environmental screening methods. The present study was undertaken, using recently developed methods, to screen various water sources used for public consumption in rural areas around the city of Guatemala. Water concentrates collected in these areas were subjected to community DNA extraction followed by PCR amplification, PCR sequencing and computer database homology comparison (CDHC). All water samples screened in this study had been previously confirmed positive for Giardia spp. by immunofluorescent assay (IFA). Of the 12 water concentrates screened, 6 showed amplification of microsporidial SSU-rDNA and were subsequently confirmed to be Encephalitozoon intestinalis. Five of the samples allowed for amplification of Cyclospora 18S-rDNA; three of these were confirmed to be Cyclospora cayetanensis while two could not be identified because of inadequate sequence information. Thus, this study represents the first confirmed identification of Cyclospora cayetanensis and Encephalitozoon intestinalis in source water used for consumption. The fact that the waters tested may be used for human consumption indicates that these emerging protozoa may be transmitted by ingestion of contaminated water.

  20. The effectiveness of patient navigation to improve healthcare utilization outcomes: A meta-analysis of randomized controlled trials.

    PubMed

    Ali-Faisal, Sobia F; Colella, Tracey J F; Medina-Jaudes, Naomi; Benz Scott, Lisa

    2017-03-01

    To determine the effects of patient navigation (PN) on healthcare utilization outcomes using meta-analysis, and to assess the quality of the evidence. Medical and social science databases were searched for randomized controlled trials published in English between 1989 and May 2015. The review process was guided by PRISMA. Included studies were assessed for quality using the Downs and Black tool. Data were extracted to assess the effect of navigation on health screening rates, diagnostic resolution, cancer care follow-up treatment adherence, and attendance of care events. Random-effects models were used to compute risk ratios, and I² statistics determined the impact of heterogeneity. Of 3985 articles screened, 25 met inclusion criteria. Compared to usual care, patients who received PN were significantly more likely to access health screening (OR 2.48, 95% CI 1.93-3.18, P<0.00001) and attend a recommended care event (OR 2.55, 95% CI 1.27-5.10, P<0.01). PN was also favoured for increasing adherence to cancer care follow-up treatment and for obtaining diagnoses. Most studies involved trained lay navigators (n=12) rather than health professionals (n=9). PN is effective in increasing screening rates and the completion of care events, and is an effective intervention for use in healthcare. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
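
The random-effects pooling and I² heterogeneity statistic reported in meta-analyses like this one can be sketched with the DerSimonian-Laird estimator; this is one common method, not necessarily the one the authors used, and the study values below are invented.

```python
import math

# Sketch of a DerSimonian-Laird random-effects pooling step, as commonly
# used for the pooled odds ratios and I^2 statistics in meta-analyses.
# The per-study values below are made up for illustration.

def random_effects_pool(log_effects, variances):
    """Return (pooled effect on the ratio scale, I^2 in percent)."""
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, log_effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_effects))
    df = len(log_effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    # between-study variance (DerSimonian-Laird moment estimator)
    tau2 = max(0.0, (q - df) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
    w_re = [1.0 / (v + tau2) for v in variances]     # random-effect weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, log_effects)) / sum(w_re)
    return math.exp(pooled), i2

# Three hypothetical studies: log odds ratios and their variances.
pooled_or, i_squared = random_effects_pool([0.8, 1.1, 0.5], [0.04, 0.09, 0.06])
print(round(pooled_or, 2), round(i_squared, 1))
```

When the studies agree perfectly, Q falls below its degrees of freedom and I² collapses to zero, which is why reviews quote I² as the share of variability due to heterogeneity rather than chance.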

  1. Benefits of computer screen-based simulation in learning cardiac arrest procedures.

    PubMed

    Bonnetain, Elodie; Boucheix, Jean-Michel; Hamet, Maël; Freysz, Marc

    2010-07-01

    What is the best way to train medical students early so that they acquire basic skills in cardiopulmonary resuscitation as effectively as possible? Studies have shown the benefits of high-fidelity patient simulators, but have also demonstrated their limits. New computer screen-based multimedia simulators have fewer constraints than high-fidelity patient simulators. In this area, as yet, there has been no research on the effectiveness of transfer of learning from a computer screen-based simulator to more realistic situations such as those encountered with high-fidelity patient simulators. We tested the benefits of learning cardiac arrest procedures using a multimedia computer screen-based simulator in 28 Year 2 medical students. Just before the end of the traditional resuscitation course, we compared two groups. An experimental group (EG) was first asked to learn to perform the appropriate procedures in a cardiac arrest scenario (CA1) in the computer screen-based learning environment and was then tested on a high-fidelity patient simulator in another cardiac arrest simulation (CA2). While the EG was learning to perform CA1 procedures in the computer screen-based learning environment, a control group (CG) actively continued to learn cardiac arrest procedures using practical exercises in a traditional class environment. Both groups were given the same amount of practice, exercises and trials. The CG was then also tested on the high-fidelity patient simulator for CA2, after which it was asked to perform CA1 using the computer screen-based simulator. Performances with both simulators were scored on a precise 23-point scale. On the test on a high-fidelity patient simulator, the EG trained with a multimedia computer screen-based simulator performed significantly better than the CG trained with traditional exercises and practice (16.21 versus 11.13 of 23 possible points, respectively; p<0.001). Computer screen-based simulation appears to be effective in preparing learners to use high-fidelity patient simulators, which present simulations that are closer to real-life situations.

  2. Diabetic retinopathy screening using deep neural network.

    PubMed

    Ramachandran, Nishanthan; Hong, Sheng Chiong; Sime, Mary J; Wilson, Graham A

    2017-09-07

    There is a burgeoning interest in the use of deep neural networks in diabetic retinal screening. To determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist from a local diabetic retinal screening programme and an international database. Retrospective audit. Diabetic retinal photos from the Otago database photographed during October 2016 (485 photos), and 1200 photos from the Messidor international database. Receiver operating characteristic curve to illustrate the ability of a deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea). Area under the receiver operating characteristic curve, sensitivity and specificity. For detecting referable diabetic retinopathy, the deep neural network had an area under the receiver operating characteristic curve of 0.901 (95% confidence interval 0.807-0.995), with 84.6% sensitivity and 79.7% specificity for Otago, and 0.980 (95% confidence interval 0.973-0.986), with 96.0% sensitivity and 90.0% specificity for Messidor. This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. © 2017 Royal Australian and New Zealand College of Ophthalmologists.
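
The AUC, sensitivity and specificity figures quoted in records like this one come from a standard computation that can be sketched directly; the rank-sum identity for AUC below is a textbook method, and the labels and classifier scores are invented, not data from the study.

```python
# Sketch: area under the ROC curve via the rank-sum (Mann-Whitney) identity,
# plus sensitivity/specificity at a fixed decision threshold.
# Labels (1 = referable disease) and scores here are made up.

def auc(labels, scores):
    """AUC = probability a positive case outranks a negative one."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(labels, scores, threshold):
    tp = sum(1 for l, s in zip(labels, scores) if l == 1 and s >= threshold)
    tn = sum(1 for l, s in zip(labels, scores) if l == 0 and s < threshold)
    p = sum(labels)
    n = len(labels) - p
    return tp / p, tn / n

labels = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1]
print(auc(labels, scores))                         # 0.9375
print(sensitivity_specificity(labels, scores, 0.5))  # (0.75, 0.75)
```

Moving the threshold trades sensitivity against specificity, which is exactly what the ROC curve in such a study traces out.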

  3. Drowning in Data: Sorting through CD ROM and Computer Databases.

    ERIC Educational Resources Information Center

    Cates, Carl M.; Kaye, Barbara K.

    This paper identifies the bibliographic and numeric databases on CD-ROM and computer diskette that should be most useful for investigators in communication, marketing, and communication education. Bibliographic databases are usually found in three formats: citations only, citations and abstracts, and full-text articles. Numeric databases are…

  4. The CEBAF Element Database and Related Operational Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larrieu, Theodore; Slominski, Christopher; Keesee, Marie

    The newly commissioned 12 GeV CEBAF accelerator relies on a flexible, scalable and comprehensive database to define the accelerator. This database delivers the configuration for CEBAF operational tools, including hardware checkout, the downloadable optics model, control screens, and much more. The presentation will describe the flexible design of the CEBAF Element Database (CED), its features and assorted use case examples.

  5. [SCREENING OF NUTRITIONAL STATUS AMONG ELDERLY PEOPLE AT FAMILY MEDICINE].

    PubMed

    Račić, M; Ivković, N; Kusmuk, S

    2015-11-01

    The prevalence of malnutrition in the elderly is high. Malnutrition or risk of malnutrition can be detected by the use of nutritional screening or assessment tools. This systematic review aimed to identify tools that would be reliable, valid, sensitive and specific for nutritional status screening in patients older than 65 in family medicine. The review was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. Studies were retrieved using MEDLINE (via Ovid), PubMed and Cochrane Library electronic databases and by manual searching of relevant articles listed in the reference lists of key publications. The electronic databases were searched using defined key words adapted to each database and using MeSH terms. Manual revision of reviews and original articles was performed using the Electronic Journals Library. Included studies involved development and validation of screening tools in the community-dwelling elderly population. The tools subjected to validity and reliability testing for use in the community-dwelling elderly population were the Mini Nutritional Assessment (MNA), Mini Nutritional Assessment-Short Form (MNA-SF), Nutrition Screening Initiative (NSI), which includes the DETERMINE list and Level I and II Screens, Seniors in the Community: Risk Evaluation for Eating and Nutrition (SCREEN I and SCREEN II), Subjective Global Assessment (SGA), Nutritional Risk Index (NRI), and a Malaysian and a South African tool. MNA and MNA-SF appear to have the highest reliability and validity for screening of the community-dwelling elderly, while the reliability and validity of SCREEN II are good. The authors conclude that whilst several tools have been developed, most have not undergone extensive testing to demonstrate their ability to identify nutritional risk. MNA and MNA-SF have the highest reliability and validity for screening of nutritional status in the community-dwelling elderly, and the reliability and validity of SCREEN II are satisfactory. These instruments also contain all three nutritional status indicators and are practical for use in family medicine. However, a gold standard for screening cannot yet be set, because testing of reliability and continuous validation in studies with a higher level of evidence still need to be conducted in family medicine.

  6. The ToxCast Pathway Database for Identifying Toxicity Signatures and Potential Modes of Action from Chemical Screening Data

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA), through its ToxCast program, is developing predictive toxicity approaches that will use in vitro high-throughput screening (HTS), high-content screening (HCS) and toxicogenomic data to predict in vivo toxicity phenotypes. There are ...

  7. Parenting style, the home environment, and screen time of 5-year-old children; the 'be active, eat right' study.

    PubMed

    Veldhuis, Lydian; van Grieken, Amy; Renders, Carry M; Hirasing, Remy A; Raat, Hein

    2014-01-01

    The global increase in childhood overweight and obesity has been ascribed partly to increases in children's screen time. Parents have a large influence on their children's screen time. Studies investigating parenting and early childhood screen time are limited. In this study, we investigated associations of parenting style and the social and physical home environment with watching TV and using computers or game consoles among 5-year-old children. This study uses baseline data concerning 5-year-old children (n = 3067) collected for the 'Be active, eat right' study. Children of parents with a higher score on the parenting style dimension 'involvement' were more likely to spend >30 min/day on computers or game consoles. Overall, families with an authoritative or authoritarian parenting style had lower percentages of children's screen time compared to families with an indulgent or neglectful style, but no significant difference in OR was found. In families with rules about screen time, children were less likely to watch TV>2 hrs/day and more likely to spend >30 min/day on computers or game consoles. The number of TVs and computers or game consoles in the household was positively associated with screen time, and children with a TV or computer or game console in their bedroom were more likely to watch TV>2 hrs/day or spend >30 min/day on computers or game consoles. The magnitude of the association between parenting style and screen time of 5-year-olds was found to be relatively modest. The associations found between the social and physical environment and children's screen time are independent of parenting style. Interventions to reduce children's screen time might be most effective when they support parents specifically with introducing family rules related to screen time and prevent the presence of a TV or computer or game console in the child's room.

  8. Parenting Style, the Home Environment, and Screen Time of 5-Year-Old Children; The ‘Be Active, Eat Right’ Study

    PubMed Central

    Veldhuis, Lydian; van Grieken, Amy; Renders, Carry M.; HiraSing, Remy A.; Raat, Hein

    2014-01-01

    Introduction The global increase in childhood overweight and obesity has been ascribed partly to increases in children's screen time. Parents have a large influence on their children's screen time. Studies investigating parenting and early childhood screen time are limited. In this study, we investigated associations of parenting style and the social and physical home environment with watching TV and using computers or game consoles among 5-year-old children. Methods This study uses baseline data concerning 5-year-old children (n = 3067) collected for the ‘Be active, eat right’ study. Results Children of parents with a higher score on the parenting style dimension ‘involvement’ were more likely to spend >30 min/day on computers or game consoles. Overall, families with an authoritative or authoritarian parenting style had lower percentages of children's screen time compared to families with an indulgent or neglectful style, but no significant difference in OR was found. In families with rules about screen time, children were less likely to watch TV>2 hrs/day and more likely to spend >30 min/day on computers or game consoles. The number of TVs and computers or game consoles in the household was positively associated with screen time, and children with a TV or computer or game console in their bedroom were more likely to watch TV>2 hrs/day or spend >30 min/day on computers or game consoles. Conclusion The magnitude of the association between parenting style and screen time of 5-year-olds was found to be relatively modest. The associations found between the social and physical environment and children's screen time are independent of parenting style. Interventions to reduce children's screen time might be most effective when they support parents specifically with introducing family rules related to screen time and prevent the presence of a TV or computer or game console in the child's room. PMID:24533092

  9. HTS-DB: an online resource to publish and query data from functional genomics high-throughput siRNA screening projects.

    PubMed

    Saunders, Rebecca E; Instrell, Rachael; Rispoli, Rossella; Jiang, Ming; Howell, Michael

    2013-01-01

    High-throughput screening (HTS) uses technologies such as RNA interference to generate loss-of-function phenotypes on a genomic scale. As these technologies become more popular, many research institutes have established core facilities of expertise to deal with the challenges of large-scale HTS experiments. As the efforts of core facility screening projects come to fruition, focus has shifted towards managing the results of these experiments and making them available in a useful format that can be further mined for phenotypic discovery. The HTS-DB database provides a public view of data from screening projects undertaken by the HTS core facility at the CRUK London Research Institute. All projects and screens are described with comprehensive assay protocols, and datasets are provided with complete descriptions of analysis techniques. This format allows users to browse and search data from large-scale studies in an informative and intuitive way. It also provides a repository for additional measurements obtained from screens that were not the focus of the project, such as cell viability, and groups these data so that it can provide a gene-centric summary across several different cell lines and conditions. All datasets from our screens that can be made available can be viewed interactively and mined for further hit lists. We believe that in this format, the database provides researchers with rapid access to results of large-scale experiments that might facilitate their understanding of genes/compounds identified in their own research. DATABASE URL: http://hts.cancerresearchuk.org/db/public.

  10. Randomized Approaches for Nearest Neighbor Search in Metric Space When Computing the Pairwise Distance Is Extremely Expensive

    NASA Astrophysics Data System (ADS)

    Wang, Lusheng; Yang, Yong; Lin, Guohui

    Finding the closest object for a query in a database is a classical problem in computer science. For some modern biological applications, computing the similarity between two objects might be very time consuming. For example, it takes a long time to compute the edit distance between two whole chromosomes or the alignment cost of two 3D protein structures. In this paper, we study the nearest neighbor search problem in metric space, where the pairwise distance between two objects in the database is known, and we want to minimize the number of distances computed on-line between the query and objects in the database in order to find the closest object. We have designed two randomized approaches for indexing metric space databases, where objects are described purely by their distances to each other. Analysis and experiments show that our approaches only need to compute distances to O(log n) objects in order to find the closest object, where n is the total number of objects in the database.
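
The core idea, precomputed pairwise distances plus the triangle inequality to avoid on-line distance computations, can be sketched with a simple pivot-based search. This is in the spirit of the paper, not its exact randomized algorithm; the 1-D point database and pivot choice are illustrative.

```python
# Sketch of pivot-based nearest-neighbour search in metric space: all
# pairwise database distances are precomputed, and the triangle inequality
# prunes on-line query/object distance computations.

def nearest(query, objects, dist, pairwise, pivots):
    """Return (index of nearest object, number of on-line distance calls)."""
    calls = 0
    d_q = {}
    for p in pivots:                      # distances query -> pivots
        d_q[p] = dist(query, objects[p])
        calls += 1
    best_i = min(d_q, key=d_q.get)
    best_d = d_q[best_i]
    for i in range(len(objects)):
        if i in d_q:
            continue
        # triangle-inequality lower bound on d(query, objects[i])
        lb = max(abs(d_q[p] - pairwise[p][i]) for p in pivots)
        if lb >= best_d:
            continue                      # pruned: cannot beat best_d
        d = dist(query, objects[i])
        calls += 1
        if d < best_d:
            best_d, best_i = d, i
    return best_i, calls

objects = [0.0, 2.0, 5.0, 9.0, 10.0]      # 1-D points; metric = |a - b|
dist = lambda a, b: abs(a - b)
pairwise = [[dist(a, b) for b in objects] for a in objects]
idx, calls = nearest(8.6, objects, dist, pairwise, pivots=[0, 4])
print(idx, calls)                          # finds 9.0 with fewer calls than brute force
```

Brute force would compute all five query distances; here the bound prunes two candidates outright, and schemes like the paper's choose pivots so that only O(log n) on-line distances are needed on average.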

  11. NALDB: nucleic acid ligand database for small molecules targeting nucleic acid.

    PubMed

    Kumar Mishra, Subodh; Kumar, Amit

    2016-01-01

    Nucleic acid ligand database (NALDB) is a unique database that provides detailed information about the experimental data of small molecules that were reported to target several types of nucleic acid structures. NALDB is the first ligand database that contains ligand information for all types of nucleic acid. NALDB contains more than 3500 ligand entries with detailed pharmacokinetic and pharmacodynamic information such as target name, target sequence, ligand 2D/3D structure, SMILES, molecular formula, molecular weight, net formal charge, AlogP, number of rings, number of hydrogen bond donors and acceptors, and potential energy, along with their Ki, Kd and IC50 values. All these details at a single platform would be helpful for the development and betterment of novel ligands targeting nucleic acids, which could serve as potential targets in different diseases including cancers and neurological disorders. With a maximum of 255 conformers for each ligand entry, our database is a multi-conformer database and can facilitate the virtual screening process. NALDB provides powerful web-based search tools that make database searching efficient and simple, with options for text as well as structure queries. NALDB also provides a multi-dimensional advanced search tool which can screen the database molecules on the basis of molecular properties provided by database users. A 3D structure visualization tool has also been included for 3D structure representation of ligands. NALDB offers inclusive pharmacological information and a structurally flexible set of small molecules with their three-dimensional conformers that can accelerate virtual screening and other modeling processes and eventually complement nucleic acid-based drug discovery research. NALDB is routinely updated and freely available at http://bsbe.iiti.ac.in/bsbe/naldb/HOME.php. Database URL: http://bsbe.iiti.ac.in/bsbe/naldb/HOME.php. © The Author(s) 2016. Published by Oxford University Press.
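
The multi-dimensional property search that NALDB describes amounts to range-filtering records on molecular properties; the sketch below shows the idea with invented records and field names, not NALDB's actual schema or API.

```python
# Sketch of a multi-dimensional property search over ligand records:
# keep entries whose molecular properties fall inside user-given ranges.
# Records and field names are invented for illustration.

ligands = [
    {"name": "lig_a", "mol_weight": 320.4, "alogp": 2.1, "rings": 3},
    {"name": "lig_b", "mol_weight": 612.8, "alogp": 5.7, "rings": 5},
    {"name": "lig_c", "mol_weight": 287.3, "alogp": 1.4, "rings": 2},
]

def property_search(records, **ranges):
    """Keep records whose properties fall inside inclusive (lo, hi) bounds."""
    hits = []
    for rec in records:
        if all(lo <= rec[field] <= hi for field, (lo, hi) in ranges.items()):
            hits.append(rec["name"])
    return hits

# A drug-like window on weight and lipophilicity keeps two of three entries.
print(property_search(ligands, mol_weight=(200, 400), alogp=(0, 3)))
```

A real database would run the same predicate as an indexed SQL query, but the filtering semantics are identical.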

  12. A new approach to data evaluation in the non-target screening of organic trace substances in water analysis.

    PubMed

    Müller, Alexander; Schulz, Wolfgang; Ruck, Wolfgang K L; Weber, Walter H

    2011-11-01

    Non-target screening via high-performance liquid chromatography-mass spectrometry (HPLC-MS) has gained increasing importance for monitoring organic trace substances in water resources targeted for the production of drinking water. In this article, a new approach for evaluating the data from non-target HPLC-MS screening in water is introduced and its advantages are demonstrated using the supply of drinking water as an example. The crucial difference between this and other approaches is the comparison of samples based on compounds (features) determined from their full-scan data. In so doing, we take advantage of the temporal, spatial, or process-based relationships among the samples by applying the set operators UNION, INTERSECT, and COMPLEMENT to the features of each sample. This approach considers all compounds detectable by the analytical method used; that is the fundamental meaning of non-target screening, which includes all analytical information from the applied technique in further data evaluation. In the given example, in just one step, all detected features (1729) of a landfill leachate sample could be examined for their relevance to water purification and drinking water. This study shows that 1721 of the 1729 features were not relevant to the water purification. Only eight features could be determined in the untreated water, and three of them were found in the final drinking water after ozonation. In so doing, it was possible to identify 1-adamantylamine as contamination from the landfill in the drinking water at a concentration in the range of 20 ng L(-1). To support the identification of relevant compounds and their transformation products, the DAIOS database (Database-Assisted Identification of Organic Substances) was used. This database concept includes functions such as product ion search to increase the efficiency of the database query after the screening. To identify related transformation products, the database function "transformation tree" was used. Copyright © 2011 Elsevier Ltd. All rights reserved.
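
The set-operator comparison of samples described in this record maps directly onto set algebra; the sketch below uses invented feature IDs as stand-ins for the (retention time, m/z) features a real screening run would produce.

```python
# Sketch of the feature-set comparison: each sample's non-target screening
# run yields a set of detected features, and UNION, INTERSECT and COMPLEMENT
# relate samples along the treatment train. Feature IDs are invented.

leachate  = {"f1", "f2", "f3", "f4", "f5"}   # landfill leachate sample
untreated = {"f3", "f4", "f5", "f6"}         # untreated (raw) water
drinking  = {"f4", "f7"}                     # final drinking water

# Leachate features that reached the untreated water (INTERSECT).
relevant_to_intake = leachate & untreated

# Of those, the features that survived treatment into the drinking water.
persistent = relevant_to_intake & drinking

# Leachate features never seen at the intake (COMPLEMENT).
not_relevant = leachate - untreated

print(sorted(persistent))     # features persisting through to drinking water
print(sorted(not_relevant))
```

One pass of these operators reduces thousands of detected features to the handful worth identifying, which is how the study narrowed 1729 leachate features down to eight relevant ones.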

  13. [Chemical databases and virtual screening].

    PubMed

    Rognan, Didier; Bonnet, Pascal

    2014-12-01

    A prerequisite to any virtual screening is the definition of compound libraries to be screened. As we describe here, various sources are available. The selection of the proper library is usually project-dependent but at least as important as the screening method itself. This review details the main compound libraries that are available for virtual screening and guides the reader to the best possible selection according to their needs. © 2014 médecine/sciences – Inserm.

  14. A User's Applications of Imaging Techniques: The University of Maryland Historic Textile Database.

    ERIC Educational Resources Information Center

    Anderson, Clarita S.

    1991-01-01

    Describes the incorporation of textile images into the University of Maryland Historic Textile Database by a computer user rather than a computer expert. Selection of a database management system is discussed, and PICTUREPOWER, a system that integrates photographic quality images with text and numeric information in databases, is described. (three…

  15. Geochemical databases: minding the pitfalls to avoid the pratfalls

    NASA Astrophysics Data System (ADS)

    Goldstein, S. L.; Hofmann, A. W.

    2011-12-01

    The field of geochemistry has been revolutionized in recent years by the advent of databases (PetDB, GEOROC, NAVDAT, etc.). A decade ago, a geochemical synthesis required major time investments to compile relatively small amounts of fragmented data from large numbers of publications. Now virtually all of the published data on nearly any solid Earth topic can be downloaded to nearly any desktop computer with a few mouse clicks. Most solid Earth talks at international meetings show data compilations from these databases. Applications of the data are playing an increasingly important role in shaping our thinking about the Earth. They have changed some fundamental ideas about the compositional structure of the Earth (for example, showing that the Earth's "trace element depleted upper mantle" is not so depleted in trace elements). This abundance of riches also poses new risks. Until recently, important details associated with data publication (adequate metadata and quality control information) were given low priority, even in major journals. The online databases preserve whatever has been published, irrespective of quality. "Bad data" arises from many causes; here are a few. Some are associated with sample processing, including incomplete dissolution of refractory trace minerals, or inhomogeneous powders, or contamination of key elements during preparation (for example, this was a problem for lead when gasoline was leaded, and for niobium when tungsten-carbide mills were used to powder samples). Poor analytical quality is a continual problem (for example, when elemental abundances are near background levels for an analytical method). Errors in published data tables (more common than you think) become bad data in the databases. The accepted values of interlaboratory standards change with time, while the published data based on old values stay the same.
Thus the pitfalls associated with the new data accessibility are dangerous in the hands of inexperienced users (for example, a student of mine took the initiative to write a paper showing very creative insights, based on some neodymium isotope data on oceanic volcanics; unfortunately, the uniqueness of the data reflected the normalization procedures used by different labs). Many syntheses assume random sampling even though we know that oversampled regions are over-represented. We will show examples where raw downloads of data from databases without extensive screening can yield data collections where the garbage swamps the useful information. We will also show impressive but meaningless correlations (e.g., upper-mantle temperature versus atmospheric temperature). To avoid the pratfalls, screening of database output is necessary. To generate better data consistency, new standards for reporting geochemical data are necessary.

  16. An optimal user-interface for EPIMS database conversions and SSQ 25002 EEE parts screening

    NASA Technical Reports Server (NTRS)

    Watson, John C.

    1996-01-01

    The Electrical, Electronic, and Electromechanical (EEE) Parts Information Management System (EPIMS) database was selected by the International Space Station Parts Control Board for providing parts information to NASA managers and contractors. Parts data is transferred to the EPIMS database by converting parts list data to the EPIMS Data Exchange File Format. In general, parts list information received from contractors and suppliers does not convert directly into the EPIMS Data Exchange File Format. Often parts lists use different variable and record field assignments. Many of the EPIMS variables are not defined in the parts lists received. The objective of this work was to develop an automated system for translating parts lists into the EPIMS Data Exchange File Format for upload into the EPIMS database. Once EEE parts information has been transferred to the EPIMS database, it is necessary to screen parts data in accordance with the provisions of the SSQ 25002 Supplemental List of Qualified Electrical, Electronic, and Electromechanical Parts, Manufacturers, and Laboratories (QEPM&L). The SSQ 25002 standards are used to identify parts which satisfy the requirements for spacecraft applications. An additional objective for this work was to develop an automated system which would screen EEE parts information against the SSQ 25002 to inform managers of the qualification status of parts used in spacecraft applications. The EPIMS Database Conversion and SSQ 25002 User Interfaces are designed to interface through the World-Wide-Web (WWW)/Internet to provide accessibility by NASA managers and contractors.

  17. Promoting Colorectal Cancer Screening Discussion

    PubMed Central

    Christy, Shannon M.; Perkins, Susan M.; Tong, Yan; Krier, Connie; Champion, Victoria L.; Skinner, Celette Sugg; Springston, Jeffrey K.; Imperiale, Thomas F.; Rawl, Susan M.

    2013-01-01

    Background Provider recommendation is a predictor of colorectal cancer (CRC) screening. Purpose To compare the effects of two clinic-based interventions on patient–provider discussions about CRC screening. Design Two-group RCT with data collected at baseline and 1 week post-intervention. Participants/setting African-American patients who were non-adherent to CRC screening recommendations (n=693), with a primary care visit between 2008 and 2010 in one of 11 urban primary care clinics. Intervention Participants received either a computer-delivered tailored CRC screening intervention or a nontailored informational brochure about CRC screening immediately prior to their primary care visit. Main outcome measures Between-group differences in the odds of having had a CRC screening discussion about a colon test were examined using logistic regression, with and without adjusting for demographic, clinic, health literacy, health belief, and social support variables. Intervention effects on CRC screening test orders by PCPs were also examined using logistic regression. Analyses were conducted in 2011 and 2012. Results Compared to the brochure group, a greater proportion of those in the computer-delivered tailored intervention group reported having had a discussion with their provider about CRC screening (63% vs 48%, OR=1.81, p<0.001). Predictors of a discussion about CRC screening included computer group participation, younger age, reason for visit, being unmarried, colonoscopy self-efficacy, and family member/friend recommendation (all p-values <0.05). Conclusions The computer-delivered tailored intervention was more effective than a nontailored brochure at stimulating patient–provider discussions about CRC screening. Those who received the computer-delivered intervention also were more likely to have a CRC screening test (fecal occult blood test or colonoscopy) ordered by their PCP.
Trial registration This study is registered at www.clinicaltrials.gov NCT00672828. PMID:23498096

  18. Identification of drug interactions in hospitals--computerized screening vs. bedside recording.

    PubMed

    Blix, H S; Viktil, K K; Moger, T A; Reikvam, A

    2008-04-01

    Managing drug interactions in hospitalized patients is important and challenging. The objective of the study was to compare two methods for identifying drug-drug interactions (DDIs)--computerized screening and prospective bedside recording--with regard to their capability of identifying DDIs. Patient characteristics were recorded for patients admitted to five hospitals. By bedside evaluation, drug-related problems, including DDIs, were prospectively recorded by pharmacists and discussed in multidisciplinary teams. A computer screening programme was used to identify DDIs retrospectively, dividing DDIs into four classes: A, avoid; B, avoid/take precautions; C, take precautions; D, no action needed. Among 827 patients, computer screening identified DDIs in 544 patients (66%); 351 had DDIs introduced in hospital. The 1513 computer-identified DDIs had the following distribution: type A, 78; type B, 915; type C, 38; type D, 482. By bedside evaluation, 99 DDIs were identified in 73 patients (9%). The proportions of computer-recorded DDIs that were also identified at the bedside were 5%, 8%, 8%, and 2% for DDIs of types A, B, C, and D, respectively. In 10 patients, DDIs not registered by computer screening were identified by bedside evaluation. The drugs most frequently involved in DDIs identified by computerized screening were acetylsalicylic acid, warfarin, furosemide, and digitoxin, compared with warfarin, simvastatin, theophylline, and carbamazepine by bedside evaluation. Despite an active prospective bedside search for DDIs, this approach identified fewer than one in 10 of the DDIs recorded by computer screening, including those regarded as hazardous. However, computer screening considerably overestimates when the objective is to identify clinically relevant DDIs.
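The retrospective computer screening described here is, at its core, a pairwise lookup of a patient's medication list against an interaction table keyed by unordered drug pairs and severity class. The sketch below illustrates that idea; the interaction table, drug pairs, and class assignments are invented for illustration and are not clinical guidance.

```python
from itertools import combinations

# Hypothetical interaction table: unordered drug pair -> severity class
# (A: avoid; B: avoid/take precautions; C: take precautions; D: no action needed).
DDI_TABLE = {
    frozenset({"warfarin", "acetylsalicylic acid"}): "B",
    frozenset({"digitoxin", "furosemide"}): "C",
    frozenset({"simvastatin", "clarithromycin"}): "A",
}

def screen(medications):
    """Return every known interaction among a patient's medication list."""
    hits = []
    for a, b in combinations(sorted(set(medications)), 2):
        severity = DDI_TABLE.get(frozenset({a, b}))
        if severity:
            hits.append((a, b, severity))
    return hits

print(screen(["warfarin", "acetylsalicylic acid", "furosemide", "digitoxin"]))
```

Screening every pair is what makes the computerized method exhaustive; the study's finding is that exhaustiveness comes at the cost of flagging many class-D pairs a clinician would never act on.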

  19. Some Activities with Polarized Light from a Laptop LCD Screen

    ERIC Educational Resources Information Center

    Fakhruddin, Hasan

    2008-01-01

    The LCD screen of a laptop computer provides a broad, bright, and extended source of polarized light. A number of demonstrations of the properties of polarized light from a laptop computer screen are presented here.

  20. Proceedings of Workshop 1, the Human Brainmap Database Held in San Antonio, Texas on November 29-December 1, 1992.

    DTIC Science & Technology

    1993-02-17

    these differences should be reflected in fields in the database. The limiting factor is whether the methodological differences make comparisons among...another search, return to the Search Criteria - Summary screen...The 3-view plot screen appears when Plot is...for an organizational meeting of this type, it was quite productive. There was significant information passed, and the issues that needed to be

  1. Databases for LDEF results

    NASA Technical Reports Server (NTRS)

    Bohnhoff-Hlavacek, Gail

    1992-01-01

    One of the objectives of the team supporting the LDEF Systems and Materials Special Investigative Groups is to develop databases of experimental findings. These databases identify the hardware flown, summarize results and conclusions, and provide a system for acknowledging investigators, tracing sources of data, and future design suggestions. To date, databases covering the optical experiments and thermal control materials (chromic acid anodized aluminum, silverized Teflon blankets, and paints) have been developed at Boeing. We used the Filemaker Pro software, the database manager for the Macintosh computer produced by the Claris Corporation. It is a flat, text-retrievable database that provides access to the data via an intuitive user interface, without tedious programming. Though this software is available only for the Macintosh computer at this time, copies of the databases can be saved to a format that is readable on a personal computer as well. Further, the data can be exported to more powerful relational databases. This paper describes the capabilities and use of the LDEF databases and how to obtain copies of the databases for your own research.

  2. cisPath: an R/Bioconductor package for cloud users for visualization and management of functional protein interaction networks.

    PubMed

    Wang, Likun; Yang, Luhe; Peng, Zuohan; Lu, Dan; Jin, Yan; McNutt, Michael; Yin, Yuxin

    2015-01-01

    With the burgeoning development of cloud technology and services, there are an increasing number of users who prefer the cloud for running their applications. All software and associated data are hosted on the cloud, allowing users to access them via a web browser from any computer, anywhere. This paper presents cisPath, an R/Bioconductor package deployed on cloud servers for client users to visualize, manage, and share functional protein interaction networks. With this R package, users can easily integrate downloaded protein-protein interaction information from different online databases with private data to construct new and personalized interaction networks. Additional functions allow users to generate specific networks based on private databases. Since the results produced with this package are in the form of web pages, cloud users can easily view and edit the network graphs via the browser, using a mouse or touch screen, without the need to download them to a local computer. This package can also be installed and run on a local desktop computer. Depending on user preference, results can be publicized or shared by uploading to a web server or cloud drive, allowing other users to directly access results via a web browser. This package can be installed and run on a variety of platforms. Since all network views are shown in web pages, this package is particularly useful for cloud users. The easy installation and operation is an attractive quality for R beginners and users with no previous experience with cloud services.

  3. cisPath: an R/Bioconductor package for cloud users for visualization and management of functional protein interaction networks

    PubMed Central

    2015-01-01

    Background With the burgeoning development of cloud technology and services, there are an increasing number of users who prefer the cloud for running their applications. All software and associated data are hosted on the cloud, allowing users to access them via a web browser from any computer, anywhere. This paper presents cisPath, an R/Bioconductor package deployed on cloud servers for client users to visualize, manage, and share functional protein interaction networks. Results With this R package, users can easily integrate downloaded protein-protein interaction information from different online databases with private data to construct new and personalized interaction networks. Additional functions allow users to generate specific networks based on private databases. Since the results produced with this package are in the form of web pages, cloud users can easily view and edit the network graphs via the browser, using a mouse or touch screen, without the need to download them to a local computer. This package can also be installed and run on a local desktop computer. Depending on user preference, results can be publicized or shared by uploading to a web server or cloud drive, allowing other users to directly access results via a web browser. Conclusions This package can be installed and run on a variety of platforms. Since all network views are shown in web pages, this package is particularly useful for cloud users. The easy installation and operation is an attractive quality for R beginners and users with no previous experience with cloud services. PMID:25708840

  4. iDrug: a web-accessible and interactive drug discovery and design platform

    PubMed Central

    2014-01-01

    Background The progress in computer-aided drug design (CADD) approaches over the past decades has accelerated early-stage pharmaceutical research. Many powerful standalone tools for CADD have been developed in academia. As programs are developed by various research groups, a consistent, user-friendly online graphical working environment combining computational techniques such as pharmacophore mapping, similarity calculation, scoring, and target identification is needed. Results We present a versatile, user-friendly, and efficient online tool for computer-aided drug design based on pharmacophore and 3D molecular similarity searching. The web interface enables binding site detection, virtual screening hit identification, and drug target prediction in an interactive manner through a seamless interface to all adapted packages (e.g., Cavity, PocketV.2, PharmMapper, SHAFTS). Several commercially available compound databases for hit identification and a well-annotated pharmacophore database for drug target prediction were integrated in iDrug as well. The web interface provides tools for real-time molecular building/editing, converting, displaying, and analyzing. All the customized configurations of the functional modules can be accessed through the featured session files provided, which can be saved to the local disk and uploaded to resume or update previous work. Conclusions iDrug is easy to use, and provides a novel, fast and reliable tool for conducting drug design experiments. By using iDrug, various molecular design processing tasks can be submitted and visualized simply in one browser without locally installing any standalone modeling software. iDrug is accessible free of charge at http://lilab.ecust.edu.cn/idrug. PMID:24955134

  5. Survey of ecotoxicologically-relevant reproductive endpoint coverage within the ECOTOX database across ToxCast ER agonists (ASCCT)

    EPA Science Inventory

    The U.S. EPA’s Endocrine Disruptor Screening Program (EDSP) has been charged with screening thousands of chemicals for their potential to affect the endocrine systems of humans and wildlife. In vitro high throughput screening (HTS) assays have been proposed as a way to prioritize...

  6. Newborn Screening: National Library of Medicine Literature Search, January 1980 through March 1987. No. 87-2.

    ERIC Educational Resources Information Center

    Patrias, Karen

    This bibliography, prepared by the National Library of Medicine through a literature search of its online databases, covers all aspects of newborn screening. It includes references to screening for: inborn errors of metabolism, such as phenylketonuria and galactosemia; hemoglobinopathies, particularly sickle cell disease; congenital hypothyroidism…

  7. A practical approach to screen for authorised and unauthorised genetically modified plants.

    PubMed

    Waiblinger, Hans-Ulrich; Grohmann, Lutz; Mankertz, Joachim; Engelbert, Dirk; Pietsch, Klaus

    2010-03-01

    In routine analysis, screening methods based on real-time PCR are most commonly used for the detection of genetically modified (GM) plant material in food and feed. In this paper, it is shown that the combination of five DNA target sequences can be used as a universal screening approach for at least 81 GM plant events authorised or unauthorised for placing on the market and described in publicly available databases. Except for maize event LY038, soybean events DP-305423 and BPS-CV127-9 and cotton event 281-24-236 x 3006-210-23, at least one of the five genetic elements has been inserted in these GM plants and is targeted by this screening approach. For the detection of these sequences, fully validated real-time PCR methods have been selected. A screening table is presented that describes the presence or absence of the target sequences for most of the listed GM plants. These data have been verified either theoretically according to available databases or experimentally using available reference materials. The screening table will be updated regularly by a network of German enforcement laboratories.
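The screening table described in this abstract can be thought of as a presence/absence matrix mapping each GM event to the subset of the five target elements it carries. The sketch below is a minimal, hypothetical illustration: the event and element names are placeholders, not the actual GM events or validated PCR targets from the paper.

```python
# Hypothetical screening table: GM event -> screening elements it contains.
# An empty set models events (like the four exceptions noted in the abstract)
# that carry none of the five targets.
SCREENING_TABLE = {
    "event-A": {"elem-1", "elem-2"},
    "event-B": {"elem-3"},
    "event-C": {"elem-2", "elem-4", "elem-5"},
    "event-D": set(),
}

def possibly_present(positive_signals):
    """Events whose every screening element gave a positive PCR signal.

    For a mixed sample this is a necessary but not sufficient condition;
    confirmation would require event-specific methods.
    """
    positives = set(positive_signals)
    return sorted(event for event, elems in SCREENING_TABLE.items()
                  if elems and elems <= positives)

def coverage_gaps():
    """Events the element-based screening cannot detect at all."""
    return sorted(event for event, elems in SCREENING_TABLE.items() if not elems)

print(possibly_present({"elem-1", "elem-2", "elem-3"}))
print(coverage_gaps())
```

Representing the table this way makes the two practical questions of the paper explicit: which events are consistent with a given pattern of positive signals, and which events fall outside the screening's coverage.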

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez Torres, E., E-mail: Ernesto.Lopez.Torres@cern.ch, E-mail: cerello@to.infn.it; Fiorina, E.; Pennazio, F.

    Purpose: M5L, a fully automated computer-aided detection (CAD) system for the detection and segmentation of lung nodules in thoracic computed tomography (CT), is presented and validated on several image datasets. Methods: M5L is the combination of two independent subsystems, based on the Channeler Ant Model as a segmentation tool [lung channeler ant model (lungCAM)] and on the voxel-based neural approach. The lungCAM was upgraded with a scan equalization module and a new procedure to recover the nodules connected to other lung structures; its classification module, which makes use of a feed-forward neural network, is based on a small number of features (13), so as to minimize the risk of lacking generalization, which could be possible given the large difference between the size of the training and testing datasets, which contain 94 and 1019 CTs, respectively. The lungCAM (standalone) and M5L (combined) performance was extensively tested on 1043 CT scans from three independent datasets, including a detailed analysis of the full Lung Image Database Consortium/Image Database Resource Initiative database, which is not yet found in the literature. Results: The lungCAM and M5L performance is consistent across the databases, with a sensitivity of about 70% and 80%, respectively, at eight false positive findings per scan, despite the variable annotation criteria and acquisition and reconstruction conditions. A reduced sensitivity is found for subtle nodules and ground glass opacity (GGO) structures. A comparison with other CAD systems is also presented. Conclusions: The M5L performance on a large and heterogeneous dataset is stable and satisfactory, although the development of a dedicated module for GGO detection could further improve it, as well as an iterative optimization of the training procedure.
The main aim of the present study was accomplished: M5L results do not deteriorate when increasing the dataset size, making it a candidate for supporting radiologists on large-scale screenings and clinical programs.

  9. Large scale validation of the M5L lung CAD on heterogeneous CT datasets.

    PubMed

    Torres, E Lopez; Fiorina, E; Pennazio, F; Peroni, C; Saletta, M; Camarlinghi, N; Fantacci, M E; Cerello, P

    2015-04-01

    M5L, a fully automated computer-aided detection (CAD) system for the detection and segmentation of lung nodules in thoracic computed tomography (CT), is presented and validated on several image datasets. M5L is the combination of two independent subsystems, based on the Channeler Ant Model as a segmentation tool [lung channeler ant model (lungCAM)] and on the voxel-based neural approach. The lungCAM was upgraded with a scan equalization module and a new procedure to recover the nodules connected to other lung structures; its classification module, which makes use of a feed-forward neural network, is based on a small number of features (13), so as to minimize the risk of lacking generalization, which could be possible given the large difference between the size of the training and testing datasets, which contain 94 and 1019 CTs, respectively. The lungCAM (standalone) and M5L (combined) performance was extensively tested on 1043 CT scans from three independent datasets, including a detailed analysis of the full Lung Image Database Consortium/Image Database Resource Initiative database, which is not yet found in the literature. The lungCAM and M5L performance is consistent across the databases, with a sensitivity of about 70% and 80%, respectively, at eight false positive findings per scan, despite the variable annotation criteria and acquisition and reconstruction conditions. A reduced sensitivity is found for subtle nodules and ground glass opacity (GGO) structures. A comparison with other CAD systems is also presented. The M5L performance on a large and heterogeneous dataset is stable and satisfactory, although the development of a dedicated module for GGO detection could further improve it, as well as an iterative optimization of the training procedure.
The main aim of the present study was accomplished: M5L results do not deteriorate when increasing the dataset size, making it a candidate for supporting radiologists on large-scale screenings and clinical programs.

  10. Fast 3D shape screening of large chemical databases through alignment-recycling

    PubMed Central

    Fontaine, Fabien; Bolton, Evan; Borodina, Yulia; Bryant, Stephen H

    2007-01-01

    Background Large chemical databases require fast, efficient, and simple ways of looking for similar structures. Although such tasks are now fairly well resolved for graph-based similarity queries, they remain an issue for 3D approaches, particularly for those based on 3D shape overlays. Inspired by a recent technique developed to compare molecular shapes, we designed a hybrid methodology, alignment-recycling, that enables efficient retrieval and alignment of structures with similar 3D shapes. Results Using a dataset of more than one million PubChem compounds of limited size (< 28 heavy atoms) and flexibility (< 6 rotatable bonds), we obtained a set of a few thousand diverse structures entirely covering the 3D shape space of the conformers of the dataset. Transformation matrices gathered from the overlays between these diverse structures and the 3D conformer dataset allowed us to drastically (100-fold) reduce the CPU time required for shape overlay. The alignment-recycling heuristic produces results consistent with de novo alignment calculation, with better than 80% hit list overlap on average. Conclusion Overlay-based 3D methods are computationally demanding when searching large databases. Alignment-recycling reduces the CPU time to perform shape similarity searches by breaking the alignment problem into three steps: selection of diverse shapes to describe the database shape-space; overlay of the database conformers to the diverse shapes; and non-optimized overlay of query and database conformers using common reference shapes. The precomputation required by the first two steps is a significant cost of the method; however, once performed, querying is two orders of magnitude faster. Extensions and variations of this methodology, for example, to handle larger and more flexible small molecules, are discussed. PMID:17880744
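The core trick of alignment-recycling is that rigid alignments compose: if each database conformer's transform onto a shared reference shape is stored, placing a query into any conformer's frame is a matrix product rather than a fresh overlay optimization. The sketch below illustrates only that composition step with homogeneous 4x4 matrices; the random translations stand in for real shape-overlay transforms and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_transform():
    """An illustrative rigid transform (translation only, for simplicity)."""
    T = np.eye(4)
    T[:3, 3] = rng.normal(size=3)
    return T

# Precomputation: each database conformer's transform onto a reference shape.
db_to_ref = {f"mol{i}": random_transform() for i in range(5)}

# Query time: align the query to the same reference shape once...
query_to_ref = random_transform()

# ...then place the query in every conformer's frame by composing stored
# matrices, instead of re-running one overlay optimization per database entry.
query_in_db_frame = {
    mol: np.linalg.inv(T_db) @ query_to_ref for mol, T_db in db_to_ref.items()
}
```

One alignment per reference shape thus replaces one alignment per database conformer, which is where the reported 100-fold CPU-time reduction comes from.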

  11. Computer Databases as an Educational Tool in the Basic Sciences.

    ERIC Educational Resources Information Center

    Friedman, Charles P.; And Others

    1990-01-01

    The University of North Carolina School of Medicine developed a computer database, INQUIRER, containing scientific information in bacteriology, and then integrated the database into routine educational activities for first-year medical students in their microbiology course. (Author/MLW)

  12. Academic consortium for the evaluation of computer-aided diagnosis (CADx) in mammography

    NASA Astrophysics Data System (ADS)

    Mun, Seong K.; Freedman, Matthew T.; Wu, Chris Y.; Lo, Shih-Chung B.; Floyd, Carey E., Jr.; Lo, Joseph Y.; Chan, Heang-Ping; Helvie, Mark A.; Petrick, Nicholas; Sahiner, Berkman; Wei, Datong; Chakraborty, Dev P.; Clarke, Laurence P.; Kallergi, Maria; Clark, Bob; Kim, Yongmin

    1995-04-01

    Computer aided diagnosis (CADx) is a promising technology for the detection of breast cancer in screening mammography. A number of different approaches have been developed for CADx research that have achieved significant levels of performance. Research teams now recognize the need for a careful and detailed evaluation study of approaches to accelerate the development of CADx, to make CADx more clinically relevant, and to optimize the CADx algorithms based on unbiased evaluations. The results of such a comparative study may provide each of the participating teams with new insights into the optimization of their individual CADx algorithms. This consortium of experienced CADx researchers is working as a group to compare results of the algorithms and to optimize the performance of CADx algorithms by learning from each other. Each institution will be contributing an equal number of cases that will be collected under a standard protocol for case selection, truth determination, and data acquisition to establish a common and unbiased database for the evaluation study. An evaluation procedure for the comparison studies is being developed to analyze the results of individual algorithms for each of the test cases in the common database. Optimization of individual CADx algorithms can be made based on the comparison studies. The consortium effort is expected to accelerate the eventual clinical implementation of CADx algorithms at participating institutions.

  13. Screening and Management of Asymptomatic Renal Stones in Astronauts

    NASA Technical Reports Server (NTRS)

    Reyes, David; Locke, James; Sargsyan, Ashot; Garcia, Kathleen

    2017-01-01

    Management guidelines were created to screen and manage asymptomatic renal stones in U.S. astronauts. The true risk for renal stone formation in astronauts due to the space flight environment is unknown. Proper management of this condition is crucial to mitigate health and mission risks. The NASA Flight Medicine Clinic electronic medical record and the Lifetime Surveillance of Astronaut Health databases were reviewed. An extensive review of the literature and current aeromedical standards for the monitoring and management of renal stones was also done. This work was used to develop a screening and management protocol for renal stones in astronauts that is relevant to the spaceflight operational environment. In the proposed guidelines all astronauts receive a yearly screening and post-flight renal ultrasound using a novel ultrasound protocol. The ultrasound protocol uses a combination of factors, including: size, position, shadow, twinkle and dispersion properties to confirm the presence of a renal calcification. For mission-assigned astronauts, any positive ultrasound study is followed by a low-dose renal computed tomography scan and urologic consult. Other specific guidelines were also created. A small asymptomatic renal stone within the renal collecting system may become symptomatic at any time, and therefore affect launch and flight schedules, or cause incapacitation during a mission. Astronauts in need of definitive care can be evacuated from the International Space Station, but for deep space missions evacuation is impossible. The new screening and management algorithm has been implemented and the initial round of screening ultrasounds is under way. Data from these exams will better define the incidence of renal stones in U.S. astronauts, and will be used to inform risk mitigation for both short and long duration spaceflights.

  14. Computational Selection of Inhibitors of A-beta Aggregation and Neuronal Toxicity

    PubMed Central

    Chen, Deliang; Martin, Zane S.; Soto, Claudio; Schein, Catherine H.

    2009-01-01

    Alzheimer’s Disease (AD) is characterized by the cerebral accumulation of misfolded and aggregated amyloid-β protein (Aβ). Disease symptoms can be alleviated, in vitro and in vivo, by “β-sheet breaker” pentapeptides that reduce plaque volume. However, the peptide nature of these compounds made them biologically unstable and unable to penetrate membranes with high efficiency. The main goal of this study was to use computational methods to identify small molecule mimetics with better drug-like properties. For this purpose, the docked conformations of the active peptides were used to identify compounds with similar activities. A series of related β-sheet breaker peptides were docked to solid state NMR structures of a fibrillar form of Aβ. The lowest energy conformations of the active peptides were used to design three-dimensional (3D) pharmacophores, suitable for screening the NCI database with Unity. Small molecular weight compounds with physicochemical features in a conformation similar to the active peptides were selected and ranked by docking solubility parameters. Of 16 diverse compounds selected for experimental screening, 2 prevented and reversed Aβ aggregation at 2–3 μM concentration, as measured by Thioflavin T (ThT) fluorescence and ELISA assays. They also prevented the toxic effects of aggregated Aβ on neuroblastoma cells. Their low molecular weight and aqueous solubility makes them promising lead compounds for treating AD. PMID:19540126

  15. Ligand efficiency based approach for efficient virtual screening of compound libraries.

    PubMed

    Ke, Yi-Yu; Coumar, Mohane Selvaraj; Shiao, Hui-Yi; Wang, Wen-Chieh; Chen, Chieh-Wen; Song, Jen-Shin; Chen, Chun-Hwa; Lin, Wen-Hsing; Wu, Szu-Huei; Hsu, John T A; Chang, Chung-Ming; Hsieh, Hsing-Pang

    2014-08-18

    Here we report for the first time the use of fit quality (FQ), a ligand efficiency (LE) based measure, for virtual screening (VS) of compound libraries. The LE-based VS protocol was used to screen an in-house database of 125,000 compounds to identify aurora kinase A inhibitors. First, 20 known aurora kinase inhibitors were docked to the aurora kinase A crystal structure (PDB ID: 2W1C), and the conformations of the docked ligands were used to create a pharmacophore (PH) model. The PH model was used to screen the database compounds and rank them (PH rank) based on the predicted IC50 values. Next, LE_Scale, a weight-dependent LE function, was derived from 294 known aurora kinase inhibitors. Using the fit quality (FQ = LE/LE_Scale) score derived from the LE_Scale function, the database compounds were reranked (PH_FQ rank) and the top 151 (0.12% of the database) compounds were assessed for aurora kinase A inhibition biochemically. This VS protocol led to the identification of 7 novel hits, with compound 5 showing an aurora kinase A IC50 of 1.29 μM. Furthermore, testing of 5 against a panel of 31 kinases revealed that it is selective toward aurora kinases A & B, with <50% inhibition of the other kinases at 10 μM concentration, making it a suitable candidate for further development. Incorporation of the FQ score in the VS protocol not only helped identify a novel aurora kinase inhibitor, 5, but also increased the hit rate of the VS protocol by improving the enrichment factor (EF) for FQ-based screening (EF = 828) compared with PH-based screening (EF = 237) alone. The LE-based VS protocol disclosed here could be applied to other targets for efficient hit identification. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
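    The FQ reranking this abstract describes can be sketched in a few lines. The LE formula (1.37 · pIC50 / heavy-atom count) is the standard definition, but the `le_scale` coefficients below are illustrative placeholders, not the fitted LE_Scale function from the paper, and the compound data are invented.

```python
def ligand_efficiency(pic50: float, heavy_atoms: int) -> float:
    """LE = 1.37 * pIC50 / HA (roughly kcal/mol per heavy atom)."""
    return 1.37 * pic50 / heavy_atoms

def le_scale(heavy_atoms: int) -> float:
    """Size-dependent LE reference curve; coefficients are illustrative
    placeholders, not the LE_Scale fit derived in the paper."""
    ha = heavy_atoms
    return 0.07 + 7.53 / ha + 25.7 / ha ** 2

def fit_quality(pic50: float, heavy_atoms: int) -> float:
    """FQ = LE / LE_Scale, normalizing efficiency for molecular size."""
    return ligand_efficiency(pic50, heavy_atoms) / le_scale(heavy_atoms)

# Rerank hypothetical pharmacophore hits by FQ rather than raw potency:
# (name, predicted pIC50, heavy-atom count)
hits = [("cpd_A", 6.0, 40), ("cpd_B", 5.5, 22), ("cpd_C", 6.2, 48)]
ranked = sorted(hits, key=lambda h: fit_quality(h[1], h[2]), reverse=True)
```

    The point of the size normalization is visible in the toy data: the most potent compound by raw pIC50 is not necessarily the most efficient once heavy-atom count is taken into account.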

  16. The Candidate Cancer Gene Database: a database of cancer driver genes from forward genetic screens in mice.

    PubMed

    Abbott, Kenneth L; Nyre, Erik T; Abrahante, Juan; Ho, Yen-Yi; Isaksson Vogel, Rachel; Starr, Timothy K

    2015-01-01

    Identification of cancer driver gene mutations is crucial for advancing cancer therapeutics. Due to the overwhelming number of passenger mutations in the human tumor genome, it is difficult to pinpoint causative driver genes. Using transposon mutagenesis in mice many laboratories have conducted forward genetic screens and identified thousands of candidate driver genes that are highly relevant to human cancer. Unfortunately, this information is difficult to access and utilize because it is scattered across multiple publications using different mouse genome builds and strength metrics. To improve access to these findings and facilitate meta-analyses, we developed the Candidate Cancer Gene Database (CCGD, http://ccgd-starrlab.oit.umn.edu/). The CCGD is a manually curated database containing a unified description of all identified candidate driver genes and the genomic location of transposon common insertion sites (CISs) from all currently published transposon-based screens. To demonstrate relevance to human cancer, we performed a modified gene set enrichment analysis using KEGG pathways and show that human cancer pathways are highly enriched in the database. We also used hierarchical clustering to identify pathways enriched in blood cancers compared to solid cancers. The CCGD is a novel resource available to scientists interested in the identification of genetic drivers of cancer. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  17. Distributed processor allocation for launching applications in a massively connected processors complex

    DOEpatents

    Pedretti, Kevin

    2008-11-18

    A compute processor allocator architecture for allocating compute processors to run applications in a multiple processor computing apparatus is distributed among a subset of processors within the computing apparatus. Each processor of the subset includes a compute processor allocator. The compute processor allocators can share a common database of information pertinent to compute processor allocation. A communication path permits retrieval of information from the database independently of the compute processor allocators.

  18. DBMap: a TreeMap-based framework for data navigation and visualization of brain research registry

    NASA Astrophysics Data System (ADS)

    Zhang, Ming; Zhang, Hong; Tjandra, Donny; Wong, Stephen T. C.

    2003-05-01

    The purpose of this study is to investigate and apply a new, intuitive, and space-conscious visualization framework to facilitate efficient data presentation and exploration of large-scale data warehouses. We have implemented the DBMap framework for the UCSF Brain Research Registry. Such a utility helps medical specialists and clinical researchers better explore and evaluate the many attributes organized in the brain research registry. The current UCSF Brain Research Registry consists of a federation of disease-oriented database modules, including Epilepsy, Brain Tumor, Intracerebral Hemorrhage, and CJD (Creutzfeldt-Jakob disease). These database modules organize large volumes of imaging and non-imaging data to support Web-based clinical research. While the data warehouse supports general information retrieval and analysis, it lacks an effective way to visualize and present the voluminous and complex data stored. This study investigates whether the TreeMap algorithm can be adapted to display and navigate a categorical biomedical data warehouse or registry. TreeMap is a space-constrained graphical representation of large hierarchical data sets, mapped to a matrix of rectangles whose size and color represent database fields of interest. It allows the display of a large amount of numerical and categorical information in the limited real estate of a computer screen with an intuitive user interface. The paper describes DBMap, the proposed data visualization framework for large biomedical databases. Built upon XML, Java, and JDBC technologies, the prototype system includes a set of software modules that reside in the application server tier and provide interfaces to the back-end database tier and front-end Web tier of the brain registry.
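    The TreeMap idea the abstract adapts, mapping weighted records to rectangles whose areas are proportional to a chosen field, can be illustrated with one level of the classic slice-and-dice layout; a full treemap recurses into each rectangle for nested hierarchies. The module weights below are made up.

```python
def slice_and_dice(weights, x, y, w, h, vertical=True):
    """Partition rectangle (x, y, w, h) into strips whose areas are
    proportional to the given weights. A hierarchical treemap applies
    this recursively, flipping the split direction at each level."""
    total = sum(weights)
    rects, offset = [], 0.0
    for wt in weights:
        frac = wt / total
        if vertical:                      # split along the x axis
            rects.append((x + offset, y, w * frac, h))
            offset += w * frac
        else:                             # split along the y axis
            rects.append((x, y + offset, w, h * frac))
            offset += h * frac
    return rects

# Four hypothetical registry modules weighted by record count,
# laid out on an 800 x 600 screen area.
rects = slice_and_dice([40, 25, 20, 15], 0, 0, 800, 600)
```

    Each returned rectangle can then be colored by a second database field, which is how a treemap encodes two attributes at once in limited screen space.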

  19. The Effect of All-Capital vs. Regular Mixed Print, as Presented on a Computer Screen, on Reading Rate and Accuracy.

    ERIC Educational Resources Information Center

    Henney, Maribeth

    Two related studies were conducted to determine whether students read all-capital text and mixed text displayed on a computer screen with the same speed and accuracy. Seventy-seven college students read M. A. Tinker's "Basic Reading Rate Test" displayed on a PLATO computer screen. One treatment consisted of paragraphs in all-capital type…

  20. Visual ergonomic aspects of glare on computer displays: glossy screens and angular dependence

    NASA Astrophysics Data System (ADS)

    Brunnström, Kjell; Andrén, Börje; Konstantinides, Zacharias; Nordström, Lukas

    2007-02-01

    Recently, flat-panel computer displays and notebook computers designed with a so-called glare panel, i.e., a highly glossy screen, have emerged on the market. The shiny look of the display appeals to customers, and there are arguments that contrast, colour saturation, etc. improve with a glare panel. LCD displays often suffer from angle-dependent picture quality, which has become even more pronounced with the introduction of prism light-guide plates into displays for notebook computers. The TCO label is the leading labelling system for computer displays; currently about 50% of all computer displays on the market are certified according to the TCO requirements. The requirements are periodically updated to keep up with technical developments and the latest research in, e.g., visual ergonomics. The gloss level of the screen and its angular dependence have recently been investigated in user studies. A study of the effect of highly glossy screens compared to matt screens has been performed. The results show a slight advantage for the glossy screen when no disturbing reflexes are present; however, the difference was not statistically significant. When disturbing reflexes are present, the advantage turns into a larger disadvantage, and this difference is statistically significant. Another study, of angular dependence, has also been performed. The results indicate a linear relationship between picture quality and the centre luminance of the screen.

  1. Lung cancer screening CT-based coronary artery calcification in predicting cardiovascular events: A systematic review and meta-analysis.

    PubMed

    Fan, Lili; Fan, Kaikai

    2018-05-01

    Coronary artery calcification (CAC) is a well-established predictor of cardiovascular events (CVEs). We aimed to evaluate whether a lung cancer screening computed tomography (CT)-based CAC score is cost-effective for predicting CVEs in heavy smokers. A literature search was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. PubMed, EMBASE, and Cochrane Library databases were systematically searched for relevant studies that investigated the association between lung cancer screening CT-based CAC and CVEs up to December 31, 2017. We selected a fixed-effect model for the analysis of data heterogeneity. Statistical analyses were performed using Review Manager Version 5.3 for Windows. Four randomized controlled trials with 5504 participants were included. Our results demonstrated that CVEs were significantly associated with the presence of CAC (relative risk [RR] 2.85, 95% confidence interval [CI] 2.02-4.02, P < .00001). Moreover, a higher CAC score (defined as CAC score >400 or >1000) was associated with a significantly increased CVE count (RR 3.47, 95% CI 2.65-4.53, P < .00001). However, the prevalence of CVEs did not differ between male and female groups (RR 2.46, 95% CI 0.44-13.66, P = .30). The CAC Agatston score evaluated by lung cancer screening CT had potential for predicting the likelihood of CVEs at an early stage, without sex differences. Thus, it may guide clinicians to intervene earlier in those heavy smokers with increased risk of CVEs, using the CAC score obtained through lung cancer screening CT.
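    The fixed-effect pooling performed here (in Review Manager) works on the log scale with inverse-variance weights; a minimal sketch follows. The two studies' event counts are fabricated for illustration and do not reproduce the paper's numbers.

```python
import math

def pooled_rr(studies):
    """Inverse-variance fixed-effect pooled relative risk with 95% CI.
    Each study is (events_exposed, n_exposed, events_control, n_control)."""
    num = den = 0.0
    for a, n1, c, n2 in studies:
        log_rr = math.log((a / n1) / (c / n2))
        var = 1 / a - 1 / n1 + 1 / c - 1 / n2   # variance of log(RR)
        weight = 1.0 / var                      # inverse-variance weight
        num += weight * log_rr
        den += weight
    log_pooled = num / den
    se = math.sqrt(1.0 / den)
    ci = (math.exp(log_pooled - 1.96 * se), math.exp(log_pooled + 1.96 * se))
    return math.exp(log_pooled), ci

# Two made-up studies: (CVEs with CAC, n with CAC, CVEs without, n without)
rr, (lo, hi) = pooled_rr([(30, 500, 12, 520), (45, 800, 18, 790)])
```

    A random-effects model would instead widen the weights by a between-study variance term; the fixed-effect version shown assumes all studies estimate one common effect.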

  2. Video Discs in Libraries.

    ERIC Educational Resources Information Center

    Barker, Philip

    1986-01-01

    Discussion of developments in information storage technology likely to have significant impact upon library utilization focuses on hardware (videodisc technology) and software developments (knowledge databases; computer networks; database management systems; interactive video, computer, and multimedia user interfaces). Three generic computer-based…

  3. A privacy preserving protocol for tracking participants in phase I clinical trials.

    PubMed

    El Emam, Khaled; Farah, Hanna; Samet, Saeed; Essex, Aleksander; Jonker, Elizabeth; Kantarcioglu, Murat; Earle, Craig C

    2015-10-01

    Some phase 1 clinical trials offer strong financial incentives for healthy individuals to participate in their studies. There is evidence that some individuals enroll in multiple trials concurrently. This creates safety risks and introduces data quality problems into the trials. Our objective was to construct a privacy preserving protocol to track phase 1 participants to detect concurrent enrollment. A protocol using secure probabilistic querying against a database of trial participants that allows for screening during telephone interviews and on-site enrollment was developed. The match variables consisted of demographic information. The accuracy (sensitivity, precision, and negative predictive value) of the matching and its computational performance in seconds were measured under simulated environments. Accuracy was also compared to non-secure matching methods. The protocol's performance scales linearly with the database size. At the largest database size of 20,000 participants, a query takes under 20 seconds on a 64-core machine. Sensitivity, precision, and negative predictive value of the queries were consistently at or above 0.9, and were very similar to those of non-secure versions of the protocol. The protocol provides a reasonable solution to the concurrent enrollment problem in phase 1 clinical trials, and is able to ensure that personal information about participants is kept secure. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
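    The paper's secure protocol is cryptographic and is not reproduced here, but the probabilistic-matching layer such systems build on can be sketched with keyed hashes of demographic bigrams compared by Dice similarity. The salt, the field choices, and the 0.8 threshold are all assumptions for illustration.

```python
import hashlib, hmac

SALT = b"shared-secret-salt"   # hypothetical key shared by trial sites

def encode(value: str, bits: int = 256) -> set:
    """Map character bigrams of a normalized field to salted hash slots,
    so sites can compare fields without exchanging raw identifiers."""
    v = value.strip().lower()
    slots = set()
    for i in range(len(v) - 1):
        digest = hmac.new(SALT, v[i:i + 2].encode(), hashlib.sha256).digest()
        slots.add(int.from_bytes(digest[:4], "big") % bits)
    return slots

def dice(a: set, b: set) -> float:
    """Dice similarity of two slot sets (1.0 = identical)."""
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 0.0

def likely_same(rec1, rec2, threshold=0.8):
    """rec = (name, birth_date); average the per-field similarities."""
    scores = [dice(encode(f1), encode(f2)) for f1, f2 in zip(rec1, rec2)]
    return sum(scores) / len(scores) >= threshold

match = likely_same(("john smith", "1985-03-12"), ("jon smith", "1985-03-12"))
```

    Because matching is done on bigrams, minor spelling variants ("john" vs. "jon") still score highly, which is the "probabilistic" part of the querying the abstract refers to.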

  4. Storage and retrieval of digital images in dermatology.

    PubMed

    Bittorf, A; Krejci-Papa, N C; Diepgen, T L

    1995-11-01

    Differential diagnosis in dermatology relies on the interpretation of visual information in the form of clinical and histopathological images. Up until now, reference images have had to be retrieved from textbooks and/or appropriate journals. To overcome inherent limitations of those storage media with respect to the number of images stored, display, and search parameters available, we designed a computer-based database of digitized dermatologic images. Images were taken from the photo archive of the Dermatological Clinic of the University of Erlangen. A database was designed using the Entity-Relationship approach. It was implemented on a PC-Windows platform using MS Access® and MS Visual Basic®. A Sparc 10 workstation running the CERN Hypertext-Transfer-Protocol-Daemon (httpd) 3.0 pre 6 software was used as the WWW server. For compressed storage on a hard drive, a quality factor of 60 allowed on-screen differential diagnosis and corresponded to a compression factor of 1:35 for clinical images and 1:40 for histopathological images. Hierarchical keys of clinical or histopathological criteria permitted multi-criteria searches. A script using the Common Gateway Interface (CGI) enabled remote search and image retrieval via the World-Wide-Web (W3). A dermatologic image database featuring clinical and histopathological images was constructed which allows for multi-parameter searches and world-wide remote access.

  5. Recent Developments in Toxico-Cheminformatics; Supporting ...

    EPA Pesticide Factsheets

    EPA's National Center for Computational Toxicology is building capabilities to support a new paradigm for toxicity screening and prediction through the harnessing of legacy toxicity data, creation of data linkages, and generation of new high-content and high-throughput screening data. In association with EPA's ToxCast, ToxRefDB, and ACToR projects, the DSSTox project provides cheminformatics support and, in addition, is improving public access to quality structure-annotated chemical toxicity information in less summarized forms than traditionally employed in SAR modeling, and in ways that facilitate data-mining and data read-across. The latest DSSTox version of the Carcinogenic Potency Database file (CPDBAS) illustrates ways in which various summary definitions of carcinogenic activity can be employed in modeling and data mining. DSSTox Structure-Browser provides structure searchability across all published DSSTox toxicity-related inventory, and is enabling linkages between previously isolated toxicity data resources associated with environmental and industrial chemicals. The public DSSTox inventory also has been integrated into PubChem, allowing a user to take full advantage of PubChem structure-activity and bioassay clustering features. Phase I of the ToxCast project is generating high-throughput screening data from several hundred biochemical and cell-based assays for a set of 320 chemicals, mostly pesticide actives with rich toxicology profiles. Incorporating

  6. Predictive QSAR modeling workflow, model applicability domains, and virtual screening.

    PubMed

    Tropsha, Alexander; Golbraikh, Alexander

    2007-01-01

    Quantitative Structure-Activity Relationship (QSAR) modeling has traditionally been applied as an evaluative approach, i.e., with the focus on developing retrospective and explanatory models of existing data. Model extrapolation was considered, if at all, only in a hypothetical sense, in terms of potential modifications of known biologically active chemicals that could improve compounds' activity. This critical review re-examines the strategy and the output of modern QSAR modeling approaches. We provide examples and arguments suggesting that current methodologies may afford robust and validated models capable of accurate prediction of compound properties for molecules not included in the training sets. We discuss a data-analytical modeling workflow developed in our laboratory that incorporates modules for combinatorial QSAR model development (i.e., using all possible binary combinations of available descriptor sets and statistical data modeling techniques), rigorous model validation, and virtual screening of available chemical databases to identify novel biologically active compounds. Our approach places particular emphasis on model validation as well as the need to define model applicability domains in chemistry space. We present examples of studies where the application of rigorously validated QSAR models to virtual screening identified computational hits that were confirmed by subsequent experimental investigations. The emerging focus of QSAR modeling on target property forecasting establishes it as a predictive, as opposed to evaluative, modeling approach.
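    One common way to enforce the applicability domain the review emphasizes is a distance-to-training-set cutoff in descriptor space. The sketch below uses a z-scored Euclidean distance to the k nearest training compounds; the descriptor values and the cutoff of 2.0 are arbitrary stand-ins, not a prescription from the review.

```python
import math

def zscore_params(train):
    """Per-descriptor mean and (population) standard deviation."""
    cols = list(zip(*train))
    means = [sum(c) / len(c) for c in cols]
    stds = [math.sqrt(sum((v - m) ** 2 for v in c) / len(c)) or 1.0
            for c, m in zip(cols, means)]
    return means, stds

def in_domain(query, train, k=3, cutoff=2.0):
    """A compound is inside the applicability domain if the mean z-scored
    distance to its k nearest training compounds is below the cutoff."""
    means, stds = zscore_params(train)
    def z(vec):
        return [(v - m) / s for v, m, s in zip(vec, means, stds)]
    zq = z(query)
    dists = sorted(math.dist(zq, z(t)) for t in train)
    return sum(dists[:k]) / k <= cutoff

# Toy 2-descriptor training set and two query compounds.
train_descriptors = [(1.0, 0.2), (1.2, 0.3), (0.9, 0.25), (1.1, 0.35)]
inside = in_domain((1.05, 0.28), train_descriptors)    # near the data
outside = in_domain((9.0, 5.0), train_descriptors)     # far outside
```

    Predictions for compounds flagged as outside the domain would be withheld or reported with low confidence, which is exactly the validation discipline the workflow argues for.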

  7. Message passing interface and multithreading hybrid for parallel molecular docking of large databases on petascale high performance computing machines.

    PubMed

    Zhang, Xiaohua; Wong, Sergio E; Lightstone, Felice C

    2013-04-30

    A mixed parallel scheme that combines message passing interface (MPI) and multithreading was implemented in the AutoDock Vina molecular docking program. The resulting program, named VinaLC, was tested on the petascale high performance computing (HPC) machines at Lawrence Livermore National Laboratory. To exploit the typical cluster-type supercomputers, thousands of docking calculations were dispatched by the master process to run simultaneously on thousands of slave processes, where each docking calculation takes one slave process on one node, and within the node each docking calculation runs via multithreading on multiple CPU cores and shared memory. Input and output of the program and the data handling within the program were carefully designed to deal with large databases and ultimately achieve HPC on a large number of CPU cores. Parallel performance analysis of the VinaLC program shows that the code scales up to more than 15K CPUs with a very low overhead cost of 3.94%. One million flexible compound docking calculations took only 1.4 h to finish on about 15K CPUs. The docking accuracy of VinaLC was validated against the DUD data set by re-docking of X-ray ligands and an enrichment study; 64.4% of the top-scoring poses had RMSD values under 2.0 Å. The program has been demonstrated to have good enrichment performance on 70% of the targets in the DUD data set. An analysis of the enrichment factors calculated at various percentages of the screening database indicates VinaLC has very good early recovery of actives. Copyright © 2013 Wiley Periodicals, Inc.
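    VinaLC's actual scheme is MPI plus node-level multithreading; as a stdlib-only stand-in, the master/worker dispatch pattern it describes can be sketched with a thread pool, where `mock_dock` is a hypothetical placeholder for one docking calculation rather than a call into any real docking engine.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def mock_dock(ligand_id: int) -> tuple:
    """Stand-in for one docking calculation; returns (ligand, best score).
    A real run would invoke a docking engine here. The fake score is
    deterministic so results are reproducible."""
    score = -6.0 - (ligand_id % 7) * 0.5
    return ligand_id, score

def screen(ligand_ids, workers=8):
    """Master dispatches jobs to idle workers and gathers results as they
    complete, mirroring the MPI master/slave layout in miniature."""
    results = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(mock_dock, lid) for lid in ligand_ids]
        for fut in as_completed(futures):
            lid, score = fut.result()
            results[lid] = score
    return results

scores = screen(range(100))
best = min(scores.values())   # most negative docking score wins
```

    The design point the paper makes at scale is the same one visible here: the master only hands out work and collects scores, so adding workers (or, in VinaLC's case, nodes) increases throughput with little coordination overhead.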

  8. Implementation of Lung Cancer Screening in the Veterans Health Administration.

    PubMed

    Kinsinger, Linda S; Anderson, Charles; Kim, Jane; Larson, Martha; Chan, Stephanie H; King, Heather A; Rice, Kathryn L; Slatore, Christopher G; Tanner, Nichole T; Pittman, Kathleen; Monte, Robert J; McNeil, Rebecca B; Grubber, Janet M; Kelley, Michael J; Provenzale, Dawn; Datta, Santanu K; Sperber, Nina S; Barnes, Lottie K; Abbott, David H; Sims, Kellie J; Whitley, Richard L; Wu, R Ryanne; Jackson, George L

    2017-03-01

    The US Preventive Services Task Force recommends annual lung cancer screening (LCS) with low-dose computed tomography for current and former heavy smokers aged 55 to 80 years. There is little published experience regarding implementing this recommendation in clinical practice. To describe organizational- and patient-level experiences with implementing an LCS program in selected Veterans Health Administration (VHA) hospitals and to estimate the number of VHA patients who may be candidates for LCS. This clinical demonstration project was conducted at 8 academic VHA hospitals among 93 033 primary care patients who were assessed on screening criteria; 2106 patients underwent LCS between July 1, 2013, and June 30, 2015. Implementation Guide and support, full-time LCS coordinators, electronic tools, tracking database, patient education materials, and radiologic and nodule follow-up guidelines. Description of implementation processes; percentages of patients who agreed to undergo LCS, had positive findings on results of low-dose computed tomographic scans (nodules to be tracked or suspicious findings), were found to have lung cancer, or had incidental findings; and estimated number of VHA patients who met the criteria for LCS. Of the 4246 patients who met the criteria for LCS, 2452 (57.7%) agreed to undergo screening and 2106 (2028 men and 78 women; mean [SD] age, 64.9 [5.1] years) underwent LCS. Wide variation in processes and patient experiences occurred among the 8 sites. Of the 2106 patients screened, 1257 (59.7%) had nodules; 1184 of these patients (56.2%) required tracking, 42 (2.0%) required further evaluation but the findings were not cancer, and 31 (1.5%) had lung cancer. A variety of incidental findings, such as emphysema, other pulmonary abnormalities, and coronary artery calcification, were noted on the scans of 857 patients (40.7%). It is estimated that nearly 900 000 of a population of 6.7 million VHA patients met the criteria for LCS. 
Implementation of LCS in the VHA will likely lead to large numbers of patients eligible for LCS and will require substantial clinical effort for both patients and staff.

  9. Tuberculosis screening prior to anti-tumor necrosis factor therapy among patients with immune-mediated inflammatory diseases in Japan: a longitudinal study using a large-scale health insurance claims database.

    PubMed

    Tomio, Jun; Yamana, Hayato; Matsui, Hiroki; Yamashita, Hiroyuki; Yoshiyama, Takashi; Yasunaga, Hideo

    2017-11-01

    Tuberculosis screening is recommended for patients with immune-mediated inflammatory diseases (IMIDs) prior to anti-tumor necrosis factor (TNF) therapy. However, adherence to the recommended practice is unknown in the current clinical setting in Japan. We used a large-scale health insurance claims database in Japan to conduct a longitudinal observational study. Of more than two million beneficiaries in the database between 2013 and 2014, we enrolled those with IMIDs aged 15-69 years who had initiated anti-TNF therapy. We defined tuberculosis screening primarily as tuberculin skin test and/or interferon-gamma release assay (TST/IGRA) within 2 months before commencing anti-TNF therapy. We analyzed the proportions of the patients who had undergone tuberculosis screening and the associations with primary disease, type of anti-TNF agent, methotrexate prescription prior to anti-TNF therapy, and treatment for latent tuberculosis infection (LTBI). Of 385 patients presumed to have initiated anti-TNF therapy, 252 (66%) had undergone tuberculosis screening by TST/IGRA (22% TST, 56% IGRA, and 12% both TST and IGRA), and 231 (60%) had undergone TST/IGRA and radiography. Patients with psoriasis tended to be more likely to undergo tuberculosis screening than those with other diseases; however, this association was not statistically significant. Treatment for LTBI was provided to 43 (11%) patients; 123 (32%) received neither TST/IGRA nor LTBI treatment. Tuberculosis screening was often not performed prior to anti-TNF therapy despite the guidelines' recommendations; thus, patients could be put at unnecessary risk of reactivation of tuberculosis. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.

  10. Association of mammographic image feature change and an increasing risk trend of developing breast cancer: an assessment

    NASA Astrophysics Data System (ADS)

    Tan, Maxine; Leader, Joseph K.; Liu, Hong; Zheng, Bin

    2015-03-01

    We recently investigated a new mammographic image feature based risk factor to predict near-term breast cancer risk after a woman has a negative mammographic screening. We hypothesized that, unlike the conventional epidemiology-based long-term (or lifetime) risk factors, the mammographic image feature based risk factor value will increase as the time lag between the negative and positive mammography screenings decreases. The purpose of this study is to test this hypothesis. From a large and diverse full-field digital mammography (FFDM) image database with 1278 cases, we collected all available sequential FFDM examinations for each case, including the "current" and the 1 to 3 most recent "prior" examinations. All "prior" examinations were interpreted negative, and "current" ones were either malignant or recalled negative/benign. We computed 92 global mammographic texture- and density-based features, and included three clinical risk factors (woman's age, family history, and subjective breast density BIRADS ratings). To this initial feature set we applied a fast and accurate Sequential Forward Floating Selection (SFFS) feature selection algorithm to reduce feature dimensionality. The features computed on the two mammographic views were separately trained using two artificial neural network (ANN) classifiers, and the classification scores of the two ANNs were then merged with a sequential ANN. The results show that the maximum adjusted odds ratios were 5.59, 7.98, and 15.77 for using the 3rd, 2nd, and 1st "prior" FFDM examinations, respectively, which demonstrates an association between mammographic image feature change and an increasing risk trend of developing breast cancer in the near-term after a negative screening.
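    Sequential Forward Floating Selection, the dimensionality-reduction step used above, alternates greedy additions with conditional removals. Below is a minimal pure-Python version; the correlation-of-summed-features criterion and the toy data are stand-ins for the paper's classifier-performance criterion and real image features.

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    vy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (vx * vy) if vx and vy else 0.0

def sffs(rows, y, k):
    """Select k feature indices by Sequential Forward Floating Selection.
    Toy criterion: |corr| of the selected features' row-wise sum with y."""
    n_feat = len(rows[0])
    def score(idx):
        if not idx:
            return 0.0
        summed = [sum(r[i] for i in idx) for r in rows]
        return abs(pearson(summed, y))
    selected = []
    while len(selected) < k:
        # forward step: add the single best remaining feature
        best = max((i for i in range(n_feat) if i not in selected),
                   key=lambda i: score(selected + [i]))
        selected.append(best)
        # floating step: conditionally drop an earlier feature if that
        # improves the criterion (the just-added feature is exempt)
        improved = True
        while improved and len(selected) > 2:
            improved = False
            for i in selected[:-1]:
                trimmed = [j for j in selected if j != i]
                if score(trimmed) > score(selected):
                    selected = trimmed
                    improved = True
                    break
    return selected

# Toy data: y is exactly f0 + f1; f2 is noise, f3 is constant.
rows = [(1, 0, 5, 2), (2, 1, 3, 2), (3, 1, 1, 2),
        (4, 2, 4, 2), (5, 2, 2, 2), (6, 3, 0, 2)]
y = [1, 3, 4, 6, 7, 9]
picked = sffs(rows, y, 2)
```

    The "floating" backward step is what distinguishes SFFS from plain forward selection: a feature that looked good early can be discarded later once a better-complementing feature arrives.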

  11. Toxicity ForeCaster (ToxCast™) Data

    EPA Pesticide Factsheets

    Data is organized into different data sets and includes descriptions of ToxCast chemicals and assays and files summarizing the screening results, a MySQL database, chemicals screened through Tox21, and available data generated from animal toxicity studies.

  12. Identification of a New Isoindole-2-yl Scaffold as a Qo and Qi Dual Inhibitor of Cytochrome bc1 Complex: Virtual Screening, Synthesis, and Biochemical Assay.

    PubMed

    Azizian, Homa; Bagherzadeh, Kowsar; Shahbazi, Sophia; Sharifi, Niusha; Amanlou, Massoud

    2017-09-18

    Respiratory chain ubiquinol-cytochrome (cyt) c oxidoreductase (cyt bc1 or complex III) has been demonstrated to be a promising target for numerous antibiotic and fungicide applications. In this study, a virtual screening of the NCI Diversity database was carried out in order to find novel Qo/Qi cyt bc1 complex inhibitors. Structure-based virtual screening and molecular docking methodology were employed to screen compounds with inhibitory activity against the cyt bc1 complex, after an extensive reliability-validation protocol using a cross-docking method and identification of the best score functions. Subsequently, the application of a rational filtering procedure over the target database resulted in the elucidation of a novel class of potent cyt bc1 complex inhibitors with binding energies and biological activities comparable to those of the standard inhibitor, antimycin.

  13. Reference manual for data base on Nevada well logs

    USGS Publications Warehouse

    Bauer, E.M.; Cartier, K.D.

    1995-01-01

    The U.S. Geological Survey and Nevada Division of Water Resources are cooperatively using a data base for managing well-log information for the State of Nevada. The Well-Log Data Base is part of an integrated system of computer data bases using the Ingres Relational Data-Base Management System, which allows efficient storage of and access to water information from the State Engineer's office. The data base contains a main table, two ancillary tables, and nine lookup tables, as well as a menu-driven system for entering, updating, and reporting on the data. This reference guide outlines the general functions of the system and provides a brief description of the data tables and data-entry screens.
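    The main-table-plus-lookup-tables layout described for the Well-Log Data Base is a routine relational pattern; a minimal sqlite3 sketch (all table and column names invented, not taken from the actual Nevada schema) shows how a lookup table keeps coded values out of the main table.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Lookup table: each coded value is stored once and referenced by code.
cur.execute("CREATE TABLE aquifer_type (code TEXT PRIMARY KEY, descr TEXT)")
# Main table: one row per well log, holding only the compact code.
cur.execute("""CREATE TABLE well_log (
    well_id INTEGER PRIMARY KEY,
    depth_ft REAL,
    aquifer_code TEXT REFERENCES aquifer_type(code))""")
cur.executemany("INSERT INTO aquifer_type VALUES (?, ?)",
                [("AL", "alluvial"), ("VB", "volcanic basin")])
cur.executemany("INSERT INTO well_log VALUES (?, ?, ?)",
                [(1, 220.0, "AL"), (2, 415.5, "VB")])
# A report screen would join the lookup back in for display.
rows = cur.execute("""SELECT w.well_id, w.depth_ft, a.descr
                      FROM well_log w JOIN aquifer_type a
                        ON w.aquifer_code = a.code
                      ORDER BY w.well_id""").fetchall()
```

    Data-entry screens validate against the lookup table, so updating a description in one place corrects every report, which is the maintenance benefit of the nine-lookup-table design.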

  14. Machine learning of molecular electronic properties in chemical compound space

    NASA Astrophysics Data System (ADS)

    Montavon, Grégoire; Rupp, Matthias; Gobre, Vivekanand; Vazquez-Mayagoitia, Alvaro; Hansen, Katja; Tkatchenko, Alexandre; Müller, Klaus-Robert; Anatole von Lilienfeld, O.

    2013-09-01

    The combination of modern scientific computing with electronic structure theory can lead to an unprecedented amount of data amenable to intelligent data analysis for the identification of meaningful, novel and predictive structure-property relationships. Such relationships enable high-throughput screening for relevant properties in an exponentially growing pool of virtual compounds that are synthetically accessible. Here, we present a machine learning model, trained on a database of ab initio calculation results for thousands of organic molecules, that simultaneously predicts multiple electronic ground- and excited-state properties. The properties include atomization energy, polarizability, frontier orbital eigenvalues, ionization potential, electron affinity and excitation energies. The machine learning model is based on a deep multi-task artificial neural network, exploiting the underlying correlations between various molecular properties. The input is identical to ab initio methods, i.e. nuclear charges and Cartesian coordinates of all atoms. For small organic molecules, the accuracy of such a ‘quantum machine’ is similar, and sometimes superior, to modern quantum-chemical methods—at negligible computational cost.
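    The multi-task idea above (one shared input representation predicting several properties at once) can be reduced to its simplest form with multi-output ridge regression in NumPy. All data here are synthetic stand-ins for the ab initio results, and unlike the paper's deep multi-task network this linear baseline does not learn shared nonlinear features.

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 "molecules" x 5 descriptors; 3 target properties generated from a
# shared linear structure plus small noise.
X = rng.normal(size=(200, 5))
W_true = rng.normal(size=(5, 3))
Y = X @ W_true + 0.01 * rng.normal(size=(200, 3))

# One model fit jointly for all tasks: closed-form multi-output ridge,
# solving (X^T X + lam I) W = X^T Y for all three targets at once.
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ Y)

preds = X @ W                        # all three properties per molecule
rmse = np.sqrt(((preds - Y) ** 2).mean(axis=0))
```

    The practical appeal mirrored from the abstract is that a single fitted model answers queries for every property of every new compound at negligible cost compared with a fresh quantum-chemical calculation.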

  15. Landscape of Research Areas for Zeolites and Metal-Organic Frameworks Using Computational Classification Based on Citation Networks.

    PubMed

    Ogawa, Takaya; Iyoki, Kenta; Fukushima, Tomohiro; Kajikawa, Yuya

    2017-12-14

    The field of porous materials is widely spreading nowadays, and researchers need to read tremendous numbers of papers to obtain a "bird's eye" view of a given research area. However, it is difficult for researchers to obtain an objective database based on statistical data without any relation to subjective knowledge related to individual research interests. Here, citation network analysis was applied for a comparative analysis of the research areas for zeolites and metal-organic frameworks as examples for porous materials. The statistical and objective data contributed to the analysis of: (1) the computational screening of research areas; (2) classification of research stages to a certain domain; (3) "well-cited" research areas; and (4) research area preferences of specific countries. Moreover, we proposed a methodology to assist researchers to gain potential research ideas by reviewing related research areas, which is based on the detection of unfocused ideas in one area but focused in the other area by a bibliometric approach.

  17. SCANS (Shipping Cask ANalysis System) a microcomputer-based analysis system for shipping cask design review: User's manual to Version 3a. Volume 1, Revision 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mok, G.C.; Thomas, G.R.; Gerhard, M.A.

    SCANS (Shipping Cask ANalysis System) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for evaluating safety analysis reports on spent fuel shipping casks. SCANS is an easy-to-use system that calculates the global response to impact loads, pressure loads and thermal conditions, providing reviewers with an independent check on analyses submitted by licensees. SCANS is based on microcomputers compatible with the IBM-PC family of computers. The system is composed of a series of menus, input programs, cask analysis programs, and output display programs. All data are entered through fill-in-the-blank input screens that contain descriptive data requests. Analysis options are based on regulatory cases described in the Code of Federal Regulations 10 CFR 71 and Regulatory Guides published by the US Nuclear Regulatory Commission in 1977 and 1978.

  18. Searching for Controlled Trials of Complementary and Alternative Medicine: A Comparison of 15 Databases

    PubMed Central

    Cogo, Elise; Sampson, Margaret; Ajiferuke, Isola; Manheimer, Eric; Campbell, Kaitryn; Daniel, Raymond; Moher, David

    2011-01-01

    This project aims to assess the utility of bibliographic databases beyond the three major ones (MEDLINE, EMBASE and Cochrane CENTRAL) for finding controlled trials of complementary and alternative medicine (CAM). Fifteen databases were searched to identify controlled clinical trials (CCTs) of CAM not also indexed in MEDLINE. Searches were conducted in May 2006 using the revised Cochrane highly sensitive search strategy (HSSS) and the PubMed CAM Subset. Yield of CAM trials per 100 records was determined, and databases were compared over a standardized period (2005). The Acudoc2 RCT, Acubriefs, Index to Chiropractic Literature (ICL) and Hom-Inform databases had the highest concentrations of non-MEDLINE records, with more than 100 non-MEDLINE records per 500. Other productive databases had ratios between 500 and 1500 records to 100 non-MEDLINE records—these were AMED, MANTIS, PsycINFO, CINAHL, Global Health and Alt HealthWatch. Five databases were found to be unproductive: AGRICOLA, CAIRSS, Datadiwan, Herb Research Foundation and IBIDS. Acudoc2 RCT yielded 100 CAM trials in the most recent 100 records screened. Acubriefs, AMED, Hom-Inform, MANTIS, PsycINFO and CINAHL had more than 25 CAM trials per 100 records screened. Global Health, ICL and Alt HealthWatch were below 25 in yield. There were 255 non-MEDLINE trials from eight databases in 2005, with only 10% indexed in more than one database. Yield varied greatly between databases; the most productive databases from both sampling methods were Acubriefs, Acudoc2 RCT, AMED and CINAHL. Low overlap between databases indicates comprehensive CAM literature searches will require multiple databases. PMID:19468052
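The per-100-records yield used above to compare databases is a straightforward normalization; a small sketch with made-up counts (the function name and numbers are illustrative, not the study's data):

```python
def yield_per_100(trials_found, records_screened):
    """CAM-trial yield normalized to 100 records screened."""
    if records_screened == 0:
        raise ValueError("no records screened")
    return 100.0 * trials_found / records_screened

# Hypothetical counts, not the paper's data.
assert yield_per_100(100, 100) == 100.0  # every record screened is a trial
assert yield_per_100(25, 100) == 25.0    # the 25-per-100 threshold used above
```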

  19. Searching for controlled trials of complementary and alternative medicine: a comparison of 15 databases.

    PubMed

    Cogo, Elise; Sampson, Margaret; Ajiferuke, Isola; Manheimer, Eric; Campbell, Kaitryn; Daniel, Raymond; Moher, David

    2011-01-01

    This project aims to assess the utility of bibliographic databases beyond the three major ones (MEDLINE, EMBASE and Cochrane CENTRAL) for finding controlled trials of complementary and alternative medicine (CAM). Fifteen databases were searched to identify controlled clinical trials (CCTs) of CAM not also indexed in MEDLINE. Searches were conducted in May 2006 using the revised Cochrane highly sensitive search strategy (HSSS) and the PubMed CAM Subset. Yield of CAM trials per 100 records was determined, and databases were compared over a standardized period (2005). The Acudoc2 RCT, Acubriefs, Index to Chiropractic Literature (ICL) and Hom-Inform databases had the highest concentrations of non-MEDLINE records, with more than 100 non-MEDLINE records per 500. Other productive databases had ratios between 500 and 1500 records to 100 non-MEDLINE records-these were AMED, MANTIS, PsycINFO, CINAHL, Global Health and Alt HealthWatch. Five databases were found to be unproductive: AGRICOLA, CAIRSS, Datadiwan, Herb Research Foundation and IBIDS. Acudoc2 RCT yielded 100 CAM trials in the most recent 100 records screened. Acubriefs, AMED, Hom-Inform, MANTIS, PsycINFO and CINAHL had more than 25 CAM trials per 100 records screened. Global Health, ICL and Alt HealthWatch were below 25 in yield. There were 255 non-MEDLINE trials from eight databases in 2005, with only 10% indexed in more than one database. Yield varied greatly between databases; the most productive databases from both sampling methods were Acubriefs, Acudoc2 RCT, AMED and CINAHL. Low overlap between databases indicates comprehensive CAM literature searches will require multiple databases.

  20. DESPIC: Detecting Early Signatures of Persuasion in Information Cascades

    DTIC Science & Technology

    2015-08-27

    …over NoSQL Databases, Proceedings of the 14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2014), Chicago, IL, USA, 26-MAY-14. Using distributed NoSQL databases including HBase and Riak, we finalized the requirements of the optimal computational architecture to support our framework.

  1. High-Order Methods for Computational Physics

    DTIC Science & Technology

    1999-03-01

    …computation is running in parallel. Instead we use the concept of a voxel database (VDB) of geometric positions in the mesh [85]… Connectivity and communications are established by building a voxel database (VDB) of positions (Fig. 4.19). A VDB maps each position to a… Studies such as the highly accurate stability computations considered help expand the database for this benchmark problem. The two-dimensional linear…

  2. Does Patient Time Spent Viewing Computer-Tailored Colorectal Cancer Screening Materials Predict Patient-Reported Discussion of Screening with Providers?

    ERIC Educational Resources Information Center

    Sanders, Mechelle; Fiscella, Kevin; Veazie, Peter; Dolan, James G.; Jerant, Anthony

    2016-01-01

    The main aim is to examine whether patients' viewing time on information about colorectal cancer (CRC) screening before a primary care physician (PCP) visit is associated with discussion of screening options during the visit. We analyzed data from a multi-center randomized controlled trial of a tailored interactive multimedia computer program…

  3. Simple re-instantiation of small databases using cloud computing.

    PubMed

    Tan, Tin Wee; Xie, Chao; De Silva, Mark; Lim, Kuan Siong; Patro, C Pawan K; Lim, Shen Jean; Govindarajan, Kunde Ramamoorthy; Tong, Joo Chuan; Choo, Khar Heng; Ranganathan, Shoba; Khan, Asif M

    2013-01-01

    Small bioinformatics databases, unlike institutionally funded large databases, are vulnerable to discontinuation and many reported in publications are no longer accessible. This leads to irreproducible scientific work and redundant effort, impeding the pace of scientific progress. We describe a Web-accessible system, available online at http://biodb100.apbionet.org, for archival and future on demand re-instantiation of small databases within minutes. Depositors can rebuild their databases by downloading a Linux live operating system (http://www.bioslax.com), preinstalled with bioinformatics and UNIX tools. The database and its dependencies can be compressed into an ".lzm" file for deposition. End-users can search for archived databases and activate them on dynamically re-instantiated BioSlax instances, run as virtual machines over the two popular full virtualization standard cloud-computing platforms, Xen Hypervisor or vSphere. The system is adaptable to increasing demand for disk storage or computational load and allows database developers to use the re-instantiated databases for integration and development of new databases. Herein, we demonstrate that a relatively inexpensive solution can be implemented for archival of bioinformatics databases and their rapid re-instantiation should the live databases disappear.

  4. Simple re-instantiation of small databases using cloud computing

    PubMed Central

    2013-01-01

    Background Small bioinformatics databases, unlike institutionally funded large databases, are vulnerable to discontinuation and many reported in publications are no longer accessible. This leads to irreproducible scientific work and redundant effort, impeding the pace of scientific progress. Results We describe a Web-accessible system, available online at http://biodb100.apbionet.org, for archival and future on demand re-instantiation of small databases within minutes. Depositors can rebuild their databases by downloading a Linux live operating system (http://www.bioslax.com), preinstalled with bioinformatics and UNIX tools. The database and its dependencies can be compressed into an ".lzm" file for deposition. End-users can search for archived databases and activate them on dynamically re-instantiated BioSlax instances, run as virtual machines over the two popular full virtualization standard cloud-computing platforms, Xen Hypervisor or vSphere. The system is adaptable to increasing demand for disk storage or computational load and allows database developers to use the re-instantiated databases for integration and development of new databases. Conclusions Herein, we demonstrate that a relatively inexpensive solution can be implemented for archival of bioinformatics databases and their rapid re-instantiation should the live databases disappear. PMID:24564380

  5. Tools for Material Design and Selection

    NASA Astrophysics Data System (ADS)

    Wehage, Kristopher

    The present thesis focuses on applications of numerical methods to create tools for material characterization, design and selection. The tools generated in this work incorporate a variety of programming concepts, from digital image analysis, geometry, optimization, and parallel programming to data-mining, databases and web design. The first portion of the thesis focuses on methods for characterizing clustering in bimodal 5083 Aluminum alloys created by cryomilling and powder metallurgy. The bimodal samples analyzed in the present work contain a mixture of a coarse grain phase, with a grain size on the order of several microns, and an ultra-fine grain phase, with a grain size on the order of 200 nm. The mixing of the two phases is not homogeneous and clustering is observed. To investigate clustering in these bimodal materials, various microstructures were created experimentally by conventional cryomilling, Hot Isostatic Pressing (HIP), Extrusion, Dual-Mode Dynamic Forging (DMDF) and a new 'Gradient' cryomilling process. Two techniques for quantitative clustering analysis are presented, formulated and implemented. The first technique, the Area Disorder function, provides a metric of the quality of coarse grain dispersion in an ultra-fine grain matrix and the second technique, the Two-Point Correlation function, provides a metric of long and short range spatial arrangements of the two phases, as well as an indication of the mean feature size in any direction. The two techniques are implemented on digital images created by Scanning Electron Microscopy (SEM) and Electron Backscatter Detection (EBSD) of the microstructures. To investigate structure--property relationships through modeling and simulation, strategies for generating synthetic microstructures are discussed and a computer program that generates randomized microstructures with desired configurations of clustering described by the Area Disorder Function is formulated and presented. 
In the computer program, two-dimensional microstructures are generated by Random Sequential Adsorption (RSA) of voxelized ellipses representing the coarse grain phase. A simulated annealing algorithm is used to geometrically optimize the placement of the ellipses in the model to achieve varying user-defined configurations of spatial arrangement of the coarse grains. During the simulated annealing process, the ellipses are allowed to overlap up to a specified threshold, allowing triple junctions to form in the model. Once the simulated annealing process is complete, the remaining space is populated by smaller ellipses representing the ultra-fine grain phase. Uniform random orientations are assigned to the grains. The program generates text files that can be imported into Crystal Plasticity Finite Element Analysis Software for stress analysis. Finally, numerical methods and programming are applied to current issues in green engineering and hazard assessment. To understand hazards associated with materials and select safer alternatives, engineers and designers need access to up-to-date hazard information. However, hazard information comes from many disparate sources and aggregating, interpreting and taking action on the wealth of data is not trivial. In light of these challenges, a Framework for Automated Hazard Assessment based on the GreenScreen list translator is presented. The framework consists of a computer program that automatically extracts data from the GHS-Japan hazard database, loads the data into a machine-readable JSON format, transforms the JSON document into a GreenScreen JSON document using the GreenScreen List Translator v1.2 and performs GreenScreen Benchmark scoring on the material. The GreenScreen JSON documents are then uploaded to a document storage system to allow human operators to search for, modify or add additional hazard information via a web interface.
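The two-point correlation function described above can be estimated on a binary image by sampling pixel pairs at a fixed separation. A stdlib-only sketch along one direction, on a toy grid rather than real SEM/EBSD data, with periodic boundaries assumed for brevity:

```python
def two_point_corr_x(grid, r):
    """Probability that two pixels separated by r along x are both phase 1.
    grid is a list of rows of 0/1; wrap-around (periodic) boundaries."""
    rows, cols = len(grid), len(grid[0])
    hits = 0
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] == 1 and grid[i][(j + r) % cols] == 1:
                hits += 1
    return hits / (rows * cols)

# 2x4 toy microstructure: phase 1 (coarse grains) fills the left half of each row.
grid = [[1, 1, 0, 0],
        [1, 1, 0, 0]]
print(two_point_corr_x(grid, 0))  # at r=0 this is the phase-1 volume fraction: 0.5
print(two_point_corr_x(grid, 2))  # no pair at separation 2 has both pixels in phase 1: 0.0
```

At r=0 the function recovers the volume fraction, and its decay with r indicates the mean feature size in that direction, as the thesis uses it.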

  6. Partitioning medical image databases for content-based queries on a Grid.

    PubMed

    Montagnat, J; Breton, V; Magnin, I E

    2005-01-01

    In this paper we study the impact of executing a medical image database query application on the grid. For lowering the total computation time, the image database is partitioned into subsets to be processed on different grid nodes. A theoretical model of the application complexity and estimates of the grid execution overhead are used to efficiently partition the database. We show results demonstrating that smart partitioning of the database can lead to significant improvements in terms of total computation time. Grids are promising for content-based image retrieval in medical databases.
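The paper partitions the database using a theoretical complexity model plus grid-overhead estimates; as a generic stand-in, a greedy longest-processing-time heuristic over hypothetical per-image costs illustrates the load-balancing step:

```python
import heapq

def partition(costs, n_nodes):
    """Greedy LPT: assign each image (by decreasing cost) to the currently
    least-loaded node; returns per-node loads and the assignment map."""
    heap = [(0.0, node) for node in range(n_nodes)]  # (load, node)
    heapq.heapify(heap)
    assignment = {}
    for img, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
        load, node = heapq.heappop(heap)
        assignment[img] = node
        heapq.heappush(heap, (load + cost, node))
    loads = [0.0] * n_nodes
    for img, node in assignment.items():
        loads[node] += costs[img]
    return loads, assignment

costs = {"a": 4.0, "b": 3.0, "c": 3.0, "d": 2.0}  # hypothetical per-image cost
loads, _ = partition(costs, 2)
print(max(loads))  # the makespan; here 6.0 (node loads a+d vs b+c)
```

The total computation time is governed by the most-loaded node, which is why smart partitioning (rather than equal-count splits) pays off.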

  7. Design of Glucagon-Like Peptide-1 Receptor Agonist for Diabetes Mellitus from Traditional Chinese Medicine

    PubMed Central

    Tang, Hsin-Chieh; Chen, Calvin Yu-Chian

    2014-01-01

    Glucagon-like peptide-1 (GLP-1) is a promising target for diabetes mellitus (DM) therapy and reduces the occurrence of diabetes due to obesity. However, GLP-1 will be hydrolyzed soon by the enzyme dipeptidyl peptidase-4 (DPP-4). We tried to design small molecular drugs for GLP-1 receptor agonist from the world's largest traditional Chinese medicine (TCM) Database@Taiwan. According to docking results of virtual screening, we selected 2 TCM compounds, wenyujinoside and 28-deglucosylchikusetsusaponin IV, for further molecular dynamics (MD) simulation. GLP-1 was assigned as the control compound. Based on the results of root mean square deviation (RMSD), solvent accessible surface (SAS), mean square deviation (MSD), Gyrate, total energy, root mean square fluctuation (RMSF), matrices of smallest distance of residues, database of secondary structure assignment (DSSP), cluster analysis, and distance of H-bond, we concluded that all the 3 compounds could bind and activate GLP-1 receptor by computational simulation. Wenyujinoside and 28-deglucosylchikusetsusaponin IV were the TCM compounds that could be GLP-1 receptor agonists. PMID:24891870

  8. Voice recognition products-an occupational risk for users with ULDs?

    PubMed

    Williams, N R

    2003-10-01

    Voice recognition systems (VRS) allow speech both to be converted directly into text, which appears on the screen of a computer, and to direct equipment to perform specific functions. Suggested applications are many and varied, including increasing efficiency in the reporting of radiographs, allowing directed surgery and enabling individuals with upper limb disorders (ULDs) who cannot use other input devices, such as keyboards and mice, to carry out word processing and other activities. Aim: This paper describes four cases of vocal dysfunction related to the use of such software, identified from the database of the Voice and Speech Laboratory of the Massachusetts Eye and Ear Infirmary (MEEI). The database was searched using the key words 'voice recognition' and four cases were identified from a total of 4800. In all cases, the VRS was supplied to assist individuals with ULDs who could not use conventional input devices. Case reports illustrate time of onset and symptoms experienced. The cases illustrate the need for risk assessment and consideration of the ergonomic aspects of voice use prior to such adaptations being used, particularly in those who already experience work-related ULDs.

  9. A Web Server and Mobile App for Computing Hemolytic Potency of Peptides.

    PubMed

    Chaudhary, Kumardeep; Kumar, Ritesh; Singh, Sandeep; Tuknait, Abhishek; Gautam, Ankur; Mathur, Deepika; Anand, Priya; Varshney, Grish C; Raghava, Gajendra P S

    2016-03-08

    Numerous therapeutic peptides do not enter clinical trials simply because of their high hemolytic activity. Recently, we developed a database, Hemolytik, for maintaining experimentally validated hemolytic and non-hemolytic peptides. The present study describes a web server and mobile app developed for predicting and screening peptides with hemolytic potency. First, we generated a dataset, HemoPI-1, that contains 552 hemolytic peptides extracted from the Hemolytik database and 552 random non-hemolytic peptides (from Swiss-Prot). The sequence analysis of these peptides revealed that certain residues (e.g., L, K, F, W) and motifs (e.g., "FKK", "LKL", "KKLL", "KWK", "VLK", "CYCR", "CRR", "RFC", "RRR", "LKKL") are more abundant in hemolytic peptides. Therefore, we developed models for discriminating hemolytic and non-hemolytic peptides using various machine learning techniques and achieved more than 95% accuracy. We also developed models for discriminating peptides having high and low hemolytic potential on different datasets called HemoPI-2 and HemoPI-3. In order to serve the scientific community, we developed a web server, mobile app and JAVA-based standalone software (http://crdd.osdd.net/raghava/hemopi/).
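The motif analysis reported above (e.g. "KWK", "LKL" enriched in hemolytic peptides) amounts to substring counting over sequences, which can then feed a classifier as features. A small sketch on toy sequences, not the HemoPI data:

```python
MOTIFS = ["FKK", "LKL", "KWK", "VLK"]  # a subset of the motifs listed above

def motif_features(seq, motifs=MOTIFS):
    """Count (possibly overlapping) occurrences of each motif in a peptide."""
    feats = {}
    for m in motifs:
        feats[m] = sum(
            1 for i in range(len(seq) - len(m) + 1) if seq[i:i + len(m)] == m
        )
    return feats

feats = motif_features("KWKWKLKL")  # toy peptide sequence
print(feats["KWK"])  # overlapping matches at positions 0 and 2 -> 2
print(feats["LKL"])  # one match -> 1
```

Per-residue frequencies (L, K, F, W counts) would be computed the same way with single-character "motifs".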

  10. Design of glucagon-like Peptide-1 receptor agonist for diabetes mellitus from traditional chinese medicine.

    PubMed

    Tang, Hsin-Chieh; Chen, Calvin Yu-Chian

    2014-01-01

    Glucagon-like peptide-1 (GLP-1) is a promising target for diabetes mellitus (DM) therapy and reduces the occurrence of diabetes due to obesity. However, GLP-1 will be hydrolyzed soon by the enzyme dipeptidyl peptidase-4 (DPP-4). We tried to design small molecular drugs for GLP-1 receptor agonist from the world's largest traditional Chinese medicine (TCM) Database@Taiwan. According to docking results of virtual screening, we selected 2 TCM compounds, wenyujinoside and 28-deglucosylchikusetsusaponin IV, for further molecular dynamics (MD) simulation. GLP-1 was assigned as the control compound. Based on the results of root mean square deviation (RMSD), solvent accessible surface (SAS), mean square deviation (MSD), Gyrate, total energy, root mean square fluctuation (RMSF), matrices of smallest distance of residues, database of secondary structure assignment (DSSP), cluster analysis, and distance of H-bond, we concluded that all the 3 compounds could bind and activate GLP-1 receptor by computational simulation. Wenyujinoside and 28-deglucosylchikusetsusaponin IV were the TCM compounds that could be GLP-1 receptor agonists.

  11. Designing Semiconductor Heterostructures Using Digitally Accessible Electronic-Structure Data

    NASA Astrophysics Data System (ADS)

    Shapera, Ethan; Schleife, Andre

    Semiconductor sandwich structures, so-called heterojunctions, are at the heart of modern applications with tremendous societal impact: Light-emitting diodes shape the future of lighting and solar cells are promising for renewable energy. However, their computer-based design is hampered by the high cost of electronic structure techniques used to select materials based on alignment of valence and conduction bands and to evaluate excited state properties. We describe, validate, and demonstrate an open source Python framework which rapidly screens existing online databases and user-provided data to find combinations of suitable, previously fabricated materials for optoelectronic applications. The branch point energy aligns valence and conduction bands of different materials, requiring only the bulk density functional theory band structure. We train machine learning algorithms to predict the dielectric constant, electron mobility, and hole mobility with material descriptors available in online databases. Using CdSe and InP as emitting layers for LEDs and CH3NH3PbI3 and nanoparticle PbS as absorbers for solar cells, we demonstrate our broadly applicable, automated method.
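Once branch-point alignment puts all band edges on a common energy scale, screening heterojunction pairs is a set of interval comparisons. A sketch with entirely hypothetical band-edge values (not the framework's data or API), checking for type-I (straddling) alignment as needed for an emitting layer:

```python
# Hypothetical band edges after branch-point alignment, in eV (not real data).
materials = {
    "A": {"vbm": -5.8, "cbm": -3.6},
    "B": {"vbm": -5.2, "cbm": -4.0},
}

def is_type_one(well, barrier):
    """Type-I (straddling) alignment: the well's gap nests inside the barrier's,
    confining both electrons and holes in the well layer."""
    return well["vbm"] > barrier["vbm"] and well["cbm"] < barrier["cbm"]

print(is_type_one(materials["B"], materials["A"]))  # True: B nests inside A
```

A real screen would iterate over all pairs pulled from the online databases and keep only nested combinations before applying the machine-learned mobility and dielectric predictions.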

  12. Cheminformatics meets molecular mechanics: a combined application of knowledge-based pose scoring and physical force field-based hit scoring functions improves the accuracy of structure-based virtual screening.

    PubMed

    Hsieh, Jui-Hua; Yin, Shuangye; Wang, Xiang S; Liu, Shubin; Dokholyan, Nikolay V; Tropsha, Alexander

    2012-01-23

    Poor performance of scoring functions is a well-known bottleneck in structure-based virtual screening (VS), which is most frequently manifested in the scoring functions' inability to discriminate between true ligands and known nonbinders (therefore designated as binding decoys). This deficiency leads to a large number of false positive hits resulting from VS. We have hypothesized that filtering out or penalizing docking poses recognized as non-native (i.e., pose decoys) should improve the performance of VS in terms of improved identification of true binders. Using several concepts from the field of cheminformatics, we have developed a novel approach to identifying pose decoys from an ensemble of poses generated by computational docking procedures. We demonstrate that the use of a target-specific pose (scoring) filter in combination with a physical force field-based scoring function (MedusaScore) leads to significant improvement of hit rates in VS studies for 12 of the 13 benchmark sets from the clustered version of the Database of Useful Decoys (DUD). This new hybrid scoring function outperforms several conventional structure-based scoring functions, including XSCORE::HMSCORE, ChemScore, PLP, and Chemgauss3, in 6 out of 13 data sets at an early stage of VS (up to 1% of the screening database). We compare our hybrid method with several novel VS methods that were recently reported to have good performances on the same DUD data sets. We find that the retrieved ligands using our method are chemically more diverse in comparison with two ligand-based methods (FieldScreen and FLAP::LBX). We also compare our method with FLAP::RBLB, a high-performance VS method that also utilizes both the receptor and the cognate ligand structures. Interestingly, we find that the top ligands retrieved using our method are highly complementary to those retrieved using FLAP::RBLB, hinting at effective directions for best VS applications. 
We suggest that this integrative VS approach combining cheminformatics and molecular mechanics methodologies may be applied to a broad variety of protein targets to improve the outcome of structure-based drug discovery studies.
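Early-stage hit rates of the kind reported above (performance within the top 1% of a ranked screening database) are commonly summarized as an enrichment factor. A sketch with toy labels rather than DUD data:

```python
def enrichment_factor(ranked_labels, fraction=0.01):
    """EF = (fraction of all actives recovered in the top slice) / slice size.
    ranked_labels: 1 for active, 0 for decoy, best-scored compound first."""
    n = len(ranked_labels)
    top = ranked_labels[: max(1, int(n * fraction))]
    total_actives = sum(ranked_labels)
    if total_actives == 0:
        return 0.0
    return (sum(top) / total_actives) / fraction

# Toy ranking: 1000 compounds, 10 actives, 5 of them in the top 10 (top 1%).
labels = [1] * 5 + [0] * 5 + [1] * 5 + [0] * 985
print(enrichment_factor(labels))  # (5/10) / 0.01 -> 50x enrichment over random
```

An EF of 1 means the ranking is no better than random at that cutoff; filtering pose decoys, as proposed above, aims to push early EF values up.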

  13. High-Performance Secure Database Access Technologies for HEP Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research, where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities, a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are ongoing in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc., for numerous applications." There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. 
We believe that an innovative database architecture where the secure authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving a weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems' security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for the tight integration of the secure authorization layer with the database server engine, resulting in both improved performance and improved security. Phase I has focused on the development of a proof-of-concept prototype using Argonne National Laboratory's (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project's current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.

  14. Generation of ligand-based pharmacophore model and virtual screening for identification of novel tubulin inhibitors with potent anticancer activity.

    PubMed

    Chiang, Yi-Kun; Kuo, Ching-Chuan; Wu, Yu-Shan; Chen, Chung-Tong; Coumar, Mohane Selvaraj; Wu, Jian-Sung; Hsieh, Hsing-Pang; Chang, Chi-Yen; Jseng, Huan-Yi; Wu, Ming-Hsine; Leou, Jiun-Shyang; Song, Jen-Shin; Chang, Jang-Yang; Lyu, Ping-Chiang; Chao, Yu-Sheng; Wu, Su-Ying

    2009-07-23

    A pharmacophore model, Hypo1, was built on the basis of 21 training-set indole compounds with varying levels of antiproliferative activity. Hypo1 possessed important chemical features required for the inhibitors and demonstrated good predictive ability for biological activity, with high correlation coefficients of 0.96 and 0.89 for the training-set and test-set compounds, respectively. Further utilization of the Hypo1 pharmacophore model to screen a chemical database in silico led to the identification of four compounds with antiproliferative activity. Among these four compounds, 43 showed potent antiproliferative activity against various cancer cell lines, with the strongest inhibition on the proliferation of KB cells (IC(50) = 187 nM). Further biological characterization revealed that 43 effectively inhibited tubulin polymerization and significantly induced cell cycle arrest in G(2)-M phase. In addition, 43 also showed in vivo-like anticancer effects. To our knowledge, 43 is the most potent antiproliferative compound with antitubulin activity discovered by computer-aided drug design. The chemical novelty of 43 and its anticancer activities make this compound worthy of further lead optimization.

  15. Computational screening of organic materials towards improved photovoltaic properties

    NASA Astrophysics Data System (ADS)

    Dai, Shuo; Olivares-Amaya, Roberto; Amador-Bedolla, Carlos; Aspuru-Guzik, Alan; Borunda, Mario

    2015-03-01

    The world today faces an energy crisis that is an obstacle to the development of human civilization. One of the most promising solutions is solar energy harvested by economical solar cells. As the third generation of solar cell materials, organic photovoltaic (OPV) materials are now under active development from both theoretical and experimental points of view. In this study, we constructed a parameter to select desirable molecules based on their optical spectra. We applied it to investigate a large collection of potential OPV materials drawn from the CEPDB database set up by the Harvard Clean Energy Project. Time-dependent density functional theory (TD-DFT) modeling was used to calculate the absorption spectra of the molecules. Based on the parameter, we then screened out the top-performing molecules for potential OPV use and suggested experimental efforts toward their synthesis. In addition, from those molecules we summarized the functional groups that give molecules particular spectral characteristics, in the hope of providing useful hints for the molecular design of OPV materials.

  16. Molecular formula and METLIN Personal Metabolite Database matching applied to the identification of compounds generated by LC/TOF-MS.

    PubMed

    Sana, Theodore R; Roark, Joseph C; Li, Xiangdong; Waddell, Keith; Fischer, Steven M

    2008-09-01

    In an effort to simplify and streamline compound identification from metabolomics data generated by liquid chromatography time-of-flight mass spectrometry, we have created software for constructing Personalized Metabolite Databases with content from over 15,000 compounds pulled from the public METLIN database (http://metlin.scripps.edu/). Moreover, we have added extra functionalities to the database that (a) permit the addition of user-defined retention times as an orthogonal searchable parameter to complement accurate mass data; and (b) allow interfacing to separate software, a Molecular Formula Generator (MFG), that facilitates reliable interpretation of any database matches from the accurate mass spectral data. To test the utility of this identification strategy, we added retention times to a subset of masses in this database, representing a mixture of 78 synthetic urine standards. The synthetic mixture was analyzed and screened against this METLIN urine database, resulting in 46 accurate mass and retention time matches. Human urine samples were subsequently analyzed under the same analytical conditions and screened against this database. A total of 1387 ions were detected in human urine; 16 of these ions matched both accurate mass and retention time parameters for the 78 urine standards in the database. Another 374 had only an accurate mass match to the database, with 163 of those masses also having the highest MFG score. Furthermore, MFG calculated a formula for a further 849 ions that had no match to the database. Taken together, these results suggest that the METLIN Personal Metabolite database and MFG software offer a robust strategy for confirming the formula of database matches. In the event of no database match, it also suggests possible formulas that may be helpful in interpreting the experimental results.
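The two-parameter matching described above (accurate mass plus user-defined retention time) boils down to tolerance checks against library entries. A sketch with hypothetical tolerances and made-up library values, not the METLIN data or Agilent software's actual interface:

```python
def match(feature, library, ppm_tol=10.0, rt_tol=0.5):
    """Return names of library entries matching an observed (m/z, RT) feature
    within a ppm mass tolerance and an RT window in minutes.
    A library RT of None means only the mass is checked."""
    mz, rt = feature
    hits = []
    for name, (lib_mz, lib_rt) in library.items():
        ppm = abs(mz - lib_mz) / lib_mz * 1e6
        if ppm <= ppm_tol and (lib_rt is None or abs(rt - lib_rt) <= rt_tol):
            hits.append(name)
    return hits

# Hypothetical library: name -> (monoisotopic m/z, retention time or None).
library = {"creatinine": (114.0662, 1.2), "urea": (61.0396, None)}
print(match((114.0660, 1.3), library))  # ['creatinine']
```

Features passing the mass check but failing (or lacking) an RT match correspond to the "accurate mass match only" category reported above, where the MFG formula score breaks ties.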

  17. Cloud-Based NoSQL Open Database of Pulmonary Nodules for Computer-Aided Lung Cancer Diagnosis and Reproducible Research.

    PubMed

    Ferreira Junior, José Raniery; Oliveira, Marcelo Costa; de Azevedo-Marques, Paulo Mazzoncini

    2016-12-01

    Lung cancer is the leading cause of cancer-related deaths in the world, and its main manifestation is pulmonary nodules. Detection and classification of pulmonary nodules are challenging tasks that must be done by qualified specialists, but image interpretation errors make those tasks difficult. In order to aid radiologists on those hard tasks, it is important to integrate the computer-based tools with the lesion detection, pathology diagnosis, and image interpretation processes. However, computer-aided diagnosis research faces the problem of not having enough shared medical reference data for the development, testing, and evaluation of computational methods for diagnosis. In order to minimize this problem, this paper presents a public nonrelational document-oriented cloud-based database of pulmonary nodules characterized by 3D texture attributes, identified by experienced radiologists and classified according to nine different subjective characteristics by the same specialists. Our goal with the development of this database is to improve computer-aided lung cancer diagnosis and pulmonary nodule detection and classification research through the deployment of this database in a cloud Database as a Service framework. Pulmonary nodule data was provided by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), image descriptors were acquired by a volumetric texture analysis, and the database schema was developed using a document-oriented Not only Structured Query Language (NoSQL) approach. The proposed database currently holds 379 exams, 838 nodules, and 8237 images; 4029 of the images are CT scans and 4208 are manually segmented nodules. It is allocated in a MongoDB instance on a cloud infrastructure.
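    A document-oriented store persists each nodule as one nested record rather than rows spread across relational tables. The sketch below shows what such a document might look like; every field name and value is a hypothetical illustration, not the database's actual schema.

```python
import json

# Hypothetical document for one pulmonary nodule in a document-oriented (NoSQL)
# store; field names and values are illustrative, not the actual schema.
nodule_doc = {
    "exam_id": "LIDC-0001",
    "nodule_id": 3,
    "subjective_characteristics": {   # radiologist ratings (three of nine shown)
        "subtlety": 4, "malignancy": 3, "spiculation": 2,
    },
    "texture_attributes_3d": [0.82, 1.47, 0.05],   # volumetric descriptors
    "images": ["ct_0421.dcm", "seg_0421.png"],
}

# A document store like MongoDB persists exactly this nested structure; a JSON
# round-trip shows it needs no fixed relational schema.
serialized = json.dumps(nodule_doc)
restored = json.loads(serialized)
print(restored["subjective_characteristics"]["subtlety"])
```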

  18. Aggregating Data for Computational Toxicology Applications ...

    EPA Pesticide Factsheets

    Computational toxicology combines data from high-throughput test methods, chemical structure analyses and other biological domains (e.g., genes, proteins, cells, tissues) with the goals of predicting and understanding the underlying mechanistic causes of chemical toxicity and for predicting toxicity of new chemicals and products. A key feature of such approaches is their reliance on knowledge extracted from large collections of data and data sets in computable formats. The U.S. Environmental Protection Agency (EPA) has developed a large data resource called ACToR (Aggregated Computational Toxicology Resource) to support these data-intensive efforts. ACToR comprises four main repositories: core ACToR (chemical identifiers and structures, and summary data on hazard, exposure, use, and other domains), ToxRefDB (Toxicity Reference Database, a compilation of detailed in vivo toxicity data from guideline studies), ExpoCastDB (detailed human exposure data from observational studies of selected chemicals), and ToxCastDB (data from high-throughput screening programs, including links to underlying biological information related to genes and pathways). The EPA DSSTox (Distributed Structure-Searchable Toxicity) program provides expert-reviewed chemical structures and associated information for these and other high-interest public inventories. Overall, the ACToR system contains information on about 400,000 chemicals from 1100 different sources. The entire system is built usi...

  19. [Problem list in computer-based patient records].

    PubMed

    Ludwig, C A

    1997-01-14

    Computer-based clinical information systems are capable of effectively processing even large amounts of patient-related data. However, physicians depend on rapid access to summarized, clearly laid out data on the computer screen to inform themselves about a patient's current clinical situation. In introducing a clinical workplace system, we therefore transformed the problem list-which for decades has been successfully used in clinical information management-into an electronic equivalent and integrated it into the medical record. The table contains a concise overview of diagnoses and problems as well as related findings. Graphical information can also be integrated into the table, and an additional space is provided for a summary of planned examinations or interventions. The digital form of the problem list makes it possible to use the entire list or selected text elements for generating medical documents. Diagnostic terms for medical reports are transferred automatically to corresponding documents. Computer technology has an immense potential for the further development of problem list concepts. With multimedia applications sound and images will be included in the problem list. For hyperlink purpose the problem list could become a central information board and table of contents of the medical record, thus serving as the starting point for database searches and supporting the user in navigating through the medical record.

  20. Nontargeted Screening Method for Illegal Additives Based on Ultrahigh-Performance Liquid Chromatography-High-Resolution Mass Spectrometry.

    PubMed

    Fu, Yanqing; Zhou, Zhihui; Kong, Hongwei; Lu, Xin; Zhao, Xinjie; Chen, Yihui; Chen, Jia; Wu, Zeming; Xu, Zhiliang; Zhao, Chunxia; Xu, Guowang

    2016-09-06

    Identification of illegal additives in complex matrixes is important in the food safety field. In this study a nontargeted screening strategy was developed to find illegal additives based on ultrahigh-performance liquid chromatography-high-resolution mass spectrometry (UHPLC-HRMS). First, an analytical method for possible illegal additives in complex matrixes was established including fast sample pretreatment, accurate UHPLC separation, and HRMS detection. Second, efficient data processing and differential analysis workflow were suggested and applied to find potential risk compounds. Third, structure elucidation of risk compounds was performed by (1) searching online databases [Metlin and the Human Metabolome Database (HMDB)] and an in-house database which was established at the above-defined conditions of UHPLC-HRMS analysis and contains information on retention time, mass spectra (MS), and tandem mass spectra (MS/MS) of 475 illegal additives, (2) analyzing fragment ions, and (3) referring to fragmentation rules. Fish was taken as an example to show the usefulness of the nontargeted screening strategy, and six additives were found in suspected fish samples. Quantitative analysis was further carried out to determine the contents of these compounds. The satisfactory application of this strategy in fish samples means that it can also be used in the screening of illegal additives in other kinds of food samples.
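    The differential-analysis step of such a workflow amounts to keeping features detected in suspect samples but absent from controls. A minimal sketch, with invented feature values and crude rounding-based alignment tolerances:

```python
# Sketch of differential analysis: keep (mass, RT) features detected in suspect
# samples but absent from control samples. Tolerances are assumed values.

def feature_key(mass, rt, mass_dec=3, rt_dec=1):
    """Bin a feature by rounded mass and retention time (crude alignment)."""
    return (round(mass, mass_dec), round(rt, rt_dec))

control_features = {feature_key(m, t) for m, t in [(301.141, 4.2), (179.058, 5.8)]}
suspect_features = [(301.141, 4.2), (285.136, 6.1), (179.058, 5.8)]

risk_compounds = [
    (m, t) for m, t in suspect_features
    if feature_key(m, t) not in control_features
]
print(risk_compounds)  # features unique to the suspect sample
```

    Only these risk features would then go to the expensive identification stage (database search, fragment analysis, fragmentation rules).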

  1. SCREENING CHEMICALS FOR ESTROGEN RECEPTOR BIOACTIVITY USING A COMPUTATIONAL MODEL

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) is considering the use high-throughput and computational methods for regulatory applications in the Endocrine Disruptor Screening Program (EDSP). To use these new tools for regulatory decision making, computational methods must be a...

  2. Toward unification of taxonomy databases in a distributed computer environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitakami, Hajime; Tateno, Yoshio; Gojobori, Takashi

    1994-12-31

    All the taxonomy databases constructed with the DNA databases of the international DNA data banks are powerful electronic dictionaries which aid in biological research by computer. The taxonomy databases are, however, not consistently unified with a relational format. If we can achieve consistent unification of the taxonomy databases, it will be useful in comparing many research results, and investigating future research directions from existent research results. In particular, it will be useful in comparing relationships between phylogenetic trees inferred from molecular data and those constructed from morphological data. The goal of the present study is to unify the existent taxonomy databases and eliminate inconsistencies (errors) that are present in them. Inconsistencies occur particularly in the restructuring of the existent taxonomy databases, since classification rules for constructing the taxonomy have rapidly changed with biological advancements. A repair system is needed to remove inconsistencies in each data bank and mismatches among data banks. This paper describes a new methodology for removing both inconsistencies and mismatches from the databases on a distributed computer environment. The methodology is implemented in a relational database management system, SYBASE.
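    The mismatch detection at the heart of such a repair system can be illustrated with a toy check: flag any taxon whose recorded parent differs between two data banks. The lineages below are invented examples, not real data-bank content.

```python
# Toy consistency check across two taxonomy "data banks": flag taxa whose
# recorded parent differs between banks. Names and parents are illustrative.
bank_a = {"Homo sapiens": "Homo", "Homo": "Hominidae"}
bank_b = {"Homo sapiens": "Homo", "Homo": "Hominoidea"}  # conflicting parent

def find_mismatches(a, b):
    """Return taxa present in both banks with different parent assignments."""
    return sorted(t for t in a.keys() & b.keys() if a[t] != b[t])

print(find_mismatches(bank_a, bank_b))
```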

  3. Data Structures in Natural Computing: Databases as Weak or Strong Anticipatory Systems

    NASA Astrophysics Data System (ADS)

    Rossiter, B. N.; Heather, M. A.

    2004-08-01

    Information systems anticipate the real world. Classical databases store, organise and search collections of data of that real world, but only as weak anticipatory information systems. This is because of the reductionism and normalisation needed to map the structuralism of natural data on to idealised machines with von Neumann architectures consisting of fixed instructions. Category theory, developed as a formalism to explore the theoretical concept of naturality, shows that methods like sketches, arising from graph theory as only non-natural models of naturality, cannot capture real-world structures for strong anticipatory information systems. Databases need a schema of the natural world. Natural computing databases need the schema itself to also be natural. Natural computing methods, including neural computers, evolutionary automata, molecular and nanocomputing and quantum computation, have the potential to be strong. At present they are mainly at the stage of weak anticipatory systems.

  4. The Fabric for Frontier Experiments Project at Fermilab

    NASA Astrophysics Data System (ADS)

    Kirby, Michael

    2014-06-01

    The FabrIc for Frontier Experiments (FIFE) project is a new, far-reaching initiative within the Fermilab Scientific Computing Division to drive the future of computing services for experiments at FNAL and elsewhere. It is a collaborative effort between computing professionals and experiment scientists to produce an end-to-end, fully integrated set of services for computing on the grid and clouds, managing data, accessing databases, and collaborating within experiments. FIFE includes 1) easy to use job submission services for processing physics tasks on the Open Science Grid and elsewhere; 2) an extensive data management system for managing local and remote caches, cataloging, querying, moving, and tracking the use of data; 3) custom and generic database applications for calibrations, beam information, and other purposes; 4) collaboration tools including an electronic log book, speakers bureau database, and experiment membership database. All of these aspects will be discussed in detail. FIFE sets the direction of computing at Fermilab experiments now and in the future, and therefore is a major driver in the design of computing services worldwide.

  5. Reducing false-positive detections by combining two stage-1 computer-aided mass detection algorithms

    NASA Astrophysics Data System (ADS)

    Bedard, Noah D.; Sampat, Mehul P.; Stokes, Patrick A.; Markey, Mia K.

    2006-03-01

    In this paper we present a strategy for reducing the number of false-positives in computer-aided mass detection. Our approach is to only mark "consensus" detections from among the suspicious sites identified by different "stage-1" detection algorithms. By "stage-1" we mean that each of the Computer-aided Detection (CADe) algorithms is designed to operate with high sensitivity, allowing for a large number of false positives. In this study, two mass detection methods were used: (1) Heath and Bowyer's algorithm based on the average fraction under the minimum filter (AFUM) and (2) a low-threshold bi-lateral subtraction algorithm. The two methods were applied separately to a set of images from the Digital Database for Screening Mammography (DDSM) to obtain paired sets of mass candidates. The consensus mass candidates for each image were identified by a logical "and" operation of the two CADe algorithms so as to eliminate regions of suspicion that were not independently identified by both techniques. It was shown that by combining the evidence from the AFUM filter method with that obtained from bi-lateral subtraction, the same sensitivity could be reached with fewer false-positives per image relative to using the AFUM filter alone.
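    The consensus step, marking only regions that both stage-1 detectors flag, can be sketched as a set intersection with a spatial overlap test. The circular-region representation and overlap criterion here are assumptions for illustration, not the paper's exact implementation.

```python
# Consensus marking: keep only candidate regions found by BOTH stage-1 detectors.
# Regions are (x, y, radius) circles; the overlap test is an assumed criterion.

def overlaps(r1, r2):
    """Two circular regions agree if their centers are closer than the radius sum."""
    (x1, y1, rad1), (x2, y2, rad2) = r1, r2
    return (x1 - x2) ** 2 + (y1 - y2) ** 2 <= (rad1 + rad2) ** 2

def consensus(detections_a, detections_b):
    """Logical 'and': regions from detector A confirmed by any region from B."""
    return [r for r in detections_a if any(overlaps(r, s) for s in detections_b)]

afum_hits = [(100, 120, 8), (300, 40, 6)]          # detector 1 candidates
bilateral_hits = [(103, 118, 7), (500, 500, 9)]    # detector 2 candidates
print(consensus(afum_hits, bilateral_hits))
```

    Because each detector runs at high sensitivity, the intersection preserves true positives found by both while discarding false positives unique to either.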

  6. Computer-Assisted Inverse Design of Inorganic Electrides

    NASA Astrophysics Data System (ADS)

    Zhang, Yunwei; Wang, Hui; Wang, Yanchao; Zhang, Lijun; Ma, Yanming

    2017-01-01

    Electrides are intrinsic electron-rich materials enabling applications as excellent electron emitters, superior catalysts, and strong reducing agents. There are a number of organic electrides; however, their instability at room temperature and sensitivity to moisture are bottlenecks for their practical uses. Known inorganic electrides are rare, but they appear to have greater thermal stability at ambient conditions and are thus better characterized for application. Here, we develop a computer-assisted inverse-design method for searching for a large variety of inorganic electrides unbiased by any known electride structures. It uses the intrinsic property of interstitial electron localization of electrides as the global variable function for swarm intelligence structure searches. We construct two rules of thumb on the design of inorganic electrides pointing to electron-rich ionic systems and low electronegativity of the cationic elements involved. By screening 99 such binary compounds through large-scale computer simulations, we identify 24 stable and 65 metastable new inorganic electrides that show distinct three-, two-, and zero-dimensional conductive properties, among which 18 are existing compounds that have not been pointed to as electrides. Our work reveals the rich abundance of inorganic electrides by providing 33 hitherto unexpected structure prototypes of electrides, of which 19 are not in the known structure databases.

  7. FireProt: Energy- and Evolution-Based Computational Design of Thermostable Multiple-Point Mutants.

    PubMed

    Bednar, David; Beerens, Koen; Sebestova, Eva; Bendl, Jaroslav; Khare, Sagar; Chaloupkova, Radka; Prokop, Zbynek; Brezovsky, Jan; Baker, David; Damborsky, Jiri

    2015-11-01

    There is great interest in increasing proteins' stability to enhance their utility as biocatalysts, therapeutics, diagnostics and nanomaterials. Directed evolution is a powerful but experimentally strenuous approach. Computational methods offer attractive alternatives. However, due to the limited reliability of predictions and potentially antagonistic effects of substitutions, only single-point mutations are usually predicted in silico, experimentally verified and then recombined in multiple-point mutants. Thus, substantial screening is still required. Here we present FireProt, a robust computational strategy for predicting highly stable multiple-point mutants that combines energy- and evolution-based approaches with smart filtering to identify additive stabilizing mutations. FireProt's reliability and applicability were demonstrated by validating its predictions against 656 mutations from the ProTherm database. We demonstrate that the thermostability of the model enzymes haloalkane dehalogenase DhaA and γ-hexachlorocyclohexane dehydrochlorinase LinA can be substantially increased (ΔTm = 24°C and 21°C) by constructing and characterizing only a handful of multiple-point mutants. FireProt can be applied to any protein for which a tertiary structure and homologous sequences are available, and will facilitate the rapid development of robust proteins for biomedical and biotechnological applications.
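    The filtering idea, combining an energy criterion with an evolutionary one before accepting a mutation, can be sketched as below. The candidate mutations, scores, and cutoffs are invented for illustration; they are not FireProt's actual values or rules.

```python
# Sketch of combining energy- and evolution-based filters to pick stabilizing
# mutations. All scores and thresholds are hypothetical, not FireProt's.
candidates = [
    {"mut": "A15V", "ddG": -1.8, "conservation": 0.2},   # predicted stabilizing
    {"mut": "G77P", "ddG": -0.2, "conservation": 0.1},   # too weak energetically
    {"mut": "D102K", "ddG": -2.1, "conservation": 0.9},  # conserved: risky site
]

DDG_CUTOFF = -1.0          # keep mutations predicted to lower the free energy
CONSERVATION_CUTOFF = 0.8  # discard mutations at highly conserved positions

selected = [
    c["mut"] for c in candidates
    if c["ddG"] <= DDG_CUTOFF and c["conservation"] < CONSERVATION_CUTOFF
]
print(selected)
```

    Mutations surviving both filters are then candidates for combination into a multiple-point mutant, on the assumption that their effects are additive.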

  8. Patients and Computers as Reminders to Screen for Diabetes in Family Practice

    PubMed Central

    Kenealy, Tim; Arroll, Bruce; Petrie, Keith J

    2005-01-01

    Background In New Zealand, more than 5% of people aged 50 years and older have undiagnosed diabetes; most of them attend family practitioners (FPs) at least once a year. Objectives To test the effectiveness of patients or computers as reminders to screen for diabetes in patients attending FPs. Design A randomized-controlled trial compared screening rates in 4 intervention arms: patient reminders, computer reminders, both reminders, and usual care. The trial lasted 2 months. The patient reminder was a diabetes risk self-assessment sheet filled in by patients and given to the FP during the consultation. The computer reminder was an icon that flashed only for patients considered eligible for screening. Participants One hundred and seven FPs. Measurements The primary outcome was whether each eligible patient, who attended during the trial, was or was not tested for blood glucose. Analysis was by intention to treat and allowed for clustering by FP. Results Patient reminders (odds ratio [OR] 1.72, 95% confidence interval [CI] 1.21, 2.43), computer reminders (OR 2.55, 1.68, 3.88), and both reminders (OR 1.69, 1.11, 2.59) were all effective compared with usual care. Computer reminders were more effective than patient reminders (OR 1.49, 1.07, 2.07). Patients were more likely to be screened if they visited the FP repeatedly, if patients were non-European, if they were “regular” patients of the practice, and if their FP had a higher screening rate prior to the study. Conclusions Patient and computer reminders were effective methods to increase screening for diabetes. However, the effects were not additive. PMID:16191138
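    The odds ratios and confidence intervals reported above can be computed from a 2x2 table of screened/not-screened counts per trial arm. A minimal sketch using the standard Woolf (log) method; the counts below are made up, not the trial's data.

```python
import math

# Odds ratio with a 95% CI from a 2x2 table (Woolf/log method), as used to
# compare screening rates between trial arms. Counts here are invented.
def odds_ratio_ci(a, b, c, d):
    """a,b = screened/not screened (intervention); c,d = same (control)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(120, 380, 60, 440)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

    Note the trial's analysis additionally adjusted for clustering by FP, which widens these intervals; the sketch shows only the unadjusted calculation.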

  9. LigandBox: A database for 3D structures of chemical compounds

    PubMed Central

    Kawabata, Takeshi; Sugihara, Yusuke; Fukunishi, Yoshifumi; Nakamura, Haruki

    2013-01-01

    A database for the 3D structures of available compounds is essential for the virtual screening by molecular docking. We have developed the LigandBox database (http://ligandbox.protein.osaka-u.ac.jp/ligandbox/) containing four million available compounds, collected from the catalogues of 37 commercial suppliers, and approved drugs and biochemical compounds taken from KEGG_DRUG, KEGG_COMPOUND and PDB databases. Each chemical compound in the database has several 3D conformers with hydrogen atoms and atomic charges, which are ready to be docked into receptors using docking programs. The 3D conformations were generated using our molecular simulation program package, myPresto. Various physical properties, such as aqueous solubility (LogS) and carcinogenicity have also been calculated to characterize the ADME-Tox properties of the compounds. The Web database provides two services for compound searches: a property/chemical ID search and a chemical structure search. The chemical structure search is performed by a descriptor search and a maximum common substructure (MCS) search combination, using our program kcombu. By specifying a query chemical structure, users can find similar compounds among the millions of compounds in the database within a few minutes. Our database is expected to assist a wide range of researchers, in the fields of medical science, chemical biology, and biochemistry, who are seeking to discover active chemical compounds by the virtual screening. PMID:27493549
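    The combination of a cheap descriptor search with a more expensive maximum-common-substructure comparison is a classic two-stage filter. The sketch below illustrates only the first stage, a Tanimoto prefilter over toy bit-set fingerprints; the fingerprints, names, and threshold are assumptions, not kcombu's actual descriptors.

```python
# Two-stage similarity search sketch: a cheap fingerprint (descriptor) filter
# narrows the library before an expensive MCS comparison. Fingerprints are toy
# sets of "on" bits, not real chemical descriptors.

def tanimoto(fp1, fp2):
    """Tanimoto similarity between two sets of 'on' bits."""
    return len(fp1 & fp2) / len(fp1 | fp2)

library = {
    "cmpd_A": {1, 2, 3, 5, 8},
    "cmpd_B": {1, 2, 3, 5, 9},
    "cmpd_C": {10, 11, 12},
}

def search(query_fp, lib, threshold=0.6):
    """Stage 1: keep candidates above a Tanimoto threshold; stage 2 (MCS)
    would then run only on this short list."""
    return sorted(
        name for name, fp in lib.items() if tanimoto(query_fp, fp) >= threshold
    )

print(search({1, 2, 3, 5, 8}, library))
```

    Prefiltering is what makes searching millions of compounds "within a few minutes" feasible: the quadratic-cost MCS comparison only ever sees a handful of survivors.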

  10. LigandBox: A database for 3D structures of chemical compounds.

    PubMed

    Kawabata, Takeshi; Sugihara, Yusuke; Fukunishi, Yoshifumi; Nakamura, Haruki

    2013-01-01

    A database for the 3D structures of available compounds is essential for the virtual screening by molecular docking. We have developed the LigandBox database (http://ligandbox.protein.osaka-u.ac.jp/ligandbox/) containing four million available compounds, collected from the catalogues of 37 commercial suppliers, and approved drugs and biochemical compounds taken from KEGG_DRUG, KEGG_COMPOUND and PDB databases. Each chemical compound in the database has several 3D conformers with hydrogen atoms and atomic charges, which are ready to be docked into receptors using docking programs. The 3D conformations were generated using our molecular simulation program package, myPresto. Various physical properties, such as aqueous solubility (LogS) and carcinogenicity have also been calculated to characterize the ADME-Tox properties of the compounds. The Web database provides two services for compound searches: a property/chemical ID search and a chemical structure search. The chemical structure search is performed by a descriptor search and a maximum common substructure (MCS) search combination, using our program kcombu. By specifying a query chemical structure, users can find similar compounds among the millions of compounds in the database within a few minutes. Our database is expected to assist a wide range of researchers, in the fields of medical science, chemical biology, and biochemistry, who are seeking to discover active chemical compounds by the virtual screening.

  11. The Toxicity Data Landscape for Environmental Chemicals

    PubMed Central

    Judson, Richard; Richard, Ann; Dix, David J.; Houck, Keith; Martin, Matthew; Kavlock, Robert; Dellarco, Vicki; Henry, Tala; Holderman, Todd; Sayre, Philip; Tan, Shirlee; Carpenter, Thomas; Smith, Edwin

    2009-01-01

    Objective Thousands of chemicals are in common use, but only a portion of them have undergone significant toxicologic evaluation, leading to the need to prioritize the remainder for targeted testing. To address this issue, the U.S. Environmental Protection Agency (EPA) and other organizations are developing chemical screening and prioritization programs. As part of these efforts, it is important to catalog, from widely dispersed sources, the toxicology information that is available. The main objective of this analysis is to define a list of environmental chemicals that are candidates for the U.S. EPA screening and prioritization process, and to catalog the available toxicology information. Data sources We are developing ACToR (Aggregated Computational Toxicology Resource), which combines information for hundreds of thousands of chemicals from > 200 public sources, including the U.S. EPA, National Institutes of Health, Food and Drug Administration, corresponding agencies in Canada, Europe, and Japan, and academic sources. Data extraction ACToR contains chemical structure information; physical–chemical properties; in vitro assay data; tabular in vivo data; summary toxicology calls (e.g., a statement that a chemical is considered to be a human carcinogen); and links to online toxicology summaries. Here, we use data from ACToR to assess the toxicity data landscape for environmental chemicals. Data synthesis We show results for a set of 9,912 environmental chemicals being considered for analysis as part of the U.S. EPA ToxCast screening and prioritization program. These include high- and medium-production-volume chemicals, pesticide active and inert ingredients, and drinking water contaminants. Conclusions Approximately two-thirds of these chemicals have at least limited toxicity summaries available. About one-quarter have been assessed in at least one highly curated toxicology evaluation database such as the U.S. EPA Toxicology Reference Database, U.S. EPA Integrated Risk Information System, and the National Toxicology Program. PMID:19479008

  12. How to Achieve Better Results Using Pass-Based Virtual Screening: Case Study for Kinase Inhibitors

    NASA Astrophysics Data System (ADS)

    Pogodin, Pavel V.; Lagunin, Alexey A.; Rudik, Anastasia V.; Filimonov, Dmitry A.; Druzhilovskiy, Dmitry S.; Nicklaus, Mark C.; Poroikov, Vladimir V.

    2018-04-01

    Discovery of new pharmaceutical substances is currently boosted by the possibility of utilization of the Synthetically Accessible Virtual Inventory (SAVI) library, which includes about 283 million molecules, each annotated with a proposed synthetic one-step route from commercially available starting materials. The SAVI database is well-suited for ligand-based methods of virtual screening to select molecules for experimental testing. In this study, we compare the performance of three approaches for the analysis of structure-activity relationships that differ in their criteria for selecting the “active” and “inactive” compounds included in the training sets. PASS (Prediction of Activity Spectra for Substances), which is based on a modified Naïve Bayes algorithm, was applied since it had been shown to be robust and to provide good predictions of many biological activities based on just the structural formula of a compound even if the information in the training set is incomplete. We used different subsets of kinase inhibitors for this case study because many data are currently available on this important class of drug-like molecules. Based on the subsets of kinase inhibitors extracted from the ChEMBL 20 database we performed the PASS training, and then applied the model to ChEMBL 23 compounds not yet present in ChEMBL 20 to identify novel kinase inhibitors. As one may expect, the best prediction accuracy was obtained if only the experimentally confirmed active and inactive compounds for distinct kinases were used in the training procedure. However, for some kinases, reasonable results were obtained even if we used merged training sets, in which we designated as inactives the compounds not tested against the particular kinase. Thus, depending on the availability of data for a particular biological activity, one may choose the first or the second approach for creating ligand-based computational tools to achieve the best possible results in virtual screening.
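    The Naïve Bayes idea behind such ligand-based screening can be illustrated with a minimal log-likelihood-ratio scorer over binary structural descriptors. This is a generic textbook version under invented data, not PASS's modified algorithm.

```python
import math

# Minimal Naïve Bayes scorer over binary structural descriptors, in the spirit
# of (but not identical to) the modified algorithm PASS uses.
actives = [{1, 3}, {1, 2, 3}, {1, 3, 4}]     # descriptor sets of known actives
inactives = [{2, 4}, {4, 5}, {2, 5}]         # descriptor sets of known inactives

def log_likelihood_ratio(compound, actives, inactives, alpha=1.0):
    """Sum per-descriptor log P(d|active)/P(d|inactive), Laplace-smoothed."""
    score = 0.0
    for d in compound:
        p_act = (sum(d in a for a in actives) + alpha) / (len(actives) + 2 * alpha)
        p_inact = (sum(d in i for i in inactives) + alpha) / (len(inactives) + 2 * alpha)
        score += math.log(p_act / p_inact)
    return score

print(log_likelihood_ratio({1, 3}, actives, inactives) > 0)   # looks active
print(log_likelihood_ratio({4, 5}, actives, inactives) < 0)   # looks inactive
```

    The choice of which compounds populate `inactives` (confirmed inactives only, or all compounds not tested against the kinase) is exactly the training-set design question the study compares.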

  13. Theoretical calculating the thermodynamic properties of solid sorbents for CO₂ capture applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Yuhua

    2012-11-02

    Since current technologies for capturing CO₂ to fight global climate change are still too energy intensive, there is a critical need for development of new materials that can capture CO₂ reversibly with acceptable energy costs. Accordingly, solid sorbents have been proposed to be used for CO₂ capture applications through a reversible chemical transformation. By combining thermodynamic database mining with first principles density functional theory and phonon lattice dynamics calculations, a theoretical screening methodology to identify the most promising CO₂ sorbent candidates from the vast array of possible solid materials has been proposed and validated. The calculated thermodynamic properties of different classes of solid materials versus temperature and pressure changes were further used to evaluate the equilibrium properties for the CO₂ adsorption/desorption cycles. According to the requirements imposed by the pre- and post-combustion technologies and based on our calculated thermodynamic properties for the CO₂ capture reactions by the solids of interest, we were able to screen only those solid materials for which lower capture energy costs are expected at the desired pressure and temperature conditions. Only those selected CO₂ sorbent candidates were further considered for experimental validations. The ab initio thermodynamic technique has the advantage of identifying thermodynamic properties of CO₂ capture reactions without any experimental input beyond crystallographic structural information of the solid phases involved. Such methodology not only can be used to search for good candidates from an existing database of solid materials, but also can provide some guidelines for synthesizing new materials. In this presentation, we first introduce our screening methodology and the results on a testing set of solids with known thermodynamic properties to validate our methodology. Then, by applying our computational method to several different kinds of solid systems, we demonstrate that our methodology can predict useful information to help develop CO₂ capture technologies.
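    The screening criterion reduces to evaluating the Gibbs free energy of a capture reaction, dG(T, P) = dH - T*dS - R*T*ln(P/P0) for a reaction consuming one mole of CO2 gas, and asking where it changes sign. The dH and dS values below are rough order-of-magnitude assumptions for a CaO-like sorbent, not computed DFT/phonon results.

```python
import math

# Gibbs free energy of "sorbent + CO2(g) -> carbonate" vs temperature and CO2
# partial pressure: dG = dH - T*dS - R*T*ln(p/p0). Capture is favorable when
# dG < 0; regeneration when dG > 0. Inputs are assumed, not DFT results.
def delta_g(dh, ds, t, p_co2, p0=1.0):
    """dh in J/mol, ds in J/(mol*K), t in K, pressures in bar."""
    R = 8.314  # J/(mol*K)
    return dh - t * ds - R * t * math.log(p_co2 / p0)

# Hypothetical sorbent: exothermic capture, large entropy loss of the gas
# (values chosen to be roughly CaO-like for illustration).
dh, ds = -178e3, -160.0
print(delta_g(dh, ds, 600.0, 1.0) < 0)    # capture favorable at 600 K, 1 bar
print(delta_g(dh, ds, 1200.0, 1.0) > 0)   # regeneration favorable at 1200 K
```

    Screening then amounts to checking whether the sign change (the turnover temperature) falls inside the window imposed by the pre- or post-combustion process conditions.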

  14. Pediatric hydrocephalus: systematic literature review and evidence-based guidelines. Part 3: Endoscopic computer-assisted electromagnetic navigation and ultrasonography as technical adjuvants for shunt placement.

    PubMed

    Flannery, Ann Marie; Duhaime, Ann-Christine; Tamber, Mandeep S; Kemp, Joanna

    2014-11-01

    This systematic review was undertaken to answer the following question: Do technical adjuvants such as ventricular endoscopic placement, computer-assisted electromagnetic guidance, or ultrasound guidance improve ventricular shunt function and survival? The US National Library of Medicine PubMed/MEDLINE database and the Cochrane Database of Systematic Reviews were queried using MeSH headings and key words specifically chosen to identify published articles detailing the use of cerebrospinal fluid shunts for the treatment of pediatric hydrocephalus. Articles meeting specific criteria that had been delineated a priori were then examined, and data were abstracted and compiled in evidentiary tables. These data were then analyzed by the Pediatric Hydrocephalus Systematic Review and Evidence-Based Guidelines Task Force to consider evidence-based treatment recommendations. The search yielded 163 abstracts, which were screened for potential relevance to the application of technical adjuvants in shunt placement. Fourteen articles were selected for full-text review. One additional article was selected during a review of literature citations. Eight of these articles were included in the final recommendations concerning the use of endoscopy, ultrasonography, and electromagnetic image guidance during shunt placement, whereas the remaining articles were excluded due to poor evidence or lack of relevance. The evidence included 1 Class I, 1 Class II, and 6 Class III papers. An evidentiary table of relevant articles was created. CONCLUSIONS/RECOMMENDATION: There is insufficient evidence to recommend the use of endoscopic guidance for routine ventricular catheter placement. Level I, high degree of clinical certainty. The routine use of ultrasound-assisted catheter placement is an option. Level III, unclear clinical certainty. The routine use of computer-assisted electromagnetic (EM) navigation is an option. Level III, unclear clinical certainty.

  15. TOOLKIT, Version 2. 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, E.; Bagot, B.; McNeill, R.L.

    1990-05-09

    The purpose of this User's Guide is to show by example many of the features of Toolkit II. Some examples will be copies of screens as they appear while running the Toolkit. Other examples will show what the user should enter in various situations; in these instances, what the computer asserts will be in boldface and what the user responds will be in regular type. The User's Guide is divided into four sections. The first section, "FOCUS Databases", will give a broad overview of the Focus administrative databases that are available on the VAX; easy-to-use reports are available for most of them in the Toolkit. The second section, "Getting Started", will cover the steps necessary to log onto the Computer Center VAX cluster and how to start Focus and the Toolkit. The third section, "Using the Toolkit", will discuss some of the features in the Toolkit -- the available reports and how to access them, as well as some utilities. The fourth section, "Helpful Hints", will cover some useful facts about the VAX and Focus as well as some of the more common problems that can occur. The Toolkit is not set in concrete but is continually being revised and improved. If you have any opinions as to changes that you would like to see made to the Toolkit or new features that you would like included, please let us know. Since we do try to respond to the needs of the user and make periodic improvements to the Toolkit, this User's Guide may not correspond exactly to what is available in the computer. In general, changes are made to provide new options or features; rarely is an existing feature deleted.

  16. An interactive system for computer-aided diagnosis of breast masses.

    PubMed

    Wang, Xingwei; Li, Lihua; Liu, Wei; Xu, Weidong; Lederman, Dror; Zheng, Bin

    2012-10-01

    Although mammography is the only clinically accepted imaging modality for screening the general population to detect breast cancer, interpreting mammograms is difficult, with relatively low sensitivity and specificity. To provide radiologists "a visual aid" in interpreting mammograms, we developed and tested an interactive system for computer-aided detection and diagnosis (CAD) of mass-like cancers. Using this system, an observer can view CAD-cued mass regions depicted on one image and then query any suspicious regions (either cued or not cued by CAD). The CAD scheme automatically segments the suspicious region (or accepts a manually defined region) and computes a set of image features. Using a content-based image retrieval (CBIR) algorithm, CAD searches for a set of reference images depicting "abnormalities" similar to the queried region. Based on the image retrieval results and a decision algorithm, a classification score is assigned to the queried region. In this study, a reference database with 1,800 malignant mass regions and 1,800 benign and CAD-generated false-positive regions was used. A modified CBIR algorithm, with a new function that stretches the attributes in the multi-dimensional feature space, and the decision scheme were optimized using a genetic algorithm. Using a leave-one-out testing method to classify suspicious mass regions, we compared the classification performance using two CBIR algorithms with either equally weighted or optimally stretched attributes. Using the modified CBIR algorithm, the area under the receiver operating characteristic curve was significantly increased from 0.865 ± 0.006 to 0.897 ± 0.005 (p < 0.001). This study demonstrated the feasibility of developing an interactive CAD system with a large reference database and achieving improved performance.
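The retrieve-then-score step described in this abstract can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the weighted Euclidean distance, the stretching weights, and the malignant-fraction score below are assumptions standing in for the paper's GA-optimized attribute stretching and decision scheme.

```python
import math

def retrieve_similar(query, reference_db, weights, k=5):
    """Rank reference regions by weighted (stretched) Euclidean distance.

    Each entry in reference_db is (feature_vector, label), with label 1 for
    malignant and 0 for benign/false-positive.  The weights vector plays the
    role of the per-attribute stretching factors that, in the paper, are
    tuned by a genetic algorithm."""
    def distance(a, b):
        return math.sqrt(sum(w * (x - y) ** 2
                             for w, x, y in zip(weights, a, b)))
    ranked = sorted(reference_db, key=lambda entry: distance(query, entry[0]))
    return ranked[:k]

def classification_score(query, reference_db, weights, k=5):
    """Score the queried region as the malignant fraction of its k neighbors."""
    neighbors = retrieve_similar(query, reference_db, weights, k)
    return sum(label for _, label in neighbors) / k
```

With equal weights this reduces to plain CBIR; tuning the weights per attribute is what "stretching the attributes in the multi-dimensional space" refers to.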

  17. Use of redundant sets of landmark information by humans (Homo sapiens) in a goal-searching task in an open field and on a computer screen.

    PubMed

    Sekiguchi, Katsuo; Ushitani, Tomokazu; Sawa, Kosuke

    2018-05-01

    Landmark-based goal-searching tasks that were similar to those for pigeons (Ushitani & Jitsumori, 2011) were provided to human participants to investigate whether they could learn and use multiple sources of spatial information that redundantly indicate the position of a hidden target in both an open field (Experiment 1) and on a computer screen (Experiments 2 and 3). During the training in each experiment, participants learned to locate a target in 1 of 25 objects arranged in a 5 × 5 grid, using two differently colored, arrow-shaped (Experiments 1 and 2) or asymmetrically shaped (Experiment 3) landmarks placed adjacent to the goal and pointing to the goal location. The absolute location and directions of the landmarks varied across trials, but the constant configuration of the goal and the landmarks enabled participants to find the goal using both global configural information and local vector information (pointing to the goal by each individual landmark). On subsequent test trials, the direction was changed for one of the landmarks to conflict with the global configural information. Results of Experiment 1 indicated that participants used vector information from a single landmark but not configural information. Further examinations revealed that the use of global (metric) information was enhanced remarkably by goal searching with nonarrow-shaped landmarks on the computer monitor (Experiment 3) but much less so with arrow-shaped landmarks (Experiment 2). The General Discussion focuses on a comparison between humans in the current study and pigeons in the previous study. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  18. Research on computer virus database management system

    NASA Astrophysics Data System (ADS)

    Qi, Guoquan

    2011-12-01

    The growing proliferation of computer viruses has become a lethal threat and a research focus in network information security. New viruses keep emerging, the total number of viruses keeps growing, and virus classification grows increasingly complex. Virus naming cannot be unified because agencies capture samples at different times. Although each agency maintains its own virus database, the databases rarely communicate with one another, and virus information is often incomplete or based on only a small number of samples. This paper reviews the current state of virus database construction at home and abroad, analyzes how to standardize and completely describe virus characteristics, and then presents a design scheme for a computer virus database that addresses information integrity, storage security, and manageability.

  19. The New Screen Time: Computers, Tablets, and Smartphones Enter the Equation

    ERIC Educational Resources Information Center

    Wiles, Bradford B.; Schachtner, Laura; Pentz, Julie L.

    2016-01-01

    Emerging technologies attract children and push parents' and caregivers' abilities to attend to their families. This article presents recommendations related to the new version of screen time, which includes time with computers, tablets, and smartphones. Recommendations are provided for screen time for very young children and those in middle and…

  20. Motivational Screen Design Guidelines for Effective Computer-Mediated Instruction.

    ERIC Educational Resources Information Center

    Lee, Sung Heum; Boling, Elizabeth

    Screen designers for computer-mediated instruction (CMI) products must consider the motivational appeal of their designs. Although learners may be motivated to use CMI programs initially because of their novelty, this effect wears off and the instruction must stand on its own. Instructional screens must provide effective and efficient instruction,…

  1. 77 FR 72335 - Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-05

    ... computer networks, systems, or databases. The records contain the individual's name; social security number... control and track access to DLA-controlled networks, computer systems, and databases. The records may also...

  2. Cohort profile: the National Health Insurance Service-National Health Screening Cohort (NHIS-HEALS) in Korea

    PubMed Central

    Seong, Sang Cheol; Kim, Yeon-Yong; Park, Sue K; Khang, Young Ho; Kim, Hyeon Chang; Park, Jong Heon; Kang, Hee-Jin; Do, Cheol-Ho; Song, Jong-Sun; Lee, Eun-Joo; Ha, Seongjun; Shin, Soon Ae; Jeong, Seung-Lyeal

    2017-01-01

    Purpose The National Health Insurance Service-Health Screening Cohort (NHIS-HEALS) is a cohort of participants who participated in health screening programmes provided by the NHIS in the Republic of Korea. The NHIS constructed the NHIS-HEALS cohort database in 2015. The purpose of this cohort is to offer relevant and useful data for health researchers, especially in the field of non-communicable diseases and health risk factors, and for policy-makers. Participants To construct the NHIS-HEALS database, a sample cohort was first selected from the 2002 and 2003 health screening participants, who were aged between 40 and 79 in 2002 and followed up through 2013. This cohort included 514 866 health screening participants who comprised a random selection of 10% of all health screening participants in 2002 and 2003. Findings to date The age-standardised prevalence of anaemia, diabetes mellitus, hypertension, obesity, hypercholesterolaemia and abnormal urine protein were 9.8%, 8.2%, 35.6%, 2.7%, 14.2% and 2.0%, respectively. The age-standardised mortality rate for the first 2 years (through 2004) was 442.0 per 100 000 person-years, while the rate for 10 years (through 2012) was 865.9 per 100 000 person-years. The most common cause of death was malignant neoplasm in both sexes (364.1 per 100 000 person-years for men, 128.3 per 100 000 person-years for women). Future plans This database can be used to study the risk factors of non-communicable diseases and dental health problems, which are important health issues that have not yet been fully investigated. The cohort will be maintained and continuously updated by the NHIS. PMID:28947447

  3. Definition and maintenance of a telemetry database dictionary

    NASA Technical Reports Server (NTRS)

    Knopf, William P. (Inventor)

    2007-01-01

    A telemetry dictionary database includes a component for receiving spreadsheet workbooks of telemetry data over a web-based interface from other computer devices. Another component routes the spreadsheet workbooks to a specified directory on the host processing device. A process then checks the received spreadsheet workbooks for errors, and if no errors are detected the spreadsheet workbooks are routed to another directory to await initiation of a remote database loading process. The loading process first converts the spreadsheet workbooks to comma separated value (CSV) files. Next, a network connection with the computer system that hosts the telemetry dictionary database is established and the CSV files are ported to the computer system that hosts the telemetry dictionary database. This is followed by a remote initiation of a database loading program. Upon completion of loading a flatfile generation program is manually initiated to generate a flatfile to be used in a mission operations environment by the core ground system.
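The workbook-to-CSV conversion step in the loading process can be sketched as follows. This is a minimal illustration under the assumption that each worksheet has already been read into a list of row tuples; the patented system's actual converter, column layout, and mnemonics are not described in the source, so the names here are hypothetical.

```python
import csv
import io

def worksheet_to_csv(rows):
    """Serialize one worksheet (a list of row tuples) as CSV text,
    mirroring the conversion step that precedes porting the files to the
    host of the telemetry dictionary database.  Quoting is handled by the
    csv module, so embedded commas in descriptions survive the round trip."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerows(rows)
    return buf.getvalue()

# Hypothetical worksheet: a header row plus one telemetry point.
example = worksheet_to_csv([("MNEMONIC", "DESCRIPTION"),
                            ("TLM_01", "Bus voltage, V")])
```

A comma-separated value file produced this way can then be bulk-loaded by the remote database loading program.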

  4. Atrial Fibrillation Screening in Nonmetropolitan Areas Using a Telehealth Surveillance System With an Embedded Cloud-Computing Algorithm: Prospective Pilot Study

    PubMed Central

    Chen, Ying-Hsien; Hung, Chi-Sheng; Huang, Ching-Chang; Hung, Yu-Chien

    2017-01-01

    Background Atrial fibrillation (AF) is a common form of arrhythmia that is associated with increased risk of stroke and mortality. Detecting AF before the first complication occurs is a recognized priority. No previous studies have examined the feasibility of undertaking AF screening using a telehealth surveillance system with an embedded cloud-computing algorithm; we address this issue in this study. Objective The objective of this study was to evaluate the feasibility of AF screening in nonmetropolitan areas using a telehealth surveillance system with an embedded cloud-computing algorithm. Methods We conducted a prospective AF screening study in a nonmetropolitan area using a single-lead electrocardiogram (ECG) recorder. All ECG measurements were reviewed on the telehealth surveillance system and interpreted by the cloud-computing algorithm and a cardiologist. The process of AF screening was evaluated with a satisfaction questionnaire. Results Between March 11, 2016 and August 31, 2016, 967 ECGs were recorded from 922 residents in nonmetropolitan areas. A total of 22 (2.4%, 22/922) residents with AF were identified by the physician’s ECG interpretation, and only 0.2% (2/967) of ECGs contained significant artifacts. The novel cloud-computing algorithm for AF detection had a sensitivity of 95.5% (95% CI 77.2%-99.9%) and specificity of 97.7% (95% CI 96.5%-98.5%). The overall satisfaction score for the process of AF screening was 92.1%. Conclusions AF screening in nonmetropolitan areas using a telehealth surveillance system with an embedded cloud-computing algorithm is feasible. PMID:28951384
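The reported operating characteristics reduce to the standard confusion-matrix formulas. The counts below are illustrative choices that approximately reproduce the quoted point estimates (95.5% sensitivity, 97.7% specificity); they are not the study's actual confusion matrix.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): fraction of true AF cases the
    algorithm flags.  Specificity = TN / (TN + FP): fraction of non-AF
    ECGs it correctly passes."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 21 of 22 AF cases detected, 879 of 900 non-AF correct.
sens, spec = sensitivity_specificity(tp=21, fn=1, tn=879, fp=21)
```

With these counts, sensitivity is 21/22 ≈ 0.955 and specificity 879/900 ≈ 0.977, matching the order of the reported figures.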

  5. Cost-effectiveness of screening high-risk HIV-positive men who have sex with men (MSM) and HIV-positive women for anal cancer.

    PubMed

    Czoski-Murray, C; Karnon, J; Jones, R; Smith, K; Kinghorn, G

    2010-11-01

    Anal cancer is uncommon and predominantly a disease of the elderly. The human papillomavirus (HPV) has been implicated as a causal agent, and HPV infection is usually transmitted sexually. Individuals who are human immunodeficiency virus (HIV)-positive are particularly vulnerable to HPV infections, and increasing numbers from this population present with anal cancer. To estimate the cost-effectiveness of screening for anal cancer in the high-risk HIV-positive population [in particular, men who have sex with men (MSM), who have been identified as being at greater risk of the disease] by developing a model that incorporates the national screening guidelines criteria. A comprehensive literature search was undertaken in January 2006 (updated in November 2006). The following electronic bibliographic databases were searched: Applied Social Sciences Index and Abstracts (ASSIA), BIOSIS previews (Biological Abstracts), British Nursing Index (BNI), Cumulative Index to Nursing and Allied Health Literature (CINAHL), Cochrane Database of Systematic Reviews (CDSR), Cochrane Central Register of Controlled Trials (CENTRAL), EMBASE, MEDLINE, MEDLINE In-Process & Other Non-Indexed Citations, NHS Database of Abstracts of Reviews of Effects (DARE), NHS Health Technology Assessment (HTA) Database, PsycINFO, Science Citation Index (SCI), and Social Sciences Citation Index (SSCI). Published literature identified by the search strategy was assessed by four reviewers. Papers that met the inclusion criteria contained the following: data on population incidence, effectiveness of screening, health outcomes or screening and/or treatment costs; defined suitable screening technologies; prospectively evaluated tests to detect anal cancer. Foreign-language papers were excluded. Searches identified 2102 potential papers; 1403 were rejected at title and a further 493 at abstract. From 206 papers retrieved, 81 met the inclusion criteria. 
A further treatment paper was added, giving a total of 82 papers included. Data from included studies were extracted into data extraction forms by the clinical effectiveness reviewer. To analyse the cost-effectiveness of screening, two decision-analytical models were developed and populated. The reference case cost-effectiveness model for MSM found that screening for anal cancer is very unlikely to be cost-effective. The negative aspects of screening included utility decrements associated with false-positive results and with treatment for high-grade anal intraepithelial neoplasia (HG-AIN). Sensitivity analyses showed that removing these utility decrements improved the cost-effectiveness of screening. However, combined with higher regression rates from low-grade anal intraepithelial neoplasia (LG-AIN), the lowest expected incremental cost-effectiveness ratio remained at over 44,000 pounds per quality-adjusted life-year (QALY) gained. Probabilistic sensitivity analysis showed that no screening retained over 50% probability of cost-effectiveness to a QALY value of 50,000 pounds. The screening model for HIV-positive women showed an even lower likelihood of cost-effectiveness, with the most favourable sensitivity analyses reporting an incremental cost per QALY of 88,000 pounds. Limited knowledge is available about the epidemiology and natural history of anal cancer, along with a paucity of good-quality evidence concerning the effectiveness of screening. Many of the criteria for assessing the need for a screening programme were not met and the cost-effectiveness analyses showed little likelihood that screening any of the identified high-risk groups would generate health improvements at a reasonable cost. Further studies could assess whether the screening model has underestimated the impact of anal cancer, the results of which may justify an evaluative study of the effects of treatment for HG-AIN.
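The headline figures here are incremental cost-effectiveness ratios (ICERs): the extra cost of screening divided by the QALYs it gains over no screening. A minimal sketch of that calculation, with purely illustrative numbers chosen only so the ratio lands at the 44,000-pounds-per-QALY order of magnitude quoted above:

```python
def icer(cost_new, qaly_new, cost_base, qaly_base):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained
    when moving from the base strategy (e.g. no screening) to the new
    strategy (e.g. screening)."""
    d_cost = cost_new - cost_base
    d_qaly = qaly_new - qaly_base
    if d_qaly <= 0:
        # New strategy gains no QALYs: it is dominated (or the ratio is undefined).
        raise ValueError("new strategy gains no QALYs; ICER undefined")
    return d_cost / d_qaly

# Hypothetical: screening costs 22,000 more and gains 0.5 QALYs -> 44,000/QALY.
example = icer(cost_new=22000.0, qaly_new=2.5, cost_base=0.0, qaly_base=2.0)
```

A probabilistic sensitivity analysis, as in the study, would recompute this ratio over many draws of the model parameters and report the fraction of draws below a willingness-to-pay threshold such as 50,000 pounds per QALY.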

  6. DockoMatic: automated peptide analog creation for high throughput virtual screening.

    PubMed

    Jacob, Reed B; Bullock, Casey W; Andersen, Tim; McDougal, Owen M

    2011-10-01

    The purpose of this manuscript is threefold: (1) to describe an update to DockoMatic that allows the user to generate cyclic peptide analog structure files based on protein database (pdb) files, (2) to test the accuracy of the peptide analog structure generation utility, and (3) to evaluate the high throughput capacity of DockoMatic. The DockoMatic graphical user interface interfaces with the software program Treepack to create user-defined peptide analogs. To validate this approach, DockoMatic-produced cyclic peptide analogs were tested for three-dimensional structure consistency and binding affinity against four experimentally determined peptide structure files available in the Research Collaboratory for Structural Bioinformatics database. The peptides used to evaluate this new functionality were alpha-conotoxins ImI, PnIA, and their published analogs. Peptide analogs were generated by DockoMatic and tested for their ability to bind to X-ray crystal structure models of the acetylcholine binding protein originating from Aplysia californica. The results, consisting of more than 300 simulations, demonstrate that DockoMatic predicts the binding energy of peptide structures to within 3.5 kcal mol(-1), and that the orientation of the bound ligand agrees with experimental data to within 1.8 Å root mean square deviation. Evaluation of high throughput virtual screening capacity demonstrated that DockoMatic can collect, evaluate, and summarize the output of 10,000 AutoDock jobs in less than 2 hours of computational time, while 100,000 jobs require approximately 15 hours and 1,000,000 jobs are estimated to take up to a week. Copyright © 2011 Wiley Periodicals, Inc.
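The 1.8 Å figure above is a root mean square deviation between docked and experimentally determined ligand coordinates. A hedged sketch of the standard alignment-free RMSD formula (not DockoMatic's own code, which is not shown in the source):

```python
import math

def rmsd(coords_a, coords_b):
    """Root mean square deviation between two equal-length lists of
    (x, y, z) atom coordinates, as used to compare a docked pose against
    the reference ligand.  No superposition/alignment is performed, which
    is appropriate when both poses share the receptor's coordinate frame."""
    if len(coords_a) != len(coords_b):
        raise ValueError("coordinate lists must have equal length")
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))
```

A pose shifted rigidly by 1 Å along one axis, for example, has an RMSD of exactly 1 Å.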

  7. 3D pharmacophore-based virtual screening, docking and density functional theory approach towards the discovery of novel human epidermal growth factor receptor-2 (HER2) inhibitors.

    PubMed

    Gogoi, Dhrubajyoti; Baruah, Vishwa Jyoti; Chaliha, Amrita Kashyap; Kakoti, Bibhuti Bhushan; Sarma, Diganta; Buragohain, Alak Kumar

    2016-12-21

    Human epidermal growth factor receptor 2 (HER2) is one of the four members of the epidermal growth factor receptor (EGFR) family and is expressed to facilitate cellular proliferation across various tissue types. Therapies targeting HER2, which is a transmembrane glycoprotein with tyrosine kinase activity, offer promising prospects especially in breast and gastric/gastroesophageal cancer patients. Persistence of both primary and acquired resistance to various routine drugs/antibodies is a disappointing outcome in the treatment of many HER2-positive cancer patients and is a challenge that requires the formulation of new and improved strategies to overcome it. Identification of novel HER2 inhibitors with an improved therapeutic index was performed with a highly correlating (r=0.975) ligand-based pharmacophore model (Hypo1) in this study. Hypo1 was generated from a training set of 22 compounds with HER2 inhibitory activity, and this well-validated hypothesis was subsequently used as a 3D query to screen compounds in a total of four databases, of which two were natural product databases. Further, these compounds were analyzed for compliance with Veber's drug-likeness rule and optimum ADMET parameters. The selected compounds were then subjected to molecular docking and Density Functional Theory (DFT) analysis to discern their molecular interactions at the active site of HER2. The findings thus presented would be an important starting point towards the development of novel HER2 inhibitors using well-validated computational techniques. Copyright © 2016 Elsevier Ltd. All rights reserved.
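Veber's drug-likeness rule, applied above as a filter, is a simple descriptor test: no more than 10 rotatable bonds and a topological polar surface area (TPSA) of at most 140 Å². A sketch of such a filter; in practice the descriptors would be computed by a cheminformatics toolkit, and the candidate tuples here are hypothetical:

```python
def passes_veber(rotatable_bonds, tpsa):
    """Veber's rule for likely oral bioavailability: at most 10 rotatable
    bonds and a topological polar surface area of at most 140 A^2."""
    return rotatable_bonds <= 10 and tpsa <= 140.0

def filter_hits(candidates):
    """Keep only screening hits that satisfy Veber's rule.
    Each candidate is a (name, rotatable_bonds, tpsa) tuple."""
    return [name for name, rb, tpsa in candidates if passes_veber(rb, tpsa)]
```

Such a filter would sit between the pharmacophore search and the docking stage, discarding flexible or highly polar hits before the more expensive calculations.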

  8. ESIM: Edge Similarity for Screen Content Image Quality Assessment.

    PubMed

    Ni, Zhangkai; Ma, Lin; Zeng, Huanqiang; Chen, Jing; Cai, Canhui; Ma, Kai-Kuang

    2017-10-01

    In this paper, an accurate full-reference image quality assessment (IQA) model developed for assessing screen content images (SCIs), called the edge similarity (ESIM) model, is proposed. It is inspired by the fact that the human visual system (HVS) is highly sensitive to edges, which are often encountered in SCIs; therefore, essential edge features are extracted and exploited for conducting IQA of SCIs. The key novelty of the proposed ESIM lies in the extraction and use of three salient edge features: edge contrast, edge width, and edge direction. The first two attributes are simultaneously generated from the input SCI based on a parametric edge model, while the last one is derived directly from the input SCI. The extraction of these three features is performed for the reference SCI and the distorted SCI individually. The degree of similarity measured for each above-mentioned edge attribute is then computed independently, and the similarities are combined using our proposed edge-width pooling strategy to generate the final ESIM score. To conduct the performance evaluation of our proposed ESIM model, a new and the largest SCI database (denoted SCID) was established in our work and made publicly available for download. Our database contains 1800 distorted SCIs that are generated from 40 reference SCIs. For each SCI, nine distortion types are investigated, and five degradation levels are produced for each distortion type. Extensive simulation results have clearly shown that the proposed ESIM model is more consistent with the perception of the HVS in the evaluation of distorted SCIs than multiple state-of-the-art IQA methods.
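The per-attribute similarity and pooling steps can be illustrated with the familiar SSIM-style form s = (2ab + C) / (a^2 + b^2 + C). The actual ESIM formulation, constants, and edge-width pooling weights differ from what is shown here, so treat this as a structural sketch only:

```python
def similarity_map(feat_ref, feat_dist, c=1e-4):
    """SSIM-style per-pixel similarity of one edge attribute (e.g. edge
    contrast) between the reference and distorted images.  The map equals
    1 wherever the two feature values agree; c is a small stabilizing
    constant for near-zero responses."""
    return [(2 * a * b + c) / (a * a + b * b + c)
            for a, b in zip(feat_ref, feat_dist)]

def pooled_score(sim_map, weights):
    """Weighted pooling of a similarity map into a single quality score.
    The paper pools with edge-width-derived weights; here the weights are
    simply supplied by the caller."""
    total = sum(weights)
    return sum(s * w for s, w in zip(sim_map, weights)) / total
```

For an undistorted image the similarity map is identically 1 and any pooling yields a score of 1; distortions pull the pooled score toward 0, weighted most heavily where the weights (edge widths, in the paper) are largest.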

  10. The SQL Server Database for Non Computer Professional Teaching Reform

    ERIC Educational Resources Information Center

    Liu, Xiangwei

    2012-01-01

    This paper summarizes teaching methods for the SQL Server database course taken by non-computer majors and analyzes the current state of the course. Based on the characteristics of the non-computer-major curriculum, it puts forward several teaching reform methods and puts them into practice, improving students' analytical ability, practical ability and…

  11. Data management and language enhancement for generalized set theory computer language for operation of large relational databases

    NASA Technical Reports Server (NTRS)

    Finley, Gail T.

    1988-01-01

    This report covers the study of the relational database implementation in the NASCAD computer program system. The existing system is used primarily for computer aided design. Attention is also directed to a hidden-surface algorithm for final drawing output.

  12. Association between screen viewing duration and sleep duration, sleep quality, and excessive daytime sleepiness among adolescents in Hong Kong.

    PubMed

    Mak, Yim Wah; Wu, Cynthia Sau Ting; Hui, Donna Wing Shun; Lam, Siu Ping; Tse, Hei Yin; Yu, Wing Yan; Wong, Ho Ting

    2014-10-28

    Screen viewing is considered to have adverse impacts on the sleep of adolescents. Although there has been a considerable amount of research on the association between screen viewing and sleep, most studies have focused on specific types of screen viewing devices such as televisions and computers. The present study investigated the duration with which currently prevalent screen viewing devices (including televisions, personal computers, mobile phones, and portable video devices) are viewed in relation to sleep duration, sleep quality, and daytime sleepiness among Hong Kong adolescents (N = 762). Television and computer viewing remain prevalent, but were not correlated with sleep variables. Mobile phone viewing was correlated with all sleep variables, while portable video device viewing was shown to be correlated only with daytime sleepiness. The results demonstrated a trend of increase in the prevalence and types of screen viewing and their effects on the sleep patterns of adolescents.

  13. Screening and identification of potential PTP1B allosteric inhibitors using in silico and in vitro approaches.

    PubMed

    Shinde, Ranajit Nivrutti; Kumar, G Siva; Eqbal, Shahbaz; Sobhia, M Elizabeth

    2018-01-01

    Protein tyrosine phosphatase 1B (PTP1B) is a validated therapeutic target for Type 2 diabetes due to its specific role as a negative regulator of insulin signaling pathways. Discovery of active-site-directed PTP1B inhibitors is very challenging owing to the highly conserved nature of the active site and the multiple charge requirements of the ligands, which make them non-selective and non-permeable. Identification of the PTP1B allosteric site has opened up new avenues for discovering potent and selective ligands for therapeutic intervention. Interactions made by a potent allosteric inhibitor bound to PTP1B were studied using Molecular Dynamics (MD). Computationally optimized models were used to build separate pharmacophore models of PTP1B and TCPTP. Based on the nature of the interactions offered by the target residues, a receptor-based pharmacophore was developed. This pharmacophore, which accounts for the conformational flexibility of the residues, was used to develop a pharmacophore hypothesis for identifying potentially active inhibitors by screening large compound databases. Two pharmacophores were used successively in the virtual screening protocol to identify potentially selective and permeable inhibitors of PTP1B. The allosteric inhibition mechanism of these molecules was established using molecular docking and MD methods. The geometrical criteria values confirmed their ability to stabilize PTP1B in an open conformation. Twenty-three molecules identified as potential inhibitors were screened for PTP1B inhibitory activity. After screening, 10 molecules with good permeability values were identified as potential inhibitors of PTP1B. This study confirms that selective and permeable inhibitors can be identified by targeting the allosteric site of PTP1B.

  14. Evolution of Breast Cancer Screening in the Medicare Population: Clinical and Economic Implications

    PubMed Central

    Killelea, Brigid K.; Long, Jessica B.; Chagpar, Anees B.; Ma, Xiaomei; Wang, Rong; Ross, Joseph S.

    2014-01-01

    Background Newer approaches to mammography, including digital image acquisition and computer-aided detection (CAD), and adjunct imaging (e.g., magnetic resonance imaging [MRI]) have diffused into clinical practice. The impact of these technologies on screening-related cost and outcomes remains undefined, particularly among older women. Methods Using the Surveillance, Epidemiology, and End Results–Medicare linked database, we constructed two cohorts of women without a history of breast cancer and followed each cohort for 2 years. We compared the use and cost of screening mammography including digital mammography and CAD, adjunct procedures including breast ultrasound, MRI, and biopsy between the period of 2001 and 2002 and the period of 2008 and 2009 using χ2 and t test. We also assessed the change in breast cancer stage and incidence rates using χ2 and Poisson regression. All statistical tests were two-sided. Results There were 137,150 women (mean age = 76.0 years) in the early cohort (2001–2002) and 133,097 women (mean age = 77.3 years) in the later cohort (2008–2009). The use of digital image acquisition for screening mammography increased from 2.0% in 2001 and 2002 to 29.8% in 2008 and 2009 (P < .001). CAD use increased from 3.2% to 33.1% (P < .001). Average screening-related cost per capita increased from $76 to $112 (P < .001), with annual national fee-for-service Medicare spending increasing from $666 million to $962 million. There was no statistically significant change in detection rates of early-stage tumors (2.45 vs 2.57 per 1000 person-years; P = .41). Conclusions Although breast cancer screening–related costs increased substantially from 2001 through 2009 among Medicare beneficiaries, a clinically significant change in stage at diagnosis was not observed. PMID:25031307

  15. Online Catalog Screen Displays. A Series of Discussions. Report of a Conference Sponsored by the Council on Library Resources (Austin, Texas, March 10-13, 1985).

    ERIC Educational Resources Information Center

    Williams, Joan Frye, Ed.

    Papers presented and summaries of discussions at a 3-day conference which focused on screen displays for online catalogs are included in this report. Papers presented were: (1) "Suggested Guidelines for Screen Layouts and Design of Online Catalogs" (Joseph R. Matthews); (2) "Displays in Database Search Systems" (Fran Spigai);…

  16. Application of the stochastic tunneling method to high throughput database screening

    NASA Astrophysics Data System (ADS)

    Merlitz, H.; Burghardt, B.; Wenzel, W.

    2003-03-01

    The stochastic tunneling technique is applied to screen a database of chemical compounds against the active site of dihydrofolate reductase for lead candidates in the receptor-ligand docking problem. Using an atomistic force field, we consider the ligand's internal rotational degrees of freedom. It is shown that the natural ligand (methotrexate) scores best among 10 000 randomly chosen compounds. We analyze the top-scoring compounds to identify hot spots of the receptor. We mutate the amino acids that are responsible for the hot spots of the receptor and verify that its specificity is lost upon modification.
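The stochastic tunneling transformation replaces the raw energy E(x) with f_STUN(x) = 1 - exp(-gamma * (E(x) - E0)), where E0 is the best energy found so far: all barriers above E0 are compressed toward 1, so the walker can cross them. A toy one-dimensional sketch follows; the paper applies the method to an atomistic docking force field with many degrees of freedom, and the Metropolis parameters below are arbitrary choices, not the authors' settings.

```python
import math
import random

def stun_minimize(energy, x0, gamma=1.0, step=0.5, beta=5.0,
                  n_iter=5000, seed=42):
    """Toy stochastic tunneling (STUN) minimizer in one dimension.

    Metropolis sampling is performed on the transformed surface
    f(x) = 1 - exp(-gamma * (E(x) - E_best)), which is 0 at the best
    point found so far and flattens toward 1 above it, letting the
    walker 'tunnel' through high barriers between wells."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for _ in range(n_iter):
        x_new = x + rng.uniform(-step, step)
        e_new = energy(x_new)
        if e_new < best_e:                      # new global best: re-anchor f
            best_x, best_e = x_new, e_new
        f_cur = 1.0 - math.exp(-gamma * (e - best_e))
        f_new = 1.0 - math.exp(-gamma * (e_new - best_e))
        # Accept downhill moves on f always; uphill moves with Metropolis prob.
        if f_new <= f_cur or rng.random() < math.exp(-beta * (f_new - f_cur)):
            x, e = x_new, e_new
    return best_x, best_e
```

On a double-well energy such as E(x) = (x^2 - 4)^2 + 0.2 x, a walker started in the shallow well can reach the deeper one because the transformed barrier between the wells is nearly flat.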

  17. Adolescent substance use screening in primary care: validity of computer self-administered vs. clinician-administered screening

    PubMed Central

    Harris, Sion Kim; Knight, John R; Van Hook, Shari; Sherritt, Lon; Brooks, Traci; Kulig, John W; Nordt, Christina; Saitz, Richard

    2015-01-01

    Background Computer self-administration may help busy pediatricians' offices increase adolescent substance use screening rates efficiently and effectively, if proven to yield valid responses. The CRAFFT screening protocol for adolescents has demonstrated validity as an interview, but a computer self-entry approach needs validity testing. The aim of this study was to evaluate the criterion validity and time efficiency of a computerized adolescent substance use screening protocol implemented by self-administration or clinician-administration. Methods 12- to 17-year-old patients coming for routine care at three primary care clinics completed the computerized screen by both self-administration and clinician-administration during their visit. To account for order effects, we randomly assigned participants to self-administer the screen either before or after seeing their clinician. Both were conducted using a tablet computer and included identical items (any past-12-month use of tobacco, alcohol, or drugs; past-3-months frequency of each; and six CRAFFT items). The criterion measure for substance use was the Timeline Follow-Back, and for alcohol/drug use disorder, the Adolescent Diagnostic Interview, both conducted by confidential research-assistant interview after the visit. Tobacco dependence risk was assessed with the self-administered Hooked on Nicotine Checklist (HONC). Analyses accounted for the multi-site cluster sampling design. Results Among 136 participants, mean age was 15.0±1.5 yrs, 54% were girls, 53% were Black or Hispanic, and 67% had ≥3 prior visits with their clinician. Twenty-seven percent reported any substance use (including tobacco) in the past 12 months, 7% met criteria for an alcohol or cannabis use disorder, and 4% were HONC-positive. Sensitivity/specificity of the screener were high for detecting past-12-month use or disorder and did not differ between computer and clinician.
Mean completion time was 49 seconds (95%CI 44-54) for computer and 74 seconds (95%CI 68-87) for clinician (paired comparison p<0.001). Conclusions Substance use screening by computer self-entry is a valid and time-efficient alternative to clinician-administered screening. PMID:25774878

  18. StreptomycesInforSys: A web-enabled information repository

    PubMed Central

    Jain, Chakresh Kumar; Gupta, Vidhi; Gupta, Ashvarya; Gupta, Sanjay; Wadhwa, Gulshan; Sharma, Sanjeev Kumar; Sarethy, Indira P

    2012-01-01

    Members of the genus Streptomyces produce 70% of natural bioactive products. A considerable amount of information, based on the polyphasic approach to classification of Streptomyces, is available. This information, drawn from phenotypic, genotypic and bioactive-component production profiles, is crucial for pharmacological screening programmes, but it is scattered across journals, books and other resources, many of which are not freely accessible. The designed database incorporates polyphasic typing information with combinations of search options to aid efficient screening of new isolates and their preliminary categorization into appropriate groups. It is a free relational database compatible with existing operating systems. The database was developed with the cross-platform XAMPP web server stack, which also manages and serves user queries. PHP, a platform-independent scripting language embedded in HTML, together with the MySQL database management system, facilitates dynamic information storage and retrieval. The user-friendly, open and flexible freeware stack (PHP, MySQL and Apache) is expected to reduce running and maintenance costs. Availability: www.sis.biowaves.org PMID:23275736
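
    The record above describes a PHP/MySQL relational database with combined search options for polyphasic typing. A minimal sketch of that kind of combined query, using Python's stdlib sqlite3 in place of MySQL; the table, columns, and rows are hypothetical, since the record does not publish its schema:

```python
# Hypothetical polyphasic-typing table queried with combined search criteria,
# analogous to the combined-search options StreptomycesInforSys describes.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE isolates (
    name TEXT, spore_color TEXT, melanin INTEGER, bioactive TEXT)""")
conn.executemany(
    "INSERT INTO isolates VALUES (?, ?, ?, ?)",
    [("S. griseus", "grey", 0, "streptomycin"),
     ("S. coelicolor", "grey", 1, "actinorhodin"),
     ("S. albus", "white", 0, "salinomycin")])

# Combine a phenotypic and a genotypic-style criterion in one query.
rows = conn.execute(
    "SELECT name, bioactive FROM isolates "
    "WHERE spore_color = ? AND melanin = ?", ("grey", 1)).fetchall()
print(rows)  # → [('S. coelicolor', 'actinorhodin')]
```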

  19. AllerML: markup language for allergens.

    PubMed

    Ivanciuc, Ovidiu; Gendel, Steven M; Power, Trevor D; Schein, Catherine H; Braun, Werner

    2011-06-01

    Many concerns have been raised about the potential allergenicity of novel recombinant proteins introduced into food crops. Guidelines proposed by WHO/FAO and EFSA include the use of bioinformatics screening to assess the risk of potential allergenicity or cross-reactivity of all proteins introduced, for example, to improve nutritional value or promote crop resistance. However, there are no universally accepted standards for encoding data on the biology of allergens to facilitate using data from multiple databases in this screening. Therefore, we developed AllerML, a markup language for allergens, to assist in the automated exchange of information between databases and in the integration of the bioinformatics tools that are used to investigate allergenicity and cross-reactivity. As proof of concept, AllerML was implemented using the Structural Database of Allergenic Proteins (SDAP; http://fermi.utmb.edu/SDAP/). General implementation of AllerML will promote the automatic flow of validated data that will aid in allergy research and regulatory analysis. Copyright © 2011 Elsevier Inc. All rights reserved.
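
    A markup language like AllerML is consumed by parsing its elements into database fields. A minimal sketch with the stdlib XML parser; the element and attribute names and the sequence below are hypothetical illustrations, not the actual AllerML schema defined by the SDAP project:

```python
# Parse an AllerML-style allergen record (hypothetical tags, not the real schema).
import xml.etree.ElementTree as ET

doc = """<allergen id="Ara h 2">
  <source organism="Arachis hypogaea"/>
  <sequence>MAKLTILVALALFLLAAHA</sequence>
  <crossreactivity target="Ara h 6" evidence="IgE"/>
</allergen>"""

root = ET.fromstring(doc)
organism = root.find("source").get("organism")
targets = [x.get("target") for x in root.findall("crossreactivity")]
print(organism, targets)  # → Arachis hypogaea ['Ara h 6']
```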

  20. AllerML: Markup Language for Allergens

    PubMed Central

    Ivanciuc, Ovidiu; Gendel, Steven M.; Power, Trevor D.; Schein, Catherine H.; Braun, Werner

    2011-01-01

    Many concerns have been raised about the potential allergenicity of novel recombinant proteins introduced into food crops. Guidelines proposed by WHO/FAO and EFSA include the use of bioinformatics screening to assess the risk of potential allergenicity or cross-reactivity of all proteins introduced, for example, to improve nutritional value or promote crop resistance. However, there are no universally accepted standards for encoding data on the biology of allergens to facilitate using data from multiple databases in this screening. Therefore, we developed AllerML, a markup language for allergens, to assist in the automated exchange of information between databases and in the integration of the bioinformatics tools that are used to investigate allergenicity and cross-reactivity. As proof of concept, AllerML was implemented using the Structural Database of Allergenic Proteins (SDAP; http://fermi.utmb.edu/SDAP/). General implementation of AllerML will promote the automatic flow of validated data that will aid in allergy research and regulatory analysis. PMID:21420460

  1. StreptomycesInforSys: A web-enabled information repository.

    PubMed

    Jain, Chakresh Kumar; Gupta, Vidhi; Gupta, Ashvarya; Gupta, Sanjay; Wadhwa, Gulshan; Sharma, Sanjeev Kumar; Sarethy, Indira P

    2012-01-01

    Members of the genus Streptomyces produce 70% of natural bioactive products. A considerable amount of information, based on the polyphasic approach to classification of Streptomyces, is available. This information, drawn from phenotypic, genotypic and bioactive-component production profiles, is crucial for pharmacological screening programmes, but it is scattered across journals, books and other resources, many of which are not freely accessible. The designed database incorporates polyphasic typing information with combinations of search options to aid efficient screening of new isolates and their preliminary categorization into appropriate groups. It is a free relational database compatible with existing operating systems. The database was developed with the cross-platform XAMPP web server stack, which also manages and serves user queries. PHP, a platform-independent scripting language embedded in HTML, together with the MySQL database management system, facilitates dynamic information storage and retrieval. The user-friendly, open and flexible freeware stack (PHP, MySQL and Apache) is expected to reduce running and maintenance costs. www.sis.biowaves.org.

  2. tcpl: The ToxCast Pipeline for High-Throughput Screening Data

    EPA Science Inventory

    Motivation: The large and diverse high-throughput chemical screening efforts carried out by the US EPAToxCast program requires an efficient, transparent, and reproducible data pipeline.Summary: The tcpl R package and its associated MySQL database provide a generalized platform fo...

  3. Identification of critical chemical features for Aurora kinase-B inhibitors using Hip-Hop, virtual screening and molecular docking

    NASA Astrophysics Data System (ADS)

    Sakkiah, Sugunadevi; Thangapandian, Sundarapandian; John, Shalini; Lee, Keun Woo

    2011-01-01

    This study was performed to identify selective chemical features for Aurora kinase-B inhibitors using methods such as Hip-Hop, virtual screening, homology modeling, molecular dynamics and docking. The best hypothesis, Hypo1, was validated against a wide-ranging test set containing selective inhibitors of Aurora kinase-B. Homology modeling and molecular dynamics studies were carried out to support the molecular docking studies. Hypo1 was used as a 3D query to screen chemical databases. The screened molecules were filtered based on ADME and drug-like properties. The selected hit compounds were docked, and their hydrogen-bond interactions with the critical amino acids in Aurora kinase-B were compared with the chemical features present in Hypo1. Finally, we suggest that the chemical features present in Hypo1 are vital for a molecule to inhibit Aurora kinase-B activity.
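
    The ADME/drug-likeness filtering step mentioned above is commonly done with rules such as Lipinski's rule of five. A minimal sketch, assuming made-up property values rather than the study's actual screened molecules:

```python
# Lipinski rule-of-five filter, the standard drug-likeness screen; the compound
# names and property values here are illustrative, not from the study.
def passes_lipinski(mw, logp, hbd, hba):
    """True if the molecule violates at most one of Lipinski's four rules."""
    violations = sum([mw > 500, logp > 5, hbd > 5, hba > 10])
    return violations <= 1

hits = {
    "hit_A": dict(mw=342.4, logp=2.1, hbd=2, hba=5),
    "hit_B": dict(mw=612.7, logp=6.3, hbd=1, hba=9),   # two violations
}
druglike = [name for name, p in hits.items() if passes_lipinski(**p)]
print(druglike)  # → ['hit_A']
```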

  4. Method and system for rendering and interacting with an adaptable computing environment

    DOEpatents

    Osbourn, Gordon Cecil [Albuquerque, NM; Bouchard, Ann Marie [Albuquerque, NM

    2012-06-12

    An adaptable computing environment is implemented with software entities termed "s-machines", which self-assemble into hierarchical data structures capable of rendering and interacting with the computing environment. A hierarchical data structure includes a first hierarchical s-machine bound to a second hierarchical s-machine. The first hierarchical s-machine is associated with a first layer of a rendering region on a display screen and the second hierarchical s-machine is associated with a second layer of the rendering region overlaying at least a portion of the first layer. A screen element s-machine is linked to the first hierarchical s-machine. The screen element s-machine manages data associated with a screen element rendered to the display screen within the rendering region at the first layer.
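
    The layered structure the patent describes can be sketched in plain Python; this is only a structural analogue (the s-machine self-assembly mechanics are not modeled, and all names are illustrative):

```python
# Two layered render regions with a screen element bound to the lower layer,
# mirroring the first-layer/overlay relationship in the patent's claim.
class ScreenElement:
    def __init__(self, data):
        self.data = data          # data rendered at this element

class Layer:
    def __init__(self, z, parent=None):
        self.z = z                # stacking order within the rendering region
        self.parent = parent      # layer this one overlays, if any
        self.elements = []        # screen-element objects linked to this layer

base = Layer(z=0)
overlay = Layer(z=1, parent=base)          # overlays part of the base layer
base.elements.append(ScreenElement("button: OK"))
print(overlay.z, [e.data for e in base.elements])  # → 1 ['button: OK']
```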

  5. In silico screening for Plasmodium falciparum enoyl-ACP reductase inhibitors

    NASA Astrophysics Data System (ADS)

    Lindert, Steffen; Tallorin, Lorillee; Nguyen, Quynh G.; Burkart, Michael D.; McCammon, J. Andrew

    2015-01-01

    The need for novel therapeutics against Plasmodium falciparum is urgent due to recent emergence of multi-drug resistant malaria parasites. Since fatty acids are essential for both the liver and blood stages of the malarial parasite, targeting fatty acid biosynthesis is a promising strategy for combatting P. falciparum. We present a combined computational and experimental study to identify novel inhibitors of enoyl-acyl carrier protein reductase (PfENR) in the fatty acid biosynthesis pathway. A small-molecule database from ChemBridge was docked into three distinct PfENR crystal structures that provide multiple receptor conformations. Two different docking algorithms were used to generate a consensus score in order to rank possible small molecule hits. Our studies led to the identification of five low-micromolar pyrimidine dione inhibitors of PfENR.
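
    One common way to combine two docking programs' outputs into a consensus score, as described above, is rank-sum aggregation. A minimal sketch; the compound names and scores are made up, and the study's exact consensus formula is not specified in the abstract:

```python
# Rank-sum consensus over two docking score lists (more negative = better pose).
scores_a = {"cmpd1": -9.2, "cmpd2": -7.5, "cmpd3": -8.8}
scores_b = {"cmpd1": -8.1, "cmpd2": -8.9, "cmpd3": -6.0}

def ranks(scores):
    ordered = sorted(scores, key=scores.get)      # best (lowest) score first
    return {c: i for i, c in enumerate(ordered)}

ra, rb = ranks(scores_a), ranks(scores_b)
consensus = sorted(scores_a, key=lambda c: ra[c] + rb[c])
print(consensus)  # → ['cmpd1', 'cmpd2', 'cmpd3'] (best consensus hit first)
```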

  6. A Java-Enabled Interactive Graphical Gas Turbine Propulsion System Simulator

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Afjeh, Abdollah A.

    1997-01-01

    This paper describes a gas turbine simulation system which utilizes the newly developed Java language environment software system. The system provides an interactive graphical environment which allows the quick and efficient construction and analysis of arbitrary gas turbine propulsion systems. The simulation system couples a graphical user interface, developed using the Java Abstract Window Toolkit, and a transient, space- averaged, aero-thermodynamic gas turbine analysis method, both entirely coded in the Java language. The combined package provides analytical, graphical and data management tools which allow the user to construct and control engine simulations by manipulating graphical objects on the computer display screen. Distributed simulations, including parallel processing and distributed database access across the Internet and World-Wide Web (WWW), are made possible through services provided by the Java environment.

  7. New perspectives in toxicological information management, and the role of ISSTOX databases in assessing chemical mutagenicity and carcinogenicity.

    PubMed

    Benigni, Romualdo; Battistelli, Chiara Laura; Bossa, Cecilia; Tcheremenskaia, Olga; Crettaz, Pierre

    2013-07-01

    Currently, the public has access to a variety of databases containing mutagenicity and carcinogenicity data. These resources are crucial for the toxicologists and regulators involved in the risk assessment of chemicals, which necessitates access to all the relevant literature, and the capability to search across toxicity databases using both biological and chemical criteria. Towards the larger goal of screening chemicals for a wide range of toxicity end points of potential interest, publicly available resources across a large spectrum of biological and chemical data space must be effectively harnessed with current and evolving information technologies (i.e. systematised, integrated and mined), if long-term screening and prediction objectives are to be achieved. A key to rapid progress in the field of chemical toxicity databases is that of combining information technology with the chemical structure as identifier of the molecules. This permits an enormous range of operations (e.g. retrieving chemicals or chemical classes, describing the content of databases, finding similar chemicals, crossing biological and chemical interrogations, etc.) that other more classical databases cannot allow. This article describes the progress in the technology of toxicity databases, including the concepts of Chemical Relational Database and Toxicological Standardized Controlled Vocabularies (Ontology). Then it describes the ISSTOX cluster of toxicological databases at the Istituto Superiore di Sanitá. It consists of freely available databases characterised by the use of modern information technologies and by curation of the quality of the biological data. Finally, this article provides examples of analyses and results made possible by ISSTOX.

  8. The Use of a Relational Database in Qualitative Research on Educational Computing.

    ERIC Educational Resources Information Center

    Winer, Laura R.; Carriere, Mario

    1990-01-01

    Discusses the use of a relational database as a data management and analysis tool for nonexperimental qualitative research, and describes the use of the Reflex Plus database in the Vitrine 2001 project in Quebec to study computer-based learning environments. Information systems are also discussed, and the use of a conceptual model is explained.…

  9. Computer Cataloging of Electronic Journals in Unstable Aggregator Databases: The Hong Kong Baptist University Library Experience.

    ERIC Educational Resources Information Center

    Li, Yiu-On; Leung, Shirley W.

    2001-01-01

    Discussion of aggregator databases focuses on a project at the Hong Kong Baptist University library to integrate full-text electronic journal titles from three unstable aggregator databases into its online public access catalog (OPAC). Explains the development of the electronic journal computer program (EJCOP) to generate MARC records for…

  10. Physics in Screening Environments

    NASA Astrophysics Data System (ADS)

    Certik, Ondrej

    In the current study, we investigated atoms in screening environments like plasmas. It is common practice to extract physical data, such as temperature and electron densities, from plasma experiments. We present results that address inherent computational difficulties that arise when the screening approach is extended to include the interaction between the atomic electrons. We show that there may arise an ambiguity in the interpretation of physical properties, such as temperature and charge density, from experimental data due to the opposing effects of electron-nucleus screening and electron-electron screening. The focus of the work, however, is on the resolution of inherent computational challenges that appear in the computation of two-particle matrix elements. Those enter already at the Hartree-Fock level. Furthermore, as examples of post Hartree-Fock calculations, we show second-order Green's function results and many body perturbation theory results of second order. A self-contained derivation of all necessary equations has been included. The accuracy of the implementation of the method is established by comparing standard unscreened results for various atoms and molecules against literature for Hartree-Fock as well as Green's function and many body perturbation theory. The main results of the thesis are presented in the chapter called Screened Results, where the behavior of several atomic systems depending on electron-electron and electron-nucleus Debye screening was studied. The computer code that we have developed has been made available for anybody to use. Finally, we present and discuss results obtained for screened interactions. We also examine thoroughly the computational details of the calculations and particular implementations of the method.
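
    The electron-nucleus Debye screening studied above replaces the bare Coulomb attraction with an exponentially damped potential. A minimal sketch in atomic units (the parameter values are illustrative, not from the thesis):

```python
# Debye-screened Coulomb potential V(r) = -(Z/r) * exp(-r / lambda_D).
import math

def screened_potential(r, Z, debye_length):
    """Coulomb attraction damped exponentially by plasma screening."""
    return -Z / r * math.exp(-r / debye_length)

# Screening weakens the attraction at large r relative to bare Coulomb (-Z/r).
bare = -1.0 / 5.0
screened = screened_potential(5.0, Z=1, debye_length=2.0)
print(bare, screened)  # the screened value is much closer to zero
```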

  11. Using Chemoinformatics, Bioinformatics, and Bioassay to Predict and Explain the Antibacterial Activity of Nonantibiotic Food and Drug Administration Drugs.

    PubMed

    Kahlous, Nour Aldin; Bawarish, Muhammad Al Mohdi; Sarhan, Muhammad Arabi; Küpper, Manfred; Hasaba, Ali; Rajab, Mazen

    2017-04-01

    The discovery of new and effective antibiotics is a major challenge facing scientists today. Fortunately, developments in computer science offer new methods to address it. In this study, a set of computer software was used to predict the antibacterial activity of nonantibiotic Food and Drug Administration (FDA)-approved drugs and to explain their action by possible binding to well-known bacterial protein targets, along with testing their antibacterial activity against Gram-positive and Gram-negative bacteria. A three-dimensional virtual screening method that relies on chemical and shape similarity was applied using rapid overlay of chemical structures (ROCS) software to select candidate compounds from the FDA-approved drugs database that share similarity with 17 known antibiotics. Then, to check their antibacterial activity, a disk diffusion test was applied on Staphylococcus aureus and Escherichia coli. Finally, a protein docking method was applied using HYBRID software to predict the binding of each active candidate to the target receptor of its similar antibiotic. Of the 1,991 drugs screened, 34 were selected, and among them 10 showed antibacterial activity; the activities of drotaverine and metoclopramide had not been previously reported. Furthermore, the docking process predicted that diclofenac, drotaverine, (S)-flurbiprofen, (S)-ibuprofen, and indomethacin could bind to the protein targets of their similar antibiotics. Nevertheless, their antibacterial activities are weak compared with those of their similar antibiotics, and could be potentiated further by chemical modifications to their structures.
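
    Similarity screening of the kind described above is typically quantified with a Tanimoto coefficient. ROCS itself uses a shape-overlap Tanimoto; the bit-vector form below is the simplest illustration of the same coefficient, with made-up fingerprint bits:

```python
# Tanimoto similarity between two molecular fingerprints (on-bit sets are toy data).
def tanimoto(fp1, fp2):
    a, b = set(fp1), set(fp2)
    return len(a & b) / len(a | b)

antibiotic_fp = {1, 4, 9, 12, 30}   # on-bits of a (made-up) query antibiotic
candidate_fp = {1, 4, 9, 17}        # on-bits of a candidate FDA-approved drug
print(round(tanimoto(antibiotic_fp, candidate_fp), 3))  # → 0.5
```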

  12. Diagnosis and treatment of depression following routine screening in patients with coronary heart disease or diabetes: a database cohort study.

    PubMed

    Burton, C; Simpson, C; Anderson, N

    2013-03-01

    Depression is common in chronic illness and screening for depression has been widely recommended. There have been no large studies of screening for depression in routine care for patients with chronic illness. We performed a retrospective cohort study to examine the timing of new depression diagnosis or treatment in relation to annual screening for depression in patients with coronary heart disease (CHD) or diabetes. We examined a database derived from 1.3 million patients registered with general practices in Scotland for the year commencing 1 April 2007. Eligible patients had either CHD or diabetes, were screened for depression during the year and either received a new diagnosis of depression or commenced a new course of antidepressant (excluding those commonly used to treat diabetic neuropathy). Analysis was by the self-controlled case-series method with the outcome measure being the relative incidence (RI) in the period 1-28 days after screening compared to other times. A total of 67358 patients were screened for depression and 2269 received a new diagnosis or commenced treatment. For the period after screening, the RI was 3.03 [95% confidence interval (CI) 2.44-3.78] for diagnosis and 1.78 (95% CI 1.54-2.05) for treatment. The number needed to screen was 976 (95% CI 886-1104) for a new diagnosis and 687 (95% CI 586-853) for new antidepressant treatment. Systematic screening for depression in patients with chronic disease in primary care results in a significant but small increase in new diagnosis and treatment in the following 4 weeks.
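
    The self-controlled case-series estimate above compares the event rate in the 28 days after screening with the rate in the rest of the observation period. A minimal sketch of that arithmetic; the event counts below are illustrative only, not the study's data:

```python
# Relative incidence for a self-controlled case series: rate in the 28-day
# post-screening window vs the rest of a 365-day year (made-up counts).
days_window, days_rest = 28, 365 - 28
events_window, events_rest = 12, 48

rate_window = events_window / days_window
rate_rest = events_rest / days_rest
relative_incidence = rate_window / rate_rest
print(round(relative_incidence, 2))  # → 3.01
```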

  13. Exploring Discretization Error in Simulation-Based Aerodynamic Databases

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2010-01-01

    This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals and we use adaptive mesh refinement to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database computed for a NACA 0012 airfoil consisting of 120 cases. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness in the governing equations near the incompressible limit are shown to dramatically increase discretization error requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error-estimation in database quality.

  14. Color Variations in Screen Text: Effects on Proofreading.

    ERIC Educational Resources Information Center

    Szul, Linda; Berry, Louis

    As the use of computers has become more common in society, human engineering and ergonomics have lagged behind the sciences which developed the equipment. Some research has been done in the past on the effects of screen colors on computer use efficiency, but results were inconclusive. This paper describes a study of the impact of screen color…

  15. ARC-2006-ACD06-0135-014

    NASA Image and Video Library

    2006-08-19

    DC-8 NAMMA MISSION TO CAPE VERDE, AFRICA: Glenn Diskin (l), Bruce Anderson (c) and Ed Winstead (r) examine data on computer screens hooked up to two Langley Res. Ctr. experiments. The DLH (Diode Laser Hygrometer) screen is on the left and one of the computer screens for the LARGE instrument package (Langley Aerosol Research Group Experiment) is on the right.

  16. Classroom Laboratory Report: Using an Image Database System in Engineering Education.

    ERIC Educational Resources Information Center

    Alam, Javed; And Others

    1991-01-01

    Describes an image database system assembled using separate computer components that was developed to overcome text-only computer hardware storage and retrieval limitations for a pavement design class. (JJK)

  17. A review on quantum search algorithms

    NASA Astrophysics Data System (ADS)

    Giri, Pulak Ranjan; Korepin, Vladimir E.

    2017-12-01

    The use of superposition of states in quantum computation, known as quantum parallelism, gives a significant speed advantage over classical computation. This is evident from early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up database search, which is important in computer science because search appears as a subroutine in many important algorithms. Grover's quantum database search finds the target element in an unsorted database quadratically faster than a classical computer. We review Grover's quantum search algorithms for single and multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan, and its optimization by Korepin called the GRK algorithm, are also discussed.
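
    Grover's amplitude amplification can be simulated classically on a small statevector. A minimal sketch for N = 8 items: the oracle flips the marked amplitude's phase, the diffusion step inverts about the mean, and after about (π/4)√N iterations the marked item dominates:

```python
# Classical statevector simulation of Grover search over N = 8 items.
import math

N, marked = 8, 5
amps = [1 / math.sqrt(N)] * N                     # uniform superposition

k = int(math.floor(math.pi / 4 * math.sqrt(N)))   # optimal iteration count
for _ in range(k):
    amps[marked] = -amps[marked]                  # oracle: flip target's phase
    mean = sum(amps) / N                          # diffusion: inversion about mean
    amps = [2 * mean - a for a in amps]

prob = amps[marked] ** 2
print(k, round(prob, 3))  # → 2 0.945
```

With only 2 iterations the measurement succeeds with probability ≈ 0.95, versus an expected N/2 = 4 classical queries; this quadratic gap grows with N.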

  18. Evaluation of information-theoretic similarity measures for content-based retrieval and detection of masses in mammograms.

    PubMed

    Tourassi, Georgia D; Harrawood, Brian; Singh, Swatee; Lo, Joseph Y; Floyd, Carey E

    2007-01-01

    The purpose of this study was to evaluate image similarity measures employed in an information-theoretic computer-assisted detection (IT-CAD) scheme. The scheme was developed for content-based retrieval and detection of masses in screening mammograms. The study is aimed toward an interactive clinical paradigm where physicians query the proposed IT-CAD scheme on mammographic locations that are either visually suspicious or indicated as suspicious by other cuing CAD systems. The IT-CAD scheme provides an evidence-based, second opinion for query mammographic locations using a knowledge database of mass and normal cases. In this study, eight entropy-based similarity measures were compared with respect to retrieval precision and detection accuracy using a database of 1820 mammographic regions of interest. The IT-CAD scheme was then validated on a separate database for false positive reduction of progressively more challenging visual cues generated by an existing, in-house mass detection system. The study showed that the image similarity measures fall into one of two categories; one category is better suited to the retrieval of semantically similar cases while the second is more effective with knowledge-based decisions regarding the presence of a true mass in the query location. In addition, the IT-CAD scheme yielded a substantial reduction in false-positive detections while maintaining high detection rate for malignant masses.
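
    The entropy-based similarity measures compared above are built from histogram estimates of entropy and mutual information. A minimal sketch of mutual information between two quantized "images"; the pixel data are toy values, not mammographic regions:

```python
# Mutual information between two discrete signals from their joint histogram.
import math
from collections import Counter

def mutual_information(x, y):
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum(c / n * math.log2((c / n) / (px[a] / n * py[b] / n))
               for (a, b), c in pxy.items())

img = [0, 0, 1, 1, 2, 2, 3, 3]
identical = mutual_information(img, img)     # equals the entropy of img: 2 bits
shuffled = mutual_information(img, [3, 1, 0, 2, 1, 3, 0, 2])
print(identical, round(shuffled, 3))
```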

  19. Evaluation of information-theoretic similarity measures for content-based retrieval and detection of masses in mammograms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tourassi, Georgia D.; Harrawood, Brian; Singh, Swatee

    The purpose of this study was to evaluate image similarity measures employed in an information-theoretic computer-assisted detection (IT-CAD) scheme. The scheme was developed for content-based retrieval and detection of masses in screening mammograms. The study is aimed toward an interactive clinical paradigm where physicians query the proposed IT-CAD scheme on mammographic locations that are either visually suspicious or indicated as suspicious by other cuing CAD systems. The IT-CAD scheme provides an evidence-based, second opinion for query mammographic locations using a knowledge database of mass and normal cases. In this study, eight entropy-based similarity measures were compared with respect to retrieval precision and detection accuracy using a database of 1820 mammographic regions of interest. The IT-CAD scheme was then validated on a separate database for false positive reduction of progressively more challenging visual cues generated by an existing, in-house mass detection system. The study showed that the image similarity measures fall into one of two categories; one category is better suited to the retrieval of semantically similar cases while the second is more effective with knowledge-based decisions regarding the presence of a true mass in the query location. In addition, the IT-CAD scheme yielded a substantial reduction in false-positive detections while maintaining high detection rate for malignant masses.

  20. In silico toxicology for the pharmaceutical sciences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valerio, Luis G., E-mail: Luis.Valerio@fda.hhs.go

    2009-12-15

    The applied use of in silico technologies (a.k.a. computational toxicology, in silico toxicology, computer-assisted tox, e-tox, i-drug discovery, predictive ADME, etc.) for predicting preclinical toxicological endpoints, clinical adverse effects, and metabolism of pharmaceutical substances has become of high interest to the scientific community and the public. The increased accessibility of these technologies for scientists and recent regulations permitting their use for chemical risk assessment supports this notion. The scientific community is interested in the appropriate use of such technologies as a tool to enhance product development and safety of pharmaceuticals and other xenobiotics, while ensuring the reliability and accuracy of in silico approaches for the toxicological and pharmacological sciences. For pharmaceutical substances, this means active and impurity chemicals in the drug product may be screened using specialized software and databases designed to cover these substances through a chemical structure-based screening process and algorithm specific to a given software program. A major goal for use of these software programs is to enable industry scientists not only to enhance the discovery process but also to ensure the judicious use of in silico tools to support risk assessments of drug-induced toxicities and in safety evaluations. However, a great amount of applied research is still needed, and there are many limitations with these approaches, which are described in this review. Currently, there is a wide range of endpoints available from predictive quantitative structure-activity relationship models driven by many different computational software programs and data sources, and this is only expected to grow. For example, there are models based on non-proprietary and/or proprietary information specific to assessing potential rodent carcinogenicity, in silico screens for ICH genetic toxicity assays, reproductive and developmental toxicity, theoretical prediction of human drug metabolism, mechanisms of action for pharmaceuticals, and newer models for predicting human adverse effects. How accurate these approaches are is both a statistical issue and a challenge in toxicology. In this review, fundamental concepts and the current capabilities and limitations of this technology will be critically addressed.

  1. CD-ROM End-User Instruction: A Planning Model.

    ERIC Educational Resources Information Center

    Johnson, Mary E.; Rosen, Barbara S.

    1990-01-01

    Discusses methods and content of library instruction for CD-ROM searching in terms of the needs of end-users. Instructional methods explored include staff instruction, structured instruction, database documentation, tutorials and help screens, and floaters. Suggestions for effective instruction in transfer of skills, database content, database…

  2. DPubChem: a web tool for QSAR modeling and high-throughput virtual screening.

    PubMed

    Soufan, Othman; Ba-Alawi, Wail; Magana-Mora, Arturo; Essack, Magbubah; Bajic, Vladimir B

    2018-06-14

    High-throughput screening (HTS) performs the experimental testing of a large number of chemical compounds aiming to identify those active in the considered assay. Alternatively, faster and cheaper methods of large-scale virtual screening are performed computationally through quantitative structure-activity relationship (QSAR) models. However, the vast amount of available HTS heterogeneous data and the imbalanced ratio of active to inactive compounds in an assay make this a challenging problem. Although different QSAR models have been proposed, they have certain limitations, e.g., high false positive rates, complicated user interface, and limited utilization options. Therefore, we developed DPubChem, a novel web tool for deriving QSAR models that implement the state-of-the-art machine-learning techniques to enhance the precision of the models and enable efficient analyses of experiments from PubChem BioAssay database. DPubChem also has a simple interface that provides various options to users. DPubChem predicted active compounds for 300 datasets with an average geometric mean and F1 score of 76.68% and 76.53%, respectively. Furthermore, DPubChem builds interaction networks that highlight novel predicted links between chemical compounds and biological assays. Using such a network, DPubChem successfully suggested a novel drug for the Niemann-Pick type C disease. DPubChem is freely available at www.cbrc.kaust.edu.sa/dpubchem.
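
    The two performance metrics DPubChem reports, geometric mean and F1 score, are both computed from a confusion matrix. A minimal sketch with made-up counts (tp, fp, tn, fn are illustrative, not DPubChem's results):

```python
# Geometric mean of sensitivity/specificity and F1 score from a toy confusion matrix.
import math

tp, fp, tn, fn = 80, 15, 170, 20

sensitivity = tp / (tp + fn)                     # recall on active compounds
specificity = tn / (tn + fp)
g_mean = math.sqrt(sensitivity * specificity)    # robust to class imbalance
precision = tp / (tp + fp)
f1 = 2 * precision * sensitivity / (precision + sensitivity)
print(round(g_mean, 3), round(f1, 3))  # → 0.857 0.821
```

The geometric mean is a common choice for imbalanced assays like those in PubChem BioAssay, since it penalizes a model that sacrifices the minority (active) class.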

  3. Modern approaches to accelerate discovery of new antischistosomal drugs.

    PubMed

    Neves, Bruno Junior; Muratov, Eugene; Machado, Renato Beilner; Andrade, Carolina Horta; Cravo, Pedro Vitor Lemos

    2016-06-01

    The almost exclusive use of praziquantel for the treatment of schistosomiasis has raised concerns about the possible emergence of drug-resistant schistosomes. Consequently, there is an urgent need for new antischistosomal drugs. The identification of leads and the generation of high quality data are crucial steps in the early stages of schistosome drug discovery projects. Herein, the authors focus on current developments in antischistosomal lead discovery, specifically the use of automated in vitro target-based and whole-organism screens and virtual screening of chemical databases. They highlight the strengths and pitfalls of each of these approaches, and suggest possible roadmaps towards the integration of several strategies, which may contribute to optimizing research outputs and lead to more successful and cost-effective drug discovery endeavors. Increasing partnerships and access to funding for drug discovery have strengthened the battle against schistosomiasis in recent years. However, the authors believe this battle also requires innovative strategies to overcome scientific challenges. In this context, significant advances in in vitro screening as well as computer-aided drug discovery have helped increase the success rate and reduce the costs of drug discovery campaigns. Although some of these approaches are already used in current antischistosomal lead discovery pipelines, integrating these strategies into a solid workflow should allow the production of new treatments for schistosomiasis in the near future.

  4. Computer-aided drug design of falcipain inhibitors: virtual screening, structure-activity relationships, hydration site thermodynamics, and reactivity analysis.

    PubMed

    Shah, Falgun; Gut, Jiri; Legac, Jennifer; Shivakumar, Devleena; Sherman, Woody; Rosenthal, Philip J; Avery, Mitchell A

    2012-03-26

    Falcipains (FPs) are hemoglobinases of Plasmodium falciparum that are validated targets for the development of antimalarial chemotherapy. A combined ligand- and structure-based virtual screening of commercial databases was performed to identify structural analogs of virtual screening hits previously discovered in our laboratory. A total of 28 low micromolar inhibitors of FP-2 and FP-3 were identified and the structure-activity relationship (SAR) in each series was elaborated. The SAR of the compounds was unusually steep in some cases and could not be explained by a traditional analysis of the ligand-protein interactions (van der Waals, electrostatics, and hydrogen bonds). To gain further insights, a statistical thermodynamic analysis of explicit solvent in the ligand binding domains of FP-2 and FP-3 was carried out to understand the roles played by water molecules in binding of these inhibitors. Indeed, the energetics associated with the displacement of water molecules upon ligand binding explained some of the complex trends in the SAR. Furthermore, low potency of a subset of FP-2 inhibitors that could not be understood by the water energetics was explained in the context of poor chemical reactivity of the reactive centers of these compounds. The present study highlights the importance of considering energetic contributors to binding beyond traditional ligand-protein interactions. © 2012 American Chemical Society

  5. Efficient discovery of responses of proteins to compounds using active learning

    PubMed Central

    2014-01-01

    Background Drug discovery and development has been aided by high throughput screening methods that detect compound effects on a single target. However, when using focused initial screening, undesirable secondary effects are often detected late in the development process after significant investment has been made. An alternative approach would be to screen against undesired effects early in the process, but the number of possible secondary targets makes this prohibitively expensive. Results This paper describes methods for making this global approach practical by constructing predictive models for many target responses to many compounds and using them to guide experimentation. We demonstrate for the first time that by jointly modeling targets and compounds using descriptive features and using active machine learning methods, accurate models can be built by doing only a small fraction of possible experiments. The methods were evaluated by computational experiments using a dataset of 177 assays and 20,000 compounds constructed from the PubChem database. Conclusions An average of nearly 60% of all hits in the dataset were found after exploring only 3% of the experimental space, which suggests that active learning can be used to enable more complete characterization of compound effects than otherwise affordable. The methods described are also likely to find widespread application outside drug discovery, such as for characterizing the effects of a large number of compounds or inhibitory RNAs on a large number of cell or tissue phenotypes. PMID:24884564
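
    The model-guided experiment-selection loop described above can be sketched in miniature. This is an illustrative toy, not the paper's method: the experiment pool, the hidden hit labels, and the nearest-neighbour surrogate model are all synthetic assumptions.

```python
# Hedged sketch of pool-based active learning for experiment selection,
# loosely inspired by the abstract. The data and the crude similarity
# model are synthetic stand-ins, not the paper's approach.
import random

random.seed(0)
N = 500                                           # experiments in the pool
feature = [random.random() for _ in range(N)]     # 1-D compound descriptor
truth = [f > 0.7 for f in feature]                # hidden "hit" labels

tested, results = [], {}
for i in random.sample(range(N), 10):             # random seed experiments
    tested.append(i); results[i] = truth[i]

def predicted_hit(i):
    # crude surrogate: average outcome of the 5 most similar tested points
    near = sorted(tested, key=lambda j: abs(feature[j] - feature[i]))[:5]
    return sum(results[j] for j in near) / len(near)

budget = 50                                       # total experiments allowed
while len(tested) < budget:
    candidates = [i for i in range(N) if i not in results]
    best = max(candidates, key=predicted_hit)     # greedy acquisition step
    tested.append(best); results[best] = truth[best]

found, total = sum(results.values()), sum(truth)
print(f"found {found}/{total} hits with {budget}/{N} experiments")
```

    The point the abstract makes is that a loop like this can recover a large fraction of hits while running only a small fraction of the full experimental space.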

  6. An evaluation of FIA's stand age variable

    Treesearch

    John D. Shaw

    2015-01-01

    The Forest Inventory and Analysis Database (FIADB) includes a large number of measured and computed variables. The definitions of measured variables are usually well-documented in FIA field and database manuals. Some computed variables, such as live basal area of the condition, are equally straightforward. Other computed variables, such as individual tree volume,...

  7. The Fabric for Frontier Experiments Project at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirby, Michael

    2014-01-01

    The FabrIc for Frontier Experiments (FIFE) project is a new, far-reaching initiative within the Fermilab Scientific Computing Division to drive the future of computing services for experiments at FNAL and elsewhere. It is a collaborative effort between computing professionals and experiment scientists to produce an end-to-end, fully integrated set of services for computing on the grid and clouds, managing data, accessing databases, and collaborating within experiments. FIFE includes 1) easy to use job submission services for processing physics tasks on the Open Science Grid and elsewhere, 2) an extensive data management system for managing local and remote caches, cataloging, querying, moving, and tracking the use of data, 3) custom and generic database applications for calibrations, beam information, and other purposes, 4) collaboration tools including an electronic log book, speakers bureau database, and experiment membership database. All of these aspects will be discussed in detail. FIFE sets the direction of computing at Fermilab experiments now and in the future, and therefore is a major driver in the design of computing services worldwide.

  8. Molecular Quantum Similarity, Chemical Reactivity and Database Screening of 3D Pharmacophores of the Protein Kinases A, B and G from Mycobacterium tuberculosis.

    PubMed

    Morales-Bayuelo, Alejandro

    2017-06-21

    Mycobacterium tuberculosis remains one of the world's most devastating pathogens. For this reason, we carried out a study involving 3D pharmacophore searching, selectivity analysis and database screening for a series of anti-tuberculosis compounds, associated with the protein kinases A, B, and G. This theoretical study is expected to shed light on molecular aspects that could contribute to the knowledge of the molecular mechanics behind the interactions of these compounds with anti-tuberculosis activity. Using the Molecular Quantum Similarity field and reactivity descriptors supported in the Density Functional Theory, it was possible to quantify the steric and electrostatic effects through the Overlap and Coulomb quantitative convergence (alpha and beta) scales. In addition, an analysis of reactivity indices using global and local descriptors was developed, identifying the binding sites and selectivity of these anti-tuberculosis compounds in the active sites. Finally, the reported pharmacophores for PKn A, B and G were used to carry out database screening, using a database with anti-tuberculosis drugs from the Kelly Chibale research group (http://www.kellychibaleresearch.uct.ac.za/), to find the compounds with affinity for the specific protein targets associated with PKn A, B and G. In this regard, this hybrid methodology (Molecular Mechanic/Quantum Chemistry) shows new insights into drug design that may be useful in tuberculosis treatment today.

  9. Virtual screening of a milk peptide database for the identification of food-derived antimicrobial peptides.

    PubMed

    Liu, Yufang; Eichler, Jutta; Pischetsrieder, Monika

    2015-11-01

    Milk provides a wide range of bioactive substances, such as antimicrobial peptides and proteins. Our study aimed to identify novel antimicrobial peptides naturally present in milk. The components of an endogenous bovine milk peptide database were virtually screened for charge, amphipathy, and predicted secondary structure. Thus, 23 of 248 screened peptides were identified as candidates for antimicrobial effects. After commercial synthesis, their antimicrobial activities were determined against Escherichia coli NEB5α, E. coli ATCC25922, and Bacillus subtilis ATCC6051. In the tested concentration range (<2 mM), bacteriostatic activity of 14 peptides was detected, including nine peptides inhibiting both Gram-positive and Gram-negative bacteria. The most effective fragment was TKLTEEEKNRLNFLKKISQRYQKFALPQYLK, corresponding to αS2-casein (residues 151-181), with a minimum inhibitory concentration (MIC) of 4.0 μM against B. subtilis ATCC6051 and MICs of 16.2 μM against both E. coli strains. Circular dichroism spectroscopy revealed conformational changes of most active peptides in a membrane-mimic environment, transitioning from an unordered to an α-helical structure. Screening of food peptide databases by prediction tools is an efficient method to identify novel antimicrobial food-derived peptides. Milk-derived antimicrobial peptides may have potential use as functional food ingredients and help to understand the molecular mechanisms of anti-infective milk effects. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
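
    The charge-based part of such a virtual screen can be sketched in a few lines. The Kyte-Doolittle hydropathy scale is standard, but the candidate threshold and the simple integer charge model (K/R = +1, D/E = −1, H treated as neutral) are simplifying assumptions, not the authors' exact criteria:

```python
# Hedged sketch of a sequence-based pre-screen like the one described in
# the abstract: rank peptides by net charge and mean hydropathy.
KD = {  # Kyte-Doolittle hydropathy values per residue
 'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5, 'E': -3.5,
 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8,
 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2}

def net_charge(seq):
    # simple integer model: K/R count +1, D/E count -1 (H treated as neutral)
    return sum(seq.count(a) for a in 'KR') - sum(seq.count(a) for a in 'DE')

def mean_hydropathy(seq):
    return sum(KD[a] for a in seq) / len(seq)

def is_candidate(seq, min_charge=2):
    # cationic peptides are typical antimicrobial candidates (assumed cutoff)
    return net_charge(seq) >= min_charge

peptides = ["TKLTEEEKNRLNFLKKISQRYQKFALPQYLK",  # the abstract's top hit
            "DDEEGGSS", "KKLLRRIIVV"]          # illustrative sequences
for p in peptides:
    print(p, net_charge(p), round(mean_hydropathy(p), 2), is_candidate(p))
```

    A real screen would add the amphipathy and secondary-structure predictions the abstract mentions; net charge alone is only the first filter.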

  10. Hand-held computer operating system program for collection of resident experience data.

    PubMed

    Malan, T K; Haffner, W H; Armstrong, A Y; Satin, A J

    2000-11-01

    To describe a system for recording resident experience involving hand-held computers with the Palm Operating System (3 Com, Inc., Santa Clara, CA). Hand-held personal computers (PCs) are popular, easy to use, inexpensive, portable, and can share data among other operating systems. Residents in our program carry individual hand-held database computers to record Residency Review Committee (RRC) reportable patient encounters. Each resident's data is transferred to a single central relational database compatible with Microsoft Access (Microsoft Corporation, Redmond, WA). Patient data entry and subsequent transfer to a central database is accomplished with commercially available software that requires minimal computer expertise to implement and maintain. The central database can then be used for statistical analysis or to create required RRC resident experience reports. As a result, the data collection and transfer process takes less time for residents and program director alike than paper-based or central computer-based systems. The system of collecting resident encounter data using hand-held computers with the Palm Operating System is easy to use, relatively inexpensive, accurate, and secure. The user-friendly system provides prompt, complete, and accurate data, enhancing the education of residents while facilitating the job of the program director.

  11. Radiological interpretation of images displayed on tablet computers: a systematic review.

    PubMed

    Caffery, L J; Armfield, N R; Smith, A C

    2015-06-01

    To review the published evidence and to determine whether radiological diagnostic accuracy is compromised when images are displayed on a tablet computer, and thereby inform practice on using tablet computers for radiological interpretation by on-call radiologists. We searched the PubMed and EMBASE databases for studies on the diagnostic accuracy or diagnostic reliability of images interpreted on tablet computers. Studies were screened for inclusion based on pre-determined inclusion and exclusion criteria. Studies were assessed for quality and risk of bias using the Quality Appraisal of Diagnostic Reliability Studies or the revised Quality Assessment of Diagnostic Accuracy Studies tool. Treatment of studies was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). 11 studies met the inclusion criteria; 10 of these tested the Apple iPad® (Apple, Cupertino, CA). The included studies reported high sensitivity (84-98%), specificity (74-100%) and accuracy rates (98-100%) for radiological diagnosis. There was no statistically significant difference in accuracy between a tablet computer and a Digital Imaging and Communications in Medicine (DICOM)-calibrated control display. There was near-complete consensus from authors on the non-inferiority of diagnostic accuracy of images displayed on a tablet computer. All of the included studies were judged to be at risk of bias. Our findings suggest that the diagnostic accuracy of radiological interpretation is not compromised by using a tablet computer. This result is only relevant to the Apple iPad and to the modalities of CT, MRI and plain radiography. The iPad may be appropriate for an on-call radiologist to use for radiological interpretation.

  12. The FLIGHT Drosophila RNAi database

    PubMed Central

    Bursteinas, Borisas; Jain, Ekta; Gao, Qiong; Baum, Buzz; Zvelebil, Marketa

    2010-01-01

    FLIGHT (http://flight.icr.ac.uk/) is an online resource compiling data from high-throughput Drosophila in vivo and in vitro RNAi screens. FLIGHT includes details of RNAi reagents and their predicted off-target effects, alongside RNAi screen hits, scores and phenotypes, including images from high-content screens. The latest release of FLIGHT is designed to enable users to upload, analyze, integrate and share their own RNAi screens. Users can perform multiple normalizations, view quality control plots, detect and assign screen hits and compare hits from multiple screens using a variety of methods including hierarchical clustering. FLIGHT integrates RNAi screen data with microarray gene expression as well as genomic annotations and genetic/physical interaction datasets to provide a single interface for RNAi screen analysis and datamining in Drosophila. PMID:20855970

  13. Screening Questionnaires for Obstructive Sleep Apnea: An Updated Systematic Review.

    PubMed

    Amra, Babak; Rahmati, Behzad; Soltaninejad, Forogh; Feizi, Awat

    2018-05-01

    Obstructive sleep apnea (OSA) is the most common sleep-related breathing disorder and is associated with significant morbidity. We sought to present an updated systematic review of the literature on the accuracy of screening questionnaires for OSA against polysomnography (PSG) as the reference test. We searched the main databases (including Medline, the Cochrane Database of Systematic Reviews, and Scopus) using a combination of relevant keywords to filter studies published between January 2010 and April 2017. Population-based studies evaluating the accuracy of screening questionnaires for OSA against PSG were included in the review. Thirty-nine studies comprising 18,068 subjects were included. Four screening questionnaires for OSA had been validated in the selected studies: the Berlin questionnaire (BQ), STOP-Bang Questionnaire (SBQ), STOP Questionnaire (SQ), and Epworth Sleepiness Scale (ESS). The sensitivity of the SBQ in detecting mild (apnea-hypopnea index (AHI) ≥ 5 events/hour) and severe (AHI ≥ 30 events/hour) OSA was higher than that of the other screening questionnaires (ranging from 81.08% to 97.55% and from 69.2% to 98.7%, respectively). However, the SQ had the highest sensitivity in predicting moderate OSA (AHI ≥ 15 events/hour; range = 41.3% to 100%). The SQ and SBQ are reliable tools for screening OSA among sleep clinic patients, although further validation studies of these questionnaires in general populations are required.

  14. THE NASA AMES POLYCYCLIC AROMATIC HYDROCARBON INFRARED SPECTROSCOPIC DATABASE: THE COMPUTED SPECTRA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauschlicher, C. W.; Ricca, A.; Boersma, C.

    The astronomical emission features, formerly known as the unidentified infrared bands, are now commonly ascribed to polycyclic aromatic hydrocarbons (PAHs). The laboratory experiments and computational modeling done at the NASA Ames Research Center to create a collection of PAH IR spectra relevant to test and refine the PAH hypothesis have been assembled into a spectroscopic database. This database now contains over 800 PAH spectra spanning 2-2000 μm (5000-5 cm⁻¹). These data are now available on the World Wide Web at www.astrochem.org/pahdb. This paper presents an overview of the computational spectra in the database and the tools developed to analyze and interpret astronomical spectra using the database. A description of the online and offline user tools available on the Web site is also presented.

  15. Atrial Fibrillation Screening in Nonmetropolitan Areas Using a Telehealth Surveillance System With an Embedded Cloud-Computing Algorithm: Prospective Pilot Study.

    PubMed

    Chen, Ying-Hsien; Hung, Chi-Sheng; Huang, Ching-Chang; Hung, Yu-Chien; Hwang, Juey-Jen; Ho, Yi-Lwun

    2017-09-26

    Atrial fibrillation (AF) is a common form of arrhythmia that is associated with increased risk of stroke and mortality. Detecting AF before the first complication occurs is a recognized priority. No previous studies have examined the feasibility of undertaking AF screening using a telehealth surveillance system with an embedded cloud-computing algorithm; we address this issue in this study. The objective of this study was to evaluate the feasibility of AF screening in nonmetropolitan areas using a telehealth surveillance system with an embedded cloud-computing algorithm. We conducted a prospective AF screening study in a nonmetropolitan area using a single-lead electrocardiogram (ECG) recorder. All ECG measurements were reviewed on the telehealth surveillance system and interpreted by the cloud-computing algorithm and a cardiologist. The process of AF screening was evaluated with a satisfaction questionnaire. Between March 11, 2016 and August 31, 2016, 967 ECGs were recorded from 922 residents in nonmetropolitan areas. A total of 22 (2.4%, 22/922) residents with AF were identified by the physician's ECG interpretation, and only 0.2% (2/967) of ECGs contained significant artifacts. The novel cloud-computing algorithm for AF detection had a sensitivity of 95.5% (95% CI 77.2%-99.9%) and specificity of 97.7% (95% CI 96.5%-98.5%). The overall satisfaction score for the process of AF screening was 92.1%. AF screening in nonmetropolitan areas using a telehealth surveillance system with an embedded cloud-computing algorithm is feasible. ©Ying-Hsien Chen, Chi-Sheng Hung, Ching-Chang Huang, Yu-Chien Hung, Juey-Jen Hwang, Yi-Lwun Ho. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 26.09.2017.
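
    A confidence interval for a sensitivity such as the 95.5% above (21 of 22 AF cases detected) can be computed as sketched below. The Wilson score interval is used here for illustration; the paper may have used a different (e.g. exact) method, which is why the bounds differ slightly from the reported 77.2%-99.9%.

```python
# Hedged sketch: a 95% Wilson score interval for a screening sensitivity.
# The 21/22 count is taken from the abstract; the choice of the Wilson
# method (rather than an exact interval) is an illustrative assumption.
import math

def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(21, 22)   # 21 of 22 AF patients detected
print(f"sensitivity 95% CI: {lo:.3f} - {hi:.3f}")
```

    Unlike the naive normal approximation, the Wilson interval stays inside [0, 1] even when the observed proportion is close to 1, as it is here.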

  16. Combining structure-based pharmacophore modeling, virtual screening, and in silico ADMET analysis to discover novel tetrahydro-quinoline based pyruvate kinase isozyme M2 activators with antitumor activity

    PubMed Central

    Chen, Can; Wang, Ting; Wu, Fengbo; Huang, Wei; He, Gu; Ouyang, Liang; Xiang, Mingli; Peng, Cheng; Jiang, Qinglin

    2014-01-01

    Compared with normal differentiated cells, cancer cells upregulate the expression of pyruvate kinase isozyme M2 (PKM2) to support glycolytic intermediates for anabolic processes, including the synthesis of nucleic acids, amino acids, and lipids. In this study, a combination of the structure-based pharmacophore modeling and a hybrid protocol of virtual screening methods comprised of pharmacophore model-based virtual screening, docking-based virtual screening, and in silico ADMET (absorption, distribution, metabolism, excretion and toxicity) analysis were used to retrieve novel PKM2 activators from commercially available chemical databases. Tetrahydroquinoline derivatives were identified as potential scaffolds of PKM2 activators. Thus, the hybrid virtual screening approach was applied to screen the focused tetrahydroquinoline derivatives embedded in the ZINC database. Six hit compounds were selected from the final hits and experimental studies were then performed. Compound 8 displayed a potent inhibitory effect on human lung cancer cells. Following treatment with Compound 8, cell viability, apoptosis, and reactive oxygen species (ROS) production were examined in A549 cells. Finally, we evaluated the effects of Compound 8 on mice xenograft tumor models in vivo. These results may provide important information for further research on novel PKM2 activators as antitumor agents. PMID:25214764

  17. Understanding the Effects of Databases as Cognitive Tools in a Problem-Based Multimedia Learning Environment

    ERIC Educational Resources Information Center

    Li, Rui; Liu, Min

    2007-01-01

    The purpose of this study is to examine the potential of using computer databases as cognitive tools to share learners' cognitive load and facilitate learning in a multimedia problem-based learning (PBL) environment designed for sixth graders. Two research questions were: (a) can the computer database tool share sixth-graders' cognitive load? and…

  18. Benchmark of four popular virtual screening programs: construction of the active/decoy dataset remains a major determinant of measured performance.

    PubMed

    Chaput, Ludovic; Martinez-Sanz, Juan; Saettel, Nicolas; Mouawad, Liliane

    2016-01-01

    In a structure-based virtual screening, the choice of the docking program is essential for the success of hit identification. Benchmarks are meant to help guide this choice, especially when undertaken on a large variety of protein targets. Here, the performance of four popular virtual screening programs, Gold, Glide, Surflex and FlexX, is compared using the Directory of Useful Decoys-Enhanced database (DUD-E), which includes 102 targets with an average of 224 ligands per target and 50 decoys per ligand, generated to avoid biases in the benchmarking. Then, a relationship between these program performances and the properties of the targets or the small molecules was investigated. The comparison was based on two metrics, with three different parameters each. The BEDROC scores with α = 80.5 indicated that, on the overall database, Glide succeeded (score > 0.5) for 30 targets, Gold for 27, FlexX for 14 and Surflex for 11. The performance did not depend on the hydrophobicity or the openness of the protein cavities, nor on the families to which the proteins belong. However, despite the care taken in the construction of the DUD-E database, the small differences that remain between the actives and the decoys likely explain the successes of Gold, Surflex and FlexX. Moreover, the similarity between the actives of a target and its crystal structure ligand seems to underlie the good performance of Glide. When all targets with significant biases are removed from the benchmarking, a subset of 47 targets remains, for which Glide succeeded for only 5 targets, Gold for 4, and FlexX and Surflex for 2. The dramatic drop in performance of all four programs when the biases are removed shows that we should beware of virtual screening benchmarks, because good performances may be due to wrong reasons. Therefore, benchmarking hardly provides guidelines for virtual screening experiments, despite the tendency that is maintained, i.e., Glide and Gold display better performance than FlexX and Surflex. We recommend always using several programs and combining their results. Graphical Abstract: Summary of the results obtained by virtual screening with the four programs, Glide, Gold, Surflex and FlexX, on the 102 targets of the DUD-E database. The percentage of targets with successful results, i.e., with BEDROC(α = 80.5) > 0.5, is shown in blue when the entire database is considered, and in red when targets with biased chemical libraries are removed.

  19. In person versus Computer Screening for Intimate Partner Violence Among Pregnant Patients

    PubMed Central

    Dado, Diane; Schussler, Sara; Hawker, Lynn; Holland, Cynthia L.; Burke, Jessica G.; Cluss, Patricia A.

    2012-01-01

    Objective To compare in person versus computerized screening for intimate partner violence (IPV) in a hospital-based prenatal clinic and explore women’s assessment of the screening methods. Methods We compared patient IPV disclosures on a computerized questionnaire to audio-taped first obstetric visits with an obstetric care provider and performed semi-structured interviews with patient participants who reported experiencing IPV. Results Two-hundred and fifty patient participants and 52 provider participants were in the study. Ninety-one (36%) patients disclosed IPV either via computer or in person. Of those who disclosed IPV, 60 (66%) disclosed via both methods, but 31 (34%) disclosed IPV via only one of the two methods. Twenty-three women returned for interviews. They recommended using both types together. While computerized screening was felt to be non-judgmental and more anonymous, in person screening allowed for tailored questioning and more emotional connection with the provider. Conclusion Computerized screening allowed disclosure without fear of immediate judgment. In person screening allows more flexibility in wording of questions regarding IPV and opportunity for interpersonal rapport. Practice Implications Both computerized or self-completed screening and in person screening are recommended. Providers should address IPV using non-judgmental, descriptive language, include assessments for psychological IPV, and repeat screening in person, even if no patient disclosure occurs via computer. PMID:22770815

  20. In person versus computer screening for intimate partner violence among pregnant patients.

    PubMed

    Chang, Judy C; Dado, Diane; Schussler, Sara; Hawker, Lynn; Holland, Cynthia L; Burke, Jessica G; Cluss, Patricia A

    2012-09-01

    To compare in person versus computerized screening for intimate partner violence (IPV) in a hospital-based prenatal clinic and explore women's assessment of the screening methods. We compared patient IPV disclosures on a computerized questionnaire to audio-taped first obstetric visits with an obstetric care provider and performed semi-structured interviews with patient participants who reported experiencing IPV. Two-hundred and fifty patient participants and 52 provider participants were in the study. Ninety-one (36%) patients disclosed IPV either via computer or in person. Of those who disclosed IPV, 60 (66%) disclosed via both methods, but 31 (34%) disclosed IPV via only one of the two methods. Twenty-three women returned for interviews. They recommended using both types together. While computerized screening was felt to be non-judgmental and more anonymous, in person screening allowed for tailored questioning and more emotional connection with the provider. Computerized screening allowed disclosure without fear of immediate judgment. In person screening allows more flexibility in wording of questions regarding IPV and opportunity for interpersonal rapport. Both computerized or self-completed screening and in person screening are recommended. Providers should address IPV using non-judgmental, descriptive language, include assessments for psychological IPV, and repeat screening in person, even if no patient disclosure occurs via computer. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  1. Developing a Computer Touch-Screen Interactive Colorectal Screening Decision Aid for a Low-Literacy African American Population: Lessons Learned

    PubMed Central

    Bass, Sarah Bauerle; Gordon, Thomas F.; Ruzek, Sheryl Burt; Wolak, Caitlin; Ruggieri, Dominique; Mora, Gabriella; Rovito, Michael J.; Britto, Johnson; Parameswaran, Lalitha; Abedin, Zainab; Ward, Stephanie; Paranjape, Anuradha; Lin, Karen; Meyer, Brian; Pitts, Khaliah

    2017-01-01

    African Americans have higher colorectal cancer (CRC) mortality than White Americans and yet have lower rates of CRC screening. Increased screening aids in early detection and higher survival rates. Coupled with low literacy rates, the burden of CRC morbidity and mortality is exacerbated in this population, making it important to develop culturally and literacy appropriate aids to help low-literacy African Americans make informed decisions about CRC screening. This article outlines the development of a low-literacy computer touch-screen colonoscopy decision aid using an innovative marketing method called perceptual mapping and message vector modeling. This method was used to mathematically model key messages for the decision aid, which were then used to modify an existing CRC screening tutorial with different messages. The final tutorial was delivered through computer touch-screen technology to increase access and ease of use for participants. Testing showed users were not only more comfortable with the touch-screen technology but were also significantly more willing to have a colonoscopy compared with a “usual care group.” Results confirm the importance of including participants in planning and that the use of these innovative mapping and message design methods can lead to significant CRC screening attitude change. PMID:23132838

  2. Digital data storage systems, computers, and data verification methods

    DOEpatents

    Groeneveld, Bennett J.; Austad, Wayne E.; Walsh, Stuart C.; Herring, Catherine A.

    2005-12-27

    Digital data storage systems, computers, and data verification methods are provided. According to a first aspect of the invention, a computer includes an interface adapted to couple with a dynamic database; and processing circuitry configured to provide a first hash from digital data stored within a portion of the dynamic database at an initial moment in time, to provide a second hash from digital data stored within the portion of the dynamic database at a subsequent moment in time, and to compare the first hash and the second hash.
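
    The first-hash/second-hash comparison the abstract describes can be sketched as follows. SHA-256, the JSON serialisation, and the toy dict standing in for the dynamic database are illustrative choices, not details of the patented system:

```python
# Hedged sketch of the two-hash verification idea: hash a portion of a
# database at one moment, hash it again later, and compare the results.
import hashlib
import json

def portion_hash(db, keys):
    # canonical serialisation so the hash depends only on the portion's content
    snapshot = {k: db[k] for k in sorted(keys)}
    return hashlib.sha256(json.dumps(snapshot, sort_keys=True).encode()).hexdigest()

db = {"rec1": "alpha", "rec2": "beta", "rec3": "gamma"}
first = portion_hash(db, ["rec1", "rec2"])       # initial moment in time

db["rec3"] = "changed outside the portion"       # change outside the portion
second = portion_hash(db, ["rec1", "rec2"])      # subsequent moment in time
print("portion unchanged:", first == second)     # True

db["rec1"] = "tampered"                          # change inside the portion
third = portion_hash(db, ["rec1", "rec2"])
print("portion unchanged:", first == third)      # False
```

    Matching hashes verify that the monitored portion is unchanged; a mismatch flags modification, without having to store or compare the data itself.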

  3. [Implementation of a computerized pharmacological database for pediatric use].

    PubMed

    Currò, V; Grimaldi, V; Polidori, G; Cascioli, E; Lanni, R; De Luca, F; D'Atri, A; Bernabei, A

    1990-01-01

    The authors present a pharmacological database to support the teaching and care activities carried out in the Divisional Paediatric Ambulatory of the Catholic University of Rome. This database is included in an integrated system, ARPIA (Ambulatory and Research in Pediatric by Information Assistance), designed to manage ambulatory paediatric data. ARPIA has been implemented using a relational DBMS that is inexpensive and widely available on personal computers. For each drug the database specifies: the active ingredient and its code number, clinical uses, doses, contra-indications and precautions, adverse effects, and the packagings available on the market. All of this is shown on a single form that appears on the screen and allows fast reading of the most important elements characterizing every drug. Included drugs can be searched through three different indexes: active ingredient, proprietary preparation, and clinical use. It is also possible to obtain a complete report on the drugs requested by the user. The system allows the user to modify each element of the form without modifying the program. The system also includes a quick-reference handbook containing, for every active ingredient, the complete list of Italian proprietary medicines. The system aims to give paediatricians and ambulatory health staff a better knowledge of the most commonly used drugs, to improve therapy through more effective use of pharmacological agents, and above all to serve as a training device not only for specialists but also for students.

  4. Citation Analysis of Hepatitis Monthly by Journal Citation Report (ISI), Google Scholar, and Scopus.

    PubMed

    Miri, Seyyed Mohammad; Raoofi, Azam; Heidari, Zahra

    2012-09-01

    Citation analysis, one of the most widely used methods of bibliometrics, can be used to compute various impact measures for scholars based on data from citation databases. Journal Citation Reports (JCR) from Thomson Reuters provides an annual report in the form of an impact factor (IF) for each journal. We aimed to evaluate the citation parameters of Hepatitis Monthly in JCR in 2010 and compare them with Google Scholar (GS) and Scopus (Sc). All articles of Hepat Mon published in 2008 and 2009 that had been cited in 2010 in three databases, including Web of Science (WoS), Sc, and GS, were gathered in a spreadsheet. The IFs were calculated manually. Among the 104 total published articles, the accuracy rates of GS and Sc in recording the total number of articles were 96% and 87.5%, respectively. There was a difference between the IFs in the three databases (0.793 in ISI [Institute for Scientific Information], 0.945 in Sc, and 0.85 in GS). Overall, the missing rate of citations in ISI was 4%. Original articles were the main cited types, whereas guidelines and clinical challenges were the least cited. None of the three databases succeeded in recording all articles published in the journal. Despite its high sensitivity compared with Sc, GS cannot be a reliable source for indexing, since it lacks screening in its data collection and has low specificity. Using the average of the three IFs is suggested to find the correct IF. Editors should be more aware of the role of original articles in increasing the IF and of the potential efficacy of review articles on the long-term impact factor.
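
    The impact-factor arithmetic underlying this comparison can be sketched as below. The 82-citation count is a made-up illustration; only the 104-article total appears in the abstract.

```python
# Hedged sketch of the standard JCR impact-factor arithmetic discussed in
# the abstract; the citation count is invented for illustration and is not
# the journal's real 2010 figure.
def impact_factor(citations_to_prev_two_years, items_in_prev_two_years):
    # IF(Y) = citations received in year Y to items published in Y-1 and Y-2,
    #         divided by the number of citable items published in Y-1 and Y-2
    return citations_to_prev_two_years / items_in_prev_two_years

# e.g. 104 citable items from 2008-2009 receiving 82 citations in 2010
print(round(impact_factor(82, 104), 3))   # 0.788
```

    The same formula applied to the citation counts recorded in each database (WoS, Scopus, Google Scholar) is what produces the three diverging IF values the abstract reports.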

  5. sc-PDB: an annotated database of druggable binding sites from the Protein Data Bank.

    PubMed

    Kellenberger, Esther; Muller, Pascal; Schalon, Claire; Bret, Guillaume; Foata, Nicolas; Rognan, Didier

    2006-01-01

    The sc-PDB is a collection of 6415 three-dimensional structures of binding sites found in the Protein Data Bank (PDB). Binding sites were extracted from all high-resolution crystal structures in which a complex between a protein cavity and a small-molecular-weight ligand could be identified. Importantly, ligands are considered from a pharmacological and not a structural point of view. Therefore, solvents, detergents, and most metal ions are not stored in the sc-PDB. Ligands are classified into four main categories: nucleotides (< 4-mer), peptides (< 9-mer), cofactors, and organic compounds. The corresponding binding site is formed by all protein residues (including amino acids, cofactors, and important metal ions) with at least one atom within 6.5 angstroms of any ligand atom. The database was carefully annotated by browsing several protein databases (PDB, UniProt, and GO) and storing, for every sc-PDB entry, the following features: protein name, function, source, domain and mutations, ligand name, and structure. The ligand repository has also been characterized by diversity analysis of molecular scaffolds, and several chemoinformatics descriptors were computed to better understand the chemical space covered by the stored ligands. The sc-PDB may be used for several purposes: (i) screening a collection of binding sites for predicting the most likely target(s) of any ligand, (ii) analyzing the molecular similarity between different cavities, and (iii) deriving rules that describe the relationship between ligand pharmacophoric points and active-site properties. The database is periodically updated and accessible on the web at http://bioinfo-pharma.u-strasbg.fr/scPDB/.
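    The 6.5 Å neighbourhood rule that defines an sc-PDB binding site can be sketched directly; this is a simplified illustration (the residue representation and coordinates are invented, not the sc-PDB pipeline):

```python
import math

CUTOFF = 6.5  # angstroms, as in the sc-PDB binding-site definition

def within_cutoff(residue_atoms, ligand_atoms, cutoff=CUTOFF):
    """True if any residue atom lies within `cutoff` of any ligand atom."""
    return any(
        math.dist(a, b) <= cutoff
        for a in residue_atoms
        for b in ligand_atoms
    )

def binding_site(residues, ligand_atoms):
    """residues: dict mapping residue id -> list of (x, y, z) atom coords.
    Returns the ids of residues belonging to the binding site."""
    return [rid for rid, atoms in residues.items()
            if within_cutoff(atoms, ligand_atoms)]
```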

  6. Application of Energy Function as a Measure of Error in the Numerical Solution for Online Transient Stability Assessment

    NASA Astrophysics Data System (ADS)

    Sarojkumar, K.; Krishna, S.

    2016-08-01

    Online dynamic security assessment (DSA) is a computationally intensive task. In order to reduce the amount of computation, screening of contingencies is performed. Screening involves analyzing the contingencies with the system described by a simpler model so that the computation requirement is reduced. Screening identifies those contingencies which are certain not to cause instability and hence can be eliminated from further scrutiny. The numerical method and the step size used for screening should be chosen as a compromise between speed and accuracy. This paper proposes the use of an energy function as a measure of error in the numerical solution used for screening contingencies. The proposed measure of error can be used to determine the most accurate numerical method satisfying the time constraint of online DSA. Case studies on a 17-generator system are reported.

  7. Developing and validating predictive decision tree models from mining chemical structural fingerprints and high-throughput screening data in PubChem.

    PubMed

    Han, Lianyi; Wang, Yanli; Bryant, Stephen H

    2008-09-25

    Recent advances in high-throughput screening (HTS) techniques and readily available compound libraries generated using combinatorial chemistry or derived from natural products enable the testing of millions of compounds in a matter of days. Due to the amount of information produced by HTS assays, it is a very challenging task to mine the HTS data for findings of potential interest in drug development research. Computational approaches for the analysis of HTS results face great challenges due to the large quantity of information and the significant amount of erroneous data produced. In this study, Decision Tree (DT) based models were developed to discriminate compound bioactivities by using their chemical structure fingerprints provided in the PubChem system http://pubchem.ncbi.nlm.nih.gov. The DT models were examined for filtering biological activity data contained in four assays deposited in the PubChem Bioassay Database, including assays tested for 5HT1a agonists, antagonists, and HIV-1 RT-RNase H inhibitors. The 10-fold Cross Validation (CV) sensitivity, specificity and Matthews Correlation Coefficient (MCC) for the models are 57.2~80.5%, 97.3~99.0% and 0.4~0.5, respectively. A further evaluation was also performed for DT models built for two independent bioassays, in which inhibitors of the same HIV RNase target were screened using different compound libraries; this experiment yielded enrichment factors of 4.4 and 9.7. Our results suggest that the designed DT models can be used as a virtual screening technique as well as a complement to traditional approaches for hit selection.
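    The performance measures quoted above (sensitivity, specificity, MCC) all follow from confusion-matrix counts; a minimal sketch with illustrative counts (not the paper's data):

```python
import math

def classifier_stats(tp, tn, fp, fn):
    """Sensitivity, specificity and Matthews Correlation Coefficient
    computed from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return sensitivity, specificity, mcc

# Illustrative counts for an imbalanced screen, where most compounds
# are inactive, as is typical of HTS data:
sens, spec, mcc = classifier_stats(tp=8, tn=90, fp=10, fn=2)
```

High specificity with moderate sensitivity, as in the ranges above, is the usual trade-off when active compounds are rare.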

  8. Solitary Pure Ground-Glass Nodules 5 mm or Smaller: Frequency of Growth.

    PubMed

    Kakinuma, Ryutaro; Muramatsu, Yukio; Kusumoto, Masahiko; Tsuchida, Takaaki; Tsuta, Koji; Maeshima, Akiko Miyagi; Asamura, Hisao; Moriyama, Noriyuki

    2015-09-01

    To clarify the percentage of solitary pure ground-glass nodules (SPGGNs) 5 mm or smaller that grow and develop into invasive adenocarcinomas. This study was approved by the institutional review board, and informed consent was obtained from all people who were screened. From February 2004 through December 2007, 7294 participants underwent screening for lung cancer with computed tomographic (CT) imaging. The nodule database was reviewed to identify SPGGNs 5 mm or smaller. Growth of the SPGGNs was evaluated as of March 31, 2013. In cases of pathologic analysis-proven adenocarcinomas that developed from SPGGNs 5 mm or smaller, solid components were evaluated. Percentages, 95% confidence intervals, and means were calculated. At baseline screening, 438 SPGGNs 5 mm or smaller were identified, and during the study period one SPGGN 5 mm or smaller developed de novo. Of the 439 SPGGNs, 394 were stable and 45 (10.3% [95% confidence interval: 7.5%, 13.7%]), including newly developed SPGGN, grew. Of the 45 SPGGNs that grew, 0.9% (four of 439 [95% confidence interval: 0.3%, 2.3%]) developed into adenocarcinomas (two minimally invasive [including the newly developed SPGGN] and two invasive). The mean period between baseline CT screening and the appearance of solid components in the four adenocarcinomas was 3.6 years. Of SPGGNs 5 mm or smaller, approximately 10% will grow and 1% will develop into invasive adenocarcinomas or minimally invasive adenocarcinomas. SPGGNs 5 mm or smaller should be rescanned 3.5 years later to look for development of a solid component.

  9. Television viewing, computer use and total screen time in Canadian youth.

    PubMed

    Mark, Amy E; Boyce, William F; Janssen, Ian

    2006-11-01

    Research has linked excessive television viewing and computer use in children and adolescents to a variety of health and social problems. Current recommendations are that screen time in children and adolescents should be limited to no more than 2 h per day. To determine the percentage of Canadian youth meeting the screen time guideline recommendations. The representative study sample consisted of 6942 Canadian youth in grades 6 to 10 who participated in the 2001/2002 World Health Organization Health Behaviour in School-Aged Children survey. Only 41% of girls and 34% of boys in grades 6 to 10 watched 2 h or less of television per day. Once the time of leisure computer use was included and total daily screen time was examined, only 18% of girls and 14% of boys met the guidelines. The prevalence of those meeting the screen time guidelines was higher in girls than boys. Fewer than 20% of Canadian youth in grades 6 to 10 met the total screen time guidelines, suggesting that increased public health interventions are needed to reduce the number of leisure time hours that Canadian youth spend watching television and using the computer.

  10. SCREENOP: A Computer Assisted Model for ASW (Anti-Submarine Warfare) Screen Design.

    DTIC Science & Technology

    1983-09-01

    AD-A736 892. SCREENOP: A Computer Assisted Model for ASW (Antisubmarine Warfare) Screen Design. Naval Postgraduate School, Monterey, California. Thesis by William Joseph... This thesis is a description of the Naval Postgraduate School's version of

  11. Development of a Consumer Product Ingredient Database for Chemical Exposure Screening and Prioritization

    EPA Science Inventory

    Consumer products are a primary source of chemical exposures, yet little structured information is available on the chemical ingredients of these products and the concentrations at which ingredients are present. To address this data gap, we created a database of chemicals in cons...

  12. Identifying Toxicity Pathways with ToxCast High-Throughput Screening and Applications to Predicting Developmental Toxicity

    EPA Science Inventory

    Results from rodent and non-rodent prenatal developmental toxicity tests for over 300 chemicals have been curated into the relational database ToxRefDB. These same chemicals have been run in concentration-response format through over 500 high-throughput screening assays assessin...

  13. Evaluating Computer Screen Time and Its Possible Link to Psychopathology in the Context of Age: A Cross-Sectional Study of Parents and Children

    PubMed Central

    Ross, Sharon; Silman, Zmira; Maoz, Hagai; Bloch, Yuval

    2015-01-01

    Background Several studies have suggested that high levels of computer use are linked to psychopathology. However, there is ambiguity about what should be considered normal or over-use of computers. Furthermore, the nature of the link between computer usage and psychopathology is controversial. The current study utilized the context of age to address these questions. Our hypothesis was that the context of age will be paramount for differentiating normal from excessive use, and that this context will allow a better understanding of the link to psychopathology. Methods In a cross-sectional study, 185 parents and children aged 3–18 years were recruited in clinical and community settings. They were asked to fill out questionnaires regarding demographics, functional and academic variables, computer use as well as psychiatric screening questionnaires. Using a regression model, we identified 3 groups of normal-use, over-use and under-use and examined known factors as putative differentiators between the over-users and the other groups. Results After modeling computer screen time according to age, factors linked to over-use were: decreased socialization (OR 3.24, Confidence interval [CI] 1.23–8.55, p = 0.018), difficulty to disengage from the computer (OR 1.56, CI 1.07–2.28, p = 0.022) and age, though borderline-significant (OR 1.1 each year, CI 0.99–1.22, p = 0.058). While psychopathology was not linked to over-use, post-hoc analysis revealed that the link between increased computer screen time and psychopathology was age-dependent and solidified as age progressed (p = 0.007). Unlike computer usage, the use of small-screens and smartphones was not associated with psychopathology. Conclusions The results suggest that computer screen time follows an age-based course. We conclude that differentiating normal from over-use as well as defining over-use as a possible marker for psychiatric difficulties must be performed within the context of age. 
If verified by additional studies, future research should integrate those views in order to better understand the intricacies of computer over-use. PMID:26536037

  14. Evaluating Computer Screen Time and Its Possible Link to Psychopathology in the Context of Age: A Cross-Sectional Study of Parents and Children.

    PubMed

    Segev, Aviv; Mimouni-Bloch, Aviva; Ross, Sharon; Silman, Zmira; Maoz, Hagai; Bloch, Yuval

    2015-01-01

    Several studies have suggested that high levels of computer use are linked to psychopathology. However, there is ambiguity about what should be considered normal or over-use of computers. Furthermore, the nature of the link between computer usage and psychopathology is controversial. The current study utilized the context of age to address these questions. Our hypothesis was that the context of age will be paramount for differentiating normal from excessive use, and that this context will allow a better understanding of the link to psychopathology. In a cross-sectional study, 185 parents and children aged 3-18 years were recruited in clinical and community settings. They were asked to fill out questionnaires regarding demographics, functional and academic variables, computer use as well as psychiatric screening questionnaires. Using a regression model, we identified 3 groups of normal-use, over-use and under-use and examined known factors as putative differentiators between the over-users and the other groups. After modeling computer screen time according to age, factors linked to over-use were: decreased socialization (OR 3.24, Confidence interval [CI] 1.23-8.55, p = 0.018), difficulty to disengage from the computer (OR 1.56, CI 1.07-2.28, p = 0.022) and age, though borderline-significant (OR 1.1 each year, CI 0.99-1.22, p = 0.058). While psychopathology was not linked to over-use, post-hoc analysis revealed that the link between increased computer screen time and psychopathology was age-dependent and solidified as age progressed (p = 0.007). Unlike computer usage, the use of small-screens and smartphones was not associated with psychopathology. The results suggest that computer screen time follows an age-based course. We conclude that differentiating normal from over-use as well as defining over-use as a possible marker for psychiatric difficulties must be performed within the context of age. 
If verified by additional studies, future research should integrate those views in order to better understand the intricacies of computer over-use.

  15. TR-DB: an open-access database of compounds affecting the ethylene-induced triple response in Arabidopsis.

    PubMed

    Hu, Yuming; Callebert, Pieter; Vandemoortel, Ilse; Nguyen, Long; Audenaert, Dominique; Verschraegen, Luc; Vandenbussche, Filip; Van Der Straeten, Dominique

    2014-02-01

    Small molecules that act as hormone agonists or antagonists are useful tools in fundamental research and are widely applied in agriculture to control hormone effects. High-throughput screening of large chemical compound libraries has yielded new findings in plant biology, with possible future applications in agriculture and horticulture. To further understand ethylene biosynthesis/signaling and its crosstalk with other hormones, we screened a 12,000-compound chemical library using an ethylene-related bioassay of dark-grown Arabidopsis thaliana (L.) Heynh. seedlings. From the initial screening, 1313 (∼11%) biologically active small molecules that altered the phenotype triggered by the ethylene precursor 1-aminocyclopropane-1-carboxylic acid (ACC) were identified. Selection and sorting into classes were based on the angle of curvature of the apical hook and the length and width of the hypocotyl and the root. A MySQL database was constructed (https://chaos.ugent.be/WE15/) that includes basic chemical information on the compounds, images illustrating the phenotypes, phenotype descriptions and classifications. The research perspectives for different classes of hit compounds will be evaluated, and some general tips and pitfalls for customized high-throughput screening will be discussed. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  16. ChemBank: a small-molecule screening and cheminformatics resource database.

    PubMed

    Seiler, Kathleen Petri; George, Gregory A; Happ, Mary Pat; Bodycombe, Nicole E; Carrinski, Hyman A; Norton, Stephanie; Brudz, Steve; Sullivan, John P; Muhlich, Jeremy; Serrano, Martin; Ferraiolo, Paul; Tolliday, Nicola J; Schreiber, Stuart L; Clemons, Paul A

    2008-01-01

    ChemBank (http://chembank.broad.harvard.edu/) is a public, web-based informatics environment developed through a collaboration between the Chemical Biology Program and Platform at the Broad Institute of Harvard and MIT. This knowledge environment includes freely available data derived from small molecules and small-molecule screens and resources for studying these data. ChemBank is unique among small-molecule databases in its dedication to the storage of raw screening data, its rigorous definition of screening experiments in terms of statistical hypothesis testing, and its metadata-based organization of screening experiments into projects involving collections of related assays. ChemBank stores an increasingly varied set of measurements derived from cells and other biological assay systems treated with small molecules. Analysis tools are available and are continuously being developed that allow the relationships between small molecules, cell measurements, and cell states to be studied. Currently, ChemBank stores information on hundreds of thousands of small molecules and hundreds of biomedically relevant assays that have been performed at the Broad Institute by collaborators from the worldwide research community. The goal of ChemBank is to provide life scientists unfettered access to biomedically relevant data and tools heretofore available primarily in the private sector.

  17. [Virtual screening of anti-angiogenesis flavonoids from Sophora flavescens].

    PubMed

    Chen, Xi-Xin; Liu, Yi; Huang, Rong; Zhao, Lin-Lin; Chen, Lei; Wang, Shu-Mei

    2017-03-01

    Angiogenesis is a dynamic, multi-step process, and about 70 diseases are known to be related to it. Both experimental results and literature reports showed that Sophora flavescens inhibits angiogenesis significantly, but the material basis and the mechanism of action have remained unclear. In this study, molecular docking was used to screen anti-angiogenesis flavonoids from the roots of S. flavescens. One hundred and twenty-six flavonoids selected from S. flavescens were screened in the docking ligand database with six targets (VEGF-a, TEK, KDR, Flt1, FGFR1 and FGFR2) as the receptors. In addition, the small-molecule approved drugs for these targets from the DrugBank database were used as a reference, with the minimum score of each target's approved drugs as the threshold. The LibDock module in Discovery Studio 2.5 (DS2.5) software was applied to screen the compounds. As a result, 37 compounds were identified whose scores were higher than the minimum score of the approved drugs and ranked in the top 10%. Finally, the anti-angiogenesis mechanism of the flavonoids was preliminarily revealed, providing a new approach for the development of angiogenesis-inhibitor drugs. Copyright© by the Chinese Pharmaceutical Association.
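    The two-part hit filter described above (beat the approved-drug minimum score and rank in the top 10%) can be sketched as follows; the function, compound names and score values are invented for illustration (LibDock itself is part of the commercial Discovery Studio suite):

```python
def select_hits(scores, drug_min_score, top_frac=0.10):
    """Keep compounds whose docking score exceeds the minimum score of
    the target's approved drugs AND falls in the top `top_frac` of all
    scored compounds (higher LibDock-style score = better fit)."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    n_top = max(1, round(len(ranked) * top_frac))
    return {name: s for name, s in ranked[:n_top] if s >= drug_min_score}

# Invented scores for 20 hypothetical compounds:
scores = {f"flavonoid_{i}": float(i) for i in range(1, 21)}
hits = select_hits(scores, drug_min_score=15.0)
```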

  18. Domain fusion analysis by applying relational algebra to protein sequence and domain databases

    PubMed Central

    Truong, Kevin; Ikura, Mitsuhiko

    2003-01-01

    Background Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method for domain fusion analysis that leverages these efforts will become increasingly powerful. Results This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From the scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypotheses. Results can be viewed at . Conclusion As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time. PMID:12734020
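    The relational-algebra idea, finding a "Rosetta stone" protein that fuses two domains occurring separately in two other proteins, maps directly onto a SQL self-join. A toy sketch in SQLite (the table layout and all protein/domain names are invented for illustration, not the paper's schema):

```python
import sqlite3

# Toy domain-assignment table; a real analysis would load Pfam domain
# hits for SWISS-PROT+TrEMBL sequences.
rows = [("fusAB", "DomA"), ("fusAB", "DomB"),
        ("protA", "DomA"), ("protB", "DomB")]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pd (protein TEXT, domain TEXT)")
con.executemany("INSERT INTO pd VALUES (?, ?)", rows)

# Self-join: proteins p1 and p2 are predicted functionally linked
# because some "Rosetta stone" protein carries a domain of each.
query = """
SELECT DISTINCT p1.protein, p2.protein, r1.protein AS rosetta
FROM pd p1
JOIN pd r1 ON r1.domain = p1.domain AND r1.protein != p1.protein
JOIN pd r2 ON r2.protein = r1.protein AND r2.domain != r1.domain
JOIN pd p2 ON p2.domain = r2.domain AND p2.protein != r2.protein
WHERE p1.protein < p2.protein
"""
links = con.execute(query).fetchall()
print(links)
```

Because the whole computation is expressed in standard SQL, it can be re-run as the underlying sequence and domain tables grow, which is the dynamic-update property the conclusion highlights.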

  19. An algorithm of discovering signatures from DNA databases on a computer cluster.

    PubMed

    Lee, Hsiao Ping; Sheu, Tzu-Fang

    2014-10-05

    Signatures are short sequences that are unique and not similar to any other sequence in a database; they can be used as the basis for identifying different species. Although several signature discovery algorithms have been proposed in the past, they require the entire database to be loaded into memory, which restricts the amount of data they can process and makes them unable to handle databases with large amounts of data. These algorithms also use sequential models and therefore have slow discovery speeds, leaving room for efficiency improvements. In this research, we introduce a divide-and-conquer strategy for signature discovery and propose a parallel signature discovery algorithm for a computer cluster. The algorithm applies the divide-and-conquer strategy to overcome the existing algorithms' inability to process large databases, and uses a parallel computing mechanism to effectively improve the efficiency of signature discovery. Even when run with only the memory of a regular personal computer, the algorithm can process large databases, such as the human whole-genome EST database, that the existing algorithms could not. The proposed algorithm is not limited by the amount of usable memory and can rapidly find signatures in large databases, making it useful in applications such as Next Generation Sequencing and other large-database analysis and processing. The implementation of the proposed algorithm is available at http://www.cs.pu.edu.tw/~fang/DDCSDPrograms/DDCSD.htm.
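    The divide-and-conquer idea can be sketched with k-mer counting: count k-mers per sequence chunk, merge the partial counts, and keep the k-mers seen exactly once. This uses a simplified exact-uniqueness criterion (the published method also rejects similar, not just identical, sequences, and distributes the chunks across cluster workers):

```python
from collections import Counter

def kmers(seq, k):
    """All overlapping substrings of length k in seq."""
    return (seq[i:i + k] for i in range(len(seq) - k + 1))

def find_signatures(sequences, k):
    """Divide: count k-mers per sequence chunk (each chunk could be one
    worker's share on a cluster). Conquer: merge the partial counts.
    A k-mer occurring exactly once in the whole database is a candidate
    signature under this simplified uniqueness criterion."""
    total = Counter()
    for seq in sequences:
        total.update(kmers(seq, k))  # partial counts merged into total
    return {m for m, c in total.items() if c == 1}

sigs = find_signatures(["ACGT", "CGTA"], k=3)  # "CGT" occurs twice
```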

  20. Sankofa pediatric HIV disclosure intervention cyber data management: building capacity in a resource-limited setting and ensuring data quality.

    PubMed

    Catlin, Ann Christine; Fernando, Sumudinie; Gamage, Ruwan; Renner, Lorna; Antwi, Sampson; Tettey, Jonas Kusah; Amisah, Kofi Aikins; Kyriakides, Tassos; Cong, Xiangyu; Reynolds, Nancy R; Paintsil, Elijah

    2015-01-01

    Prevalence of pediatric HIV disclosure is low in resource-limited settings. Innovative, culturally sensitive, and patient-centered disclosure approaches are needed. Conducting such studies in resource-limited settings is not trivial considering the challenges of capturing, cleaning, and storing clinical research data. To overcome some of these challenges, the Sankofa pediatric disclosure intervention adopted an interactive cyber infrastructure for data capture and analysis. The Sankofa Project database system is built on the HUBzero cyber infrastructure ( https://hubzero.org ), an open source software platform. The hub database components support: (1) data management - the "databases" component creates, configures, and manages database access, backup, repositories, applications, and access control; (2) data collection - the "forms" component is used to build customized web case report forms that incorporate common data elements and include tailored form submit processing to handle error checking, data validation, and data linkage as the data are stored to the database; and (3) data exploration - the "dataviewer" component provides powerful methods for users to view, search, sort, navigate, explore, map, graph, visualize, aggregate, drill-down, compute, and export data from the database. The Sankofa cyber data management tool supports a user-friendly, secure, and systematic collection of all data. We have screened more than 400 child-caregiver dyads and enrolled nearly 300 dyads, with tens of thousands of data elements. The dataviews have successfully supported all data exploration and analysis needs of the Sankofa Project. Moreover, the ability of the sites to query and view data summaries has proven to be an incentive for collecting complete and accurate data. The data system has all the desirable attributes of an electronic data capture tool. 
It also provides an added advantage of building data management capacity in resource-limited settings due to its innovative data query and summary views and availability of real-time support by the data management team.

Top