PyGOLD: a python based API for docking based virtual screening workflow generation.
Patel, Hitesh; Brinkjost, Tobias; Koch, Oliver
2017-08-15
Molecular docking is one of the most successful approaches in the structure-based discovery and development of bioactive molecules in chemical biology and medicinal chemistry. Because of the large amount of computational time it still requires, docking is often the last step in a virtual screening approach. Such screenings are set up as workflows spanning many steps, each aimed at a different filtering task. These workflows can be automated in large part using Python-based toolkits, except for the docking step when it relies on the docking software GOLD: within an automated virtual screening workflow it is not feasible to use the GUI between steps to change the GOLD configuration file. Therefore, a Python module called PyGOLD was developed to parse, edit and write the GOLD configuration file and thereby automate docking-based virtual screening workflows. The latest version of PyGOLD, its documentation and example scripts are available at http://www.ccb.tu-dortmund.de/koch or http://www.agkoch.de. PyGOLD is implemented in Python and can be imported as a standard Python module without any further dependencies. Contact: oliver.koch@agkoch.de, oliver.koch@tu-dortmund.de. Supplementary data are available at Bioinformatics online.
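The parse-edit-write round trip that PyGOLD automates can be pictured with a short sketch. The snippet below is illustrative only: it treats the configuration file as simple "key = value" lines and does not use the actual PyGOLD API, whose function names and file grammar differ.

```python
# Illustrative sketch of a config-file round trip (NOT the real PyGOLD API;
# the "key = value" grammar is a simplifying assumption for this example).

def parse_conf(text):
    """Parse 'key = value' lines into a dict; remember the file layout."""
    settings, layout = {}, []
    for line in text.splitlines():
        if "=" in line and not line.lstrip().startswith("#"):
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()
            layout.append(key.strip())      # placeholder for re-emission
        else:
            layout.append(line)             # comments etc. kept verbatim
    return settings, layout

def write_conf(settings, layout):
    """Re-emit the file, substituting possibly edited values."""
    return "\n".join(f"{item} = {settings[item]}" if item in settings else item
                     for item in layout)

conf = "# docking run\nndocks = 10\nligand_data_file = ligand.mol2"
settings, layout = parse_conf(conf)
settings["ndocks"] = "25"                   # edit between workflow steps
edited = write_conf(settings, layout)
```

In a workflow, such an edit step would sit between filtering stages, replacing the manual GUI interaction the abstract describes.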
Maximum unbiased validation (MUV) data sets for virtual screening based on PubChem bioactivity data.
Rohrer, Sebastian G; Baumann, Knut
2009-02-01
Refined nearest neighbor analysis was recently introduced for the analysis of virtual screening benchmark data sets. It constitutes a technique from the field of spatial statistics and provides a mathematical framework for the nonparametric analysis of mapped point patterns. Here, refined nearest neighbor analysis is used to design benchmark data sets for virtual screening based on PubChem bioactivity data. A workflow is devised that purges data sets of compounds active against pharmaceutically relevant targets from unselective hits. Topological optimization using experimental design strategies monitored by refined nearest neighbor analysis functions is applied to generate corresponding data sets of actives and decoys that are unbiased with regard to analogue bias and artificial enrichment. These data sets provide a tool for Maximum Unbiased Validation (MUV) of virtual screening methods. The data sets and a software package implementing the MUV design workflow are freely available at http://www.pharmchem.tu-bs.de/lehre/baumann/MUV.html.
Performance Studies on Distributed Virtual Screening
Krüger, Jens; de la Garza, Luis; Kohlbacher, Oliver; Nagel, Wolfgang E.
2014-01-01
Virtual high-throughput screening (vHTS) is an invaluable method in modern drug discovery. It permits screening large datasets or databases of chemical structures for those that may bind to a drug target. Virtual screening is typically performed by docking codes, which often run sequentially. Processing of huge vHTS datasets can be parallelized by chunking the data, because individual docking runs are independent of each other. The goal of this work is to find an optimal splitting that maximizes the speedup while accounting for overhead and the cores available on Distributed Computing Infrastructures (DCIs). We have conducted thorough performance studies accounting not only for the runtime of the docking itself but also for structure preparation. The performance studies were conducted via the workflow-enabled science gateway MoSGrid (Molecular Simulation Grid), using benchmark datasets for protein kinases as input. They show that docking workflows can be made to scale almost linearly up to 500 concurrent processes distributed even over large DCIs, thus accelerating vHTS campaigns significantly. PMID:25032219
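The splitting trade-off described above can be captured in a toy model: each chunk pays a fixed scheduling/preparation overhead, and chunks run in waves across the available cores. All numbers below are invented for illustration, not measurements from the MoSGrid study.

```python
# Toy model of chunked docking: independent runs, fixed per-chunk overhead,
# jobs executed in waves over a limited number of concurrent slots.
import math

def makespan(n_ligands, t_dock, chunks, overhead, cores):
    """Wall-clock time when the dataset is split into 'chunks' jobs."""
    per_chunk = math.ceil(n_ligands / chunks) * t_dock + overhead
    waves = math.ceil(chunks / cores)       # jobs run in waves of 'cores'
    return waves * per_chunk

def best_split(n_ligands, t_dock, overhead, cores, max_chunks=2000):
    """Chunk count minimizing the makespan (brute-force over candidates)."""
    return min(range(1, max_chunks + 1),
               key=lambda c: makespan(n_ligands, t_dock, c, overhead, cores))

# Illustrative numbers: 100k ligands, 30 s per docking, 60 s chunk overhead.
serial = makespan(100_000, 30.0, 1, 60.0, 1)
c = best_split(100_000, 30.0, 60.0, 500)
parallel = makespan(100_000, 30.0, c, 60.0, 500)
speedup = serial / parallel
```

With these assumed numbers the optimum is one chunk per core, matching the intuition that overhead argues against splitting finer than the available concurrency.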
Scholz, Christoph; Knorr, Sabine; Hamacher, Kay; Schmidt, Boris
2015-02-23
The formation of a covalent bond with the target is essential for a number of successful drugs, yet tools for covalent docking without significant restrictions regarding warhead or receptor classes are rare and limited in use. In this work we present DOCKTITE, a highly versatile workflow for covalent docking in the Molecular Operating Environment (MOE) combining automated warhead screening, nucleophilic side chain attachment, pharmacophore-based docking, and a novel consensus scoring approach. The comprehensive validation study includes pose predictions of 35 protein/ligand complexes which resulted in a mean RMSD of 1.74 Å and a prediction rate of 71.4% with an RMSD below 2 Å, a virtual screening with an area under the curve (AUC) for the receiver operating characteristics (ROC) of 0.81, and a significant correlation between predicted and experimental binding affinities (ρ = 0.806, R(2) = 0.649, p < 0.005).
Le-Thi-Thu, Huong; Casanola-Martín, Gerardo M; Marrero-Ponce, Yovani; Rescigno, Antonio; Abad, Concepcion; Khan, Mahmud Tareq Hassan
2014-01-01
Tyrosinase is a bifunctional, copper-containing enzyme widely distributed across the phylogenetic tree. It is involved in the production of melanin and other pigments in humans, animals and plants, including skin pigmentation in mammals and the browning process in plants and vegetables. Tyrosinase inhibitors have therefore attracted the attention of the scientific community, owing to their broad applications in the food, cosmetic, agricultural and medicinal fields for avoiding the undesirable effects of abnormal melanin overproduction. However, the search for novel chemicals with antityrosinase activity demands more efficient tools to speed up the discovery of tyrosinase inhibitors. This chapter focuses on the different components of a predictive modeling workflow for the identification and prioritization of potential new compounds with activity against the tyrosinase enzyme. Two chemical structure libraries, the Spectrum Collection and DrugBank, are used in an attempt to combine different virtual screening data mining techniques in a sequential manner, helping to avoid the usually expensive and time-consuming traditional methods. The sequential steps summarized here comprise the use of drug-likeness filters, similarity searching, classification and potency QSAR multiclassifier systems, molecular interaction modeling, and similarity/diversity analysis. Finally, the methodologies shown here provide a rational workflow for virtual screening hit analysis and selection as a promising drug discovery strategy for use in the target identification phase.
Options in virtual 3D, optical-impression-based planning of dental implants.
Reich, Sven; Kern, Thomas; Ritter, Lutz
2014-01-01
If a 3D radiograph, which in today's dentistry often consists of a CBCT dataset, is available for computerized implant planning, the 3D planning should also consider functional prosthetic aspects. In a conventional workflow, the CBCT is done with a specially produced radiopaque prosthetic setup that makes the desired prosthetic situation visible during virtual implant planning. If an exclusively digital workflow is chosen, intraoral digital impressions are taken. On these digital models, the desired prosthetic suprastructures are designed. The entire datasets are virtually superimposed by a "registration" process onto the corresponding structures (teeth) in the CBCT. Thus, both the osseous and prosthetic structures are visible in one single 3D application, making it possible to consider surgical and prosthetic aspects together. After the implant positions have been determined on the computer screen, a drilling template is designed digitally. According to this design (CAD), a template is printed or milled in a CAM process. This template is the first physically extant product in the entire workflow. The article discusses the options and limitations of this workflow.
Developing science gateways for drug discovery in a grid environment.
Pérez-Sánchez, Horacio; Rezaei, Vahid; Mezhuyev, Vitaliy; Man, Duhu; Peña-García, Jorge; den-Haan, Helena; Gesing, Sandra
2016-01-01
Methods for in silico screening of large databases of molecules increasingly complement and replace experimental techniques to discover novel compounds to combat diseases. As these techniques become more complex and computationally costly, we face a growing need to provide the life sciences research community with a convenient tool for high-throughput virtual screening on distributed computing resources. To this end, we recently integrated the biophysics-based drug-screening program FlexScreen into a service applicable to large-scale parallel screening and reusable in the context of scientific workflows. Our implementation is based on Pipeline Pilot and the Simple Object Access Protocol and provides an easy-to-use graphical user interface to construct complex workflows, which can be executed on distributed computing resources, thus accelerating the throughput by several orders of magnitude.
Customizing G Protein-coupled receptor models for structure-based virtual screening.
de Graaf, Chris; Rognan, Didier
2009-01-01
This review focuses on the construction, refinement, and validation of G protein-coupled receptor models for the purpose of structure-based virtual screening. Practical tips and tricks, derived from concrete modeling and virtual screening exercises, are presented for overcoming the problems and pitfalls associated with the different steps of the receptor modeling workflow. The examples include not only rhodopsin-like (class A), but also secretin-like (class B) and glutamate-like (class C) receptors. In addition, the review presents a careful comparative analysis of current crystal structures and their implications for homology modeling. The following themes are discussed: i) the use of experimental anchors in guiding the modeling procedure; ii) amino acid sequence alignments; iii) ligand binding mode accommodation and binding cavity expansion; iv) proline-induced kinks in transmembrane helices; v) binding mode prediction and virtual screening by receptor-ligand interaction fingerprint scoring; vi) extracellular loop modeling; vii) virtual filtering schemes. Finally, an overview of several successful structure-based screening studies shows that receptor models, despite structural inaccuracies, can be efficiently used to find novel ligands.
A reliable computational workflow for the selection of optimal screening libraries.
Gilad, Yocheved; Nadassy, Katalin; Senderowitz, Hanoch
2015-01-01
The experimental screening of compound collections is a common starting point in many drug discovery projects. The success of such screening campaigns critically depends on the quality of the screened library. Many libraries are currently available from different vendors, yet selecting the optimal screening library for a specific project is challenging. We have devised a novel workflow for the rational selection of project-specific screening libraries. The workflow accepts as input a set of virtual candidate libraries and applies the following steps to each library: (1) data curation; (2) assessment of the ADME/T profile; (3) assessment of the number of promiscuous binders/frequent HTS hitters; (4) assessment of internal diversity; (5) assessment of similarity to known active compound(s) (optional); (6) assessment of similarity to in-house or otherwise accessible compound collections (optional). For ADME/T profiling, Lipinski's and Veber's rule-based filters were implemented, and a new blood-brain barrier permeation model was developed and validated (85 and 74% success rates for the training set and test set, respectively). Diversity and similarity descriptors that demonstrated the best performance in selecting either diverse or focused sets of compounds from three databases (DrugBank, CMC and ChEMBL) were identified and used for the diversity and similarity assessments. The workflow was used to analyze nine common screening libraries available from six vendors. The results of this analysis are reported for each library, providing an assessment of its quality. Furthermore, a consensus approach was developed to combine the results of these analyses into a single score for selecting the optimal library under different scenarios. We have devised and tested a new workflow for the rational selection of screening libraries under different scenarios.
The workflow was implemented using the Pipeline Pilot software, yet because it uses generic components it can be easily adapted and reproduced by computational groups interested in the rational selection of screening libraries. Furthermore, the workflow can readily be modified to include additional components. It has been used routinely in our laboratory for the selection of libraries in multiple projects and consistently selects libraries that are well balanced across multiple parameters.
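The final consensus step, combining the per-library assessments from steps (1)-(6) into a single score, can be sketched as follows. Equal weights and min-max scaling are illustrative choices made here; the paper's actual consensus scheme may differ.

```python
# Hedged sketch of consensus scoring across candidate libraries: normalize
# each metric to [0, 1] across libraries, invert "lower is better" metrics,
# and average. Metric names and values are invented for illustration.

def consensus(libraries, higher_is_better):
    """libraries: {name: {metric: value}} -> {name: score in [0, 1]}."""
    metrics = higher_is_better.keys()
    lo = {m: min(lib[m] for lib in libraries.values()) for m in metrics}
    hi = {m: max(lib[m] for lib in libraries.values()) for m in metrics}

    def norm(m, v):
        if hi[m] == lo[m]:
            return 0.5                      # uninformative metric: neutral
        s = (v - lo[m]) / (hi[m] - lo[m])
        return s if higher_is_better[m] else 1.0 - s

    return {name: sum(norm(m, lib[m]) for m in metrics) / len(metrics)
            for name, lib in libraries.items()}

libs = {"A": {"diversity": 0.80, "admet_pass": 0.70, "promiscuous": 0.10},
        "B": {"diversity": 0.60, "admet_pass": 0.90, "promiscuous": 0.30}}
scores = consensus(libs, {"diversity": True,
                          "admet_pass": True,
                          "promiscuous": False})   # fewer promiscuous = better
```

Scenario-specific weighting, as the abstract suggests, would replace the plain average with a weighted one.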
Design and Development of ChemInfoCloud: An Integrated Cloud Enabled Platform for Virtual Screening.
Karthikeyan, Muthukumarasamy; Pandit, Deepak; Bhavasar, Arvind; Vyas, Renu
2015-01-01
The power of cloud and distributed computing has been harnessed to handle the vast and heterogeneous data that must be processed in any virtual screening protocol. A cloud computing platform, ChemInfoCloud, was built and integrated with several chemoinformatics and bioinformatics tools. Its robust engine performs the core chemoinformatics tasks of lead generation, lead optimization and property prediction in a fast and efficient manner. It also provides bioinformatics functionality, including sequence alignment, active-site pose prediction and protein-ligand docking. Text mining, NMR chemical shift (1H, 13C) prediction and reaction fingerprint generation modules for efficient lead discovery are also implemented in this platform. We have developed an integrated problem-solving cloud environment for virtual screening studies that also provides workflow management, better usability and interaction with end users through container-based virtualization (OpenVZ).
Predictive QSAR modeling workflow, model applicability domains, and virtual screening.
Tropsha, Alexander; Golbraikh, Alexander
2007-01-01
Quantitative Structure-Activity Relationship (QSAR) modeling has traditionally been applied as an evaluative approach, i.e., with the focus on developing retrospective and explanatory models of existing data. Model extrapolation was considered, if at all, only in a hypothetical sense, in terms of potential modifications of known biologically active chemicals that could improve compounds' activity. This critical review re-examines the strategy and the output of modern QSAR modeling approaches. We provide examples and arguments suggesting that current methodologies may afford robust and validated models capable of accurately predicting compound properties for molecules not included in the training sets. We discuss a data-analytical modeling workflow developed in our laboratory that incorporates modules for combinatorial QSAR model development (i.e., using all possible binary combinations of available descriptor sets and statistical data modeling techniques), rigorous model validation, and virtual screening of available chemical databases to identify novel biologically active compounds. Our approach places particular emphasis on model validation as well as on the need to define model applicability domains in chemistry space. We present examples of studies where the application of rigorously validated QSAR models to virtual screening identified computational hits that were confirmed by subsequent experimental investigations. The emerging focus of QSAR modeling on target property forecasting establishes it as a predictive, as opposed to evaluative, modeling approach.
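The "combinatorial QSAR" idea of pairing every descriptor set with every modeling technique and keeping only validated models can be sketched briefly. The descriptor/method names, the placeholder scores, and the q2 cutoff below are invented for illustration and are not the authors' settings.

```python
# Sketch of combinatorial QSAR model development: enumerate all pairings of
# descriptor set x modeling technique, keep those passing a validation cutoff.
from itertools import product

descriptor_sets = ["dragon", "maccs", "moe2d"]   # hypothetical names
methods = ["knn", "svm", "rf"]                   # hypothetical names

def cross_validated_q2(desc, method):
    # Placeholder standing in for real model building + cross-validation;
    # returns a fake q2 so the selection logic can be demonstrated.
    return {"dragon": 0.7, "maccs": 0.4, "moe2d": 0.6}[desc] * \
           {"knn": 0.9, "svm": 1.0, "rf": 0.8}[method]

Q2_CUTOFF = 0.6                                  # illustrative acceptance bar
accepted = [(d, m) for d, m in product(descriptor_sets, methods)
            if cross_validated_q2(d, m) >= Q2_CUTOFF]
```

Only the accepted models would then proceed to applicability-domain definition and virtual screening, mirroring the workflow the review describes.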
Melagraki, G; Afantitis, A
2011-01-01
Virtual screening (VS) has received increased attention in recent years due to the large datasets made available, the development of advanced VS techniques, and the encouraging fact that VS has contributed to the discovery of several compounds that have either reached the market or entered clinical trials. Hepatitis C Virus (HCV) nonstructural protein 5B (NS5B) has become an attractive target for the development of antiviral drugs, and many small molecules have been explored as possible HCV NS5B inhibitors. In parallel with experimental practice, VS can serve as a valuable tool in the identification of novel effective inhibitors. Different techniques and workflows have been reported in the literature with the goal of prioritizing possible potent hits, and different virtual screening strategies have been deployed for the identification of novel HCV inhibitors. This work reviews recent applications of virtual screening aimed at identifying novel potent HCV inhibitors.
2017-01-01
Computational screening is a method to prioritize small-molecule compounds based on the structural and biochemical attributes built from ligand and target information. Previously, we have developed a scalable virtual screening workflow to identify novel multitarget kinase/bromodomain inhibitors. In the current study, we identified several novel N-[3-(2-oxo-pyrrolidinyl)phenyl]-benzenesulfonamide derivatives that scored highly in our ensemble docking protocol. We quantified the binding affinity of these compounds for BRD4(BD1) biochemically and generated cocrystal structures, which were deposited in the Protein Data Bank. As the docking poses obtained in the virtual screening pipeline did not align with the experimental cocrystal structures, we evaluated the predictions of their precise binding modes by performing molecular dynamics (MD) simulations. The MD simulations closely reproduced the experimentally observed protein–ligand cocrystal binding conformations and interactions for all compounds. These results suggest a computational workflow to generate experimental-quality protein–ligand binding models, overcoming limitations of docking results due to receptor flexibility and incomplete sampling, as a useful starting point for the structure-based lead optimization of novel BRD4(BD1) inhibitors. PMID:28884163
When drug discovery meets web search: Learning to Rank for ligand-based virtual screening.
Zhang, Wei; Ji, Lijuan; Chen, Yanan; Tang, Kailin; Wang, Haiping; Zhu, Ruixin; Jia, Wei; Cao, Zhiwei; Liu, Qi
2015-01-01
The rapid emergence of novel chemical substances creates a substantial demand for more sophisticated computational methodologies in drug discovery. In this study, the idea of Learning to Rank from web search was applied to drug virtual screening; it offers two unique capabilities: (1) identifying compounds for novel targets when not enough training data are available for those targets, and (2) integrating heterogeneous data when compound affinities are measured on different platforms. A standard pipeline was designed to carry out Learning to Rank in virtual screening. Six Learning to Rank algorithms were investigated on two public datasets, one collected from the Binding Database and the other the newly published Community Structure-Activity Resource benchmark dataset. The results demonstrate that Learning to Rank is an efficient computational strategy for drug virtual screening, particularly due to its novel use in cross-target virtual screening and heterogeneous data integration. To the best of our knowledge, this is the first application of Learning to Rank in virtual screening. The experiment workflow and algorithm assessment designed in this study provide a standard protocol for other similar studies. All the datasets as well as the implementations of the Learning to Rank algorithms are available at http://www.tongji.edu.cn/~qiliu/lor_vs.html. Graphical Abstract: The analogy between web search and ligand-based drug discovery.
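The pairwise flavor of Learning to Rank can be illustrated with a minimal sketch: learn a linear scoring function so that, for each training pair, the more active compound scores higher (a RankNet-style logistic loss on score differences). The toy "fingerprints" and data below are invented; the paper's pipeline and its six algorithms are more elaborate.

```python
# Minimal pairwise Learning-to-Rank sketch: gradient descent on a logistic
# loss over (better, worse) compound pairs with a linear scoring function.
import math

def train_ranker(pairs, n_features, epochs=200, lr=0.1):
    """pairs: list of (x_better, x_worse) feature vectors."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for better, worse in pairs:
            margin = sum(wi * (b - l) for wi, b, l in zip(w, better, worse))
            grad = -1.0 / (1.0 + math.exp(margin))   # dL/dmargin for log-loss
            for i in range(n_features):
                w[i] -= lr * grad * (better[i] - worse[i])
    return w

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Invented 2-feature "fingerprints": actives tend to have a larger feature 1.
actives = [(0.9, 0.2), (0.8, 0.1), (0.7, 0.3)]
decoys  = [(0.1, 0.4), (0.2, 0.5), (0.3, 0.2)]
pairs = [(a, d) for a in actives for d in decoys]   # every active > every decoy
w = train_ranker(pairs, 2)
ranked = sorted(actives + decoys, key=lambda x: score(w, x), reverse=True)
```

Because only relative preferences are used, affinities measured on different platforms can contribute pairs without sharing a common scale, which is the heterogeneous-data advantage the abstract highlights.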
Bauer, Matthias R; Ibrahim, Tamer M; Vogel, Simon M; Boeckler, Frank M
2013-06-24
The application of molecular benchmarking sets helps to assess the actual performance of virtual screening (VS) workflows. To improve the efficiency of structure-based VS approaches, the selection and optimization of various parameters can be guided by benchmarking. With the DEKOIS 2.0 library, we aim to further extend and complement the collection of publicly available decoy sets. Based on BindingDB bioactivity data, we provide 81 new and structurally diverse benchmark sets for a wide variety of different target classes. To ensure a meaningful selection of ligands, we address several issues that can be found in bioactivity data. We have improved our previously introduced DEKOIS methodology with enhanced physicochemical matching, now including the consideration of molecular charges, as well as a more sophisticated elimination of latent actives in the decoy set (LADS). We evaluate the docking performance of Glide, GOLD, and AutoDock Vina with our data sets and highlight existing challenges for VS tools. All DEKOIS 2.0 benchmark sets will be made accessible at http://www.dekois.com.
Chatterjee, Arindam; Doerksen, Robert J.; Khan, Ikhlas A.
2014-01-01
Calpain-mediated cleavage of the natural CDK5 precursor p35 causes formation of a stable CDK5/p25 complex, which leads to hyperphosphorylation of tau. Inhibition of this complex is thus a viable target for numerous acute and chronic neurodegenerative diseases involving tau protein, including Alzheimer's disease. Since CDK5 has the highest sequence homology with its mitotic counterpart CDK2, our primary goal was to design selective CDK5/p25 inhibitors targeting neurodegeneration. A novel structure-based virtual screening protocol comprising e-pharmacophore models and a virtual screening workflow was used to identify nine compounds from a commercial database containing 2.84 million compounds. An ATP-non-competitive and selective thieno[3,2-c]quinolin-4(5H)-one inhibitor (10) with a ligand efficiency (LE) of 0.3 was identified as the lead molecule. Further SAR optimization led to the discovery of several low-micromolar inhibitors with good selectivity. This research represents a new class of potent ATP-non-competitive CDK5/p25 inhibitors with good CDK2/E selectivity. PMID:25438765
Guasch, Laura; Sala, Esther; Castell-Auví, Anna; Cedó, Lidia; Liedl, Klaus R.; Wolber, Gerhard; Muehlbacher, Markus; Mulero, Miquel; Pinent, Montserrat; Ardévol, Anna; Valls, Cristina; Pujadas, Gerard; Garcia-Vallvé, Santiago
2012-01-01
Background: Although there are successful examples of the discovery of new PPARγ agonists, it has recently been of great interest to identify new PPARγ partial agonists that do not present the adverse side effects caused by PPARγ full agonists. Consequently, the goal of this work was to design, apply and validate a virtual screening workflow to identify novel PPARγ partial agonists among natural products. Methodology/Principal Findings: We have developed a virtual screening procedure based on structure-based pharmacophore construction, protein-ligand docking and electrostatic/shape similarity to discover novel scaffolds of PPARγ partial agonists. From an initial set of 89,165 natural products and natural product derivatives, 135 compounds were identified as potential PPARγ partial agonists with good ADME properties. Ten compounds that represent ten new chemical scaffolds for PPARγ partial agonists were selected for in vitro biological testing, but two of them could not be assayed due to solubility problems. Five of the remaining eight compounds were confirmed as PPARγ partial agonists: they bind to PPARγ, do not or only moderately stimulate the transactivation activity of PPARγ, do not induce adipogenesis of preadipocyte cells and stimulate the insulin-induced glucose uptake of adipocytes. Conclusions/Significance: We have demonstrated that our virtual screening protocol was successful in identifying novel scaffolds for PPARγ partial agonists. PMID:23226391
Searching Fragment Spaces with feature trees.
Lessel, Uta; Wellenzohn, Bernd; Lilienthal, Markus; Claussen, Holger
2009-02-01
Virtual combinatorial chemistry easily produces billions of compounds, for which conventional virtual screening cannot be performed even with the fastest methods available. An efficient solution for such a scenario is the generation of Fragment Spaces, which encode huge numbers of virtual compounds by their fragments/reagents together with rules for how to combine them. Similarity-based searches can be performed in such spaces without ever fully enumerating all virtual products. Here we describe the generation of a huge Fragment Space encoding about 5 × 10^11 compounds based on established in-house synthesis protocols for combinatorial libraries, i.e., we encode practically evaluated combinatorial chemistry protocols in a machine-readable form, rendering them accessible to in silico search methods. We show how searches in this Fragment Space can be integrated as a first step in an overall workflow. This reduces the extremely large number of virtual products by several orders of magnitude, so that the resulting list of molecules becomes manageable for further, more elaborate and time-consuming analysis steps. Results of a case study are presented and discussed, leading to some general conclusions for an efficient expansion of the chemical space to be screened in pharmaceutical companies.
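A back-of-the-envelope calculation shows why Fragment Spaces avoid enumeration: the number of virtual products grows as the product of the reagent-list sizes, while a fragment-level search only touches the fragments themselves. The reagent counts below are invented, chosen merely so the product lands near the 5 × 10^11 scale quoted above.

```python
# Combinatorial explosion vs. fragment-level search (illustrative numbers).
reagent_lists = {"acids": 4_000, "amines": 9_000, "boronics": 14_000}

products = 1
for size in reagent_lists.values():
    products *= size            # full enumeration would need this many molecules

fragments = sum(reagent_lists.values())   # a fragment search touches only these
ratio = products / fragments              # enumeration cost vs. fragment cost
```

Seven orders of magnitude separate the two, which is why similarity search over fragments plus combination rules is the only practical first step in such a workflow.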
μ Opioid receptor: novel antagonists and structural modeling
NASA Astrophysics Data System (ADS)
Kaserer, Teresa; Lantero, Aquilino; Schmidhammer, Helmut; Spetea, Mariana; Schuster, Daniela
2016-02-01
The μ opioid receptor (MOR) is a prominent member of the G protein-coupled receptor family and the molecular target of morphine and other opioid drugs. Despite the long tradition of MOR-targeting drugs, still little is known about the ligand-receptor interactions and structure-function relationships underlying the distinct biological effects upon receptor activation or inhibition. With the resolved crystal structure of the β-funaltrexamine-MOR complex, we aimed at the discovery of novel agonists and antagonists using virtual screening tools, i.e. docking, pharmacophore- and shape-based modeling. We suggest important molecular interactions that active molecules share and that distinguish agonists from antagonists. These results allowed for the generation of theoretically validated in silico workflows that were employed for prospective virtual screening. Of the 18 virtual hits evaluated in in vitro pharmacological assays, three displayed antagonist activity, and the most active compound significantly inhibited morphine-induced antinociception. The newly identified chemotypes hold promise for further development into neurochemical tools for studying the MOR or as potential therapeutic lead candidates.
Ibrahim, Tamer M; Bauer, Matthias R; Boeckler, Frank M
2015-01-01
Structure-based virtual screening techniques can help to identify new lead structures and complement other screening approaches in drug discovery. Prior to docking, the data (protein crystal structures and ligands) should be prepared with great attention to molecular and chemical detail. Using a subset of 18 diverse targets from the recently introduced DEKOIS 2.0 benchmark library, we found differences in the virtual screening performance of two popular docking tools (GOLD and Glide) when employing two different commercial packages (MOE and Maestro) for preparing the input data. We systematically investigated the factors that could be responsible for the differences found in selected sets. For the Angiotensin-I-converting enzyme dataset, preparation of the bioactive molecules clearly exerted the highest influence on VS performance, compared to preparation of the decoys or the target structure. The major contributing factors were different protonation states, molecular flexibility, and differences in the input conformation of the bioactives (particularly for cyclic moieties). In addition, score normalization strategies eliminated the biased docking scores shown by GOLD (ChemPLP) for the larger bioactives and produced better performance. Generalizing these normalization strategies to the 18 DEKOIS 2.0 sets improved performance for the majority of GOLD (ChemPLP) dockings, while it proved detrimental for the majority of Glide (SP) dockings. In conclusion, we exemplify herein issues that can arise during the preparation stage of molecular data and demonstrate the extent to which they can perturb virtual screening performance. We provide insights into the problems that can occur and should be avoided when generating benchmarks to characterize virtual screening performance.
In particular, careful selection of an appropriate molecular preparation setup for the bioactive set and the use of score normalization for docking with GOLD (ChemPLP) appear to be of great importance for screening performance. For virtual screening campaigns, we recommend investing time and effort in including alternative preparation workflows in the generation of the master library, even at the cost of including multiple representations of each molecule. Graphical Abstract: Using DEKOIS 2.0 benchmark sets in structure-based virtual screening to probe the impact of molecular preparation and score normalization.
A virtual data language and system for scientific workflow management in data grid environments
NASA Astrophysics Data System (ADS)
Zhao, Yong
With advances in scientific instrumentation and simulation, scientific data is growing fast in both size and analysis complexity. So-called Data Grids aim to provide high performance, distributed data analysis infrastructure for data-intensive sciences, where scientists distributed worldwide need to extract information from large collections of data, and to share both data products and the resources needed to produce and store them. However, the description, composition, and execution of even logically simple scientific workflows are often complicated by the need to deal with "messy" issues like heterogeneous storage formats and ad-hoc file system structures. We show how these difficulties can be overcome via a typed workflow notation called virtual data language, within which issues of physical representation are cleanly separated from logical typing, and by the implementation of this notation within the context of a powerful virtual data system that supports distributed execution. The resulting language and system are capable of expressing complex workflows in a simple compact form, enacting those workflows in distributed environments, monitoring and recording the execution processes, and tracing the derivation history of data products. We describe the motivation, design, implementation, and evaluation of the virtual data language and system, and the application of the virtual data paradigm in various science disciplines, including astronomy and cognitive neuroscience.
Social Protocols for Agile Virtual Teams
NASA Astrophysics Data System (ADS)
Picard, Willy
Despite many works on collaborative networked organizations (CNOs), CSCW, groupware, workflow systems and social networks, computer support for virtual teams is still insufficient, especially support for agility, i.e., the capability of virtual team members to adapt rapidly and cost-efficiently the way they interact in response to change. In this paper, requirements for computer support of agile virtual teams are presented. Next, an extension of the concept of a social protocol is proposed as a novel model supporting agile interactions within virtual teams. The extended concept of a social protocol consists of an extended social network and a workflow model.
AstroGrid: Taverna in the Virtual Observatory .
NASA Astrophysics Data System (ADS)
Benson, K. M.; Walton, N. A.
This paper reports on AstroGrid's implementation of the Taverna workbench, a tool for designing and executing workflows of tasks in the Virtual Observatory. The workflow approach helps astronomers perform complex task sequences with little technical effort. The visual approach to workflow construction streamlines highly complex analysis over public and private data and requires computational resources as modest as a desktop computer. Some integration issues and future work are discussed in this article.
Patel, Preeti; Singh, Avineesh; Patel, Vijay K; Jain, Deepak K; Veerasamy, Ravichandran; Rajak, Harish
2016-01-01
Histone deacetylase (HDAC) inhibitors can reactivate gene expression and inhibit the growth and survival of cancer cells. The aim was to identify the important pharmacophoric features and correlate the 3D chemical structure with biological activity using 3D-QSAR and pharmacophore modeling studies. The pharmacophore hypotheses were developed using the e-pharmacophore script and the Phase module; a pharmacophore hypothesis represents the 3D arrangement of molecular features necessary for activity. A series of 55 compounds with well-assigned HDAC inhibitory activity was used for 3D-QSAR model development. The best 3D-QSAR model, a five-factor partial least squares (PLS) model with good statistics and predictive ability, yielded Q2 = 0.7293, R2 = 0.9811, a cross-validated coefficient rcv2 = 0.9807 and R2pred = 0.7147 with a low standard deviation (0.0952). Additionally, the selected pharmacophore model DDRRR.419 was used as a 3D query for virtual screening against the ZINC database. In the virtual screening workflow, docking studies (HTVS, SP and XP) were carried out against multiple receptors (PDB IDs: 1T69, 1T64, 4LXZ, 4LY1, 3MAX, 2VQQ, 3C10, 1W22). Finally, six compounds were selected on the basis of high scoring functions (dock scores from -11.2278 to -10.2222 kcal/mol) and structural diversity. The structure-activity correlation was established using virtual screening, docking, energy-based pharmacophore modelling, pharmacophore and atom-based 3D-QSAR models and their validation. The outcomes of these studies could be further employed in the design of novel HDAC inhibitors with anticancer activity.
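The tiered docking funnel used in such workflows (HTVS, then SP, then XP, each pass retaining only the best-scoring fraction of the library) can be sketched generically. Everything below, the toy library, the score values and the keep-fractions, is hypothetical and stands in for real docking calls; it illustrates only the funnel logic, not Glide's actual interface.

```python
# A minimal sketch of a tiered docking funnel (e.g. HTVS -> SP -> XP).
# Score values and keep-fractions are hypothetical stand-ins.

def run_funnel(library, stages):
    """Pass compounds through successive (score_fn, keep_fraction) stages."""
    survivors = list(library)
    for score_fn, keep_fraction in stages:
        scored = sorted(survivors, key=score_fn)   # lower score = better
        n_keep = max(1, int(len(scored) * keep_fraction))
        survivors = scored[:n_keep]
    return survivors

# Toy library of (name, docking-score proxy); more negative = better.
library = [("cpd%d" % i, -5.0 - 0.1 * i) for i in range(100)]

score = lambda c: c[1]
stages = [(score, 0.10),   # "HTVS": keep the best 10%
          (score, 0.50),   # "SP":   keep the best 50% of those
          (score, 0.60)]   # "XP":   keep the best 60% of those

hits = run_funnel(library, stages)
print(len(hits))  # 3 compounds survive the funnel
```

In a real workflow each stage would call a different docking program or precision mode rather than reuse one score list.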
Lokwani, Deepak; Azad, Rajaram; Sarkate, Aniket; Reddanna, Pallu; Shinde, Devanand
2015-08-01
Various scaffolds containing the 1,4-dihydropyrimidine ring were designed by considering the environment of the active sites of the COX-1/COX-2 and 5-LOX enzymes. A structure-based library design approach, including focused library design (virtual combinatorial library design) and virtual screening, was used to select 1,4-dihydropyrimidine scaffolds for simultaneous inhibition of both enzyme pathways (COX-1/COX-2 and 5-LOX). The virtual library on each 1,4-dihydropyrimidine scaffold was enumerated in two alternative ways. In the first, the chemical reagents at the R groups were filtered by docking the scaffold with a single-position substitution, that is, only at R1, or R2, or R3, … Rn, on the COX-2 enzyme using Glide XP docking mode. Structures that did not dock well were removed and the library was enumerated with the filtered reagents. In the second, the single-position docking stage was bypassed and the entire library was enumerated using all chemical reagents, followed by docking on the COX-2 enzyme. The resulting library of approximately 15,629 compounds obtained from both routes was screened for drug-like properties and then for binding affinity against the COX-1 and 5-LOX enzymes using the Virtual Screening Workflow. Finally, 142 hits were obtained and divided into two groups based on their binding affinity for COX-1/COX-2 alone or for both enzyme pathways (COX-1/COX-2 and 5-LOX). Ten molecules were selected, synthesized and evaluated for their COX-1, COX-2 and 5-LOX inhibitory activity. Copyright © 2015 Elsevier Ltd. All rights reserved.
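The two enumeration routes described above can be sketched in a few lines. The reagent records, scores and cutoff below are invented for illustration; `dock_score` stands in for an actual single-substitution docking run.

```python
# Sketch of the two library-enumeration routes: (1) pre-filter reagents
# at each R position by a (hypothetical) single-substitution docking
# score, then enumerate; (2) enumerate the full space first.
from itertools import product

def dock_score(reagent):
    # Hypothetical stand-in for docking the singly substituted scaffold.
    return reagent["score"]

def prefilter_enumerate(r_groups, cutoff):
    """Route 1: drop reagents docking worse than cutoff, then enumerate."""
    kept = [[r for r in pos if dock_score(r) <= cutoff] for pos in r_groups]
    return list(product(*kept))

r1 = [{"name": "Me", "score": -6.0}, {"name": "Ph", "score": -3.0}]
r2 = [{"name": "OH", "score": -7.0}, {"name": "NO2", "score": -4.5}]

library = prefilter_enumerate([r1, r2], cutoff=-5.0)
print(len(library))   # 1: only Me x OH survives the reagent filter

# Route 2: enumerate the full combinatorial space and filter afterwards.
full = list(product(r1, r2))
print(len(full))      # 4
```

Pre-filtering trades a few cheap single-position docking runs for a much smaller library to enumerate and dock in full.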
Maffucci, Irene; Hu, Xiao; Fumagalli, Valentina; Contini, Alessandro
2018-03-01
Nwat-MMGBSA is a variant of MM-PB/GBSA based on the inclusion of a number of explicit water molecules that are the closest to the ligand in each frame of a molecular dynamics trajectory. This method demonstrated improved correlations between calculated and experimental binding energies for both protein-protein interactions and ligand-receptor complexes, in comparison to standard MM-GBSA. A protocol optimization, aimed at maximizing efficacy and efficiency, is discussed here considering penicillopepsin, HIV1-protease, and BCL-XL as test cases. Calculations were performed in triplicate on both classic HPC environments and standard workstations equipped with a GPU card, evidencing no statistical differences in the results. Likewise, no relevant differences in correlation with experiment were observed when performing Nwat-MMGBSA calculations on 4 ns or 1 ns trajectories. A fully automatic workflow for structure-based virtual screening, covering everything from library set-up to docking and Nwat-MMGBSA rescoring, has then been developed. The protocol has been tested against no rescoring or standard MM-GBSA rescoring within a retrospective virtual screening of inhibitors of AmpC β-lactamase and of the Rac1-Tiam1 protein-protein interaction. In both cases, Nwat-MMGBSA rescoring provided a statistically significant increase in the ROC AUCs of between 20% and 30%, compared to docking scoring or to standard MM-GBSA rescoring.
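The core Nwat idea, keeping per frame only the N water molecules closest to the ligand, can be sketched as follows. The coordinates are synthetic; a real protocol would read ligand and water positions from each frame of an MD trajectory.

```python
# Minimal sketch of the Nwat selection: keep the n waters with the
# smallest minimum distance to any ligand atom in a frame.
import math

def closest_waters(ligand_atoms, waters, n):
    """Return the n waters nearest to the ligand (min over ligand atoms)."""
    def min_dist(w):
        return min(math.dist(w, a) for a in ligand_atoms)
    return sorted(waters, key=min_dist)[:n]

# Synthetic single-frame coordinates (x, y, z).
ligand = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
waters = [(0.5, 1.0, 0.0), (5.0, 5.0, 5.0), (2.0, 0.5, 0.0), (9.0, 0.0, 0.0)]

print(closest_waters(ligand, waters, 2))
# [(2.0, 0.5, 0.0), (0.5, 1.0, 0.0)]
```

Repeating this selection frame by frame yields the explicit-water subset that Nwat-MMGBSA adds to the standard MM-GBSA energy evaluation.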
Automated Protocol for Large-Scale Modeling of Gene Expression Data.
Hall, Michelle Lynn; Calkins, David; Sherman, Woody
2016-11-28
With the continued rise of phenotypic- and genotypic-based screening projects, computational methods to analyze, process, and ultimately make predictions in this field take on growing importance. Here we show how automated machine learning workflows can produce models that are predictive of differential gene expression as a function of a compound structure using data from A673 cells as a proof of principle. In particular, we present predictive models with an average accuracy of greater than 70% across a highly diverse ∼1000 gene expression profile. In contrast to the usual in silico design paradigm, where one interrogates a particular target-based response, this work opens the opportunity for virtual screening and lead optimization for desired multitarget gene expression profiles.
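A toy version of this per-gene modeling setup might look like the following, with a 1-nearest-neighbour rule over Tanimoto fingerprint similarity standing in for the actual machine learning method; all fingerprints and labels are invented.

```python
# Toy sketch: one classifier per gene predicting up/down-regulation from
# a compound fingerprint. A 1-NN Tanimoto rule is a hypothetical stand-in
# for the real modeling method.

def tanimoto(a, b):
    """Tanimoto similarity between two fingerprints as sets of on-bits."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def predict(train, fp):
    """train: list of (fingerprint_set, label); return nearest neighbour's label."""
    return max(train, key=lambda t: tanimoto(t[0], fp))[1]

# Invented training data for a single gene's expression response.
train = [({1, 2, 3}, "up"), ({7, 8, 9}, "down")]

print(predict(train, {1, 2, 4}))   # up
print(predict(train, {7, 9, 10}))  # down
```

For a ~1000-gene profile one would fit an independent model per gene and report the mean per-gene accuracy, as the abstract does.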
Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics
Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A.; Caron, Christophe
2015-01-01
Summary: The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. Availability and implementation: http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). Contact: contact@workflow4metabolomics.org PMID:25527831
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guan, Qiang
At exascale, the challenge becomes to develop applications that run at scale and use exascale platforms reliably, efficiently, and flexibly. Workflows become much more complex because they must seamlessly integrate simulation and data analytics. They must include down-sampling, post-processing, feature extraction, and visualization. Power and data transfer limitations require these analysis tasks to be run in-situ or in-transit. We expect successful workflows will comprise multiple linked simulations along with tens of analysis routines. Users will have limited development time at scale and, therefore, must have rich tools to develop, debug, test, and deploy applications. At this scale, successful workflows will compose linked computations from an assortment of reliable, well-defined computation elements, ones that can come and go as required, based on the needs of the workflow over time. We propose a novel framework that utilizes both virtual machines (VMs) and software containers to create a workflow system that establishes a uniform build and execution environment (BEE) beyond the capabilities of current systems. In this environment, applications will run reliably and repeatably across heterogeneous hardware and software. Containers, both commercial (Docker and Rocket) and open-source (LXC and LXD), define a runtime that isolates all software dependencies from the machine operating system. Workflows may contain multiple containers that run different operating systems, different software, and even different versions of the same software. We will run containers in open-source virtual machines (KVM) and emulators (QEMU) so that workflows run on any machine entirely in user-space. On this platform of containers and virtual machines, we will deliver workflow software that provides services, including repeatable execution, provenance, checkpointing, and future proofing.
We will capture provenance about how containers were launched and how they interact to annotate workflows for repeatable and partial re-execution. We will coordinate the physical snapshots of virtual machines with parallel programming constructs, such as barriers, to automate checkpoint and restart. We will also integrate with HPC-specific container runtimes to gain access to accelerators and other specialized hardware to preserve native performance. Containers will link development to continuous integration. When application developers check code in, it will automatically be tested on a suite of different software and hardware architectures.
ChemHTPS - A virtual high-throughput screening program suite for the chemical and materials sciences
NASA Astrophysics Data System (ADS)
Afzal, Mohammad Atif Faiz; Evangelista, William; Hachmann, Johannes
The discovery of new compounds, materials, and chemical reactions with exceptional properties is key to the grand challenges in innovation, energy and sustainability. This process can be dramatically accelerated by means of virtual high-throughput screening (HTPS) of large-scale candidate libraries. The resulting data can further be used to study the underlying structure-property relationships and thus facilitate rational design capability. This approach has been used extensively for many years in the drug discovery community. However, the lack of openly available virtual HTPS tools limits the use of these techniques in various other applications such as photovoltaics, optoelectronics, and catalysis. Thus, we developed ChemHTPS, a general-purpose, comprehensive and user-friendly suite that allows users to efficiently perform large in silico modeling studies and high-throughput analyses in these applications. ChemHTPS also includes a massively parallel molecular library generator, which offers a multitude of options to customize and restrict the scope of the enumerated chemical space and thus tailor it to the demands of specific applications. To streamline the non-combinatorial exploration of chemical space, we incorporate genetic algorithms into the framework. In addition to implementing smarter algorithms, we also focus on ease of use, workflow, and code integration to make this technology more accessible to the community.
VSDMIP: virtual screening data management on an integrated platform
NASA Astrophysics Data System (ADS)
Gil-Redondo, Rubén; Estrada, Jorge; Morreale, Antonio; Herranz, Fernando; Sancho, Javier; Ortiz, Ángel R.
2009-03-01
A novel software package (VSDMIP) for the virtual screening (VS) of chemical libraries, integrated within a MySQL relational database, is presented. Two main features make VSDMIP clearly distinguishable from other existing computational tools: (i) its database, which stores not only ligand information but also the results from every step in the VS process, and (ii) its modular and pluggable architecture, which allows customization of the VS stages (such as the programs used for conformer generation or docking) through the definition of a detailed workflow employing user-configurable XML files. VSDMIP therefore facilitates the storage and retrieval of VS results, easily adapts to the specific requirements of each method and tool used in the experiments, and allows the comparison of different VS methodologies. To validate the usefulness of VSDMIP as an automated tool for carrying out VS, several experiments were run on six protein targets (acetylcholinesterase, cyclin-dependent kinase 2, coagulation factor Xa, estrogen receptor alpha, p38 MAP kinase, and neuraminidase) using nine binary (active/inactive) test sets. The performance of several VS configurations was evaluated by means of enrichment factors and receiver operating characteristic plots.
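A user-configurable XML workflow in the spirit described above might look like this sketch; the tags, attributes and tool names are invented for illustration and are not VSDMIP's actual schema.

```python
# Sketch of a pluggable, XML-defined VS workflow: each stage names the
# tool plugged into it. Tags, attributes and tool names are invented.
import xml.etree.ElementTree as ET

workflow_xml = """
<workflow>
  <stage name="conformers" tool="toolA"/>
  <stage name="docking" tool="toolB"/>
  <stage name="scoring" tool="toolC"/>
</workflow>
"""

stages = [(s.get("name"), s.get("tool"))
          for s in ET.fromstring(workflow_xml).findall("stage")]
for name, tool in stages:
    print(f"{name} -> {tool}")
```

Swapping the `tool` attribute is all it takes to exchange, say, one docking program for another, which is the pluggability the abstract emphasizes.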
BioVeL: a virtual laboratory for data analysis and modelling in biodiversity science and ecology.
Hardisty, Alex R; Bacall, Finn; Beard, Niall; Balcázar-Vargas, Maria-Paula; Balech, Bachir; Barcza, Zoltán; Bourlat, Sarah J; De Giovanni, Renato; de Jong, Yde; De Leo, Francesca; Dobor, Laura; Donvito, Giacinto; Fellows, Donal; Guerra, Antonio Fernandez; Ferreira, Nuno; Fetyukova, Yuliya; Fosso, Bruno; Giddy, Jonathan; Goble, Carole; Güntsch, Anton; Haines, Robert; Ernst, Vera Hernández; Hettling, Hannes; Hidy, Dóra; Horváth, Ferenc; Ittzés, Dóra; Ittzés, Péter; Jones, Andrew; Kottmann, Renzo; Kulawik, Robert; Leidenberger, Sonja; Lyytikäinen-Saarenmaa, Päivi; Mathew, Cherian; Morrison, Norman; Nenadic, Aleksandra; de la Hidalga, Abraham Nieva; Obst, Matthias; Oostermeijer, Gerard; Paymal, Elisabeth; Pesole, Graziano; Pinto, Salvatore; Poigné, Axel; Fernandez, Francisco Quevedo; Santamaria, Monica; Saarenmaa, Hannu; Sipos, Gergely; Sylla, Karl-Heinz; Tähtinen, Marko; Vicario, Saverio; Vos, Rutger Aldo; Williams, Alan R; Yilmaz, Pelin
2016-10-20
Making forecasts about biodiversity and giving support to policy relies increasingly on large collections of data held electronically, and on substantial computational capability and capacity to analyse, model, simulate and predict using such data. However, the physically distributed nature of data resources and of expertise in advanced analytical tools creates many challenges for the modern scientist. Across the wider biological sciences, presenting such capabilities on the Internet (as "Web services") and using scientific workflow systems to compose them for particular tasks is a practical way to carry out robust "in silico" science. However, use of this approach in biodiversity science and ecology has thus far been quite limited. BioVeL is a virtual laboratory for data analysis and modelling in biodiversity science and ecology, freely accessible via the Internet. BioVeL includes functions for accessing and analysing data through curated Web services; for performing complex in silico analysis through exposure of R programs, workflows, and batch processing functions; for on-line collaboration through sharing of workflows and workflow runs; for experiment documentation through reproducibility and repeatability; and for computational support via seamless connections to supporting computing infrastructures. We developed and improved more than 60 Web services with significant potential in many different kinds of data analysis and modelling tasks. We composed reusable workflows using these Web services, also incorporating R programs. Deploying these tools into an easy-to-use and accessible 'virtual laboratory', free via the Internet, we applied the workflows in several diverse case studies. We opened the virtual laboratory for public use and through a programme of external engagement we actively encouraged scientists and third party application and tool developers to try out the services and contribute to the activity. 
Our work shows we can deliver an operational, scalable and flexible Internet-based virtual laboratory to meet new demands for data processing and analysis in biodiversity science and ecology. In particular, we have successfully integrated existing and popular tools and practices from different scientific disciplines to be used in biodiversity and ecological research.
Kinyua, Juliet; Negreira, Noelia; Ibáñez, María; Bijlsma, Lubertus; Hernández, Félix; Covaci, Adrian; van Nuijs, Alexander L N
2015-11-01
Identification of new psychoactive substances (NPS) is challenging. Developing targeted methods for their analysis can be difficult and costly owing to their impermanence on the drug scene. Accurate-mass mass spectrometry (AMMS) using a quadrupole time-of-flight (QTOF) analyzer can be useful for wide-scope screening since it provides sensitive, full-spectrum MS data. Our article presents a qualitative screening workflow based on data-independent acquisition mode (all-ions MS/MS) on liquid chromatography (LC) coupled to QTOF-MS for the detection and identification of NPS in biological matrices. The workflow combines fundamentals of target and suspect screening data-processing techniques in a structured algorithm, allowing the detection and tentative identification of NPS and their metabolites. We have applied the workflow to two actual case studies involving drug intoxications, in which we detected and confirmed the parent compounds ketamine, 25B-NBOMe and 25C-NBOMe, as well as several predicted phase I and II metabolites not previously reported in urine and serum samples. The screening workflow demonstrates added value for the detection and identification of NPS in biological matrices.
Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms.
Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel
2014-01-01
With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow execution on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies.
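One heuristic in the spirit of the paper can be sketched as a weighted cost/performance objective; the VM types, prices and speedups below are invented for illustration and do not correspond to the paper's four algorithms.

```python
# Toy scheduling heuristic: assign an incoming workflow to the VM type
# minimizing a weighted sum of dollar cost and runtime. All numbers are
# invented for illustration.

vm_types = [
    {"name": "small", "cost_per_hr": 0.10, "speedup": 1.0},
    {"name": "large", "cost_per_hr": 2.00, "speedup": 3.0},
]

def schedule(workflow_hours, weight_cost=0.5):
    """Pick the VM type minimizing weight_cost*dollars + (1-weight_cost)*runtime."""
    def objective(vm):
        runtime = workflow_hours / vm["speedup"]
        dollars = runtime * vm["cost_per_hr"]
        return weight_cost * dollars + (1.0 - weight_cost) * runtime
    return min(vm_types, key=objective)["name"]

print(schedule(6.0, weight_cost=0.1))  # performance-weighted choice
print(schedule(6.0, weight_cost=0.9))  # cost-weighted choice
```

Varying the weight moves the scheduler between the performance, cost and price/performance targets the abstract distinguishes.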
RADER: a RApid DEcoy Retriever to facilitate decoy based assessment of virtual screening.
Wang, Ling; Pang, Xiaoqian; Li, Yecheng; Zhang, Ziying; Tan, Wen
2017-04-15
Evaluation of the capacity for separating actives from challenging decoys is a crucial performance metric for molecular docking or a virtual screening workflow. The Directory of Useful Decoys (DUD) and its enhanced version (DUD-E) provide a benchmark for molecular docking, although they contain only a limited set of decoys for a limited number of targets. DecoyFinder was released to compensate for the limitations of DUD and DUD-E in building target-specific decoy sets. However, desirable query-template design, generation of multiple decoy sets of similar quality, and computational speed remain bottlenecks, particularly when the numbers of queried actives and retrieved decoys increase to hundreds or more. Here, we developed a program suite called RApid DEcoy Retriever (RADER) to facilitate decoy-based assessment of virtual screening. The program adopts a novel database-management regime that supports rapid, large-scale retrieval of decoys, enables high portability of databases, and provides multifaceted options for designing initial query templates from large numbers of active ligands and for generating subtle decoy sets. RADER provides two operational modes: as a command-line tool and on a web server. Validation of the performance and efficiency of RADER was also conducted and is described. The RADER web server and a local version are freely available at http://rcidm.org/rader/ . lingwang@scut.edu.cn or went@scut.edu.cn . Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
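Target-specific decoy retrieval of the kind RADER automates can be sketched as property matching within tolerances; the property names, tolerance values and candidate records below are invented for illustration and are not RADER's actual criteria.

```python
# Sketch of property-matched decoy retrieval: a decoy matches an active
# when every physicochemical property falls within a tolerance. All
# properties and tolerances here are illustrative.

TOLERANCES = {"mw": 25.0, "logp": 1.0, "hbd": 1, "hba": 2}

def matches(active, candidate):
    return all(abs(active[k] - candidate[k]) <= tol
               for k, tol in TOLERANCES.items())

def retrieve_decoys(active, candidates, n):
    """Return up to n candidates whose properties match the active."""
    return [c for c in candidates if matches(active, c)][:n]

active = {"mw": 320.0, "logp": 2.5, "hbd": 2, "hba": 5}
candidates = [
    {"mw": 330.0, "logp": 2.0, "hbd": 2, "hba": 6},  # matches
    {"mw": 400.0, "logp": 2.4, "hbd": 2, "hba": 5},  # mw too far off
    {"mw": 310.0, "logp": 3.1, "hbd": 1, "hba": 4},  # matches
]
print(len(retrieve_decoys(active, candidates, 10)))  # 2
```

Matching properties while (ideally) enforcing topological dissimilarity is what keeps decoys challenging yet presumably inactive.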
Maserat, Elham; Seied Farajollah, Seiede Sedigheh; Safdari, Reza; Ghazisaeedi, Marjan; Aghdaei, Hamid Asadzadeh; Zali, Mohammad Reza
2015-01-01
Colorectal cancer is a major cause of morbidity and mortality throughout the world. Colorectal cancer screening is an effective way to reduce morbidity and mortality, and a clinical decision support system (CDSS) plays an important role in predicting the success of screening processes. A DSS is a computer-based information system that improves the delivery of preventive care services. The aim of this article was to detail the engineering of the information requirements and the workflow design of a CDSS for a colorectal cancer screening program. In the first stage, a screening minimum data set was determined: developed and developing countries were analyzed to identify this data set, and information deficiencies and gaps were then determined by checklist. The second stage was a qualitative survey using a semi-structured interview as the study tool, covering the perspectives of 15 users and stakeholders on the CDSS workflow. Finally, the workflow of the DSS for the control program was designed from standard clinical practice guidelines and these perspectives. The screening minimum data set of the national colorectal cancer screening program was defined in five sections: colonoscopy, surgery, pathology, genetics and pedigree data sets. Deficiencies and information gaps were analyzed, a standard screening work process was designed, and the DSS workflow and entry stage were determined. A CDSS facilitates complex decision making for screening and plays a key role in designing optimal interactions between colonoscopy, pathology and laboratory departments. Workflow analysis is also useful for identifying data reconciliation strategies to address documentation gaps. Following the recommendations of the CDSS should improve the quality of colorectal cancer screening.
From Panoramic Photos to a Low-Cost Photogrammetric Workflow for Cultural Heritage 3d Documentation
NASA Astrophysics Data System (ADS)
D'Annibale, E.; Tassetti, A. N.; Malinverni, E. S.
2013-07-01
The research aims to optimize a workflow for architecture documentation: starting from panoramic photos, it draws on available instruments and technologies to propose an integrated, quick and low-cost solution for Virtual Architecture. The broader research background shows how to use spherical panoramic images for architectural metric survey. The input data (oriented panoramic photos), the level of reliability and Image-based Modeling methods constitute an integrated, flexible 3D reconstruction approach: from the professional survey of cultural heritage to its communication in a virtual museum. The proposed work results from the integration and implementation of different techniques (Multi-Image Spherical Photogrammetry, Structure from Motion, Image-based Modeling) with the aim of achieving high metric accuracy and photorealistic performance. Different documentation options are possible within the proposed workflow: from the virtual navigation of spherical panoramas to complex solutions for simulation and virtual reconstruction. VR tools allow the integration of different technologies and the development of new solutions for virtual navigation. Image-based Modeling techniques allow 3D model reconstruction with photorealistic, high-resolution texture. The high resolution of the panoramic photos, together with algorithms for panorama orientation and photogrammetric restitution, ensures high accuracy and high-resolution texture. Automated techniques and their subsequent integration are the subject of this research. Data, suitably processed and integrated, provide different levels of analysis and virtual reconstruction, joining photogrammetric accuracy to the photorealistic performance of the modeled surfaces. Lastly, a new virtual navigation solution is tested: within a single environment, it offers the chance to interact with the high-resolution oriented spherical panoramas and the reconstructed 3D model at once.
The essential roles of chemistry in high-throughput screening triage
Dahlin, Jayme L; Walters, Michael A
2015-01-01
It is increasingly clear that academic high-throughput screening (HTS) and virtual HTS triage suffers from a lack of scientists trained in the art and science of early drug discovery chemistry. Many recent publications report the discovery of compounds by screening that are most likely artifacts or promiscuous bioactive compounds, and these results are not placed into the context of previous studies. For HTS to be most successful, it is our contention that there must exist an early partnership between biologists and medicinal chemists. Their combined skill sets are necessary to design robust assays and efficient workflows that will weed out assay artifacts, false positives, promiscuous bioactive compounds and intractable screening hits, efforts that ultimately give projects a better chance at identifying truly useful chemical matter. Expertise in medicinal chemistry, cheminformatics and purification sciences (analytical chemistry) can enhance the post-HTS triage process by quickly removing these problematic chemotypes from consideration, while simultaneously prioritizing the more promising chemical matter for follow-up testing. It is only when biologists and chemists collaborate effectively that HTS can manifest its full promise. PMID:25163000
Data Intensive Scientific Workflows on a Federated Cloud: CRADA Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzoglio, Gabriele
The Fermilab Scientific Computing Division and the KISTI Global Science Experimental Data Hub Center have built a prototypical large-scale infrastructure to handle scientific workflows of stakeholders to run on multiple cloud resources. The demonstrations have been in the areas of (a) Data-Intensive Scientific Workflows on Federated Clouds, (b) Interoperability and Federation of Cloud Resources, and (c) Virtual Infrastructure Automation to enable On-Demand Services.
Ou-Yang, Si-sheng; Lu, Jun-yan; Kong, Xiang-qian; Liang, Zhong-jie; Luo, Cheng; Jiang, Hualiang
2012-01-01
Computational drug discovery is an effective strategy for accelerating and economizing drug discovery and development process. Because of the dramatic increase in the availability of biological macromolecule and small molecule information, the applicability of computational drug discovery has been extended and broadly applied to nearly every stage in the drug discovery and development workflow, including target identification and validation, lead discovery and optimization and preclinical tests. Over the past decades, computational drug discovery methods such as molecular docking, pharmacophore modeling and mapping, de novo design, molecular similarity calculation and sequence-based virtual screening have been greatly improved. In this review, we present an overview of these important computational methods, platforms and successful applications in this field. PMID:22922346
The standard-based open workflow system in GeoBrain (Invited)
NASA Astrophysics Data System (ADS)
Di, L.; Yu, G.; Zhao, P.; Deng, M.
2013-12-01
GeoBrain is an Earth science Web-service system developed and operated by the Center for Spatial Information Science and Systems, George Mason University. In GeoBrain, a standard-based open workflow system has been implemented to accommodate the automated processing of geospatial data through a set of complex geo-processing functions for advanced product generation. GeoBrain models complex geoprocessing at two levels, the conceptual and the concrete. At the conceptual level, workflows exist in the form of data and service types defined by ontologies. Workflows at the conceptual level are called geo-processing models and are cataloged in GeoBrain as virtual product types. A conceptual workflow is instantiated into a concrete, executable workflow when a user requests a product that matches a virtual product type. Both conceptual and concrete workflows are encoded in Business Process Execution Language (BPEL). A BPEL workflow engine, called BPELPower, has been implemented to execute the workflows for product generation. A provenance capturing service has been implemented to generate the ISO 19115-compliant complete product provenance metadata before and after the workflow execution. The generation of provenance metadata before the workflow execution allows users to examine the usability of the final product before the lengthy and expensive execution takes place. The three modes of workflow execution defined in ISO 19119 (transparent, translucent, and opaque) are available in GeoBrain. A geoprocessing modeling portal has been developed to allow domain experts to develop geoprocessing models at the type level with the support of both data and service/processing ontologies. The geoprocessing models capture the knowledge of the domain experts and become the operational offerings for the products after a proper peer review of the models.
Automated workflow composition based on ontologies and artificial-intelligence technology has been demonstrated successfully. The GeoBrain workflow system has been used in multiple Earth science applications, including the monitoring of global agricultural drought, the assessment of flood damage, the derivation of national crop condition and progress information, and the detection of nuclear proliferation facilities and events.
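The two-level modeling described above can be sketched in a few lines: a conceptual workflow names abstract data and service types, and instantiation binds each type to a concrete catalog entry. The sketch below is illustrative Python, not GeoBrain's actual BPEL machinery; the type names and endpoints are invented.

```python
# Hypothetical sketch of conceptual-to-concrete instantiation: the
# catalog entries and type names are invented, and real GeoBrain binds
# ontology-defined types into executable BPEL, not Python tuples.

CATALOG = {
    "ReflectanceData": "https://example.org/wcs/reflectance",
    "NDVIProcess": "https://example.org/wps/ndvi",
}

def instantiate(conceptual_steps, catalog=CATALOG):
    """Bind each abstract data/service type in a conceptual workflow to
    a concrete service endpoint, failing if the catalog lacks one."""
    concrete = []
    for step_type in conceptual_steps:
        if step_type not in catalog:
            raise LookupError("no concrete service for " + step_type)
        concrete.append((step_type, catalog[step_type]))
    return concrete

# A user's request for a matching "virtual product" triggers instantiation:
workflow = instantiate(["ReflectanceData", "NDVIProcess"])
```

The LookupError mirrors the case where no cataloged service can satisfy part of a requested virtual product.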
Virtual planning for craniomaxillofacial surgery--7 years of experience.
Adolphs, Nicolai; Haberl, Ernst-Johannes; Liu, Weichen; Keeve, Erwin; Menneking, Horst; Hoffmeister, Bodo
2014-07-01
Contemporary computer-assisted surgery systems increasingly allow for virtual simulation of even complex surgical procedures, with ever more realistic predictions. Preoperative workflows are established and different commercial software solutions are available. The potential and feasibility of virtual craniomaxillofacial surgery as an additional planning tool were assessed retrospectively by comparing predictions and surgical results. Since 2006, virtual simulation has been performed in selected patient cases affected by complex craniomaxillofacial disorders (n = 8) in addition to standard surgical planning based on patient-specific 3D models. Virtual planning could be performed for all levels of the craniomaxillofacial framework within a reasonable preoperative workflow. Simulation of even complex skeletal displacements corresponded well with the real surgical results, and soft-tissue simulation proved to be helpful. In combination with classic 3D models showing the underlying skeletal pathology, virtual simulation improved the planning and transfer of craniomaxillofacial corrections. The additional work and expense may be justified by the increased possibilities for visualisation, information, instruction and documentation in selected craniomaxillofacial procedures. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Scientific Workflows and the Sensor Web for Virtual Environmental Observatories
NASA Astrophysics Data System (ADS)
Simonis, I.; Vahed, A.
2008-12-01
Virtual observatories have matured beyond their original domain and are becoming common practice for Earth observation research and policy building. The term Virtual Observatory originally came from the astronomical research community, where virtual observatories provide universal access to the available astronomical data archives of space- and ground-based observatories. Furthermore, as these virtual observatories aim at integrating heterogeneous resources provided by a number of participating organizations, the virtual observatory acts as a coordinating entity that strives for common data analysis techniques and tools based on common standards. The Sensor Web is on its way to becoming one of the major virtual observatories outside the astronomical research community. Like the original observatory, which consists of a number of telescopes, each observing a specific part of the wave spectrum, together with a collection of astronomical instruments, the Sensor Web provides a many-eyed perspective on the current, past, and future situation of our planet and its surrounding spheres. The current view of the Sensor Web is that of a single worldwide collaborative, coherent, consistent and consolidated sensor data collection, fusion and distribution system. The Sensor Web can perform as an extensive monitoring and sensing system that provides timely, comprehensive, continuous and multi-modal observations. This technology is key to monitoring and understanding our natural environment, including key areas such as climate change, biodiversity, and natural disasters on local, regional, and global scales. The Sensor Web concept is well established, with ongoing global research and deployment of Sensor Web middleware and standards, and represents the foundation layer of systems like the Global Earth Observation System of Systems (GEOSS).
The Sensor Web consists of a huge variety of physical and virtual sensors as well as observational data, made available on the Internet through standardized interfaces. All data sets and sensor communication follow well-defined abstract models and corresponding encodings, mostly developed by the OGC Sensor Web Enablement initiative. Scientific progress is currently accelerated by an emerging concept called scientific workflows, which organize and manage complex distributed computations. A scientific workflow represents and records the highly complex processes that a domain scientist typically follows in the exploration, discovery and, ultimately, transformation of raw data into publishable results. The challenge is now to integrate the benefits of scientific workflows with those provided by the Sensor Web in order to leverage all resources for scientific exploration, problem solving, and knowledge generation. Scientific workflows for the Sensor Web represent the next evolutionary step towards efficient, powerful, and flexible Earth observation frameworks and platforms. Such platforms support the entire process from capturing data, through sharing and integrating, to requesting additional observations. Multiple sites and organizations will participate on single platforms, and scientists from different countries and organizations will interact and contribute to large-scale research projects. Simultaneously, the data and information overload becomes manageable, as multiple layers of abstraction free scientists from dealing with underlying data, processing, or storage peculiarities. The vision is of automated investigation and discovery mechanisms that allow scientists to pose queries to the system, which in turn would identify potentially related resources, schedule processing tasks, and assemble all parts into workflows that may satisfy the query.
ERIC Educational Resources Information Center
Dyrberg, Nadia Rahbek; Treusch, Alexander H.; Wiegand, Claudia
2017-01-01
Potential benefits of simulations and virtual laboratory exercises in natural sciences have been both theorised and studied recently. This study reports findings from a pilot study on student attitude, motivation and self-efficacy when using the virtual laboratory programme Labster. The programme allows interactive learning about the workflows and…
Virtual Sensor Web Architecture
NASA Astrophysics Data System (ADS)
Bose, P.; Zimdars, A.; Hurlburt, N.; Doug, S.
2006-12-01
NASA envisions the development of smart sensor webs: intelligent and integrated observation networks that harness distributed sensing assets, their associated continuous and complex data sets, and predictive observation processing mechanisms for timely, collaborative hazard mitigation and enhanced science productivity and reliability. This paper presents the Virtual Sensor Web Infrastructure for Collaborative Science (VSICS) architecture for sustained coordination of (numerical and distributed) model-based processing, closed-loop resource allocation, and observation planning. VSICS's key ideas include: i) rich descriptions of sensors as services based on semantic markup languages like OWL and SensorML; ii) service-oriented workflow composition and repair for simple and ensemble models; iii) event-driven workflow execution based on event-based and distributed workflow management mechanisms; and iv) development of autonomous model interaction management capabilities providing closed-loop control of collection resources driven by competing targeted observation needs. We present results from initial work on collaborative science processing involving distributed services (the COSEC framework) that is being extended to create VSICS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzoglio, Gabriele
The Fermilab Grid and Cloud Computing Department and the KISTI Global Science experimental Data hub Center are working on a multi-year Collaborative Research and Development Agreement. Building on the knowledge developed in the first year on how to provision and manage a federation of virtual machines through cloud management systems, in this second year we expanded the work on provisioning and federation, increasing both the scale and the diversity of solutions, and we started to build on-demand services on the established fabric, introducing the Platform as a Service paradigm to assist with the execution of scientific workflows. We have enabled the scientific workflows of stakeholders to run on multiple cloud resources at the scale of 1,000 concurrent machines. The demonstrations have been in the areas of (a) Virtual Infrastructure Automation and Provisioning, (b) Interoperability and Federation of Cloud Resources, and (c) On-demand Services for Scientific Workflows.
Development of CXCR4 modulators by virtual HTS of a novel amide-sulfamide compound library.
Bai, Renren; Shi, Qi; Liang, Zhongxing; Yoon, Younghyoun; Han, Yiran; Feng, Amber; Liu, Shuangping; Oum, Yoonhyeun; Yun, C Chris; Shim, Hyunsuk
2017-01-27
CXCR4 plays a crucial role in the recruitment of inflammatory cells to inflammation sites at the beginning of the disease process. Modulating CXCR4 functions presents a new avenue for anti-inflammatory strategies. However, long-term use of CXCR4 antagonists carries potentially serious side effects due to their stem cell mobilizing property. We have been developing partial CXCR4 antagonists without this property. A new computer-aided drug design program, the FRESH workflow, was used for anti-CXCR4 lead compound discovery and optimization; it couples compound library building and CXCR4 docking screens in one campaign. Based on the designed parent framework, 30 prioritized amide-sulfamide structures were obtained after systematic filtering and docking screening. Twelve compounds were prepared from the top-30 list. Most synthesized compounds exhibited good to excellent binding affinity to CXCR4. Compounds Ig and Im demonstrated notable in vivo suppressive activity against xylene-induced mouse ear inflammation (56% and 54% inhibition, respectively). Western blot analyses revealed that Ig significantly blocked CXCR4/CXCL12-mediated phosphorylation of Akt. Moreover, Ig attenuated the amount of TNF-α secreted by pathogenic E. coli-infected macrophages. More importantly, Ig had no observable cytotoxicity. Our results demonstrate that the FRESH virtual high-throughput screening program, applied to a targeted chemical class, can successfully find potent lead compounds, and that the amide-sulfamide pharmacophore is a novel and effective framework for blocking CXCR4 function. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Gilmour, Matthew W.; DeGagne, Pat; Nichol, Kim; Karlowsky, James A.
2014-01-01
An efficient workflow to screen for and confirm the presence of carbapenemase-producing Gram-negative bacilli was developed by evaluating five chromogenic screening agar media and two confirmatory assays, the Rapid Carb screen test (Rosco Diagnostica A/S, Taastrup, Denmark) and the modified Hodge test. A panel of 150 isolates was used, including 49 carbapenemase-producing isolates representing a variety of β-lactamase enzyme classes. An evaluation of analytical performance, assay cost, and turnaround time indicated that the preferred workflow (screening test followed by confirmatory testing) was the chromID Carba agar medium (bioMérieux, Marcy l'Étoile, France), followed by the Rapid Carb screen test, yielding a combined sensitivity of 89.8% and a specificity of 100%. As an optional component of the workflow, a determination of carbapenemase gene class via molecular means could be performed subsequent to confirmatory testing. PMID:25355764
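The screen-then-confirm arithmetic behind such combined figures can be sketched under the standard serial-testing assumption: a sample is reported positive only if both tests are positive, and the tests are conditionally independent. The numbers below are hypothetical, chosen only to show how a combined sensitivity near 89.8% with 100% specificity can arise; they are not the paper's measured values.

```python
def serial_performance(screen_sens, screen_spec, confirm_sens, confirm_spec):
    """Combined performance of a screen-then-confirm workflow in which
    only screen positives proceed to confirmation and a sample is
    reported positive only if both tests are positive. Assumes
    conditional independence of the two tests (illustrative sketch,
    not the paper's calculation)."""
    combined_sens = screen_sens * confirm_sens            # both must detect
    combined_spec = 1 - (1 - screen_spec) * (1 - confirm_spec)
    return combined_sens, combined_spec

# Hypothetical inputs: a sensitive screening agar followed by a fully
# specific confirmatory test gives ~0.898 sensitivity and 1.0 specificity.
sens, spec = serial_performance(0.95, 0.90, 0.945, 1.00)
```

A perfectly specific confirmatory test drives the combined specificity to 1.0 regardless of the screen's false-positive rate, which is why the confirmatory step is worth its extra turnaround time.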
NASA Astrophysics Data System (ADS)
Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin
2006-02-01
A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments versus objects are reviewed. Samples of both types of virtual authoring, the actual CAVE environment and a sculpture, are shown. A series of software tools is incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. Traditional methods of controlling navigation through virtual environments include gloves, HUDs and 3D mouse devices. By integrating a wireless network that includes both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the non-graphical input control device can be eliminated. Wireless devices can then be added, including PDAs, smart phones, Tablet PCs, portable gaming consoles, and Pocket PCs.
Approaches to virtual screening and screening library selection.
Wildman, Scott A
2013-01-01
The ease of access to virtual screening (VS) software in recent years has resulted in a large increase in literature reports. Over 300 publications in the last year report the use of virtual screening techniques to identify new chemical matter or present the development of new virtual screening techniques. The increased use is accompanied by a corresponding increase in misuse and misinterpretation of virtual screening results. This review aims to identify many of the common difficulties associated with virtual screening and allow researchers to better assess the reliability of their virtual screening effort.
Lin, Wei-Shao; Harris, Bryan T; Phasuk, Kamolphob; Llop, Daniel R; Morton, Dean
2018-02-01
This clinical report describes a digital workflow using the virtual smile design approach augmented with a static 3-dimensional (3D) virtual patient with photorealistic appearance to restore maxillary central incisors by using computer-aided design and computer-aided manufacturing (CAD-CAM) monolithic lithium disilicate ceramic veneers. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
The complete digital workflow in fixed prosthodontics: a systematic review.
Joda, Tim; Zarone, Fernando; Ferrari, Marco
2017-09-19
The continuous development in dental processing ensures new opportunities in the field of fixed prosthodontics in a complete virtual environment without any physical model situations. The aim was to compare fully digitalized workflows to conventional and/or mixed analog-digital workflows for the treatment with tooth-borne or implant-supported fixed reconstructions. A PICO strategy was executed using an electronic (MEDLINE, EMBASE, Google Scholar) plus manual search up to 2016-09-16 focusing on RCTs investigating complete digital workflows in fixed prosthodontics with regard to economics or esthetics or patient-centered outcomes with or without follow-up or survival/success rate analysis as well as complication assessment of at least 1 year under function. The search strategy was assembled from MeSH terms and unspecific free-text words: {(("Dental Prosthesis" [MeSH]) OR ("Crowns" [MeSH]) OR ("Dental Prosthesis, Implant-Supported" [MeSH])) OR ((crown) OR (fixed dental prosthesis) OR (fixed reconstruction) OR (dental bridge) OR (implant crown) OR (implant prosthesis) OR (implant restoration) OR (implant reconstruction))} AND {("Computer-Aided Design" [MeSH]) OR ((digital workflow) OR (digital technology) OR (computerized dentistry) OR (intraoral scan) OR (digital impression) OR (scanbody) OR (virtual design) OR (digital design) OR (cad/cam) OR (rapid prototyping) OR (monolithic) OR (full-contour))} AND {("Dental Technology" [MeSH]) OR ((conventional workflow) OR (lost-wax-technique) OR (porcelain-fused-to-metal) OR (PFM) OR (implant impression) OR (hand-layering) OR (veneering) OR (framework))} AND {(("Study, Feasibility" [MeSH]) OR ("Survival" [MeSH]) OR ("Success" [MeSH]) OR ("Economics" [MeSH]) OR ("Costs, Cost Analysis" [MeSH]) OR ("Esthetics, Dental" [MeSH]) OR ("Patient Satisfaction" [MeSH])) OR ((feasibility) OR (efficiency) OR (patient-centered outcome))}.
Assessment of risk of bias in the selected studies was done at the trial level, covering random sequence generation, allocation concealment, blinding, completeness of outcome data, selective reporting, and other bias, using the Cochrane Collaboration tool. An overall judgment of risk of bias was assigned if one or more key domains had a high or unclear risk of bias. An official registration of the systematic review was not performed. The systematic search identified 67 titles; 32 abstracts thereof were screened, and subsequently three full texts were included for data extraction. The analysed RCTs were heterogeneous and lacked follow-up. One study demonstrated the feasibility of fully digitally produced dental crowns; however, marginal precision was lower for lithium disilicate (LS2) restorations (113.8 μm) compared to conventional metal-ceramic (92.4 μm) and zirconium dioxide (ZrO2) crowns (68.5 μm) (p < 0.05). Another study showed that leucite-reinforced glass ceramic crowns were esthetically favoured by the patients (8/2 crowns) and clinicians (7/3 crowns) (p < 0.05). The third study investigated implant crowns: the complete digital workflow was more than twice as fast (75.3 min) as the mixed analog-digital workflow (156.6 min) (p < 0.05). No RCTs could be found investigating multi-unit fixed dental prostheses (FDPs). The number of RCTs testing complete digital workflows in fixed prosthodontics is low. Scientifically proven recommendations for clinical routine cannot be given at this time. Research with high-quality trials seems to be slower than the industrial progress of available digital applications. Future research with well-designed RCTs, including follow-up observation, is urgently needed in the field of complete digital processing.
Using Docker Compose for the Simple Deployment of an Integrated Drug Target Screening Platform.
List, Markus
2017-06-10
Docker virtualization allows software tools to be executed in an isolated and controlled environment referred to as a container. In Docker containers, dependencies are provided exactly as intended by the developer; consequently, containers simplify the distribution of scientific software and foster reproducible research. The Docker paradigm is that each container encapsulates one particular software tool. However, to analyze complex biomedical data sets, it is often necessary to combine several software tools into elaborate workflows. To address this challenge, several Docker containers need to be instantiated and properly integrated, which complicates the software deployment process unnecessarily. Here, we demonstrate how an extension to Docker, Docker Compose, can be used to mitigate these problems by providing a unified setup routine that deploys several tools in an integrated fashion. We demonstrate the power of this approach with the example of a Docker Compose setup for a drug target screening platform consisting of five integrated web applications and shared infrastructure, deployable in just two lines of code.
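For readers unfamiliar with Compose, the following is a minimal sketch of what such a setup file might look like; the service names, images, and ports are illustrative assumptions, not the actual platform's configuration.

```yaml
# Hypothetical docker-compose.yml; service names, images, and ports
# are illustrative, not those of the actual screening platform.
version: "3"
services:
  webapp:
    image: example/screening-webapp:latest
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:13
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```

With such a file in place, the entire multi-container stack is deployed with two commands: `docker compose pull` followed by `docker compose up -d`.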
Solaberrieta, Eneko; Garmendia, Asier; Minguez, Rikardo; Brizuela, Aritza; Pradies, Guillermo
2015-12-01
This article describes a virtual technique for transferring the location of a digitized cast from the patient to a virtual articulator (virtual facebow transfer). Using a virtual procedure, the maxillary digital cast is transferred to a virtual articulator by means of reverse engineering devices. The devices necessary to carry out this protocol, an intraoral scanner, a digital camera, and specific software, are available in many contemporary practices. The results prove the viability of integrating the different tools and software and of fully incorporating this procedure into a dental digital workflow. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Virtual Sensors in a Web 2.0 Digital Watershed
NASA Astrophysics Data System (ADS)
Liu, Y.; Hill, D. J.; Marini, L.; Kooper, R.; Rodriguez, A.; Myers, J. D.
2008-12-01
The lack of rainfall data in many watersheds is one of the major barriers to modeling and studying many environmental and hydrological processes and to supporting decision making; there are simply not enough rain gages on the ground. To overcome this data scarcity, a Web 2.0 digital watershed has been developed at NCSA (National Center for Supercomputing Applications), where users can point and click on a web-based Google Maps interface and create new precipitation virtual sensors at any location within the coverage region of a NEXRAD station. A set of scientific workflows is implemented to perform spatial, temporal and thematic transformations on the near-real-time NEXRAD Level II data. These workflows can be triggered by users' actions and generate either rainfall-rate or rainfall-accumulation streaming data at a user-specified time interval. We will discuss the underlying components of this digital watershed, which consist of a semantic content management middleware, a semantically enhanced streaming data toolkit, virtual sensor management functionality, and a RESTful (REpresentational State Transfer) web service that can trigger workflow execution. Such a loosely coupled architecture presents a generic framework for constructing a Web 2.0 style digital watershed. An implementation of this architecture in the Upper Illinois River Basin will be presented. We will also discuss the implications of the virtual sensor concept for the broader environmental observatory community and how this concept will help us move towards a participatory digital watershed.
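The temporal transformation such a precipitation virtual sensor performs, rolling a stream of radar-derived rainfall rates up into accumulations at a user-specified interval, can be sketched as follows. This is a simplified illustration; the NCSA workflows also perform spatial and thematic transformations not shown here, and the function name and units are our assumptions.

```python
def rainfall_accumulation(rates_mm_per_h, sample_min, window_min):
    """Roll a stream of rainfall-rate samples (mm/h, one sample every
    `sample_min` minutes) up into accumulation totals (mm) per
    `window_min` window -- the kind of temporal transformation a
    virtual precipitation sensor applies to NEXRAD-derived rate data."""
    per_window = window_min // sample_min
    totals = []
    for i in range(0, len(rates_mm_per_h), per_window):
        chunk = rates_mm_per_h[i:i + per_window]
        # each sample contributes rate * (sample interval in hours)
        totals.append(sum(r * sample_min / 60.0 for r in chunk))
    return totals

# Six 5-minute rate samples rolled into two 15-minute accumulations:
print(rainfall_accumulation([12, 12, 12, 6, 6, 6], 5, 15))  # [3.0, 1.5]
```

In the described architecture, a workflow like this would be triggered through the RESTful service each time a user creates or queries a virtual sensor.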
NASA Astrophysics Data System (ADS)
Cox, S. J.; Wyborn, L. A.; Fraser, R.; Rankine, T.; Woodcock, R.; Vote, J.; Evans, B.
2012-12-01
The Virtual Geophysics Laboratory (VGL) is a web portal that provides geoscientists with an integrated online environment that seamlessly accesses geophysical and geoscience data services from the AuScope national geoscience information infrastructure, loosely couples these data to a variety of geoscience software tools, and provides large-scale processing facilities via cloud computing. VGL is a collaboration between CSIRO, Geoscience Australia, the National Computational Infrastructure, Monash University, the Australian National University and the University of Queensland. VGL provides a distributed system whereby a user can enter an online virtual laboratory to seamlessly connect to OGC web services for geoscience data. The data are supplied in open, standards-based formats using international standards like GeoSciML. A VGL user uses a web mapping interface to discover the data sources and to define a subset using spatial and attribute filters. Once the data are selected, the user is not required to download them: VGL collates the service query information for later in the processing workflow, where it is staged directly to the computing facilities. The combination of deferred data download and access to cloud computing enables VGL users to access their data at higher resolutions and to undertake larger-scale inversions and more complex models and simulations than their own local computing facilities might allow. Inside the Virtual Geophysics Laboratory, the user has access to a library of existing models, complete with exemplar workflows for specific scientific problems based on those models. For example, the user can load a geological model published by Geoscience Australia, apply a basic deformation workflow provided by a CSIRO scientist, and have it run in a scientific code from Monash. Finally, the user can publish these results to share with a colleague or cite in a paper.
This opens new opportunities for access and collaboration, as all the resources (models, code, data, processing) are shared in the one virtual laboratory. VGL provides end users with an intuitive, user-centered interface that leverages cloud storage and cloud and cluster processing from both research communities and commercial suppliers (e.g. Amazon). As the underlying data and information services are agnostic of the scientific domain, they can support many other data types. This fundamental characteristic results in a highly reusable virtual laboratory infrastructure that could also be used, for example, for natural hazards, satellite processing, soil geochemistry, climate modeling, and agricultural crop modeling.
DecoyFinder: an easy-to-use python GUI application for building target-specific decoy sets.
Cereto-Massagué, Adrià; Guasch, Laura; Valls, Cristina; Mulero, Miquel; Pujadas, Gerard; Garcia-Vallvé, Santiago
2012-06-15
Decoys are molecules that are presumed to be inactive against a target (i.e. they will not likely bind to the target) and are used to validate the performance of molecular docking or of a virtual screening workflow. The Directory of Useful Decoys database (http://dud.docking.org/) provides a free directory of decoys for use in virtual screening, though it only contains a limited set of decoys for 40 targets. To overcome this limitation, we have developed an application called DecoyFinder that selects, for a given collection of active ligands of a target, a set of decoys from a database of compounds. Decoys are selected if they are similar to the active ligands according to five physical descriptors (molecular weight, number of rotational bonds, total hydrogen bond donors, total hydrogen bond acceptors and the octanol-water partition coefficient) without being chemically similar to any of the active ligands used as input (according to the Tanimoto coefficient between MACCS fingerprints). To the best of our knowledge, DecoyFinder is the first application designed to build target-specific decoy sets. A complete description of the software is included on the application home page. A validation of DecoyFinder on 10 DUD targets is provided as Supplementary Table S1. DecoyFinder is freely available at http://URVnutrigenomica-CTNS.github.com/DecoyFinder.
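The selection criterion described above, descriptor similarity without fingerprint similarity, can be sketched in plain Python. The descriptor tolerance, Tanimoto cutoff, and toy data below are illustrative assumptions rather than DecoyFinder's actual defaults; a real run would compute the five descriptors and MACCS fingerprints with a cheminformatics toolkit.

```python
# Illustrative sketch of DecoyFinder-style decoy selection.

def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprints held as bit sets."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def is_decoy(candidate, actives, desc_tol=0.1, max_tanimoto=0.75):
    """Decoy test: all five descriptors within a relative tolerance of
    at least one active, yet fingerprint-dissimilar to every active."""
    descriptors_match = any(
        all(abs(c - a) <= desc_tol * max(abs(a), 1e-9)
            for c, a in zip(candidate["desc"], act["desc"]))
        for act in actives
    )
    dissimilar = all(
        tanimoto(candidate["fp"], act["fp"]) < max_tanimoto
        for act in actives
    )
    return descriptors_match and dissimilar

# Toy data: desc = (MW, rotatable bonds, HB donors, HB acceptors, logP)
actives = [{"desc": (300.0, 4, 2, 5, 2.1), "fp": {1, 2, 3, 4, 5}}]
candidates = [
    {"desc": (310.0, 4, 2, 5, 2.0), "fp": {10, 11, 12}},        # good decoy
    {"desc": (305.0, 4, 2, 5, 2.2), "fp": {1, 2, 3, 4, 5, 6}},  # too similar
]
decoys = [c for c in candidates if is_decoy(c, actives)]
```

The second candidate is rejected despite matching the physical descriptors, because its fingerprint overlaps too strongly with the active's.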
NASA Astrophysics Data System (ADS)
Evans, J. D.; Hao, W.; Chettri, S.
2013-12-01
The cloud is proving to be a uniquely promising platform for scientific computing. Our experience with processing satellite data using Amazon Web Services highlights several opportunities for enhanced performance, flexibility, and cost effectiveness in the cloud relative to traditional computing. For example:
- Direct readout from a polar-orbiting satellite such as the Suomi National Polar-Orbiting Partnership (S-NPP) requires bursts of processing a few times a day, separated by quiet periods when the satellite is out of receiving range. In the cloud, by starting and stopping virtual machines in minutes, we can marshal significant computing resources quickly when needed, but not pay for them when not needed. To take advantage of this capability, we are automating a data-driven approach to the management of cloud computing resources, in which new data availability triggers the creation of new virtual machines (of variable size and processing power) which last only until the processing workflow is complete.
- 'Spot instances' are virtual machines that run as long as one's asking price is higher than the provider's variable spot price. Spot instances can greatly reduce the cost of computing for software systems that are engineered to withstand unpredictable interruptions in service (as occurs when the spot price exceeds the asking price). We are implementing an approach to workflow management that allows data processing workflows to resume with minimal delays after temporary spot price spikes. This will allow systems to take full advantage of variably priced 'utility computing.'
- Thanks to virtual machine images, we can easily launch multiple, identical machines differentiated only by 'user data' containing individualized instructions (e.g., to fetch particular datasets or to perform certain workflows or algorithms). This is particularly useful when (as is the case with S-NPP data) we need to launch many very similar machines to process an unpredictable number of data files concurrently. Our experience shows the viability and flexibility of this approach to workflow management for scientific data processing.
- Finally, cloud computing is a promising platform for distributed volunteer ('interstitial') computing, via mechanisms such as the Berkeley Open Infrastructure for Network Computing (BOINC), popularized by the SETI@Home project and others such as ClimatePrediction.net and NASA's Climate@Home. Interstitial computing faces significant challenges as commodity computing shifts from (always-on) desktop computers towards smartphones and tablets (untethered and running on scarce battery power), but cloud computing offers significant slack capacity. This capacity includes virtual machines with unused RAM or underused CPUs; virtual storage volumes allocated (and paid for) but not full; and virtual machines that are paid up for the current hour but whose work is complete. We are devising ways to facilitate the reuse of these resources (i.e., cloud-based interstitial computing) for satellite data processing and related analyses. We will present our findings and research directions on these and related topics.
NASA Astrophysics Data System (ADS)
Greenwood, Jeremy R.; Calkins, David; Sullivan, Arron P.; Shelley, John C.
2010-06-01
Generating the appropriate protonation states of drug-like molecules in solution is important for success in both ligand- and structure-based virtual screening. Screening collections of millions of compounds requires a method for determining tautomers and their energies that is sufficiently rapid, accurate, and comprehensive. To maximise enrichment, the lowest-energy tautomers must be determined from heterogeneous input, without over-enumerating unfavourable states. While computationally expensive, the density functional theory (DFT) method M06-2X/aug-cc-pVTZ(-f) [PB-SCRF] provides accurate energies for enumerated model tautomeric systems. The empirical Hammett-Taft methodology can very rapidly extrapolate substituent effects from model systems to drug-like molecules via the relationship between pKT and pKa. Combining the two complementary approaches transforms the tautomer problem from a scientific challenge into one of engineering scale-up, and avoids issues that arise from the very limited number of measured pKT values, especially for the complicated heterocycles often favoured by medicinal chemists for their novelty and versatility. Several hundred pre-calculated tautomer energies and substituent pKa effects are tabulated in databases for use in structural adjustment by the program Epik, which treats tautomers as a subset of the larger problem of protonation states in aqueous ensembles and their energy penalties. Accuracy and coverage are continually improved and expanded by parameterizing new systems of interest using DFT and experimental data. Recommendations are made for how best to incorporate tautomers in molecular design and virtual screening workflows.
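The link between a tautomer's relative energy and its population (and hence its energy penalty in a state ensemble) is Boltzmann weighting; in matching units, pKT = ΔG/(RT ln 10), so about 1.36 kcal/mol at 298 K corresponds to one pK unit. The small calculation below illustrates that relationship; it is our sketch, not Epik's implementation.

```python
import math

R_KCAL = 0.0019872  # gas constant in kcal/(mol*K)

def tautomer_populations(rel_energies_kcal, temp_k=298.15):
    """Boltzmann populations from relative tautomer energies (kcal/mol).
    A state's energy penalty is RT*ln(1/population); equivalently
    pKT = dG/(RT*ln 10). Illustrative sketch only."""
    rt = R_KCAL * temp_k
    weights = [math.exp(-e / rt) for e in rel_energies_kcal]
    total = sum(weights)
    return [w / total for w in weights]

# Two tautomers separated by ~1.36 kcal/mol at 298 K differ by about
# one pK unit, i.e. roughly a 10:1 population ratio.
pops = tautomer_populations([0.0, 1.36])
```

Over-enumeration is costly precisely because each high-energy tautomer added to the ensemble carries a vanishing population but a real screening cost.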
Spjuth, Ola; Karlsson, Andreas; Clements, Mark; Humphreys, Keith; Ivansson, Emma; Dowling, Jim; Eklund, Martin; Jauhiainen, Alexandra; Czene, Kamila; Grönberg, Henrik; Sparén, Pär; Wiklund, Fredrik; Cheddad, Abbas; Pálsdóttir, Þorgerður; Rantalainen, Mattias; Abrahamsson, Linda; Laure, Erwin; Litton, Jan-Eric; Palmgren, Juni
2017-09-01
We provide an e-Science perspective on the workflow from risk factor discovery and classification of disease to evaluation of personalized intervention programs. As case studies, we use personalized prostate and breast cancer screenings. We describe an e-Science initiative in Sweden, e-Science for Cancer Prevention and Control (eCPC), which supports biomarker discovery and offers decision support for personalized intervention strategies. The generic eCPC contribution is a workflow with 4 nodes applied iteratively, and the concept of e-Science signifies systematic use of tools from the mathematical, statistical, data, and computer sciences. The eCPC workflow is illustrated through 2 case studies. For prostate cancer, an in-house personalized screening tool, the Stockholm-3 model (S3M), is presented as an alternative to prostate-specific antigen testing alone. S3M is evaluated in a trial setting and plans for rollout in the population are discussed. For breast cancer, new biomarkers based on breast density and molecular profiles are developed and the US multicenter Women Informed to Screen Depending on Measures (WISDOM) trial is referred to for evaluation. While current eCPC data management uses a traditional data warehouse model, we discuss eCPC-developed features of a coherent data integration platform. E-Science tools are a key part of an evidence-based process for personalized medicine. This paper provides a structured workflow from data and models to evaluation of new personalized intervention strategies. The importance of multidisciplinary collaboration is emphasized. Importantly, the generic concepts of the suggested eCPC workflow are transferrable to other disease domains, although each disease will require tailored solutions. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association.
A Virtual Environment for Process Management. A Step by Step Implementation
ERIC Educational Resources Information Center
Mayer, Sergio Valenzuela
2003-01-01
This paper presents a virtual organizational environment built by integrating three computer programs: a manufacturing simulation package, business process automation (workflow) software, and business intelligence (Balanced Scorecard) software. It was created as a supporting tool for teaching IE; its purpose is to give…
Cloud services for the Fermilab scientific stakeholders
Timm, S.; Garzoglio, G.; Mhashilkar, P.; ...
2015-12-23
As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we also were able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. As a result, we present in detail the technological improvements that were used to make this work a reality.
Scheduling Multilevel Deadline-Constrained Scientific Workflows on Clouds Based on Cost Optimization
Malawski, Maciej; Figiela, Kamil; Bubak, Marian; ...
2015-01-01
This paper presents a cost optimization model for scheduling scientific workflows on IaaS clouds such as Amazon EC2 or RackSpace. We assume multiple IaaS clouds with heterogeneous virtual machine instances, with a limited number of instances per cloud and hourly billing. Input and output data are stored on a cloud object store such as Amazon S3. Applications are scientific workflows modeled as DAGs as in the Pegasus Workflow Management System. We assume that tasks in the workflows are grouped into levels of identical tasks. Our model is specified using mathematical programming languages (AMPL and CMPL) and allows us to minimize the cost of workflow execution under deadline constraints. We present results obtained using our model and the benchmark workflows representing real scientific applications in a variety of domains. The data used for evaluation come from the synthetic workflows and from general purpose cloud benchmarks, as well as from the data measured in our own experiments with Montage, an astronomical application, executed on Amazon EC2 cloud. We indicate how this model can be used for scenarios that require resource planning for scientific workflows and their ensembles.
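The per-level cost model in the abstract above can be sketched in a few lines. The following Python is an illustrative greedy simplification, not the authors' AMPL/CMPL formulation; the instance names, prices, speedups, and task counts are invented for the example.

```python
import math

def cheapest_assignment(levels, instances, deadline_hours):
    """For each workflow level, pick the cheapest instance type whose
    runtime fits an equal share of the deadline (hourly billing)."""
    share = deadline_hours / len(levels)
    plan, total = [], 0.0
    for n_tasks, task_hours in levels:
        feasible = []
        for name, hourly_price, speedup in instances:
            runtime = n_tasks * task_hours / speedup  # tasks run serially
            if runtime <= share:
                # hourly billing: pay for ceil(runtime) full hours
                feasible.append((hourly_price * math.ceil(runtime), name))
        if not feasible:
            raise ValueError("deadline infeasible for this level")
        cost, name = min(feasible)
        plan.append(name)
        total += cost
    return plan, total

# Two levels of identical tasks: (task count, hours per task on the baseline).
levels = [(10, 0.5), (4, 1.0)]
# (name, $/hour, relative speed); both instance specs are made up.
instances = [("small", 0.10, 1.0), ("large", 0.40, 4.0)]
plan, total = cheapest_assignment(levels, instances, deadline_hours=4.0)
print(plan, round(total, 2))  # the faster type is needed at both levels
```

A real model optimizes all levels jointly (e.g. as an integer program) rather than splitting the deadline evenly, which is why the paper uses mathematical programming languages.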
2011-01-01
Background Based on barriers to the use of computerized clinical decision support (CDS) learned in an earlier field study, we prototyped design enhancements to the Veterans Health Administration's (VHA's) colorectal cancer (CRC) screening clinical reminder to compare against the VHA's current CRC reminder. Methods In a controlled simulation experiment, 12 primary care providers (PCPs) used prototypes of the current and redesigned CRC screening reminder in a within-subject comparison. Quantitative measurements were based on a usability survey, workload assessment instrument, and workflow integration survey. We also collected qualitative data on both designs. Results Design enhancements to the VHA's existing CRC screening clinical reminder positively impacted aspects of usability and workflow integration but not workload. The qualitative analysis revealed broad support across participants for the design enhancements with specific suggestions for improving the reminder further. Conclusions This study demonstrates the value of a human-computer interaction evaluation in informing the redesign of information tools to foster uptake, integration into workflow, and use in clinical practice. PMID:22126324
A web-based platform for virtual screening.
Watson, Paul; Verdonk, Marcel; Hartshorn, Michael J
2003-09-01
A fully integrated, web-based virtual screening platform has been developed to allow rapid virtual screening of large numbers of compounds. ORACLE is used to store information at all stages of the process. The system includes ATLAS, a large database of historical compounds from high-throughput screening (HTS) and chemical suppliers, containing over 3.1 million unique compounds with their associated physicochemical properties (ClogP, MW, etc.). The database can be screened using a web-based interface to produce compound subsets for virtual screening or virtual library (VL) enumeration. In order to carry out the latter task within ORACLE, a reaction data cartridge has been developed. Virtual libraries can be enumerated rapidly using the web-based interface to the cartridge. The compound subsets can be seamlessly submitted for virtual screening experiments, and the results can be viewed via another web-based interface allowing ad hoc querying of the virtual screening data stored in ORACLE.
Virtual Screening with AutoDock: Theory and Practice
Cosconati, Sandro; Forli, Stefano; Perryman, Alex L.; Harris, Rodney; Goodsell, David S.; Olson, Arthur J.
2011-01-01
Importance to the field Virtual screening is a computer-based technique for identifying promising compounds to bind to a target molecule of known structure. Given the rapidly increasing number of protein and nucleic acid structures, virtual screening continues to grow as an effective method for the discovery of new inhibitors and drug molecules. Areas covered in this review We describe virtual screening methods that are available in the AutoDock suite of programs, and several of our successes in using AutoDock virtual screening in pharmaceutical lead discovery. What the reader will gain A general overview of the challenges of virtual screening is presented, along with the tools available in the AutoDock suite of programs for addressing these challenges. Take home message Virtual screening is an effective tool for the discovery of compounds for use as leads in drug discovery, and the free, open source program AutoDock is an effective tool for virtual screening. PMID:21532931
Hierarchical virtual screening approaches in small molecule drug discovery.
Kumar, Ashutosh; Zhang, Kam Y J
2015-01-01
Virtual screening has played a significant role in the discovery of small molecule inhibitors of therapeutic targets over the last two decades. Various ligand- and structure-based virtual screening approaches are employed to identify small molecule ligands for proteins of interest. These approaches are often combined in either hierarchical or parallel fashion to take advantage of the strengths and avoid the limitations of individual methods. The hierarchical combination of ligand- and structure-based virtual screening approaches has achieved noteworthy success in numerous drug discovery campaigns. In hierarchical virtual screening, several filters using ligand- and structure-based approaches are applied sequentially to reduce a large screening library to a number small enough for experimental testing. In this review, we focus on different hierarchical virtual screening strategies and their application in the discovery of small molecule modulators of important drug targets. Several virtual screening studies are discussed to demonstrate the successful application of hierarchical virtual screening in small molecule drug discovery. Copyright © 2014 Elsevier Inc. All rights reserved.
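The sequential-filter funnel described above can be sketched as follows. The compounds, filter predicates, property names, and thresholds are hypothetical placeholders, not taken from any particular study; the point is only the attrition pattern of a hierarchical screen.

```python
def hierarchical_screen(library, filters):
    """Apply (name, predicate) filters in sequence, logging attrition."""
    survivors = list(library)
    for name, keep in filters:
        survivors = [cpd for cpd in survivors if keep(cpd)]
        print(f"{name}: {len(survivors)} compounds remain")
    return survivors

# Toy compounds: (id, mol_weight, similarity_to_known_active, docking_score)
library = [
    ("c1", 320, 0.82, -9.1),
    ("c2", 610, 0.90, -9.5),   # fails the property filter
    ("c3", 450, 0.35, -8.0),   # fails the ligand-based filter
    ("c4", 380, 0.75, -5.2),   # fails the docking filter
]
# Cheap filters first, the expensive docking step last.
funnel = [
    ("property filter (MW <= 500)",      lambda c: c[1] <= 500),
    ("ligand-based filter (sim >= 0.6)", lambda c: c[2] >= 0.6),
    ("docking filter (score <= -7.0)",   lambda c: c[3] <= -7.0),
]
hits = hierarchical_screen(library, funnel)
print([c[0] for c in hits])  # only "c1" survives all three stages
```

Ordering the filters from cheapest to most expensive is what makes the hierarchy pay off: the costly structure-based stage only ever sees the small set of survivors.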
myExperiment: a repository and social network for the sharing of bioinformatics workflows
Goble, Carole A.; Bhagat, Jiten; Aleksejevs, Sergejs; Cruickshank, Don; Michaelides, Danius; Newman, David; Borkum, Mark; Bechhofer, Sean; Roos, Marco; Li, Peter; De Roure, David
2010-01-01
myExperiment (http://www.myexperiment.org) is an online research environment that supports the social sharing of bioinformatics workflows. These workflows are procedures consisting of a series of computational tasks using web services, which may be performed on data from its retrieval, integration and analysis, to the visualization of the results. As a public repository of workflows, myExperiment allows anybody to discover those that are relevant to their research, which can then be reused and repurposed to their specific requirements. Conversely, developers can submit their workflows to myExperiment and enable them to be shared in a secure manner. Since its release in 2007, myExperiment has grown to over 3500 registered users and more than 1000 workflows. The social aspect to the sharing of these workflows is facilitated by registered users forming virtual communities bound together by a common interest or research project. Contributors of workflows can build their reputation within these communities by receiving feedback and credit from individuals who reuse their work. Further documentation about myExperiment including its REST web service is available from http://wiki.myexperiment.org. Feedback and requests for support can be sent to bugs@myexperiment.org. PMID:20501605
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...
2017-09-29
Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.
Carpenter, Kristy A; Huang, Xudong
2018-06-07
Virtual Screening (VS) has emerged as an important tool in the drug development process, as it conducts efficient in silico searches over millions of compounds, ultimately increasing yields of potential drug leads. As a subset of Artificial Intelligence (AI), Machine Learning (ML) is a powerful way of conducting VS for drug leads. ML for VS generally involves assembling a filtered training set of compounds comprising known actives and inactives. After training the model, it is validated and, if sufficiently accurate, used on previously unseen databases to screen for novel compounds with desired drug target binding activity. This study aims to review ML-based methods used for VS and their applications to Alzheimer's disease (AD) drug discovery. To update the current knowledge on ML for VS, we review thorough backgrounds, explanations, and VS applications of the following ML techniques: Naïve Bayes (NB), k-Nearest Neighbors (kNN), Support Vector Machines (SVM), Random Forests (RF), and Artificial Neural Networks (ANN). All techniques have found success in VS, but the future of VS is likely to lean more heavily toward the use of neural networks, and more specifically Convolutional Neural Networks (CNN), a subset of ANN that utilize convolution. We additionally conceptualize a workflow for conducting ML-based VS for potential therapeutics for AD, a complex neurodegenerative disease with no known cure or prevention. This serves both as an example of how to apply the concepts introduced earlier in the review and as a potential workflow for future implementation. Different ML techniques are powerful tools for VS, albeit each with its own advantages and disadvantages. ML-based VS can be applied to AD drug development. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
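The generic train-validate-screen workflow summarized above can be illustrated with a toy Naïve Bayes classifier over binary substructure fingerprints. The fingerprints and labels below are invented; a real campaign would compute fingerprints with a cheminformatics toolkit and use a library classifier rather than this hand-rolled one.

```python
import math

def train_nb(fingerprints, labels, n_bits, alpha=1.0):
    """Fit per-bit Bernoulli likelihoods with Laplace smoothing."""
    model = {}
    for c in (0, 1):
        rows = [fp for fp, y in zip(fingerprints, labels) if y == c]
        prior = len(rows) / len(labels)
        # P(bit_i = 1 | class c), smoothed so no probability is 0 or 1
        p_on = [(sum(fp[i] for fp in rows) + alpha) / (len(rows) + 2 * alpha)
                for i in range(n_bits)]
        model[c] = (prior, p_on)
    return model

def p_active(model, fp):
    """Posterior probability that a fingerprint belongs to the active class."""
    logp = {}
    for c, (prior, p_on) in model.items():
        logp[c] = math.log(prior) + sum(
            math.log(p if bit else 1.0 - p) for bit, p in zip(fp, p_on))
    m = max(logp.values())
    z = sum(math.exp(v - m) for v in logp.values())
    return math.exp(logp[1] - m) / z

# Toy training set: actives tend to set bits 0-2, inactives bits 3-5.
actives = [[1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 1, 0], [1, 0, 1, 0, 0, 0]]
inactives = [[0, 0, 0, 1, 1, 1], [0, 1, 0, 1, 0, 1], [0, 0, 0, 1, 1, 0]]
model = train_nb(actives + inactives, [1] * 3 + [0] * 3, n_bits=6)
# "Screen" an unseen, active-like compound
print(p_active(model, [1, 1, 1, 0, 0, 0]) > 0.5)  # True
```

The same assemble/train/score shape carries over to the kNN, SVM, RF, and neural-network variants the review covers; only the model in the middle changes.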
Xia, Jie; Hsieh, Jui-Hua; Hu, Huabin; Wu, Song; Wang, Xiang Simon
2017-06-26
Structure-based virtual screening (SBVS) has become an indispensable technique for hit identification at the early stage of drug discovery. However, the accuracy of current scoring functions is not high enough to confer success to every target and thus remains to be improved. Previously, we had developed binary pose filters (PFs) using knowledge derived from the protein-ligand interface of a single X-ray structure of a specific target. This novel approach had been validated as an effective way to improve ligand enrichment. Building on that work, in the present study we attempted to incorporate knowledge collected from the diverse protein-ligand interfaces of multiple crystal structures of the same target to build PF ensembles (PFEs). Toward this end, we first constructed a comprehensive data set to meet the requirements of ensemble modeling and validation. This set contains 10 diverse targets, 118 well-prepared X-ray structures of protein-ligand complexes, and large benchmarking sets of actives and decoys. Notably, we designed a unique workflow of two-layer classifiers based on the concept of ensemble learning and applied it to the construction of PFEs for all of the targets. Through extensive benchmarking studies, we demonstrated that (1) coupling PFE with Chemgauss4 significantly improves the early enrichment of Chemgauss4 itself and (2) PFEs show greater consistency in boosting early enrichment and larger overall enrichment than our prior PFs. In addition, we analyzed the pairwise topological similarities among the cognate ligands used to construct PFEs and found that it is the higher chemical diversity of the cognate ligands that leads to the improved performance of PFEs. Taken together, the results so far prove that the incorporation of knowledge from diverse protein-ligand interfaces by ensemble modeling is able to enhance the screening competence of SBVS scoring functions.
Virtual High-Throughput Screening To Identify Novel Activin Antagonists
Zhu, Jie; Mishra, Rama K.; Schiltz, Gary E.; Makanji, Yogeshwar; Scheidt, Karl A.; Mazar, Andrew P.; Woodruff, Teresa K.
2015-01-01
Activin belongs to the TGFβ superfamily, which is associated with several disease conditions, including cancer-related cachexia, preterm labor with delivery, and osteoporosis. Targeting activin and its related signaling pathways holds promise as a therapeutic approach to these diseases. A small-molecule ligand-binding groove was identified in the interface between the two activin βA subunits and was used for a virtual high-throughput in silico screening of the ZINC database to identify hits. Thirty-nine compounds without significant toxicity were tested in two well-established activin assays: FSHβ transcription and HepG2 cell apoptosis. This screening workflow resulted in two lead compounds: NUCC-474 and NUCC-555. These potential activin antagonists were then shown to inhibit activin A-mediated cell proliferation in ex vivo ovary cultures. In vivo testing showed that our most potent compound (NUCC-555) caused a dose-dependent decrease in FSH levels in ovariectomized mice. The Blitz competition binding assay confirmed target binding of NUCC-555 to activin A:ActRII, disrupting the complex's binding with ALK4-ECD-Fc in a dose-dependent manner. NUCC-555 also binds specifically to activin A compared with the related TGFβ superfamily member myostatin (GDF8). These data demonstrate a new in silico-based strategy for identifying small-molecule activin antagonists. Our approach is the first to identify a first-in-class small-molecule antagonist of activin binding to ALK4, which opens a completely new approach to inhibiting the activity of TGFβ receptor superfamily members. In addition, the lead compound can serve as a starting point for lead optimization toward the goal of a compound that may be effective in activin-mediated diseases. PMID:26098096
SIMPLEX: Cloud-Enabled Pipeline for the Comprehensive Analysis of Exome Sequencing Data
Fischer, Maria; Snajder, Rene; Pabinger, Stephan; Dander, Andreas; Schossig, Anna; Zschocke, Johannes; Trajanoski, Zlatko; Stocker, Gernot
2012-01-01
In recent studies, exome sequencing has proven to be a successful screening tool for the identification of candidate genes causing rare genetic diseases. Although underlying targeted sequencing methods are well established, necessary data handling and focused, structured analysis still remain demanding tasks. Here, we present a cloud-enabled autonomous analysis pipeline, which comprises the complete exome analysis workflow. The pipeline combines several in-house developed and published applications to perform the following steps: (a) initial quality control, (b) intelligent data filtering and pre-processing, (c) sequence alignment to a reference genome, (d) SNP and DIP detection, (e) functional annotation of variants using different approaches, and (f) detailed report generation during various stages of the workflow. The pipeline connects the selected analysis steps, exposes all available parameters for customized usage, performs required data handling, and distributes computationally expensive tasks either on a dedicated high-performance computing infrastructure or on the Amazon cloud environment (EC2). The presented application has already been used in several research projects including studies to elucidate the role of rare genetic diseases. The pipeline is continuously tested and is publicly available under the GPL as a VirtualBox or Cloud image at http://simplex.i-med.ac.at; additional supplementary data is provided at http://www.icbi.at/exome. PMID:22870267
Kakarala, Kavita Kumari; Jamil, Kaiser
2015-02-01
Drug resistance and drug-associated toxicity are the primary causes for withdrawal of many drugs, although patient recovery is satisfactory in many instances. Interestingly, the use of phytochemicals in the treatment of cancer as an alternative to synthetic drugs comes with a host of advantages: minimal side effects, good human absorption and low toxicity to normal cells. Protease activated receptor 1 (PAR1) has been established as a promising target in many diseases, including various cancers. Strong evidence also suggests a role in metastasis. There are no natural compounds known to inhibit its activity, so we aimed to identify phytochemicals with antagonist activity against PAR1. We screened phytochemicals from the Naturally Occurring Plant-based Anticancer Compound-Activity-Target database (NPACT, http://crdd.osdd.net/raghava/npact/ ) against PAR1 using the virtual screening workflow of the Schrödinger software, which analyzes pharmaceutically relevant properties using QikProp and calculates binding energy using Glide at three accuracy levels (high-throughput virtual screening, standard precision and extra precision). Our study led to the identification of phytochemicals that interacted with at least one experimentally determined active site residue of PAR1, showed no violations of Lipinski's rule of five, and had predicted high human absorption. Furthermore, structural interaction fingerprint analysis indicated that the residues H255, D256, E260, V257, L258, L262, Y337 and S344 may play an important role in the hydrogen bond interactions of the phytochemicals screened. Of these residues, H255 and L258 were experimentally proven to be important for antagonist binding. The residues Y183, L237, L258, L262, F271, L332, L333, Y337, L340, A349, Y350, A352, and Y353 showed the most hydrophobic interactions with the phytochemicals screened. The results of this work suggest that the phytochemicals Reissantins D, 24,25-dihydro-27-desoxywithaferin A, Isoguaiacin, 20-hydroxy-12-deoxyphorbol angelate, etc. could be potential antagonists of PAR1. However, further experimental studies are necessary to validate their antagonistic activity against PAR1.
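The Lipinski rule-of-five check used as a filter in screens like the one above can be sketched as follows. The property values here are invented for illustration; in practice they would be computed by a package such as QikProp or RDKit rather than typed in by hand.

```python
def lipinski_violations(props):
    """Count rule-of-five violations for one compound's computed properties."""
    rules = [
        props["mol_weight"] > 500,       # molecular weight over 500 Da
        props["clogp"] > 5,              # calculated logP over 5
        props["h_bond_donors"] > 5,      # more than 5 H-bond donors
        props["h_bond_acceptors"] > 10,  # more than 10 H-bond acceptors
    ]
    return sum(rules)

# Hypothetical property values for a single candidate compound
candidate = {"mol_weight": 470.6, "clogp": 3.8,
             "h_bond_donors": 2, "h_bond_acceptors": 7}
print(lipinski_violations(candidate))  # 0, so the compound passes the filter
```

A screening workflow would typically keep compounds with zero (or at most one) violations before passing them to docking.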
Dockres: a computer program that analyzes the output of virtual screening of small molecules
2010-01-01
Background This paper describes a computer program named Dockres that is designed to analyze and summarize the results of virtual screening of small molecules. The program is supplemented with utilities that support the screening process. Foremost among these utilities are scripts that run the virtual screening of a chemical library on a large number of processors in parallel. Methods Dockres and some of its supporting utilities are written in Fortran-77; other utilities are written as C-shell scripts that support the parallel execution of the screening. The current implementation of the program handles virtual screening with Autodock-3 and Autodock-4, but can be extended to work with the output of other programs. Results Analysis of virtual screening by Dockres led to both active and selective lead compounds. Conclusions Analysis of virtual screening was facilitated and enhanced by Dockres in both the authors' laboratories as well as laboratories elsewhere. PMID:20205801
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaskey, Alexander J.
There is a lack of state-of-the-art HPC simulation tools for simulating general quantum computing. Furthermore, there are no real software tools that integrate current quantum computers into existing classical HPC workflows. This product, the Quantum Virtual Machine (QVM), solves this problem by providing an extensible framework for pluggable virtual, or physical, quantum processing units (QPUs). It enables the execution of low level quantum assembly codes and returns the results of such executions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzoglio, Gabriele
The Fermilab Grid and Cloud Computing Department and the KISTI Global Science experimental Data hub Center propose a joint project. The goals are to enable scientific workflows of stakeholders to run on multiple cloud resources by use of (a) Virtual Infrastructure Automation and Provisioning, (b) Interoperability and Federation of Cloud Resources, and (c) High-Throughput Fabric Virtualization. This is a matching fund project in which Fermilab and KISTI will contribute equal resources.
Lu, Xinyan
2016-01-01
There is a clear requirement for enhancing laboratory information management during early absorption, distribution, metabolism and excretion (ADME) screening. The application of a commercial laboratory information management system (LIMS) is limited by complexity, insufficient flexibility, high costs and extended timelines. An improved custom in-house LIMS for ADME screening was developed using Excel. All Excel templates were generated through macros and formulae, and information flow was streamlined as much as possible. This system has been successfully applied in task generation, process control and data management, with a reduction in both labor time and human error rates. An Excel-based LIMS can provide a simple, flexible and cost/time-saving solution for improving workflow efficiencies in early ADME screening.
Virtual occlusal definition for orthognathic surgery.
Liu, X J; Li, Q Q; Zhang, Z; Li, T T; Xie, Z; Zhang, Y
2016-03-01
Computer-assisted surgical simulation is being used increasingly in orthognathic surgery. However, occlusal definition is still undertaken using model surgery with subsequent digitization via surface scanning or cone beam computed tomography. A software tool has been developed and a workflow set up in order to achieve a virtual occlusal definition. The results of a validation study carried out on 60 models of normal occlusion are presented. Inter- and intra-user correlation tests were used to investigate the reproducibility of the manual setting point procedure. The errors between the virtually set positions (test) and the digitized manually set positions (gold standard) were compared. The consistency in virtual set positions performed by three individual users was investigated by one way analysis of variance test. Inter- and intra-observer correlation coefficients for manual setting points were all greater than 0.95. Overall, the median error between the test and the gold standard positions was 1.06 mm. Errors did not differ among teeth (F=0.371, P>0.05). The errors were not significantly different from 1 mm (P>0.05). There were no significant differences in the errors made by the three independent users (P>0.05). In conclusion, this workflow for virtual occlusal definition was found to be reliable and accurate. Copyright © 2015 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Discovery of novel inhibitors for DHODH via virtual screening and X-ray crystallographic structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
McLean, Larry R.; Zhang, Ying; Degnen, William
2010-10-28
Amino-benzoic acid derivatives 1-4 were found to be inhibitors for DHODH by virtual screening, biochemical, and X-ray crystallographic studies. X-ray structures showed that 1 and 2 bind to DHODH as predicted by virtual screening, but 3 and 4 were found to be structurally different from the corresponding compounds initially identified by virtual screening.
AutoDrug: fully automated macromolecular crystallography workflows for fragment-based drug discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsai, Yingssu; Stanford University, 333 Campus Drive, Mudd Building, Stanford, CA 94305-5080; McPhillips, Scott E.
New software has been developed for automating the experimental and data-processing stages of fragment-based drug discovery at a macromolecular crystallography beamline. A new workflow-automation framework orchestrates beamline-control and data-analysis software while organizing results from multiple samples. AutoDrug is software based upon the scientific workflow paradigm that integrates the Stanford Synchrotron Radiation Lightsource macromolecular crystallography beamlines and third-party processing software to automate the crystallography steps of the fragment-based drug-discovery process. AutoDrug screens a cassette of fragment-soaked crystals, selects crystals for data collection based on screening results and user-specified criteria and determines optimal data-collection strategies. It then collects and processes diffraction data, performs molecular replacement using provided models and detects electron density that is likely to arise from bound fragments. All processes are fully automated, i.e. are performed without user interaction or supervision. Samples can be screened in groups corresponding to particular proteins, crystal forms and/or soaking conditions. A single AutoDrug run is only limited by the capacity of the sample-storage dewar at the beamline: currently 288 samples. AutoDrug was developed in conjunction with RestFlow, a new scientific workflow-automation framework. RestFlow simplifies the design of AutoDrug by managing the flow of data and the organization of results and by orchestrating the execution of computational pipeline steps. It also simplifies the execution and interaction of third-party programs and the beamline-control system. Modeling AutoDrug as a scientific workflow enables multiple variants that meet the requirements of different user groups to be developed and supported. A workflow tailored to mimic the crystallography stages comprising the drug-discovery pipeline of CoCrystal Discovery Inc. has been deployed and successfully demonstrated. This workflow was run once on the same 96 samples that the group had examined manually and the workflow cycled successfully through all of the samples, collected data from the same samples that were selected manually and located the same peaks of unmodeled density in the resulting difference Fourier maps.
Cornelius, Carl-Peter; Smolka, Wenko; Giessler, Goetz A; Wilde, Frank; Probst, Florian A
2015-06-01
Preoperative planning of mandibular reconstruction has moved from mechanical simulation by dental model casts or stereolithographic models into an almost completely virtual environment. CAD/CAM applications allow a high level of accuracy by providing a custom template-assisted contouring approach for bone flaps. However, the clinical accuracy of CAD reconstruction is limited by the use of prebent reconstruction plates, an analogue step in an otherwise digital workflow. In this paper, the integration of computer numerically controlled (CNC)-milled, patient-specific mandibular plates (PSMP) within the virtual workflow of computer-assisted mandibular free fibula flap reconstruction is illustrated in a clinical case. Intraoperatively, the bone segments as well as the plate arms showed a very good fit. Postoperative CT imaging demonstrated close approximation of the PSMP and fibular segments, and good alignment of the native mandible and fibular segments, including intersegmentally. Over a follow-up period of 12 months, there was an uneventful course of healing with good bony consolidation. The virtual design and automated fabrication of patient-specific mandibular reconstruction plates provide the missing link in the virtual workflow of computer-assisted mandibular free fibula flap reconstruction. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
ToxCast Workflow: High-throughput screening assay data processing, analysis and management (SOT)
US EPA’s ToxCast program is generating data in high-throughput screening (HTS) and high-content screening (HCS) assays for thousands of environmental chemicals, for use in developing predictive toxicity models. Currently the ToxCast screening program includes over 1800 unique c...
Automated recycling of chemistry for virtual screening and library design.
Vainio, Mikko J; Kogej, Thierry; Raubacher, Florian
2012-07-23
An early stage drug discovery project needs to identify a number of chemically diverse and attractive compounds. These hit compounds are typically found through high-throughput screening campaigns. The diversity of the chemical libraries used in screening is therefore important. In this study, we describe a virtual high-throughput screening system called Virtual Library. The system automatically "recycles" validated synthetic protocols and available starting materials to generate a large number of virtual compound libraries, and allows for fast searches in the generated libraries using a 2D fingerprint based screening method. Virtual Library links the returned virtual hit compounds back to experimental protocols to quickly assess the synthetic accessibility of the hits. The system can be used as an idea generator for library design to enrich the screening collection and to explore the structure-activity landscape around a specific active compound.
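The 2D fingerprint search at the core of such a system can be sketched in a few lines. This is a minimal illustration with hypothetical molecule IDs and bit-vector fingerprints represented as sets of on-bit indices, not the Virtual Library code itself:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprints given as sets of on-bits."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def screen(library, query_fp, threshold=0.5):
    """Return (id, score) pairs for library members similar to the query,
    best matches first."""
    hits = [(mol_id, tanimoto(fp, query_fp)) for mol_id, fp in library.items()]
    hits = [h for h in hits if h[1] >= threshold]
    return sorted(hits, key=lambda h: -h[1])

# Hypothetical virtual-library fingerprints (on-bit indices only).
library = {
    "virt-001": {1, 2, 3, 4},
    "virt-002": {1, 2, 7, 8},
    "virt-003": {9, 10},
}
query = {1, 2, 3, 5}
print(screen(library, query))  # only virt-001 clears the 0.5 threshold
```

Production systems use hashed fingerprints of thousands of bits; the set-based Tanimoto above is the same measure applied to the indices of the on bits.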
Impact of digital radiography on clinical workflow.
May, G A; Deer, D D; Dackiewicz, D
2000-05-01
It is commonly accepted that digital radiography (DR) improves workflow and patient throughput compared with traditional film radiography or computed radiography (CR). DR eliminates the film development step and the time to acquire the image from a CR reader. In addition, the wide dynamic range of DR is such that the technologist can perform the quality-control (QC) step directly at the modality in a few seconds, rather than having to transport the newly acquired image to a centralized QC station for review. Furthermore, additional workflow efficiencies can be achieved with DR by employing tight radiology information system (RIS) integration. In the DR imaging environment, this provides for patient demographic information to be automatically downloaded from the RIS to populate the DR Digital Imaging and Communications in Medicine (DICOM) image header. To learn more about this workflow efficiency improvement, we performed a comparative study of workflow steps under three different conditions: traditional film/screen x-ray, DR without RIS integration (ie, manual entry of patient demographics), and DR with RIS integration. This study was performed at the Cleveland Clinic Foundation (Cleveland, OH) using a newly acquired amorphous silicon flat-panel DR system from Canon Medical Systems (Irvine, CA). Our data show that DR without RIS results in substantial workflow savings over traditional film/screen practice. There is an additional 30% reduction in total examination time using DR with RIS integration.
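The RIS-integration step described above amounts to copying patient demographics from the RIS record into the image header instead of typing them at the modality. A minimal sketch, using hypothetical RIS field names (the DICOM attribute names themselves, such as PatientName and AccessionNumber, are standard):

```python
def populate_dicom_header(ris_record):
    """Map RIS demographic fields onto DICOM-style header keys,
    replacing the manual-entry step with an automatic download."""
    return {
        "PatientName": ris_record["name"],        # DICOM uses caret-separated names
        "PatientID": ris_record["mrn"],
        "PatientBirthDate": ris_record["dob"],    # YYYYMMDD in DICOM
        "AccessionNumber": ris_record["accession"],
    }

# Hypothetical RIS record for illustration.
ris = {"name": "DOE^JANE", "mrn": "000123", "dob": "19700101", "accession": "A42"}
header = populate_dicom_header(ris)
print(header["PatientName"])  # DOE^JANE
```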
NASA Astrophysics Data System (ADS)
Palestini, C.; Basso, A.
2017-11-01
In recent years, increased international investment in hardware and software supporting programs that adopt algorithms for photomodeling or laser-scanner data management has significantly reduced the cost of operations in support of Augmented Reality and Virtual Reality, which are designed to generate real-time explorable digital environments integrated with virtual stereoscopic headsets. The research analyzes transversal methodologies related to the acquisition of these technologies, intervening directly on the adoption of current VR tools within a specific workflow and considering issues related to the intensive use of such devices. It outlines a quick overview of a possible "virtual migration" phenomenon, assuming an integration with new high-speed internet systems capable of triggering a massive colonization of cyberspace that would paradoxically also affect everyday life and, more generally, human spatial perception. The contribution analyzes the application systems used for low-cost 3D photogrammetry by means of a precise pipeline, clarifying how a 3D model is generated, automatically retopologized, textured by color painting or photo-cloning techniques, and optimized for parametric insertion into virtual exploration platforms. The workflow analysis follows case studies in photomodeling, digital retopology and "virtual 3D transfer" of small archaeological artifacts and of an architectural compartment corresponding to the pronaos of the Aurum, a building designed in the 1940s by Michelucci. All operations are conducted with cheap or freely licensed software that today offers almost the same performance as its paid counterparts, progressively improving in data-processing speed and management.
Virtual Field Reconnaissance to enable multi-site collaboration in geoscience fieldwork in Chile.
NASA Astrophysics Data System (ADS)
Hughes, Leanne; Bateson, Luke; Ford, Jonathan; Napier, Bruce; Creixell, Christian; Contreras, Juan-Pablo; Vallette, Jane
2017-04-01
The unique challenges of geological mapping in remote terrain can make cross-organisation collaboration difficult. A collaboration between the British and Chilean Geological Surveys and the Chilean national mining company used the BGS digital mapping workflow and virtual field reconnaissance software (GeoVisionary) to undertake geological mapping in a complex area of Andean geology. The international team undertook a pre-field evaluation using GeoVisionary to integrate massive volumes of data and interpret high-resolution satellite imagery, terrain models and existing geological information, in order to capture, manipulate and understand geological features and re-interpret existing maps. This digital interpretation was then taken into the field and verified using the BGS digital data capture system (SIGMA.mobile), allowing the production of the final geological interpretation and the creation of a geological map. This presentation describes the digital mapping workflow used in Chile and highlights the key advantages of increased efficiency and improved communication with colleagues, stakeholders and funding bodies.
Improving virtual screening of G protein-coupled receptors via ligand-directed modeling
Simms, John; Christopoulos, Arthur; Wootten, Denise
2017-01-01
G protein-coupled receptors (GPCRs) play crucial roles in cell physiology and pathophysiology. There is increasing interest in using structural information for virtual screening (VS) of libraries and for structure-based drug design to identify novel agonist or antagonist leads. However, the sparse availability of experimentally determined GPCR/ligand complex structures with diverse ligands impedes the application of structure-based drug design (SBDD) programs directed to identifying new molecules with a select pharmacology. In this study, we apply ligand-directed modeling (LDM) to available GPCR X-ray structures to improve VS performance and selectivity towards molecules of a specific pharmacological profile. The described method refines a GPCR binding pocket conformation using a single known ligand for that GPCR. The LDM method is a computationally efficient, iterative workflow consisting of protein sampling and ligand docking. We developed an extensive benchmark comparing LDM-refined binding pockets to GPCR X-ray crystal structures across seven different GPCRs bound to a range of ligands of different chemotypes and pharmacological profiles. LDM-refined models showed improvement in VS performance over the original X-ray crystal structures in 21 out of 24 cases. In all cases, the LDM-refined models had superior performance in enriching for the chemotype of the refinement ligand. This likely contributes to the LDM success in all cases of refining an inhibitor-bound binding pocket into an agonist-bound one, a key task for GPCR SBDD programs. Indeed, agonist ligands are required for therapeutic intervention at a plethora of GPCRs; however, GPCR X-ray structures are mostly restricted to their inactive, inhibitor-bound state. PMID:29131821
GPURFSCREEN: a GPU based virtual screening tool using random forest classifier.
Jayaraj, P B; Ajay, Mathias K; Nufail, M; Gopakumar, G; Jaleel, U C A
2016-01-01
In-silico methods are an integral part of the modern drug discovery paradigm. Virtual screening, an in-silico method, is used to refine data models and reduce the chemical space on which wet lab experiments need to be performed. Virtual screening of a ligand data model requires large scale computations, making it a highly time consuming task. This process can be sped up by implementing parallelized algorithms on a Graphical Processing Unit (GPU). Random Forest is a robust classification algorithm that can be employed in virtual screening. A ligand based virtual screening tool (GPURFSCREEN) that uses random forests on GPU systems is proposed and evaluated in this paper. This tool produces optimized results at a lower execution time for large bioassay data sets. The quality of results produced by our tool on the GPU is the same as that in a regular serial environment. Considering the magnitude of data to be screened, the parallelized virtual screening has a significantly lower running time at high throughput. The proposed parallel tool outperforms its serial counterpart by successfully screening billions of molecules in training and prediction phases.
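As a toy stand-in for the random-forest classifier (not GPURFSCREEN itself, and without the GPU parallelism), a majority-vote ensemble of one-feature threshold rules over a mock bioassay illustrates the idea of ensemble classification for activity prediction; all data and parameters below are illustrative:

```python
import random

def train_forest(X, y, n_trees=25, seed=0):
    """Toy ensemble: each 'tree' is a one-feature threshold rule whose cut-off
    and polarity are picked to maximise training accuracy on that feature."""
    rng = random.Random(seed)
    trees = []
    for _ in range(n_trees):
        f = rng.randrange(len(X[0]))  # random feature choice, as in a random forest
        best = None
        for t in sorted({row[f] for row in X}):
            for sign in (1, -1):
                acc = sum(((sign * (row[f] - t) >= 0) == bool(label))
                          for row, label in zip(X, y)) / len(y)
                if best is None or acc > best[0]:
                    best = (acc, f, t, sign)
        trees.append(best[1:])
    return trees

def predict(trees, row):
    """Majority vote over the ensemble: 1 = predicted active."""
    votes = sum(sign * (row[f] - t) >= 0 for f, t, sign in trees)
    return int(votes > len(trees) / 2)

# Mock bioassay descriptors: feature 0 separates actives (1) from inactives (0).
X = [[0.9, 0.1], [0.8, 0.4], [0.2, 0.3], [0.1, 0.9]]
y = [1, 1, 0, 0]
forest = train_forest(X, y)
print([predict(forest, row) for row in X])
```

A real random forest grows full decision trees on bootstrap samples; the GPU version in the paper additionally parallelises tree evaluation across the ligand set.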
Knowledge-driven lead discovery.
Pirard, Bernard
2005-11-01
Virtual screening encompasses several computational approaches which have proven valuable for identifying novel leads. These approaches rely on available information. Herein, we review recent successful applications of virtual screening. The extension of virtual screening methodologies to target families is also briefly discussed.
De Paris, Renata; Frantz, Fábio A.; Norberto de Souza, Osmar; Ruiz, Duncan D. A.
2013-01-01
Molecular docking simulations of fully flexible protein receptor (FFR) models are coming of age. In our studies, an FFR model is represented by a series of different conformations derived from a molecular dynamics simulation trajectory of the receptor. For each conformation in the FFR model, a docking simulation is executed and analyzed. An important challenge is to perform virtual screening of millions of ligands using an FFR model in a sequential mode, since this can become computationally very demanding. In this paper, we propose a cloud-based web environment, called web Flexible Receptor Docking Workflow (wFReDoW), which reduces the CPU time of molecular docking simulations of FFR models to small molecules. It is based on a new workflow data pattern called self-adaptive multiple instances (P-SaMI) and on a middleware built on Amazon EC2 instances. P-SaMI reduces the number of molecular docking simulations while the middleware speeds up the docking experiments using a High Performance Computing (HPC) environment on the cloud. The experimental results show a reduction in the total elapsed time of the docking experiments and good quality of the new reduced receptor models produced by discarding the nonpromising conformations from an FFR model as ruled by the P-SaMI data pattern. PMID:23691504
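The idea of discarding nonpromising receptor conformations can be sketched as a simple pruning rule; the margin criterion, score values and conformation names below are illustrative assumptions, not the actual P-SaMI pattern:

```python
def prune_conformations(scores_by_conf, margin=2.0):
    """Keep receptor conformations whose best (lowest) docking score so far
    is within `margin` of the overall best; discard the rest as nonpromising.
    Lower scores mean better predicted binding, as in many docking programs."""
    best_per_conf = {conf: min(scores) for conf, scores in scores_by_conf.items()}
    overall_best = min(best_per_conf.values())
    return {conf for conf, best in best_per_conf.items()
            if best <= overall_best + margin}

# Hypothetical docking scores (kcal/mol) for three FFR snapshots x four ligands.
scores = {
    "conf_010": [-9.1, -8.4, -7.9, -8.8],
    "conf_042": [-6.0, -5.5, -6.2, -5.9],
    "conf_077": [-8.0, -7.5, -8.6, -7.7],
}
print(sorted(prune_conformations(scores)))  # conf_042 never scores well and is dropped
```

In a self-adaptive scheme, such a rule would be re-evaluated after each batch of ligands, so weak conformations stop consuming docking time early.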
Xing, Junhao; Yang, Lingyun; Li, Hui; Li, Qing; Zhao, Leilei; Wang, Xinning; Zhang, Yuan; Zhou, Muxing; Zhou, Jinpei; Zhang, Huibin
2015-05-05
The coagulation enzyme factor Xa (fXa) plays a crucial role in the blood coagulation cascade. In this study, three-dimensional fragment based drug design (FBDD) combined with structure-based pharmacophore (SBP) model and structural consensus docking were employed to identify novel fXa inhibitors. After a multi-stage virtual screening (VS) workflow, two hit compounds 3780 and 319 having persistent high performance were identified. Then, these two hit compounds and several analogs were synthesized and screened for in-vitro inhibition of fXa. The experimental data showed that most of the designed compounds displayed significant in vitro potency against fXa. Among them, compound 9b displayed the greatest in vitro potency against fXa with the IC50 value of 23 nM and excellent selectivity versus thrombin (IC50 = 40 μM). Moreover, the prolongation of the prothrombin time (PT) was measured for compound 9b to evaluate its in vitro anticoagulant activity. As a result, compound 9b exhibited pronounced anticoagulant activity with the 2 × PT value of 8.7 μM. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Improving data collection, documentation, and workflow in a dementia screening study.
Read, Kevin B; LaPolla, Fred Willie Zametkin; Tolea, Magdalena I; Galvin, James E; Surkis, Alisa
2017-04-01
A clinical study team performing three multicultural dementia screening studies identified the need to improve data management practices and facilitate data sharing. A collaboration was initiated with librarians as part of the National Library of Medicine (NLM) informationist supplement program. The librarians identified areas for improvement in the studies' data collection, entry, and processing workflows. The librarians' role in this project was to meet needs expressed by the study team around improving data collection and processing workflows to increase study efficiency and ensure data quality. The librarians addressed the data collection, entry, and processing weaknesses through standardizing and renaming variables, creating an electronic data capture system using REDCap, and developing well-documented, reproducible data processing workflows. NLM informationist supplements provide librarians with valuable experience in collaborating with study teams to address their data needs. For this project, the librarians gained skills in project management, REDCap, and understanding of the challenges and specifics of a clinical research study. However, the time and effort required to provide targeted and intensive support for one study team was not scalable to the library's broader user community.
NASA Astrophysics Data System (ADS)
Li, Xiayue; Curtis, Farren S.; Rose, Timothy; Schober, Christoph; Vazquez-Mayagoitia, Alvaro; Reuter, Karsten; Oberhofer, Harald; Marom, Noa
2018-06-01
We present Genarris, a Python package that performs configuration space screening for molecular crystals of rigid molecules by random sampling with physical constraints. For fast energy evaluations, Genarris employs a Harris approximation, whereby the total density of a molecular crystal is constructed via superposition of single molecule densities. Dispersion-inclusive density functional theory is then used for the Harris density without performing a self-consistency cycle. Genarris uses machine learning for clustering, based on a relative coordinate descriptor developed specifically for molecular crystals, which is shown to be robust in identifying packing motif similarity. In addition to random structure generation, Genarris offers three workflows based on different sequences of successive clustering and selection steps: the "Rigorous" workflow is an exhaustive exploration of the potential energy landscape, the "Energy" workflow produces a set of low energy structures, and the "Diverse" workflow produces a maximally diverse set of structures. The latter is recommended for generating initial populations for genetic algorithms. Here, the implementation of Genarris is reported and its application is demonstrated for three test cases.
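A "maximally diverse set of structures", as targeted by the Diverse workflow, is commonly obtained by greedy max-min selection. A sketch on toy one-dimensional descriptors follows; Genarris's actual relative-coordinate descriptor and clustering are more elaborate, so this only illustrates the selection step:

```python
def maxmin_select(items, dist, k):
    """Greedy max-min selection: start from the first item, then repeatedly add
    the item whose nearest already-selected neighbour is farthest away."""
    selected = [items[0]]
    while len(selected) < k:
        best = max((i for i in items if i not in selected),
                   key=lambda i: min(dist(i, s) for s in selected))
        selected.append(best)
    return selected

# Toy 1-D "structure descriptors"; real descriptors encode crystal packing.
structures = [0.0, 0.1, 0.2, 5.0, 5.1, 9.0]
picks = maxmin_select(structures, lambda a, b: abs(a - b), 3)
print(picks)  # [0.0, 9.0, 5.0]: one representative from each cluster of values
```

This spread-out property is exactly what makes such a subset a good initial population for a genetic algorithm.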
DHM simulation in virtual environments: a case-study on control room design.
Zamberlan, M; Santos, V; Streit, P; Oliveira, J; Cury, R; Negri, T; Pastura, F; Guimarães, C; Cid, G
2012-01-01
This paper presents the workflow developed for the application of serious games in the design of complex cooperative work settings. The project was based on ergonomic studies and the development of a control room within a participatory design process. Our main concerns were the 3D virtual human representation acquired from 3D scanning, human interaction, workspace layout and equipment designed according to ergonomics standards. Using the Unity3D platform to design the virtual environment, the virtual human model can be controlled by users in a dynamic scenario in order to evaluate the new work settings and simulate work activities. The results obtained showed that this virtual technology can drastically change the design process by improving the level of interaction between end users, managers and the human factors team.
Shape-Based Virtual Screening with Volumetric Aligned Molecular Shapes
Koes, David Ryan; Camacho, Carlos J.
2014-01-01
Shape-based virtual screening is an established and effective method for identifying small molecules that are similar in shape and function to a reference ligand. We describe a new method of shape-based virtual screening, volumetric aligned molecular shapes (VAMS). VAMS uses efficient data structures to encode and search molecular shapes. We demonstrate that VAMS is an effective method for shape-based virtual screening and that it can be successfully used as a pre-filter to accelerate more computationally demanding search algorithms. Unique to VAMS is a novel minimum/maximum shape constraint query for precisely specifying the desired molecular shape. Shape constraint searches in VAMS are particularly efficient, and millions of shapes can be searched in a fraction of a second. We compare the performance of VAMS with two other shape-based virtual screening algorithms on a benchmark of 102 protein targets consisting of more than 32 million molecular shapes and find that VAMS provides a competitive trade-off between run-time performance and virtual screening performance. PMID:25049193
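The minimum/maximum shape constraint query can be pictured with voxelised shapes represented as sets of occupied grid cells. This is an illustration of the concept only, not the VAMS data structures, and all shapes below are hypothetical:

```python
def passes_shape_constraints(shape, min_shape, max_shape):
    """A candidate passes if it covers every voxel of the minimum shape and
    stays entirely inside the voxels of the maximum shape."""
    return min_shape <= shape and shape <= max_shape  # set inclusion tests

# Hypothetical voxelised shapes as sets of occupied (x, y, z) grid cells.
min_shape = {(0, 0, 0), (1, 0, 0)}                       # must be filled
max_shape = {(0, 0, 0), (1, 0, 0), (2, 0, 0), (1, 1, 0)}  # allowed envelope
candidate_a = {(0, 0, 0), (1, 0, 0), (2, 0, 0)}  # within the envelope
candidate_b = {(0, 0, 0), (1, 0, 0), (5, 5, 5)}  # pokes outside the max shape
print(passes_shape_constraints(candidate_a, min_shape, max_shape),
      passes_shape_constraints(candidate_b, min_shape, max_shape))
```

Efficient engines replace the explicit sets with compact encodings so that millions of such inclusion tests run in a fraction of a second.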
Support for Taverna workflows in the VPH-Share cloud platform.
Kasztelnik, Marek; Coto, Ernesto; Bubak, Marian; Malawski, Maciej; Nowakowski, Piotr; Arenas, Juan; Saglimbeni, Alfredo; Testi, Debora; Frangi, Alejandro F
2017-07-01
To address the increasing need for collaborative endeavours within the Virtual Physiological Human (VPH) community, the VPH-Share collaborative cloud platform allows researchers to expose and share sequences of complex biomedical processing tasks in the form of computational workflows. The Taverna Workflow System is a very popular tool for orchestrating complex biomedical and bioinformatics processing tasks in the VPH community. This paper describes the VPH-Share components that support the building and execution of Taverna workflows, and explains how they interact with other VPH-Share components to improve the capabilities of the VPH-Share platform. Taverna workflow support is delivered by the Atmosphere cloud management platform and the VPH-Share Taverna plugin. These components are explained in detail, along with the two main procedures that were developed to enable this seamless integration: workflow composition and execution. The key outcomes are: 1) seamless integration of VPH-Share with other components and systems; 2) an extended range of different tools for workflows; 3) successful integration of scientific workflows from other VPH projects; and 4) execution speed improvement for medical applications. The presented workflow integration provides VPH-Share users with a wide range of possibilities to compose and execute workflows, such as desktop or online composition, online batch execution, multithreading, remote execution, etc. The specific advantages of each supported tool are presented, as are the roles of Atmosphere and the VPH-Share plugin within the VPH-Share project. The combination of the VPH-Share plugin and Atmosphere endows the VPH-Share infrastructure with far more flexible, powerful and usable capabilities for the VPH-Share community. As both components can continue to evolve and improve independently, further improvements are still to be developed. Copyright © 2017 Elsevier B.V. All rights reserved.
[Chemical databases and virtual screening].
Rognan, Didier; Bonnet, Pascal
2014-12-01
A prerequisite to any virtual screening is the definition of the compound libraries to be screened. As we describe here, various sources are available. The selection of the proper library is usually project-dependent but at least as important as the screening method itself. This review details the main compound libraries that are available for virtual screening and guides the reader to the best possible selection according to their needs. © 2014 médecine/sciences – Inserm.
FluxCTTX: A LIMS-based tool for management and analysis of cytotoxicity assays data
2015-01-01
Background: Cytotoxicity assays have been used by researchers to screen for cytotoxicity in compound libraries. Researchers can either look for cytotoxic compounds or screen "hits" from initial high-throughput drug screens for unwanted cytotoxic effects before investing in their development as a pharmaceutical. These assays may be used as an alternative to animal experimentation and are becoming increasingly important in modern laboratories. However, the execution of these assays at large scale and across different laboratories requires, among other things, the management of protocols, reagents and cell lines, as well as of the data produced, which can be a challenge. The management of all this information is greatly improved by the use of computational tools that save time and guarantee quality. However, a tool that performs this task designed specifically for cytotoxicity assays was not yet available. Results: In this work, we used a workflow-based LIMS (the Flux system) and the Together Workflow Editor as a framework to develop FluxCTTX, a tool for managing data from cytotoxicity assays performed at different laboratories. The core of this work is a workflow that represents all stages of the assay and has been developed and uploaded in Flux. This workflow models the activities of cytotoxicity assays performed as described in the OECD 129 Guidance Document. Conclusions: FluxCTTX presents a solution for the management of the data produced by cytotoxicity assays performed in interlaboratory comparisons. Its adoption will contribute to guaranteeing the quality of activities in the process of cytotoxicity testing and enforce the use of Good Laboratory Practices (GLP). Furthermore, the workflow developed is complete and can be adapted to other contexts and different tests for the management of other types of data. PMID:26696462
Modern approaches to accelerate discovery of new antischistosomal drugs.
Neves, Bruno Junior; Muratov, Eugene; Machado, Renato Beilner; Andrade, Carolina Horta; Cravo, Pedro Vitor Lemos
2016-06-01
The almost exclusive use of only praziquantel for the treatment of schistosomiasis has raised concerns about the possible emergence of drug-resistant schistosomes. Consequently, there is an urgent need for new antischistosomal drugs. The identification of leads and the generation of high quality data are crucial steps in the early stages of schistosome drug discovery projects. Herein, the authors focus on the current developments in antischistosomal lead discovery, specifically referring to the use of automated in vitro target-based and whole-organism screens and virtual screening of chemical databases. They highlight the strengths and pitfalls of each of the above-mentioned approaches, and suggest possible roadmaps towards the integration of several strategies, which may contribute to optimizing research outputs and lead to more successful and cost-effective drug discovery endeavors. Increasing partnerships and access to funding for drug discovery have strengthened the battle against schistosomiasis in recent years. However, the authors believe this battle also requires innovative strategies to overcome scientific challenges. In this context, significant advances in in vitro screening as well as computer-aided drug discovery have contributed to increasing the success rate and reducing the costs of drug discovery campaigns. Although some of these approaches are already used in current antischistosomal lead discovery pipelines, the integration of these strategies in a solid workflow should allow the production of new treatments for schistosomiasis in the near future.
Eleven quick tips for architecting biomedical informatics workflows with cloud computing.
Cole, Brian S; Moore, Jason H
2018-03-01
Cloud computing has revolutionized the development and operations of hardware and software across diverse technological arenas, yet academic biomedical research has lagged behind despite the numerous and weighty advantages that cloud computing offers. Biomedical researchers who embrace cloud computing can reap rewards in cost reduction, decreased development and maintenance workload, increased reproducibility, ease of sharing data and software, enhanced security, horizontal and vertical scalability, high availability, a thriving technology partner ecosystem, and much more. Despite these advantages that cloud-based workflows offer, the majority of scientific software developed in academia does not utilize cloud computing and must be migrated to the cloud by the user. In this article, we present 11 quick tips for architecting biomedical informatics workflows on compute clouds, distilling knowledge gained from experience developing, operating, maintaining, and distributing software and virtualized appliances on the world's largest cloud. Researchers who follow these tips stand to benefit immediately by migrating their workflows to cloud computing and embracing the paradigm of abstraction.
Pozzi, Alessandro; Arcuri, Lorenzo; Moy, Peter K
2018-03-01
The growing interest in minimally invasive implant placement and delivery of a prefabricated provisional prosthesis immediately, thus minimizing "time to teeth," has led to the development of numerous 3-dimensional (3D) planning software programs. Given the enhancements associated with fully digital workflows, such as better 3D soft-tissue visualization and virtual tooth rendering, computer-guided implant surgery and immediate function has become an effective and reliable procedure. This article describes how modern implant planning software programs provide a comprehensive digital platform that enables efficient interplay between the surgical and restorative aspects of implant treatment. These new technologies that streamline the overall digital workflow allow transformation of the digital wax-up into a personalized, CAD/CAM-milled provisional restoration. Thus, collaborative digital workflows provide a novel approach for time-efficient delivery of a customized, screw-retained provisional restoration on the day of implant surgery, resulting in improved predictability for immediate function in the partially edentate patient.
Building a virtual ligand screening pipeline using free software: a survey.
Glaab, Enrico
2016-03-01
Virtual screening, the search for bioactive compounds via computational methods, provides a wide range of opportunities to speed up drug development and reduce the associated risks and costs. While virtual screening is already a standard practice in pharmaceutical companies, its applications in preclinical academic research still remain under-exploited, in spite of an increasing availability of dedicated free databases and software tools. In this survey, an overview of recent developments in this field is presented, focusing on free software and data repositories for screening as alternatives to their commercial counterparts, and outlining how available resources can be interlinked into a comprehensive virtual screening pipeline using typical academic computing facilities. Finally, to facilitate the set-up of corresponding pipelines, a downloadable software system is provided, using platform virtualization to integrate pre-installed screening tools and scripts for reproducible application across different operating systems. © The Author 2015. Published by Oxford University Press.
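The interlinked pipeline such a survey describes is a funnel of successive filters, which can be sketched as a chain of predicates over toy molecule records. Property values and cut-offs below are illustrative; a real pipeline would compute them with the free tools the survey covers:

```python
def lipinski_like(mol):
    """Hypothetical drug-likeness filter in the spirit of Lipinski's rule of five."""
    return (mol["mw"] <= 500 and mol["logp"] <= 5
            and mol["hbd"] <= 5 and mol["hba"] <= 10)

def run_pipeline(library, stages):
    """Apply each filter stage in turn, shrinking the library at every step."""
    for stage in stages:
        library = [mol for mol in library if stage(mol)]
    return library

# Toy molecule records with precomputed properties and a mock docking score.
library = [
    {"id": "m1", "mw": 320, "logp": 2.1, "hbd": 2, "hba": 5, "score": -8.2},
    {"id": "m2", "mw": 650, "logp": 6.3, "hbd": 7, "hba": 12, "score": -9.5},
    {"id": "m3", "mw": 410, "logp": 4.0, "hbd": 1, "hba": 8, "score": -6.0},
]
hits = run_pipeline(library, [lipinski_like, lambda m: m["score"] <= -7.0])
print([m["id"] for m in hits])  # m2 fails drug-likeness, m3 fails the score cut
```

Ordering cheap property filters before expensive docking stages is what makes such a funnel tractable on typical academic computing facilities.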
Can virtual reality be used to conduct mass prophylaxis clinic training? A pilot program.
Yellowlees, Peter; Cook, James N; Marks, Shayna L; Wolfe, Daniel; Mangin, Elanor
2008-03-01
To create and evaluate a pilot bioterrorism defense training environment using virtual reality technology. The present pilot project used Second Life, an internet-based virtual world system, to construct a virtual reality environment to mimic an actual setting that might be used as a Strategic National Stockpile (SNS) distribution site for northern California in the event of a bioterrorist attack. Scripted characters were integrated into the system as mock patients to analyze various clinic workflow scenarios. Users tested the virtual environment over two sessions. Thirteen users who toured the environment were asked to complete an evaluation survey. Respondents reported that the virtual reality system was relevant to their practice and had potential as a method of bioterrorism defense training. Computer simulations of bioterrorism defense training scenarios are feasible with existing personal computer technology. The use of internet-connected virtual environments holds promise for bioterrorism defense training. Recommendations are made for public health agencies regarding the implementation and benefits of using virtual reality for mass prophylaxis clinic training.
Fragment-based screening in tandem with phenotypic screening provides novel antiparasitic hits.
Blaazer, Antoni R; Orrling, Kristina M; Shanmugham, Anitha; Jansen, Chimed; Maes, Louis; Edink, Ewald; Sterk, Geert Jan; Siderius, Marco; England, Paul; Bailey, David; de Esch, Iwan J P; Leurs, Rob
2015-01-01
Methods to discover biologically active small molecules include target-based and phenotypic screening approaches. One of the main difficulties in drug discovery is elucidating and exploiting the relationship between drug activity at the protein target and disease modification, a phenotypic endpoint. Fragment-based drug discovery is a target-based approach that typically involves the screening of a relatively small number of fragment-like (molecular weight <300) molecules that efficiently cover chemical space. Here, we report a fragment screening on TbrPDEB1, an essential cyclic nucleotide phosphodiesterase (PDE) from Trypanosoma brucei, and human PDE4D, an off-target, in a workflow in which fragment hits and a series of close analogs are subsequently screened for antiparasitic activity in a phenotypic panel. The phenotypic panel contained T. brucei, Trypanosoma cruzi, Leishmania infantum, and Plasmodium falciparum, the causative agents of human African trypanosomiasis (sleeping sickness), Chagas disease, leishmaniasis, and malaria, respectively, as well as MRC-5 human lung cells. This hybrid screening workflow has resulted in the discovery of various benzhydryl ethers with antiprotozoal activity and low toxicity, representing interesting starting points for further antiparasitic optimization. © 2014 Society for Laboratory Automation and Screening.
ESO Reflex: a graphical workflow engine for data reduction
NASA Astrophysics Data System (ADS)
Hook, Richard; Ullgrén, Marko; Romaniello, Martino; Maisala, Sami; Oittinen, Tero; Solin, Otto; Savolainen, Ville; Järveläinen, Pekka; Tyynelä, Jani; Péron, Michèle; Ballester, Pascal; Gabasch, Armin; Izzo, Carlo
ESO Reflex is a prototype software tool that provides a novel approach to astronomical data reduction by integrating a modern graphical workflow system (Taverna) with existing legacy data reduction algorithms. Most of the raw data produced by instruments at the ESO Very Large Telescope (VLT) in Chile are reduced using recipes. These are compiled C applications following an ESO standard and utilising routines provided by the Common Pipeline Library (CPL). Currently these are run in batch mode as part of the data flow system to generate the input to the ESO/VLT quality control process and are also exported for use offline. ESO Reflex can invoke CPL-based recipes in a flexible way through a general purpose graphical interface. ESO Reflex is based on the Taverna system that was originally developed within the UK life-sciences community. Workflows have been created so far for three VLT/VLTI instruments, and the GUI allows the user to make changes to these or create workflows of their own. Python scripts or IDL procedures can be easily brought into workflows and a variety of visualisation and display options, including custom product inspection and validation steps, are available. Taverna is intended for use with web services and experiments using ESO Reflex to access Virtual Observatory web services have been successfully performed. ESO Reflex is the main product developed by Sampo, a project led by ESO and conducted by a software development team from Finland as an in-kind contribution to joining ESO. The goal was to look into the needs of the ESO community in the area of data reduction environments and to create pilot software products that illustrate critical steps along the road to a new system. Sampo concluded early in 2008. This contribution will describe ESO Reflex and show several examples of its use both locally and using Virtual Observatory remote web services. ESO Reflex is expected to be released to the community in early 2009.
A Computational Workflow for the Automated Generation of Models of Genetic Designs.
Misirli, Göksel; Nguyen, Tramy; McLaughlin, James Alastair; Vaidyanathan, Prashant; Jones, Timothy S; Densmore, Douglas; Myers, Chris; Wipat, Anil
2018-06-05
Computational models are essential to engineer predictable biological systems and to scale up this process for complex systems. Computational modeling often requires expert knowledge and data to build models. Clearly, manual creation of models is not scalable for large designs. Despite several automated model construction approaches, computational methodologies to bridge knowledge in design repositories and the process of creating computational models have still not been established. This paper describes a workflow for automatic generation of computational models of genetic circuits from data stored in design repositories using existing standards. This workflow leverages the software tool SBOLDesigner to build structural models that are then enriched by the Virtual Parts Repository API using Systems Biology Open Language (SBOL) data fetched from the SynBioHub design repository. The iBioSim software tool is then utilized to convert this SBOL description into a computational model encoded using the Systems Biology Markup Language (SBML). Finally, this SBML model can be simulated using a variety of methods. This workflow provides synthetic biologists with easy to use tools to create predictable biological systems, hiding away the complexity of building computational models. This approach can further be incorporated into other computational workflows for design automation.
Reynolds, Christopher R; Muggleton, Stephen H; Sternberg, Michael J E
2015-01-01
The use of virtual screening has become increasingly central to the drug development pipeline, with ligand-based virtual screening used to screen databases of compounds to predict their bioactivity against a target. These databases can only represent a small fraction of chemical space, and this paper describes a method of exploring synthetic space by applying virtual reactions to promising compounds within a database, and generating focussed libraries of predicted derivatives. A ligand-based virtual screening tool, Investigational Novel Drug Discovery by Example (INDDEx), is used as the basis for a system of virtual reactions. The use of virtual reactions is estimated to open up a space of 1.21×10^12 potential molecules. A de novo design algorithm known as Partial Logical-Rule Reactant Selection (PLoRRS) is introduced and incorporated into the INDDEx methodology. PLoRRS uses logical rules from the INDDEx model to select reactants for the de novo generation of potentially active products. The PLoRRS method is found to significantly increase the likelihood of retrieving molecules similar to known actives, with a p-value of 0.016. Case studies demonstrate that the virtual reactions produce molecules highly similar to known actives, including known blockbuster drugs. PMID:26583052
DEC Ada interface to Screen Management Guidelines (SMG)
NASA Technical Reports Server (NTRS)
Laomanachareon, Somsak; Lekkos, Anthony A.
1986-01-01
DEC's Screen Management Guidelines are the Run-Time Library procedures that perform terminal-independent screen management functions on a VT100-class terminal. These procedures assist users in designing, composing, and keeping track of complex images on a video screen. There are three fundamental elements in the screen management model: the pasteboard, the virtual display, and the virtual keyboard. The pasteboard is like a two-dimensional area on which a user places and manipulates screen displays. The virtual display is a rectangular part of the terminal screen to which a program writes data with procedure calls. The virtual keyboard is a logical structure for input operation associated with a physical keyboard. SMG can be called by all major VAX languages. Through Ada, predefined language Pragmas are used to interface with SMG. These features and elements of SMG are briefly discussed.
NASA Astrophysics Data System (ADS)
Hsieh, Jui-Hua; Wang, Xiang S.; Teotico, Denise; Golbraikh, Alexander; Tropsha, Alexander
2008-09-01
The use of inaccurate scoring functions in docking algorithms may result in the selection of compounds with high predicted binding affinity that nevertheless are known experimentally not to bind to the target receptor. Such falsely predicted binders have been termed 'binding decoys'. We posed a question as to whether true binders and decoys could be distinguished based only on their structural chemical descriptors using approaches commonly used in ligand based drug design. We have applied the k-Nearest Neighbor (kNN) classification QSAR approach to a dataset of compounds characterized as binders or binding decoys of AmpC beta-lactamase. Models were subjected to rigorous internal and external validation as part of our standard workflow, and a special QSAR modeling scheme was employed that took into account the imbalanced ratio of inhibitors to non-binders (1:4) in this dataset. 342 predictive models were obtained with correct classification rate (CCR) for both training and test sets as high as 0.90 or higher. The prediction accuracy was as high as 100% (CCR = 1.00) for the external validation set composed of 10 compounds (5 true binders and 5 decoys) selected randomly from the original dataset. For an additional external set of 50 known non-binders, we have achieved the CCR of 0.87 using a very conservative model applicability domain threshold. The validated binary kNN QSAR models were further employed for mining the NCGC AmpC screening dataset (69,653 compounds). The consensus prediction of 64 compounds identified as screening hits in the AmpC PubChem assay disagreed with their annotation in PubChem but was in agreement with the results of secondary assays. At the same time, 15 compounds were identified as potential binders contrary to their annotation in PubChem. Five of them were tested experimentally and showed inhibitory activities in the millimolar range, with the highest binding constant Ki of 135 μM.
Our studies suggest that validated QSAR models could complement structure based docking and scoring approaches in identifying promising hits by virtual screening of molecular libraries.
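The kNN classification and the imbalance-aware CCR metric described above can be sketched in plain Python; the descriptor vectors, labels, and neighbor count in this example are illustrative assumptions, not data from the study:

```python
from collections import Counter
import math

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points (Euclidean distance in descriptor space)."""
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], query))[:k]
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

def ccr(y_true, y_pred):
    """Correct classification rate: the mean of per-class accuracies,
    which is insensitive to class imbalance (e.g. the 1:4 inhibitor to
    non-binder ratio mentioned above)."""
    rates = []
    for cls in set(y_true):
        idx = [i for i, y in enumerate(y_true) if y == cls]
        rates.append(sum(y_pred[i] == cls for i in idx) / len(idx))
    return sum(rates) / len(rates)

# Toy descriptor space: binders cluster near (1, 1), decoys near (0, 0).
train = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.9, 1.0)]
labels = ["decoy", "decoy", "binder", "binder"]
prediction = knn_predict(train, labels, (0.95, 0.9), k=3)
```

A perfect classifier gives CCR = 1.00, matching the external validation result quoted above; unlike plain accuracy, CCR cannot be inflated by always predicting the majority class.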
Improving data collection, documentation, and workflow in a dementia screening study
Read, Kevin B.; LaPolla, Fred Willie Zametkin; Tolea, Magdalena I.; Galvin, James E.; Surkis, Alisa
2017-01-01
Background: A clinical study team performing three multicultural dementia screening studies identified the need to improve data management practices and facilitate data sharing. A collaboration was initiated with librarians as part of the National Library of Medicine (NLM) informationist supplement program. The librarians identified areas for improvement in the studies' data collection, entry, and processing workflows. Case Presentation: The librarians' role in this project was to meet needs expressed by the study team around improving data collection and processing workflows to increase study efficiency and ensure data quality. The librarians addressed the data collection, entry, and processing weaknesses through standardizing and renaming variables, creating an electronic data capture system using REDCap, and developing well-documented, reproducible data processing workflows. Conclusions: NLM informationist supplements provide librarians with valuable experience in collaborating with study teams to address their data needs. For this project, the librarians gained skills in project management, REDCap, and understanding of the challenges and specifics of a clinical research study. However, the time and effort required to provide targeted and intensive support for one study team was not scalable to the library's broader user community. PMID:28377680
Commuting from Electronic Cottage to Virtual Library.
ERIC Educational Resources Information Center
Woodward, Jeannette
1996-01-01
Although telecommuting has been found to increase productivity and morale in business environments, libraries rarely consider it. This article discusses telecommuting's potential impact on contact with users, length of employment, job descriptions, budgets, management style, communication, and workflow. This option may help libraries retain older…
Integrated workflows for spiking neuronal network simulations
Antolík, Ján; Davison, Andrew P.
2013-01-01
The increasing availability of computational resources is enabling more detailed, realistic modeling in computational neuroscience, resulting in a shift toward more heterogeneous models of neuronal circuits, and employment of complex experimental protocols. This poses a challenge for existing tool chains, as the set of tools involved in a typical modeler's workflow is expanding concomitantly, with growing complexity in the metadata flowing between them. For many parts of the workflow, a range of tools is available; however, numerous areas lack dedicated tools, while integration of existing tools is limited. This forces modelers to either handle the workflow manually, leading to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases reducing their productivity. To address these issues, we have developed Mozaik: a workflow system for spiking neuronal network simulations written in Python. Mozaik integrates model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualization into a single automated workflow, ensuring that all relevant metadata are available to all workflow components. It is based on several existing tools, including PyNN, Neo, and Matplotlib. It offers a declarative way to specify models and recording configurations using hierarchically organized configuration files. Mozaik automatically records all data together with all relevant metadata about the experimental context, allowing automation of the analysis and visualization stages. Mozaik has a modular architecture, and the existing modules are designed to be extensible with minimal programming effort. Mozaik increases the productivity of running virtual experiments on highly structured neuronal networks by automating the entire experimental cycle, while increasing the reliability of modeling studies by relieving the user from manual handling of the flow of metadata between the individual workflow stages. 
PMID:24368902
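The hierarchically organized configuration files mentioned above follow a common pattern: an experiment-specific configuration overrides a base configuration key by key, with nested sections merged recursively. The sketch below illustrates that pattern only; the keys and the merge_config helper are hypothetical, not Mozaik's actual API:

```python
def merge_config(base, override):
    """Recursively merge an experiment-specific config over base defaults;
    nested dicts are merged key by key, and scalars in `override` win."""
    merged = dict(base)
    for key, val in override.items():
        if isinstance(val, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], val)
        else:
            merged[key] = val
    return merged

# Hypothetical base model specification and per-experiment override.
base = {"sheets": {"V1": {"size": 1000, "density": 100}},
        "recording": {"variables": ["spikes"]}}
experiment = {"sheets": {"V1": {"size": 2000}},
              "recording": {"variables": ["spikes", "v"]}}
cfg = merge_config(base, experiment)
```

The override changes only `size`, so `density` survives from the base file; this is what lets a hierarchy of small configuration files describe a large family of related virtual experiments.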
Paini, Alicia; Sala Benito, Jose Vicente; Bessems, Jos; Worth, Andrew P
2017-12-01
Physiologically based kinetic (PBK) models and the virtual cell based assay can be linked to form so-called physiologically based dynamic (PBD) models. This study illustrates the development and application of a PBK model for prediction of estragole-induced DNA adduct formation and hepatotoxicity in humans. To address the hepatotoxicity, HepaRG cells were used as a surrogate for liver cells, with cell viability being used as the in vitro toxicological endpoint. Information on DNA adduct formation was taken from the literature. Since estragole-induced cell damage is not directly caused by the parent compound, but by a reactive metabolite, information on the metabolic pathway was incorporated into the model. In addition, a user-friendly tool was developed by implementing the PBK/D model into a KNIME workflow. This workflow can be used to perform in vitro to in vivo extrapolation and forward as well as backward dosimetry in support of chemical risk assessment. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Leveraging Existing Heritage Documentation for Animations: Senate Virtual Tour
NASA Astrophysics Data System (ADS)
Dhanda, A.; Fai, S.; Graham, K.; Walczak, G.
2017-08-01
The use of digital documentation techniques has led to an increase in opportunities for using documentation data for valorization purposes, in addition to technical purposes. Likewise, building information models (BIMs) made from these data sets hold valuable information that can be as effective for public education as it is for rehabilitation. A BIM can reveal the elements of a building, as well as the different stages of a building over time. Valorizing this information increases the possibility for public engagement and interest in a heritage place. Digital data sets were leveraged by the Carleton Immersive Media Studio (CIMS) for parts of a virtual tour of the Senate of Canada. For the tour, workflows involving four different programs were explored to determine an efficient and effective way to leverage the existing documentation data to create informative and visually enticing animations for public dissemination: Autodesk Revit, Enscape, Autodesk 3ds Max, and Bentley Pointools. The explored workflows involve animations of point clouds, BIMs, and a combination of the two.
Abreu, Rui Mv; Froufe, Hugo Jc; Queiroz, Maria João Rp; Ferreira, Isabel Cfr
2010-10-28
Virtual screening of small molecules using molecular docking has become an important tool in drug discovery. However, large scale virtual screening is time demanding and usually requires dedicated computer clusters. There are a number of software tools that perform virtual screening using AutoDock4, but they require access to dedicated Linux computer clusters. Also, no software is available for performing virtual screening with Vina using computer clusters. In this paper we present MOLA, an easy-to-use graphical user interface tool that automates parallel virtual screening using AutoDock4 and/or Vina in bootable non-dedicated computer clusters. MOLA automates several tasks including: ligand preparation, parallel AutoDock4/Vina job distribution and result analysis. When the virtual screening project finishes, an OpenOffice spreadsheet file opens with the ligands ranked by binding energy and distance to the active site. All result files can automatically be recorded on a USB flash drive or on the hard-disk drive using VirtualBox. MOLA works inside a customized Live CD GNU/Linux operating system, developed by us, that bypasses the original operating system installed on the computers used in the cluster. This operating system boots from a CD on the master node and then clusters other computers as slave nodes via ethernet connections. MOLA is an ideal virtual screening tool for non-experienced users, with a limited number of multi-platform heterogeneous computers available and no access to dedicated Linux computer clusters. When a virtual screening project finishes, the computers can just be restarted to their original operating system. The originality of MOLA lies in the fact that any platform-independent computer available can be added to the cluster, without ever using the computer hard-disk drive and without interfering with the installed operating system.
With a cluster of 10 processors and a potential maximum speed-up of 10×, the parallel algorithm of MOLA achieved a speed-up of 8.64× using AutoDock4 and 8.60× using Vina.
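The quoted figures correspond to a parallel efficiency of roughly 86%; the definitions are simple enough to state as code (a generic sketch, not part of MOLA):

```python
def speedup(serial_time, parallel_time):
    """Speed-up S = T_serial / T_parallel."""
    return serial_time / parallel_time

def efficiency(s, n_processors):
    """Parallel efficiency E = S / N; 1.0 means perfect linear scaling."""
    return s / n_processors

# MOLA's reported AutoDock4 result: 8.64x speed-up on 10 processors.
e = efficiency(8.64, 10)  # roughly 0.86, i.e. ~86% parallel efficiency
```

Docking is close to embarrassingly parallel (each ligand is an independent job), which is why the measured speed-up approaches the 10× ceiling; the remaining gap is job distribution and result collection overhead.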
Liu, Chi; He, Gu; Jiang, Qinglin; Han, Bo; Peng, Cheng
2013-01-01
Methionine tRNA synthetase (MetRS) is an essential enzyme involved in protein biosynthesis in all living organisms and is a potential antibacterial target. In the current study, the structure-based pharmacophore (SBP)-guided method has been suggested to generate a comprehensive pharmacophore of MetRS based on fourteen crystal structures of MetRS-inhibitor complexes. In this investigation, a hybrid virtual screening protocol, comprised of pharmacophore model-based virtual screening (PBVS) and rigid and flexible docking-based virtual screenings (DBVS), is used for retrieving new MetRS inhibitors from commercially available chemical databases. This hybrid virtual screening approach was then applied to screen the Specs (202,408 compounds) database, a structurally diverse chemical database. Fifteen hit compounds were selected from the final hits and shifted to experimental studies. These results may provide important information for further research on novel MetRS inhibitors as antibacterial agents. PMID:23839093
Ramasamy, Thilagavathi; Selvam, Chelliah
2015-10-15
Virtual screening has become an important tool in the drug discovery process. Structure-based and ligand-based approaches are generally used in virtual screening. To date, several benchmark sets for evaluating the performance of virtual screening tools are available. In this study, our aim is to compare the performance of both structure-based and ligand-based virtual screening methods. Ten anti-cancer targets and their corresponding benchmark sets from the 'Demanding Evaluation Kits for Objective In silico Screening' (DEKOIS) library were selected. X-ray crystal structures of protein-ligand complexes were selected based on their resolution. OpenEye tools such as FRED and vROCS were used and the results were carefully analyzed. At EF1%, vROCS produced better results, but at EF5% and EF10% both FRED and vROCS produced similar results. In many cases, the enrichment factor values decreased going from EF1% to EF5% and EF10%. Published by Elsevier Ltd.
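The enrichment factors compared above (EF1%, EF5%, EF10%) follow the standard definition: the hit rate among the top-ranked fraction of the screened library divided by the hit rate expected at random. A minimal sketch:

```python
def enrichment_factor(ranked_is_active, fraction):
    """EF at a given fraction of a ranked screening list.

    `ranked_is_active`: booleans, best-scored compound first.
    EF_x = (actives in top x% / compounds in top x%)
           / (total actives / total compounds)
    """
    n_total = len(ranked_is_active)
    n_top = max(1, round(n_total * fraction))
    hit_rate_top = sum(ranked_is_active[:n_top]) / n_top
    hit_rate_all = sum(ranked_is_active) / n_total
    return hit_rate_top / hit_rate_all

# 100 compounds, 5 actives, and a screen that ranks all 5 actives first:
ef1 = enrichment_factor([True] * 5 + [False] * 95, 0.05)
```

This also explains the decrease observed from EF1% to EF10%: the maximum attainable EF at fraction x is 1/x (capped by the total number of actives), so the ceiling itself drops as the fraction grows.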
Berkley, Holly; Barnes, Matthew; Carnahan, David; Hayhurst, Janet; Bockhorst, Archie; Neville, James
2017-03-01
To describe the use of template-based screening for risk of infectious disease exposure of patients presenting to primary care medical facilities during the 2014 West African Ebola virus outbreak. The Military Health System implemented an Ebola risk-screening tool in primary care settings in order to create early notifications and early responses to potentially infected persons. Three time-sensitive, evidence-based screening questions were developed and posted to Tri-Service Workflow (TSWF) AHLTA templates in conjunction with appropriate training. Data were collected in January 2015, to assess the adoption of the TSWF-based Ebola risk-screening tool. Among encounters documented using TSWF templates, 41% of all encounters showed use of the TSWF-based Ebola risk-screening questions by the fourth day. The screening rate increased over the next 3 weeks, and reached a plateau at approximately 50%. This report demonstrates the MHS capability to deploy a standardized, globally applicable decision support aid that could be seen the same day by all primary care clinics across the military health direct care system, potentially improving rapid compliance with screening directives. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
CLEW: A Cooperative Learning Environment for the Web.
ERIC Educational Resources Information Center
Ribeiro, Marcelo Blois; Noya, Ricardo Choren; Fuks, Hugo
This paper outlines CLEW (collaborative learning environment for the Web). The project combines MUD (Multi-User Dimension), workflow, VRML (Virtual Reality Modeling Language) and educational concepts like constructivism in a learning environment where students actively participate in the learning process. The MUD shapes the environment structure.…
Workflow of CAD / CAM Scoliosis Brace Adjustment in Preparation Using 3D Printing.
Weiss, Hans-Rudolf; Tournavitis, Nicos; Nan, Xiaofeng; Borysov, Maksym; Paul, Lothar
2017-01-01
High correction bracing is the most effective conservative treatment for patients with scoliosis during growth. Even today, braces for the treatment of scoliosis are made by casting patients, although computer aided design (CAD) and computer aided manufacturing (CAM) are available, with all the possibilities to standardize pattern-specific brace treatment and improve wearing comfort. CAD / CAM brace production mainly relies on carving a polyurethane foam model, which is the basis for vacuum-forming a polyethylene (PE) or polypropylene (PP) brace. The purpose of this short communication is to describe the workflow currently used and to outline future requirements with respect to 3D printing technology. This paper describes the steps of virtual brace adjustment as available today and outlines the great potential of 3D printing technology. For 3D printing of scoliosis braces it is necessary to establish easy-to-use software plug-ins in order to allow adding 3D printing technology to the current workflow of virtual CAD / CAM brace adjustment. Textures and structures can be added to the brace models at certain well-defined locations, offering the potential of more wearing comfort without losing in-brace correction. Advances have to be made in the field of CAD / CAM software tools with respect to the design and generation of individually structured brace models based on currently well-established and standardized scoliosis brace libraries.
Virtual screening of compound libraries.
Cerqueira, Nuno M F S A; Sousa, Sérgio F; Fernandes, Pedro A; Ramos, Maria João
2009-01-01
During the last decade, Virtual Screening (VS) has definitively established itself as an important part of the drug discovery and development process. VS involves the selection of likely drug candidates from large libraries of chemical structures by using computational methodologies, but the generic definition of VS encompasses many different methodologies. This chapter provides an introduction to the field by reviewing a variety of important aspects, including the different types of virtual screening methods, and the several steps required for a successful virtual screening campaign within a state-of-the-art approach, from target selection to postfilter application. This analysis is further complemented with a small collection of important VS success stories.
Human Systems Integration Design Environment (HSIDE)
2012-04-09
Subject terms: HSI, Manning Estimation and Validation, Risk Assessment, IPOE, PLM, BPMN, Workflow. The system presents either the business process model in Business Process Modeling Notation (BPMN) or the actual workflow template associated with the specific functional area, as filtered by the user settings in the high-level interface; an initial screen allows the user to select either the BPMN model or the workflow template.
Protein tyrosine phosphatases: Ligand interaction analysis and optimisation of virtual screening.
Ghattas, Mohammad A; Atatreh, Noor; Bichenkova, Elena V; Bryce, Richard A
2014-07-01
Docking-based virtual screening is an established component of structure-based drug discovery. Nevertheless, scoring and ranking of computationally docked ligand libraries still suffer from many false positives. Identifying optimal docking parameters for a target protein prior to virtual screening can improve experimental hit rates. Here, we examine protocols for virtual screening against an important but challenging class of drug target, the protein tyrosine phosphatases. In this study, common interaction features were identified from analysis of protein-ligand binding geometries of more than 50 complexed phosphatase crystal structures. It was found that two interactions were consistently formed across all phosphatase inhibitors: (1) a polar contact with the conserved arginine residue, and (2) at least one interaction with the P-loop backbone amide. In order to investigate the significance of these features for phosphatase-ligand binding, a series of seeded virtual screening experiments was conducted on three phosphatase enzymes, PTP1B, Cdc25b and IF2. It was observed that when the conserved arginine and P-loop amide interactions were used as pharmacophoric constraints during docking, enrichment of the virtual screen significantly increased in the three studied phosphatases, by up to a factor of two in some cases. Additionally, the use of such pharmacophoric constraints considerably improved the ability of docking to predict the inhibitor's bound pose, decreasing RMSD to the crystallographic geometry by 43% on average. Constrained docking improved enrichment of screens against both open and closed conformations of PTP1B. Incorporation of an ordered water molecule in PTP1B screening was also found to generally improve enrichment. The knowledge-based computational strategies explored here can potentially inform structure-based design of new phosphatase inhibitors using docking-based virtual screening. Copyright © 2014 Elsevier Inc. All rights reserved.
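The 43% improvement in pose prediction above is measured as RMSD to the crystallographic geometry: the root-mean-square deviation over matched atom coordinates. A generic sketch (not the authors' code), assuming the two poses are already matched atom-for-atom in the same reference frame:

```python
import math

def rmsd(coords_a, coords_b):
    """RMSD between two equal-length lists of (x, y, z) atom coordinates,
    assumed already matched atom-by-atom and in the same frame."""
    if len(coords_a) != len(coords_b):
        raise ValueError("coordinate lists must match atom-for-atom")
    sq_sum = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                 for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq_sum / len(coords_a))
```

An identical pose gives 0.0 Å; docking studies commonly treat a pose within about 2.0 Å of the crystal geometry as correctly predicted, which is why lowering the average RMSD directly improves pose-prediction success rates.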
Metaworkflows and Workflow Interoperability for Heliophysics
NASA Astrophysics Data System (ADS)
Pierantoni, Gabriele; Carley, Eoin P.
2014-06-01
Heliophysics is a relatively new branch of physics that investigates the relationship between the Sun and the other bodies of the solar system. To investigate such relationships, heliophysicists can rely on various tools developed by the community. Some of these tools are on-line catalogues that list events (such as Coronal Mass Ejections, CMEs) and their characteristics as they were observed on the surface of the Sun or on the other bodies of the Solar System. Other tools offer on-line data analysis and access to images and data catalogues. During their research, heliophysicists often perform investigations that need to coordinate several of these services and to repeat these complex operations until the phenomena under investigation are fully analyzed. Heliophysicists combine the results of these services; this service orchestration is best suited for workflows. This approach has been investigated in the HELIO project. The HELIO project developed an infrastructure for a Virtual Observatory for Heliophysics and implemented service orchestration using TAVERNA workflows. HELIO developed a set of workflows that proved to be useful but lacked flexibility and re-usability. The TAVERNA workflows also needed to be executed directly in TAVERNA workbench, and this forced all users to learn how to use the workbench. Within the SCI-BUS and ER-FLOW projects, we have started an effort to re-think and re-design the heliophysics workflows with the aim of fostering re-usability and ease of use. We base our approach on two key concepts, that of meta-workflows and that of workflow interoperability. We have divided the produced workflows in three different layers. The first layer is Basic Workflows, developed both in the TAVERNA and WS-PGRADE languages. They are building blocks that users compose to address their scientific challenges. They implement well-defined Use Cases that usually involve only one service. The second layer is Science Workflows usually developed in TAVERNA. 
They implement Science Cases (the definition of a scientific challenge) by composing different Basic Workflows. The third and last layer, Iterative Science Workflows, is developed in WS-PGRADE. It executes sub-workflows (either Basic or Science Workflows) as parameter sweep jobs to investigate Science Cases on multiple large data sets. So far, this approach has proven fruitful for three Science Cases, of which one has been completed and two are still being tested.
Large datasets, logistics, sharing and workflow in screening.
Cook, Tessa S
2018-03-29
Cancer screening initiatives exist around the world for different malignancies, most frequently breast, colorectal, and cervical cancer. A number of cancer registries exist to collect relevant data, but while these data may include imaging findings, they rarely, if ever, include actual images. Additionally, the data submitted to the registry are usually correlated with eventual cancer diagnoses and patient outcomes, rather than used with the individual's future screenings. Developing screening programs that allow images to be submitted to a central location, in addition to patient metadata, and used for comparison to future screening exams would be very valuable in increasing access to care and ensuring that individuals are effectively screened at appropriate intervals. It would also change the way imaging results and additional patient data are correlated to eventual outcomes. However, it introduces logistical challenges surrounding the secure storage and transmission of data to subsequent screening sites. In addition, in the absence of standardized protocols for screening, comparing current and prior imaging, especially from different equipment, can be challenging. Implementing a large-scale screening program with an image-enriched screening registry (effectively, an image-enriched electronic screening record) also requires that incentives exist for screening sites, physicians, and patients to participate; to maximize coverage, participation may have to be supported by government agencies. Workflows will also have to be adjusted to support registry participation for all screening patients in an effort to create a large, robust data set that can be used for future screening efforts as well as research initiatives.
Flexible workflow sharing and execution services for e-scientists
NASA Astrophysics Data System (ADS)
Kacsuk, Péter; Terstyanszky, Gábor; Kiss, Tamas; Sipos, Gergely
2013-04-01
The sequence of computational and data manipulation steps required to perform a specific scientific analysis is called a workflow. Workflows that orchestrate data- and/or compute-intensive applications on Distributed Computing Infrastructures (DCIs) have recently become standard tools in e-science. At the same time, the broad and fragmented landscape of workflows and DCIs slows down the uptake of workflow-based work. The development, sharing, integration and execution of workflows is still a challenge for many scientists. The FP7 "Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs" (SHIWA) project significantly improved the situation with a simulation platform that connects different workflow systems, different workflow languages, different DCIs and workflows into a single, interoperable unit. The SHIWA Simulation Platform is a service package, already used by various scientific communities, and used as a tool by the recently started ER-flow FP7 project to expand the use of workflows among European scientists. The presentation will introduce the SHIWA Simulation Platform and the services that ER-flow provides, based on the platform, to space and earth science researchers. The SHIWA Simulation Platform includes: 1. SHIWA Repository: A database where workflows and metadata about workflows can be stored. The database is a central repository for discovering and sharing workflows within and among communities. 2. SHIWA Portal: A web portal that is integrated with the SHIWA Repository and includes a workflow executor engine that can orchestrate various types of workflows on various grid and cloud platforms. 3. SHIWA Desktop: A desktop environment that provides access capabilities similar to those of the SHIWA Portal; however, it runs on the users' desktops/laptops instead of a portal server. 4.
Workflow engines: the ASKALON, Galaxy, GWES, Kepler, LONI Pipeline, MOTEUR, Pegasus, P-GRADE, ProActive, Triana, Taverna and WS-PGRADE workflow engines are already integrated with the execution engine of the SHIWA Portal. Other engines can be added when required. Through the SHIWA Portal one can define and run simulations on the SHIWA Virtual Organisation, an e-infrastructure that gathers computing and data resources from various DCIs, including the European Grid Infrastructure. Via third-party workflow engines, the Portal provides support for the most widely used academic workflow engines, and it can be extended with other engines on demand. Such extensions translate between workflow languages and facilitate the nesting of workflows into larger workflows, even when those are written in different languages and require different interpreters for execution. Through the workflow repository and the portal, individual scientists and scientific collaborations can share and offer workflows for reuse and execution. Given the integrated nature of the SHIWA Simulation Platform, the shared workflows can be executed online, without installing any special client environment or downloading workflows. The FP7 "Building a European Research Community through Interoperable Workflows and Data" (ER-flow) project disseminates the achievements of the SHIWA project and uses these achievements to build workflow user communities across Europe. ER-flow provides application support to research communities within and beyond the project consortium to develop, share and run workflows with the SHIWA Simulation Platform.
Virtual screening methods as tools for drug lead discovery from large chemical libraries.
Ma, X H; Zhu, F; Liu, X; Shi, Z; Zhang, J X; Yang, S Y; Wei, Y Q; Chen, Y Z
2012-01-01
Virtual screening methods have been developed and explored as useful tools for searching for drug lead compounds in chemical libraries, including the large libraries that have become publicly available. In this review, we discuss new developments in exploring virtual screening methods for enhanced performance in searching large chemical libraries, their applications in screening libraries of ~1 million or more compounds in the last five years, the difficulties in their application, and strategies for further improving these methods.
The importance of employing computational resources for the automation of drug discovery.
Rosales-Hernández, Martha Cecilia; Correa-Basurto, José
2015-03-01
The application of computational tools to drug discovery helps researchers to design and evaluate new drugs swiftly and with reduced economic resources. To discover new potential drugs, computational chemistry incorporates automation for obtaining biological data such as absorption, distribution, metabolism, excretion and toxicity (ADMET) properties, as well as drug mechanisms of action. This editorial looks at examples of these computational tools, including docking, molecular dynamics simulation, virtual screening, quantum chemistry, quantitative structure-activity relationship analysis, principal component analysis and drug screening workflow systems. The authors then provide their perspectives on the importance of these techniques for drug discovery. Computational tools help researchers to design and discover new drugs for the treatment of several human diseases without side effects, thus allowing for the evaluation of millions of compounds at a reduced cost in both time and economic resources. The problem is that operating each program is difficult; one is required to use several programs and understand each of the properties being tested. In the future, it is possible that a single computer and software program will be capable of evaluating the complete properties (mechanisms of action and ADMET properties) of ligands. It is also possible that, after submission of a target, this software will be capable of suggesting potential compounds along with ways to synthesize them, and of presenting biological models for testing.
Computer-aided dental prostheses construction using reverse engineering.
Solaberrieta, E; Minguez, R; Barrenetxea, L; Sierra, E; Etxaniz, O
2014-01-01
The implementation of computer-aided design/computer-aided manufacturing (CAD/CAM) systems with virtual articulators, which take into account the kinematics, constitutes a breakthrough in the construction of customised dental prostheses. This paper presents a multidisciplinary protocol involving CAM techniques to produce dental prostheses. This protocol includes a step-by-step procedure using innovative reverse engineering technologies to transform completely virtual design processes into customised prostheses. A special emphasis is placed on a novel method that permits a virtual location of the models. The complete workflow includes the optical scanning of the patient, the use of reverse engineering software and, if necessary, the use of rapid prototyping to produce CAD temporary prostheses.
The Cancer Analysis Virtual Machine (CAVM) project will leverage cloud technology, the UCSC Cancer Genomics Browser, and the Galaxy analysis workflow system to provide investigators with a flexible, scalable platform for hosting, visualizing and analyzing their own genomic data.
Development of a High-Throughput Ion-Exchange Resin Characterization Workflow.
Liu, Chun; Dermody, Daniel; Harris, Keith; Boomgaard, Thomas; Sweeney, Jeff; Gisch, Daryl; Goltz, Bob
2017-06-12
A novel high-throughput (HTR) ion-exchange (IEX) resin workflow has been developed for characterizing the ion-exchange equilibria of commercial and experimental IEX resins across a range of applications in which the water environment differs from site to site. Because of its much higher throughput, design-of-experiment (DOE) methodology can easily be applied to study the effects of multiple factors on resin performance. Two case studies are presented to illustrate the efficacy of the combined HTR workflow and DOE method. In case study one, a series of anion-exchange resins was screened for selective removal of NO3- and NO2- in water environments consisting of multiple other anions, varied pH, and ionic strength. A response surface model (RSM) was developed to statistically correlate resin performance with water composition and predict the best resin candidate. In case study two, the same HTR workflow and DOE method were applied to screen different cation-exchange resins for the selective removal of Mg2+, Ca2+, and Ba2+ from high total dissolved solids (TDS) water. A master DOE model including all of the cation-exchange resins was created to predict divalent cation removal by different IEX resins under specific conditions, from which the best resin candidates can be identified. The successful adoption of the HTR workflow and DOE method for studying the ion exchange of IEX resins can significantly reduce the resources and time needed to address industry and application needs.
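The response-surface step described above can be sketched as an ordinary least-squares fit of a two-factor quadratic model. This is a generic illustration under assumed factor settings, not the authors' actual DOE software; the function names and data are ours:

```python
import numpy as np

def _design_matrix(X):
    # Full quadratic model in two factors:
    # y ~ b0 + b1*x1 + b2*x2 + b3*x1*x2 + b4*x1^2 + b5*x2^2
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

def fit_rsm(X, y):
    """Fit the quadratic response surface by least squares.
    X: (n, 2) array of factor settings (e.g. pH, ionic strength);
    y: (n,) measured responses (e.g. fractional ion removal)."""
    coef, *_ = np.linalg.lstsq(_design_matrix(X), y, rcond=None)
    return coef

def predict_rsm(coef, X):
    """Predict responses at new factor settings from fitted coefficients."""
    return _design_matrix(X) @ coef
```

A three-level grid in each factor (nine runs) is the minimum sensible design here, since the model has six coefficients.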
A virtual screening method for inhibitory peptides of Angiotensin I-converting enzyme.
Wu, Hongxi; Liu, Yalan; Guo, Mingrong; Xie, Jingli; Jiang, XiaMin
2014-09-01
Natural small peptides from foods have been proven to be efficient inhibitors of Angiotensin I-converting enzyme (ACE) for the regulation of blood pressure. The traditional ACE inhibitory peptide screening method is both time-consuming and costly; in contrast, computational virtual screening can overcome these limitations. We established a virtual screening method to obtain ACE inhibitory peptides with the help of the LibDock module of the Discovery Studio 3.5 software. A significant relationship between LibDock score and experimental IC50 was found: LibDock score = 10.063 log(1/IC50) + 68.08 (R² = 0.62). The credibility of this relationship was confirmed by testing the agreement between the estimated log(1/IC50) and the measured log(1/IC50) (IC50 is the 50% inhibitory concentration toward ACE, in μmol/L) of 5 synthetic ACE inhibitory peptides, which were virtually hydrolyzed and screened from a seafood species, Phascolosoma esculenta. Accordingly, the LibDock method is a valid IC50 estimation tool and virtual screening method for small ACE inhibitory peptides. © 2014 Institute of Food Technologists®
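The reported regression can be inverted to turn a docking score into an IC50 estimate. A minimal sketch of that arithmetic (the function name is ours, and the estimate is only as reliable as the reported fit, R² = 0.62):

```python
def estimate_ic50(libdock_score):
    """Estimate IC50 (in umol/L) from a LibDock score by inverting the
    published regression:
        LibDock score = 10.063 * log10(1/IC50) + 68.08
    """
    log_inv_ic50 = (libdock_score - 68.08) / 10.063
    return 10.0 ** (-log_inv_ic50)
```

For example, a score of 68.08 corresponds to an estimated IC50 of 1 μmol/L, and each additional ~10 score points lowers the estimated IC50 by a factor of 10.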
The Diabetic Retinopathy Screening Workflow
Bolster, Nigel M.; Giardini, Mario E.; Bastawrous, Andrew
2015-01-01
Complications of diabetes mellitus, namely diabetic retinopathy and diabetic maculopathy, are the leading cause of blindness in working-aged people. Sufferers can avoid blindness if identified early via retinal imaging. Systematic screening of the diabetic population has been shown to greatly reduce the prevalence and incidence of blindness within the population. Many national screening programs have digital fundus photography as their basis. In the past 5 years, several techniques and adapters have been developed that allow digital fundus photography to be performed using smartphones. We review recent progress in smartphone-based fundus imaging and discuss its potential for integration into national systematic diabetic retinopathy screening programs. Some systems have produced promising initial results with respect to their agreement with reference standards. However, further multisite trialling of such systems’ use within implementable screening workflows is required if an evidence base strong enough to effect policy change is to be established. If this were to occur, national diabetic retinopathy screening would, for the first time, become possible in low- and middle-income settings where cost and the availability of trained eye care personnel are currently key barriers to implementation. As diabetes prevalence and incidence are increasing sharply in these settings, the impact on global blindness could be profound. PMID:26596630
Virtual biomedical universities and e-learning.
Beux, P Le; Fieschi, M
2007-01-01
In this special issue on virtual biomedical universities and e-learning, we survey the principal existing ICT teaching applications used in medical schools around the world. We identify five types of research and experiments in the field of medical e-learning and virtual medical universities. The topics of this special issue range from educational computer programs that create and simulate virtual patients with a wide variety of medical conditions, in different clinical settings and over different time frames, to the use of distance learning in developed and developing countries for training clinicians in medical informatics. We also present the need for good indexing and search tools for training resources, together with workflows to manage the multiple-source content of virtual campuses or universities and virtual digital video resources. Special attention is given to training new generations of clinicians in ICT tools and methods to be used in clinical settings as well as in medical schools.
Harris, Bryan T; Montero, Daniel; Grant, Gerald T; Morton, Dean; Llop, Daniel R; Lin, Wei-Shao
2017-02-01
This clinical report proposes a digital workflow using 2-dimensional (2D) digital photographs, a 3D extraoral facial scan, and cone beam computed tomography (CBCT) volumetric data to create a 3D virtual patient with craniofacial hard tissue, the remaining dentition (including surrounding intraoral soft tissue), and a realistic appearance of the facial soft tissue at an exaggerated smile under static conditions. The 3D virtual patient was used to assist the virtual diagnostic tooth arrangement process, providing the patient with a pleasing preoperative virtual smile design that harmonized with facial features. The 3D virtual patient was also used to gain the patient's pretreatment approval (as a communication tool), to design a prosthetically driven surgical plan for computer-guided implant surgery, and to fabricate the computer-aided design and computer-aided manufacturing (CAD-CAM) interim prostheses. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Improved compliance by BPM-driven workflow automation.
Holzmüller-Laue, Silke; Göde, Bernd; Fleischer, Heidi; Thurow, Kerstin
2014-12-01
Using methods and technologies of business process management (BPM) for laboratory automation has important benefits (i.e., the agility of high-level automation processes, rapid interdisciplinary prototyping and implementation of laboratory tasks and procedures, and efficient real-time process documentation). A principal goal of model-driven development is improved transparency of processes and the alignment of process diagrams and technical code. First experiences of using the business process model and notation (BPMN) show that easy-to-read graphical process models can achieve standardization of laboratory workflows. Model-based development allows processes to be changed quickly and adapted easily to changing requirements. The process models are able to host work procedures and their scheduling in compliance with predefined guidelines and policies. Finally, the process-controlled documentation of complex workflow results addresses modern laboratory needs of quality assurance. BPMN 2.0, as an automation language able to control every kind of activity or subprocess, is directed at complete workflows in end-to-end relationships. BPMN is applicable as a system-independent and cross-disciplinary graphical language to document all methods in laboratories (i.e., screening procedures or analytical processes). This means that, with the BPMN standard, a communication method for sharing process knowledge among laboratories is also available. © 2014 Society for Laboratory Automation and Screening.
New Trends of Emerging Technologies in Digital Pathology.
Bueno, Gloria; Fernández-Carrobles, M Milagro; Deniz, Oscar; García-Rojo, Marcial
2016-01-01
The future paradigm of pathology will be digital. Instead of conventional microscopy, a pathologist will perform a diagnosis through interacting with images on computer screens and performing quantitative analysis. The fourth generation of virtual slide telepathology systems, so-called virtual microscopy and whole-slide imaging (WSI), has allowed for the storage and fast dissemination of image data in pathology and other biomedical areas. These novel digital imaging modalities encompass high-resolution scanning of tissue slides and derived technologies, including automatic digitization and computational processing of whole microscopic slides. Moreover, automated image analysis with WSI can extract specific diagnostic features of diseases and quantify individual components of these features to support diagnoses and provide informative clinical measures of disease. Therefore, the challenge is to apply information technology and image analysis methods to exploit the new and emerging digital pathology technologies effectively in order to process and model all the data and information contained in WSI. The final objective is to support the complex workflow from specimen receipt to anatomic pathology report transmission, that is, to improve diagnosis both in terms of pathologists' efficiency and with new information. This article reviews the main concerns about and novel methods of digital pathology discussed at the latest workshop in the field carried out within the European project AIDPATH (Academia and Industry Collaboration for Digital Pathology). © 2016 S. Karger AG, Basel.
2013-01-01
Background Information is lacking about the capacity of those working in community practice settings to utilize health information technology for colorectal cancer screening. Objective To address this gap we asked those working in community practice settings to share their perspectives about how the implementation of a Web-based patient-led decision aid might affect patient-clinician conversations about colorectal cancer screening and the day-to-day clinical workflow. Methods Five focus groups in five community practice settings were conducted with 8 physicians, 1 physician assistant, and 18 clinic staff. Focus groups were organized using a semistructured discussion guide designed to identify factors that mediate and impede the use of a Web-based decision aid intended to clarify patient preferences for colorectal cancer screening and to trigger shared decision making during the clinical encounter. Results All physicians, the physician assistant, and 8 of the 18 clinic staff were active participants in the focus groups. Clinician and staff participants from each setting reported a belief that the Web-based patient-led decision aid could be an informative and educational tool; in all but one setting participants reported a readiness to recommend the tool to patients. The exception related to clinicians from one clinic who described a preference for patients having fewer screening choices, noting that a colonoscopy was the preferred screening modality for patients in their clinic. Perceived barriers to utilizing the Web-based decision aid included patients’ lack of Internet access or low computer literacy, and potential impediments to the clinics’ daily workflow. Expanding patients’ use of an online decision aid that is both easy to access and understand and that is utilized by patients outside of the office visit was described as a potentially efficient means for soliciting patients’ screening preferences. 
Participants described that a system to link the online decision aid to a computerized reminder system could promote a better understanding of patients’ screening preferences, though some expressed concern that such a system could be difficult to keep up and running. Conclusions Community practice clinicians and staff perceived the Web-based decision aid technology as promising but raised questions as to how the technology and resultant information would be integrated into their daily practice workflow. Additional research investigating how to best implement online decision aids should be conducted prior to the widespread adoption of such technology so as to maximize the benefits of the technology while minimizing workflow disruptions. PMID:24351420
Applying operations research to optimize a novel population management system for cancer screening.
Zai, Adrian H; Kim, Seokjin; Kamis, Arnold; Hung, Ken; Ronquillo, Jeremiah G; Chueh, Henry C; Atlas, Steven J
2014-02-01
To optimize a new visit-independent, population-based cancer screening system (TopCare) by using operations research techniques to simulate changes in patient outreach staffing levels (delegates, navigators), modifications to user workflow within the information technology (IT) system, and changes in cancer screening recommendations. TopCare was modeled as a multiserver, multiphase queueing system. Simulation experiments implemented the queueing network model following a next-event time-advance mechanism, in which systematic adjustments were made to staffing levels, IT workflow settings, and cancer screening frequency in order to assess their impact on overdue screenings per patient. TopCare reduced the average number of overdue screenings per patient from 1.17 at inception to 0.86 during simulation to 0.23 at steady state. Increases in the workforce improved the effectiveness of TopCare. In particular, increasing the delegate or navigator staff level by one person improved screening completion rates by 1.3% or 12.2%, respectively. In contrast, changes in the amount of time a patient entry stays on delegate and navigator lists had little impact on overdue screenings. Finally, lengthening the screening interval increased efficiency within TopCare by decreasing overdue screenings at the patient level, resulting in a smaller number of overdue patients needing delegates for screening and a higher fraction of screenings completed by delegates. Simulating the impact of changes in staffing, system parameters, and clinical inputs on the effectiveness and efficiency of care can inform the allocation of limited resources in population management.
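The next-event time-advance mechanism mentioned above can be illustrated with a generic single-phase M/M/c queue. This is a toy sketch under assumed arrival and service rates, not the actual multiserver, multiphase TopCare model; `simulate_mmc` and its parameters are ours:

```python
import heapq
import random

def simulate_mmc(arrival_rate, service_rate, servers, n_customers, seed=0):
    """Minimal next-event-time-advance simulation of an M/M/c queue.
    Returns the mean time customers wait before service begins."""
    rng = random.Random(seed)
    free = servers
    queue = []    # arrival times of customers waiting for a server
    events = []   # (time, kind) heap; kind 0 = arrival, 1 = departure
    heapq.heappush(events, (rng.expovariate(arrival_rate), 0))
    arrived = 0
    waits = []
    while events and len(waits) < n_customers:
        t, kind = heapq.heappop(events)
        if kind == 0:                      # arrival event
            arrived += 1
            if arrived < n_customers:      # cap total arrivals
                heapq.heappush(events, (t + rng.expovariate(arrival_rate), 0))
            if free > 0:                   # server available: start service now
                free -= 1
                waits.append(0.0)
                heapq.heappush(events, (t + rng.expovariate(service_rate), 1))
            else:                          # all servers busy: join the queue
                queue.append(t)
        else:                              # departure event
            if queue:                      # next waiting customer starts service
                waits.append(t - queue.pop(0))
                heapq.heappush(events, (t + rng.expovariate(service_rate), 1))
            else:
                free += 1
    return sum(waits) / len(waits)
```

In a study like this one, staffing changes correspond to varying `servers`, and screening-interval changes to varying `arrival_rate`, with overdue screenings standing in for waiting customers.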
Adapting Document Similarity Measures for Ligand-Based Virtual Screening.
Himmat, Mubarak; Salim, Naomie; Al-Dabbagh, Mohammed Mumtaz; Saeed, Faisal; Ahmed, Ali
2016-04-13
Quantifying the similarity of molecules is considered one of the major tasks in virtual screening. Many similarity measures have been proposed for this purpose, some derived from document and text retrieval, as such methods often give good results in document retrieval and can achieve good results in virtual screening. In this work, we propose a similarity measure for ligand-based virtual screening derived from a text processing similarity measure and adapted to be suitable for virtual screening; we call this proposed measure the Adapted Similarity Measure of Text Processing (ASMTP). To evaluate and test the proposed ASMTP, we conducted several experiments on two different benchmark datasets: the Maximum Unbiased Validation (MUV) dataset and the MDL Drug Data Report (MDDR). The experiments were conducted by randomly choosing 10 reference structures from each class as queries and evaluating recall at cut-offs of 1% and 5%. The overall results are compared with several similarity methods, including the Tanimoto coefficient, which is considered the conventional, standard similarity coefficient for fingerprint-based similarity calculations. The results show that the performance of ligand-based virtual screening with ASMTP is better, outperforming the Tanimoto coefficient and other methods.
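For reference, the Tanimoto baseline against which such measures are compared is simple to state. A minimal sketch over binary fingerprints represented as sets of on-bit indices (the representation and function name are ours):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two binary fingerprints given as
    collections of on-bit indices: |A & B| / |A | B|.
    Defined here as 0.0 when both fingerprints are empty."""
    a, b = set(fp_a), set(fp_b)
    union = len(a | b)
    return len(a & b) / union if union else 0.0
```

Two identical fingerprints score 1.0 and disjoint ones score 0.0, so recall-at-cut-off experiments simply rank a library by this value against each query.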
Evaluating the Predictivity of Virtual Screening for Abl Kinase Inhibitors to Hinder Drug Resistance
Gani, Osman A B S M; Narayanan, Dilip; Engh, Richard A
2013-01-01
Virtual screening methods are now widely used in early stages of drug discovery, aiming to rank potential inhibitors. However, any practical ligand set (of active or inactive compounds) chosen for deriving new virtual screening approaches cannot fully represent all relevant chemical space for potential new compounds. In this study, we have taken a retrospective approach to evaluate virtual screening methods for the leukemia target kinase ABL1 and its drug-resistant mutant ABL1-T315I. ‘Dual active’ inhibitors against both targets were grouped together with inactive ligands chosen from different decoy sets and tested with virtual screening approaches with and without explicit use of target structures (docking). We show how various scoring functions and choice of inactive ligand sets influence overall and early enrichment of the libraries. Although ligand-based methods, for example principal component analyses of chemical properties, can distinguish some decoy sets from active compounds, the addition of target structural information via docking improves enrichment, and explicit consideration of multiple target conformations (i.e. types I and II) achieves best enrichment of active versus inactive ligands, even without assuming knowledge of the binding mode. We believe that this study can be extended to other therapeutically important kinases in prospective virtual screening studies. PMID:23746052
Ma, Xiao H; Jia, Jia; Zhu, Feng; Xue, Ying; Li, Ze R; Chen, Yu Z
2009-05-01
Machine learning methods have been explored as ligand-based virtual screening tools for facilitating drug lead discovery. These methods predict compounds of specific pharmacodynamic, pharmacokinetic or toxicological properties based on their structure-derived structural and physicochemical properties. Increasing attention has been directed at these methods because of their capability to predict compounds of diverse structures and complex structure-activity relationships without requiring knowledge of the target 3D structure. This article reviews current progress in using machine learning methods for virtual screening of pharmacodynamically active compounds from large compound libraries, and analyzes and compares the reported performance of machine learning tools with that of structure-based and other ligand-based (such as pharmacophore and clustering) virtual screening methods. The feasibility of improving the performance of machine learning methods in screening large libraries is discussed.
A Workflow for Identifying Metabolically Active Chemicals to Complement in vitro Toxicity Screening
The new paradigm of toxicity testing approaches involves rapid screening of thousands of chemicals across hundreds of biological targets through use of in vitro assays. Such assays may lead to false negatives when the complex metabolic processes that render a chemical bioactive i...
Chen, Haining; Li, Sijia; Hu, Yajiao; Chen, Guo; Jiang, Qinglin; Tong, Rongsheng; Zang, Zhihe; Cai, Lulu
2016-01-01
Rho-associated, coiled-coil containing protein kinase 1 (ROCK1) is an important regulator of focal adhesion, actomyosin contraction and cell motility. In this manuscript, a combination of multi-complex-based pharmacophore (MCBP) modeling, molecular dynamics simulation and a hybrid virtual screening protocol, comprising multi-pharmacophore-based virtual screening (PBVS) and ensemble docking-based virtual screening (DBVS), was used to retrieve novel ROCK1 inhibitors from the natural products database embedded in the ZINC database. Ten hit compounds were selected, and five were tested experimentally. These results may provide valuable information for the further discovery of novel ROCK1 inhibitors.
Cerec omnicam and the virtual articulator--a case report.
Fritzsche, G
2013-01-01
This case report demonstrates how two opposing teeth were restored with full crowns using Cerec software version 4.2 (pre-release version). In addition, an anterior tooth was provided with a veneer. The situation was scanned with the Cerec Omnicam. The new virtual articulator was used for the design to obtain correct dynamic contacts. The Cerec Omnicam can scan the entire situation prior to preparation without the help of an assistant, as no surface pretreatment is necessary. The locations of the occlusal contacts can be marked with articulating paper and are indicated on the virtual models. Selective deletion of individual areas allows the prepared teeth to be rescanned, considerably speeding up the workflow. A video demonstration is available of the acquisition and design procedure.
NASA Astrophysics Data System (ADS)
Kamstra, Rhiannon L.; Dadgar, Saedeh; Wigg, John; Chowdhury, Morshed A.; Phenix, Christopher P.; Floriano, Wely B.
2014-11-01
Our group has recently demonstrated that virtual screening is a useful technique for the identification of target-specific molecular probes. In this paper, we discuss some of our proof-of-concept results involving two biologically relevant target proteins, and report the development of a computational script to generate large databases of fluorescence-labelled compounds for computer-assisted molecular design. The virtual screening of a small library of 1,153 fluorescently-labelled compounds against two targets, and the experimental testing of selected hits reveal that this approach is efficient at identifying molecular probes, and that the screening of a labelled library is preferred over the screening of base compounds followed by conjugation of confirmed hits. The automated script for library generation explores the known reactivity of commercially available dyes, such as NHS-esters, to create large virtual databases of fluorescence-tagged small molecules that can be easily synthesized in a laboratory. A database of 14,862 compounds, each tagged with the ATTO680 fluorophore was generated with the automated script reported here. This library is available for downloading and it is suitable for virtual ligand screening aiming at the identification of target-specific fluorescent molecular probes.
Spherical harmonics coefficients for ligand-based virtual screening of cyclooxygenase inhibitors.
Wang, Quan; Birod, Kerstin; Angioni, Carlo; Grösch, Sabine; Geppert, Tim; Schneider, Petra; Rupp, Matthias; Schneider, Gisbert
2011-01-01
Molecular descriptors are essential for many applications in computational chemistry, such as ligand-based similarity searching. Spherical harmonics have previously been suggested as comprehensive descriptors of molecular structure and properties. We investigate a spherical harmonics descriptor for shape-based virtual screening. We introduce and validate a partially rotation-invariant three-dimensional molecular shape descriptor based on the norm of spherical harmonics expansion coefficients. Using this molecular representation, we parameterize molecular surfaces, i.e., isosurfaces of spatial molecular property distributions. We validate the shape descriptor in a comprehensive retrospective virtual screening experiment. In a prospective study, we virtually screen a large compound library for cyclooxygenase inhibitors, using a self-organizing map as a pre-filter and the shape descriptor for candidate prioritization. 12 compounds were tested in vitro for direct enzyme inhibition and in a whole blood assay. Active compounds containing a triazole scaffold were identified as direct cyclooxygenase-1 inhibitors. This outcome corroborates the usefulness of spherical harmonics for representation of molecular shape in virtual screening of large compound collections. The combination of pharmacophore and shape-based filtering of screening candidates proved to be a straightforward approach to finding novel bioactive chemotypes with minimal experimental effort.
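The rotation-invariance trick described in this abstract, collapsing the spherical harmonics expansion coefficients into one norm per band, can be sketched in a few lines. This is a generic illustration of the idea, not the authors' implementation; the coefficient layout (a dict keyed by (l, m)) is an assumption made for clarity.

```python
import math

def band_norms(coeffs, lmax):
    """Collapse spherical harmonics coefficients c_{l,m} into one norm
    per band l. The values d_l = sqrt(sum_m |c_{l,m}|^2) are invariant
    under rotations about the expansion origin, so two shapes can be
    compared without aligning them first.

    coeffs: dict mapping (l, m) -> complex expansion coefficient.
    """
    desc = []
    for l in range(lmax + 1):
        s = sum(abs(coeffs.get((l, m), 0j)) ** 2 for m in range(-l, l + 1))
        desc.append(math.sqrt(s))
    return desc

def shape_distance(desc_a, desc_b):
    """Euclidean distance between two band-norm descriptors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(desc_a, desc_b)))

# A sphere-like shape has all its weight in the l = 0 band; adding
# weight in higher bands models departures from sphericity.
sphere = band_norms({(0, 0): 1.0 + 0j}, lmax=2)
elongated = band_norms({(0, 0): 1.0 + 0j, (2, 0): 0.5 + 0j}, lmax=2)
print(shape_distance(sphere, elongated))  # 0.5
```

In a screening setting, candidates would be ranked by ascending distance to the query's descriptor.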
Schneider, Petra; Hoy, Benjamin; Wessler, Silja; Schneider, Gisbert
2011-01-01
Background The human pathogen Helicobacter pylori (H. pylori) is a main cause for gastric inflammation and cancer. Increasing bacterial resistance against antibiotics demands for innovative strategies for therapeutic intervention. Methodology/Principal Findings We present a method for structure-based virtual screening that is based on the comprehensive prediction of ligand binding sites on a protein model and automated construction of a ligand-receptor interaction map. Pharmacophoric features of the map are clustered and transformed in a correlation vector (‘virtual ligand’) for rapid virtual screening of compound databases. This computer-based technique was validated for 18 different targets of pharmaceutical interest in a retrospective screening experiment. Prospective screening for inhibitory agents was performed for the protease HtrA from the human pathogen H. pylori using a homology model of the target protein. Among 22 tested compounds six block E-cadherin cleavage by HtrA in vitro and result in reduced scattering and wound healing of gastric epithelial cells, thereby preventing bacterial infiltration of the epithelium. Conclusions/Significance This study demonstrates that receptor-based virtual screening with a permissive (‘fuzzy’) pharmacophore model can help identify small bioactive agents for combating bacterial infection. PMID:21483848
Scaffold-Focused Virtual Screening: Prospective Application to the Discovery of TTK Inhibitors
2013-01-01
We describe and apply a scaffold-focused virtual screen based upon scaffold trees to the mitotic kinase TTK (MPS1). Using level 1 of the scaffold tree, we perform both 2D and 3D similarity searches between a query scaffold and a level 1 scaffold library derived from a 2 million compound library; 98 compounds from 27 unique top-ranked level 1 scaffolds are selected for biochemical screening. We show that this scaffold-focused virtual screen prospectively identifies eight confirmed active compounds that are structurally differentiated from the query compound. In comparison, 100 compounds were selected for biochemical screening using a virtual screen based upon whole molecule similarity resulting in 12 confirmed active compounds that are structurally similar to the query compound. We elucidated the binding mode for four of the eight confirmed scaffold hops to TTK by determining their protein–ligand crystal structures; each represents a ligand-efficient scaffold for inhibitor design. PMID:23672464
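The 2D similarity ranking step at the heart of such a scaffold-focused screen can be illustrated with a minimal sketch, assuming fingerprints represented as sets of on-bit indices and the standard Tanimoto coefficient. The scaffold names and fingerprints below are invented for illustration; real fingerprints would come from a cheminformatics toolkit.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprints given as sets
    of on-bit indices: |A & B| / |A | B|."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def rank_scaffolds(query_fp, library, top_n=3):
    """Rank library scaffolds by 2D similarity to the query scaffold.
    library: dict mapping scaffold id -> fingerprint (set of ints)."""
    scored = sorted(
        library.items(), key=lambda kv: tanimoto(query_fp, kv[1]), reverse=True
    )
    return [(sid, round(tanimoto(query_fp, fp), 3)) for sid, fp in scored[:top_n]]

# Toy fingerprints for a query scaffold and a level 1 scaffold library.
query = {1, 4, 9, 16, 25}
library = {
    "scaffold_A": {1, 4, 9, 16, 25, 36},   # near-identical to the query
    "scaffold_B": {1, 4, 9},               # partial, substructure-like overlap
    "scaffold_C": {2, 3, 5, 7},            # unrelated
}
print(rank_scaffolds(query, library))
```

The top-ranked scaffolds would then be expanded back to member compounds for biochemical screening, as the study describes.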
HPPD: ligand- and target-based virtual screening on a herbicide target.
López-Ramos, Miriam; Perruccio, Francesca
2010-05-24
Hydroxyphenylpyruvate dioxygenase (HPPD) has proven to be a very successful target for the development of herbicides with bleaching properties, and today HPPD inhibitors are well established in the agrochemical market. Syngenta has a long history of HPPD-inhibitor research, and HPPD was chosen as a case study for the validation of diverse ligand- and target-based virtual screening approaches to identify compounds with inhibitory properties. Two-dimensional extended connectivity fingerprints, three-dimensional shape-based tools (ROCS, EON, and Phase-shape) and a pharmacophore approach (Phase) were used as ligand-based methods; Glide and Gold were used as target-based. Both the virtual screening utility and the scaffold-hopping ability of the screening tools were assessed. Particular emphasis was put on the specific pitfalls to take into account for the design of a virtual screening campaign in an agrochemical context, as compared to a pharmaceutical environment.
A Novel Approach for Efficient Pharmacophore-based Virtual Screening: Method and Applications
Dror, Oranit; Schneidman-Duhovny, Dina; Inbar, Yuval; Nussinov, Ruth; Wolfson, Haim J.
2009-01-01
Virtual screening is emerging as a productive and cost-effective technology in rational drug design for the identification of novel lead compounds. An important model for virtual screening is the pharmacophore. A pharmacophore is the spatial configuration of essential features that enable a ligand molecule to interact with a specific target receptor. In the absence of a known receptor structure, a pharmacophore can be identified from a set of ligands that have been observed to interact with the target receptor. Here, we present a novel computational method for pharmacophore detection and virtual screening. The pharmacophore detection module is able to: (i) align multiple flexible ligands in a deterministic manner without exhaustive enumeration of the conformational space, (ii) detect subsets of input ligands that may bind to different binding sites or have different binding modes, (iii) address cases where the input ligands have different affinities by defining weighted pharmacophores based on the number of ligands that share them, and (iv) automatically select the most appropriate pharmacophore candidates for virtual screening. The algorithm is highly efficient, allowing a fast exploration of the chemical space by virtual screening of huge compound databases. The performance of PharmaGist was successfully evaluated on a commonly used dataset of G-Protein Coupled Receptor alpha1A. Additionally, a large-scale evaluation using the DUD (directory of useful decoys) dataset was performed. DUD contains 2950 active ligands for 40 different receptors, with 36 decoy compounds for each active ligand. PharmaGist enrichment rates are comparable with other state-of-the-art tools for virtual screening. Availability: The software is available for download. A user-friendly web interface for pharmacophore detection is available at http://bioinfo3d.cs.tau.ac.il/PharmaGist. PMID:19803502
Kirchmair, Johannes; Markt, Patrick; Distinto, Simona; Wolber, Gerhard; Langer, Thierry
2008-01-01
Within the last few years, a considerable number of evaluative studies has been published investigating the performance of 3D virtual screening approaches. Assessments of protein-ligand docking in particular are attracting remarkable interest in the scientific community. However, comparing virtual screening approaches is a non-trivial task. Several publications, especially in the field of molecular docking, suffer from shortcomings that are likely to affect the significance of the results considerably. These quality issues often arise from poor study design, from biasing, from the use of improper or inexpressive enrichment descriptors, and from errors in the interpretation of the data output. In this review we analyze recent literature evaluating 3D virtual screening methods, with a focus on molecular docking. We highlight problematic issues and provide guidelines on how to improve the quality of computational studies. Since 3D virtual screening protocols are in general assessed by their ability to discriminate between active and inactive compounds, we summarize the impact of the composition and preparation of test sets on the outcome of evaluations. Moreover, we investigate the significance of both classic enrichment parameters and advanced descriptors for the performance of 3D virtual screening methods. Furthermore, we review the significance and suitability of RMSD as a measure for the accuracy of protein-ligand docking algorithms and of conformational space subsampling algorithms.
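As a point of reference for the RMSD discussion in this review, a minimal pose-RMSD computation can be sketched as follows. It assumes matched atom ordering and that both poses sit in the same receptor frame (so no superposition is applied), which is the usual convention when judging docking pose reproduction; symmetry handling is omitted for brevity.

```python
import math

def pose_rmsd(coords_a, coords_b):
    """Root-mean-square deviation between a docked pose and a
    reference pose. Both inputs are lists of (x, y, z) tuples with
    atoms in matching order; no superposition is performed."""
    if len(coords_a) != len(coords_b):
        raise ValueError("poses must have the same number of atoms")
    sq = sum(
        (xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2
        for (xa, ya, za), (xb, yb, zb) in zip(coords_a, coords_b)
    )
    return math.sqrt(sq / len(coords_a))

ref    = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
docked = [(0.0, 0.0, 1.0), (1.5, 0.0, 1.0)]
print(pose_rmsd(ref, docked))  # 1.0 (a uniform 1 Å shift)
```

A docking run is commonly counted as successful when this value falls below a 2.0 Å threshold, one of the conventions whose suitability the review examines.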
Raising Virtual Laboratories in Australia onto global platforms
NASA Astrophysics Data System (ADS)
Wyborn, L. A.; Barker, M.; Fraser, R.; Evans, B. J. K.; Moloney, G.; Proctor, R.; Moise, A. F.; Hamish, H.
2016-12-01
Across the globe, Virtual Laboratories (VLs), Science Gateways (SGs), and Virtual Research Environments (VREs) are being developed that enable users who are not co-located to actively work together at various scales to share data, models, tools, software, workflows, best practices, etc. Outcomes range from enabling `long tail' researchers to more easily access specific data collections, to facilitating complex workflows on powerful supercomputers. In Australia, government funding has facilitated the development of a range of VLs through the National eResearch Collaborative Tools and Resources (NeCTAR) program. The VLs provide highly collaborative, research-domain oriented, integrated software infrastructures that meet user community needs. Twelve VLs have been funded since 2012, including the Virtual Geophysics Laboratory (VGL); Virtual Hazards, Impact and Risk Laboratory (VHIRL); Climate and Weather Science Laboratory (CWSLab); Marine Virtual Laboratory (MarVL); and Biodiversity and Climate Change Virtual Laboratory (BCCVL). These VLs share similar technical challenges, with common issues emerging around the integration of tools and applications and access to data collections via both cloud-based environments and other distributed resources. While each VL began with a focus on a specific research domain, communities of practice have now formed across the VLs around common issues, facilitating the identification of best-practice case studies and new standards. As a result, tools are now being shared where the VLs access data via data services using international standards such as those of ISO, OGC, and W3C. The sharing of these approaches is starting to facilitate re-usability of infrastructure and is a step towards supporting interdisciplinary research. Whilst the focus of the VLs is Australia-centric, by using standards these environments can be extended to analysis of other international datasets.
Many VL datasets are subsets of global datasets and so extension to global is a small (and often requested) step. Similarly, most of the tools, software, and other technologies could be shared across infrastructures globally. Therefore, it is now time to better connect the Australian VLs with similar initiatives elsewhere to create international platforms that can contribute to global research challenges.
Ying, William; Levons, Jaquan K; Carney, Andrea; Gandhi, Rajesh; Vydra, Vicky; Rubin, A Erik
2016-06-01
A novel semiautomated buffer exchange process workflow was developed to enable efficient early protein formulation screening. An antibody fragment protein, BMSdab, was used to demonstrate the workflow. The process afforded 60% to 80% savings in cycle time and scientist time, along with significant material efficiencies. These efficiencies ultimately facilitated execution of this stability work earlier in the drug development process, allowing this tool to inform the developability of potential candidates from a formulation perspective. To overcome the key technical challenges, the protein solution was buffer-exchanged by centrifuge filtration into formulations for stability screening in a 96-well plate with an ultrafiltration membrane, leveraging automated liquid handling and acoustic volume measurements to allow several cycles of exchanges. The formulations were transferred into a vacuum manifold and sterile filtered into a rack holding 96 glass vials. The vials were sealed with a capmat of individual caps and placed in stability stations. The stability of samples prepared by this process was demonstrated to be comparable to that of samples prepared by the standard process. This process enabled screening a number of formulations of a protein at an early pharmaceutical development stage with a short sample preparation time. © 2015 Society for Laboratory Automation and Screening.
Applying operations research to optimize a novel population management system for cancer screening
Zai, Adrian H; Kim, Seokjin; Kamis, Arnold; Hung, Ken; Ronquillo, Jeremiah G; Chueh, Henry C; Atlas, Steven J
2014-01-01
Objective To optimize a new visit-independent, population-based cancer screening system (TopCare) by using operations research techniques to simulate changes in patient outreach staffing levels (delegates, navigators), modifications to user workflow within the information technology (IT) system, and changes in cancer screening recommendations. Materials and methods TopCare was modeled as a multiserver, multiphase queueing system. Simulation experiments implemented the queueing network model following a next-event time-advance mechanism, in which systematic adjustments were made to staffing levels, IT workflow settings, and cancer screening frequency in order to assess their impact on overdue screenings per patient. Results TopCare reduced the average number of overdue screenings per patient from 1.17 at inception to 0.86 during simulation to 0.23 at steady state. Increases in the workforce improved the effectiveness of TopCare. In particular, increasing the delegate or navigator staff level by one person improved screening completion rates by 1.3% or 12.2%, respectively. In contrast, changes in the amount of time a patient entry stays on delegate and navigator lists had little impact on overdue screenings. Finally, lengthening the screening interval increased efficiency within TopCare by decreasing overdue screenings at the patient level, resulting in a smaller number of overdue patients needing delegates for screening and a higher fraction of screenings completed by delegates. Conclusions Simulating the impact of changes in staffing, system parameters, and clinical inputs on the effectiveness and efficiency of care can inform the allocation of limited resources in population management. PMID:24043318
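The next-event time-advance mechanism used in this simulation study can be illustrated with a toy single-phase multiserver queue. This is a generic sketch, not the published multiphase TopCare model; the arrival and service rates, staffing levels, and exponential distributions are illustrative assumptions.

```python
import heapq
import random

def simulate_queue(n_patients, arrival_rate, service_rate, n_staff, seed=0):
    """Minimal next-event time-advance simulation of one multiserver
    queueing stage. Events live in a priority queue keyed by time;
    the clock jumps from one event to the next. Returns the mean
    wait before service begins."""
    rng = random.Random(seed)
    t = 0.0
    events = []  # (time, kind, tiebreak); kind 0 = arrival, 1 = departure
    for i in range(n_patients):
        t += rng.expovariate(arrival_rate)
        heapq.heappush(events, (t, 0, i))
    free_staff = n_staff
    waiting = []  # arrival times of queued patients, FIFO
    waits = []
    while events:
        now, kind, _ = heapq.heappop(events)
        if kind == 0:          # a patient arrives and joins the queue
            waiting.append(now)
        else:                  # a service completes, freeing one staff member
            free_staff += 1
        while free_staff and waiting:   # start service for whoever is next
            arrived = waiting.pop(0)
            waits.append(now - arrived)
            free_staff -= 1
            heapq.heappush(events, (now + rng.expovariate(service_rate), 1, -1))
    return sum(waits) / len(waits)

# Adding one more staff member should shorten the average wait:
print(simulate_queue(2000, 1.0, 0.4, 3), simulate_queue(2000, 1.0, 0.4, 4))
```

The same skeleton extends to multiple phases by routing departures into the next stage's queue, which is conceptually how a multiserver, multiphase model like the one described above is assembled.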
Virtual Geophysics Laboratory: Exploiting the Cloud and Empowering Geophysicists
NASA Astrophysics Data System (ADS)
Fraser, Ryan; Vote, Josh; Goh, Richard; Cox, Simon
2013-04-01
Over the last five decades geoscientists from Australian state and federal agencies have collected and assembled around 3 Petabytes of geoscience data sets under public funding. As a consequence of technological progress, data is now being acquired at exponential rates and in higher resolution than ever before. Effective use of these big data sets challenges the storage and computational infrastructure of most organizations. The Virtual Geophysics Laboratory (VGL) is a scientific workflow portal that addresses some of the resulting issues by providing Australian geophysicists with access to a Web 2.0 or Rich Internet Application (RIA) based integrated environment that exploits eResearch tools and Cloud computing technology, and promotes collaboration between the user community. VGL simplifies and automates large portions of what were previously manually intensive scientific workflow processes, allowing scientists to focus on the natural science problems rather than computer science and IT. A number of geophysical processing codes are incorporated to support multiple workflows. For example, a gravity inversion can be performed by combining the Escript/Finley codes (from the University of Queensland) with the gravity data registered in VGL. Likewise, tectonic processes can be modeled by combining the Underworld code (from Monash University) with one of the various 3D models available to VGL. Cloud services provide scalable and cost-effective compute resources. VGL is built on top of mature standards-compliant information services, many deployed using the Spatial Information Services Stack (SISS), which provides direct access to geophysical data. A large number of data sets from Geoscience Australia assist users in data discovery. GeoNetwork provides a metadata catalog to store workflow results for future use, discovery, and provenance tracking.
VGL has been developed in collaboration with the research community using incremental software development practices and open source tools. While developed to provide the geophysics research community with a sustainable platform and scalable infrastructure, VGL has also produced a number of concepts, patterns, and generic components that have been reused for cases beyond geophysics, including natural hazards, satellite processing, and other areas requiring spatial data discovery and processing. Future plans for VGL include a number of improvements in both functional and non-functional areas in response to its user community needs and advancements in information technologies. In particular, research is underway in the following areas: (a) distributed and parallel workflow processing in the cloud, (b) seamless integration with various cloud providers, and (c) integration with virtual laboratories representing other science domains. Acknowledgements: VGL was developed by CSIRO in collaboration with Geoscience Australia, National Computational Infrastructure, Australia National University, Monash University and University of Queensland, and has been supported by the Australian Government's Education Investment Funds through NeCTAR.
Design and implementation of an internet-based electrical engineering laboratory.
He, Zhenlei; Shen, Zhangbiao; Zhu, Shanan
2014-09-01
This paper describes an internet-based electrical engineering laboratory (IEE-Lab) with virtual and physical experiments at Zhejiang University. In order to synthesize the advantages of both experiment styles, the IEE-Lab adopts a Client/Server/Application framework and combines the virtual and physical experiments. The design and workflow of the IEE-Lab are introduced. The analog electronic experiment is taken as an example to show the Flex plug-in design, data communication based on XML (Extensible Markup Language), experiment simulation modeled in Modelica, and the design of the control terminals. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
Stereo 3D vision adapter using commercial DIY goods
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Ohara, Takashi
2009-10-01
A conventional display can show only one screen, and it is impossible to enlarge the screen area, for example to double it. Meanwhile, a mirror supplies the same image, but this mirror image is usually upside down. Assume that the images on the original screen and on a virtual screen in the mirror are completely different and that both images can be displayed independently. It would then be possible to double the screen area. This extension method enables observers to view the virtual image plane and doubles the screen area. Although the displaying region is doubled, this virtual display could not produce 3D images. In this paper, we present an extension method using a unidirectional diffusing image screen and an improvement for displaying a 3D image using orthogonal polarized image projection.
The Diabetic Retinopathy Screening Workflow: Potential for Smartphone Imaging.
Bolster, Nigel M; Giardini, Mario E; Bastawrous, Andrew
2015-11-23
Complications of diabetes mellitus, namely diabetic retinopathy and diabetic maculopathy, are the leading cause of blindness in working-aged people. Sufferers can avoid blindness if identified early via retinal imaging. Systematic screening of the diabetic population has been shown to greatly reduce the prevalence and incidence of blindness within the population. Many national screening programs have digital fundus photography as their basis. In the past 5 years several techniques and adapters have been developed that allow digital fundus photography to be performed using smartphones. We review recent progress in smartphone-based fundus imaging and discuss its potential for integration into national systematic diabetic retinopathy screening programs. Some systems have produced promising initial results with respect to their agreement with reference standards. However, further multisite trialling of such systems' use within implementable screening workflows is required if an evidence base strong enough to affect policy change is to be established. If this were to occur, national diabetic retinopathy screening would, for the first time, become possible in low- and middle-income settings where cost and availability of trained eye care personnel are currently key barriers to implementation. As diabetes prevalence and incidence are increasing sharply in these settings, the impact on global blindness could be profound. © 2015 Diabetes Technology Society.
Mixed Reality with HoloLens: Where Virtual Reality Meets Augmented Reality in the Operating Room.
Tepper, Oren M; Rudy, Hayeem L; Lefkowitz, Aaron; Weimer, Katie A; Marks, Shelby M; Stern, Carrie S; Garfein, Evan S
2017-11-01
Virtual reality and augmented reality devices have recently been described in the surgical literature. The authors have previously explored various iterations of these devices, and although they show promise, it has become clear that virtual reality and/or augmented reality devices alone do not adequately meet the demands of surgeons. The solution may lie in a hybrid technology known as mixed reality, which merges many virtual reality and augmented realty features. Microsoft's HoloLens, the first commercially available mixed reality device, provides surgeons intraoperative hands-free access to complex data, the real environment, and bidirectional communication. This report describes the use of HoloLens in the operating room to improve decision-making and surgical workflow. The pace of mixed reality-related technological development will undoubtedly be rapid in the coming years, and plastic surgeons are ideally suited to both lead and benefit from this advance.
NASA Astrophysics Data System (ADS)
Peer, Regina; Peer, Siegfried; Sander, Heike; Marsolek, Ingo; Koller, Wolfgang; Pappert, Dirk; Hierholzer, Johannes
2002-05-01
If new technology is introduced into medical practice, it must prove to make a difference. However, traditional approaches to outcome analysis have failed to show a direct benefit of PACS on patient care, and economic benefits are still in debate. A participatory process analysis was performed to compare workflow in a film-based hospital and a PACS environment. This included direct observation of work processes, interviews of involved staff, structural analysis, and discussion of observations with staff members. After definition of common structures, strong and weak workflow steps were evaluated. With a common workflow structure in both hospitals, benefits of PACS were revealed in workflow steps related to image reporting with simultaneous image access for ICU physicians and radiologists, archiving of images, and image and report distribution. However, PACS alone is not able to cover the complete process of 'radiography for intensive care', from the ordering of an image to the provision of the final product (image + report). Interference of the electronic workflow with analogue process steps, such as paper-based ordering, reduces the potential benefits of PACS. In this regard, workflow modeling proved to be very helpful for the evaluation of complex work processes linking radiology and the ICU.
The discovery of novel HDAC3 inhibitors via virtual screening and in vitro bioassay
Hu, Huabin; Xue, Wenjie; Wang, Xiang Simon; Wu, Song
2018-01-01
Histone deacetylase 3 (HDAC3) is a potential target for the treatment of human diseases such as cancers, diabetes, chronic inflammation and neurodegenerative diseases. Previously, we proposed a virtual screening (VS) pipeline named “Hypo1_FRED_SAHA-3” for the discovery of HDAC3 inhibitors (HDAC3Is) and thoroughly validated it by theoretical calculations. In this study, we attempted to explore its practical utility in a large-scale VS campaign. To this end, we used the VS pipeline to hierarchically screen the Specs chemical library. In order to facilitate compound cherry-picking, we then developed a knowledge-based pose filter (PF) by using our in-house quantitative structure activity relationship- (QSAR-) modelling approach and coupled it with FRED and Autodock Vina. Afterward, we purchased and tested 11 diverse compounds for their HDAC3 inhibitory activity in vitro. The bioassay identified compound 2 (Specs ID: AN-979/41971160) as an HDAC3I (IC50 = 6.1 μM), which proved the efficacy of our workflow. As a medicinal chemistry study, we performed a follow-up substructure search and identified two more hit compounds of the same chemical type, i.e. 2–1 (AQ-390/42122119, IC50 = 1.3 μM) and 2–2 (AN-329/43450111, IC50 = 12.5 μM). Based on the chemical structures and activities, we have demonstrated the essential role of the capping group in maintaining the activity for this class of HDAC3Is. In addition, we tested the hit compounds for their in vitro activities on other HDACs, including HDAC1, HDAC2, HDAC8, HDAC4 and HDAC6. We identified these compounds as HDAC1/2/3-selective inhibitors, of which compound 2 shows the best selectivity profile. Taken together, the present study is an experimental validation and an update to our earlier VS strategy. The identified hits could be used as starting structures for the development of highly potent and selective HDAC3Is. PMID:29464997
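A hierarchical VS pipeline of the kind described here (pharmacophore hypothesis, docking-score cutoff, pose filter) can be sketched generically as a cascade of filters. The stage predicates, score thresholds, and compound records below are hypothetical illustrations, not the actual Hypo1_FRED_SAHA-3 components or Specs data.

```python
def hierarchical_screen(library, stages):
    """Run compounds through a cascade of filters: only survivors of
    one stage reach the next. stages is a list of (name, predicate)
    pairs. Returns the final survivors plus a per-stage tally so the
    attrition at each step can be reported."""
    survivors = list(library)
    tally = []
    for name, keep in stages:
        survivors = [c for c in survivors if keep(c)]
        tally.append((name, len(survivors)))
    return survivors, tally

# Hypothetical compound records; field names are illustrative only.
library = [
    {"id": "cpd1", "pharm_match": True,  "dock_score": -9.2, "pose_ok": True},
    {"id": "cpd2", "pharm_match": True,  "dock_score": -6.1, "pose_ok": True},
    {"id": "cpd3", "pharm_match": False, "dock_score": -9.8, "pose_ok": True},
    {"id": "cpd4", "pharm_match": True,  "dock_score": -8.7, "pose_ok": False},
]
stages = [
    ("pharmacophore", lambda c: c["pharm_match"]),
    ("docking",       lambda c: c["dock_score"] <= -8.0),
    ("pose filter",   lambda c: c["pose_ok"]),
]
hits, tally = hierarchical_screen(library, stages)
print([c["id"] for c in hits], tally)
```

Ordering the cheap, permissive filters first and the expensive, strict ones last is what makes such a hierarchy tractable on a large library.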
Flexible End2End Workflow Automation of Hit-Discovery Research.
Holzmüller-Laue, Silke; Göde, Bernd; Thurow, Kerstin
2014-08-01
The article considers a new approach to more complex laboratory automation at the workflow layer. The authors propose the automation of end2end workflows. The combination of all relevant subprocesses, whether automated or manually performed, and regardless of the organizational unit in which they take place, results in end2end processes that include all result dependencies. The end2end approach focuses not only on the classical experiments in synthesis or screening, but also on auxiliary processes such as the production and storage of chemicals, cell culturing, and maintenance, as well as preparatory activities and analyses of experiments. Furthermore, the connection of control flow and data flow in the same process model reduces the effort of data transfer between the involved systems, including the necessary data transformations. This end2end laboratory automation can be realized effectively with the modern methods of business process management (BPM). This approach is based on the new standardization of the process-modeling notation Business Process Model and Notation 2.0. In drug discovery, several scientific disciplines act together with manifold modern methods, technologies, and a wide range of automated instruments for the discovery and design of target-based drugs. The article discusses the novel BPM-based automation concept with an implemented example of a high-throughput screening of previously synthesized compound libraries. © 2014 Society for Laboratory Automation and Screening.
Computational fragment-based screening using RosettaLigand: the SAMPL3 challenge
NASA Astrophysics Data System (ADS)
Kumar, Ashutosh; Zhang, Kam Y. J.
2012-05-01
The SAMPL3 fragment-based virtual screening challenge provides a valuable opportunity for researchers to test their programs, methods, and screening protocols in a blind testing environment. We participated in the SAMPL3 challenge and evaluated our virtual fragment screening protocol, which involves RosettaLigand as the core component, by screening a 500-fragment Maybridge library against bovine pancreatic trypsin. Our study reaffirmed that the real test for any virtual screening approach is in a blind testing environment. The analyses presented in this paper also showed that virtual screening performance can be improved if a set of known active compounds is available and if parameters and methods that yield better enrichment are selected. Our study also highlighted that selecting an appropriate method to calculate partial charges is important for achieving accurate orientation and conformation of ligands within a binding site. Another finding is that using multiple receptor ensembles in docking does not always yield better enrichment than individual receptors. On the basis of our results and retrospective analyses from the SAMPL3 fragment screening challenge, we anticipate that the chances of success in a fragment screening process could be increased significantly by careful selection of receptor structures, treatment of protein flexibility, sufficient conformational sampling within the binding pocket, and accurate assignment of ligand and protein partial charges.
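Enrichment, the metric these analyses repeatedly refer to, is conventionally computed as the hit rate among the top-ranked fraction of the database divided by the hit rate expected from random selection. A minimal sketch, with invented numbers:

```python
def enrichment_factor(ranked_labels, fraction):
    """Enrichment factor at a given fraction of the ranked database:
    (actives found in the top fraction / compounds in that fraction)
    divided by (total actives / database size).

    ranked_labels: list of booleans, best-scored compound first,
    True marking a known active."""
    n = len(ranked_labels)
    n_top = max(1, int(n * fraction))
    found = sum(ranked_labels[:n_top])
    total_actives = sum(ranked_labels)
    return (found / n_top) / (total_actives / n)

# 10 actives in a 1,000-compound library; 8 of them rank in the top 1%:
ranked = [True] * 8 + [False] * 2 + [True] * 2 + [False] * 988
print(enrichment_factor(ranked, 0.01))  # 80.0
```

An EF of 1.0 means the screen is no better than random; the ceiling at a given fraction is set by the active/decoy ratio of the benchmark set.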
ToxCast Data Generation: Chemical Workflow
This page describes the process EPA follows to select chemicals, procure chemicals, register chemicals, conduct a quality review of the chemicals, and prepare the chemicals for high-throughput screening.
NASA Astrophysics Data System (ADS)
Polgár, Tímea; Menyhárd, Dóra K.; Keserű, György M.
2007-09-01
An effective virtual screening protocol was developed against an extended active site of CYP2C9, which was derived from X-ray structures complexed with flurbiprofen and S-warfarin. Virtual screening was effectively supported by our structure-based pharmacophore model. The importance of hot residues identified by mutation data and structural analysis was first estimated in an enrichment study. The key role of Arg108 and Phe114 in ligand binding was also underlined. Our screening protocol successfully identified 76% of known CYP2C9 ligands in the top 1% of the ranked database, resulting in a 76-fold enrichment relative to random selection. The relevance of the protocol was further confirmed in selectivity studies, in which 89% of CYP2C9 ligands were retrieved from a mixture of CYP2C9 and CYP2C8 ligands, while only 22% of CYP2C8 ligands were found when applying the structure-based pharmacophore constraints. Moderate discrimination of CYP2C9 ligands from CYP2C18 and CYP2C19 ligands could also be achieved, extending the application domain of our virtual screening protocol to the entire CYP2C family. Our findings further demonstrate the existence of an active site comprising at least two binding pockets and underscore the need to incorporate protein flexibility in virtual screening.
Sense of presence and anxiety during virtual social interactions between a human and virtual humans.
Morina, Nexhmedin; Brinkman, Willem-Paul; Hartanto, Dwi; Emmelkamp, Paul M G
2014-01-01
Virtual reality exposure therapy (VRET) has been shown to be effective in the treatment of anxiety disorders. Yet, there is a lack of research on the extent to which interaction between the individual and virtual humans can be successfully implemented to increase levels of anxiety for therapeutic purposes. This proof-of-concept pilot study aimed at examining levels of the sense of presence and anxiety during exposure to virtual environments involving social interaction with virtual humans and using different virtual reality displays. A non-clinical sample of 38 participants was randomly assigned to either a head-mounted display (HMD) condition with motion tracking and stereoscopic view or a one-screen projection-based virtual reality display condition. Participants in both conditions engaged in free speech dialogues with virtual humans controlled by research assistants. It was hypothesized that exposure to virtual social interactions would elicit moderate levels of sense of presence and anxiety in both groups. Furthermore, it was expected that participants in the HMD condition would report higher scores of sense of presence and anxiety than participants in the one-screen projection-based display condition. Results revealed that in both conditions virtual social interactions were associated with moderate levels of sense of presence and anxiety. Additionally, participants in the HMD condition reported significantly higher levels of presence than those in the one-screen projection-based display condition (p = .001). However, contrary to the expectations, neither the average level of anxiety nor the highest level of anxiety during exposure to social virtual environments differed between the groups (p = .97 and p = .75, respectively). The findings suggest that virtual social interactions can be successfully applied in VRET to enhance sense of presence and anxiety.
Furthermore, our results indicate that one-screen projection-based displays can successfully activate levels of anxiety in social virtual environments. The outcome can prove helpful in using low-cost projection-based virtual reality environments for treating individuals with social phobia.
CASAS: A tool for composing automatically and semantically astrophysical services
NASA Astrophysics Data System (ADS)
Louge, T.; Karray, M. H.; Archimède, B.; Knödlseder, J.
2017-07-01
Multiple astronomical datasets are available through the internet and the astrophysical Distributed Computing Infrastructure (DCI) called the Virtual Observatory (VO). Some scientific workflow technologies exist for retrieving and combining data from those sources. However, the selection of relevant services, the automation of workflow composition, and the lack of user-friendly platforms remain a concern. This paper presents CASAS, a tool for semantic web service composition in astrophysics. This tool proposes automatic composition of astrophysical web services and brings semantics-based, automatic composition of workflows. It widens the choice of services and eases the use of heterogeneous services. Semantic web service composition relies on ontologies for elaborating the service composition; this work is based on the Astrophysical Services ONtology (ASON). ASON's structure is mostly inherited from the VO service capabilities. Nevertheless, our approach is not limited to the VO and brings VO and non-VO services together without the need for premade recipes. CASAS is available for use through a simple web interface.
Grant, Richard John; Roberts, Karen; Pointon, Carly; Hodgson, Clare; Womersley, Lynsey; Jones, Darren Craig; Tang, Eric
2009-06-01
Compound handling is a fundamental and critical step in compound screening throughout the drug discovery process. Although most compound-handling processes within compound management facilities use 100% DMSO solvent, conventional methods of manual or robotic liquid-handling systems in screening workflows often perform dilutions in aqueous solutions to maintain solvent tolerance of the biological assay. However, the use of aqueous media in these applications can lead to suboptimal data quality due to compound carryover or precipitation during the dilution steps. In cell-based assays, this effect is worsened by the unpredictable physical characteristics of compounds and the low DMSO tolerance within the assay. In some cases, the conventional approaches using manual or automated liquid handling resulted in variable IC(50) dose responses. This study examines the cause of this variability and evaluates the accuracy of screening data in these case studies. A number of liquid-handling options have been explored to address the issues and establish a generic compound-handling workflow to support cell-based screening across our screening functions. The authors discuss the validation of the Labcyte Echo reformatter as an effective noncontact solution for generic compound-handling applications against diverse compound classes using triple-quad liquid chromatography/mass spectrometry. The successful validation and implementation challenges of this technology for direct dosing onto cells in cell-based screening are discussed.
1001 Ways to run AutoDock Vina for virtual screening
NASA Astrophysics Data System (ADS)
Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D.
2016-03-01
Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.
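The throughput and reproducibility points above (extra parallelization on a multi-core host; explicit seed capture) can be sketched in a short Python driver. The file names, pool size, and seed range are illustrative assumptions; `--receptor`, `--ligand`, `--seed`, and `--exhaustiveness` are standard AutoDock Vina command-line flags, and the actual subprocess launch is left commented out:

```python
import multiprocessing as mp
import random
import shlex

def vina_command(receptor, ligand, seed, exhaustiveness=8):
    """Build an AutoDock Vina command line, pinning the random seed
    so an individual docking run can be reproduced later."""
    return shlex.split(
        f"vina --receptor {receptor} --ligand {ligand} "
        f"--seed {seed} --exhaustiveness {exhaustiveness}"
    )

def dock(args):
    """Worker: dock one ligand and return its (ligand, seed, command)."""
    ligand, seed = args
    cmd = vina_command("target.pdbqt", ligand, seed)
    # subprocess.run(cmd, check=True)  # launch Vina here in a real run
    return ligand, seed, cmd

if __name__ == "__main__":
    ligands = [f"lig_{i}.pdbqt" for i in range(8)]      # hypothetical library
    jobs = [(lig, random.randrange(1, 2**31)) for lig in ligands]
    # One extra level of parallelism on a multi-core machine:
    with mp.Pool(processes=4) as pool:
        results = pool.map(dock, jobs)
    for lig, seed, _ in results:
        print(lig, seed)  # persist (ligand, seed) pairs for reproducibility
```

Persisting the (ligand, seed) pairs is what makes reruns on a heterogeneous cluster comparable, echoing point (2) of the abstract.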
Chen, Can; Wang, Ting; Wu, Fengbo; Huang, Wei; He, Gu; Ouyang, Liang; Xiang, Mingli; Peng, Cheng; Jiang, Qinglin
2014-01-01
Compared with normal differentiated cells, cancer cells upregulate the expression of pyruvate kinase isozyme M2 (PKM2) to supply glycolytic intermediates for anabolic processes, including the synthesis of nucleic acids, amino acids, and lipids. In this study, a combination of structure-based pharmacophore modeling and a hybrid protocol of virtual screening methods, comprising pharmacophore model-based virtual screening, docking-based virtual screening, and in silico ADMET (absorption, distribution, metabolism, excretion and toxicity) analysis, was used to retrieve novel PKM2 activators from commercially available chemical databases. Tetrahydroquinoline derivatives were identified as potential scaffolds of PKM2 activators. Thus, the hybrid virtual screening approach was applied to screen the focused tetrahydroquinoline derivatives embedded in the ZINC database. Six hit compounds were selected from the final hits and experimental studies were then performed. Compound 8 displayed a potent inhibitory effect on human lung cancer cells. Following treatment with Compound 8, cell viability, apoptosis, and reactive oxygen species (ROS) production were examined in A549 cells. Finally, we evaluated the effects of Compound 8 on mouse xenograft tumor models in vivo. These results may provide important information for further research on novel PKM2 activators as antitumor agents. PMID:25214764
Flachner, Beáta; Hajdú, István; Dobi, Krisztina; Lorincz, Zsolt; Cseh, Sándor; Dormán, György
2013-01-01
Target-focused libraries can be rapidly selected by 2D virtual screening methods from multimillion-compound repositories if structures of active compounds are available. In the present study, a multi-step virtual and in vitro screening cascade is reported to select Melanin Concentrating Hormone Receptor-1 (MCHR1) antagonists. The 2D similarity search combined with physicochemical parameter filtering is suitable for selecting candidates from a multimillion-compound repository. The seeds of the first round of virtual screening were collected from the literature and commercial databases, while the seeds of the second round were the hits of the first round. In vitro screening underlined the efficiency of our approach, as in the second screening round the hit rate (8.6%) significantly improved compared to the first round (1.9%), with antagonist activity reaching even below 10 nM.
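The two-stage selection described above (2D similarity search plus physicochemical parameter filtering) can be sketched as a minimal Python filter. The property names, thresholds (Lipinski-like), and similarity cutoff are illustrative assumptions, not the values used in the study:

```python
def passes_physchem_filter(props,
                           mw_max=500.0, logp_max=5.0,
                           hbd_max=5, hba_max=10):
    """Keep a candidate only if its computed properties fall inside the
    allowed windows. Cutoffs shown are Lipinski-like placeholders."""
    return (props["mw"] <= mw_max and props["logp"] <= logp_max and
            props["hbd"] <= hbd_max and props["hba"] <= hba_max)

def shortlist(candidates, sim_cutoff=0.7):
    """Combine a precomputed 2D similarity score to the seed set with
    the physicochemical filter to produce the focused library."""
    return [c["id"] for c in candidates
            if c["similarity"] >= sim_cutoff and passes_physchem_filter(c)]
```

In a real cascade the similarity scores would come from fingerprint comparison against the seed actives, and survivors would proceed to in vitro screening.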
Impact of the digital revolution on the future of pharmaceutical formulation science.
Leuenberger, Hans; Leuenberger, Michael N
2016-05-25
The ongoing digital revolution is no longer limited to the application of apps on the smartphone for daily needs but is starting to affect our professional life in formulation science as well. The software platform F-CAD (Formulation-Computer Aided Design) of CINCAP can be used to develop and test capsule and tablet formulations in silico. Such an approach allows the pharmaceutical industry to adopt the workflow of the automotive and aircraft industries. Thus, the first prototype of the drug delivery vehicle is prepared virtually by mimicking the composition (particle size distribution of the active drug substance and of the excipients within the tablet) and the process, such as direct compression to obtain a defined porosity. The software is based on a cellular automaton (CA) process mimicking the dissolution profile of the capsule or tablet formulation. To take account of the type of dissolution equipment and all SOPs (Standard Operating Procedures), such as a single punch press to manufacture the tablet, a calibration of the F-CAD dissolution profile of the virtual tablet is needed. Thus, the virtual tablet becomes a copy of the real tablet. This statement is valid for all tablets manufactured within the same formulation design space. For this reason, it is important to define the formulation design space already for Clinical Phase I and to work only within this formulation design space, consisting of the composition and the processes, during all the clinical phases. Thus, it is not recommended to start with a simple capsule formulation as a service dosage form and to change later to a market-ready tablet formulation. The availability of F-CAD is a necessary, but not a sufficient, condition to implement the workflow of the automotive and aircraft industries for developing and testing drug delivery vehicles. For a successful implementation of the new workflow, a harmonization of the equipment and the processes between the development and manufacturing departments is a must.
In this context, the clinical samples for Clinical Phases I and II should be prepared with a mechanical simulator of the high-speed rotary press used for large batches in Clinical Phases III and IV. If not, the problem of working practically and virtually in different formulation design spaces will remain, causing billions of dollars in losses worldwide annually according to the study by Benson and MacCabe. The harmonization of equipment and processes needs a close cooperation between the industrial pharmacist and the pharmaceutical engineer. In addition, Virtual Equipment Simulators (VESs) of small- and large-scale equipment for training and computer-assisted scale-up would be desirable. A lean and intelligent management information and documentation system will improve the connectivity between the different work stations. Thus, in the future, it may be possible to rent F-CAD at low cost as an IT (Information Technology) platform based on a cloud computing solution. By adopting the workflow of the automotive and aircraft industries, significant savings, a reduced time to market, a lower attrition rate, and a much higher quality of the final marketed dosage form can be achieved. Copyright © 2016 Elsevier B.V. All rights reserved.
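The cellular-automaton idea behind F-CAD can be illustrated with a deliberately simplified 2D toy model (this is not the F-CAD algorithm itself): solid cells dissolve once they touch already-dissolved material or the tablet surface, which yields a stepwise dissolution profile of the kind that would then be calibrated against the real dissolution equipment:

```python
def dissolve(grid, steps):
    """Toy CA dissolution: 1 = solid cell, 0 = dissolved. A solid cell
    dissolves when any 4-neighbour is dissolved or lies outside the
    tablet. Returns the fraction dissolved after each step."""
    rows, cols = len(grid), len(grid[0])
    total = sum(sum(r) for r in grid)
    profile = []
    for _ in range(steps):
        nxt = [row[:] for row in grid]          # synchronous update
        for i in range(rows):
            for j in range(cols):
                if grid[i][j] == 1 and any(
                    i2 < 0 or i2 >= rows or j2 < 0 or j2 >= cols
                    or grid[i2][j2] == 0
                    for i2, j2 in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                ):
                    nxt[i][j] = 0
        grid = nxt
        profile.append(1 - sum(sum(r) for r in grid) / total)
    return profile
```

For a 4x4 all-solid "tablet", the surface layer dissolves in the first step and the core in the second, giving a crude two-step release profile.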
Usability Testing of a National Substance Use Screening Tool Embedded in Electronic Health Records.
Press, Anne; DeStio, Catherine; McCullagh, Lauren; Kapoor, Sandeep; Morley, Jeanne; Conigliaro, Joseph
2016-07-08
Screening, brief intervention, and referral to treatment (SBIRT) is currently being implemented into health systems nationally via paper and electronic methods. The purpose of this study was to evaluate the integration of an electronic SBIRT tool into an existing paper-based SBIRT clinical workflow in a patient-centered medical home. Usability testing was conducted in an academic ambulatory clinic. Two rounds of usability testing were done with medical office assistants (MOAs) using a paper and electronic version of the SBIRT tool, with two and four participants, respectively. Qualitative and quantitative data were analyzed to determine the impact of both tools on clinical workflow. A second round of usability testing was done with the revised electronic version and compared with the first version. A personal workflow barrier cited in the first round of testing was that the electronic health record (EHR) tool was disruptive to patients' visits. In Round 2 of testing, MOAs reported favoring the electronic version due to improved layout and the inclusion of an alert system embedded in the EHR. For example, using the system usability scale (SUS), MOAs reported a grade "1" for the statement, "I would like to use this system frequently" during the first round of testing but a "5" during the second round of analysis. The importance of testing the usability of the various mediums of tools used in health care screening is highlighted by the findings of this study. In the first round of testing, the electronic tool was reported as less user friendly, being difficult to navigate, and time consuming. Many issues faced in the first generation of the tool were improved in the second generation after usability was evaluated. This study demonstrates how usability testing of an electronic SBIRT tool can help to identify challenges that can impact clinical workflow. However, a limitation of this study was the small sample size of MOAs that participated.
The results may have been biased to Northwell Health workers' perceptions of the SBIRT tool and their specific clinical workflow.
Al-Sha'er, Mahmoud A; Khanfar, Mohammad A; Taha, Mutasem O
2014-01-01
Urokinase plasminogen activator (uPA), a serine protease, is thought to play a central role in tumor metastasis and angiogenesis and, therefore, inhibition of this enzyme could be beneficial in treating cancer. Toward this end, we explored the pharmacophoric space of 202 uPA inhibitors using seven diverse sets of inhibitors to identify high-quality pharmacophores. Subsequently, we employed genetic algorithm-based quantitative structure-activity relationship (QSAR) analysis as a competition arena to select the best possible combination of pharmacophoric models and physicochemical descriptors that can explain bioactivity variation within the training inhibitors (r(2) over the 162 training inhibitors = 0.74, F-statistic = 64.30, r(2)LOO = 0.71, r(2)PRESS against 40 test inhibitors = 0.79). Three orthogonal pharmacophores emerged in the QSAR equation, suggesting the existence of at least three binding modes accessible to ligands within the uPA binding pocket. This conclusion was supported by receiver operating characteristic (ROC) curve analyses of the QSAR-selected pharmacophores. Moreover, the three pharmacophores were comparable with binding interactions seen in crystallographic structures of bound ligands within the uPA binding pocket. We employed the resulting pharmacophoric models and associated QSAR equation to screen the National Cancer Institute (NCI) list of compounds. The captured hits were tested in vitro. Overall, our modeling workflow identified new low-micromolar anti-uPA hits.
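The ROC analysis used above to validate pharmacophore models can be illustrated with a small, self-contained AUC computation via the rank-sum (Mann-Whitney) identity; the scores and labels below are placeholders, not data from the study:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve: the probability that a randomly chosen
    active (label 1) is scored above a randomly chosen inactive (label 0),
    with ties counted as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means the pharmacophore ranks every active above every inactive; 0.5 is no better than random.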
Smith, Jordan W.
2015-01-01
Immersive virtual environment (IVE) technology offers a wide range of potential benefits to research focused on understanding how individuals perceive and respond to built and natural environments. In an effort to broaden awareness and use of IVE technology in perception, preference and behavior research, this review paper describes how IVE technology can be used to complement more traditional methods commonly applied in public health research. The paper also describes a relatively simple workflow for creating and displaying 360° virtual environments of built and natural settings and presents two freely-available and customizable applications that scientists from a variety of disciplines, including public health, can use to advance their research into human preferences, perceptions and behaviors related to built and natural settings. PMID:26378565
Kuric, Katelyn M; Harris, Bryan T; Morton, Dean; Azevedo, Bruno; Lin, Wei-Shao
2017-09-29
This clinical report describes a digital workflow using extraoral digital photographs and volumetric datasets from cone beam computed tomography (CBCT) imaging to create a 3-dimensional (3D), virtual patient with photorealistic appearance. In a patient with microstomia, hinge axis approximation, diagnostic casts simulating postextraction alveolar ridge profile, and facial simulation of prosthetic treatment outcome were completed in a 3D, virtual environment. The approach facilitated the diagnosis, communication, and patient acceptance of the treatment of maxillary and mandibular computer-aided design and computer-aided manufacturing (CAD-CAM) of immediate dentures at increased occlusal vertical dimension. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Stockwell, Simon R; Mittnacht, Sibylle
2014-12-16
Advances in understanding the control mechanisms governing the behavior of cells in adherent mammalian tissue culture models are becoming increasingly dependent on modes of single-cell analysis. Methods which deliver composite data reflecting the mean values of biomarkers from cell populations risk losing subpopulation dynamics that reflect the heterogeneity of the studied biological system. In keeping with this, traditional approaches are being replaced by, or supported with, more sophisticated forms of cellular assay developed to allow assessment by high-content microscopy. These assays potentially generate large numbers of images of fluorescent biomarkers, which, enabled by accompanying proprietary software packages, allow for multi-parametric measurements per cell. However, the relatively high capital costs and overspecialization of many of these devices have prevented their accessibility to many investigators. Described here is a universally applicable workflow for the quantification of multiple fluorescent marker intensities from specific subcellular regions of individual cells, suitable for use with images from most fluorescent microscopes. Key to this workflow is the implementation of the freely available CellProfiler software to distinguish individual cells in these images, segment them into defined subcellular regions and deliver fluorescence marker intensity values specific to these regions. The extraction of individual cell intensity values from image data is the central purpose of this workflow and will be illustrated with the analysis of control data from a siRNA screen for G1 checkpoint regulators in adherent human cells. However, the workflow presented here can be applied to analysis of data from other means of cell perturbation (e.g., compound screens) and other forms of fluorescence-based cellular markers and thus should be useful for a wide range of laboratories.
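The per-region measurement step at the heart of such a workflow, after segmentation has produced a label mask, can be sketched in plain Python. Real images would be NumPy arrays and the segmentation would come from CellProfiler; nested lists keep this toy example self-contained:

```python
def region_means(label_mask, intensity):
    """Mean fluorescence intensity per labelled region.
    label_mask and intensity are same-shaped 2D lists of numbers;
    label 0 denotes background and is skipped."""
    sums, counts = {}, {}
    for mask_row, img_row in zip(label_mask, intensity):
        for lab, val in zip(mask_row, img_row):
            if lab == 0:
                continue
            sums[lab] = sums.get(lab, 0.0) + val
            counts[lab] = counts.get(lab, 0) + 1
    return {lab: sums[lab] / counts[lab] for lab in sums}
```

Running this once per marker channel gives the multi-parametric, per-cell (or per-subcellular-region) values the abstract describes.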
xGDBvm: A Web GUI-Driven Workflow for Annotating Eukaryotic Genomes in the Cloud.
Duvick, Jon; Standage, Daniel S; Merchant, Nirav; Brendel, Volker P
2016-04-01
Genome-wide annotation of gene structure requires the integration of numerous computational steps. Currently, annotation is arguably best accomplished through collaboration of bioinformatics and domain experts, with broad community involvement. However, such a collaborative approach is not scalable at today's pace of sequence generation. To address this problem, we developed the xGDBvm software, which uses an intuitive graphical user interface to access a number of common genome analysis and gene structure tools, preconfigured in a self-contained virtual machine image. Once their virtual machine instance is deployed through iPlant's Atmosphere cloud services, users access the xGDBvm workflow via a unified Web interface to manage inputs, set program parameters, configure links to high-performance computing (HPC) resources, view and manage output, apply analysis and editing tools, or access contextual help. The xGDBvm workflow will mask the genome, compute spliced alignments from transcript and/or protein inputs (locally or on a remote HPC cluster), predict gene structures and gene structure quality, and display output in a public or private genome browser complete with accessory tools. Problematic gene predictions are flagged and can be reannotated using the integrated yrGATE annotation tool. xGDBvm can also be configured to append or replace existing data or load precomputed data. Multiple genomes can be annotated and displayed, and outputs can be archived for sharing or backup. xGDBvm can be adapted to a variety of use cases including de novo genome annotation, reannotation, comparison of different annotations, and training or teaching. © 2016 American Society of Plant Biologists. All rights reserved.
The Texas-Indiana Virtual STAR Center: Zebrafish Models for Developmental Toxicity Screening
Presented by Maria Bondesson Bolin, Ph.D., University of Houston, Center for Nuclear Receptors and Cell Signaling (3/22/2012)
Zhang, Wen; Qiu, Kai-Xiong; Yu, Fang; Xie, Xiao-Guang; Zhang, Shu-Qun; Chen, Ya-Juan; Xie, Hui-Ding
2017-10-01
B-Raf kinase has been identified as an important target in recent cancer treatment. In order to discover structurally diverse and novel B-Raf inhibitors (BRIs), a virtual screening of BRIs against the ZINC database was performed using a combination of pharmacophore modelling, molecular docking, 3D-QSAR modelling and binding free energy (ΔG(bind)) calculation studies in this work. After the virtual screening, six promising hit compounds were obtained, which were then tested for inhibitory activity against A375 cell lines. As a result, five hit compounds showed good biological activities (IC(50) < 50 μM). The present method of virtual screening can be applied to find structurally diverse inhibitors, and the five structurally diverse compounds obtained are expected to support the development of novel BRIs. Copyright © 2017. Published by Elsevier Ltd.
Exploiting PubChem for Virtual Screening
Xie, Xiang-Qun
2011-01-01
Importance of the field: PubChem is a public molecular information repository, a scientific showcase of the NIH Roadmap Initiative. The PubChem database holds over 27 million records of unique chemical structures of compounds (CID) derived from nearly 70 million substance depositions (SID), and contains more than 449,000 bioassay records spanning thousands of established in vitro biochemical and cell-based screening bioassays that target more than 7,000 proteins and genes and link to over 1.8 million substances. Areas covered in this review: This review builds on recent PubChem-related computational chemistry research reported by other authors while providing readers with an overview of the PubChem database, focusing on its increasing role in cheminformatics, virtual screening and toxicity prediction modeling. What the reader will gain: These publicly available datasets in PubChem provide great opportunities for scientists to perform cheminformatics and virtual screening research for computer-aided drug design. However, the high volume and complexity of the datasets, in particular the bioassay-associated false positives/negatives and highly imbalanced datasets in PubChem, also create major challenges. Several approaches regarding the modeling of PubChem datasets and the development of virtual screening models for bioactivity and toxicity predictions are also reviewed. Take home message: Novel data-mining cheminformatics tools and virtual screening algorithms are being developed and used to retrieve, annotate and analyze the large-scale and highly complex PubChem biological screening data for drug design. PMID:21691435
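When benchmarking virtual screening models on highly imbalanced active/decoy sets like those derived from PubChem bioassays, a standard early-recognition measure is the enrichment factor; a minimal sketch (the example data are placeholders):

```python
def enrichment_factor(scores, labels, fraction=0.01):
    """Enrichment factor at a given screened fraction: the ratio of
    actives found in the top-ranked slice to the actives expected by
    random selection. labels: 1 = active, 0 = decoy."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: -scores[i])  # best score first
    n_top = max(1, int(round(n * fraction)))
    hits_top = sum(labels[i] for i in order[:n_top])
    return (hits_top / n_top) / (sum(labels) / n)
```

An EF of 1.0 means no better than random; finding both actives of a 10-compound set in the top 20% gives an EF of 5.0.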
Discovery of novel human acrosin inhibitors by virtual screening
NASA Astrophysics Data System (ADS)
Liu, Xuefei; Dong, Guoqiang; Zhang, Jue; Qi, Jingjing; Zheng, Canhui; Zhou, Youjun; Zhu, Ju; Sheng, Chunquan; Lü, Jiaguo
2011-10-01
Human acrosin is an attractive target for the discovery of male contraceptive drugs. For the first time, structure-based drug design was applied to discover structurally diverse human acrosin inhibitors. A parallel virtual screening strategy in combination with pharmacophore-based and docking-based techniques was used to screen the SPECS database. From 16 compounds selected by virtual screening, a total of 10 compounds were found to be human acrosin inhibitors. Compound 2 was found to be the most potent hit (IC50 = 14 μM) and its binding mode was investigated by molecular dynamics simulations. The hit interacted with human acrosin mainly through hydrophobic and hydrogen-bonding interactions, which provided a good starting structure for further optimization studies.
PGen: large-scale genomic variations analysis workflow and browser in SoyKB.
Liu, Yang; Khan, Saad M; Wang, Juexin; Rynge, Mats; Zhang, Yuanxun; Zeng, Shuai; Chen, Shiyuan; Maldonado Dos Santos, Joao V; Valliyodan, Babu; Calyam, Prasad P; Merchant, Nirav; Nguyen, Henry T; Xu, Dong; Joshi, Trupti
2016-10-06
With the advances in next-generation sequencing (NGS) technology and significant reductions in sequencing costs, it is now possible to sequence large collections of germplasm in crops for detecting genome-scale genetic variations and to apply the knowledge towards improvements in traits. To efficiently facilitate large-scale NGS resequencing data analysis of genomic variations, we have developed "PGen", an integrated and optimized workflow using the Extreme Science and Engineering Discovery Environment (XSEDE) high-performance computing (HPC) virtual system, iPlant cloud data storage resources and the Pegasus workflow management system (Pegasus-WMS). The workflow allows users to identify single nucleotide polymorphisms (SNPs) and insertion-deletions (indels), perform SNP annotations and conduct copy number variation analyses on multiple resequencing datasets in a user-friendly and seamless way. We have developed both a Linux version in GitHub ( https://github.com/pegasus-isi/PGen-GenomicVariations-Workflow ) and a web-based implementation of the PGen workflow integrated within the Soybean Knowledge Base (SoyKB) ( http://soykb.org/Pegasus/index.php ). Using PGen, we identified 10,218,140 single-nucleotide polymorphisms (SNPs) and 1,398,982 indels from analysis of 106 soybean lines sequenced at 15X coverage. From this analysis, 297,245 non-synonymous SNPs and 3,330 copy number variation (CNV) regions were identified. SNPs identified using PGen from additional soybean resequencing projects, bringing the total to more than 500 soybean germplasm lines, have also been integrated. These SNPs are being utilized for trait improvement using genotype-to-phenotype prediction approaches developed in-house. In order to browse and access NGS data easily, we have also developed an NGS resequencing data browser ( http://soykb.org/NGS_Resequence/NGS_index.php ) within SoyKB to provide easy access to SNP and downstream analysis results for soybean researchers.
PGen workflow has been optimized for the most efficient analysis of soybean data using thorough testing and validation. This research serves as an example of best practices for development of genomics data analysis workflows by integrating remote HPC resources and efficient data management with ease of use for biological users. PGen workflow can also be easily customized for analysis of data in other species.
ChemScreener: A Distributed Computing Tool for Scaffold based Virtual Screening.
Karthikeyan, Muthukumarasamy; Pandit, Deepak; Vyas, Renu
2015-01-01
In this work, we present ChemScreener, a Java-based application to perform virtual library generation combined with virtual screening in a platform-independent distributed computing environment. ChemScreener comprises a scaffold identifier, a distinct scaffold extractor, an interactive virtual library generator, as well as a virtual screening module for subsequently selecting putative bioactive molecules. The virtual libraries are annotated with chemophore-, pharmacophore- and toxicophore-based information for compound prioritization. The hits selected can then be further processed using QSAR, docking and other in silico approaches, which can all be interfaced within the ChemScreener framework. As a sample application, scaffold selectivity, diversity, connectivity and promiscuity towards six important therapeutic classes have been studied. In order to illustrate the computational power of the application, 55 scaffolds extracted from 161 anti-psychotic compounds were enumerated to produce a virtual library comprising 118 million compounds (17 GB) and annotated with chemophore-, pharmacophore- and toxicophore-based features in a single step, which would be non-trivial to perform with many standard software tools today on libraries of this size.
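Distributing a 118-million-compound enumeration requires partitioning the library across worker nodes. A minimal, platform-neutral sketch of contiguous near-equal chunking (ChemScreener itself is Java-based; Python is used here only for brevity, and the chunk scheme is an illustration rather than ChemScreener's actual scheduler):

```python
def partition_library(n_compounds, n_workers):
    """Split compound indices 0..n_compounds-1 into contiguous,
    near-equal chunks, one per worker node."""
    base, extra = divmod(n_compounds, n_workers)
    chunks, start = [], 0
    for w in range(n_workers):
        size = base + (1 if w < extra else 0)  # spread the remainder
        chunks.append(range(start, start + size))
        start += size
    return chunks
```

Each worker then enumerates and annotates only its own index range, so the full library never has to reside on a single machine.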
Dérand, Per; Rännar, Lars-Erik; Hirsch, Jan-M
2012-01-01
The purpose of this article was to describe the workflow from imaging, via virtual design, to manufacturing of patient-specific titanium reconstruction plates, cutting guide and mesh, and its utility in connection with surgical treatment of acquired bone defects in the mandible using additive manufacturing by electron beam melting (EBM). Based on computed tomography scans, polygon skulls were created. Following that, virtual treatment plans entailing free microvascular transfer of fibula flaps using patient-specific reconstruction plates, mesh, and cutting guides were designed. The design was based on the specification of a Compact UniLOCK 2.4 Large (Synthes®, Switzerland). The obtained polygon plates were bent virtually around the reconstructed mandibles. Next, the resections of the mandibles were planned virtually. A cutting guide was outlined to facilitate resection, as well as plates and titanium mesh for insertion of bone or bone substitutes. Polygon plates and meshes were converted to stereolithography format and used in the software Magics for preparation of input files for the successive step, additive manufacturing. EBM was used to manufacture the customized implants in a biocompatible titanium grade, Ti6Al4V ELI. The implants and the cutting guide were cleaned and sterilized, then transferred to the operating theater, and applied during surgery. Commercially available software programs are sufficient for virtual planning of the production of patient-specific implants. Furthermore, EBM-produced implants are fully usable under clinical conditions in reconstruction of acquired defects in the mandible. A good compliance between the treatment plan and the fit was demonstrated during operation. Within the constraints of this article, the authors describe a workflow for production of patient-specific implants, using EBM manufacturing.
Titanium cutting guides, reconstruction plates for fixation of microvascular transfer of osteomyocutaneous bone grafts, and mesh to replace resected bone that can function as a carrier for bone or bone substitutes were designed and tested during reconstructive maxillofacial surgery. A clinical fit well within the requirements of what is needed and obtained using traditional freehand bending of commercially available devices, or even higher precision, was demonstrated in ablative surgery in four patients. PMID:23997858
How to benchmark methods for structure-based virtual screening of large compound libraries.
Christofferson, Andrew J; Huang, Niu
2012-01-01
Structure-based virtual screening is a useful computational technique for ligand discovery. To systematically evaluate different docking approaches, it is important to have a consistent benchmarking protocol that is both relevant and unbiased. Here, we describe the designing of a benchmarking data set for docking screen assessment, a standard docking screening process, and the analysis and presentation of the enrichment of annotated ligands among a background decoy database.
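One standard way to quantify the "enrichment of annotated ligands among a background decoy database" mentioned above is the enrichment factor: the fraction of actives recovered in the top x% of the ranked list, divided by x%. A small Python sketch on a synthetic ranking (not data from the protocol described here):

```python
# Enrichment factor EF(x%) for a docking screen ranking.
# ranked_labels: 1 = annotated active, 0 = decoy, best docking score first.
def enrichment_factor(ranked_labels, fraction):
    n = len(ranked_labels)
    n_top = max(1, int(n * fraction))
    actives_total = sum(ranked_labels)
    actives_top = sum(ranked_labels[:n_top])
    return (actives_top / actives_total) / fraction

# Synthetic example: 2 of 4 actives land in the top 10% of 100 compounds.
ranking = [1, 0, 1, 0, 0, 0, 0, 0, 0, 0] + [0] * 88 + [1, 1]
print(enrichment_factor(ranking, 0.10))  # 5.0, i.e. 5x better than random
```

EF is sensitive to the active/decoy ratio of the benchmark, which is one reason the design of the data set itself (as discussed above) matters as much as the metric.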
Javan Amoli, Amir Hossein; Maserat, Elham; Safdari, Reza; Zali, Mohammad Reza
2015-01-01
Decision making modalities for screening for many cancer conditions and different stages have become increasingly complex. Computer-based risk assessment systems facilitate scheduling and decision making and support the delivery of cancer screening services. The aim of this article was to survey an electronic risk assessment system as an appropriate tool for the prevention of cancer. A qualitative design was used involving 21 face-to-face interviews with managers of cancer screening programs. Of the participants, 6 were female and 15 were male, with ages ranging from 32 to 78 years. The study was based on a grounded theory approach and the tool was a semi-structured interview. Researchers studied 5 dimensions, comprising electronic guideline standards of colorectal cancer screening, workflow of clinical and genetic activities, pathways of colorectal cancer screening, functionality of computer-based guidelines, and barriers. Electronic guideline standards of colorectal cancer screening were described in 3 categories: content standards, telecommunications and technical standards, and nomenclature and classification standards. According to the participants' views, workflow and genetic pathways of colorectal cancer screening were identified. The study demonstrated an effective role of computer-guided consultation for screening management. Electronic systems facilitate real-time decision making during a clinical interaction. Electronic pathways have been applied for clinical and genetic decision support, workflow management, update recommendations and resource estimates. A suitable technical and clinical infrastructure is an integral part of a clinical practice guideline for screening. In conclusion, it is recommended to consider the necessity of architecture assessment and also integration standards.
PinAPL-Py: A comprehensive web-application for the analysis of CRISPR/Cas9 screens.
Spahn, Philipp N; Bath, Tyler; Weiss, Ryan J; Kim, Jihoon; Esko, Jeffrey D; Lewis, Nathan E; Harismendy, Olivier
2017-11-20
Large-scale genetic screens using CRISPR/Cas9 technology have emerged as a major tool for functional genomics. With its increased popularity, experimental biologists frequently acquire large sequencing datasets for which they often do not have an easy analysis option. While a few bioinformatic tools have been developed for this purpose, their utility is still hindered either due to limited functionality or the requirement of bioinformatic expertise. To make sequencing data analysis of CRISPR/Cas9 screens more accessible to a wide range of scientists, we developed a Platform-independent Analysis of Pooled Screens using Python (PinAPL-Py), which is operated as an intuitive web-service. PinAPL-Py implements state-of-the-art tools and statistical models, assembled in a comprehensive workflow covering sequence quality control, automated sgRNA sequence extraction, alignment, sgRNA enrichment/depletion analysis and gene ranking. The workflow is set up to use a variety of popular sgRNA libraries as well as custom libraries that can be easily uploaded. Various analysis options are offered, suitable to analyze a large variety of CRISPR/Cas9 screening experiments. Analysis output includes ranked lists of sgRNAs and genes, and publication-ready plots. PinAPL-Py helps to advance genome-wide screening efforts by combining comprehensive functionality with user-friendly implementation. PinAPL-Py is freely accessible at http://pinapl-py.ucsd.edu with instructions and test datasets.
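The core enrichment/depletion statistic in a pooled CRISPR screen can be illustrated simply: normalize read counts to counts-per-million, then rank sgRNAs by log2 fold change between treated and control samples. The counts below are made up, and PinAPL-Py's actual statistical models are considerably richer than this sketch:

```python
# Minimal sgRNA enrichment ranking: CPM normalization + log2 fold change.
# Counts are illustrative; a pseudocount of 0.5 avoids division by zero.
import math

control = {"sgA": 500, "sgB": 100, "sgC": 400}
treated = {"sgA": 50, "sgB": 400, "sgC": 350}

def cpm(counts):
    total = sum(counts.values())
    return {g: 1e6 * c / total for g, c in counts.items()}

ctrl_cpm, trt_cpm = cpm(control), cpm(treated)
lfc = {g: math.log2((trt_cpm[g] + 0.5) / (ctrl_cpm[g] + 0.5)) for g in control}
ranked = sorted(lfc, key=lfc.get, reverse=True)
print(ranked[0])  # sgB: the most enriched guide in this toy screen
```

Gene-level ranking then aggregates the per-sgRNA statistics of all guides targeting the same gene.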
Renard, Jean-Marie; Bourde, Annabel; Cuggia, Marc; Garcelon, Nicolas; Souf, Nathalie; Darmoni, Stephan; Beuscart, Régis; Brunetaud, Jean-Marc
2007-01-01
The " Université Médicale Virtuelle Francophone" (UMVF) is a federation of French medical schools. Its main goal is to share the production and use of pedagogic medical resources generated by academic medical teachers. We developed an Open-Source application based upon a workflow system, which provides an improved publication process for the UMVF. For teachers, the tool permits easy and efficient upload of new educational resources. For web masters it provides a mechanism to easily locate and validate the resources. For librarian it provide a way to improve the efficiency of indexation. For all, the utility provides a workflow system to control the publication process. On the students side, the application improves the value of the UMVF repository by facilitating the publication of new resources and by providing an easy way to find a detailed description of a resource and to check any resource from the UMVF to ascertain its quality and integrity, even if the resource is an old deprecated version. The server tier of the application is used to implement the main workflow functionalities and is deployed on certified UMVF servers using the PHP language, an LDAP directory and an SQL database. The client tier of the application provides both the workflow and the search and check functionalities. A unique signature for each resource, was needed to provide security functionality and is implemented using a Digest algorithm. The testing performed by Rennes and Lille verified the functionality and conformity with our specifications.
Nesaratnam, N; Thomas, P; Vivian, A
2017-10-01
Introduction: Dissociated tests of strabismus provide valuable information for diagnosis and monitoring of ocular misalignment in patients with normal retinal correspondence. However, they are vulnerable to operator error and rely on a fixed head position. Virtual reality headsets obviate the need for head fixation, while providing other clear theoretical advantages, including complete control over the illumination and targets presented for the patient's interaction. Purpose: We compared the performance of a virtual reality-based test of ocular misalignment to that of the traditional Lees screen, to establish the feasibility of using virtual reality technology in ophthalmic settings in the future. Methods: Three patients underwent a traditional Lees screen test, and a virtual reality headset-based test of ocular motility. The virtual reality headset-based programme consisted of an initial test to measure horizontal and vertical deviation, followed by a test for torsion. Results: The pattern of deviation obtained using the virtual reality-based test showed agreement with that obtained from the Lees screen for patients with a fourth nerve palsy, comitant esotropia, and restrictive thyroid eye disease. Conclusions: This study reports the first use of a virtual reality headset in assessing ocular misalignment, and demonstrates that it is a feasible dissociative test of strabismus.
MouseNet database: digital management of a large-scale mutagenesis project.
Pargent, W; Heffner, S; Schäble, K F; Soewarto, D; Fuchs, H; Hrabé de Angelis, M
2000-07-01
The Munich ENU Mouse Mutagenesis Screen is a large-scale mutant production, phenotyping, and mapping project. It encompasses two animal breeding facilities and a number of screening groups located in the general area of Munich. A central database is required to manage and process the immense amount of data generated by the mutagenesis project. This database, which we named MouseNet(c), runs on a Sybase platform and will eventually store and process all data from the entire project. In addition, the system comprises a portfolio of functions needed to support the workflow management of the core facility and the screening groups. MouseNet(c) will make all of the data available to the participating screening groups, and later to the international scientific community. MouseNet(c) will consist of three major software components:
* Animal Management System (AMS)
* Sample Tracking System (STS)
* Result Documentation System (RDS)
MouseNet(c) provides the following major advantages:
* accessibility from different client platforms via the Internet
* a full-featured multi-user system (including access restriction and data locking mechanisms)
* a professional RDBMS (relational database management system) running on a UNIX server platform
* workflow functions and a variety of plausibility checks.
Virtual Screening Approaches towards the Discovery of Toll-Like Receptor Modulators
Pérez-Regidor, Lucía; Zarioh, Malik; Ortega, Laura; Martín-Santamaría, Sonsoles
2016-01-01
This review aims to summarize the latest efforts performed in the search for novel chemical entities such as Toll-like receptor (TLR) modulators by means of virtual screening techniques. This is an emergent research field with only very recent (and successful) contributions. Identification of drug-like molecules with potential therapeutic applications for the treatment of a variety of TLR-regulated diseases has attracted considerable interest due to the clinical potential. Additionally, the virtual screening databases and computational tools employed have been overviewed in a descriptive way, widening the scope for researchers interested in the field. PMID:27618029
Davidson, Robert L; Weber, Ralf J M; Liu, Haoyu; Sharma-Oates, Archana; Viant, Mark R
2016-01-01
Metabolomics is increasingly recognized as an invaluable tool in the biological, medical and environmental sciences yet lags behind the methodological maturity of other omics fields. To achieve its full potential, including the integration of multiple omics modalities, the accessibility, standardization and reproducibility of computational metabolomics tools must be improved significantly. Here we present our end-to-end mass spectrometry metabolomics workflow in the widely used platform, Galaxy. Named Galaxy-M, our workflow has been developed for both direct infusion mass spectrometry (DIMS) and liquid chromatography mass spectrometry (LC-MS) metabolomics. The range of tools presented spans from processing of raw data, e.g. peak picking and alignment, through data cleansing, e.g. missing value imputation, to preparation for statistical analysis, e.g. normalization and scaling, and principal components analysis (PCA) with associated statistical evaluation. We demonstrate the ease of using these Galaxy workflows via the analysis of DIMS and LC-MS datasets, and provide PCA scores and associated statistics to help other users to ensure that they can accurately repeat the processing and analysis of these two datasets. Galaxy and data are all provided pre-installed in a virtual machine (VM) that can be downloaded from the GigaDB repository. Additionally, source code, executables and installation instructions are available from GitHub. The Galaxy platform has enabled us to produce an easily accessible and reproducible computational metabolomics workflow. More tools could be added by the community to expand its functionality. We recommend that Galaxy-M workflow files are included within the supplementary information of publications, enabling metabolomics studies to achieve greater reproducibility.
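One of the preparation steps named above, scaling before PCA, is easy to make concrete. The sketch below applies unit-variance ("auto") scaling to a single feature across samples; the intensity values are illustrative and not taken from the Galaxy-M datasets:

```python
# Autoscaling (mean-center, divide by standard deviation) of one metabolite
# feature across samples, as commonly done before PCA. Values are made up.
import statistics

def autoscale(values):
    mean = statistics.fmean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

intensities = [10.0, 12.0, 14.0, 16.0]
scaled = autoscale(intensities)
print(statistics.fmean(scaled))  # 0.0: centered, with unit variance
```

In a full workflow this runs per feature across the whole data matrix, after peak picking, alignment and missing-value imputation.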
Pei, Fen; Jin, Hongwei; Zhou, Xin; Xia, Jie; Sun, Lidan; Liu, Zhenming; Zhang, Liangren
2015-11-01
Toll-like receptor 8 agonists, which activate adaptive immune responses by inducing robust production of T-helper 1-polarizing cytokines, are promising candidates for vaccine adjuvants. As the binding site of toll-like receptor 8 is large and highly flexible, virtual screening by any individual method has inevitable limitations; thus, a comprehensive comparison of different methods may provide insights into seeking an effective strategy for the discovery of novel toll-like receptor 8 agonists. In this study, the performance of knowledge-based pharmacophore, shape-based 3D screening, and combined strategies was assessed against a maximum unbiased benchmarking data set containing 13 actives and 1302 decoys specialized for toll-like receptor 8 agonists. Prior structure-activity relationship knowledge was involved in knowledge-based pharmacophore generation, and a set of antagonists was innovatively used to verify the selectivity of the selected knowledge-based pharmacophore. The benchmarking data set was generated from our recently developed 'MUBD-DecoyMaker' protocol. The enrichment assessment demonstrated a considerable performance through our selected three-layer virtual screening strategy: knowledge-based pharmacophore (Phar1) screening, shape-based 3D similarity search (Q4_combo), and then a GOLD docking screen. This virtual screening strategy could be further employed to perform large-scale database screening and to discover novel toll-like receptor 8 agonists. © 2015 John Wiley & Sons A/S.
A Quantitative Visual Mapping and Visualization Approach for Deep Ocean Floor Research
NASA Astrophysics Data System (ADS)
Hansteen, T. H.; Kwasnitschka, T.
2013-12-01
Geological fieldwork on the sea floor is still impaired by our inability to resolve features on a sub-meter scale resolution in a quantifiable reference frame and over an area large enough to reveal the context of local observations. In order to overcome these issues, we have developed an integrated workflow of visual mapping techniques leading to georeferenced data sets which we examine using state-of-the-art visualization technology to recreate an effective working style of field geology. We demonstrate a microbathymetrical workflow, which is based on photogrammetric reconstruction of ROV imagery referenced to the acoustic vehicle track. The advantage over established acoustical systems lies in the true three-dimensionality of the data as opposed to the perspective projection from above produced by downward looking mapping methods. A full color texture mosaic derived from the imagery allows studies at resolutions beyond the resolved geometry (usually one order of magnitude below the image resolution) while color gives additional clues, which can only be partly resolved in acoustic backscatter. The creation of a three-dimensional model changes the working style from the temporal domain of a video recording back to the spatial domain of a map. We examine these datasets using a custom developed immersive virtual visualization environment. The ARENA (Artificial Research Environment for Networked Analysis) features a (lower) hemispherical screen at a diameter of six meters, accommodating up to four scientists at once thus providing the ability to browse data interactively among a group of researchers. This environment facilitates (1) the development of spatial understanding analogue to on-land outcrop studies, (2) quantitative observations of seafloor morphology and physical parameters of its deposits, (3) more effective formulation and communication of working hypotheses.
Mayo, Johnathan; Baur, Kilian; Wittmann, Frieder; Riener, Robert; Wolf, Peter
2018-01-01
Background: Goal-directed reaching for real-world objects by humans is enabled through visual depth cues. In virtual environments, the number and quality of available visual depth cues is limited, which may affect reaching performance and quality of reaching movements. Methods: We assessed three-dimensional reaching movements in five experimental groups, each with ten healthy volunteers. Three groups used a two-dimensional computer screen and two groups used a head-mounted display. The first screen group received the typically recreated visual depth cues, such as aerial and linear perspective, occlusion, shadows, and texture gradients. The second screen group received an abstract minimal rendering lacking those. The third screen group received the cues of the first screen group plus absolute depth cues enabled by the retinal image size of a known object, which were realized as visual renderings of the handheld device and a ghost handheld at the target location. The two head-mounted display groups received the same virtually recreated visual depth cues as the second or the third screen group, respectively. Additionally, they could rely on stereopsis and motion parallax due to head movements. Results and conclusion: All groups using the screen performed significantly worse than both groups using the head-mounted display in terms of completion time normalized by the straight-line distance to the target. Both groups using the head-mounted display achieved the optimal minimum in number of speed peaks and in hand path ratio, indicating that our subjects performed natural movements when using a head-mounted display. Virtually recreated visual depth cues had a minor impact on reaching performance. Only the screen group with rendered handhelds could outperform the other screen groups. Thus, if reaching performance in virtual environments is in the main scope of a study, we suggest applying a head-mounted display.
Otherwise, when two-dimensional screens are used, achievable performance is likely limited by the reduced depth perception and not just by subjects’ motor skills. PMID:29293512
The virtual mirror: a new interaction paradigm for augmented reality environments.
Bichlmeier, Christoph; Heining, Sandro Michael; Feuerstein, Marco; Navab, Nassir
2009-09-01
Medical augmented reality (AR) has been widely discussed within the medical imaging as well as computer aided surgery communities. Different systems for exemplary medical applications have been proposed. Some of them produced promising results. One major issue still hindering AR technology to be regularly used in medical applications is the interaction between physician and the superimposed 3-D virtual data. Classical interaction paradigms, for instance with keyboard and mouse, to interact with visualized medical 3-D imaging data are not adequate for an AR environment. This paper introduces the concept of a tangible/controllable Virtual Mirror for medical AR applications. This concept intuitively augments the direct view of the surgeon with all desired views on volumetric medical imaging data registered with the operation site without moving around the operating table or displacing the patient. We selected two medical procedures to demonstrate and evaluate the potentials of the Virtual Mirror for the surgical workflow. Results confirm the intuitiveness of this new paradigm and its perceptive advantages for AR-based computer aided interventions.
Lam, Walter Y H; Hsung, Richard T C; Choi, Winnie W S; Luk, Henry W K; Cheng, Leo Y Y; Pow, Edmond H N
2017-09-29
Accurate articulator-mounted casts are essential for occlusion analysis and for fabrication of dental prostheses. Although the axis orbital plane has been commonly used as the reference horizontal plane, some clinicians prefer to register the horizontal plane with a spirit level when the patient is in the natural head position (NHP) to avoid anatomic landmark variations. This article presents a digital workflow for registering the patient's horizontal plane in NHP on a virtual articulator. An orientation reference board is used to calibrate a stereophotogrammetry device and a 3-dimensional facial photograph with the patient in NHP. The horizontal plane can then be automatically registered to the patient's virtual model and aligned to the virtual articulator at the transverse horizontal axis level. This technique showed good repeatability with positional differences of less than 1 degree and 1 mm in 5 repeated measurements in 1 patient. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
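The repeatability figure above (under 1 degree across repeated registrations) amounts to measuring the angle between two registered horizontal-plane normals. A small vector-math sketch with illustrative, made-up normals, not the study's measurement data:

```python
# Angle between two plane normals, as a repeatability check for repeated
# horizontal-plane registrations. Both vectors are illustrative examples.
import math

def angle_deg(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    # Clamp to guard against rounding pushing the cosine outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

n_ref = (0.0, 0.0, 1.0)        # reference registration of the horizontal plane
n_rep = (0.01, 0.0, 0.99995)   # repeated registration, slightly tilted
print(angle_deg(n_ref, n_rep) < 1.0)  # True: within 1-degree repeatability
```

The companion 1 mm positional check would compare corresponding landmark coordinates between registrations in the same way.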
VirSSPA: a virtual reality tool for surgical planning workflow.
Suárez, C; Acha, B; Serrano, C; Parra, C; Gómez, T
2009-03-01
A virtual reality tool, called VirSSPA, was developed to optimize the planning of surgical processes. Segmentation algorithms for Computed Tomography (CT) images were implemented: a region-growing procedure for soft tissues and a thresholding algorithm for bones. The algorithms operate semiautomatically, since they only require the user to select seeds with the mouse on each tissue to be segmented. The novelty of the paper is the adaptation of an enhancement method based on histogram thresholding applied to CT images for surgical planning, which simplifies subsequent segmentation. A substantial improvement of the virtual reality tool VirSSPA was obtained with these algorithms. VirSSPA was used to optimize surgical planning, to decrease the time spent on surgical planning and to improve operative results. The success rate increases because surgeons are able to see the exact extent of the patient's ailment. This tool can decrease operating room time, thus resulting in reduced costs. Virtual simulation was effective for optimizing surgical planning, which could, consequently, result in improved outcomes with reduced costs.
Inda, Márcia A; van Batenburg, Marinus F; Roos, Marco; Belloum, Adam S Z; Vasunin, Dmitry; Wibisono, Adianto; van Kampen, Antoine H C; Breit, Timo M
2008-08-08
Chromosome location is often used as a scaffold to organize genomic information in both the living cell and molecular biological research. Thus, ever-increasing amounts of data about genomic features are stored in public databases and can be readily visualized by genome browsers. To perform in silico experimentation conveniently with this genomics data, biologists need tools to process and compare datasets routinely and explore the obtained results interactively. The complexity of such experimentation requires these tools to be based on an e-Science approach, hence generic, modular, and reusable. A virtual laboratory environment with workflows, workflow management systems, and Grid computation are therefore essential. Here we apply an e-Science approach to develop SigWin-detector, a workflow-based tool that can detect significantly enriched windows of (genomic) features in a (DNA) sequence in a fast and reproducible way. For proof-of-principle, we utilize a biological use case to detect regions of increased and decreased gene expression (RIDGEs and anti-RIDGEs) in human transcriptome maps. We improved the original method for RIDGE detection by replacing the costly step of estimation by random sampling with a faster analytical formula for computing the distribution of the null hypothesis being tested and by developing a new algorithm for computing moving medians. SigWin-detector was developed using the WS-VLAM workflow management system and consists of several reusable modules that are linked together in a basic workflow. The configuration of this basic workflow can be adapted to satisfy the requirements of the specific in silico experiment. As we show with the results from analyses in the biological use case on RIDGEs, SigWin-detector is an efficient and reusable Grid-based tool for discovering windows enriched for features of a particular type in any sequence of values. 
Thus, SigWin-detector provides the proof-of-principle for the modular e-Science based concept of integrative bioinformatics experimentation.
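The moving-median step mentioned above can be illustrated with a naive sliding-window implementation: a running median smooths per-gene expression values so that enriched windows (RIDGEs) stand out. This O(n·w log w) version is only a sketch; SigWin-detector's dedicated algorithm is faster:

```python
# Naive moving median over a sliding window, e.g. of per-gene expression
# values along a chromosome. Window width and data are illustrative.
import statistics

def moving_median(values, window):
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(statistics.median(values[lo:hi]))
    return out

expression = [1, 1, 9, 1, 1, 8, 8, 8, 1]
print(moving_median(expression, 3))
```

Note how the isolated spike (9) is suppressed while the sustained run of 8s survives: the median highlights contiguous enriched windows rather than single outliers, which is the property the RIDGE analysis relies on.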
Modernizing Earth and Space Science Modeling Workflows in the Big Data Era
NASA Astrophysics Data System (ADS)
Kinter, J. L.; Feigelson, E.; Walker, R. J.; Tino, C.
2017-12-01
Modeling is a major aspect of Earth and space science research. The development of numerical models of the Earth system, planetary systems or astrophysical systems is essential to linking theory with observations. Optimal use of observations that are quite expensive to obtain and maintain typically requires data assimilation that involves numerical models. In the Earth sciences, models of the physical climate system are typically used for data assimilation, climate projection, and inter-disciplinary research, spanning uses from analysis of multi-sensor data sets to decision-making in climate-sensitive sectors, with applications to ecosystems, hazards, and various biogeochemical processes. In space physics, most models are from first principles, require considerable expertise to run and are frequently modified significantly for each case study. The volume and variety of model output data from modeling Earth and space systems are rapidly increasing and have reached a scale where human interaction with data is prohibitively inefficient. A major barrier to progress is that the construction of modeling workflows isn't deemed by practitioners to be a design problem. Existing workflows have been created by a slow accretion of software, typically based on undocumented, inflexible scripts haphazardly modified by a succession of scientists and students not trained in modern software engineering methods. As a result, existing modeling workflows suffer from an inability to onboard new datasets into models; an inability to keep pace with accelerating data production rates; and irreproducibility, among other problems. These factors are creating an untenable situation for those conducting and supporting Earth system and space science. Improving modeling workflows requires investments in hardware, software and human resources.
This paper describes the critical path issues that must be targeted to accelerate modeling workflows, including script modularization, parallelization, and automation in the near term, and longer term investments in virtualized environments for improved scalability, tolerance for lossy data compression, novel data-centric memory and storage technologies, and tools for peer reviewing, preserving and sharing workflows, as well as fundamental statistical and machine learning algorithms.
NASA Astrophysics Data System (ADS)
Wyborn, L. A.; Fraser, R.; Evans, B. J. K.; Friedrich, C.; Klump, J. F.; Lescinsky, D. T.
2017-12-01
Virtual Research Environments (VREs) are now part of academic infrastructures. Online research workflows can be orchestrated whereby data can be accessed from multiple external repositories with processing taking place on public or private clouds, and centralised supercomputers using a mixture of user codes, and well-used community software and libraries. VREs enable distributed members of research teams to actively work together to share data, models, tools, software, workflows, best practices, infrastructures, etc. These environments and their components are increasingly able to support the needs of undergraduate teaching. External to the research sector, they can also be reused by citizen scientists, and be repurposed for industry users to help accelerate the diffusion and hence enable the translation of research innovations. The Virtual Geophysics Laboratory (VGL) in Australia was started in 2012, built using a collaboration between CSIRO, the National Computational Infrastructure (NCI) and Geoscience Australia, with support funding from the Australian Government Department of Education. VGL comprises three main modules that provide an interface to enable users to first select their required data; to choose a tool to process that data; and then access compute infrastructure for execution. VGL was initially built to enable a specific set of researchers in government agencies access to specific data sets and a limited number of tools. Over the years it has evolved into a multi-purpose Earth science platform with access to an increased variety of data (e.g., Natural Hazards, Geochemistry), a broader range of software packages, and an increasing diversity of compute infrastructures. This expansion has been possible because of the approach to loosely couple data, tools and compute resources via interfaces that are built on international standards and accessed as network-enabled services wherever possible. 
Although originally built for researchers who were not particular about general usability, an increasing emphasis on User Interfaces (UIs) and stability will lead to increased uptake in the education and industry sectors. Simultaneously, improvements are being added to facilitate access to data and tools by experienced researchers who want direct access to both data and flexible workflows.
Procedural Modeling for Rapid-Prototyping of Multiple Building Phases
NASA Astrophysics Data System (ADS)
Saldana, M.; Johanson, C.
2013-02-01
RomeLab is a multidisciplinary working group at UCLA that uses the city of Rome as a laboratory for the exploration of research approaches and dissemination practices centered on the intersection of space and time in antiquity. In this paper we present a multiplatform workflow for the rapid-prototyping of historical cityscapes through the use of geographic information systems, procedural modeling, and interactive game development. Our workflow begins by aggregating archaeological data in a GIS database. Next, 3D building models are generated from the ArcMap shapefiles in Esri CityEngine using procedural modeling techniques. A GIS-based terrain model is also adjusted in CityEngine to fit the building elevations. Finally, the terrain and city models are combined in Unity, a game engine which we used to produce web-based interactive environments which are linked to the GIS data using keyhole markup language (KML). The goal of our workflow is to demonstrate that knowledge generated within a first-person virtual world experience can inform the evaluation of data derived from textual and archaeological sources, and vice versa.
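The procedural step of this workflow (footprint polygons plus attributes becoming 3-D buildings) can be illustrated outside CityEngine. The sketch below is a minimal, hypothetical Python extrusion of a single GIS footprint; it is not the RomeLab code, and all names and data are illustrative.

```python
# Hypothetical sketch of the core procedural-modeling step: extruding a 2-D
# GIS building footprint (polygon vertices plus a height attribute) into a
# 3-D block, as CityEngine's procedural rules do at scale.

def extrude_footprint(footprint, height):
    """Turn a 2-D footprint into floor/roof vertices and side faces."""
    floor = [(x, y, 0.0) for x, y in footprint]
    roof = [(x, y, height) for x, y in footprint]
    n = len(footprint)
    # One quad per footprint edge; indices refer to the floor+roof list.
    sides = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]
    return floor + roof, sides

verts, faces = extrude_footprint([(0, 0), (10, 0), (10, 6), (0, 6)], 12.0)
print(len(verts), len(faces))  # 8 vertices, 4 side faces
```

A real pipeline would read the footprints and height attributes from the shapefile and emit a mesh format the game engine accepts; the extrusion logic itself stays this simple.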
Noeske, Tobias; Trifanova, Dina; Kauss, Valerjans; Renner, Steffen; Parsons, Christopher G; Schneider, Gisbert; Weil, Tanja
2009-08-01
We report the identification of novel potent and selective metabotropic glutamate receptor 1 (mGluR1) antagonists by virtual screening and subsequent hit optimization. For ligand-based virtual screening, molecules were represented by a topological pharmacophore descriptor (CATS-2D) and clustered by a self-organizing map (SOM). The most promising compounds were tested in mGluR1 functional and binding assays. We identified a potent chemotype exhibiting selective antagonistic activity at mGluR1 (functional IC50 = 0.74 ± 0.29 µM). Hit optimization yielded lead structure 16 with an affinity of Ki = 0.024 ± 0.001 µM and greater than 1000-fold selectivity for mGluR1 versus mGluR5. Homology-based receptor modelling suggests a binding site compatible with previously reported mutation studies. Our study demonstrates the usefulness of ligand-based virtual screening for scaffold-hopping and rapid lead structure identification in early drug discovery projects.
New developments in digital pathology: from telepathology to virtual pathology laboratory.
Kayser, Klaus; Kayser, Gian; Radziszowski, Dominik; Oehmann, Alexander
2004-01-01
To analyse the present status and future development of computerized diagnostic pathology in terms of workflow-integrative telepathology and the virtual laboratory. Telepathology has left its childhood. The technical development of telepathology is mature, in contrast to that of virtual pathology. Two kinds of virtual pathology laboratories are emerging: a) those with distributed pathologists and distributed (>=1) laboratories associated with individual biopsy stations/surgical theatres, and b) distributed pathologists working in a centralized laboratory. Both are under technical development. Telepathology can be used for e-learning and e-training in pathology, as demonstrated by Digital Lung Pathology (www.pathology-online.org). A virtual pathology institution (mode a) accepts a complete case with the patient's history, clinical findings, and (pre-selected) images for first diagnosis. The diagnostic responsibility is that of a conventional institution. The internet serves as the platform for information transfer, and an open server such as iPATH (http://telepath.patho.unibas.ch) for coordination and performance of the diagnostic procedure. The size of images has to be limited, and the usual different magnifications have to be used. A group of pathologists is "on duty", or selects one member for a predefined duty period. The diagnostic statement of the pathologist(s) on duty is retransmitted to the sender with full responsibility. First experiences of a virtual pathology group (Dr. L. Banach, Dr. G. Haroske, Dr. I. Hurwitz, Dr. K. Kayser, Dr. K.D. Kunze, Dr. M. Oberholzer) working via the iPATH server with a small hospital in the Solomon Islands are promising. A centralized virtual pathology institution (mode b) depends upon the digitalisation of a complete slide, and the transfer of large-sized images to different pathologists working in one institution. 
The technical performance of complete slide digitalisation is still under development and does not completely fulfil the requirements of a conventional pathology institution at present. VIRTUAL PATHOLOGY AND E-LEARNING: At present, e-learning systems are "stand-alone" solutions distributed on CD or via the internet. A characteristic example is the Digital Lung Pathology CD (www.pathology-online.org), which includes about 60 different rare and common lung diseases and internet access to scientific library systems (PubMed), distant measurement servers (EuroQuant), and electronic journals (Elec J Pathol Histol). A new and complete database based upon this CD will combine e-learning and e-teaching with the actual workflow in a virtual pathology institution (mode a). The technological problems are solved and do not depend upon technical constraints such as slide scanning systems. Telepathology serves as a promoter of a new landscape in diagnostic pathology, the so-called virtual pathology institution. Industrial and scientific efforts will probably allow an implementation of this technique within the next two years.
Grandjean, Geoffrey; Graham, Ryan; Bartholomeusz, Geoffrey
2011-11-01
In recent years high throughput screening operations have become a critical application in functional and translational research. Although a seemingly unmanageable amount of data is generated by these high-throughput, large-scale techniques, through careful planning, an effective Laboratory Information Management System (LIMS) can be developed and implemented in order to streamline all phases of a workflow. Just as important as data mining and analysis procedures at the end of complex processes is the tracking of individual steps of applications that generate such data. Ultimately, the use of a customized LIMS will enable users to extract meaningful results from large datasets while trusting the robustness of their assays. To illustrate the design of a custom LIMS, this practical example is provided to highlight the important aspects of the design of a LIMS to effectively modulate all aspects of an siRNA screening service. This system incorporates inventory management, control of workflow, data handling and interaction with investigators, statisticians and administrators. All these modules are regulated in a synchronous manner within the LIMS. © 2011 Bentham Science Publishers
Stewart, Eugene L; Brown, Peter J; Bentley, James A; Willson, Timothy M
2004-08-01
A methodology for the selection and validation of nuclear receptor ligand chemical descriptors is described. After descriptors for a targeted chemical space were selected, a virtual screening methodology utilizing this space was formulated for the identification of potential NR ligands from our corporate collection. Using simple descriptors and our virtual screening method, we are able to quickly identify potential NR ligands from a large collection of compounds. As validation of the virtual screening procedure, an 8,000-membered NR targeted set and a 24,000-membered diverse control set of compounds were selected from our in-house general screening collection and screened in parallel across a number of orphan NR FRET assays. For the two assays that provided at least one hit per set by the established minimum pEC50 for activity, the results showed a 2-fold increase in the hit-rate of the targeted compound set over the diverse set.
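The hit-rate comparison that validates a targeted set can be sketched in a few lines. The numbers below are hypothetical, not the study's data; only the fold-enrichment calculation itself is shown.

```python
# Illustrative sketch of hit-rate fold enrichment: a targeted compound set
# versus a diverse control set, given assay readouts and an activity cutoff.
# All pEC50 values and the threshold here are hypothetical.

def hit_rate(pec50_values, threshold=5.0):
    """Fraction of compounds whose pEC50 meets the activity cutoff."""
    hits = sum(1 for v in pec50_values if v >= threshold)
    return hits / len(pec50_values)

targeted = [5.2, 4.1, 6.0, 4.8, 5.5]   # hypothetical targeted-set readouts
diverse = [4.0, 4.2, 5.1, 3.9, 4.4]    # hypothetical control-set readouts

fold = hit_rate(targeted) / hit_rate(diverse)
print(round(fold, 1))  # 3 hits/5 vs 1 hit/5 -> 3.0-fold enrichment
```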
A large scale virtual screen of DprE1.
Wilsey, Claire; Gurka, Jessica; Toth, David; Franco, Jimmy
2013-12-01
Tuberculosis continues to plague the world, with the World Health Organization estimating that about one third of the world's population is infected. Due to the emergence of MDR and XDR strains of TB, the need for novel therapeutics has become increasingly urgent. Herein we report the results of a virtual screen of 4.1 million compounds against a promising drug target, DprE1. The virtual compounds were obtained from the ZINC docking database and screened using the molecular docking program AutoDock Vina. The computational hits have led to the identification of several promising lead compounds. Copyright © 2013 Elsevier Ltd. All rights reserved.
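A screen of this kind ultimately reduces to ranking the best-pose scores that AutoDock Vina reports. Vina writes each pose's score into the output PDBQT on `REMARK VINA RESULT:` lines; a minimal Python parser for those lines might look like the sketch below (the file contents here are illustrative).

```python
# AutoDock Vina records each docked pose in the output PDBQT as
# "REMARK VINA RESULT: <affinity kcal/mol> <rmsd_lb> <rmsd_ub>".
# Ranking a large screen means extracting the best (most negative)
# affinity per ligand and sorting. Minimal parsing sketch.

def best_affinity(pdbqt_text):
    """Return the best (most negative) Vina affinity in a result file."""
    scores = []
    for line in pdbqt_text.splitlines():
        if line.startswith("REMARK VINA RESULT:"):
            scores.append(float(line.split()[3]))
    return min(scores) if scores else None

sample = """REMARK VINA RESULT:    -9.1   0.000   0.000
REMARK VINA RESULT:    -8.4   2.113   3.077"""
print(best_affinity(sample))  # -9.1
```

Applied over millions of output files, sorting ligands by this value yields the ranked hit list from which leads are selected.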
NASA Astrophysics Data System (ADS)
Filgueira, R.; Ferreira da Silva, R.; Deelman, E.; Atkinson, M.
2016-12-01
We present the Data-Intensive workflows as a Service (DIaaS) model for enabling easy data-intensive workflow composition and deployment on clouds using containers. The backbone of the DIaaS model is Asterism, an integrated solution for running data-intensive stream-based applications on heterogeneous systems that combines the benefits of the dispel4py and Pegasus workflow systems. The stream-based executions of an Asterism workflow are managed by dispel4py, while the data movement between different e-Infrastructures and the coordination of the application execution are automatically managed by Pegasus. DIaaS combines the Asterism framework with Docker containers to provide an integrated, complete, easy-to-use, portable approach to run data-intensive workflows on distributed platforms. Three containers make up the DIaaS model: a Pegasus node, an MPI cluster, and an Apache Storm cluster. Container images are described as Dockerfiles (available online at http://github.com/dispel4py/pegasus_dispel4py), linked to Docker Hub for continuous integration (automated image builds) and for image storing and sharing. In this model, all required software (workflow systems and execution engines) for running scientific applications is packed into the containers, which significantly reduces the effort (and possible human errors) required by scientists or VRE administrators to build such systems. The most common use of DIaaS will be to act as a backend of VREs or Scientific Gateways to run data-intensive applications, deploying cloud resources upon request. We have demonstrated the feasibility of DIaaS using the data-intensive seismic ambient noise cross-correlation application (Figure 1). The application preprocesses (Phase1) and cross-correlates (Phase2) traces from several seismic stations. The application is submitted via Pegasus (Container1), and Phase1 and Phase2 are executed in the MPI (Container2) and Storm (Container3) clusters respectively. 
Although both phases could be executed within the same environment, this setup demonstrates the flexibility of DIaaS to run applications across e-Infrastructures. In summary, DIaaS delivers specialized software to execute data-intensive applications in a scalable, efficient, and robust manner reducing the engineering time and computational cost.
Fernandez Montenegro, Juan Manuel; Argyriou, Vasileios
2017-05-01
Alzheimer's screening tests are commonly used by doctors to diagnose the patient's condition and stage as early as possible. Most of these tests are based on pen-and-paper interaction and do not embrace the advantages provided by new technologies. This paper proposes novel Alzheimer's screening tests based on virtual environments and game principles using new immersive technologies combined with advanced Human Computer Interaction (HCI) systems. These new tests are focused on the immersion of the patient in a virtual room, in order to mislead and deceive the patient's mind. In addition, we propose two novel variations of the Turing Test, originally proposed by Alan Turing, as a method to detect dementia. As a result, four tests are introduced, demonstrating the wide range of screening mechanisms that could be designed using virtual environments and game concepts. The proposed tests are focused on the evaluation of memory loss related to common objects, recent conversations and events; the diagnosis of problems in expressing and understanding language; the ability to recognize abnormalities; and to differentiate between virtual worlds and reality, or humans and machines. The proposed screening tests were evaluated and tested using both patients and healthy adults in a comparative study with state-of-the-art Alzheimer's screening tests. The results show the capacity of the new tests to distinguish healthy people from Alzheimer's patients. Copyright © 2017. Published by Elsevier Inc.
Cloke, Jonathan; Matheny, Sharon; Swimley, Michelle; Tebbs, Robert; Burrell, Angelia; Flannery, Jonathan; Bastin, Benjamin; Bird, Patrick; Benzinger, M Joseph; Crowley, Erin; Agin, James; Goins, David; Salfinger, Yvonne; Brodsky, Michael; Fernandez, Maria Cristina
2016-11-01
The Applied Biosystems™ RapidFinder™ STEC Detection Workflow (Thermo Fisher Scientific) is a complete protocol for the rapid qualitative detection of Escherichia coli (E. coli) O157:H7 and the "Big 6" non-O157 Shiga-like toxin-producing E. coli (STEC) serotypes (defined as serogroups: O26, O45, O103, O111, O121, and O145). The RapidFinder STEC Detection Workflow makes use of either the automated preparation of PCR-ready DNA using the Applied Biosystems PrepSEQ™ Nucleic Acid Extraction Kit in conjunction with the Applied Biosystems MagMAX™ Express 96-well magnetic particle processor or the Applied Biosystems PrepSEQ Rapid Spin kit for manual preparation of PCR-ready DNA. Two separate assays comprise the RapidFinder STEC Detection Workflow, the Applied Biosystems RapidFinder STEC Screening Assay and the Applied Biosystems RapidFinder STEC Confirmation Assay. The RapidFinder STEC Screening Assay includes primers and probes to detect the presence of stx1 (Shiga toxin 1), stx2 (Shiga toxin 2), eae (intimin), and E. coli O157 gene targets. The RapidFinder STEC Confirmation Assay includes primers and probes for the "Big 6" non-O157 STEC and E. coli O157:H7. The use of these two assays in tandem allows a user to detect accurately the presence of the "Big 6" STECs and E. coli O157:H7. The performance of the RapidFinder STEC Detection Workflow was evaluated in a method comparison study, in inclusivity and exclusivity studies, and in a robustness evaluation. The assays were compared to the U.S. Department of Agriculture (USDA), Food Safety and Inspection Service (FSIS) Microbiology Laboratory Guidebook (MLG) 5.09: Detection, Isolation and Identification of Escherichia coli O157:H7 from Meat Products and Carcass and Environmental Sponges for raw ground beef (73% lean) and USDA/FSIS-MLG 5B.05: Detection, Isolation and Identification of Escherichia coli non-O157:H7 from Meat Products and Carcass and Environmental Sponges for raw beef trim. 
No statistically significant differences were observed between the reference method and the individual or combined kits forming the candidate assay using either of the DNA preparation kits (manual or automated extraction). For the inclusivity and exclusivity evaluation, the RapidFinder STEC Detection Workflow, comprising both RapidFinder STEC screening and confirmation kits, correctly identified all 50 target organism isolates and correctly excluded all 30 nontarget strains for both of the assays evaluated. The results of these studies demonstrate the sensitivity and selectivity of the RapidFinder STEC Detection Workflow for the detection of E. coli O157:H7 and the "Big 6" STEC serotypes in both raw ground beef and beef trim. The robustness testing demonstrated that minor variations in the method parameters did not impact the accuracy of the assay and highlighted the importance of following the correct incubation temperatures.
NASA's Hybrid Reality Lab: One Giant Leap for Full Dive
NASA Technical Reports Server (NTRS)
Delgado, Francisco J.; Noyes, Matthew
2017-01-01
This presentation demonstrates how NASA is using consumer VR headsets, game engine technology and NVIDIA's GPUs to create highly immersive future training systems augmented with extremely realistic haptic feedback, sound, and additional sensory information, and how these can be used to improve the engineering workflow. Included in this presentation are an environment simulation of the ISS, where users can interact with virtual objects, handrails, and tracked physical objects while inside VR; the integration of consumer VR headsets with the Active Response Gravity Offload System; and a space habitat architectural evaluation tool. Attendees will learn how the best elements of real and virtual worlds can be combined into a hybrid reality environment with tangible engineering and scientific applications.
Game-Based Virtual Worlds as Decentralized Virtual Activity Systems
NASA Astrophysics Data System (ADS)
Scacchi, Walt
There is widespread interest in the development and use of decentralized systems and virtual world environments as possible new places for engaging in collaborative work activities. Similarly, there is widespread interest in stimulating new technological innovations that enable people to come together through social networking, file/media sharing, and networked multi-player computer game play. A decentralized virtual activity system (DVAS) is a networked computer supported work/play system whose elements and social activities can be both virtual and decentralized (Scacchi et al. 2008b). Massively multi-player online games (MMOGs) such as World of Warcraft and online virtual worlds such as Second Life are each popular examples of a DVAS. Furthermore, these systems are beginning to be used for research, development, and education activities in different science, technology, and engineering domains (Bainbridge 2007, Bohannon et al. 2009; Rieber 2005; Scacchi and Adams 2007; Shaffer 2006), which are also of interest here. This chapter explores two case studies of DVASs developed at the University of California at Irvine that employ game-based virtual worlds to support collaborative work/play activities in different settings. The settings include those that model and simulate practical or imaginative physical worlds in different domains of science, technology, or engineering through alternative virtual worlds where players/workers engage in different kinds of quests or quest-like workflows (Jakobsson 2006).
Towards seamless workflows in agile data science
NASA Astrophysics Data System (ADS)
Klump, J. F.; Robertson, J.
2017-12-01
Agile workflows are a response to projects with requirements that may change over time. They prioritise rapid and flexible responses to change, preferring to adapt to changes in requirements rather than predict them before a project starts. This suits the needs of research very well because research is inherently agile in its methodology. The adoption of agile methods has made collaborative data analysis much easier in a research environment fragmented across institutional data stores, HPC, personal and lab computers and more recently cloud environments. Agile workflows use tools that share a common worldview: in an agile environment, there may be more than one valid version of data, code or environment in play at any given time. All of these versions need references and identifiers. For example, a team of developers following the git-flow conventions (github.com/nvie/gitflow) may have several active branches, one for each strand of development. These workflows allow rapid and parallel iteration while maintaining identifiers pointing to individual snapshots of data and code and allowing rapid switching between strands. In contrast, the current focus of versioning in research data management is geared towards managing data for reproducibility and long-term preservation of the record of science. While both are important goals in the persistent curation domain of the institutional research data infrastructure, current tools emphasise planning over adaptation and can introduce unwanted rigidity by insisting on a single valid version or point of truth. In the collaborative curation domain of a research project, things are more fluid. However, there is no equivalent to the "versioning iso-surface" of the git protocol for the management and versioning of research data. 
At CSIRO we are developing concepts and tools for the agile management of software code and research data for virtual research environments, based on our experiences of actual data analytics projects in the geosciences. We use code management that allows researchers to interact with the code through tools like Jupyter Notebooks while data are held in an object store. Our aim is an architecture allowing seamless integration of code development, data management, and data processing in virtual research environments.
Modeling of workflow-engaged networks on radiology transfers across a metro network.
Camorlinga, Sergio; Schofield, Bruce
2006-04-01
Radiology metro networks bear the challenging proposition of interconnecting several hospitals in a region to provide a comprehensive diagnostic imaging service. Consequences of a poorly designed and implemented metro network could cause delays or no access at all when health care providers try to retrieve medical cases across the network. This could translate into limited diagnostic services to patients, resulting in negative impacts to the patients' medical treatment. A workflow-engaged network (WEN) is a new network paradigm. A WEN appreciates radiology workflows and priorities in using the network. A WEN greatly improves the network performance by guaranteeing that critical image transfers experience minimal delay. It adjusts network settings to ensure the application's requirements are met. This means that high-priority image transfers will have guaranteed and known delay times, whereas lower-priority traffic will have increased delays. This paper introduces a modeling to understand the benefits that WEN brings to a radiology metro network. The modeling uses actual data patterns and flows found in a hospital metro region. The workflows considered are based on the Integrating the Healthcare Enterprise profiles. This modeling has been applied to metropolitan workflows of a health region. The modeling helps identify the kind of metro network that supports data patterns and flows in a metro area. The results of the modeling show that a 155-Mb/s metropolitan area network (MAN) with WEN operates virtually equal to a normal 622-Mb/s MAN without WEN, with potential cost savings for leased line services measured in the millions of dollars per year.
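The WEN principle, that prioritized service bounds the delay of critical transfers, can be illustrated with a toy single-link simulation. The sketch below is not the paper's model; the transfer sizes, link rate, and job names are hypothetical.

```python
# Toy illustration of the WEN idea: on a fixed-capacity link, serving
# transfers in priority order keeps the delay of a critical study small,
# while FIFO lets a large low-priority transfer block it.
import heapq

def completion_times(transfers, rate_mb_s, prioritized):
    """Serve transfers sequentially; return {name: finish_time_s}."""
    queue = list(transfers)
    if prioritized:
        heapq.heapify(queue)  # tuples (priority, size_mb, name); low = urgent
        order = [heapq.heappop(queue) for _ in range(len(queue))]
    else:
        order = queue         # arrival (FIFO) order
    t, finished = 0.0, {}
    for _, size_mb, name in order:
        t += size_mb / rate_mb_s
        finished[name] = t
    return finished

# A bulk prefetch arrives just before an urgent CT study.
jobs = [(2, 4000, "archive_prefetch"), (1, 40, "stat_ct")]
print(completion_times(jobs, rate_mb_s=80, prioritized=False)["stat_ct"])  # 50.5
print(completion_times(jobs, rate_mb_s=80, prioritized=True)["stat_ct"])   # 0.5
```

The same qualitative effect underlies the paper's headline result: with workflow-aware prioritization, a slower link can match the perceived performance of a much faster unprioritized one.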
Identifying Novel Molecular Structures for Advanced Melanoma by Ligand-Based Virtual Screening
Wang, Zhao; Lu, Yan; Seibel, William; Miller, Duane D.; Li, Wei
2009-01-01
We recently discovered a new class of thiazole analogs that are highly potent against melanoma cells. To expand the structure-activity relationship study and to explore potential new molecular scaffolds, we performed extensive ligand-based virtual screening against a compound library containing 342,910 small molecules. Two different approaches of virtual screening were carried out using the structure of our lead molecule: 1) connectivity-based search using SciTegic Pipeline Pilot from Accelrys and 2) molecular shape similarity search using Schrödinger software. Using a testing compound library, both approaches can rank similar compounds very high and rank dissimilar compounds very low, thus validating our screening methods. Structures identified from these searches were analyzed, and selected compounds were tested in vitro to assess their activity against melanoma cancer cell lines. Several molecules showed good anticancer activity. While none of the identified compounds showed better activity than our lead compound, they provided important insight into structural modifications for our lead compound and also provided novel platforms on which we can optimize new classes of anticancer compounds. One of the newly synthesized analogs based on this virtual screening has improved potency and selectivity against melanoma. PMID:19445498
Quantum probability ranking principle for ligand-based virtual screening.
Al-Dabbagh, Mohammed Mumtaz; Salim, Naomie; Himmat, Mubarak; Ahmed, Ali; Saeed, Faisal
2017-04-01
Chemical libraries contain thousands of compounds that need screening, which increases the need for computational methods that can rank or prioritize compounds. Virtual screening tools are widely exploited to enhance the cost effectiveness of lead drug discovery programs by ranking chemical compound databases in decreasing probability of biological activity based upon the probability ranking principle (PRP). In this paper, we develop a novel ranking approach for molecular compounds inspired by quantum mechanics, called the quantum probability ranking principle (QPRP). The QPRP ranking criterion draws an analogy between physical experiments and the molecular structure ranking process for 2D fingerprints in ligand-based virtual screening (LBVS). The development of QPRP in LBVS employs quantum concepts at three levels: first, at the representation level, a new framework of molecular representation connects molecular compounds with a mathematical quantum space; second, the similarity between chemical libraries and references is estimated using a quantum-based similarity searching method; finally, the molecules are ranked using the QPRP approach. Simulated virtual screening experiments with MDL Drug Data Report (MDDR) data sets showed that QPRP outperformed the classical probability ranking principle (PRP) for molecular chemical compounds.
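For context, the classical PRP baseline that QPRP is compared against amounts to sorting a library by decreasing similarity to a reference active. A minimal sketch using Tanimoto similarity on binary fingerprints (the bit sets and molecule names here are illustrative):

```python
# Classical PRP baseline for ligand-based virtual screening: rank library
# fingerprints by decreasing Tanimoto similarity to a reference active.
# Fingerprints are modeled as sets of on-bit indices for illustration.

def tanimoto(a, b):
    """Tanimoto coefficient of two binary fingerprints given as bit sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def prp_rank(reference, library):
    """Sort library IDs by decreasing similarity to the reference."""
    return sorted(library, key=lambda m: tanimoto(reference, library[m]),
                  reverse=True)

ref = {1, 4, 7, 9}
lib = {"mol_a": {1, 4, 7, 9, 12}, "mol_b": {2, 3}, "mol_c": {1, 4, 8}}
print(prp_rank(ref, lib))  # ['mol_a', 'mol_c', 'mol_b']
```

QPRP replaces this classical similarity-then-sort step with a quantum-space representation and similarity estimate, but the evaluation task, producing a ranked list to compare against known actives, is the same.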
Duffy, Fergal J; Verniere, Mélanie; Devocelle, Marc; Bernard, Elise; Shields, Denis C; Chubb, Anthony J
2011-04-25
We introduce CycloPs, software for the generation of virtual libraries of constrained peptides including natural and nonnatural commercially available amino acids. The software is written in the cross-platform Python programming language, and its features include generating virtual libraries in one-dimensional SMILES and three-dimensional SDF formats, suitable for virtual screening. The stand-alone software is capable of filtering the virtual libraries using empirical measurements, including peptide synthesizability by standard peptide synthesis techniques, stability, and the druglike properties of the peptide. The software and accompanying Web interface are designed to enable the rapid and convenient generation of large, structurally diverse, synthesizable virtual libraries of constrained peptides for use in virtual screening experiments. The stand-alone software, and the Web interface for evaluating these empirical properties of a single peptide, are available at http://bioware.ucd.ie.
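The core library-generation idea, assembling peptide SMILES from residue fragments, can be sketched briefly. This is not CycloPs code; the two-residue alphabet and fragment strings below are illustrative (stereochemistry omitted), whereas CycloPs itself covers natural and nonnatural amino acids plus cyclization chemistries.

```python
# Illustrative sketch of linear-peptide SMILES enumeration: concatenate
# backbone residue fragments and cap the chain with a free carboxylic acid.
import itertools

RESIDUES = {"G": "NCC(=O)", "A": "NC(C)C(=O)"}  # glycine, alanine (no stereo)

def peptide_smiles(sequence):
    """Linear peptide SMILES: join residue fragments, cap with a free acid."""
    return "".join(RESIDUES[aa] for aa in sequence) + "O"

def enumerate_library(length):
    """All linear peptides of a given length over the residue alphabet."""
    return [peptide_smiles(seq)
            for seq in itertools.product(RESIDUES, repeat=length)]

print(peptide_smiles("GA"))       # NCC(=O)NC(C)C(=O)O
print(len(enumerate_library(3)))  # 2**3 = 8 sequences
```

Downstream filters (synthesizability, stability, druglikeness) then prune the enumerated library before virtual screening.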
Evaluating progressive-rendering algorithms in appearance design tasks.
Ou, Jiawei; Karlik, Ondrej; Křivánek, Jaroslav; Pellacini, Fabio
2013-01-01
Progressive rendering is becoming a popular alternative to precomputational approaches to appearance design. However, progressive algorithms create images exhibiting visual artifacts at early stages. A user study investigated these artifacts' effects on user performance in appearance design tasks. Novice and expert subjects performed lighting and material editing tasks with four algorithms: random path tracing, quasirandom path tracing, progressive photon mapping, and virtual-point-light rendering. Both the novices and experts strongly preferred path tracing to progressive photon mapping and virtual-point-light rendering. None of the participants preferred random path tracing to quasirandom path tracing or vice versa; the same situation held between progressive photon mapping and virtual-point-light rendering. The user workflow didn’t differ significantly with the four algorithms. The Web Extras include a video showing how four progressive-rendering algorithms converged (at http://youtu.be/ck-Gevl1e9s), the source code used, and other supplementary materials.
Verma, Suzanne; Gonzalez, Marianela; Schow, Sterling R; Triplett, R Gilbert
This technical protocol outlines the use of computer-assisted image-guided technology for the preoperative planning and intraoperative procedures involved in implant-retained facial prosthetic treatment. A contributing factor for a successful prosthetic restoration is accurate preoperative planning to identify prosthetically driven implant locations that maximize bone contact and enhance cosmetic outcomes. Navigational systems virtually transfer precise digital planning into the operative field for placing implants to support prosthetic restorations. In this protocol, there is no need to construct a physical, and sometimes inaccurate, surgical guide. The report addresses treatment workflow, radiologic data specifications, and special considerations in data acquisition, virtual preoperative planning, and intraoperative navigation for the prosthetic reconstruction of unilateral, bilateral, and midface defects. Utilization of this protocol for the planning and surgical placement of craniofacial bone-anchored implants allows positioning of implants to be prosthetically driven, accurate, precise, and efficient, and leads to a more predictable treatment outcome.
A Hypermedia Representation of a Taxonomy of Usability Characteristics in Virtual Environments
2003-03-01
user, organization, and social workflow; needs analysis; and user modeling. A user task analysis generates critical information used throughout all...exist specific to VE user interaction [Gabbard and others, 1999]. Typically more than one person performs guidelines-based evaluations, since it’s...unlikely that any one person could identify all if not most of an interaction design’s usability problems. Nielsen [1994] recommends three to five
Sen, Hasan Tutkun; Bell, Muyinatu A Lediju; Zhang, Yin; Ding, Kai; Boctor, Emad; Wong, John; Iordachita, Iulian; Kazanzides, Peter
2017-07-01
We are developing a cooperatively controlled robot system for image-guided radiation therapy (IGRT) in which a clinician and robot share control of a 3-D ultrasound (US) probe. IGRT involves two main steps: 1) planning/simulation and 2) treatment delivery. The goals of the system are to provide guidance for patient setup and real-time target monitoring during fractionated radiotherapy of soft tissue targets, especially in the upper abdomen. To compensate for soft tissue deformations created by the probe, we present a novel workflow where the robot holds the US probe on the patient during acquisition of the planning computerized tomography image, thereby ensuring that planning is performed on the deformed tissue. The robot system introduces constraints (virtual fixtures) to help to produce consistent soft tissue deformation between simulation and treatment days, based on the robot position, contact force, and reference US image recorded during simulation. This paper presents the system integration and the proposed clinical workflow, validated by an in vivo canine study. The results show that the virtual fixtures enable the clinician to deviate from the recorded position to better reproduce the reference US image, which correlates with more consistent soft tissue deformation and the possibility for more accurate patient setup and radiation delivery.
A Cloud-based Infrastructure and Architecture for Environmental System Research
NASA Astrophysics Data System (ADS)
Wang, D.; Wei, Y.; Shankar, M.; Quigley, J.; Wilson, B. E.
2016-12-01
The present availability of high-capacity networks, low-cost computers and storage devices, and the widespread adoption of hardware virtualization and service-oriented architecture provide a great opportunity to share data and computing infrastructure between closely related research activities. By taking advantage of these approaches, along with the world-class high-performance computing and data infrastructure located at Oak Ridge National Laboratory, a cloud-based infrastructure and architecture has been developed to efficiently deliver essential data and informatics services and utilities to the environmental system research community. It provides unique capabilities that allow terrestrial ecosystem research projects to share their software utilities (tools), data, and even data submission workflows in a straightforward fashion. The infrastructure minimizes disruption to current project-based data submission workflows, improving acceptance by existing projects, since many ecosystem research projects already have their own requirements or preferences for data submission and collection. It also eliminates the scalability problems of current project silos by providing unified data services and infrastructure. The infrastructure consists of two key components: (1) a collection of configurable virtual computing environments and user management systems that expedite data submission and collection from the environmental system research community, and (2) scalable data management services and systems, originated and developed by ORNL data centers.
Luu, Van; Jona, Janan; Stanton, Mary K; Peterson, Matthew L; Morrison, Henry G; Nagapudi, Karthik; Tan, Helming
2013-01-30
A 96-well high-throughput cocrystal screening workflow has been developed, consisting of solvent-mediated sonic blending synthesis and on-plate solid/solution stability characterization by XRPD. A strategy of cocrystallization screening in selected blend solvents, including water mixtures, is proposed not only to manipulate the solubility of the cocrystal components but also to differentiate the physical stability of the cocrystal products. Caffeine-oxalic acid and theophylline-oxalic acid cocrystals were prepared and evaluated in relation to saturation levels of the cocrystal components and stability of the cocrystal products in anhydrous and hydrous solvents. AMG 517 was screened with a number of coformers, and the solid/solution stability of the resulting cocrystals on the 96-well plate was investigated. A stability trend was observed, confirming that cocrystals composed of coformers with lower aqueous solubility tended to be more stable in water. Furthermore, cocrystals that could be isolated under hydrous solvent blending conditions exhibited superior physical stability to those that could only be obtained under anhydrous conditions. This integrated HTS workflow provides an efficient, API-sparing route to screen and identify cocrystal candidates with appropriate solubility and solid/solution stability properties. Copyright © 2012 Elsevier B.V. All rights reserved.
Initial steps towards a production platform for DNA sequence analysis on the grid.
Luyf, Angela C M; van Schaik, Barbera D C; de Vries, Michel; Baas, Frank; van Kampen, Antoine H C; Olabarriaga, Silvia D
2010-12-14
Bioinformatics is confronted with a new data explosion due to the availability of high-throughput DNA sequencers. Data storage and analysis become a problem on local servers, and it therefore becomes necessary to switch to other IT infrastructures. Grid and workflow technology can help to handle the data more efficiently, as well as facilitate collaborations. However, interfaces to grids are often unfriendly to novice users. In this study we reused a platform that was developed in the VL-e project for the analysis of medical images. Data transfer, workflow execution and job monitoring are operated from one graphical interface. We developed workflows for two sequence alignment tools (BLAST and BLAT) as a proof of concept. The analysis time was significantly reduced. All workflows and executables are available to the members of the Dutch Life Science Grid and the VL-e Medical virtual organizations. All components are open source and can be transported to other grid infrastructures. The availability of in-house expertise and tools facilitates the usage of grid resources by new users. Our first results indicate that this is a practical, powerful and scalable solution to address the capacity and collaboration issues raised by the deployment of next generation sequencers. We currently apply this methodology on a daily basis for DNA sequencing and other applications. More information and source code is available via http://www.bioinformaticslaboratory.nl/
Routine Digital Pathology Workflow: The Catania Experience
Fraggetta, Filippo; Garozzo, Salvatore; Zannoni, Gian Franco; Pantanowitz, Liron; Rossi, Esther Diana
2017-01-01
Introduction: Successful implementation of whole slide imaging (WSI) for routine clinical practice has been accomplished in only a few pathology laboratories worldwide. We report the transition to an effective and complete digital surgical pathology workflow in the pathology laboratory at Cannizzaro Hospital in Catania, Italy. Methods: All (100%) permanent histopathology glass slides were digitized at ×20 using Aperio AT2 scanners. Compatible stain and scanning slide racks were employed to streamline operations. eSlide Manager software was bidirectionally interfaced with the anatomic pathology laboratory information system. Virtual slide trays connected to the two-dimensional (2D) barcode tracking system allowed pathologists to confirm that they were correctly assigned slides and that all tissues on these glass slides were scanned. Results: Over 115,000 glass slides were digitized with a scan fail rate of around 1%. Drying glass slides before scanning minimized them sticking to scanner racks. Implementation required introduction of a 2D barcode tracking system and modification of histology workflow processes. Conclusion: Our experience indicates that effective adoption of WSI for primary diagnostic use was more dependent on optimizing preimaging variables and integration with the laboratory information system than on information technology infrastructure and ensuring pathologist buy-in. Implementation of digital pathology for routine practice not only leveraged the benefits of digital imaging but also creates an opportunity for establishing standardization of workflow processes in the pathology laboratory. PMID:29416914
Virtual screening of cocrystal formers for CL-20
NASA Astrophysics Data System (ADS)
Zhou, Jun-Hong; Chen, Min-Bo; Chen, Wei-Ming; Shi, Liang-Wei; Zhang, Chao-Yang; Li, Hong-Zhen
2014-08-01
According to the structure characteristics of 2,4,6,8,10,12-hexanitrohexaazaisowurtzitane (CL-20) and the kinetic mechanism of the cocrystal formation, the method of virtual screening CL-20 cocrystal formers by the criterion of the strongest intermolecular site pairing energy (ISPE) was proposed. In this method the strongest ISPE was thought to determine the first step of the cocrystal formation. The prediction results for four sets of common drug molecule cocrystals by this method were compared with those by the total ISPE method from the reference (Musumeci et al., 2011), and the experimental results. This method was then applied to virtually screen the CL-20 cocrystal formers, and the prediction results were compared with the experimental results.
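The selection criterion described above, ranking candidate coformers by their strongest (most negative) intermolecular site pairing energy, can be sketched as follows. This is a minimal illustration only: the coformer names and energy values are hypothetical placeholders, not data from the study.

```python
# Illustrative sketch of the ISPE criterion: rank candidate coformers by
# their strongest (most negative) intermolecular site pairing energy.
# All names and energies below are hypothetical placeholder values.

def strongest_ispe(pair_energies):
    """Return the strongest (most negative) site pairing energy."""
    return min(pair_energies)

def rank_coformers(candidates):
    """Order coformers so the one with the strongest ISPE comes first."""
    return sorted(candidates, key=lambda name: strongest_ispe(candidates[name]))

# Hypothetical site-pairing energies (kJ/mol) between CL-20 interaction
# sites and each candidate coformer's sites.
candidates = {
    "coformer_A": [-12.4, -8.1, -3.3],
    "coformer_B": [-20.7, -5.0],
    "coformer_C": [-9.9, -9.5, -7.2],
}

print(rank_coformers(candidates))  # coformer_B has the strongest ISPE
```

A real implementation would compute the site pairing energies from molecular structures; the sketch only shows the ranking step that the criterion implies.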
Virtual Screening of Receptor Sites for Molecularly Imprinted Polymers.
Bates, Ferdia; Cela-Pérez, María Concepción; Karim, Kal; Piletsky, Sergey; López-Vilariño, José Manuel
2016-08-01
Molecularly Imprinted Polymers (MIPs) are highly advantageous in the field of analytical chemistry. However, interference from secondary molecules can impede capture of a target by a MIP receptor. This greatly complicates the design process and often requires extensive laboratory screening, which is time consuming, costly, and creates substantial waste. Herein is presented a new technique for screening "virtually imprinted receptors" for rebinding of the molecular template as well as secondary structures, correlating the virtual predictions with experimentally acquired data in three case studies. This novel technique is particularly applicable to the evaluation and prediction of MIP receptor specificity and efficiency in complex aqueous systems. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Zhu, Tian; Cao, Shuyi; Su, Pin-Chih; Patel, Ram; Shah, Darshan; Chokshi, Heta B; Szukala, Richard; Johnson, Michael E; Hevener, Kirk E
2013-09-12
A critical analysis of virtual screening results published between 2007 and 2011 was performed. The activity of reported hit compounds from over 400 studies was compared to their hit identification criteria. Hit rates and ligand efficiencies were calculated to assist in these analyses, and the results were compared with factors such as the size of the virtual library and the number of compounds tested. A series of promiscuity, druglike, and ADMET filters were applied to the reported hits to assess the quality of compounds reported, and a careful analysis of a subset of the studies that presented hit optimization was performed. These data allowed us to make several practical recommendations with respect to selection of compounds for experimental testing, definition of hit identification criteria, and general virtual screening hit criteria to allow for realistic hit optimization. A key recommendation is the use of size-targeted ligand efficiency values as hit identification criteria.
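The ligand-efficiency criterion recommended above can be illustrated with the commonly used approximation LE ≈ 1.37 × pIC50 / heavy-atom count (kcal/mol per heavy atom at 298 K). The compound values below are hypothetical, not taken from the surveyed studies.

```python
import math

# Ligand efficiency via the common approximation
#   LE ≈ 1.37 * pIC50 / heavy_atom_count   (kcal/mol per heavy atom, 298 K)
# since the binding free energy is roughly -RT * ln(IC50) ≈ 1.37 * pIC50.
# The example compound below is hypothetical.

def ligand_efficiency(ic50_molar, heavy_atoms):
    """Ligand efficiency from an IC50 (in mol/L) and a heavy-atom count."""
    p_ic50 = -math.log10(ic50_molar)
    return 1.37 * p_ic50 / heavy_atoms

# A 10 uM hit with 25 heavy atoms:
le = ligand_efficiency(10e-6, 25)
print(round(le, 2))  # ~0.27; a cutoff of >= 0.3 is often applied to hits
```

Size-targeted LE thresholds of this kind let small, weakly potent hits compete fairly with large, potent ones when defining hit identification criteria.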
DOE Office of Scientific and Technical Information (OSTI.GOV)
McLean, Larry R.; Zhang, Ying; Li, Hua
Biochemical and X-ray crystallographic studies confirmed that hydroxyquinoline derivatives identified by virtual screening were actually covalent inhibitors of the MIF tautomerase. Adducts were formed by N-alkylation of the Pro-1 at the catalytic site with a loss of an amino group of the inhibitor.
Torktaz, Ibrahim; Mohamadhashem, Faezeh; Esmaeili, Abolghasem; Behjati, Mohaddeseh; Sharifzadeh, Sara
2013-01-01
Metastasis is a crucial aspect of cancer. Macrophage stimulating protein (MSP) is a single-chain protein that can be cleaved by serum proteases and has several roles in metastasis. In this in silico study, MSP was considered as a drug target for metastasis. The crystallographic structure of MSP was retrieved from the Protein Data Bank. To find a chemical inhibitor of MSP, a library of KEGG compounds was screened and 1000 shape-complemented ligands were retrieved with the FindSite algorithm. Molegro Virtual Docker (MVD) software was used for docking simulation of the shape-complemented ligands against MSP. The MolDock score was used as the scoring function for virtual screening, and potential inhibitors with more negative binding energies were obtained. The PLANTS scoring function was used for re-evaluation of the virtual screening data. The top-ranked compound had a binding affinity to the MSP structure of -183.55 by MolDock score and -66.733 by PLANTS score. Based on the pharmacophore model of the potential inhibitor, this study suggests that the identified compound and its derivatives can be used for subsequent laboratory studies.
Shin, Woong-Hee; Kihara, Daisuke
2018-01-01
Virtual screening is a computational technique for predicting a potent binding compound for a receptor protein from a ligand library. It has been widely used in the drug discovery field to reduce the effort medicinal chemists spend finding hit compounds by experiment. Here, we introduce our novel structure-based virtual screening program, PL-PatchSurfer, which uses a molecular surface representation with three-dimensional Zernike descriptors, an effective mathematical representation for identifying physicochemical complementarity between local surfaces of a target protein and a ligand. The advantage of the surface-patch description is its tolerance to variation in receptor and compound structures. PL-PatchSurfer2 achieves higher accuracy on apo-form and computationally modeled receptor structures than conventional structure-based virtual screening programs. Thus, PL-PatchSurfer2 opens up an opportunity for targets that do not have crystal structures. The program is provided as a stand-alone program at http://kiharalab.org/plps2 . We also provide files for two ligand libraries, ChEMBL and ZINC Drug-like.
A Workflow to Investigate Exposure and Pharmacokinetic ...
Background: Adverse outcome pathways (AOPs) link adverse effects in individuals or populations to a molecular initiating event (MIE) that can be quantified using in vitro methods. Practical application of AOPs in chemical-specific risk assessment requires incorporation of knowledge on exposure, along with absorption, distribution, metabolism, and excretion (ADME) properties of chemicals.Objectives: We developed a conceptual workflow to examine exposure and ADME properties in relation to an MIE. The utility of this workflow was evaluated using a previously established AOP, acetylcholinesterase (AChE) inhibition.Methods: Thirty chemicals found to inhibit human AChE in the ToxCast™ assay were examined with respect to their exposure, absorption potential, and ability to cross the blood–brain barrier (BBB). Structures of active chemicals were compared against structures of 1,029 inactive chemicals to detect possible parent compounds that might have active metabolites.Results: Application of the workflow screened 10 “low-priority” chemicals of 30 active chemicals. Fifty-two of the 1,029 inactive chemicals exhibited a similarity threshold of ≥ 75% with their nearest active neighbors. Of these 52 compounds, 30 were excluded due to poor absorption or distribution. The remaining 22 compounds may inhibit AChE in vivo either directly or as a result of metabolic activation.Conclusions: The incorporation of exposure and ADME properties into the conceptual workflow e
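The screening logic of the workflow above can be sketched as two filters: deprioritize actives that fail absorption or blood-brain-barrier criteria, and flag inactives whose nearest active neighbor similarity is at least 75% as possible parents of active metabolites. All records below are hypothetical placeholders, not chemicals from the study.

```python
# Illustrative sketch of the two screening steps described above.
# All chemical records here are hypothetical placeholders.

def prioritize(actives):
    """Split actives into high- and low-priority sets by ADME flags."""
    high = [c for c in actives if c["absorbed"] and c["crosses_bbb"]]
    low = [c for c in actives if c not in high]
    return high, low

def flag_possible_parents(inactives, threshold=0.75):
    """Inactives whose nearest active neighbor similarity meets the cutoff."""
    return [c for c in inactives if c["nearest_active_similarity"] >= threshold]

actives = [
    {"name": "chem1", "absorbed": True, "crosses_bbb": True},
    {"name": "chem2", "absorbed": False, "crosses_bbb": True},
]
inactives = [
    {"name": "chem3", "nearest_active_similarity": 0.81},
    {"name": "chem4", "nearest_active_similarity": 0.40},
]

high, low = prioritize(actives)
print([c["name"] for c in high], [c["name"] for c in low])
print([c["name"] for c in flag_possible_parents(inactives)])
```

In the study itself, the similarity comparisons were made against structures of 1,029 inactive chemicals and the flagged compounds were further filtered for poor absorption or distribution; the sketch only shows the shape of the decision logic.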
Implementation of a fall screening program in a high risk of fracture population.
Ritchey, Katherine; Olney, Amanda; Shofer, Jane; Phelan, Elizabeth A; Matsumoto, Alvin M
2017-10-31
Fall prevention is an important way to prevent fractures in persons with osteoporosis. We developed and implemented a fall screening program in the context of routine osteoporosis care. This program was found to be feasible and showed that a significant proportion of persons with osteoporosis are at risk of falling. Falls are the most common cause of fracture in persons with osteoporosis. However, osteoporosis care rarely includes assessment and prevention of falling. We thus sought to assess the feasibility of a fall screening and management program integrated into routine osteoporosis care. The program was developed and offered to patients with osteoporosis or osteopenia seen at an outpatient clinic between May 2015 and May 2016. Feasibility was measured by the physical therapist time required to conduct screening and the ease of integrating the screening program into the usual clinic workflow. Self-report responses and mobility testing were used to describe the fall and fracture risk profile of the osteoporosis patients screened. Effects on fall-related care processes were assessed via chart abstraction of patient participation in fall prevention exercise. Of the 154 patients who presented for a clinic visit, 68% met screening criteria, and screening was completed in two-thirds of them. Screening was completed in a third of the time typically allotted for traditional PT evaluations and did not interfere with clinic workflow. Forty percent of those screened reported falling in the last year, and over half had two or more falls in the past year. Over half reported a balance or lower extremity impairment, and over 40% were below norms on one or more performance tests. Most patients who selected a group exercise fall prevention program completed all sessions, while only a quarter completed either supervised or independent home-based programs. Implementation of a fall risk screening program in an outpatient osteoporosis clinic appears feasible.
A substantial proportion of people with osteoporosis screened positive for being at risk of falling, justifying integration of fall prevention into routine osteoporosis care.
sRNAtoolboxVM: Small RNA Analysis in a Virtual Machine.
Gómez-Martín, Cristina; Lebrón, Ricardo; Rueda, Antonio; Oliver, José L; Hackenberg, Michael
2017-01-01
High-throughput sequencing (HTS) data for small RNAs (noncoding RNA molecules that are 20-250 nucleotides in length) can now be routinely generated by minimally equipped wet laboratories; however, the bottleneck in HTS-based research has now shifted to the analysis of this huge amount of data. One reason is that many analysis types require a Linux environment, but computers, system administrators, and bioinformaticians entail additional costs that often cannot be afforded by small to mid-sized groups or laboratories. Web servers are an alternative that can be used if the data is not subject to privacy issues (which is often an important concern with medical data). However, in any case they are less flexible than stand-alone programs, limiting the number of workflows and analysis types that can be carried out. We show in this protocol how virtual machines can be used to overcome these problems and limitations. sRNAtoolboxVM is a virtual machine that can be executed on all common operating systems through virtualization programs like VirtualBox or VMware, providing the user with a large number of preinstalled programs, like sRNAbench for small RNA analysis, without the need to maintain additional servers and/or operating systems.
Very large virtual compound spaces: construction, storage and utility in drug discovery.
Peng, Zhengwei
2013-09-01
Recent activities in the construction, storage and exploration of very large virtual compound spaces are reviewed in this report. As expected, the systematic exploration of compound spaces at the highest resolution (individual atoms and bonds) is intrinsically intractable. By contrast, by staying within a finite number of reactions and a finite number of reactants or fragments, several virtual compound spaces have been constructed in a combinatorial fashion with sizes ranging from 10^11 to 10^20 compounds. Multiple search methods have been developed to perform searches (e.g. similarity, exact and substructure) in those compound spaces without the need for full enumeration. The up-front investment in synthetic feasibility during the construction of some of these virtual compound spaces enables wider adoption by medicinal chemists to design and synthesize important compounds for drug discovery. Recent activity in exploring virtual compound spaces via evolutionary approaches based on genetic algorithms also suggests a positive shift of focus from method development to workflow, integration and ease of use, all of which are required for this approach to be widely adopted by medicinal chemists.
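The combinatorial arithmetic behind the 10^11-10^20 figure is simple: with a finite set of reactions, each drawing from finite reactant pools, the space size is the sum over reactions of the product of pool sizes. The reaction names and pool sizes below are hypothetical, chosen only to show how such sizes arise.

```python
# Back-of-the-envelope arithmetic for combinatorial virtual compound
# spaces: the total number of products is the sum, over reactions, of
# the product of the reactant-pool sizes. Pool sizes are hypothetical.

def space_size(reactions):
    """Total products; each reaction maps to a list of reactant-pool sizes."""
    total = 0
    for pools in reactions.values():
        n = 1
        for size in pools:
            n *= size
        total += n
    return total

reactions = {
    "two_component": [50_000, 40_000],            # 2e9 products
    "three_component": [10_000, 10_000, 10_000],  # 1e12 products
}
print(f"{space_size(reactions):.1e}")  # ~1e12 -- why full enumeration fails
```

Even these modest pool sizes already exceed what can be enumerated and stored, which is why the search methods mentioned above operate on the reaction/fragment representation rather than on enumerated structures.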
Large-scale virtual screening on public cloud resources with Apache Spark.
Capuccini, Marco; Ahmed, Laeeq; Schaal, Wesley; Laure, Erwin; Spjuth, Ola
2017-01-01
Structure-based virtual screening is an in silico method to screen a target receptor against a virtual molecular library. Applying docking-based screening to large molecular libraries can be computationally expensive; however, it constitutes a trivially parallelizable task. Most of the available parallel implementations are based on the message passing interface, relying on low-failure-rate hardware and fast network connections. Google's MapReduce revolutionized large-scale analysis, enabling the processing of massive datasets on commodity hardware and cloud resources, providing transparent scalability and fault tolerance at the software level. Open source implementations of MapReduce include Apache Hadoop and the more recent Apache Spark. We developed a method to run existing docking-based screening software on distributed cloud resources, utilizing the MapReduce approach. We benchmarked our method, which is implemented in Apache Spark, docking a publicly available target receptor against approximately 2.2 M compounds. The performance experiments show a good parallel efficiency (87%) when running in a public cloud environment. Our method enables parallel structure-based virtual screening on public cloud resources or commodity computer clusters. The degree of scalability that we achieve allows for trying out our method on relatively small libraries first and then scaling to larger libraries. Our implementation is named Spark-VS and it is freely available as open source from GitHub (https://github.com/mcapuccini/spark-vs).
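The map-reduce pattern the paper applies (the actual tool, Spark-VS, runs on Apache Spark) can be sketched with the standard library alone: map a scoring function over the library in parallel, then reduce to the best-scoring hits. The scoring function below is a hypothetical placeholder for an external docking call, not a real docking program.

```python
from concurrent.futures import ThreadPoolExecutor

# Minimal map-reduce sketch of parallel docking-based screening, using the
# standard library instead of Apache Spark. dock_score is a hypothetical
# placeholder standing in for a call to real docking software.

def dock_score(compound):
    """Placeholder scorer; returns (compound, score), lower = better."""
    return compound, -float(len(compound))  # toy score, not a docking score

def top_hits(library, n=3, workers=4):
    """Map: score compounds in parallel. Reduce: keep the n best scores."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scored = list(pool.map(dock_score, library))
    return sorted(scored, key=lambda pair: pair[1])[:n]

library = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCN"]
print(top_hits(library))  # best (most negative) scores first
```

In the real system the map step dispatches docking jobs across cloud nodes and the framework supplies fault tolerance; the sketch only shows the shape of the computation.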
Shi, Zheng; Yu, Tian; Sun, Rong; Wang, Shan; Chen, Xiao-Qian; Cheng, Li-Jia; Liu, Rong
2016-01-01
Human epidermal growth factor receptor-2 (HER2) is a trans-membrane receptor-like protein, and aberrant signaling of HER2 is implicated in many human cancers, such as ovarian cancer, gastric cancer, and prostate cancer, and most notably breast cancer. Moreover, it has been in the spotlight in recent years as a promising new target for breast cancer therapy. Since virtual screening has become an integral part of the drug discovery process, it is of great significance to identify novel HER2 inhibitors by structure-based virtual screening. In this study, we carried out a series of bioinformatics approaches, such as virtual screening and molecular dynamics (MD) simulations, to identify HER2 inhibitors among Food and Drug Administration-approved small molecule drugs as potential "new use" drugs. Molecular docking identified the top 10 potential drugs, which showed a spectrum of affinities to HER2. Moreover, MD simulations suggested that ZINC08214629 (Nonoxynol-9) and ZINC03830276 (Benzonatate) might exert potential inhibitory effects against HER2-targeted anti-breast cancer therapeutics. Together, our findings may provide a successful application of virtual screening in the lead discovery process and suggest that the discovered small molecules could be effective HER2 inhibitor candidates for further study. In summary, a series of bioinformatics approaches, including virtual screening and molecular dynamics (MD) simulations, were employed to identify HER2 inhibitors. Molecular docking identified the top 10 candidate compounds, which showed a spectrum of affinities to HER2. Further MD simulations suggested that, among the candidate compounds, ZINC08214629 (Nonoxynol-9) and ZINC03830276 (Benzonatate) were identified as potential "new use" drugs for HER2-targeted anti-breast cancer therapeutics.
Abbreviations used: HER2: Human epidermal growth factor receptor-2, FDA: Food and Drug Administration, PDB: Protein Database Bank, RMSDs: Root mean square deviations, SPC: Single point charge, PME: Particle mesh Ewald, NVT: Constant volume, NPT: Constant pressure, RMSF: Root-mean-square fluctuation.
Islam, Md Ataul; Pillay, Tahir S
2017-08-01
In this study, we searched for potential DNA GyrB inhibitors using pharmacophore-based virtual screening followed by molecular docking and molecular dynamics simulation approaches. For this purpose, a set of 248 DNA GyrB inhibitors was collected from the literature and a well-validated pharmacophore model was generated. The best pharmacophore model indicated that two hydrogen bond acceptor and two hydrophobic features were critical for inhibition of DNA GyrB. The good statistical results of the pharmacophore model indicated that it was robust. Virtual screening of molecular databases revealed three molecules as potential antimycobacterial agents. The final screened compounds were evaluated in molecular docking and molecular dynamics simulation studies. In the molecular dynamics studies, RMSD and RMSF values clearly showed that the screened compounds formed stable complexes with DNA GyrB. Therefore, it can be concluded that the compounds identified may have potential for the treatment of TB. © 2017 John Wiley & Sons A/S.
NASA Astrophysics Data System (ADS)
Foglini, Federica; Grande, Valentina; De Leo, Francesco; Mantovani, Simone; Ferraresi, Sergio
2017-04-01
EVER-EST offers a framework based on advanced services delivered both at the e-infrastructure and domain-specific level, with the objective of supporting each phase of the Earth Science Research and Information Lifecycle. It provides innovative e-research services to Earth Science user communities for communication, cross-validation and the sharing of knowledge and science outputs. The project follows a user-centric approach: real use cases taken from pre-selected Virtual Research Communities (VRC) covering different Earth Science research scenarios drive the implementation of the Virtual Research Environment (VRE) services and capabilities. The Sea Monitoring community is involved in the evaluation of the EVER-EST infrastructure. The community of potential users is wide and heterogeneous, including both multi-disciplinary scientists and national/international agencies and authorities (e.g. MPA directors, technicians from regional agencies like ARPA in Italy, and technicians working for the Ministry of the Environment) dealing with the adoption of a better way of measuring the quality of the environment. The scientific community has the main role of assessing the best criteria and indicators for defining the Good Environmental Status (GES) in their own sub regions, and implementing methods, protocols and tools for monitoring the GES descriptors. According to the Marine Strategy Framework Directive (MSFD), the environmental status of marine waters is defined by 11 descriptors, with a proposed set of 29 associated criteria and 56 different indicators. The objective of the Sea Monitoring VRC is to provide useful and applicable contributions to the evaluation of the descriptors D1.Biodiversity, D2.Non-indigenous species and D6.Seafloor Integrity (http://ec.europa.eu/environment/marine/good-environmental-status/index_en.htm). The main challenges for the community members are: 1. discovery of existing data and products distributed among different infrastructures; 2. sharing methodologies for GES evaluation and monitoring; 3. working on the same workflows and data; 4. adopting shared, powerful tools for data processing (e.g. software and servers). The Sea Monitoring portal provides the VRC users with tools and services aimed at enhancing their ability to interoperate and share knowledge, experience and methods for GES assessment and monitoring, such as: • digital information services for data management, exploitation and preservation (accessibility of heterogeneous data sources including associated documentation); • e-collaboration services to communicate and share knowledge, ideas, protocols and workflows; • e-learning services to facilitate the use of common workflows for assessing GES indicators; • e-research services for workflow management, validation and verification, as well as visualization and interactive services. The current study is co-financed by the European Union's Horizon 2020 research and innovation programme under the EVER-EST project (Grant Agreement No. 674907).
Gough, Albert; Shun, Tongying; Taylor, D. Lansing; Schurdak, Mark
2016-01-01
Heterogeneity is well recognized as a common property of cellular systems that impacts biomedical research and the development of therapeutics and diagnostics. Several studies have shown that analysis of heterogeneity gives insight into the mechanisms of action of perturbagens, can be used to predict optimal combination therapies, and can quantify heterogeneity in tumors, where heterogeneity is believed to be associated with adaptation and resistance. Cytometry methods, including high content screening (HCS), high throughput microscopy, flow cytometry, mass spectrometry imaging, and digital pathology, capture cell-level data for populations of cells. However, it is often assumed that the population response is normally distributed and therefore that the average adequately describes the results. A deeper understanding of the measurements and more effective comparison of perturbagen effects require analysis that takes into account the distribution of the measurements, i.e. the heterogeneity. However, the reproducibility of heterogeneous data collected on different days, and in different plates/slides, has not previously been evaluated. Here we show that conventional assay quality metrics alone are not adequate for quality control of the heterogeneity in the data. To address this need, we demonstrate the use of the Kolmogorov-Smirnov statistic as a metric for monitoring the reproducibility of heterogeneity in an SAR screen and describe a workflow for quality control in heterogeneity analysis. One major challenge in high throughput biology is the evaluation and interpretation of heterogeneity in thousands of samples, such as compounds in a cell-based screen. In this study we also demonstrate that three previously reported heterogeneity indices capture the shapes of the distributions and provide a means to filter and browse big data sets of cellular distributions in order to compare and identify distributions of interest.
These metrics and methods are presented as a workflow for analysis of heterogeneity in large scale biology projects. PMID:26476369
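The two-sample Kolmogorov-Smirnov statistic used above for monitoring reproducibility is simply the maximum vertical distance between two empirical CDFs, so it compares entire distributions without assuming normality. A minimal pure-Python sketch (the per-cell readout values are invented for illustration, not taken from the study):

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the empirical CDFs of the two samples."""
    a, b = sorted(sample_a), sorted(sample_b)
    na, nb = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < na and j < nb:
        x = min(a[i], b[j])
        # Advance past all values equal to x in both samples (tie handling).
        while i < na and a[i] == x:
            i += 1
        while j < nb and b[j] == x:
            j += 1
        d = max(d, abs(i / na - j / nb))
    return d

# Per-cell readouts from the "same" condition measured on two days:
# a small D suggests the heterogeneity itself is reproducible.
day1 = [0.8, 0.9, 0.95, 1.0, 1.0, 1.1, 1.1, 1.2]
day2 = [0.85, 0.9, 0.9, 1.0, 1.0, 1.05, 1.1, 1.15]
print(ks_statistic(day1, day2))  # → 0.125
```

Identical distributions give D = 0 and fully disjoint ones give D = 1, so a fixed threshold on D can serve as a plate-to-plate QC gate in situations where a plate-averaged metric would pass a shifted or reshaped distribution.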
Chen, H F; Dong, X C; Zen, B S; Gao, K; Yuan, S G; Panaye, A; Doucet, J P; Fan, B T
2003-08-01
An efficient virtual and rational drug design method is presented. It combines virtual bioactive compound generation with a 3D-QSAR model and docking. Using this method, it is possible to generate a large number of highly diverse molecules and find virtual active lead compounds. The method was validated by the study of a set of anti-tumor drugs. With the constraints of the pharmacophore obtained by DISCO implemented in SYBYL 6.8, 97 virtual bioactive compounds were generated, and their anti-tumor activities were predicted by CoMFA. Eight structures with high activity were selected and screened by the 3D-QSAR model. The most active generated structure was further investigated by modifying its structure in order to increase the activity. A comparative docking study with the telomeric receptor was carried out, and the results showed that the generated structures could form more stable complexes with the receptor than the reference compound selected from experimental data. This investigation showed that the proposed method is a feasible approach to rational drug design with high screening efficiency.
Impact of a Virtual Clinic in a Paediatric Cardiology Network on Northeast Brazil.
de Araújo, Juliana Sousa Soares; Dias Filho, Adalberto Vieira; Silva Gomes, Renata Grigório; Regis, Cláudio Teixeira; Rodrigues, Klecida Nunes; Siqueira, Nicoly Negreiros; Albuquerque, Fernanda Cruz de Lira; Mourato, Felipe Alves; Mattos, Sandra da Silva
2015-01-01
Introduction. Congenital heart diseases (CHD) affect approximately 1% of live births and are an important cause of neonatal morbidity and mortality. Despite that, there is a shortage of paediatric cardiologists in Brazil, mainly in the northern and northeastern regions. In this context, the implementation of virtual outpatient clinics with the aid of different telemedicine resources may help in the care of children with heart defects. Methods. Patients under 18 years of age treated in virtual outpatient clinics between January 2013 and May 2014 were selected. They were divided into 2 groups: those who had and those who had not undergone a screening process for CHD in the neonatal period. Clinical and demographic characteristics were collected for further statistical analysis. Results. A total of 653 children and teenagers were treated in the virtual outpatient clinics. Of these, 229 had undergone a neonatal screening process. Fewer abnormalities were observed on the physical examination of the screened patients. Conclusion. The implementation of paediatric cardiology virtual outpatient clinics can have a positive impact on the care provided to people in areas lacking skilled professionals.
Lee, Hyun; Mittal, Anuradha; Patel, Kavankumar; Gatuz, Joseph L; Truong, Lena; Torres, Jaime; Mulhearn, Debbie C; Johnson, Michael E
2014-01-01
We have used a combination of virtual screening (VS) and high-throughput screening (HTS) techniques to identify novel, non-peptidic small molecule inhibitors against human SARS-CoV 3CLpro. A structure-based VS approach integrating docking and pharmacophore based methods was employed to computationally screen 621,000 compounds from the ZINC library. The screening protocol was validated using known 3CLpro inhibitors and was optimized for speed, improved selectivity, and for accommodating receptor flexibility. Subsequently, a fluorescence-based enzymatic HTS assay was developed and optimized to experimentally screen approximately 41,000 compounds from four structurally diverse libraries chosen mainly based on the VS results. False positives from initial HTS hits were eliminated by a secondary orthogonal binding analysis using surface plasmon resonance (SPR). The campaign identified a reversible small molecule inhibitor exhibiting mixed-type inhibition with a Ki value of 11.1 μM. Together, these results validate our protocols as suitable approaches to screen virtual and chemical libraries, and the newly identified compound reported in our study represents a promising structural scaffold to pursue for further SARS-CoV 3CLpro inhibitor development. Copyright © 2013. Published by Elsevier Ltd.
Anti-nuclear antibody screening using HEp-2 cells.
Buchner, Carol; Bryant, Cassandra; Eslami, Anna; Lakos, Gabriella
2014-06-23
The American College of Rheumatology position statement on ANA testing stipulates the use of IIF as the gold standard method for ANA screening(1). Although IIF is an excellent screening test in expert hands, the technical difficulties of processing and reading IIF slides (the labor-intensive slide processing, manual reading, the need for experienced, trained technologists and the use of a darkroom) make the IIF method difficult to fit into the workflow of modern, automated laboratories. The first and crucial step towards high quality ANA screening is careful slide processing. This procedure is labor intensive, and requires full understanding of the process, as well as attention to detail and experience. Slide reading is performed by fluorescence microscopy in darkrooms, and is done by trained technologists who are familiar with the various patterns, in the context of the cell cycle and the morphology of interphase and dividing cells. Given that IIF is the first-line screening tool for SARD, understanding the steps to correctly perform this technique is critical. Recently, digital imaging systems have been developed for the automated reading of IIF slides. These systems, such as the NOVA View Automated Fluorescent Microscope, are designed to streamline the routine IIF workflow. NOVA View acquires and stores high resolution digital images of the wells, thereby separating image acquisition from interpretation; images are viewed and interpreted on high resolution computer monitors. It stores images for future reference and supports the operator's interpretation by providing fluorescent light intensity data on the images. It also preliminarily categorizes results as positive or negative, and provides pattern recognition for positive samples. In summary, it eliminates the need for a darkroom, and automates and streamlines the IIF reading/interpretation workflow. Most importantly, it increases consistency between readers and readings.
Moreover, with the use of barcoded slides, transcription errors are eliminated by providing sample traceability and positive patient identification. This results in increased patient data integrity and safety. The overall goal of this video is to demonstrate the IIF procedure, including slide processing, identification of common IIF patterns, and the introduction of new advancements to simplify and harmonize this technique.
Hristozov, Dimitar P; Oprea, Tudor I; Gasteiger, Johann
2007-01-01
Four different ligand-based virtual screening scenarios are studied: (1) prioritizing compounds for subsequent high-throughput screening (HTS); (2) selecting a predefined (small) number of potentially active compounds from a large chemical database; (3) assessing the probability that a given structure will exhibit a given activity; (4) selecting the most active structure(s) for a biological assay. Each of the four scenarios is exemplified by performing retrospective ligand-based virtual screening for eight different biological targets using two large databases, MDDR and WOMBAT. A comparison between the chemical spaces covered by these two databases is presented. The performance of two techniques for ligand-based virtual screening, similarity search with subsequent data fusion (SSDF) and novelty detection with Self-Organizing Maps (ndSOM), is investigated. Three different structure representations are compared: 2,048-dimensional Daylight fingerprints; topological autocorrelation weighted by atomic physicochemical properties (sigma electronegativity, polarizability, partial charge, and identity); and radial distribution functions weighted by the same atomic physicochemical properties. Both methods were found applicable in scenario one. The similarity search was found to perform slightly better in scenario two, while the SOM novelty detection is preferred in scenario three. No method/descriptor combination achieved significant success in scenario four.
RAVEN Quality Assurance Activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cogliati, Joshua Joseph
2015-09-01
This report discusses the quality assurance activities needed to raise the Quality Level of Risk Analysis in a Virtual Environment (RAVEN) from Quality Level 3 to Quality Level 2. This report also describes the general RAVEN quality assurance activities. For improving the quality, reviews of code changes have been instituted, more parts of testing have been automated, and improved packaging has been created. For upgrading the quality level, requirements have been created and the workflow has been improved.
Turk, Samo; Kovac, Andreja; Boniface, Audrey; Bostock, Julieanne M; Chopra, Ian; Blanot, Didier; Gobec, Stanislav
2009-03-01
The ATP-dependent Mur ligases (MurC, MurD, MurE and MurF) successively add L-Ala, D-Glu, meso-A2pm or L-Lys, and D-Ala-D-Ala to the nucleotide precursor UDP-MurNAc, and they represent promising targets for antibacterial drug discovery. We have used the molecular docking programme eHiTS for the virtual screening of 1990 compounds from the National Cancer Institute 'Diversity Set' on MurD and MurF. The 50 top-scoring compounds from screening on each enzyme were selected for experimental biochemical evaluation. Our approach of virtual screening and subsequent in vitro biochemical evaluation of the best-ranked compounds has provided four novel MurD inhibitors (best IC50 = 10 μM) and one novel MurF inhibitor (IC50 = 63 μM).
Azizian, Homa; Bagherzadeh, Kowsar; Shahbazi, Sophia; Sharifi, Niusha; Amanlou, Massoud
2017-09-18
Respiratory chain ubiquinol-cytochrome (cyt) c oxidoreductase (cyt bc1, or complex III) has been demonstrated to be a promising target for numerous antibiotic and fungicide applications. In this study, a virtual screening of the NCI diversity database was carried out in order to find novel Qo/Qi cyt bc1 complex inhibitors. Structure-based virtual screening and molecular docking methodology were employed to further screen compounds with inhibitory activity against the cyt bc1 complex, after an extensive reliability validation protocol with a cross-docking method and identification of the best scoring functions. Subsequently, the application of a rational filtering procedure over the target database resulted in the elucidation of a novel class of potent cyt bc1 complex inhibitors with binding energies and biological activities comparable to those of the standard inhibitor, antimycin.
Interactive 3D visualization for theoretical virtual observatories
NASA Astrophysics Data System (ADS)
Dykes, T.; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.
2018-06-01
Virtual observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of data sets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via e.g. mock imaging in 2D or volume rendering in 3D. We analyse the current state of 3D visualization for big theoretical astronomical data sets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3D visualization and how it can augment the workflow of users in a virtual observatory context. Finally we showcase a lightweight client-server visualization tool for particle-based data sets, allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.
Roth, Christopher J; Boll, Daniel T; Wall, Lisa K; Merkle, Elmar M
2010-08-01
The purpose of this investigation was to assess workflow for medical imaging studies, specifically comparing liver and knee MRI examinations by use of the Lean Six Sigma methodologic framework. The hypothesis tested was that the Lean Six Sigma framework can be used to quantify MRI workflow and to identify sources of inefficiency to target for sequence and protocol improvement. Audio-video interleave streams representing individual acquisitions were obtained with graphic user interface screen capture software in the examinations of 10 outpatients undergoing MRI of the liver and 10 outpatients undergoing MRI of the knee. With Lean Six Sigma methods, the audio-video streams were dissected into value-added time (true image data acquisition periods), business value-added time (time spent that provides no direct patient benefit but is requisite in the current system), and non-value-added time (scanner inactivity while awaiting manual input). For overall MRI table time, value-added time was 43.5% (range, 39.7-48.3%) of the time for liver examinations and 89.9% (range, 87.4-93.6%) for knee examinations. Business value-added time was 16.3% of the table time for the liver and 4.3% of the table time for the knee examinations. Non-value-added time was 40.2% of the overall table time for the liver and 5.8% for the knee examinations. Liver MRI examinations consume statistically significantly more non-value-added and business value-added time than do knee examinations, primarily because of respiratory command management and contrast administration. Workflow analyses and accepted inefficiency-reduction frameworks can be applied with use of a graphic user interface screen capture program.
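The three-category decomposition above reduces to simple bookkeeping once each segment of the recorded table time is labeled. A short sketch, with invented segment labels and durations (not the study's data):

```python
# VA = value-added (image acquisition), BVA = business value-added
# (e.g. contrast administration, breathing instructions), NVA =
# non-value-added (scanner idle while awaiting manual input).
segments = [("VA", 210), ("NVA", 95), ("BVA", 40), ("VA", 180), ("NVA", 60)]

def time_breakdown(segments):
    """Return each category's share of total table time, in percent."""
    totals = {}
    for label, seconds in segments:
        totals[label] = totals.get(label, 0) + seconds
    table_time = sum(totals.values())
    return {label: 100.0 * t / table_time for label, t in totals.items()}

shares = time_breakdown(segments)
print({k: round(v, 1) for k, v in shares.items()})
# → {'VA': 66.7, 'NVA': 26.5, 'BVA': 6.8}
```

Comparing such breakdowns between protocols (here, liver vs. knee) directly exposes where the non-value-added time concentrates and therefore what to target for improvement.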
O'Connor, C; Kiernan, M G; Finnegan, C; O'Hara, M; Power, L; O'Connell, N H; Dunne, C P
2017-05-04
Rapid detection of patients with carbapenemase-producing Enterobacteriaceae (CPE) is essential for the prevention of nosocomial cross-transmission, the allocation of isolation facilities and the protection of patient safety. Here, we aimed to design a new laboratory workflow, utilizing existing laboratory resources, in order to reduce the time-to-diagnosis of CPE. A review of the current CPE testing processes and of the literature was performed to identify a real-time commercial polymerase chain reaction (PCR) assay that could facilitate batch testing of CPE clinical specimens with adequate CPE gene coverage. Stool specimens (n = 210) were collected: from CPE-positive inpatients (n = 10) and as anonymized community stool specimens (n = 200). Rectal swabs (eSwab™) were inoculated from the collected stool specimens and a manual DNA extraction method (QIAamp® DNA Stool Mini Kit) was employed. Extracted DNA was then processed on the Check-Direct CPE® assay. The three-step process of making the eSwab™, extracting DNA manually and running the Check-Direct CPE® assay took <5 min, 1 h 30 min and 1 h 50 min, respectively. It was time efficient, with a result available in under 4 h, comparing favourably with the existing method of CPE screening (average time-to-diagnosis of 48-72 h). Utilizing this CPE workflow would allow a 'same-day' result. Antimicrobial susceptibility testing results, as is current practice, would remain a 'next-day' result. In conclusion, the Check-Direct CPE® assay was easily integrated into the local laboratory workflow and could accommodate a large volume of CPE screening specimens in a single batch, making it cost-effective and convenient for daily CPE testing.
Akeroyd, Michiel; Olsthoorn, Maurien; Gerritsma, Jort; Gutker-Vermaas, Diana; Ekkelkamp, Laurens; van Rij, Tjeerd; Klaassen, Paul; Plugge, Wim; Smit, Ed; Strupat, Kerstin; Wenzel, Thibaut; van Tilborg, Marcel; van der Hoeven, Rob
2013-03-10
In the discovery of new enzymes, genomic and cDNA expression libraries containing thousands of differential clones are generated to obtain biodiversity. These libraries need to be screened for the activity of interest. Removing so-called empty and redundant clones significantly reduces the size of these expression libraries and therefore speeds up new enzyme discovery. Here, we present a sensitive, generic workflow, based on mass spectrometry techniques, for high throughput screening of successful microbial protein over-expression in microtiter plates containing a complex matrix. MALDI-LTQ-Orbitrap screening followed by principal component analysis and peptide mass fingerprinting was developed to obtain a throughput of ∼12,000 samples per week. Alternatively, a UHPLC-MS2 approach including MS2 protein identification was developed for microorganisms with a complex protein secretome, with a throughput of ∼2000 samples per week. TCA-induced protein precipitation, enhanced by the addition of bovine serum albumin, is used for protein purification prior to MS detection. We show that this generic workflow can effectively reduce large expression libraries from fungi and bacteria to their minimal size by detecting successful protein over-expression using MS. Copyright © 2012 Elsevier B.V. All rights reserved.
The CARMEN software as a service infrastructure.
Weeks, Michael; Jessop, Mark; Fletcher, Martyn; Hodge, Victoria; Jackson, Tom; Austin, Jim
2013-01-28
The CARMEN platform allows neuroscientists to share data, metadata, services and workflows, and to execute these services and workflows remotely via a Web portal. This paper describes how we implemented a service-based infrastructure into the CARMEN Virtual Laboratory. A Software as a Service framework was developed to allow generic new and legacy code to be deployed as services on a heterogeneous execution framework. Users can submit analysis code typically written in Matlab, Python, C/C++ and R as non-interactive standalone command-line applications and wrap them as services in a form suitable for deployment on the platform. The CARMEN Service Builder tool enables neuroscientists to quickly wrap their analysis software for deployment to the CARMEN platform, as a service without knowledge of the service framework or the CARMEN system. A metadata schema describes each service in terms of both system and user requirements. The search functionality allows services to be quickly discovered from the many services available. Within the platform, services may be combined into more complicated analyses using the workflow tool. CARMEN and the service infrastructure are targeted towards the neuroscience community; however, it is a generic platform, and can be targeted towards any discipline.
Everware toolkit. Supporting reproducible science and challenge-driven education.
NASA Astrophysics Data System (ADS)
Ustyuzhanin, A.; Head, T.; Babuschkin, I.; Tiunov, A.
2017-10-01
Modern science clearly demands a higher level of reproducibility and collaboration. To make research fully reproducible one has to take care of several aspects: research protocol description, data access, environment preservation, workflow pipeline, and analysis script preservation. Version control systems like git help with the workflow and analysis script parts. Virtualization techniques like Docker or Vagrant can help deal with environments. Jupyter notebooks are a powerful platform for conducting research in a collaborative manner. We present project Everware, which seamlessly integrates git repository management systems such as GitHub or GitLab, Docker and Jupyter, helping to a) share the results of real research and b) boost education activities. With the help of Everware one can share not only the final artifacts of research but the full depth of the research process. This has been shown to be extremely helpful during the organization of several data analysis hackathons and machine learning schools. Using Everware, participants could start from an existing solution instead of starting from scratch, and could begin contributing immediately. Everware allows its users to make use of their own computational resources to run the workflows they are interested in, which leads to higher scalability of the toolkit.
Satarasinghe, Praveen; Hamilton, Kojo D; Tarver, Michael J; Buchanan, Robert J; Koltz, Michael T
2018-04-17
Utilization of pedicle screws (PS) for spine stabilization is common in spinal surgery. With its reliance on visual inspection of anatomical landmarks prior to screw placement, the free-hand technique requires a high level of surgeon skill and precision. Three-dimensional (3D), computer-assisted virtual neuronavigation improves the precision of PS placement and minimizes procedural steps. Twenty-three patients with degenerative, traumatic, or neoplastic pathologies received treatment via a novel three-step PS technique that utilizes a navigated power driver in combination with virtual screw technology. (1) Following visualization of the neuroanatomy using intraoperative CT, a navigated 3-mm matchstick drill bit was inserted at an anatomical entry point, with a screen projection showing a virtual screw. (2) A Navigated Stryker Cordless Driver with an appropriate tap was used to access the vertebral body through a pedicle, with a screen projection again showing a virtual screw. (3) A Navigated Stryker Cordless Driver with an actual screw was used, with a screen projection showing the same virtual screw. One hundred and forty-four consecutive screws were inserted using this three-step, navigated driver, virtual screw technique. Only 1 screw needed intraoperative revision after insertion, amounting to a 0.69% revision rate. One hundred percent of patients had intraoperative CT reconstructed images taken to confirm hardware placement. Pedicle screw placement utilizing Stryker-Ziehm neuronavigation virtual screw technology with a three-step, navigated power drill technique is safe and effective.
Rapid analysis and exploration of fluorescence microscopy images.
Pavie, Benjamin; Rajaram, Satwik; Ouyang, Austin; Altschuler, Jason M; Steininger, Robert J; Wu, Lani F; Altschuler, Steven J
2014-03-19
Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard. Here we present an alternate, cell-segmentation-free workflow based on PhenoRipper, an open-source software platform designed for the rapid analysis and exploration of microscopy images. The pipeline presented here is optimized for immunofluorescence microscopy images of cell cultures and requires minimal user intervention. Within half an hour, PhenoRipper can analyze data from a typical 96-well experiment and generate image profiles. Users can then visually explore their data, perform quality control on their experiment, verify responses to perturbations and check the reproducibility of replicates. This facilitates a rapid feedback cycle between analysis and experiment, which is crucial during assay optimization. This protocol is useful not just as a first-pass analysis for quality control but also as an end-to-end solution, especially for screening. The workflow described here scales to large data sets such as those generated by high-throughput screens, and has been shown to group experimental conditions by phenotype accurately over a wide range of biological systems. The PhenoBrowser interface provides an intuitive framework to explore the phenotypic space and relate image properties to biological annotations. Taken together, the protocol described here will lower the barriers to adopting quantitative analysis of image-based screens.
A big data approach for climate change indicators processing in the CLIP-C project
NASA Astrophysics Data System (ADS)
D'Anca, Alessandro; Conte, Laura; Palazzo, Cosimo; Fiore, Sandro; Aloisio, Giovanni
2016-04-01
Defining and implementing processing chains with multiple (e.g. tens or hundreds of) data analytics operators can be a real challenge in many practical scientific use cases, such as climate change indicators. This is usually done via scripts (e.g. bash) on the client side and requires climate scientists to take care of, implement and replicate workflow-like control logic aspects (which may be error-prone too) in their scripts, along with the expected application-level part. Moreover, the large amount of data and the strong I/O demand pose additional performance challenges. In this regard, production-level tools for climate data analysis are mostly sequential, and there is a lack of big data analytics solutions implementing fine-grain data parallelism or adopting stronger parallel I/O strategies, data locality, workflow optimization, etc. High-level solutions leveraging workflow-enabled big data analytics frameworks for eScience could help scientists define and implement the workflows related to their experiments by exploiting a more declarative, efficient and powerful approach. This talk will start by introducing the main needs and challenges regarding big data analytics workflow management for eScience, and will then provide some insights into the implementation of some real use cases related to climate change indicators on large datasets produced in the context of the CLIP-C project, an EU FP7 project aiming at providing access to climate information of direct relevance to a wide variety of users, from scientists to policy makers and private sector decision makers. All the proposed use cases have been implemented exploiting the Ophidia big data analytics framework. The software stack includes an internal workflow management system, which coordinates, orchestrates, and optimises the execution of multiple scientific data analytics and visualization tasks.
Real-time monitoring of workflow execution is also supported through a graphical user interface. In order to address the challenges of the use cases, the implemented data analytics workflows include parallel data analysis, metadata management, virtual file system tasks, map generation, rolling of datasets, and import/export of datasets in NetCDF format. The use cases have been implemented on 8 nodes (16 cores/node) of the Athena HPC cluster available at the CMCC Supercomputing Centre. Benchmark results will also be presented during the talk.
RapidSplint: virtual splint generation for orthognathic surgery - results of a pilot series.
Adolphs, Nicolai; Liu, Weichen; Keeve, Erwin; Hoffmeister, Bodo
2014-01-01
Within the domain of craniomaxillofacial surgery, orthognathic surgery is a special field dedicated to the correction of dentofacial anomalies resulting from skeletal malocclusion. Generally, in such cases, an interdisciplinary orthodontic and surgical treatment approach is required. After initial orthodontic alignment of the dental arches, skeletal discrepancies of the jaws can be corrected by distinct surgical strategies and procedures in order to achieve correct occlusal relations, as well as facial balance and harmony within individualized treatment concepts. To transfer the preoperative surgical planning and reposition the mobilized dental arches with optimal occlusal relations, surgical splints are typically used. For this purpose, different strategies have been described which use one or more splints. Traditionally, these splints are manufactured by a dental technician based on patient-specific dental casts; however, computer-assisted technologies have gained increasing importance with respect to preoperative planning and its subsequent surgical transfer. In a pilot study of 10 patients undergoing orthognathic corrections by a one-splint strategy, two final occlusal splints were produced for each patient and compared with respect to their clinical usability. One splint was manufactured in the traditional way by a dental technician according to the preoperative surgical planning. After performing a CBCT scan of the patient's dental casts, a second splint was designed virtually by an engineer and surgeon working together, according to the desired final occlusion. For this purpose, RapidSplint, a custom-made software platform, was used. After post-processing and conversion of the datasets into .stl files, the splints were fabricated by the PolyJet procedure using photo polymerization. During surgery, both splints were inserted after mobilization of the dental arches then compared with respect to their clinical usability according to the occlusal fitting. 
Using the workflow described above, virtual splints could be designed and manufactured for all patients in this pilot study. Eight of the 10 virtual splints could be used clinically to achieve and maintain final occlusion after orthognathic surgery. In two cases virtual splints were not usable due to insufficient occlusal fitting; notably, two of the traditional splints were not clinically usable either. In five patients where both types of splints were available, their occlusal fitting was assessed as being equivalent, and in one case the virtual splint showed even better occlusal fitting than the traditional splint. In one case where no traditional splint was available, the virtual splint proved helpful in achieving the final occlusion. This pilot study demonstrated that clinically usable splints for orthognathic surgery can be produced by computer-assisted technology. Virtual splint design was realized with RapidSplint, an in-house software platform that might in future help shorten preoperative workflows for the production of orthognathic surgical splints.
Steger, Julia; Arnhard, Kathrin; Haslacher, Sandra; Geiger, Klemens; Singer, Klaus; Schlapp, Michael; Pitterl, Florian; Oberacher, Herbert
2016-04-01
Forensic toxicology and environmental water analysis share the common interest and responsibility in ensuring comprehensive and reliable confirmation of drugs and pharmaceutical compounds in samples analyzed. Dealing with similar analytes, detection and identification techniques should be exchangeable between scientific disciplines. Herein, we demonstrate the successful adaption of a forensic toxicological screening workflow employing nontargeted LC/MS/MS under data-dependent acquisition control and subsequent database search to water analysis. The main modification involved processing of an increased sample volume with SPE (500 mL vs. 1-10 mL) to reach LODs in the low ng/L range. Tandem mass spectra acquired with a qTOF instrument were submitted to database search. The targeted data mining strategy was found to be sensitive and specific; automated search produced hardly any false results. To demonstrate the applicability of the adapted workflow to complex samples, 14 wastewater effluent samples collected on seven consecutive days at the local wastewater-treatment plant were analyzed. Of the 88,970 fragment ion mass spectra produced, 8.8% of spectra were successfully assigned to one of the 1040 reference compounds included in the database, and this enabled the identification of 51 compounds representing important illegal drugs, members of various pharmaceutical compound classes, and metabolites thereof. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Assessment of wheelchair driving performance in a virtual reality-based simulator
Mahajan, Harshal P.; Dicianno, Brad E.; Cooper, Rory A.; Ding, Dan
2013-01-01
Objective To develop a virtual reality (VR)-based simulator that can assist clinicians in performing standardized wheelchair driving assessments. Design A completely within-subjects repeated measures design. Methods Participants drove their wheelchairs along a virtual driving circuit modeled after the Power Mobility Road Test (PMRT) and in a hallway of decreasing width. The virtual simulator was displayed on a computer screen and on VR screens, and participants interacted with it using a set of instrumented rollers and a wheelchair joystick. Driving performances of the participants were estimated and compared using quantitative metrics from the simulator. Qualitative ratings from two experienced clinicians were used to estimate intra- and inter-rater reliability. Results Ten regular wheelchair users (seven men, three women; mean age ± SD, 39.5 ± 15.39 years) participated. The virtual PMRT scores from the two clinicians showed high inter-rater reliability (78–90%) and high intra-rater reliability (71–90%) for all test conditions. More research is required to explore user preferences and the effectiveness of the two control methods (rollers and mathematical model) and the display screens. Conclusions The virtual driving simulator seems to be a promising tool for wheelchair driving assessment that clinicians can use to supplement their real-world evaluations. PMID:23820148
Sánchez-Rodríguez, Aminael; Tejera, Eduardo; Cruz-Monteagudo, Maykel; Borges, Fernanda; Cordeiro, M. Natália D. S.; Le-Thi-Thu, Huong; Pham-The, Hai
2018-01-01
Gastric cancer is the third leading cause of cancer-related mortality worldwide and, despite advances in prevention, diagnosis and therapy, it is still regarded as a global health concern. The efficacy of therapies for gastric cancer is limited by a poor response to currently available therapeutic regimens. One of the reasons that may explain these poor clinical outcomes is the highly heterogeneous nature of this disease. In this sense, it is essential to discover new molecular agents capable of targeting various gastric cancer subtypes simultaneously. Here, we present a multi-objective approach for the ligand-based virtual screening discovery of chemical compounds simultaneously active against the gastric cancer cell lines AGS, NCI-N87 and SNU-1. The proposed approach relies on a novel methodology based on the development of ensemble models for bioactivity prediction against each individual gastric cancer cell line. The methodology includes the aggregation of one ensemble per cell line, using a desirability-based algorithm, into virtual screening protocols. Our research leads to the proposal of a multi-targeted virtual screening protocol able to achieve high enrichment of known chemicals with anti-gastric cancer activity. Specifically, our results indicate that, using the proposed protocol, it is possible to retrieve almost 20 times more multi-targeted compounds in the first 1% of the ranked list than would be expected from a uniform distribution of the actives in the virtual screening database. More importantly, the proposed protocol attains an outstanding initial enrichment of known multi-targeted anti-gastric cancer agents. PMID:29420638
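The roughly 20-fold figure quoted above is the standard early-enrichment factor: the hit rate in the top-ranked fraction of the screening list divided by the hit rate expected if actives were spread uniformly across the whole list. A minimal sketch of that calculation (the toy ranking below is hypothetical, not the paper's data):

```python
def enrichment_factor(ranked_is_active, fraction=0.01):
    """Early enrichment factor: hit rate in the top `fraction`
    of a ranked list divided by the hit rate expected from a
    uniform distribution of actives over the whole list.
    `ranked_is_active` holds 1/0 per compound, best-scored first."""
    n = len(ranked_is_active)
    n_top = max(1, int(n * fraction))
    actives_total = sum(ranked_is_active)
    actives_top = sum(ranked_is_active[:n_top])
    return (actives_top / n_top) / (actives_total / n)

# Toy list of 1000 compounds with 5 actives, one of which is
# ranked in the top 1%: EF(1%) = (1/10) / (5/1000) = 20.
ranking = [1] + [0] * 9 + [1, 1, 1, 1] + [0] * 986
print(enrichment_factor(ranking, 0.01))  # → 20.0
```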
Application of Shape Similarity in Pose Selection and Virtual Screening in CSARdock2014 Exercise.
Kumar, Ashutosh; Zhang, Kam Y J
2016-06-27
To evaluate the applicability of shape similarity in docking-based pose selection and virtual screening, we participated in the CSARdock2014 benchmark exercise for identifying the correct docking pose of inhibitors targeting factor XA, spleen tyrosine kinase, and tRNA methyltransferase. This exercise provides a valuable opportunity for researchers to test their docking programs, methods, and protocols in a blind testing environment. In the CSARdock2014 benchmark exercise, we implemented an approach that uses ligand 3D shape similarity to facilitate docking-based pose selection and virtual screening. We showed here that ligand 3D shape similarity between bound poses can be used to identify the native-like pose from an ensemble of docking-generated poses. Our method correctly identified the native pose as the top-ranking pose for 73% of test cases in a blind testing environment. Moreover, the pose selection results also revealed an excellent correlation between ligand 3D shape similarity scores and RMSD to the X-ray crystal structure ligand. In the virtual screening exercise, the average RMSD for our pose prediction was found to be 1.02 Å, one of the top performances achieved in the CSARdock2014 benchmark exercise. Furthermore, the inclusion of shape similarity improved the virtual screening performance of docking-based scoring and ranking. The coefficient of determination (r²) between experimental activities and docking scores for 276 spleen tyrosine kinase inhibitors was found to be 0.365 but reached 0.614 when ligand 3D shape similarity was included.
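The r² values quoted above are coefficients of determination, i.e. the squared Pearson correlation between docking scores and experimental activities. A minimal pure-Python sketch of the computation (the data below are illustrative, not the paper's):

```python
def r_squared(x, y):
    """Coefficient of determination as the squared Pearson
    correlation between two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# Illustrative only: a perfectly linear score/activity pair.
print(r_squared([1, 2, 3, 4], [2, 4, 6, 8]))  # → 1.0
```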
ERIC Educational Resources Information Center
Moyer-Packenham, Patricia S.; Bullock, Emma K.; Shumway, Jessica F.; Tucker, Stephen I.; Watts, Christina M.; Westenskow, Arla; Anderson-Pence, Katie L.; Maahs-Fladung, Cathy; Boyer-Thurgood, Jennifer; Gulkilik, Hilal; Jordan, Kerry
2016-01-01
This paper focuses on understanding the role that affordances played in children's learning performance and efficiency during clinical interviews of their interactions with mathematics apps on touch-screen devices. One hundred children, ages 3 to 8, each used six different virtual manipulative mathematics apps during 30-40-min interviews. The…
Game engines and immersive displays
NASA Astrophysics Data System (ADS)
Chang, Benjamin; Destefano, Marc
2014-02-01
While virtual reality and digital games share many core technologies, the programming environments, toolkits, and workflows for developing games and VR environments are often distinct. VR toolkits designed for applications in visualization and simulation often have a different feature set or design philosophy than game engines, while popular game engines often lack support for VR hardware. Extending a game engine to support systems such as the CAVE gives developers a unified development environment and the ability to easily port projects, but involves challenges beyond just adding stereo 3D visuals. In this paper we outline the issues involved in adapting a game engine for use with an immersive display system including stereoscopy, tracking, and clustering, and present example implementation details using Unity3D. We discuss application development and workflow approaches including camera management, rendering synchronization, GUI design, and issues specific to Unity3D, and present examples of projects created for a multi-wall, clustered, stereoscopic display.
Torktaz, Ibrahim; Mohamadhashem, Faezeh; Esmaeili, Abolghasem; Behjati, Mohaddeseh; Sharifzadeh, Sara
2013-01-01
Introduction: Metastasis is a crucial aspect of cancer. Macrophage stimulating protein (MSP) is a single-chain protein that can be cleaved by serum proteases. MSP has several roles in metastasis. In this in silico study, MSP, as a metastatic agent, was considered as a drug target. Methods: The crystallographic structure of MSP was retrieved from the Protein Data Bank. To find a chemical inhibitor of MSP, a library of KEGG compounds was screened and 1000 shape-complemented ligands were retrieved with the FindSite algorithm. Molegro Virtual Docker (MVD) software was used for docking simulation of the shape-complemented ligands against MSP. The MolDock score was used as the scoring function for virtual screening, and potential inhibitors with more negative binding energies were obtained. The PLANTS scoring function was used for re-evaluation of the virtual screening data. Results: The top-ranked chemical had a binding affinity to the MSP structure of -183.55 by MolDock score and -66.733 by PLANTS score. Conclusion: Based on the pharmacophore model of the potential inhibitor, this study suggests that the chemical found in this research and its derivatives can be used for subsequent laboratory studies. PMID:24163807
Combining in silico and in cerebro approaches for virtual screening and pose prediction in SAMPL4.
Voet, Arnout R D; Kumar, Ashutosh; Berenger, Francois; Zhang, Kam Y J
2014-04-01
The SAMPL challenges provide an ideal opportunity for unbiased evaluation and comparison of different approaches used in computational drug design. During the fourth round of this SAMPL challenge, we participated in the virtual screening and binding pose prediction on inhibitors targeting the HIV-1 integrase enzyme. For virtual screening, we used well known and widely used in silico methods combined with personal in cerebro insights and experience. Regular docking only performed slightly better than random selection, but the performance was significantly improved upon incorporation of additional filters based on pharmacophore queries and electrostatic similarities. The best performance was achieved when logical selection was added. For the pose prediction, we utilized a similar consensus approach that amalgamated the results of the Glide-XP docking with structural knowledge and rescoring. The pose prediction results revealed that docking displayed reasonable performance in predicting the binding poses. However, prediction performance can be improved utilizing scientific experience and rescoring approaches. In both the virtual screening and pose prediction challenges, the top performance was achieved by our approaches. Here we describe the methods and strategies used in our approaches and discuss the rationale of their performances.
Zhu, Tian; Cao, Shuyi; Su, Pin-Chih; Patel, Ram; Shah, Darshan; Chokshi, Heta B.; Szukala, Richard; Johnson, Michael E.; Hevener, Kirk E.
2013-01-01
A critical analysis of virtual screening results published between 2007 and 2011 was performed. The activity of reported hit compounds from over 400 studies was compared to their hit identification criteria. Hit rates and ligand efficiencies were calculated to assist in these analyses, and the results were compared with factors such as the size of the virtual library and the number of compounds tested. A series of promiscuity, drug-likeness, and ADMET filters was applied to the reported hits to assess the quality of the compounds reported, and a careful analysis of a subset of the studies that presented hit optimization was performed. These data allowed us to make several practical recommendations with respect to selecting compounds for experimental testing, defining hit identification criteria, and setting general virtual screening hit criteria that allow for realistic hit optimization. A key recommendation is the use of size-targeted ligand efficiency values as hit identification criteria. PMID:23688234
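Ligand efficiency, the metric the recommendation above centers on, is commonly defined as the binding free energy per heavy atom, LE = -RT ln(Kd) / N_heavy, with IC50 often substituted for Kd. A sketch under that common definition (the example compound is hypothetical):

```python
import math

def ligand_efficiency(ic50_nM, n_heavy, temp_K=298.15):
    """Ligand efficiency in kcal/mol per heavy atom:
    LE = -RT ln(IC50) / N_heavy, treating IC50 as a proxy
    for Kd (a common, rough approximation)."""
    R = 1.987e-3  # gas constant, kcal/(mol*K)
    delta_g = R * temp_K * math.log(ic50_nM * 1e-9)  # negative below 1 M
    return -delta_g / n_heavy

# A hypothetical 1 uM hit with 25 heavy atoms:
print(round(ligand_efficiency(1000, 25), 2))  # → 0.33
```

Values around 0.3 kcal/mol per heavy atom are often cited as a rule-of-thumb threshold for hits with realistic optimization headroom.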
Discovery of new GSK-3β inhibitors through structure-based virtual screening.
Dou, Xiaodong; Jiang, Lan; Wang, Yanxing; Jin, Hongwei; Liu, Zhenming; Zhang, Liangren
2018-01-15
Glycogen synthase kinase-3β (GSK-3β) is an attractive therapeutic target for human diseases such as diabetes, cancer, neurodegenerative diseases, and inflammation. Thus, structure-based virtual screening was performed to identify novel scaffolds of GSK-3β inhibitors, and we observed that the conserved water molecules of GSK-3β were suitable for virtual screening. Fourteen hits were found, among which D1 (IC50 of 0.71 μM) was identified. Furthermore, the neuroprotection activity of D1-D3 was validated at the cellular level. 2D similarity searches were used to find derivatives of the most inhibitory compounds, and the resulting enriched structure-activity relationship suggested that these skeletons are worthy of further study as potent GSK-3β inhibitors. Copyright © 2017. Published by Elsevier Ltd.
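2D similarity searches of the kind used above typically rank database compounds by the Tanimoto coefficient between molecular fingerprints. A minimal sketch treating fingerprints as sets of on-bit indices (the fingerprints below are made up for illustration; a real workflow would compute them with a cheminformatics toolkit):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprints represented
    as sets of on-bit indices: |A & B| / |A | B|."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

# Made-up fingerprints for a hit and two database compounds.
hit = {1, 4, 9, 16, 25}
close_analog = {1, 4, 9, 16, 42}   # shares 4 of 6 distinct bits
unrelated = {2, 3, 5, 7}
print(round(tanimoto(hit, close_analog), 2))  # → 0.67
print(tanimoto(hit, unrelated))               # → 0.0
```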
Modeling and Deorphanization of Orphan GPCRs.
Diaz, Constantino; Angelloz-Nicoud, Patricia; Pihan, Emilie
2018-01-01
Despite tremendous efforts, approximately 120 GPCRs remain orphan. Their physiological functions and their potential roles in diseases are poorly understood. Orphan GPCRs (oGPCRs) are extremely important because they may provide novel therapeutic targets for unmet medical needs. As a complement to experimental approaches, molecular modeling and virtual screening are efficient techniques for discovering synthetic surrogate ligands that can help to elucidate the role of oGPCRs. Constitutively activated mutants and recently published active structures of GPCRs provide stimulating opportunities for building active molecular models of oGPCRs and identifying activators by virtual screening of compound libraries. We describe the molecular modeling and virtual screening process we have applied in the discovery of surrogate ligands, and provide examples for CCKA, treated as a simulated oGPCR, and for two oGPCRs, GPR52 and GPR34.
Towards interactive narrative medicine.
Cavazza, Marc; Charles, Fred
2013-01-01
Interactive Storytelling technologies have attracted significant interest in the field of simulation and serious gaming for their potential to provide a principled approach to improve user engagement in training scenarios. In this paper, we explore the use of Interactive Storytelling to support Narrative Medicine as a reflective practice. We describe a workflow for the generation of virtual narratives from high-level descriptions of patients' experiences as perceived by physicians, which can help to objectivize such perceptions and support various forms of analysis.
NMRbox: A Resource for Biomolecular NMR Computation.
Maciejewski, Mark W; Schuyler, Adam D; Gryk, Michael R; Moraru, Ion I; Romero, Pedro R; Ulrich, Eldon L; Eghbalnia, Hamid R; Livny, Miron; Delaglio, Frank; Hoch, Jeffrey C
2017-04-25
Advances in computation have been enabling many recent advances in biomolecular applications of NMR. Due to the wide diversity of applications of NMR, the number and variety of software packages for processing and analyzing NMR data is quite large, with labs relying on dozens, if not hundreds, of software packages. The discovery, acquisition, installation, and maintenance of all these packages are burdensome tasks. Because the majority of software packages originate in academic labs, persistence of the software is compromised when developers graduate, funding ceases, or investigators turn to other projects. To simplify access to and use of biomolecular NMR software, foster persistence, and enhance reproducibility of computational workflows, we have developed NMRbox, a shared resource for NMR software and computation. NMRbox employs virtualization to provide a comprehensive software environment preconfigured with hundreds of software packages, available as a downloadable virtual machine or as a Platform-as-a-Service supported by a dedicated compute cloud. Ongoing development includes a metadata harvester to regularize, annotate, and preserve workflows and to facilitate and enhance data depositions to BioMagResBank, as well as tools for Bayesian inference to enhance the robustness and extensibility of computational analyses. In addition to facilitating the use and preservation of the rich and dynamic software environment for biomolecular NMR, NMRbox fosters the development and deployment of a new class of metasoftware packages. NMRbox is freely available to not-for-profit users. Copyright © 2017 Biophysical Society. All rights reserved.
Pham-The, H; Casañola-Martin, G; Diéguez-Santana, K; Nguyen-Hai, N; Ngoc, N T; Vu-Duc, L; Le-Thi-Thu, H
2017-03-01
Histone deacetylases (HDACs) are emerging as promising targets in cancer, neuronal diseases and immune disorders. Computational modelling approaches have been widely applied for the virtual screening and rational design of novel HDAC inhibitors. In this study, different machine learning (ML) techniques were applied for the development of models that accurately discriminate HDAC2 inhibitors from non-inhibitors. The obtained models showed encouraging results, with global accuracy on the external set ranging from 0.83 to 0.90. Various aspects related to the comparison of modelling techniques, applicability domain and descriptor interpretation are discussed. Finally, consensus predictions of these models were used for screening HDAC2 inhibitors from four chemical libraries whose bioactivities against HDAC1, HDAC3, HDAC6 and HDAC8 are known. According to the results of the virtual screening assays, the structures of some hits with pair-isoform-selective activity (between HDAC2 and other HDACs) were revealed. This study illustrates the power of ML-based QSAR approaches for the screening and discovery of potent, isoform-selective HDAC inhibitors (HDACIs).
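Consensus prediction, as used above for screening, can be as simple as a majority vote over the individual ML models. A hedged sketch (the toy "models" below are single-descriptor threshold rules, not the paper's trained classifiers):

```python
def consensus_predict(models, x, threshold=0.5):
    """Majority-vote consensus: label `x` an inhibitor when
    more than `threshold` of the models vote yes."""
    votes = [bool(m(x)) for m in models]
    return sum(votes) / len(votes) > threshold

# Toy stand-ins for trained classifiers: each thresholds a
# single (hypothetical) normalized descriptor differently.
models = [lambda x: x > 0.2, lambda x: x > 0.5, lambda x: x > 0.8]
print(consensus_predict(models, 0.6))  # 2 of 3 vote yes → True
print(consensus_predict(models, 0.1))  # 0 of 3 vote yes → False
```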
Rapid lead discovery through iterative screening of one bead one compound libraries.
Gao, Yu; Amar, Sabrina; Pahwa, Sonia; Fields, Gregg; Kodadek, Thomas
2015-01-12
Primary hits that arise from screening one bead one compound (OBOC) libraries against a target of interest rarely have high potency. However, there has been little work focused on the development of an efficient workflow for primary hit improvement. In this study, we show that by characterizing the binding constants for all of the hits that arise from a screen, structure-activity relationship (SAR) data can be obtained to inform the design of "derivative libraries" of a primary hit that can then be screened under more demanding conditions to obtain improved compounds. Here, we demonstrate the rapid improvement of a primary hit against matrix metalloproteinase-14 using this approach.
Kobayashi, Hajime; Ohkubo, Masaki; Narita, Akihiro; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Sone, Shusuke
2017-01-01
Objective: We propose the application of virtual nodules to evaluate the performance of computer-aided detection (CAD) of lung nodules in cancer screening using low-dose CT. Methods: The virtual nodules were generated based on the spatial resolution measured for a CT system used in an institution providing cancer screening and were fused into clinical lung images obtained at that institution, allowing site specificity. First, we validated virtual nodules as an alternative to artificial nodules inserted into a phantom. In addition, we compared the results of CAD analysis between the real nodules (n = 6) and the corresponding virtual nodules. Subsequently, virtual nodules of various sizes and contrasts between nodule density and background density (ΔCT) were inserted into clinical images (n = 10) and submitted for CAD analysis. Results: In the validation study, 46 of 48 virtual nodules had the same CAD results as artificial nodules (kappa coefficient = 0.913). Real nodules and the corresponding virtual nodules showed the same CAD results. The detection limits of the tested CAD system were determined in terms of size and density of peripheral lung nodules; we demonstrated that a nodule with a 5-mm diameter was detected when the nodule had a ΔCT > 220 HU. Conclusion: Virtual nodules are effective in evaluating CAD performance using site-specific scan/reconstruction conditions. Advances in knowledge: Virtual nodules can be an effective means of evaluating site-specific CAD performance. The methodology for guiding the detection limit for nodule size/density might be a useful evaluation strategy. PMID:27897029
The architecture of a virtual grid GIS server
NASA Astrophysics Data System (ADS)
Wu, Pengfei; Fang, Yu; Chen, Bin; Wu, Xi; Tian, Xiaoting
2008-10-01
Grid computing technology provides a service-oriented architecture for distributed applications. The virtual Grid GIS server is a distributed and interoperable enterprise GIS application architecture running in the grid environment that integrates heterogeneous GIS platforms. All sorts of legacy GIS platforms join the grid as members of a GIS virtual organization. Based on a microkernel, we design the ESB and portal GIS service layers, which together compose Microkernel GIS. Through web portals, portal GIS services and the mediation of the service bus, and following the principle of separation of concerns (SoC), we separate business logic from implementation logic. Microkernel GIS greatly reduces the degree of coupling between applications and GIS platforms. The enterprise applications are independent of any particular GIS platform, allowing application developers to concentrate on the business logic. Via configuration and orchestration of a set of fine-grained services, the system creates a GIS Business, which acts as a whole WebGIS request when activated. In this way, the system satisfies a business workflow directly and simply, with little or no new code.
Yim, Wen-Wai; Chien, Shu; Kusumoto, Yasuyuki; Date, Susumu; Haga, Jason
2010-01-01
Large-scale in-silico screening is a necessary part of drug discovery, and Grid computing is one answer to this demand. A disadvantage of using Grid computing is the heterogeneous computational environment characteristic of a Grid. In our study, we found that for the molecular docking simulation program DOCK, different clusters within a Grid organization can yield inconsistent results. Because DOCK in-silico virtual screening (VS) is currently used to help select chemical compounds to test in in-vitro experiments, such differences have little effect on the validity of using virtual screening before subsequent steps in the drug discovery process. However, it is difficult to predict whether the accumulation of these discrepancies over sequentially repeated VS experiments will significantly alter the results if VS is used as the primary means for identifying potential drugs. Moreover, such discrepancies may be unacceptable for other applications requiring more stringent thresholds. This highlights the need for a more complete solution that provides the best scientific accuracy when executing an application across Grids. One possible solution to platform heterogeneity in DOCK performance explored in our study involved the use of virtual machines as a layer of abstraction. This study investigated the feasibility and practicality of using virtual machine and recent cloud computing technologies in a biological research application. We examined the differences and variations of DOCK VS variables across a Grid environment composed of different clusters, with and without virtualization. The uniform computing environment provided by virtual machines eliminated the inconsistent DOCK VS results caused by heterogeneous clusters; however, the execution time for the DOCK VS increased.
In our particular experiments, overhead costs were found to be an average of 41% and 2% in execution time for two different clusters, while the actual magnitudes of the execution time costs were minimal. Despite the increase in overhead, virtual clusters are an ideal solution for Grid heterogeneity. With greater development of virtual cluster technology in Grid environments, the problem of platform heterogeneity may be eliminated through virtualization, allowing greater usage of VS, and will benefit all Grid applications in general.
Virtual screening and optimization of Type II inhibitors of JAK2 from a natural product library.
Ma, Dik-Lung; Chan, Daniel Shiu-Hin; Wei, Guo; Zhong, Hai-Jing; Yang, Hui; Leung, Lai To; Gullen, Elizabeth A; Chiu, Pauline; Cheng, Yung-Chi; Leung, Chung-Hang
2014-11-21
Amentoflavone has been identified as a JAK2 inhibitor by structure-based virtual screening of a natural product library. In silico optimization using the DOLPHIN model yielded analogues with enhanced potency against JAK2 activity and HCV activity in cellulo. Molecular modeling and kinetic experiments suggested that the analogues may function as Type II inhibitors of JAK2.
Steinhuber, Thomas; Brunold, Silvia; Gärtner, Catherina; Offermanns, Vincent; Ulmer, Hanno; Ploder, Oliver
2018-02-01
The purpose of this study was to measure and compare the working time for virtual surgical planning (VSP) in orthognathic surgery in a largely office-based workflow with that of conventional surgical planning (CSP), with regard to the type of surgery, the staff involved, and the working location. This prospective cohort study included patients treated with orthognathic surgery from May to December 2016. For each patient, both CSP with manual splint fabrication and VSP with fabrication of computer-aided design/computer-aided manufacturing splints were performed. The predictor variables were planning method (CSP or VSP) and type of surgery (single or double jaw), and the outcome was time. Descriptive and analytic statistics, including analysis of variance for repeated measures, were computed. The sample comprised 40 patients (25 female and 15 male patients; mean age, 24.6 years) treated with single-jaw surgery (n = 18) or double-jaw surgery (n = 22). The mean times for planning single-jaw surgery were 145.5 ± 11.5 minutes for CSP and 109.3 ± 10.8 minutes for VSP, and those for planning double-jaw surgery were 224.1 ± 11.2 minutes and 149.6 ± 15.3 minutes, respectively. Besides the expected result that the working time was shorter for single- versus double-jaw surgery (P < .001), it was shown that VSP shortened the working time significantly versus CSP (P < .001). The reduction in time through VSP was relatively greater for double-jaw surgery (P < .001 for interaction). All differences between CSP and VSP regarding profession (except for the surgeon's time investment) and location were statistically significant (P < .01). The surgeon's time to plan single-jaw surgery was 37.0 minutes for CSP and 41.2 minutes for VSP; for double-jaw surgery, it was 53.8 minutes and 53.6 minutes, respectively. Office-based VSP for orthognathic surgery was significantly faster for single- and double-jaw surgery.
The time investment of the surgeon was equal for both methods, and all other steps of the workflow differed significantly compared with CSP. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Virtual Facility at Fermilab: Infrastructure and Services Expand to Public Clouds
Timm, Steve; Garzoglio, Gabriele; Cooper, Glenn; ...
2016-02-18
In preparation for its new Virtual Facility Project, Fermilab has launched a program of work to determine the requirements for running a computation facility on-site, in public clouds, or a combination of both. This program builds on the work we have done to successfully run experimental workflows of 1000-VM scale both on an on-site private cloud and on Amazon AWS. To do this at scale we deployed dynamically launched and discovered caching services on the cloud. We are now testing the deployment of more complicated services on Amazon AWS using native load balancing and auto scaling features they provide. The Virtual Facility Project will design and develop a facility including infrastructure and services that can live on the site of Fermilab, off-site, or a combination of both. We expect to need this capacity to meet the peak computing requirements in the future. The Virtual Facility is intended to provision resources on the public cloud on behalf of the facility as a whole instead of having each experiment or Virtual Organization do it on their own. We will describe the policy aspects of a distributed Virtual Facility, the requirements, and plans to make a detailed comparison of the relative cost of the public and private clouds. Furthermore, this talk will present the details of the technical mechanisms we have developed to date, and the plans currently taking shape for a Virtual Facility at Fermilab.
[Tools for laparoscopic skill development - available trainers and simulators].
Jaksa, László; Haidegger, Tamás; Galambos, Péter; Kiss, Rita
2017-10-01
The laparoscopic minimally invasive surgical technique is widely employed on a global scale. However, the efficient and ethical teaching of this technique requires equipment for surgical simulation. These educational devices are present on the market in the form of box trainers and virtual reality simulators, or some combination of those. In this article, we present a systematic overview of commercially available surgical simulators describing the most important features of each product. Our overview elaborates on box trainers and virtual reality simulators, and also touches on surgical robotics simulators, together with operating room workflow simulators, for the sake of completeness. Apart from presenting educational tools, we evaluated the literature of laparoscopic surgical education and simulation, to provide a complete picture of the unfolding trends in this field. Orv Hetil. 2017; 158(40): 1570-1576.
Wang, Yen-Ling
2014-01-01
Checkpoint kinase 2 (Chk2) is central to the DNA-damage response and plays an important role in the response to DNA double-strand breaks and related lesions. In this study, we concentrate on Chk2, with the purpose of finding potential inhibitors using pharmacophore hypotheses (PhModels), combinatorial fusion, and virtual screening techniques. Applying combinatorial fusion to PhModels and virtual screening techniques is a novel strategy for drug design. We used combinatorial fusion to analyze the prediction results and obtained the best correlation coefficient on the testing set (r_test = 0.816) by combining the BesttrainBesttest and FasttrainFasttest prediction results. Potential inhibitors were selected from the NCI database by screening according to the BesttrainBesttest + FasttrainFasttest prediction results and by molecular docking with the CDOCKER docking program. Finally, the selected compounds exhibited high ligand-receptor interaction energies. Through these approaches, 23 potential inhibitors of Chk2 were retrieved for further study. PMID:24864236
Ebalunode, Jerry O; Zheng, Weifan; Tropsha, Alexander
2011-01-01
Optimization of chemical library composition affords more efficient identification of hits from biological screening experiments. The optimization can be achieved through rational selection of the reagents used in combinatorial library synthesis. However, with the rapid advent of parallel synthesis methods and the availability of millions of compounds synthesized by many vendors, it may be more efficient to design targeted libraries by means of virtual screening of commercial compound collections. This chapter reviews the application of advanced cheminformatics approaches such as quantitative structure-activity relationships (QSAR) and pharmacophore modeling (both ligand and structure based) to virtual screening. Both approaches rely on empirical SAR data to build models; thus, the emphasis is placed on achieving models of the highest rigor and external predictive power. We present several examples of successful applications of both approaches to virtual screening to illustrate their utility. We suggest that the expert use of both QSAR and pharmacophore models, either independently or in combination, enables users to obtain targeted libraries enriched with experimentally confirmed hit compounds.
Evaluation of a novel virtual screening strategy using receptor decoy binding sites.
Patel, Hershna; Kukol, Andreas
2016-08-23
Virtual screening is used in biomedical research to predict the binding affinity of a large set of small organic molecules to protein receptor targets. This report describes the development and evaluation of a novel yet straightforward strategy intended to improve this ranking in receptor-based molecular docking: a receptor-decoy strategy, in which a decoy binding site is defined on the receptor and the ranking of the true binding-site virtual screen is adjusted based on the decoy-site screen. The results show that by docking against a receptor-decoy site with AutoDock Vina, improved Receiver Operator Characteristic Enrichment (ROCE) was achieved for 5 of the 15 receptor targets investigated when up to 15% of the decoy-site rank list was considered. No improved enrichment was seen for 7 targets, while for 3 targets the ROCE was reduced. The extent to which this strategy can improve ligand prediction therefore depends on the target receptor investigated.
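The decoy-site adjustment and ROC enrichment described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names, the rule of demoting decoy-site top scorers, and the ROCE definition used here are assumptions.

```python
# Hypothetical sketch of the receptor-decoy re-ranking idea: compounds
# scoring highly at a decoy (non-functional) site are treated as likely
# promiscuous binders and demoted in the true-site ranking.

def decoy_adjusted_ranking(true_site, decoy_site, decoy_fraction=0.15):
    """true_site / decoy_site: lists of compound ids, best score first.
    Compounds in the top `decoy_fraction` of the decoy-site list are
    moved to the end of the true-site ranking."""
    n_decoy_top = int(len(decoy_site) * decoy_fraction)
    promiscuous = set(decoy_site[:n_decoy_top])
    kept = [c for c in true_site if c not in promiscuous]
    demoted = [c for c in true_site if c in promiscuous]
    return kept + demoted

def roc_enrichment(ranking, actives, fraction=0.01):
    """Ratio of true-positive rate to false-positive rate at a given
    fraction of the screened database (one common ROCE definition)."""
    n_top = max(1, int(len(ranking) * fraction))
    top = ranking[:n_top]
    tp = sum(1 for c in top if c in actives)
    fp = n_top - tp
    tpr = tp / len(actives)
    n_decoys = len(ranking) - len(actives)
    fpr = fp / n_decoys if n_decoys else 0.0
    return tpr / fpr if fpr > 0 else float("inf")
```

In this toy form, an improved ROCE after `decoy_adjusted_ranking` would correspond to the enrichment gains the study reports for some targets.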
Wang, Yi; Hess, Tamara Noelle; Jones, Victoria; Zhou, Joe Zhongxiang; McNeil, Michael R.; McCammon, J. Andrew
2011-01-01
The complex and highly impermeable cell wall of Mycobacterium tuberculosis (Mtb) is largely responsible for the ability of the mycobacterium to resist the action of chemical therapeutics. An L-rhamnosyl residue, which occupies an important anchoring position in the Mtb cell wall, is an attractive target for novel anti-tuberculosis drugs. In this work, we report a virtual screening (VS) study targeting Mtb dTDP-deoxy-L-lyxo-4-hexulose reductase (RmlD), the last enzyme in the L-rhamnosyl synthesis pathway. Through two rounds of VS, we have identified four RmlD inhibitors with half inhibitory concentrations of 0.9-25 μM, and whole-cell minimum inhibitory concentrations of 20-200 μg/ml. Compared with our previous high throughput screening targeting another enzyme involved in L-rhamnosyl synthesis, virtual screening produced higher hit rates, supporting the use of computational methods in future anti-tuberculosis drug discovery efforts. PMID:22014548
A cross docking pipeline for improving pose prediction and virtual screening performance
NASA Astrophysics Data System (ADS)
Kumar, Ashutosh; Zhang, Kam Y. J.
2018-01-01
Pose prediction and virtual screening performance of a molecular docking method depend on the choice of protein structures used for docking. Multiple structures for a target protein are often used to take into account the receptor flexibility and problems associated with a single receptor structure. However, the use of multiple receptor structures is computationally expensive when docking a large library of small molecules. Here, we propose a new cross-docking pipeline suitable to dock a large library of molecules while taking advantage of multiple target protein structures. Our method involves the selection of a suitable receptor for each ligand in a screening library utilizing ligand 3D shape similarity with crystallographic ligands. We have prospectively evaluated our method in D3R Grand Challenge 2 and demonstrated that our cross-docking pipeline can achieve similar or better performance than using either single or multiple-receptor structures. Moreover, our method displayed not only decent pose prediction performance but also better virtual screening performance over several other methods.
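The receptor-selection step of the cross-docking pipeline above can be illustrated with a minimal sketch. This is not the authors' implementation: a set-based Tanimoto over precomputed shape/feature keys stands in for a real 3D shape-similarity score, and all names are invented.

```python
# Illustrative sketch: for each library ligand, pick the receptor whose
# crystallographic ligand it most resembles, then dock only against
# that single structure instead of all receptor structures.

def tanimoto(a, b):
    """Set-based Tanimoto similarity, a stand-in for 3D shape overlap."""
    a, b = set(a), set(b)
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def assign_receptors(library, receptors):
    """library: {ligand_id: feature_keys};
    receptors: {receptor_id: crystal_ligand_feature_keys}.
    Returns {ligand_id: best_receptor_id}."""
    assignment = {}
    for lig, keys in library.items():
        assignment[lig] = max(
            receptors, key=lambda r: tanimoto(keys, receptors[r])
        )
    return assignment
```

The point of the design is cost: one docking run per ligand, while still exploiting the information in multiple receptor structures.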
Bai, Qifeng; Shao, Yonghua; Pan, Dabo; Zhang, Yang; Liu, Huanxiang; Yao, Xiaojun
2014-01-01
We designed a program called MolGridCal that can be used to screen small-molecule databases by grid computing based on the JPPF grid environment. Building on MolGridCal, we propose an integrated strategy for virtual screening and binding mode investigation that combines molecular docking, molecular dynamics (MD) simulations and free energy calculations. To test the effectiveness of MolGridCal, we screened potential ligands for the β2 adrenergic receptor (β2AR) from a database containing 50,000 small molecules. MolGridCal not only sends tasks to the grid server automatically but also distributes tasks using the screensaver function. As for the virtual screening results, the known agonist BI-167107 of β2AR is ranked among the top 2% of the screened candidates, indicating that MolGridCal gives reasonable results. To further study the binding mode and refine the results of MolGridCal, more accurate docking and scoring methods were used to estimate the binding affinity of the top three molecules (agonist BI-167107, neutral antagonist alprenolol and inverse agonist ICI 118,551). The results indicate that the agonist BI-167107 has the best binding affinity. MD simulation and free energy calculation were employed to investigate the dynamic interaction mechanism between the ligands and β2AR. The results show that the agonist BI-167107 also has the lowest binding free energy. This study provides a new way to perform virtual screening effectively by integrating molecular docking based on grid computing, MD simulations and free energy calculations. The source codes of MolGridCal are freely available at http://molgridcal.codeplex.com. PMID:25229694
GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing
Fang, Ye; Ding, Yun; Feinstein, Wei P.; Koppelman, David M.; Moreno, Juana; Jarrell, Mark; Ramanujam, J.; Brylinski, Michal
2016-01-01
Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249. PMID:27420300
Badrinarayan, Preethi; Sastry, G Narahari
2012-04-01
In this work, we introduce the development and application of a three-step scoring and filtering procedure for the design of type II p38 MAP kinase leads using allosteric fragments extracted from virtual screening hits. The design of the virtual screening filters is based on a thorough evaluation of docking methods, DFG-loop conformation, binding interactions and chemotype specificity of the 138 p38 MAP kinase inhibitors from Protein Data Bank bound to DFG-in and DFG-out conformations using Glide, GOLD and CDOCKER. A 40 ns molecular dynamics simulation with the apo, type I with DFG-in and type II with DFG-out forms was carried out to delineate the effects of structural variations on inhibitor binding. The designed docking-score and sub-structure filters were first tested on a dataset of 249 potent p38 MAP kinase inhibitors from seven diverse series and 18,842 kinase inhibitors from PDB, to gauge their capacity to discriminate between kinase and non-kinase inhibitors and likewise to selectively filter-in target-specific inhibitors. The designed filters were then applied in the virtual screening of a database of ten million (10⁷) compounds resulting in the identification of 100 hits. Based on their binding modes, 98 allosteric fragments were extracted from the hits and a fragment library was generated. New type II p38 MAP kinase leads were designed by tailoring the existing type I ATP site binders with allosteric fragments using a common urea linker. Target specific virtual screening filters can thus be easily developed for other kinases based on this strategy to retrieve target selective compounds. Copyright © 2012 Elsevier Inc. All rights reserved.
Ren, Ji-Xia; Li, Cheng-Ping; Zhou, Xiu-Ling; Cao, Xue-Song; Xie, Yong
2017-08-22
Myeloid cell leukemia-1 (Mcl-1) is a validated and attractive target for cancer therapy. Over-expression of Mcl-1 in many cancers allows cancer cells to evade apoptosis and contributes to resistance to current chemotherapeutics. Here, we identified new Mcl-1 inhibitors using a multi-step virtual screening approach. First, based on two different ligand-receptor complexes, 20 pharmacophore models were established using both the 'Receptor-Ligand Pharmacophore Generation' method and a manually built feature method, and then carefully validated against a test database. Pharmacophore-based virtual screening (PB-VS) was then performed using the 20 pharmacophore models. In addition, a docking study was used to predict the possible binding poses of compounds, and the docking parameters were optimized before performing docking-based virtual screening (DB-VS). Moreover, a 3D QSAR model was established from 55 aligned Mcl-1 inhibitors. The 55 inhibitors, which share the same scaffold, were docked into the Mcl-1 active site and their predicted binding conformations were aligned. For the training set, the 3D QSAR model gave a correlation coefficient r² of 0.996; for the test set, the correlation coefficient r² was 0.812. The developed 3D QSAR model was therefore a good model, which could be applied for 3D QSAR-based virtual screening (QSARD-VS). After sequential filtering with the above three virtual screening methods, 23 potential inhibitors with novel scaffolds were identified. Furthermore, we discuss in detail the mapping of two potent compounds onto the pharmacophore models and the 3D QSAR model, and the interactions between the compounds and active-site residues.
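The r² validation statistic used to judge the QSAR model above can be written out explicitly. This is a generic sketch of the squared Pearson correlation between predicted and observed activities; the data and function name are illustrative, not the study's.

```python
# Generic squared correlation coefficient (r^2) between predicted and
# observed values, as commonly used for QSAR test-set validation.

def r_squared(pred, obs):
    n = len(pred)
    mp = sum(pred) / n
    mo = sum(obs) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs))
    vp = sum((p - mp) ** 2 for p in pred)
    vo = sum((o - mo) ** 2 for o in obs)
    return cov * cov / (vp * vo)
```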
A kinase-focused compound collection: compilation and screening strategy.
Sun, Dongyu; Chuaqui, Claudio; Deng, Zhan; Bowes, Scott; Chin, Donovan; Singh, Juswinder; Cullen, Patrick; Hankins, Gretchen; Lee, Wen-Cherng; Donnelly, Jason; Friedman, Jessica; Josiah, Serene
2006-06-01
Lead identification by high-throughput screening of large compound libraries has been supplemented with virtual screening and focused compound libraries. To complement existing approaches for lead identification at Biogen Idec, a kinase-focused compound collection was designed, developed and validated. Two strategies were adopted to populate the compound collection: a ligand shape-based virtual screening and a receptor-based approach (structural interaction fingerprint). Compounds selected with the two approaches were cherry-picked from an existing high-throughput screening compound library, ordered from suppliers and supplemented with specific medicinal compounds from internal programs. Promising hits and leads have been generated from the kinase-focused compound collection against multiple kinase targets. The principle of the collection design and screening strategy was validated and the use of the kinase-focused compound collection for lead identification has been added to existing strategies.
Economic and workflow analysis of a blood bank automated system.
Shin, Kyung-Hwa; Kim, Hyung Hoi; Chang, Chulhun L; Lee, Eun Yup
2013-07-01
This study compared the estimated costs and times required for ABO/Rh(D) typing and unexpected antibody screening using an automated system and manual methods. The total cost included direct and labor costs. Labor costs were calculated on the basis of the average operator salaries and unit values (minutes), which was the hands-on time required to test one sample. To estimate unit values, workflows were recorded on video, and the time required for each process was analyzed separately. The unit values of ABO/Rh(D) typing using the manual method were 5.65 and 8.1 min during regular and unsocial working hours, respectively. The unit value was less than 3.5 min when several samples were tested simultaneously. The unit value for unexpected antibody screening was 2.6 min. The unit values using the automated method for ABO/Rh(D) typing, unexpected antibody screening, and both simultaneously were all 1.5 min. The total cost of ABO/Rh(D) typing of only one sample using the automated analyzer was lower than that of testing only one sample using the manual technique but higher than that of testing several samples simultaneously. The total cost of unexpected antibody screening using an automated analyzer was less than that using the manual method. ABO/Rh(D) typing using an automated analyzer incurs a lower unit value and cost than that using the manual technique when only one sample is tested at a time. Unexpected antibody screening using an automated analyzer always incurs a lower unit value and cost than that using the manual technique.
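The cost model described above (direct cost plus salary-per-minute times hands-on "unit value") is simple enough to write down. The numbers below are placeholders for illustration, not the study's figures; only the 5.65/3.5/1.5-minute unit values are taken from the abstract.

```python
# Hedged sketch of the blood-bank cost comparison: total cost per test
# = direct (reagent) cost + labor cost, where labor is priced as the
# average salary per minute times the hands-on unit value in minutes.
# Direct costs and salary rate here are invented placeholders.

def cost_per_test(direct_cost, salary_per_min, unit_value_min):
    return direct_cost + salary_per_min * unit_value_min

manual_single = cost_per_test(3.0, 0.5, 5.65)  # one sample, regular hours
manual_batch = cost_per_test(3.0, 0.5, 3.5)    # several samples at once
automated = cost_per_test(4.5, 0.5, 1.5)       # automated analyzer
```

With placeholders chosen to match the abstract's pattern, the automated test is cheaper than a single manual test but more expensive than batched manual testing.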
Advanced Maintenance Simulation by Means of Hand-Based Haptic Interfaces
NASA Astrophysics Data System (ADS)
Nappi, Michele; Paolino, Luca; Ricciardi, Stefano; Sebillo, Monica; Vitiello, Giuliana
Aerospace industry has been involved in virtual simulation for design and testing since the birth of virtual reality. Today this industry is showing a growing interest in the development of haptic-based maintenance training applications, which represent the most advanced way to simulate maintenance and repair tasks within a virtual environment by means of a visual-haptic approach. The goal is to allow the trainee to experience the service procedures not only as a workflow reproduced at a visual level but also in terms of the kinaesthetic feedback involved in the manipulation of tools and components. This study, conducted in collaboration with aerospace industry specialists, is aimed at the development of an immersive virtual environment capable of immersing trainees in a setting where mechanics and technicians can perform maintenance simulation or training tasks by directly manipulating 3D virtual models of aircraft parts while perceiving force feedback through the haptic interface. The proposed system is based on ViRstperson, a virtual reality engine under development at the Italian Center for Aerospace Research (CIRA) to support engineering and technical activities such as design-time maintenance procedure validation and maintenance training. This engine has been extended to support haptic-based interaction, enabling a more complete level of interaction, also in terms of impedance control, and thus fostering the development of haptic knowledge in the user. The user's "sense of touch" within the immersive virtual environment is simulated through an Immersion CyberForce® hand-based force-feedback device. Preliminary testing of the proposed system is encouraging.
Shaw, James; Jamieson, Trevor; Agarwal, Payal; Griffin, Bailey; Wong, Ivy; Bhatia, R Sacha
2017-01-01
Background The development of new virtual care technologies (including telehealth and telemedicine) is growing rapidly, leading to a number of challenges related to health policy and planning for health systems around the world. Methods We brought together a diverse group of health system stakeholders, including patient representatives, to engage in policy dialogue to set health system priorities for the application of virtual care in the primary care sector in the Province of Ontario, Canada. We applied a nominal group technique (NGT) process to determine key priorities, and synthesized these priorities with group discussion to develop recommendations for virtual care policy. Methods included a structured priority ranking process, open-ended note-taking, and thematic analysis to identify priorities. Results Recommendations were summarized under the following themes: (a) identify clear health system leadership to embed virtual care strategies into all aspects of primary and community care; (b) make patients the focal point of health system decision-making; (c) leverage incentives to achieve meaningful health system improvements; and (d) build virtual care into streamlined workflows. Two key implications of our policy dialogue are especially relevant for an international audience. First, shifting the dialogue away from technology toward more meaningful patient engagement will enable policy planning for applications of technology that better meet patients' needs. Second, a strong conceptual framework for guiding the meaningful use of technology in health care settings is essential for intelligent planning of virtual care policy. Conclusions Policy planning for virtual care needs to shift toward a stronger focus on patient engagement to understand patients' needs.
Increasing efficiency and declining cost of generating whole transcriptome profiles has made high-throughput transcriptomics a practical option for chemical bioactivity screening. The resulting data output provides information on the expression of thousands of genes and is amenab...
Background: Adverse outcome pathways (AOPs) link adverse effects in individuals or populations to a molecular initiating event (MIE) that can be quantified using in vitro methods. Practical application of AOPs in chemical-specific risk assessment requires incorporation of knowled...
Workflow and Proof of Concept for Non-Targeted Analysis of Environmental Samples by LC-MS/MS
The human exposure includes thousands of chemicals acquired through various routes of exposure such as inhalation, ingestion, dermal contact, and indirect ingestion. Rapid assessment and screening of these chemicals is a difficult challenge facing EPA in its mission to protect pu...
WRF4SG: A Scientific Gateway for climate experiment workflows
NASA Astrophysics Data System (ADS)
Blanco, Carlos; Cofino, Antonio S.; Fernandez-Quiruelas, Valvanuz
2013-04-01
The Weather Research and Forecasting model (WRF) is a community-driven and public domain model widely used by the weather and climate communities. In contrast to other application-oriented models, WRF provides a flexible and computationally efficient framework which allows solving a variety of problems at different time-scales, from weather forecast to climate change projection. Furthermore, WRF is also widely used as a research tool in modeling physics, dynamics, and data assimilation by the research community. Climate experiment workflows based on WRF are nowadays among the most cutting-edge applications. These workflows are complex due to both the large storage requirements and the huge number of simulations executed. In order to manage this, we have developed a scientific gateway (SG) called WRF for Scientific Gateway (WRF4SG), based on the WS-PGRADE/gUSE and WRF4G frameworks, to help meet WRF users' needs (see [1] and [2]). WRF4SG provides services for different use cases that describe the interactions between WRF users and the WRF4SG interface and show how to run a climate experiment. As WS-PGRADE/gUSE uses portlets (see [1]) to interact with users, its portlets support these use cases. A typical experiment carried out by a WRF user consists of a high-resolution regional re-forecast. Such re-forecasts are common experiments used as input data for wind power energy and natural hazard (wind and precipitation fields) applications. In the cases below, the user is able to access different resources such as Grid, because WRF needs a huge amount of computing resources in order to generate useful simulations: * Resource configuration and user authentication: The first step is to authenticate on users' Grid resources via virtual organizations. After login, the user is able to select which virtual organization is going to be used for the experiment.
* Data assimilation: In order to assimilate the data sources, the user has to select them by browsing through the LFC Portlet. * Design experiment workflow: In order to configure the experiment, the user defines the type of experiment (i.e. re-forecast) and the attributes to simulate. In this case the main attributes are: the field of interest (wind, precipitation, ...), the start and end dates of the simulation and the requirements of the experiment. * Monitor workflow: In order to monitor the experiment, the user receives notification messages based on events, and the gateway displays the progress of the experiment. * Data storage: As in the data assimilation case, the user is able to browse and view the output data simulations using the LFC Portlet. The objectives of WRF4SG are twofold. The first goal is to show how WRF4SG facilitates executing, monitoring and managing climate workflows based on the WRF4G framework. The second goal is to help WRF users execute their experiment workflows concurrently using heterogeneous computing resources such as HPC and Grid. [1] Kacsuk, P.: P-GRADE portal family for grid infrastructures. Concurrency and Computation: Practice and Experience. 23, 235-245 (2011). [2] http://www.meteo.unican.es/software/wrf4g
Wu, Xin-Bao; Wang, Jun-Qiang; Zhao, Chun-Peng; Sun, Xu; Shi, Yin; Zhang, Zi-An; Li, Yu-Neng; Wang, Man-Yi
2015-02-20
Old pelvis fractures are among the most challenging fractures to treat because of their complex anatomy, difficult-to-access surgical sites, and the relatively low incidence of such cases. Proper evaluation and surgical planning are necessary to achieve the pelvic ring symmetry and stable fixation of the fracture. The goal of this study was to assess the use of three-dimensional (3D) printing techniques for surgical management of old pelvic fractures. First, 16 dried human cadaveric pelvises were used to confirm the anatomical accuracy of the 3D models printed based on radiographic data. Next, nine clinical cases between January 2009 and April 2013 were used to evaluate the surgical reconstruction based on the 3D printed models. The pelvic injuries were all type C, and the average time from injury to reconstruction was 11 weeks (range: 8-17 weeks). The workflow consisted of: (1) Printing patient-specific bone models based on preoperative computed tomography (CT) scans, (2) virtual fracture reduction using the printed 3D anatomic template, (3) virtual fracture fixation using Kirschner wires, and (4) preoperatively measuring the osteotomy and implant position relative to landmarks using the virtually defined deformation. These models aided communication between surgical team members during the procedure. This technique was validated by comparing the preoperative planning to the intraoperative procedure. The accuracy of the 3D printed models was within specification. Production of a model from standard CT DICOM data took 7 hours (range: 6-9 hours). Preoperative planning using the 3D printed models was feasible in all cases. Good correlation was found between the preoperative planning and postoperative follow-up X-ray in all nine cases. The patients were followed for 3-29 months (median: 5 months). The fracture healing time was 9-17 weeks (mean: 10 weeks). No delayed incision healing, wound infection, or nonunions occurred. 
The results were excellent in two cases, good in five, and poor in two based on the Majeed score. The 3D printing planning technique for pelvic surgery was successfully integrated into a clinical workflow to improve patient-specific preoperative planning by providing a visual and haptic model of the injury and allowing patient-specific adaptation of each osteosynthesis implant to the virtually reduced pelvis.
Virtual High-Throughput Screening for Matrix Metalloproteinase Inhibitors.
Choi, Jun Yong; Fuerst, Rita
2017-01-01
Structure-based virtual screening (SBVS) is a common method for the fast identification of hit structures at the beginning of a medicinal chemistry program in drug discovery. The SBVS described in this manuscript is focused on finding small-molecule hits that can be further utilized as a starting point for the development of inhibitors of matrix metalloproteinase 13 (MMP-13) via structure-based molecular design. We intended to identify a set of structurally diverse hits, which occupy all subsites (S1'-S3', S2, and S3) surrounding the zinc-containing binding site of MMP-13, by virtually screening a chemical library comprising more than ten million commercially available compounds. In total, 23 compounds were identified as potential MMP-13 inhibitors using Glide docking followed by analysis of the structural interaction fingerprints (SIFt) of the docked structures.
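The SIFt-based pose analysis mentioned above can be sketched in a minimal form. This is a hypothetical illustration of the general SIFt idea, not the workflow used in the study: residue numbers, interaction labels, and function names are all invented.

```python
# Hypothetical SIFt sketch: each docked pose is encoded as the set of
# (residue, interaction_type) contacts it makes, and two poses are
# compared by Tanimoto similarity on these sets. Full SIFt encodings
# use fixed-length bit vectors; a set of contact pairs is equivalent
# for comparison purposes.

def sift(contacts):
    """contacts: iterable of (residue, interaction_type) pairs."""
    return frozenset(contacts)

def sift_tanimoto(fp_a, fp_b):
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

# Invented example contacts for two docked poses:
pose_x = sift([(218, "hbond"), (223, "hydrophobic"), (247, "hbond")])
pose_y = sift([(218, "hbond"), (223, "hydrophobic"), (251, "aromatic")])
```

Clustering poses by such similarities is one way to pick structurally diverse hits that together cover all subsites.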
Application of Functional Use Predictions to Aid in Structure ...
Humans are potentially exposed to thousands of anthropogenic chemicals in commerce. Recent work has shown that the bulk of this exposure may occur in near-field indoor environments (e.g., home, school, work, etc.). Advances in suspect screening analyses (SSA) now allow an improved understanding of the chemicals present in these environments. However, due to the nature of suspect screening techniques, investigators are often left with chemical formula predictions, with the possibility of many chemical structures matching to each formula. Here, newly developed quantitative structure-use relationship (QSUR) models are used to identify potential exposure sources for candidate structures. Previously, a suspect screening workflow was introduced and applied to house dust samples collected from the U.S. Department of Housing and Urban Development’s American Healthy Homes Survey (AHHS) [Rager, et al., Env. Int. 88 (2016)]. This workflow utilized the US EPA’s Distributed Structure-Searchable Toxicity (DSSTox) Database to link identified molecular features to molecular formulas, and ultimately chemical structures. Multiple QSUR models were applied to support the evaluation of candidate structures. These QSURs predict the likelihood of a chemical having a functional use commonly associated with consumer products having near-field use. For 3,228 structures identified as possible chemicals in AHHS house dust samples, we were able to obtain the required descriptors to appl
Zhang, Baofeng; D'Erasmo, Michael P; Murelli, Ryan P; Gallicchio, Emilio
2016-09-30
We report the results of a binding free energy-based virtual screening campaign of a library of 77 α-hydroxytropolone derivatives against the challenging RNase H active site of the reverse transcriptase (RT) enzyme of human immunodeficiency virus-1. Multiple protonation states, rotamer states, and binding modalities of each compound were individually evaluated. The work involved more than 300 individual absolute alchemical binding free energy parallel molecular dynamics calculations and over 1 million CPU hours on national computing clusters and a local campus computational grid. The thermodynamic and structural measures obtained in this work rationalize a series of characteristics of this system useful for guiding future synthetic and biochemical efforts. The free energy model identified key ligand-dependent entropic and conformational reorganization processes difficult to capture using standard docking and scoring approaches. Binding free energy-based optimization of the lead compounds emerging from the virtual screen has yielded four compounds with very favorable binding properties, which will be the subject of further experimental investigations. This work is one of the few reported applications of advanced-binding free energy models to large-scale virtual screening and optimization projects. It further demonstrates that, with suitable algorithms and automation, advanced-binding free energy models can have a useful role in early-stage drug-discovery programs.
Role of Chemical Reactivity and Transition State Modeling for Virtual Screening.
Karthikeyan, Muthukumarasamy; Vyas, Renu; Tambe, Sanjeev S; Radhamohan, Deepthi; Kulkarni, Bhaskar D
2015-01-01
Every drug discovery research program involves synthesis of a novel and potential drug molecule utilizing atom-efficient, economical and environmentally friendly synthetic strategies. The current work focuses on the role of reactivity-based fingerprints of compounds as filters for virtual screening, using a tool called ChemScore. A reactant-like (RLS) and a product-like (PLS) score can be predicted for a given compound using the binary fingerprints derived from numerous known organic reactions, which capture molecule-molecule interactions in the form of addition, substitution, rearrangement, elimination and isomerization reactions. The reaction fingerprints were applied to large databases in biology and chemistry, namely ChEMBL, KEGG, HMDB, DSSTox, and the DrugBank database. A large network of 1113 synthetic reactions was constructed to visualize and ascertain the reactant-product mappings in the chemical reaction space. Cumulative reaction fingerprints were computed for 4000 molecules belonging to 29 therapeutic classes of compounds, and these were found capable of discriminating between cognition-disorder-related and anti-allergy compounds with a reasonable accuracy of 75% and an AUC of 0.8. In this study, transition-state-based fingerprints were also developed and used effectively for virtual screening in drug-related databases. The methodology presented here provides an efficient handle for the rapid scoring of molecular libraries for virtual screening.
Congestion game scheduling for virtual drug screening optimization
NASA Astrophysics Data System (ADS)
Nikitina, Natalia; Ivashko, Evgeny; Tchernykh, Andrei
2018-02-01
In virtual drug screening, the chemical diversity of hits is an important factor, along with their predicted activity. Moreover, interim results are of interest for directing further research, and their diversity is also desirable. In this paper, we consider the problem of obtaining a diverse set of virtual screening hits in a short time. To this end, we propose a mathematical model of task scheduling for virtual drug screening in high-performance computational systems as a congestion game between computational nodes, seeking equilibrium solutions that best balance the number of interim hits with their chemical diversity. The model considers a heterogeneous environment with workload uncertainty, processing time uncertainty, and limited knowledge about the input dataset structure. We perform computational experiments and evaluate the performance of the developed approach on the organic molecule database GDB-9. The set of molecules used is rich enough to demonstrate the feasibility and practicability of the proposed solutions. We compare the algorithm with two known heuristics used in practice and observe that game-based scheduling outperforms them in hit discovery rate and chemical diversity at earlier steps. Based on these results, we use a social utility metric to assess the efficiency of our equilibrium solutions and show that they reach the greatest values.
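A toy version of the congestion-game idea: each node repeatedly best-responds to a cost that grows with the number of nodes on the same task batch and shrinks with a per-batch diversity bonus, until a pure Nash equilibrium is reached. This is a minimal sketch under invented costs, not the paper's model.

```python
# Toy congestion-game scheduler: illustrative only, not the paper's model.
# Cost of batch b for a node = congestion (nodes assigned to b) - bonus[b],
# where bonus[b] is a made-up per-batch chemical-diversity credit.
# Best-response dynamics converge because congestion games are potential games.

def equilibrium(n_nodes, n_batches, bonus):
    assign = [0] * n_nodes          # start with every node on batch 0
    changed = True
    while changed:
        changed = False
        for i in range(n_nodes):
            def cost(b):
                others = sum(1 for j, a in enumerate(assign) if j != i and a == b)
                return (others + 1) - bonus[b]
            best = min(range(n_batches), key=cost)
            if cost(best) < cost(assign[i]):  # strict improvement only
                assign[i] = best
                changed = True
    return assign
```

With four nodes, two batches, and a small diversity bonus on the second batch, the dynamics settle at a balanced 2/2 split.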
Risks of Colorectal Cancer Screening
gWEGA: GPU-accelerated WEGA for molecular superposition and shape comparison.
Yan, Xin; Li, Jiabo; Gu, Qiong; Xu, Jun
2014-06-05
Virtual screening of a large chemical library for drug lead identification requires searching/superimposing a large number of three-dimensional (3D) chemical structures. This article reports a graphic processing unit (GPU)-accelerated weighted Gaussian algorithm (gWEGA) that expedites shape or shape-feature similarity score-based virtual screening. With 86 GPU nodes (each node has one GPU card), gWEGA can screen 110 million conformations derived from an entire ZINC drug-like database with diverse antidiabetic agents as query structures within 2 s (i.e., screening more than 55 million conformations per second). The rapid screening speed was accomplished through the massive parallelization on multiple GPU nodes and rapid prescreening of 3D structures (based on their shape descriptors and pharmacophore feature compositions). Copyright © 2014 Wiley Periodicals, Inc.
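The Gaussian shape-overlap score that WEGA-style methods accelerate can be illustrated on the CPU with the standard analytic overlap of two atomic Gaussians; the amplitude constant and the toy coordinates are assumptions for the sketch, not gWEGA's implementation.

```python
import math

# Analytic overlap volume of two atom-centred Gaussians (Grant-Pickup style),
# the kind of quantity gWEGA evaluates on GPUs. P and the coordinates are
# assumptions for the sketch; atoms are (x, y, z, alpha) tuples.
P = 2.7

def pair_overlap(a, b):
    (x1, y1, z1, a1), (x2, y2, z2, a2) = a, b
    d2 = (x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2
    s = a1 + a2
    return P * P * math.exp(-a1 * a2 * d2 / s) * (math.pi / s) ** 1.5

def overlap(mol_a, mol_b):
    return sum(pair_overlap(a, b) for a in mol_a for b in mol_b)

def shape_tanimoto(mol_a, mol_b):
    """Shape similarity in [0, 1]: O_AB / (O_AA + O_BB - O_AB)."""
    oab = overlap(mol_a, mol_b)
    return oab / (overlap(mol_a, mol_a) + overlap(mol_b, mol_b) - oab)
```

A molecule compared with itself scores 1.0; a shifted copy scores strictly below 1.0, which is the signal the GPU kernels compute at scale.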
Library fingerprints: a novel approach to the screening of virtual libraries.
Klon, Anthony E; Diller, David J
2007-01-01
We propose a novel method to prioritize libraries for combinatorial synthesis and high-throughput screening that assesses the viability of a particular library on the basis of the aggregate physical-chemical properties of the compounds using a naïve Bayesian classifier. This approach prioritizes collections of related compounds according to the aggregate values of their physical-chemical parameters in contrast to single-compound screening. The method is also shown to be useful in screening existing noncombinatorial libraries when the compounds in these libraries have been previously clustered according to their molecular graphs. We show that the method used here is comparable or superior to the single-compound virtual screening of combinatorial libraries and noncombinatorial libraries and is superior to the pairwise Tanimoto similarity searching of a collection of combinatorial libraries.
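A minimal Gaussian naive-Bayes sketch of the library-fingerprint idea: score a candidate library by the aggregate descriptor values of its compounds rather than compound by compound. The descriptor choice and the numbers are invented; this is not the authors' classifier.

```python
import math

# Gaussian naive-Bayes sketch over aggregate library descriptors (e.g. mean
# molecular weight per library). Illustrative assumption, not the authors' model.

def fit(rows):
    """rows: descriptor vectors of known libraries; returns per-feature (mean, var)."""
    n = len(rows)
    stats = []
    for j in range(len(rows[0])):
        col = [r[j] for r in rows]
        m = sum(col) / n
        v = sum((x - m) ** 2 for x in col) / n or 1e-9  # avoid zero variance
        stats.append((m, v))
    return stats

def log_likelihood(x, stats):
    return sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
               for xi, (m, v) in zip(x, stats))

def classify(x, good_stats, bad_stats):
    """Assign a library to the class with the higher (equal-prior) likelihood."""
    return "good" if log_likelihood(x, good_stats) > log_likelihood(x, bad_stats) else "bad"
```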
Casis, E; Garrido, A; Uranga, B; Vives, A; Zufiaurre, C
2001-01-01
Total laboratory automation (TLA) can be substituted in mid-size laboratories by computer-controlled sample workflow (virtual automation). Such a solution has been implemented in our laboratory using PSM, software developed in cooperation with Roche Diagnostics (Barcelona, Spain) for this purpose. This software is connected to the online analyzers and to the laboratory information system and is able to control and direct the samples, working as an intermediate station. The only difference from TLA is the replacement of transport belts by laboratory personnel. The implementation of this virtual automation system has allowed us to achieve the main advantages of TLA: workload increase (64%) with reduction in the cost per test (43%), significant reduction in the number of biochemistry primary tubes (from 8 to 2), less aliquoting (from 600 to 100 samples/day), automation of functional testing, drastic reduction of preanalytical errors (from 11.7% to 0.4% of the tubes) and better total response time for both inpatients (from up to 48 hours to up to 4 hours) and outpatients (from up to 10 days to up to 48 hours). As an additional advantage, virtual automation could be implemented without hardware investment and significant headcount reduction (15% in our lab).
AI (artificial intelligence) in histopathology--from image analysis to automated diagnosis.
Kayser, Klaus; Görtler, Jürgen; Bogovac, Milica; Bogovac, Aleksandar; Goldmann, Torsten; Vollmer, Ekkehard; Kayser, Gian
2009-01-01
The technological progress in digitalization of complete histological glass slides has opened a new door in tissue-based diagnosis. The presentation of microscopic images as a whole in a digital matrix is called a virtual slide. A virtual slide allows calculation and related presentation of image information that otherwise can only be seen by individual human performance. The digital world permits stitching of several (if not all) fields of view and their contemporary visualization on a screen. The presentation of all microscopic magnifications is possible if the basic pixel resolution is less than 0.25 microns. Introducing digital tissue-based diagnosis into the daily routine work of a surgical pathologist requires a new setup of workflow arrangement and procedures. The quality of digitized images is sufficient for diagnostic purposes; however, the time needed for viewing virtual slides exceeds that of viewing original glass slides by far. The reason lies in a slower and more difficult sampling procedure, i.e., the selection of information-containing fields of view. By application of artificial intelligence, tissue-based diagnosis in routine work can be managed automatically in the following steps: 1. The individual image quality has to be measured, and corrected if necessary. 2. A diagnostic algorithm has to be applied. An algorithm has to be developed that includes both object-based (object features, structures) and pixel-based (texture) measures. 3. These measures serve for diagnosis classification and feedback to order additional information, for example in virtual immunohistochemical slides. 4. The measures can serve for automated image classification and detection of relevant image information by themselves, without any labeling. 5. The pathologist will not be replaced by such a system; on the contrary, he or she will manage and supervise it, i.e., work at a "higher level".
Virtual slides are already in use for teaching and continuing education in anatomy and pathology. First attempts to introduce them into routine work have been reported. Application of AI has been established by automated immunohistochemical measurement systems (EAMUS, www.diagnomX.eu). The performance of automated diagnosis has been reported for a broad variety of organs at sensitivity and specificity levels >85%. The implementation of a completely connected AI-supported system is in its infancy. Application of AI in digital tissue-based diagnosis will allow pathologists to work as supervisors and no longer as primary "water carriers". Its accurate use will give them the time needed to concentrate on difficult cases for the benefit of their patients.
Jensen, Roxanne E.; Rothrock, Nan E.; DeWitt, Esi Morgan; Spiegel, Brennan; Tucker, Carole A.; Crane, Heidi M.; Forrest, Christopher B.; Patrick, Donald L.; Fredericksen, Rob; Shulman, Lisa M.; Cella, David; Crane, Paul K.
2016-01-01
Background Patient-reported outcomes (PROs) are gaining recognition as key measures for improving the quality of patient care in clinical care settings. Three factors have made the implementation of PROs in clinical care more feasible: increased use of modern measurement methods in PRO design and validation, rapid progression of technology (e.g., touch screen tablets, Internet accessibility, and electronic health records (EHRs)), and greater demand for measurement and monitoring of PROs by regulators, payers, accreditors, and professional organizations. As electronic PRO collection and reporting capabilities have improved, the challenges of collecting PRO data have changed. Objectives To update information on PRO adoption considerations in clinical care, highlighting electronic and technical advances with respect to measure selection, clinical workflow, data infrastructure, and outcomes reporting. Methods Five practical case studies across diverse healthcare settings and patient populations are used to explore how implementation barriers were addressed to promote the successful integration of PRO collection into the clinical workflow. The case studies address selecting and reporting of relevant content, workflow integration, pre-visit screening, effective evaluation, and EHR integration. Conclusions These case studies exemplify elements of well-designed electronic systems, including response automation, tailoring of item selection and reporting algorithms, flexibility of collection location, and integration with patient health care data elements. They also highlight emerging logistical barriers in this area, such as the need for specialized technological and methodological expertise, and design limitations of current electronic data capture systems. PMID:25588135
Role of Open Source Tools and Resources in Virtual Screening for Drug Discovery.
Karthikeyan, Muthukumarasamy; Vyas, Renu
2015-01-01
Advancement in chemoinformatics research, in parallel with the availability of high-performance computing platforms, has made the handling of large-scale multi-dimensional scientific data for high-throughput drug discovery easier. In this study we have explored publicly available molecular databases with the help of open-source based integrated in-house molecular informatics tools for virtual screening. The virtual screening literature of the past decade has been extensively investigated and thoroughly analyzed to reveal interesting patterns with respect to the drug, target, scaffold and disease space. The review also focuses on integrated chemoinformatics tools that are capable of harvesting chemical data from textual literature information and transforming them into truly computable chemical structures, identifying unique fragments and scaffolds from a class of compounds, automatically generating focused virtual libraries, computing molecular descriptors for structure-activity relationship studies, and applying conventional filters used in lead discovery along with in-house developed exhaustive PTC (Pharmacophore, Toxicophores and Chemophores) filters and machine learning tools for the design of potential disease-specific inhibitors. A case study on kinase inhibitors is provided as an example.
Ellingson, Sally R; Dakshanamurthy, Sivanesan; Brown, Milton; Smith, Jeremy C; Baudry, Jerome
2014-04-25
In this paper we describe the current state of high-throughput virtual screening. We present a case study of using a task-parallel MPI (Message Passing Interface) version of AutoDock4 [1], [2] to run a virtual high-throughput screen of one million compounds on the Jaguar Cray XK6 supercomputer at Oak Ridge National Laboratory. We include a description of scripts developed to increase the efficiency of pre-docking file preparation and post-docking analysis. A detailed tutorial, scripts, and source code for this MPI version of AutoDock4 are available online at http://www.bio.utk.edu/baudrylab/autodockmpi.htm.
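The task-parallel structure of such a screen can be sketched as follows; a thread pool stands in for MPI ranks so the snippet runs anywhere, and dock() is a deterministic mock rather than a real AutoDock call.

```python
# Task-parallel screening skeleton in the spirit of an MPI master/worker setup,
# sketched with a thread pool so it runs anywhere; dock() is a deterministic
# mock, not a real AutoDock4 call.
from concurrent.futures import ThreadPoolExecutor

def dock(ligand_id):
    """Placeholder for one docking job; returns (ligand, mock score)."""
    return ligand_id, -(sum(ord(c) for c in ligand_id) % 100)

def screen(ligands, workers=4):
    """Farm independent docking tasks out to the worker pool."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(dock, ligands))
```

In the real MPI version, the ligand list is partitioned across ranks and each rank runs its share of docking jobs; the pattern of independent tasks gathered into one result table is the same.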
SimITK: visual programming of the ITK image-processing library within Simulink.
Dickinson, Andrew W L; Abolmaesumi, Purang; Gobbi, David G; Mousavi, Parvin
2014-04-01
The Insight Segmentation and Registration Toolkit (ITK) is a software library used for image analysis, visualization, and image-guided surgery applications. ITK is a collection of C++ classes that poses the challenge of a steep learning curve should the user not have appropriate C++ programming experience. To remove the programming complexities and facilitate rapid prototyping, an implementation of ITK within a higher-level visual programming environment is presented: SimITK. ITK functionalities are automatically wrapped into "blocks" within Simulink, the visual programming environment of MATLAB, where these blocks can be connected to form workflows: visual schematics that closely represent the structure of a C++ program. The heavily templated C++ nature of ITK does not facilitate direct interaction between Simulink and ITK; an intermediary is required to convert respective data types and allow intercommunication. As such, a SimITK "Virtual Block" has been developed that serves as a wrapper around an ITK class and is capable of resolving the ITK data types to native Simulink data types. Part of the challenge surrounding this implementation involves automatically capturing and storing the pertinent class information that needs to be refined from an initial state prior to being reflected within the final block representation. The primary result from the SimITK wrapping procedure is multiple Simulink block libraries. From these libraries, blocks are selected and interconnected to demonstrate two examples: a 3D segmentation workflow and a 3D multimodal registration workflow. Compared to their pure-code equivalents, the workflows highlight ITK usability through an alternative visual interpretation of the code that abstracts away potentially confusing technicalities.
NASA Astrophysics Data System (ADS)
Kaushik, Aman C.; Kumar, Sanjay; Wei, Dong Q.; Sahi, Shakti
2018-02-01
GPR142 (G protein receptor 142) is a novel orphan GPCR (G protein coupled receptor) belonging to class A of the GPCR family and expressed in the beta cells of the pancreas. In this study, we report structure-based virtual screening to identify hit compounds which can be developed as leads for potential agonists. The results were validated through induced-fit docking, pharmacophore modeling and systems biology approaches. Since there is no solved crystal structure of GPR142, we predicted the 3D structure using threading and ab initio methods, validated it, and then identified the active site. Structure-based virtual screening was performed against a total of 1,171,519 compounds from different libraries, and the top 20 hit compounds were analyzed. Moreover, the biochemical pathway of the GPR142 complex with the screened compound2 was designed and compared with experimental data. Interestingly, compound2 showed an increase in insulin production via a Gq-mediated signaling pathway, suggesting a possible role for novel GPR142 agonists in therapy against type 2 diabetes.
Surflex-Dock: Docking benchmarks and real-world application
NASA Astrophysics Data System (ADS)
Spitzer, Russell; Jain, Ajay N.
2012-06-01
Benchmarks for molecular docking have historically focused on re-docking the cognate ligand of a well-determined protein-ligand complex to measure geometric pose prediction accuracy, while measurement of virtual screening performance has focused on increasingly large and diverse sets of target protein structures, cognate ligands, and various types of decoy sets. Here, pose prediction is reported on the Astex Diverse set of 85 protein-ligand complexes, and virtual screening performance is reported on the DUD set of 40 protein targets. In both cases, prepared structures of targets and ligands were provided by symposium organizers. The re-prepared data sets yielded results not significantly different from previous reports of Surflex-Dock on the two benchmarks. Minor changes to protein coordinates resulting from complex pre-optimization had large effects on observed performance, highlighting the limitations of cognate ligand re-docking for pose prediction assessment. Docking protocols developed for cross-docking, which address protein flexibility and produce discrete families of predicted poses, produced substantially better performance for pose prediction. Virtual screening performance was shown to benefit from employing and combining multiple screening methods: docking, 2D molecular similarity, and 3D molecular similarity. In addition, use of multiple protein conformations significantly improved screening enrichment.
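Combining docking with 2D and 3D similarity screens can be sketched as a simple rank fusion ("rank-by-best"); the fusion rule and the scores are illustrative assumptions, not Surflex-Dock's actual combination scheme.

```python
# Rank fusion ("rank-by-best") across several screening methods; each table
# maps compound -> score with higher = better. Scores are illustrative.

def fuse_ranks(*score_tables):
    best = {}
    for table in score_tables:
        ranked = sorted(table, key=table.get, reverse=True)
        for rank, cpd in enumerate(ranked):
            best[cpd] = min(best.get(cpd, rank), rank)
    return sorted(best, key=best.get)  # best single-method rank first
```

A compound that any single method ranks highly floats to the top of the fused list, which is one simple way multiple methods can improve enrichment over any one of them.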
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheu, R; Ghafar, R; Powers, A
Purpose: Demonstrate the effectiveness of in-house software in ensuring EMR workflow efficiency and safety. Methods: A web-based dashboard system (WBDS) was developed to monitor clinical workflow in real time using web technology (WAMP) through ODBC (Open Database Connectivity). Within Mosaiq (Elekta Inc), operational workflow is driven and indicated by Quality Check Lists (QCLs), which are triggered by the automation software IQ Scripts (Elekta Inc); QCLs rely on user completion to propagate. The WBDS retrieves data directly from the Mosaiq SQL database and tracks clinical events in real time. For example, the necessity of a physics initial chart check can be determined by screening all patients on treatment who have received their first fraction and who have not yet had their first chart check. Monitoring similar "real" events with our in-house software creates a safety net, as its propagation does not rely on individual user input. Results: The WBDS monitors the following: patient care workflow (initial consult to end of treatment), daily treatment consistency (scheduling, technique, charges), physics chart checks (initial, EOT, weekly), new starts, missing treatments (>3 warning/>5 fractions, action required), and machine overrides. The WBDS can be launched from any web browser, which allows the end user complete transparency and timely information. Since the creation of the dashboards, workflow interruptions due to accidental deletion or completion of QCLs have been eliminated. Additionally, all physics chart checks were completed in a timely manner. Prompt notifications of treatment record inconsistencies and machine overrides have decreased the amount of time between occurrence and execution of corrective action. Conclusion: Our clinical workflow relies primarily on QCLs and IQ Scripts; however, this functionality is not a panacea for safety and efficiency. The WBDS creates a more thorough system of checks to provide a safer and nearly error-free working environment.
Kumar, Gyanendra; Agarwal, Rakhi; Swaminathan, Subramanyam
2012-02-28
Botulinum neurotoxins are among the most poisonous biological substances known to humans and present a potential bioterrorism threat. No therapeutic interventions have been developed so far. Here, we report the first small-molecule non-peptide inhibitor of botulinum neurotoxin serotype E, discovered by structure-based virtual screening, and propose a mechanism for its inhibitory activity. This journal is © The Royal Society of Chemistry 2012
Sun, Yunan; Zhou, Hui; Zhu, Hongmei; Leung, Siu-wai
2016-01-25
Sirtuin 1 (SIRT1) is a nicotinamide adenine dinucleotide-dependent deacetylase, and its dysregulation can lead to ageing, diabetes, and cancer. From 346 experimentally confirmed SIRT1 inhibitors, an inhibitor structure pattern was generated by inductive logic programming (ILP) with DMax Chemistry Assistant software. The pattern contained amide, amine, and hetero-aromatic five-membered rings, each of which had a hetero-atom and an unsubstituted atom at a distance of 2. According to this pattern, a ligand-based virtual screening of 1,444,880 active compounds from Chinese herbs identified 12 compounds as inhibitors of SIRT1. Three compounds (ZINC08790006, ZINC08792229, and ZINC08792355) had high affinity (-7.3, -7.8, and -8.6 kcal/mol, respectively) for SIRT1 as estimated by molecular docking software AutoDock Vina. This study demonstrated a use of ILP and background knowledge in machine learning to facilitate virtual screening.
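Applying an ILP-derived structural pattern as a screen reduces, at its simplest, to a required-feature test; the feature names below mirror the pattern described in the abstract but are invented tokens for the illustration.

```python
# Required-feature screen distilled from an ILP-style pattern. The three
# feature names echo the described pattern (amide, amine, hetero-aromatic
# five-membered ring) but are invented tokens, not a real encoding.

REQUIRED = {"amide", "amine", "hetero5ring"}

def matches_pattern(features):
    return REQUIRED <= set(features)

def screen_library(library):
    """library: {compound_name: iterable of structural features}."""
    return [name for name, feats in library.items() if matches_pattern(feats)]
```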
NASA Astrophysics Data System (ADS)
Iftikhar, Sehrish; Shahid, Ahmad A.; Halim, Sobia A.; Wolters, Pieter J.; Vleeshouwers, Vivianne G. A. A.; Khan, Ajmal; Al-Harrasi, Ahmed; Ahmad, Shahbaz
2017-11-01
Alternaria blight is an important foliage disease caused by Alternaria solani. The enzyme succinate dehydrogenase (SDH) is a potential drug target because of its role in the tricarboxylic acid cycle. Hence, targeting the Alternaria solani SDH enzyme could be an efficient way to design novel fungicides against A. solani. We employed computational methodologies to design new SDH inhibitors using homology modeling, pharmacophore modeling and a structure-based virtual screening protocol. The three-dimensional SDH model showed good stereo-chemical and structural properties. Based on the virtual screening results, twelve commercially available compounds were purchased and tested in vitro and in vivo. The compounds were found to inhibit mycelial growth of A. solani. Moreover, in vitro trials showed that inhibitory effects were enhanced with increasing concentration. Similarly, increased disease control was observed in pre-treated potato tubers. Hence, the applied in silico strategy led us to identify novel fungicides.
Identification of DNA primase inhibitors via a combined fragment-based and virtual screening
NASA Astrophysics Data System (ADS)
Ilic, Stefan; Akabayov, Sabine R.; Arthanari, Haribabu; Wagner, Gerhard; Richardson, Charles C.; Akabayov, Barak
2016-11-01
The structural differences between bacterial and human primases render the former an excellent target for drug design. Here we describe a technique for selecting small-molecule inhibitors of the activity of T7 DNA primase, an ideal model for bacterial primases due to their common structural and functional features. Using NMR screening, fragment molecules that bind T7 primase were identified and then exploited in virtual filtration to select larger molecules from the ZINC database. The molecules were docked to the primase active site using the available primase crystal structure and ranked based on their predicted binding energies to identify the best candidates for functional and structural investigations. Biochemical assays revealed that some of the molecules inhibit T7 primase-dependent DNA replication. The binding mechanism was delineated via NMR spectroscopy. Our approach, which combines fragment-based and virtual screening, is rapid and cost-effective and can be applied to other targets.
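The "dock, rank by predicted binding energy, keep the best" step common to such campaigns can be sketched as below; the cutoff and the energies are placeholders, not values from the study.

```python
# Rank docked molecules by predicted binding energy (kcal/mol, more negative =
# tighter) and keep the strongest candidates. Cutoff and energies are placeholders.

def select_candidates(predicted_energies, cutoff=-7.0, top_n=3):
    hits = {m: e for m, e in predicted_energies.items() if e <= cutoff}
    return sorted(hits, key=hits.get)[:top_n]
```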
Xing, Li; McDonald, Joseph J; Kolodziej, Steve A; Kurumbail, Ravi G; Williams, Jennifer M; Warren, Chad J; O'Neal, Janet M; Skepner, Jill E; Roberds, Steven L
2011-03-10
Structure-based virtual screening was applied to design combinatorial libraries to discover novel and potent soluble epoxide hydrolase (sEH) inhibitors. X-ray crystal structures revealed unique interactions for a benzoxazole template in addition to the conserved hydrogen bonds with the catalytic machinery of sEH. By exploitation of the favorable binding elements, two iterations of library design based on amide coupling were employed, guided principally by the docking results of the enumerated virtual products. Biological screening of the libraries demonstrated a hit rate as high as 90%, of which over two dozen compounds were single-digit nanomolar sEH inhibitors by IC50 determination. In total the library design and synthesis produced more than 300 submicromolar sEH inhibitors. In cellular systems, consistent activities were demonstrated with biochemical measurements. The SAR understanding of the benzoxazole template provides valuable insights into the discovery of novel sEH inhibitors as therapeutic agents.
NASA Astrophysics Data System (ADS)
Drwal, Malgorzata N.; Agama, Keli; Pommier, Yves; Griffith, Renate
2013-12-01
Purely structure-based pharmacophores (SBPs) are an alternative method to ligand-based approaches and have the advantage of describing the entire interaction capability of a binding pocket. Here, we present the development of SBPs for topoisomerase I, an anticancer target with an unusual ligand binding pocket consisting of protein and DNA atoms. Different approaches to cluster and select pharmacophore features are investigated, including hierarchical clustering and energy calculations. In addition, the performance of SBPs is evaluated retrospectively and compared to the performance of ligand- and complex-based pharmacophores. SBPs emerge as a valid method in virtual screening and a complementary approach to ligand-focussed methods. The study further reveals that the choice of pharmacophore feature clustering and selection methods has a large impact on the virtual screening hit lists. A prospective application of the SBPs in virtual screening reveals that they can be used successfully to identify novel topoisomerase inhibitors.
Three-dimensional compound comparison methods and their application in drug discovery.
Shin, Woong-Hee; Zhu, Xiaolei; Bures, Mark Gregory; Kihara, Daisuke
2015-07-16
Virtual screening has been widely used in the drug discovery process. Ligand-based virtual screening (LBVS) methods compare a library of compounds with a known active ligand. Two notable advantages of LBVS methods are that they do not require structural information of a target receptor and that they are faster than structure-based methods. LBVS methods can be classified based on the complexity of ligand structure information utilized: one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D). Unlike 1D and 2D methods, 3D methods can have enhanced performance since they treat the conformational flexibility of compounds. In this paper, a number of 3D methods will be reviewed. In addition, four representative 3D methods were benchmarked to understand their performance in virtual screening. Specifically, we tested overall performance in key aspects including the ability to find dissimilar active compounds, and computational speed.
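The 2D baseline that 3D LBVS methods are compared against is typically a Tanimoto coefficient over fingerprint bits, which can be stated in a few lines:

```python
# Tanimoto coefficient over the 'on' bits of two binary fingerprints,
# the standard 2D similarity underlying LBVS baselines.

def tanimoto(fp_a, fp_b):
    """fp_a, fp_b: sets of set-bit indices."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0
```

3D methods replace this bit comparison with conformation-dependent measures such as shape overlap, at higher computational cost.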
Monocular display unit for 3D display with correct depth perception
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Hosomi, Takashi
2009-11-01
A study of virtual-reality systems has been popular and its technology has been applied to medical engineering, educational engineering, CAD/CAM systems and so on. 3D imaging display systems come in two types by presentation method: one is a 3D display system using special glasses and the other is a monitor system requiring no special glasses. Liquid crystal displays (LCDs) have recently come into common use. Such a display unit can provide a display area of the same size as the image screen on the panel. A display system requiring no special glasses is useful for a 3D TV monitor, but it has the demerit that the size of the monitor restricts the visual field for displaying images. Thus the conventional display can show only one screen, and it is impossible to enlarge the screen, for example to twice its size. To enlarge the display area, the authors have developed an enlarging method of the display area using a mirror. Our extension method enables observers to view the virtual image plane and enlarges the screen area twofold. In the developed display unit, we made use of an image separating technique using polarized glasses, a parallax barrier or a lenticular lens screen for 3D imaging. The mirror generates the virtual image plane and enlarges the screen area twofold. Meanwhile, the 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.
Gu, Jiali; Liu, Min; Guo, Fei; Xie, Wenping; Lu, Wenqiang; Ye, Lidan; Chen, Zhirong; Yuan, Shenfeng; Yu, Hongwei
2014-02-05
Mandelate racemase (MR) is a promising candidate for the dynamic kinetic resolution of racemates. However, the poor activity of MR towards most of its non-natural substrates limits its widespread application. In this work, a virtual screening method based on the binding energy in the transition state was established to assist in the screening of MR mutants with enhanced catalytic efficiency. Using R-3-chloromandelic acid as a model substrate, a total of 53 mutants were constructed based on rational design in the two rounds of screening. The number of mutants for experimental validation was brought down to 17 by the virtual screening method, among which 14 variants turned out to possess improved catalytic efficiency. The variant V26I/Y54V showed 5.2-fold higher catalytic efficiency (kcat/Km) towards R-3-chloromandelic acid than that observed for the wild-type enzyme. Using this strategy, mutants were successfully obtained for two other substrates, R-mandelamide and R-2-naphthylglycolate (V26I and V29L, respectively), both with a 2-fold improvement in catalytic efficiency. These results demonstrated that this method could effectively predict the trend of mutational effects on catalysis. Analysis from the energetic and structural assays indicated that the enhanced interactions between the active sites and the substrate in the transition state led to improved catalytic efficiency. It was concluded that this virtual screening method based on the binding energy in the transition state was beneficial in enzyme rational redesign and helped to better understand the catalytic properties of the enzyme. Copyright © 2013 Elsevier Inc. All rights reserved.
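The screening logic, keep only mutants whose predicted transition-state binding energy improves on wild type and rank the best first, can be sketched as follows; the mutant names echo the abstract but the energies are placeholders, not values from the study.

```python
# Keep mutants whose predicted transition-state binding energy (arbitrary
# units, lower = stronger TS stabilization) beats wild type; best first.
# The energies used in testing are placeholders, not values from the study.

def prioritize(mutant_energies, wild_type_energy):
    better = {m: e for m, e in mutant_energies.items() if e < wild_type_energy}
    return sorted(better, key=better.get)
```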
Background: Accurate prediction of in vivo toxicity from in vitro testing is a challenging problem. Large public–private consortia have been formed with the goal of improving chemical safety assessment by means of high-throughput screening. Methods and results: A database co...
Usability Testing and Workflow Analysis of the TRADOC Data Visualization Tool
2012-09-01
software such as blink data, saccades, and cognitive load based on pupil contraction. Eye-tracking was only a component of the data evaluated. Usability feedback noted that line charts were hard to read, and that projecting the charts directly onto the regions increased clutter on the screen and was a poor stylistic choice.
Structure-Based Virtual Screening of Commercially Available Compound Libraries.
Kireev, Dmitri
2016-01-01
Virtual screening (VS) is an efficient hit-finding tool. Its distinctive strength is that it allows one to screen compound libraries that are not available in the lab. Moreover, structure-based (SB) VS also enables an understanding of how the hit compounds bind the protein target, thus laying ground work for the rational hit-to-lead progression. SBVS requires a very limited experimental effort and is particularly well suited for academic labs and small biotech companies that, unlike pharmaceutical companies, do not have physical access to quality small-molecule libraries. Here, we describe SBVS of commercial compound libraries for Mer kinase inhibitors. The screening protocol relies on the docking algorithm Glide complemented by a post-docking filter based on structural protein-ligand interaction fingerprints (SPLIF).
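The post-docking filtering step named above can be illustrated with a toy sketch in the spirit of SPLIF: keep only docked poses whose protein-ligand interaction fingerprint resembles that of a reference crystallographic pose. Real SPLIF fingerprints encode 3D contact fragments; here poses are reduced to plain bit sets, and the pose ids, bits, and cutoff are assumptions:

```python
# Toy post-docking interaction-fingerprint filter. Fingerprints are sets of
# "on" bits standing in for protein-ligand contacts; real SPLIF encodings
# are richer. All ids and values are invented for illustration.

def tanimoto(a, b):
    """Tanimoto similarity of two fingerprints given as sets of on-bits."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def splif_filter(poses, reference_fp, threshold=0.6):
    """Return pose ids whose fingerprint passes the similarity cutoff."""
    return [pid for pid, fp in poses.items()
            if tanimoto(fp, reference_fp) >= threshold]

reference = {1, 4, 7, 9, 12}            # contacts seen in the crystal pose
poses = {
    "pose_a": {1, 4, 7, 9, 12},         # reproduces the reference contacts
    "pose_b": {1, 4, 7, 10},            # partial overlap, below cutoff
    "pose_c": {2, 3, 5},                # unrelated binding mode
}
kept = splif_filter(poses, reference)
```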
Virtual daily living test to screen for mild cognitive impairment using kinematic movement analysis
Seo, Kyoungwon; Kim, Jae-kwan; Oh, Dong Hoon
2017-01-01
Questionnaires or computer-based tests for assessing activities of daily living are well-known approaches to screen for mild cognitive impairment (MCI). However, questionnaires are subjective and computerized tests only collect simple performance data with conventional input devices such as a mouse and keyboard. This study explored the validity and discriminative power of a virtual daily living test as a new diagnostic approach to assess MCI. Twenty-two healthy controls and 20 patients with MCI were recruited. The virtual daily living test presents two complex daily living tasks in an immersive virtual reality environment. The tasks were conducted based on subject body movements and detailed behavioral data (i.e., kinematic measures) were collected. Performance in both the proposed virtual daily living test and conventional neuropsychological tests for patients with MCI was compared to healthy controls. Kinematic measures considered in this study, such as body movement trajectory, time to completion, and speed, distinguished patients with MCI from healthy controls, F(8, 33) = 5.648, p < 0.001, η² = 0.578. When both hand and head speed were employed in conjunction with the immediate free-recall test, a conventional neuropsychological test, the discrimination power for screening MCI was significantly improved to 90% sensitivity and 95.5% specificity (cf. the immediate free-recall test alone has 80% sensitivity and 77.3% specificity). Inclusion of the kinematic measures in screening for MCI significantly improved the classification of patients with MCI compared to the healthy control group, Wilks’ Lambda = 0.451, p < 0.001. PMID:28738088
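The sensitivity and specificity figures quoted above follow directly from a confusion matrix over screened subjects. A minimal sketch, with invented labels rather than the study's data:

```python
# Sensitivity = true-positive rate among patients; specificity = true-negative
# rate among healthy controls. The labels below are toy data for illustration.

def sensitivity_specificity(y_true, y_pred):
    """Compute (sensitivity, specificity) from binary labels (1 = patient)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

y_true = [1, 1, 1, 1, 0, 0, 0, 0]   # 1 = MCI patient, 0 = healthy control
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]   # screening-tool output
sens, spec = sensitivity_specificity(y_true, y_pred)
```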
Brodney, Marian D; Brosius, Arthur D; Gregory, Tracy; Heck, Steven D; Klug-McLeod, Jacquelyn L; Poss, Christopher S
2009-12-01
Advances in the field of drug discovery have brought an explosion in the quantity of data available to medicinal chemists and other project team members. New strategies and systems are needed to help these scientists to efficiently gather, organize, analyze, annotate, and share data about potential new drug molecules of interest to their project teams. Herein we describe a suite of integrated services and end-user applications that facilitate these activities throughout the medicinal chemistry design cycle. The Automated Data Presentation (ADP) and Virtual Compound Profiler (VCP) processes automate the gathering, organization, and storage of real and virtual molecules, respectively, and associated data. The Project-Focused Activity and Knowledge Tracker (PFAKT) provides a unified data analysis and collaboration environment, enhancing decision-making, improving team communication, and increasing efficiency.
NASA Astrophysics Data System (ADS)
Liu, Shuai; Chen, Ge; Yao, Shifeng; Tian, Fenglin; Liu, Wei
2017-07-01
This paper presents a novel integrated marine visualization framework which focuses on processing and analyzing multi-dimensional spatiotemporal marine data in one workflow. Effective marine data visualization is needed in oceanographic research for extracting useful patterns, recognizing changes, and understanding physical processes. However, the multi-source, multi-format, multi-dimensional characteristics of marine data pose a challenge for interactive and timely marine data analysis and visualization in one workflow. A global multi-resolution virtual terrain environment is also needed to give oceanographers and the public a real geographic background reference and to help them identify the geographical variation of ocean phenomena. This paper introduces a data integration and processing method to efficiently visualize and analyze the heterogeneous marine data. Based on the data we processed, several GPU-based visualization methods are explored to interactively demonstrate marine data. GPU-tessellated global terrain rendering using ETOPO1 data is realized and the video memory usage is controlled to ensure high efficiency. A modified ray-casting algorithm for the uneven multi-section Argo volume data is also presented, and the transfer function is designed to analyze the 3D structure of ocean phenomena. Based on the framework we designed, an integrated visualization system is realized, and the effectiveness and efficiency of the framework are demonstrated. This system is expected to make a significant contribution to the demonstration and understanding of marine physical processes in a virtual global environment.
Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers
NASA Astrophysics Data System (ADS)
Dreher, Patrick; Scullin, William; Vouk, Mladen
2015-09-01
Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers.
Evaluating virtual hosted desktops for graphics-intensive astronomy
NASA Astrophysics Data System (ADS)
Meade, B. F.; Fluke, C. J.
2018-04-01
Visualisation of data is critical to understanding astronomical phenomena. Today, many instruments produce datasets that are too big to be downloaded to a local computer, yet many of the visualisation tools used by astronomers are deployed only on desktop computers. Cloud computing is increasingly used to provide a computation and simulation platform in astronomy, but it also offers great potential as a visualisation platform. Virtual hosted desktops, with graphics processing unit (GPU) acceleration, allow interactive, graphics-intensive desktop applications to operate co-located with astronomy datasets stored in remote data centres. By combining benchmarking and user experience testing, with a cohort of 20 astronomers, we investigate the viability of replacing physical desktop computers with virtual hosted desktops. In our work, we compare two Apple MacBook computers (one old and one new, representing hardware at opposite ends of the useful lifetime) with two virtual hosted desktops: one commercial (Amazon Web Services) and one in a private research cloud (the Australian NeCTAR Research Cloud). For two-dimensional image-based tasks and graphics-intensive three-dimensional operations - typical of astronomy visualisation workflows - we found that benchmarks do not necessarily provide the best indication of performance. When compared to typical laptop computers, virtual hosted desktops can provide a better user experience, even with lower performing graphics cards. We also found that virtual hosted desktops are equally simple to use, provide greater flexibility in choice of configuration, and may actually be a more cost-effective option for typical usage profiles.
Virtual gastrointestinal colonoscopy in combination with large bowel endoscopy: Clinical application
He, Qing; Rao, Ting; Guan, Yong-Song
2014-01-01
Although colorectal cancer (CRC) is no longer the leading cause of cancer death worldwide, thanks to the rapid development of computed tomography (CT), magnetic resonance imaging, positron emission tomography/CT and virtual colonoscopy for early detection, CRC-related mortality remains high. The objective of CRC screening is to reduce the burden of CRC and thereby the morbidity and mortality rates of the disease. It is believed that this goal can be achieved by regularly screening the average-risk population, enabling the detection of cancer at early, curable stages, and of polyps before they become cancerous. Large-scale screening with multimodality imaging approaches plays an important role in reaching that goal: detecting polyps, Crohn’s disease, ulcerative colitis and CRC at an early stage. This article reviews representative imaging procedures for various screening options and updates the detection, staging and re-staging of CRC patients for determining the optimal therapeutic method and forecasting the risk of CRC recurrence and the overall prognosis. The combined use of virtual colonoscopy and conventional endoscopy, and the advantages and limitations of these modalities, are also discussed. PMID:25320519
Lin, Chun-Yuan; Wang, Yen-Ling
2014-01-01
Checkpoint kinase 2 (Chk2) plays an important role in the DNA-damage response, particularly to DNA double-strand breaks and related lesions. This study concentrates on Chk2, with the purpose of finding potential inhibitors by pharmacophore hypotheses (PhModels), combinatorial fusion, and virtual screening techniques. Applying combinatorial fusion to PhModels and virtual screening is a novel strategy for drug design. We used combinatorial fusion to analyze the prediction results and obtained the best correlation coefficient for the testing set (r(test)), with a value of 0.816, by combining the Best(train)Best(test) and Fast(train)Fast(test) prediction results. Potential inhibitors were selected from the NCI database by screening according to the Best(train)Best(test) + Fast(train)Fast(test) prediction results and by molecular docking with the CDOCKER docking program. The selected compounds show high ligand-receptor interaction energies. Through these approaches, 23 potential inhibitors of Chk2 were retrieved for further study.
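The score-level combinatorial fusion step (summing the Best and Fast model predictions, then re-ranking compounds on the fused score) can be sketched as follows; the compound ids and scores are invented for illustration, not values from the study:

```python
# Toy score-level fusion: the fused score of a compound is the sum of its two
# model scores, and compounds are re-ranked on that sum. Ids/scores invented.

def fuse_scores(best_scores, fast_scores):
    """Fused score = Best + Fast; return compound ids ranked best-first."""
    fused = {cid: best_scores[cid] + fast_scores[cid] for cid in best_scores}
    return sorted(fused, key=fused.get, reverse=True)

best = {"NCI-001": 7.2, "NCI-002": 5.9, "NCI-003": 6.4}
fast = {"NCI-001": 6.8, "NCI-002": 7.5, "NCI-003": 5.0}
ranking = fuse_scores(best, fast)
```

A compound that scores moderately in both models (NCI-002 here) can outrank one that excels in only a single model, which is the usual motivation for fusing multiple predictors.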
NASA Astrophysics Data System (ADS)
Yu, Miao; Gu, Qiong; Xu, Jun
2018-02-01
PI3Kα is a promising drug target for cancer chemotherapy. In this paper, we report a strategy of combining ligand-based and structure-based virtual screening to identify new PI3Kα inhibitors. First, naïve Bayesian (NB) learning models and a 3D-QSAR pharmacophore model were built based upon known PI3Kα inhibitors. Then, the SPECS library was screened by the best NB model. This resulted in virtual hits, which were validated by matching the structures against the pharmacophore models. The pharmacophore-matched hits were then docked into PI3Kα crystal structures to form ligand-receptor complexes, which were further validated by the Glide-XP program to yield structurally validated hits. These hits were examined by PI3Kα inhibitory assay. With this screening protocol, ten PI3Kα inhibitors with new scaffolds were discovered, with IC50 values ranging from 0.44 to 31.25 μM. The binding affinities for the most active compounds 33 and 74 were estimated through molecular dynamics simulations and MM-PBSA analyses.
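The ligand-based first stage can be illustrated with a toy Bernoulli naïve Bayesian scorer over binary fingerprint bits: per-bit log-likelihood ratios are learned from known actives and inactives, and a library compound's score is the sum over its on-bits. The fingerprints, labels, and smoothing below are invented; real models use far richer descriptors and larger training sets:

```python
# Toy Bernoulli naive Bayes over fingerprint bits (sets of on-bit indices).
# All training data are invented for illustration.

from math import log

def train_nb(fps, labels, n_bits, alpha=1.0):
    """Per-bit log-likelihood ratios (active vs inactive), Laplace-smoothed."""
    act = [fp for fp, y in zip(fps, labels) if y == 1]
    ina = [fp for fp, y in zip(fps, labels) if y == 0]
    weights = []
    for b in range(n_bits):
        pa = (sum(b in fp for fp in act) + alpha) / (len(act) + 2 * alpha)
        pi = (sum(b in fp for fp in ina) + alpha) / (len(ina) + 2 * alpha)
        weights.append(log(pa / pi))
    return weights

def score(fp, weights):
    """Sum of log-likelihood ratios over a compound's on-bits."""
    return sum(weights[b] for b in fp)

train_fps = [{0, 2}, {0, 3}, {1, 3}, {1, 2}]
labels = [1, 1, 0, 0]   # bit 0 enriched in actives, bit 1 in inactives
w = train_nb(train_fps, labels, n_bits=4)
s_active_like = score({0}, w)     # compound carrying the "active" bit
s_inactive_like = score({1}, w)   # compound carrying the "inactive" bit
```

In a screening protocol like the one above, library compounds would be ranked by this score and only the top fraction passed on to pharmacophore matching and docking.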
A Standardized Approach to Topographic Data Processing and Workflow Management
NASA Astrophysics Data System (ADS)
Wheaton, J. M.; Bailey, P.; Glenn, N. F.; Hensleigh, J.; Hudak, A. T.; Shrestha, R.; Spaete, L.
2013-12-01
An ever-increasing list of options exist for collecting high resolution topographic data, including airborne LIDAR, terrestrial laser scanners, bathymetric SONAR and structure-from-motion. An equally rich, arguably overwhelming, variety of tools exists with which to organize, quality control, filter, analyze and summarize these data. However, scientists are often left to cobble together their analysis as a series of ad hoc steps, often using custom scripts and one-time processes that are poorly documented and rarely shared with the community. Even when literature-cited software tools are used, the input and output parameters differ from tool to tool. These parameters are rarely archived and the steps performed lost, making the analysis virtually impossible to replicate precisely. What is missing is a coherent, robust, framework for combining reliable, well-documented topographic data-processing steps into a workflow that can be repeated and even shared with others. We have taken several popular topographic data processing tools - including point cloud filtering and decimation as well as DEM differencing - and defined a common protocol for passing inputs and outputs between them. This presentation describes a free, public online portal that enables scientists to create custom workflows for processing topographic data using a number of popular topographic processing tools. Users provide the inputs required for each tool and in what sequence they want to combine them. This information is then stored for future reuse (and optionally sharing with others) before the user then downloads a single package that contains all the input and output specifications together with the software tools themselves. The user then launches the included batch file that executes the workflow on their local computer against their topographic data. This ZCloudTools architecture helps standardize, automate and archive topographic data processing. 
It also represents a forum for discovering and sharing effective topographic processing workflows.
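The core idea of such a portal (a stored, re-runnable sequence of tool steps whose inputs and outputs are chained and archived) can be sketched as follows; the tool names and parameters are hypothetical stand-ins, not the actual ZCloudTools interface:

```python
# Sketch of a declarative, archivable workflow: each step names a tool and its
# parameters; outputs feed the next step; the JSON log is the reusable recipe.
# Tools here are trivial stand-ins for point-cloud filtering and decimation.

import json

def run_workflow(steps, tools, source):
    """Run (tool_name, params) steps in order; return final output and a log."""
    data, log = source, []
    for name, params in steps:
        data = tools[name](data, **params)
        log.append({"tool": name, "params": params})
    return data, json.dumps(log)   # archived recipe: re-run to replicate

tools = {
    "filter": lambda pts, zmin: [p for p in pts if p >= zmin],
    "decimate": lambda pts, keep_every: pts[::keep_every],
}
points = [0.2, 1.5, 3.1, 0.9, 2.7, 4.0]   # toy 1D "elevations"
result, recipe = run_workflow(
    [("filter", {"zmin": 1.0}), ("decimate", {"keep_every": 2})], tools, points)
```

Archiving the recipe alongside the data is what makes the analysis replicable, which is exactly the gap the abstract identifies in ad hoc scripted workflows.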
Lessons Learned From A Study Of Genomics-Based Carrier Screening For Reproductive Decision Making.
Wilfond, Benjamin S; Kauffman, Tia L; Jarvik, Gail P; Reiss, Jacob A; Richards, C Sue; McMullen, Carmit; Gilmore, Marian; Himes, Patricia; Kraft, Stephanie A; Porter, Kathryn M; Schneider, Jennifer L; Punj, Sumit; Leo, Michael C; Dickerson, John F; Lynch, Frances L; Clarke, Elizabeth; Rope, Alan F; Lutz, Kevin; Goddard, Katrina A B
2018-05-01
Genomics-based carrier screening is one of many opportunities to use genomic information to inform medical decision making, but clinicians, health care delivery systems, and payers need to determine whether to offer screening and how to do so in an efficient, ethical way. To shed light on this issue, we conducted a study in the period 2014-17 to inform the design of clinical screening programs and guide further health services research. Many of our results have been published elsewhere; this article summarizes the lessons we learned from that study and offers policy insights. Our experience can inform understanding of the potential impact of expanded carrier screening services on health system workflows and workforces, impacts that depend on the details of the screening approach. We found limited patient or health system harms from expanded screening. We also found that some patients valued the information they learned from the process. Future policy discussions should consider the value of offering such expanded carrier screening in health delivery systems with limited resources.
Zdrazil, B.; Neefs, J.-M.; Van Vlijmen, H.; Herhaus, C.; Caracoti, A.; Brea, J.; Roibás, B.; Loza, M. I.; Queralt-Rosinach, N.; Furlong, L. I.; Gaulton, A.; Bartek, L.; Senger, S.; Chichester, C.; Engkvist, O.; Evelo, C. T.; Franklin, N. I.; Marren, D.; Ecker, G. F.
2016-01-01
Phenotypic screening is in a renaissance phase and is expected by many academic and industry leaders to accelerate the discovery of new drugs for new biology. Given that phenotypic screening is per definition target agnostic, the emphasis of in silico and in vitro follow-up work is on the exploration of possible molecular mechanisms and efficacy targets underlying the biological processes interrogated by the phenotypic screening experiments. Herein, we present six exemplar computational protocols for the interpretation of cellular phenotypic screens based on the integration of compound, target, pathway, and disease data established by the IMI Open PHACTS project. The protocols annotate phenotypic hit lists and allow follow-up experiments and mechanistic conclusions. The annotations included are from ChEMBL, ChEBI, GO, WikiPathways and DisGeNET. Also provided are protocols which select from the IUPHAR/BPS Guide to PHARMACOLOGY interaction file selective compounds to probe potential targets and a correlation robot which systematically aims to identify an overlap of active compounds in both the phenotypic as well as any kinase assay. The protocols are applied to a phenotypic pre-lamin A/C splicing assay selected from the ChEMBL database to illustrate the process. The computational protocols make use of the Open PHACTS API and data and are built within the Pipeline Pilot and KNIME workflow tools. PMID:27774140
Falgreen, Steffen; Laursen, Maria Bach; Bødker, Julie Støve; Kjeldsen, Malene Krag; Schmitz, Alexander; Nyegaard, Mette; Johnsen, Hans Erik; Dybkær, Karen; Bøgsted, Martin
2014-06-05
In vitro generated dose-response curves of human cancer cell lines are widely used to develop new therapeutics. The curves are summarised by simplified statistics that ignore the conventionally used dose-response curves' dependency on drug exposure time and growth kinetics. This may lead to suboptimal exploitation of data and biased conclusions on the potential of the drug in question. Therefore we set out to improve the dose-response assessments by eliminating the impact of time dependency. First, a mathematical model for drug induced cell growth inhibition was formulated and used to derive novel dose-response curves and improved summary statistics that are independent of time under the proposed model. Next, a statistical analysis workflow for estimating the improved statistics was suggested consisting of 1) nonlinear regression models for estimation of cell counts and doubling times, 2) isotonic regression for modelling the suggested dose-response curves, and 3) resampling based method for assessing variation of the novel summary statistics. We document that conventionally used summary statistics for dose-response experiments depend on time so that fast growing cell lines compared to slowly growing ones are considered overly sensitive. The adequacy of the mathematical model is tested for doxorubicin and found to fit real data to an acceptable degree. Dose-response data from the NCI60 drug screen were used to illustrate the time dependency and demonstrate an adjustment correcting for it. The applicability of the workflow was illustrated by simulation and application on a doxorubicin growth inhibition screen. The simulations show that under the proposed mathematical model the suggested statistical workflow results in unbiased estimates of the time independent summary statistics. 
Variance estimates of the novel summary statistics are used to conclude that the doxorubicin screen covers a significant diverse range of responses ensuring it is useful for biological interpretations. Time independent summary statistics may aid the understanding of drugs' action mechanism on tumour cells and potentially renew previous drug sensitivity evaluation studies.
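Step 2 of the proposed statistical workflow, isotonic regression of the dose-response curve, can be sketched with the pool-adjacent-violators algorithm fitting a non-increasing curve to noisy viability measurements; the data values are invented toy numbers, not the doxorubicin screen:

```python
# Pool-adjacent-violators (PAVA) fit of a non-increasing dose-response curve.
# Adjacent values that violate monotonicity are merged into weighted means.
# Viability values below are invented toy data.

def isotonic_decreasing(y):
    """Return the non-increasing least-squares fit to the sequence y."""
    blocks = [[v, 1] for v in y]   # each block: [mean, weight]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] < blocks[i + 1][0]:   # violates non-increasing order
            m0, w0 = blocks[i]
            m1, w1 = blocks[i + 1]
            blocks[i] = [(m0 * w0 + m1 * w1) / (w0 + w1), w0 + w1]
            del blocks[i + 1]
            i = max(i - 1, 0)   # a merge may create a new violation upstream
        else:
            i += 1
    fit = []
    for mean, weight in blocks:
        fit.extend([mean] * weight)
    return fit

viability = [1.00, 0.95, 0.97, 0.60, 0.65, 0.20]  # noisy, roughly decreasing
curve = isotonic_decreasing(viability)
```

The appeal of the isotonic fit in this setting is that it imposes only monotonicity, not a parametric curve shape, so the derived summary statistics are not biased by a mis-specified model.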
Golovin, A V; Smirnov, I V; Stepanova, A V; Zalevskiy, A O; Zlobin, A S; Ponomarenko, N A; Belogurov, A A; Knorre, V D; Hurs, E N; Chatziefthimiou, S D; Wilmanns, M; Blackburn, G M; Khomutov, R M; Gabibov, A G
2017-07-01
We propose performing quantum mechanical/molecular dynamics calculations of chemical reactions that are to be catalyzed by antibodies, and then conducting a virtual screening of a library of potential antibody mutants to select an optimal biocatalyst. We tested the effectiveness of this approach on the hydrolysis of the organophosphorus toxicant paraoxon, using kinetic approaches and X-ray analysis of a de novo designed antibody biocatalyst.
Sanhueza, Carlos A; Cartmell, Jonathan; El-Hawiet, Amr; Szpacenko, Adam; Kitova, Elena N; Daneshfar, Rambod; Klassen, John S; Lang, Dean E; Eugenio, Luiz; Ng, Kenneth K-S; Kitov, Pavel I; Bundle, David R
2015-01-07
A focused library of virtual heterobifunctional ligands was generated in silico and a set of ligands with recombined fragments was synthesized and evaluated for binding to Clostridium difficile toxins. The position of the trisaccharide fragment was used as a reference for filtering docked poses during virtual screening to match the trisaccharide ligand in a crystal structure. The peptoid, a diversity fragment probing the protein surface area adjacent to a known binding site, was generated by a multi-component Ugi reaction. Our approach combines modular fragment-based design with in silico screening of synthetically feasible compounds and lays the groundwork for future efforts in development of composite bifunctional ligands for large clostridial toxins.
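The pose-filtering idea above (keep docked poses only when their trisaccharide fragment sits close to the fragment's position in the reference crystal structure) can be sketched as an RMSD cutoff over matched coordinates; the coordinates and the 2 Å cutoff are illustrative assumptions:

```python
# Filter docked poses by RMSD of a reference fragment to its crystallographic
# position. Coordinates are toy 3D points, not real atomic data.

def rmsd(a, b):
    """Root-mean-square deviation between matched 3D coordinate lists."""
    n = len(a)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(a, b))
    return (sq / n) ** 0.5

def filter_poses(poses, reference, cutoff=2.0):
    """Keep pose ids whose fragment RMSD to the crystal pose is under cutoff (Å)."""
    return [pid for pid, coords in poses.items()
            if rmsd(coords, reference) < cutoff]

reference = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
poses = {
    "pose_1": [(0.1, 0.0, 0.0), (1.1, 0.0, 0.0), (2.1, 0.0, 0.0)],  # near-native
    "pose_2": [(5.0, 0.0, 0.0), (6.0, 0.0, 0.0), (7.0, 0.0, 0.0)],  # displaced
}
kept = filter_poses(poses, reference)
```

Anchoring the known fragment this way lets the diversity fragment (the Ugi peptoid) explore the adjacent protein surface while the trisaccharide stays in its validated binding site.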
[Application and prospect of digital technology in the field of orthodontics].
Zhou, Y H
2016-06-01
Three-dimensional (3D) digital technology has brought a revolutionary change to diagnostic planning and treatment strategy in orthodontics. Acquisition of 3D image data of the hard and soft tissues of the patient, diagnostic analysis and treatment prediction, and ultimately the individualized orthodontic appliance will become the development trend and workflow of 3D orthodontics. With the development of 3D digital technology, the traditional plaster model has been gradually replaced by 3D digital models. Meanwhile, 3D facial soft-tissue scans and cone-beam CT scans have been gradually applied to clinical orthodontics, making it possible to obtain 3D virtual anatomical structures of patients. With the help of digital technology, the diagnostic process is much easier for orthodontists. However, mastering the whole digital workflow and putting it into practice in daily work still has a long way to go. The purpose of this article is to enlighten orthodontists interested in digital technology and to discuss the future of digital orthodontics in China.
Fleming, Michael; Olsen, Dale; Stathes, Hilary; Boteler, Laura; Grossberg, Paul; Pfeifer, Judie; Schiro, Stephanie; Banning, Jane; Skochelak, Susan
2009-01-01
Educating physicians and other health care professionals about the identification and treatment of patients who drink more than recommended limits is an ongoing challenge. An educational randomized controlled trial was conducted to test the ability of a stand-alone training simulation to improve the clinical skills of health care professionals in alcohol screening and intervention. The "virtual reality simulation" combined video, voice recognition, and nonbranching logic to create an interactive environment that allowed trainees to encounter complex social cues and realistic interpersonal exchanges. The simulation included 707 questions and statements and 1207 simulated patient responses. A sample of 102 health care professionals (10 physicians; 30 physician assistants or nurse practitioners; 36 medical students; 26 pharmacy, physican assistant, or nurse practitioner students) were randomly assigned to a no training group (n = 51) or a computer-based virtual reality intervention (n = 51). Professionals in both groups had similar pretest standardized patient alcohol screening skill scores: 53.2 (experimental) vs 54.4 (controls), 52.2 vs 53.7 alcohol brief intervention skills, and 42.9 vs 43.5 alcohol referral skills. After repeated practice with the simulation there were significant increases in the scores of the experimental group at 6 months after randomization compared with the control group for the screening (67.7 vs 58.1; P < .001) and brief intervention (58.3 vs 51.6; P < .04) scenarios. The technology tested in this trial is the first virtual reality simulation to demonstrate an increase in the alcohol screening and brief intervention skills of health care professionals.
Zygouris, Stelios; Giakoumis, Dimitrios; Votis, Konstantinos; Doumpoulakis, Stefanos; Ntovas, Konstantinos; Segkouli, Sofia; Karagiannidis, Charalampos; Tzovaras, Dimitrios; Tsolaki, Magda
2015-01-01
Recent research advocates the potential of virtual reality (VR) applications in assessing cognitive functions highlighting the possibility of using a VR application for mild cognitive impairment (MCI) screening. The aim of this study is to investigate whether a VR cognitive training application, the virtual supermarket (VSM), can be used as a screening tool for MCI. Two groups, one of healthy older adults (n = 21) and one of MCI patients (n = 34), were recruited from day centers for cognitive disorders and administered the VSM and a neuropsychological test battery. The performance of the two groups in the VSM was compared and correlated with performance in established neuropsychological tests. At the same time, the effectiveness of a combination of traditional neuropsychological tests and the VSM was examined. VSM displayed a correct classification rate (CCR) of 87.30% when differentiating between MCI patients and healthy older adults, while it was unable to differentiate between MCI subtypes. At the same time, the VSM correlates with various established neuropsychological tests. A limited number of tests were able to improve the CCR of the VSM when combined with the VSM for screening purposes. VSM appears to be a valid method of screening for MCI in an older adult population though it cannot be used for MCI subtype assessment. VSM's concurrent validity is supported by the large number of correlations between the VSM and established tests. It is considered a robust test on its own as the inclusion of other tests failed to improve its CCR significantly.
Implementation and Challenges of Direct Acoustic Dosing into Cell-Based Assays.
Roberts, Karen; Callis, Rowena; Ikeda, Tim; Paunovic, Amalia; Simpson, Carly; Tang, Eric; Turton, Nick; Walker, Graeme
2016-02-01
Since the adoption of Labcyte Echo Acoustic Droplet Ejection (ADE) technology by AstraZeneca in 2005, ADE has become the preferred method for compound dosing into both biochemical and cell-based assays across AstraZeneca research and development globally. The initial implementation of Echos and the direct dosing workflow provided AstraZeneca with a unique set of challenges. In this article, we outline how direct Echo dosing has evolved over the past decade in AstraZeneca. We describe the practical challenges of applying ADE technology to 96-well, 384-well, and 1536-well assays and how AstraZeneca developed and applied software and robotic solutions to generate fully automated and effective cell-based assay workflows. © 2015 Society for Laboratory Automation and Screening.
Uniqueness of Experience and Virtual Playworlds: Playing Is Not Just for Fun
ERIC Educational Resources Information Center
Talamo, Alessandra; Pozzi, Simone; Mellini, Barbara
2010-01-01
Social interactions within virtual communities are often described solely as being online experiences. Such descriptions are limited, for they fail to reference life external to the screen. The terms "virtual" and "real" have a negative connotation for many people and can even be interpreted to mean that something is "false" or "inauthentic."…
Toward a Virtual Laboratory to Assess Biodiversity from Data Produced by an Underwater Microscope
NASA Astrophysics Data System (ADS)
Beaulieu, S.; Ball, M.; Futrelle, J.; Sosik, H. M.
2016-12-01
Real-time data from sensors deployed in the ocean are increasingly available online for broad use by scientists, educators, and the public. Such data have previously been limited to physical parameters, but data for biological parameters are becoming more prevalent with the development of new submersible instruments. Imaging FlowCytobot (IFCB), for example, automatically and rapidly acquires images of microscopic algae (phytoplankton) at the base of the food web in marine ecosystems. These images and products from image processing and automated classification are accessible via web services from an IFCB dashboard. However, until now, processing these data further into results representing the biodiversity of the phytoplankton required a complex workflow that could only be executed by scientists involved in the instrument development. Also, because these data have been collected near-continuously for a decade, a number of "big data" challenges arise in attempting to implement and reproduce the workflow. Our research is geared toward the development of a virtual laboratory to enable other scientists and educators, as new users of data from this underwater microscope, to generate biodiversity data products. Our solution involves an electronic notebook (Jupyter Notebook) that can be re-purposed by users with some Python programming experience. However, when we scaled the virtual laboratory to accommodate a 2-month example time series (thousands of binned files, each representing thousands of images), we needed to expand the execution environment to include batch processing outside of the notebook. We will describe how we packaged these tools so that other scientists can perform their own biodiversity assessments from data available on an IFCB dashboard.
Additional outcomes of software development in this project include a prototype for time-series visualizations to be generated in near-real-time and recommendations for new products accessible via web services from the IFCB dashboard.
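At its core, the batch step described above reduces to mapping a per-bin classifier over many binned files and merging the per-bin taxon counts into one biodiversity tally. A minimal sketch with hypothetical names (the real IFCB pipeline is considerably more involved):

```python
def process_bins(bin_ids, classify):
    """Run `classify` (bin_id -> {taxon: count}) over many binned files
    and merge the per-bin results into one biodiversity tally."""
    totals = {}
    for bin_id in bin_ids:
        for taxon, count in classify(bin_id).items():
            totals[taxon] = totals.get(taxon, 0) + count
    return totals
```

Because each bin is independent, this loop parallelizes trivially, which is what makes moving it out of the notebook into batch execution straightforward.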
Genomics Virtual Laboratory: A Practical Bioinformatics Workbench for the Cloud
Afgan, Enis; Sloggett, Clare; Goonasekera, Nuwan; Makunin, Igor; Benson, Derek; Crowe, Mark; Gladman, Simon; Kowsar, Yousef; Pheasant, Michael; Horst, Ron; Lonie, Andrew
2015-01-01
Background Analyzing high throughput genomics data is a complex and compute intensive task, generally requiring numerous software tools and large reference data sets, tied together in successive stages of data transformation and visualisation. A computational platform enabling best practice genomics analysis ideally meets a number of requirements, including: a wide range of analysis and visualisation tools, closely linked to large user and reference data sets; workflow platform(s) enabling accessible, reproducible, portable analyses, through a flexible set of interfaces; highly available, scalable computational resources; and flexibility and versatility in the use of these resources to meet demands and expertise of a variety of users. Access to an appropriate computational platform can be a significant barrier to researchers, as establishing such a platform requires a large upfront investment in hardware, experience, and expertise. Results We designed and implemented the Genomics Virtual Laboratory (GVL) as a middleware layer of machine images, cloud management tools, and online services that enable researchers to build arbitrarily sized compute clusters on demand, pre-populated with fully configured bioinformatics tools, reference datasets and workflow and visualisation options. The platform is flexible in that users can conduct analyses through web-based (Galaxy, RStudio, IPython Notebook) or command-line interfaces, and add/remove compute nodes and data resources as required. Best-practice tutorials and protocols provide a path from introductory training to practice. The GVL is available on the OpenStack-based Australian Research Cloud (http://nectar.org.au) and the Amazon Web Services cloud. The principles, implementation and build process are designed to be cloud-agnostic. Conclusions This paper provides a blueprint for the design and implementation of a cloud-based Genomics Virtual Laboratory. 
We discuss scope, design considerations and technical and logistical constraints, and explore the value added to the research community through the suite of services and resources provided by our implementation. PMID:26501966
Merema, B J; Kraeima, J; Ten Duis, K; Wendt, K W; Warta, R; Vos, E; Schepers, R H; Witjes, M J H; IJpma, F F A
2017-11-01
An innovative procedure for the development of 3D patient-specific implants with drilling guides for acetabular fracture surgery is presented. By using CT data and 3D surgical planning software, a virtual model of the fractured pelvis was created. During this process the fracture was virtually reduced. Based on the reduced fracture model, patient-specific titanium plates including polyamide drilling guides were designed, 3D printed and milled for intra-operative use. One of the advantages of this procedure is that the personalised plates could be tailored to both the shape of the pelvis and the type of fracture. The optimal screw directions and sizes were predetermined in the 3D model. The virtual plan was translated towards the surgical procedure by using the surgical guides and patient-specific osteosynthesis. Besides the description of the newly developed multi-disciplinary workflow, a clinical case example is presented to demonstrate that this technique is feasible and promising for the operative treatment of complex acetabular fractures. Copyright © 2017 Elsevier Ltd. All rights reserved.
An adaptable navigation strategy for Virtual Microscopy from mobile platforms.
Corredor, Germán; Romero, Eduardo; Iregui, Marcela
2015-04-01
Real integration of Virtual Microscopy with the pathologist service workflow requires the design of adaptable strategies for any hospital service to interact with a set of Whole Slide Images (WSI). Nowadays, mobile devices have the actual potential of supporting an online pervasive network of specialists working together. However, such devices are still very limited. This article introduces a novel, highly adaptable strategy for streaming and visualizing WSI from mobile devices. The presented approach effectively exploits and extends the granularity of the JPEG2000 standard and integrates it with different strategies to achieve a lossless, loosely-coupled, decoder- and platform-independent implementation, adaptable to any interaction model. The performance was evaluated by two expert pathologists interacting with a set of 20 virtual slides. The method efficiently uses the available device resources: the memory usage did not exceed 7% of the device capacity, while decoding times were below 200 ms per Region of Interest, i.e., a window of 256×256 pixels. This model is easily adaptable to other medical imaging scenarios. Copyright © 2015 Elsevier Inc. All rights reserved.
Gladysz, Rafaela; Dos Santos, Fabio Mendes; Langenaeker, Wilfried; Thijs, Gert; Augustyns, Koen; De Winter, Hans
2018-03-07
Spectrophores are novel descriptors that are calculated from the three-dimensional atomic properties of molecules. In our current implementation, the atomic properties used to calculate spectrophores include atomic partial charges, atomic lipophilicity indices, atomic shape deviations and atomic softness properties. This approach can easily be widened to include additional atomic properties. Our novel methodology finds its roots in the experimental affinity fingerprinting technology developed in the 1990s by Terrapin Technologies. Here we have translated it into a purely virtual approach using artificial affinity cages and a simplified metric to calculate the interaction between these cages and the atomic properties. A typical spectrophore consists of a vector of 48 real numbers. This makes it highly suitable for the calculation of a wide range of similarity measures for use in virtual screening and for the investigation of quantitative structure-activity relationships in combination with advanced statistical approaches such as self-organizing maps, support vector machines and neural networks. In our present report we demonstrate the applicability of our novel methodology for scaffold hopping as well as virtual screening.
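Because a spectrophore is just a 48-element vector of real numbers, similarity-based screening reduces to standard vector comparisons. A minimal sketch of two common measures (illustrative only, not the authors' implementation; the 2-element vectors in the ranking example stand in for full 48-element spectrophores):

```python
import math

def euclidean_distance(a, b):
    """Smaller distance = more similar spectrophores."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_similarity(a, b):
    """1.0 for identical directions; another common screening measure."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_library(query, library):
    """Rank molecules by spectrophore distance to a query molecule."""
    return sorted(library, key=lambda m: euclidean_distance(query, m["spectrophore"]))
```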
NASA Astrophysics Data System (ADS)
Wingert, Bentley M.; Oerlemans, Rick; Camacho, Carlos J.
2018-01-01
The goal of virtual screening is to generate a substantially reduced and enriched subset of compounds from a large virtual chemistry space. Critical in these efforts are methods to properly rank the binding affinity of compounds. Prospective evaluations of ranking strategies in the D3R grand challenges show that for targets with deep pockets the best correlations (Spearman ρ 0.5) were obtained by our submissions that docked compounds to the holo-receptors with the most chemically similar ligand. On the other hand, for targets with open pockets, using multiple receptor structures is not a good strategy. Instead, docking to a single optimal receptor led to the best correlations (Spearman ρ 0.5), and overall performs better than any other method. Yet, choosing a suboptimal receptor for cross-docking can significantly undermine the affinity rankings. Our submissions that evaluated the free energy of congeneric compounds were also among the best in the community experiment. Error bars of around 1 kcal/mol are still too large to significantly improve the overall rankings. Collectively, our top-of-the-line predictions show that automated virtual screening with rigid receptors performs better than flexible docking and other more complex methods.
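The Spearman ρ used above to score ranking strategies is computed from the ranks of predicted versus experimental affinities rather than the raw values. A self-contained sketch (assumes no tied values, so the simple rank formula applies):

```python
def spearman_rho(x, y):
    """Spearman rank correlation for equal-length sequences of distinct values."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    # Classic formula: rho = 1 - 6 * sum(d^2) / (n(n^2 - 1))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))
```

A perfectly monotonic prediction gives ρ = 1.0, a reversed ranking gives ρ = -1.0; the ρ ≈ 0.5 values quoted above sit between random (0) and perfect ranking.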
Discovery of Novel New Delhi Metallo-β-Lactamases-1 Inhibitors by Multistep Virtual Screening
Wang, Xuequan; Lu, Meiling; Shi, Yang; Ou, Yu; Cheng, Xiaodong
2015-01-01
The emergence of NDM-1-containing multi-antibiotic-resistant “Superbugs” necessitates the development of novel NDM-1 inhibitors. In this study, we report the discovery of novel NDM-1 inhibitors by multistep virtual screening. From a 2,800,000-compound virtual drug-like library selected from the ZINC database, we generated a focused NDM-1 inhibitor library containing 298 compounds, of which 44 chemical compounds were purchased and evaluated experimentally for their ability to inhibit NDM-1 in vitro. Three novel NDM-1 inhibitors with micromolar IC50 values were validated. The most potent inhibitor, VNI-41, inhibited NDM-1 with an IC50 of 29.6 ± 1.3 μM. Molecular dynamics simulation revealed that VNI-41 interacted extensively with the active site. In particular, the sulfonamide group of VNI-41 interacts directly with the metal ion Zn1 that is critical for catalysis. These results demonstrate the feasibility of applying virtual screening methodologies to identify novel inhibitors for NDM-1, a metallo-β-lactamase with a malleable active site, and provide a mechanistic basis for the rational design of NDM-1 inhibitors using sulfonamide as a functional scaffold. PMID:25734558
iRODS: A Distributed Data Management Cyberinfrastructure for Observatories
NASA Astrophysics Data System (ADS)
Rajasekar, A.; Moore, R.; Vernon, F.
2007-12-01
Large-scale and long-term preservation of both observational and synthesized data requires a system that virtualizes data management concepts. A methodology is needed that can work across long distances in space (distribution) and long-periods in time (preservation). The system needs to manage data stored on multiple types of storage systems including new systems that become available in the future. This concept is called infrastructure independence, and is typically implemented through virtualization mechanisms. Data grids are built upon concepts of data and trust virtualization. These concepts enable the management of collections of data that are distributed across multiple institutions, stored on multiple types of storage systems, and accessed by multiple types of clients. Data virtualization ensures that the name spaces used to identify files, users, and storage systems are persistent, even when files are migrated onto future technology. This is required to preserve authenticity, the link between the record and descriptive and provenance metadata. Trust virtualization ensures that access controls remain invariant as files are moved within the data grid. This is required to track the chain of custody of records over time. The Storage Resource Broker (http://www.sdsc.edu/srb) is one such data grid used in a wide variety of applications in earth and space sciences such as ROADNet (roadnet.ucsd.edu), SEEK (seek.ecoinformatics.org), GEON (www.geongrid.org) and NOAO (www.noao.edu). Recent extensions to data grids provide one more level of virtualization - policy or management virtualization. Management virtualization ensures that execution of management policies can be automated, and that rules can be created that verify assertions about the shared collections of data. When dealing with distributed large-scale data over long periods of time, the policies used to manage the data and provide assurances about the authenticity of the data become paramount. 
The integrated Rule-Oriented Data System (iRODS) (http://irods.sdsc.edu) provides the mechanisms needed to describe not only management policies, but also to track how the policies are applied and their execution results. The iRODS data grid maps management policies to rules that control the execution of the remote micro-services. As an example, a rule can be created that automatically creates a replica whenever a file is added to a specific collection, or extracts its metadata automatically and registers it in a searchable catalog. For the replication operation, the persistent state information consists of the replica location, the creation date, the owner, the replica size, etc. The mechanism used by iRODS for providing policy virtualization is based on well-defined functions, called micro-services, which are chained into alternative workflows using rules. A rule engine, based on the event-condition-action paradigm executes the rule-based workflows after an event. Rules can be deferred to a pre-determined time or executed on a periodic basis. As the data management policies evolve, the iRODS system can implement new rules, new micro-services, and new state information (metadata content) needed to manage the new policies. Each sub- collection can be managed using a different set of policies. The discussion of the concepts in rule-based policy virtualization and its application to long-term and large-scale data management for observatories such as ORION and NEON will be the basis of the paper.
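The event-condition-action paradigm described for the iRODS rule engine can be illustrated with a toy dispatcher (hypothetical names and Python, not actual iRODS rule syntax): a rule fires its chain of "micro-services" only when an event matches and its condition holds.

```python
# Toy event-condition-action dispatcher in the spirit of the iRODS rule
# engine described above: rules chain micro-services, triggered by events.
class RuleEngine:
    def __init__(self):
        self.rules = []  # list of (event, condition, actions) tuples

    def add_rule(self, event, condition, actions):
        self.rules.append((event, condition, actions))

    def fire(self, event, context):
        """Run the action chain of every rule whose event and condition match."""
        results = []
        for ev, cond, actions in self.rules:
            if ev == event and cond(context):
                for action in actions:  # chained "micro-services"
                    results.append(action(context))
        return results

# Hypothetical micro-services: replicate a new file, then register metadata.
def replicate(ctx):
    return f"replicated {ctx['path']} to {ctx['replica_store']}"

def register_metadata(ctx):
    return f"registered metadata for {ctx['path']}"

engine = RuleEngine()
engine.add_rule(
    "data_object_put",                               # event
    lambda ctx: ctx["collection"] == "observatory",  # condition
    [replicate, register_metadata],                  # action chain
)
```

This mirrors the example in the abstract: adding a file to a specific collection automatically triggers replication and metadata registration.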
DOVIS: an implementation for high-throughput virtual screening using AutoDock.
Zhang, Shuxing; Kumar, Kamal; Jiang, Xiaohui; Wallqvist, Anders; Reifman, Jaques
2008-02-27
Molecular-docking-based virtual screening is an important tool in drug discovery that is used to significantly reduce the number of possible chemical compounds to be investigated. In addition to the selection of a sound docking strategy with appropriate scoring functions, another technical challenge is to in silico screen millions of compounds in a reasonable time. To meet this challenge, it is necessary to use high performance computing (HPC) platforms and techniques. However, the development of an integrated HPC system that makes efficient use of its elements is not trivial. We have developed an application termed DOVIS that uses AutoDock (version 3) as the docking engine and runs in parallel on a Linux cluster. DOVIS can efficiently dock large numbers (millions) of small molecules (ligands) to a receptor, screening 500 to 1,000 compounds per processor per day. Furthermore, in DOVIS, the docking session is fully integrated and automated in that the inputs are specified via a graphical user interface, the calculations are fully integrated with a Linux cluster queuing system for parallel processing, and the results can be visualized and queried. DOVIS removes most of the complexities and organizational problems associated with large-scale high-throughput virtual screening, and provides a convenient and efficient solution for AutoDock users to use this software in a Linux cluster platform.
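At the quoted rate of 500 to 1,000 compounds per processor per day, the wall-clock time for a screen scales inversely with cluster size (assuming near-linear scaling, as the abstract's design implies). A back-of-the-envelope estimate:

```python
def days_to_screen(n_compounds, n_processors, rate_per_cpu_per_day):
    """Estimated wall-clock days to dock a library, assuming linear scaling."""
    return n_compounds / (n_processors * rate_per_cpu_per_day)

# Example: 2 million compounds on a 256-core Linux cluster at the lower
# quoted rate of 500 compounds/processor/day takes roughly 15.6 days.
```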
Serious games for screening pre-dementia conditions: from virtuality to reality? A pilot project.
Zucchella, Chiara; Sinforiani, Elena; Tassorelli, Cristina; Cavallini, Elena; Tost-Pardell, Daniela; Grau, Sergi; Pazzi, Stefania; Puricelli, Stefano; Bernini, Sara; Bottiroli, Sara; Vecchi, Tomaso; Sandrini, Giorgio; Nappi, Giuseppe
2014-01-01
Conventional cognitive assessment is based on a pencil-and-paper neuropsychological evaluation, which is time-consuming, expensive and requires the involvement of several professionals. Information and communication technology could be exploited to allow the development of tools that are easy to use, reduce the amount of data processing, and provide controllable test conditions. Serious games (SGs) have the potential to be new and effective tools in the management and treatment of cognitive impairments in the elderly. Moreover, by adopting SGs in 3D virtual reality settings, cognitive functions might be evaluated using tasks that simulate daily activities, increasing the "ecological validity" of the assessment. In this commentary we report our experience in the creation of the Smart Aging platform, a 3D SG- and virtual-environment-based platform for the early identification and characterization of mild cognitive impairment.
Uematsu, Takayoshi
2017-01-01
This article discusses possible supplemental breast cancer screening modalities for younger women with dense breasts from a perspective of population-based breast cancer screening program in Japan. Supplemental breast cancer screening modalities have been proposed to increase the sensitivity and detection rates of early stage breast cancer in women with dense breasts; however, there are no global guidelines that recommend the use of supplemental breast cancer screening modalities in such women. Also, no criterion standard exists for breast density assessment. Based on the current situation of breast imaging in Japan, the possible supplemental breast cancer screening modalities are ultrasonography, digital breast tomosynthesis, and breast magnetic resonance imaging. An appropriate population-based breast cancer screening program based on the balance between cost and benefit should be a high priority. Further research based on evidence-based medicine is encouraged. It is very important that the ethnicity, workforce, workflow, and resources for breast cancer screening in each country should be considered when considering supplemental breast cancer screening modalities for women with dense breasts.
2008-08-05
Research in HLA Typing, Hematopoietic Stem Cell Transplantation and Clinical Studies to Improve Outcomes. A new action item was added to the Workflow Management screen for the SCTOD (Stem Cell Therapeutic Outcomes Data) Data Form.
Adverse outcome pathways (AOP) link known population outcomes to a molecular initiating event (MIE) that can be quantified using high-throughput in vitro methods. Practical application of AOPs in chemical-specific risk assessment requires consideration of exposure and absorption,...
Editorial: Challenges for the usability of AR and VR for clinical neurosurgical procedures.
de Ribaupierre, Sandrine; Eagleson, Roy
2017-10-01
There are a number of challenges that must be faced when trying to develop AR- and VR-based neurosurgical simulators, surgical navigation platforms, and "Smart OR" systems. Simulating an operating room environment and surgical tasks in augmented and virtual reality is a challenge many are attempting to solve in order to train surgeons or help them operate. What are the needs of the surgeon, and what are the challenges encountered (human-computer interface, perception, workflow, etc.)? We discuss these tradeoffs and conclude with critical remarks.
NASA Astrophysics Data System (ADS)
Bastrakova, I.; Car, N.
2017-12-01
Geoscience Australia (GA) is recognised and respected as the national repository and steward of multiple nationally significant data collections that provide geoscience information, services and capability to the Australian Government, industry and stakeholders. Internally, this brings the challenge of managing a large volume (11 PB) of diverse and highly complex data distributed through a significant number of catalogues, applications, portals, virtual laboratories, and direct downloads from multiple locations. Externally, GA is facing constant change in government regulations (e.g. open data and archival laws), growing stakeholder demands for high-quality and near-real-time delivery of data and products, and rapid technological advances enabling dynamic data access. The traditional approach of citing static data and products cannot satisfy increasing demands for the results of scientific workflows, or items within those workflows, to be open, discoverable, trusted and reproducible. Thus, citation of data, products, code and applications is being implemented through provenance records. This approach involves capturing the provenance of many GA processes according to a standardised data model and storing it, together with metadata for the elements it references, in a searchable set of systems. This gives GA the ability to cite workflows unambiguously, as well as each item within each workflow, including inputs, outputs and many other registered components. Dynamic objects can therefore be referenced flexibly in relation to their generation process - a dataset's metadata indicates where to obtain its provenance - meaning the relevant facts of its dynamism need not be crammed into a single citation object with a single set of attributes. This allows simple citations, similar to traditional static document citations such as references in journals, to be used for complex dynamic data and other objects such as software code.
Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian
2011-08-30
Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports the use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lower the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing.
Chaput, Ludovic; Martinez-Sanz, Juan; Quiniou, Eric; Rigolet, Pascal; Saettel, Nicolas; Mouawad, Liliane
2016-01-01
In drug design, one may be confronted with the problem of finding hits for targets for which no small inhibiting molecules are known and only low-throughput experiments are available (such as ITC or NMR studies), two common difficulties encountered in a typical academic setting. Using a virtual screening strategy like docking can alleviate some of these problems and save a considerable amount of time by selecting only top-ranking molecules, but only if the method is very efficient, i.e. when a good proportion of actives are found among the 1-10% best-ranked molecules. The use of several programs (in our study, Gold, Surflex, FlexX and Glide were considered) shows a divergence of the results, which presents a difficulty in guiding the experiments. To overcome this divergence and increase the yield of the virtual screening, we created the standard deviation consensus (SDC) and variable SDC (vSDC) methods, consisting of the intersection of molecule sets from several virtual screening programs, based on the standard deviations of their ranking distributions. SDC allowed us to find hits for two new protein targets by testing only 9 and 11 small molecules from a chemical library of circa 15,000 compounds. Furthermore, vSDC, when applied to the 102 proteins of the DUD-E benchmarking database, succeeded in finding more hits than any of the four isolated programs for 13-60% of the targets. In addition, when only 10 molecules of each of the 102 chemical libraries were considered, vSDC performed better in the number of hits found, with an improvement of 6-24% over the 10 best-ranked molecules given by the individual docking programs. Graphical abstract: In drug design, for a given target and a given chemical library, the results obtained with different virtual screening programs are divergent. So how can the experimental tests be rationally guided, especially when only a small number of experiments can be made?
The variable Standard Deviation Consensus (vSDC) method was developed to answer this issue. Left panel the vSDC principle consists of intersecting molecule sets, chosen on the basis of the standard deviations of their ranking distributions, obtained from various virtual screening programs. In this study Glide, Gold, FlexX and Surflex were used and tested on the 102 targets of the DUD-E database. Right panel Comparison of the average percentage of hits found with vSDC and each of the four programs, when only 10 molecules from each of the 102 chemical libraries of the DUD-E database were considered. On average, vSDC was capable of finding 38 % of the findable hits, against 34 % for Glide, 32 % for Gold, 16 % for FlexX and 14 % for Surflex, showing that with vSDC, it was possible to overcome the unpredictability of the virtual screening results and to improve them.
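The SDC idea, intersecting the sets of molecules that each program scores unusually well relative to its own score distribution, can be sketched as follows (a simplified illustration, not the published implementation; it assumes higher scores are better and that each program's scores are not all identical):

```python
import statistics

def sdc_consensus(score_tables, threshold=1.0):
    """Standard-deviation consensus (simplified): for each program, keep
    the molecules scoring at least `threshold` standard deviations above
    that program's mean score, then intersect the per-program sets.

    score_tables: one {molecule_id: score} dict per docking program.
    """
    selected_sets = []
    for table in score_tables:
        mean = statistics.mean(table.values())
        sd = statistics.pstdev(table.values())
        selected_sets.append(
            {mol for mol, score in table.items() if (score - mean) / sd >= threshold}
        )
    return set.intersection(*selected_sets)
```

Measuring selection in standard deviations of each program's own ranking distribution, rather than by a fixed rank cutoff, is what lets the consensus absorb the divergence between programs described in the abstract.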
Can Untargeted Metabolomics Be Utilized in Drug Discovery/Development?
Caldwell, Gary W; Leo, Gregory C
2017-01-01
Untargeted metabolomics is a promising approach for reducing the significant attrition rate for discovering and developing drugs in the pharmaceutical industry. This review aims to highlight the practical decision-making value of untargeted metabolomics for the advancement of drug candidates in drug discovery/development including potentially identifying and validating novel therapeutic targets, creating alternative screening paradigms, facilitating the selection of specific and translational metabolite biomarkers, identifying metabolite signatures for the drug efficacy mechanism of action, and understanding potential drug-induced toxicity. The review provides an overview of the pharmaceutical process workflow to discover and develop new small molecule drugs followed by the metabolomics process workflow that is involved in conducting metabolomics studies. The pros and cons of the major components of the pharmaceutical and metabolomics workflows are reviewed and discussed. Finally, selected untargeted metabolomics literature examples, from primarily 2010 to 2016, are used to illustrate why, how, and where untargeted metabolomics can be integrated into the drug discovery/preclinical drug development process. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
O’Connor, Anne; Brasher, Christopher J.; Slatter, David A.; Meckelmann, Sven W.; Hawksworth, Jade I.; Allen, Stuart M.; O’Donnell, Valerie B.
2017-01-01
Accurate and high-quality curation of lipidomic datasets generated from plasma, cells, or tissues is becoming essential for cell biology investigations and biomarker discovery for personalized medicine. However, a major challenge lies in removing artifacts otherwise mistakenly interpreted as real lipids from large mass spectrometry files (>60 K features), while retaining genuine ions in the dataset. This requires powerful informatics tools; however, available workflows have not been tailored specifically for lipidomics, particularly discovery research. We designed LipidFinder, an open-source Python workflow. An algorithm is included that optimizes analysis based on users’ own data, and outputs are screened against online databases and categorized into LIPID MAPS classes. LipidFinder outperformed three widely used metabolomics packages using data from human platelets. We show a family of three 12-hydroxyeicosatetraenoic acid phosphoinositides (16:0/, 18:1/, 18:0/12-HETE-PI) generated by thrombin-activated platelets, indicating crosstalk between eicosanoid and phosphoinositide pathways in human cells. The software is available on GitHub (https://github.com/cjbrasher/LipidFinder), with full user guides. PMID:28405621
Facilitating NASA Earth Science Data Processing Using Nebula Cloud Computing
NASA Astrophysics Data System (ADS)
Chen, A.; Pham, L.; Kempler, S.; Theobald, M.; Esfandiari, A.; Campino, J.; Vollmer, B.; Lynnes, C.
2011-12-01
Cloud Computing technology has been used to offer high-performance and low-cost computing and storage resources for both scientific problems and business services. Several cloud computing services have been implemented in the commercial arena, e.g. Amazon's EC2 & S3, Microsoft's Azure, and Google App Engine. There are also research and application programs being launched in academia and government to utilize Cloud Computing. NASA launched the Nebula Cloud Computing platform in 2008, an Infrastructure as a Service (IaaS) that delivers on-demand distributed virtual computers. Nebula users can receive required computing resources as a fully outsourced service. The NASA Goddard Earth Science Data and Information Service Center (GES DISC) migrated several of its applications to the Nebula as a proof of concept, including: a) the Simple, Scalable, Script-based Science Processor for Measurements (S4PM) for processing scientific data; b) the Atmospheric Infrared Sounder (AIRS) data processing workflow for processing AIRS raw data; and c) the GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (GIOVANNI) for online access to, analysis, and visualization of Earth science data. This work aims to evaluate the practicability and adaptability of the Nebula. The initial work focused on the AIRS data processing workflow, which consists of a series of algorithms used to process raw AIRS level 0 data and output AIRS level 2 geophysical retrievals. Migrating the entire workflow to the Nebula platform is challenging, but practicable. After installing several supporting libraries and the processing code itself, the workflow is able to process AIRS data in a similar fashion to its current (non-cloud) configuration. We compared the performance of processing 2 days of AIRS level 0 data through level 2 using a Nebula virtual computer and a local Linux computer.
The results show that Nebula has significantly better performance than the local machine. Much of the difference was due to newer equipment in the Nebula than the legacy computer, which is suggestive of a potential economic advantage beyond elastic power, i.e., access to up-to-date hardware vs. legacy hardware that must be maintained past its prime to amortize the cost. In addition to a trade study of advantages and challenges of porting complex processing to the cloud, a tutorial was developed to enable further progress in utilizing the Nebula for Earth Science applications and to better understand the potential for Cloud Computing in further data- and computing-intensive Earth Science research. In particular, highly bursty computing such as that experienced in the user-demand-driven Giovanni system may become more tractable in a Cloud environment. Our future work will continue to focus on migrating more GES DISC applications and instances, e.g. Giovanni instances, to the Nebula platform and on bringing mature migrated applications into operation on the Nebula.
Partnership For Edge Physics Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parashar, Manish
In this effort, we will extend our prior work as part of CPES (i.e., DART and DataSpaces) to support in-situ tight coupling between application codes that exploits data locality and core-level parallelism to maximize on-chip data exchange and reuse. This will be accomplished by mapping coupled simulations so that the data exchanges are more localized within the nodes. Coupled simulation workflows can more effectively utilize the resources available on emerging HEC platforms if they can be mapped and executed to exploit data locality as well as the communication patterns between application components. Scheduling and running such workflows requires an extended framework that provides a unified hybrid abstraction to enable coordination and data sharing across computation tasks running on heterogeneous multi-core-based systems, together with a data-locality-based dynamic task scheduling approach to increase on-chip or intra-node data exchanges and in-situ execution. Our prior DART and DataSpaces work provided a simple virtual shared-space abstraction, hosted at the staging nodes, to support application coordination, data sharing and active data processing services. This effort will extend that abstraction, transparently managing the low-level operations associated with inter-application data exchange, such as data redistribution, and will enable running coupled simulation workflows on multi-core computing platforms.
Neuroimaging Study Designs, Computational Analyses and Data Provenance Using the LONI Pipeline
Dinov, Ivo; Lozev, Kamen; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Pierce, Jonathan; Zamanyan, Alen; Chakrapani, Shruthi; Van Horn, John; Parker, D. Stott; Magsipoc, Rico; Leung, Kelvin; Gutman, Boris; Woods, Roger; Toga, Arthur
2010-01-01
Modern computational neuroscience employs diverse software tools and multidisciplinary expertise to analyze heterogeneous brain data. The classical problems of gathering meaningful data, fitting specific models, and discovering appropriate analysis and visualization tools give way to a new class of computational challenges—management of large and incongruous data, integration and interoperability of computational resources, and data provenance. We designed, implemented and validated a new paradigm for addressing these challenges in the neuroimaging field. Our solution is based on the LONI Pipeline environment [3], [4], a graphical workflow environment for constructing and executing complex data processing protocols. We developed study-design, database and visual language programming functionalities within the LONI Pipeline that enable the construction of complete, elaborate and robust graphical workflows for analyzing neuroimaging and other data. These workflows facilitate open sharing and communication of data and metadata, concrete processing protocols, result validation, and study replication among different investigators and research groups. The LONI Pipeline features include distributed grid-enabled infrastructure, virtualized execution environment, efficient integration, data provenance, validation and distribution of new computational tools, automated data format conversion, and an intuitive graphical user interface. We demonstrate the new LONI Pipeline features using large scale neuroimaging studies based on data from the International Consortium for Brain Mapping [5] and the Alzheimer's Disease Neuroimaging Initiative [6]. User guides, forums, instructions and downloads of the LONI Pipeline environment are available at http://pipeline.loni.ucla.edu. PMID:20927408
IQ-Station: A Low Cost Portable Immersive Environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eric Whiting; Patrick O'Leary; William Sherman
2010-11-01
The emergence of inexpensive 3D TVs, affordable input and rendering hardware and open-source software has created a yeasty atmosphere for the development of low-cost immersive environments (IE). A low cost IE system, or IQ-station, fashioned from commercial off the shelf technology (COTS), coupled with a targeted immersive application, can be a viable laboratory instrument for enhancing scientific workflow for exploration and analysis. The use of an IQ-station in a laboratory setting also has the potential of quickening the adoption of a more sophisticated immersive environment as a critical enabler in modern scientific and engineering workflows. Prior work in immersive environments generally required either a head mounted display (HMD) system or a large projector-based implementation, both of which have limitations in terms of cost, usability, or space requirements. The solution presented here provides an alternative platform providing a reasonable immersive experience that addresses those limitations. Our work brings together the needed hardware and software to create a fully integrated immersive display and interface system that can be readily deployed in laboratories and common workspaces. By doing so, it is now feasible for immersive technologies to be included in researchers’ day-to-day workflows. The IQ-Station sets the stage for much wider adoption of immersive environments outside the small communities of virtual reality centers.
Szilágyi, Bence; Skok, Žiga; Rácz, Anita; Frlan, Rok; Ferenczy, György G; Ilaš, Janez; Keserű, György M
2018-06-01
D-Amino acid oxidase (DAAO) inhibitors are typically small polar compounds with often suboptimal pharmacokinetic properties. Features of the native binding site limit the operational freedom of further medicinal chemistry efforts. We therefore initiated a structure based virtual screening campaign based on the X-ray structures of DAAO complexes where larger ligands shifted the loop (lid opening) covering the native binding site. The virtual screening of our in-house collection followed by the in vitro test of the best ranked compounds led to the identification of a new scaffold with micromolar IC50. Subsequent SAR explorations enabled us to identify submicromolar inhibitors. Docking studies supported by in vitro activity measurements suggest that compounds bind to the active site with a salt-bridge characteristic to DAAO inhibitor binding. In addition, displacement of and interaction with the loop covering the active site contributes significantly to the activity of the most potent compounds. Copyright © 2018 Elsevier Ltd. All rights reserved.
Lopes, Julio Cesar Dias; Dos Santos, Fábio Mendes; Martins-José, Andrelly; Augustyns, Koen; De Winter, Hans
2017-01-01
A new metric for the evaluation of model performance in the field of virtual screening and quantitative structure-activity relationship applications is described. This metric has been termed the power metric and is defined as the fraction of the true positive rate divided by the sum of the true positive and false positive rates, for a given cutoff threshold. The performance of this metric is compared with alternative metrics such as the enrichment factor, the relative enrichment factor, the receiver operating curve enrichment factor, the correct classification rate, Matthews correlation coefficient and Cohen's kappa coefficient. The performance of this new metric is found to be quite robust with respect to variations in the applied cutoff threshold and in the ratio of the number of active compounds to the total number of compounds, while remaining sensitive to variations in model quality. It possesses the correct characteristics for its application in early-recognition virtual screening problems.
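Following the definition quoted above, PM = TPR / (TPR + FPR) at a given cutoff, the metric can be sketched as follows; the function name and the top-fraction cutoff convention are assumptions for illustration.

```python
def power_metric(scores, labels, cutoff_fraction=0.02):
    """Power metric PM = TPR / (TPR + FPR) for the top cutoff_fraction of
    compounds ranked by descending score; labels are 1 (active) / 0 (decoy)."""
    ranked = sorted(zip(scores, labels), key=lambda p: p[0], reverse=True)
    n_top = max(1, round(cutoff_fraction * len(ranked)))
    tp = sum(label for _, label in ranked[:n_top])  # actives in the top slice
    fp = n_top - tp                                 # decoys in the top slice
    n_actives = sum(labels)
    n_decoys = len(labels) - n_actives
    tpr = tp / n_actives
    fpr = fp / n_decoys
    return tpr / (tpr + fpr) if (tpr + fpr) > 0 else 0.0
```

A perfect early ranking (all top slots occupied by actives) gives PM = 1, and PM degrades smoothly as decoys displace actives in the top slice.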
NASA Astrophysics Data System (ADS)
Fu, Ying; Sun, Yi-Na; Yi, Ke-Han; Li, Ming-Qiang; Cao, Hai-Feng; Li, Jia-Zhong; Ye, Fei
2018-02-01
4-Hydroxyphenylpyruvate dioxygenase (EC 1.13.11.27, HPPD) is a promising target for new bleaching herbicides. Therefore, in silico structure-based virtual screening was performed in order to speed up the identification of promising HPPD inhibitors. In this study, an integrated virtual screening protocol combining a 3D-pharmacophore model, molecular docking and molecular dynamics (MD) simulation was established to find novel HPPD inhibitors in four commercial databases. The 3D-pharmacophore model Hypo1 was applied to efficiently narrow down potential hits. The hit compounds were subsequently submitted to molecular docking studies, which identified four compounds as potent inhibitors acting through Fe(II) coordination and interactions with Phe360, Phe403 and Phe398. MD results demonstrated that the nonpolar term made a large contribution to the binding affinity of compound 3881, which showed an IC50 of 2.49 µM against AtHPPD in vitro. The results provide useful information for developing novel HPPD inhibitors, leading to a further understanding of the interaction mechanism of HPPD inhibitors.
Docking and scoring with ICM: the benchmarking results and strategies for improvement
Neves, Marco A. C.; Totrov, Maxim; Abagyan, Ruben
2012-01-01
Flexible docking and scoring using the Internal Coordinate Mechanics software (ICM) was benchmarked for ligand binding mode prediction against the 85 co-crystal structures in the modified Astex data set. The ICM virtual ligand screening was tested against the 40 DUD target benchmarks and 11-target WOMBAT sets. The self-docking accuracy was evaluated for the top 1 and top 3 scoring poses at each ligand binding site, with near native conformations below 2 Å RMSD found in 91% and 95% of the predictions, respectively. Virtual ligand screening using single rigid pocket conformations yielded a median area under the ROC curve of 69.4, with 22.0% of true positives recovered at a 2% false positive rate. Significant improvements, up to ROC AUC = 82.2 and ROC(2%) = 45.2, were achieved following our best practices for flexible pocket refinement and out-of-pocket binding rescore. The virtual screening can be further improved by considering multiple conformations of the target. PMID:22569591
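The ROC(2%) figure quoted above, the fraction of actives recovered when the false positive rate reaches 2%, can be computed as in this sketch; the function name and the tie-handling convention are assumptions.

```python
def tpr_at_fpr(scores, labels, fpr_target=0.02):
    """Walk the ranked list (descending score) and return the fraction of
    actives (label 1) recovered when the decoy false-positive rate reaches
    fpr_target."""
    ranked = sorted(zip(scores, labels), key=lambda p: p[0], reverse=True)
    n_actives = sum(labels)
    n_decoys = len(labels) - n_actives
    tp = fp = 0
    for _, label in ranked:
        if label:
            tp += 1
        else:
            fp += 1
            if fp / n_decoys >= fpr_target:
                break
    return tp / n_actives
```

Unlike the full ROC AUC, this early-recognition number only rewards actives ranked ahead of the first few percent of decoys, which matches how screening hit lists are actually used.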
Hou, Xuben; Du, Jintong; Liu, Renshuai; Zhou, Yi; Li, Minyong; Xu, Wenfang; Fang, Hao
2015-04-27
As key regulators of epigenetic regulation, human histone deacetylases (HDACs) have been identified as drug targets for the treatment of several cancers. The proper recognition of zinc-binding groups (ZBGs) will help improve the accuracy of virtual screening for novel HDAC inhibitors. Here, we developed a high-specificity ZBG-based pharmacophore model for HDAC8 inhibitors by incorporating customized ZBG features. Subsequently, pharmacophore-based virtual screening led to the discovery of three novel HDAC8 inhibitors with low micromole IC50 values (1.8-1.9 μM). Further studies demonstrated that compound H8-A5 was selective for HDAC8 over HDAC 1/4 and showed antiproliferation activity in MDA-MB-231 cancer cells. Molecular docking and molecular dynamic studies suggested a possible binding mode for H8-A5, which provides a good starting point for the development of HDAC8 inhibitors in cancer treatment.
Giordano, Assunta; Forte, Giovanni; Massimo, Luigia; Riccio, Raffaele; Bifulco, Giuseppe; Di Micco, Simone
2018-04-12
Inverse Virtual Screening (IVS) is a docking-based approach aimed at evaluating the ability of a single compound to interact with a library of proteins. For the first time, we applied this methodology to a library of synthetic compounds, which proved to be inactive towards the target they were initially designed for. Trifluoromethyl-benzenesulfonamides 3-21 were repositioned by means of IVS, identifying new lead compounds (14-16, 19 and 20) for the inhibition of erbB4 in the low micromolar range. Among these, compound 20 exhibited an interesting IC50 value in MCF7 cell lines, thus validating IVS in lead repurposing. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
Statistical analysis of EGFR structures' performance in virtual screening
NASA Astrophysics Data System (ADS)
Li, Yan; Li, Xiang; Dong, Zigang
2015-11-01
In this work the ability of EGFR structures to distinguish true inhibitors from decoys in docking and MM-PBSA is assessed by statistical procedures. The docking performance depends critically on the receptor conformation and bound state. The enrichment of known inhibitors is well correlated with the difference between EGFR structures rather than the bound-ligand property. The optimal structures for virtual screening can be selected based purely on the complex information. And the mixed combination of distinct EGFR conformations is recommended for ensemble docking. In MM-PBSA, a variety of EGFR structures have identically good performance in the scoring and ranking of known inhibitors, indicating that the choice of the receptor structure has little effect on the screening.
Discovery of Novel ROCK1 Inhibitors via Integrated Virtual Screening Strategy and Bioassays
Shen, Mingyun; Tian, Sheng; Pan, Peichen; Sun, Huiyong; Li, Dan; Li, Youyong; Zhou, Hefeng; Li, Chuwen; Lee, Simon Ming-Yuen; Hou, Tingjun
2015-01-01
Rho-associated kinases (ROCKs) have been regarded as promising drug targets for the treatment of cardiovascular diseases, nervous system diseases and cancers. In this study, a novel integrated virtual screening protocol by combining molecular docking and pharmacophore mapping based on multiple ROCK1 crystal structures was utilized to screen the ChemBridge database for discovering potential inhibitors of ROCK1. Among the 38 tested compounds, seven of them exhibited significant inhibitory activities of ROCK1 (IC50 < 10 μM) and the most potent one (compound TS-f22) with the novel scaffold of 4-Phenyl-1H-pyrrolo [2,3-b] pyridine had an IC50 of 480 nM. Then, the structure-activity relationships of 41 analogues of TS-f22 were examined. Two potent inhibitors were proven effective in inhibiting the phosphorylation of the downstream target in the ROCK signaling pathway in vitro and protecting atorvastatin-induced cerebral hemorrhage in vivo. The high hit rate (28.95%) suggested that the integrated virtual screening strategy was quite reliable and could be used as a powerful tool for identifying promising active compounds for targets of interest. PMID:26568382
Systematic Exploitation of Multiple Receptor Conformations for Virtual Ligand Screening
Bottegoni, Giovanni; Rocchia, Walter; Rueda, Manuel; Abagyan, Ruben; Cavalli, Andrea
2011-01-01
The role of virtual ligand screening in modern drug discovery is to mine large chemical collections and to prioritize for experimental testing a comparatively small and diverse set of compounds with expected activity against a target. Several studies have pointed out that the performance of virtual ligand screening can be improved by taking into account receptor flexibility. Here, we systematically assess how multiple crystallographic receptor conformations, a powerful way of discretely representing protein plasticity, can be exploited in screening protocols to separate binders from non-binders. Our analyses encompass 36 targets of pharmaceutical relevance and are based on actual molecules with reported activity against those targets. The results suggest that an ensemble receptor-based protocol displays a stronger discriminating power between active and inactive molecules as compared to its standard single rigid receptor counterpart. Moreover, such a protocol can be engineered not only to enrich a higher number of active compounds, but also to enhance their chemical diversity. Finally, some clear indications can be gathered on how to select a subset of receptor conformations that is most likely to provide the best performance in a real life scenario. PMID:21625529
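A common way to exploit an ensemble of receptor conformations, assumed here for illustration (the paper's exact aggregation scheme may differ), is to dock each ligand against every conformation and keep its best score:

```python
def ensemble_best_scores(docking_scores):
    """docking_scores: {conformation: {ligand: score}}, lower = better binding.
    Returns each ligand's best (lowest) score over the receptor ensemble."""
    best = {}
    for conf_scores in docking_scores.values():
        for ligand, score in conf_scores.items():
            if ligand not in best or score < best[ligand]:
                best[ligand] = score
    return best
```

Ranking ligands by these ensemble scores lets a compound that fits any one conformation well rank highly, which is one route to the stronger active/inactive discrimination the abstract reports.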
Roy, Kunal; Mitra, Indrani
2011-07-01
Quantitative structure-activity relationships (QSARs) have important applications in drug discovery research, environmental fate modeling, property prediction, etc. Validation has been recognized as a very important step for QSAR model development. As one of the important objectives of QSAR modeling is to predict activity/property/toxicity of new chemicals falling within the domain of applicability of the developed models and QSARs are being used for regulatory decisions, checking reliability of the models and confidence of their predictions is a very important aspect, which can be judged during the validation process. One prime application of a statistically significant QSAR model is virtual screening for molecules with improved potency based on the pharmacophoric features and the descriptors appearing in the QSAR model. Validated QSAR models may also be utilized for design of focused libraries which may be subsequently screened for the selection of hits. The present review focuses on various metrics used for validation of predictive QSAR models together with an overview of the application of QSAR models in the fields of virtual screening and focused library design for diverse series of compounds with citation of some recent examples.
Sastry, Madhavi; Lowrie, Jeffrey F; Dixon, Steven L; Sherman, Woody
2010-05-24
A systematic virtual screening study on 11 pharmaceutically relevant targets has been conducted to investigate the interrelation between 8 two-dimensional (2D) fingerprinting methods, 13 atom-typing schemes, 13 bit scaling rules, and 12 similarity metrics using the new cheminformatics package Canvas. In total, 157 872 virtual screens were performed to assess the ability of each combination of parameters to identify actives in a database screen. In general, fingerprint methods, such as MOLPRINT2D, Radial, and Dendritic that encode information about local environment beyond simple linear paths outperformed other fingerprint methods. Atom-typing schemes with more specific information, such as Daylight, Mol2, and Carhart were generally superior to more generic atom-typing schemes. Enrichment factors across all targets were improved considerably with the best settings, although no single set of parameters performed optimally on all targets. The size of the addressable bit space for the fingerprints was also explored, and it was found to have a substantial impact on enrichments. Small bit spaces, such as 1024, resulted in many collisions and in a significant degradation in enrichments compared to larger bit spaces that avoid collisions.
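The impact of a small addressable bit space can be illustrated with folded set fingerprints; the integer feature identifiers below are hypothetical stand-ins for hashed substructure features.

```python
def fold(feature_ids, n_bits):
    """Fold raw hashed feature identifiers into an n_bits-wide fingerprint;
    with a small n_bits, distinct features collide on the same bit."""
    return {f % n_bits for f in feature_ids}

def tanimoto(a, b):
    """Tanimoto similarity of two bit sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# At 1024 bits, features 17 and 1041 collide (1041 % 1024 == 17), creating
# spurious similarity between unrelated molecules; in a larger bit space the
# collision, and the spurious similarity, disappear.
```

This is the degradation the study observes for 1024-bit spaces: collisions inflate similarity between unrelated molecules and so depress enrichment.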
NASA Astrophysics Data System (ADS)
Kalid, Ori; Toledo Warshaviak, Dora; Shechter, Sharon; Sherman, Woody; Shacham, Sharon
2012-11-01
We present the Consensus Induced Fit Docking (cIFD) approach for adapting a protein binding site to accommodate multiple diverse ligands for virtual screening. This novel approach results in a single binding site structure that can bind diverse chemotypes and is thus highly useful for efficient structure-based virtual screening. We first describe the cIFD method and its validation on three targets that were previously shown to be challenging for docking programs (COX-2, estrogen receptor, and HIV reverse transcriptase). We then demonstrate the application of cIFD to the challenging discovery of irreversible Crm1 inhibitors. We report the identification of 33 novel Crm1 inhibitors, which resulted from the testing of 402 purchased compounds selected from a screening set containing 261,680 compounds. This corresponds to a hit rate of 8.2 %. The novel Crm1 inhibitors reveal diverse chemical structures, validating the utility of the cIFD method in a real-world drug discovery project. This approach offers a pragmatic way to implicitly account for protein flexibility without the additional computational costs of ensemble docking or including full protein flexibility during virtual screening.
A ranking method for the concurrent learning of compounds with various activity profiles.
Dörr, Alexander; Rosenbaum, Lars; Zell, Andreas
2015-01-01
In this study, we present an SVM-based ranking algorithm for the concurrent learning of compounds with different activity profiles and their varying prioritization. To this end, a specific labeling of each compound was elaborated in order to infer virtual screening models against multiple targets. We compared the method with several state-of-the-art SVM classification techniques that are capable of inferring multi-target screening models on three chemical data sets (cytochrome P450s, dehydrogenases, and a trypsin-like protease data set) containing three different biological targets each. The experiments show that ranking-based algorithms achieve increased performance in single- and multi-target virtual screening. Moreover, compounds that do not completely fulfill the desired activity profile are still ranked higher than decoys or compounds with an entirely undesired profile, compared to other multi-target SVM methods. SVM-based ranking methods constitute a valuable approach for virtual screening in multi-target drug design. The utilization of such methods is most helpful when dealing with compounds with various activity profiles and when finding many ligands with an already perfectly matching activity profile cannot be expected.
NASA Astrophysics Data System (ADS)
Annapoorani, Angusamy; Umamageswaran, Venugopal; Parameswari, Radhakrishnan; Pandian, Shunmugiah Karutha; Ravi, Arumugam Veera
2012-09-01
Drugs have been discovered in the past mainly either by identification of active components from traditional remedies or by unpredicted discovery. A key motivation for the study of structure based virtual screening is the exploitation of such information to design targeted drugs. In this study, structure based virtual screening was used in search for putative quorum sensing inhibitors (QSI) of Pseudomonas aeruginosa. The virtual screening programme Glide version 5.5 was applied to screen 1,920 natural compounds/drugs against LasR and RhlR receptor proteins of P. aeruginosa. Based on the results of in silico docking analysis, five top ranking compounds namely rosmarinic acid, naringin, chlorogenic acid, morin and mangiferin were subjected to in vitro bioassays against laboratory strain PAO1 and two more antibiotic resistant clinical isolates, P. aeruginosa AS1 (GU447237) and P. aeruginosa AS2 (GU447238). Among the five compounds studied, except mangiferin other four compounds showed significant inhibition in the production of protease, elastase and hemolysin. Further, all the five compounds potentially inhibited the biofilm related behaviours. This interaction study provided promising ligands to inhibit the quorum sensing (QS) mediated virulence factors production in P. aeruginosa.
NASA Astrophysics Data System (ADS)
Mulatsari, E.; Mumpuni, E.; Herfian, A.
2017-05-01
Curcumin is a yellow-colored phenolic compound contained in Curcuma longa. Curcumin is known to have biological activities as an anti-inflammatory, antiviral, antioxidant, and anti-infective agent [1]. Curcumin analogue compounds have been synthesized, and some of them show curcumin-like biological activity. In this research, virtual screening of curcumin analogue compounds was conducted. The purpose of this research was to determine the activity of these compounds as selective cyclooxygenase-2 (COX-2) inhibitors in silico. Binding modes were elucidated for representative active and inactive compounds to examine the interactions of amino acids in the binding site. This research used AYO_COX2_V.1.1, a structure-based virtual screening (SBVS) protocol validated by Mumpuni et al., 2014 [2]. The AYO_COX2_V.1.1 protocol uses a variety of integrated applications such as SPORES, PLANTS, BKchem, OpenBabel and PyMOL. Virtual screening of 49 curcumin analogue compounds identified 8 compounds interacting with 4 key amino acid residues (GLY340, ILE503, PHE343, and PHE367) that were considered active as COX-2 inhibitors.
NASA Astrophysics Data System (ADS)
Tsai, Tsung-Ying; Chang, Kai-Wei; Chen, Calvin Yu-Chian
2011-06-01
Rapidly advancing research on traditional Chinese medicine (TCM) has greatly intrigued pharmaceutical industries worldwide. To take the initiative in the next generation of drug development, we constructed a cloud-computing system for TCM intelligent screening (iScreen) based on TCM Database@Taiwan. iScreen is a compact web server for TCM docking followed by customized de novo drug design. We further implemented a protein preparation tool that both extracts the protein of interest from a raw input file and estimates the size of the ligand binding site. In addition, iScreen is designed with a user-friendly graphical interface for users who have less experience with command line systems. For customized docking, multiple docking services, including standard, in-water, pH environment, and flexible docking modes, are implemented. Users can download the first 200 TCM compounds of the best docking results. For TCM de novo drug design, iScreen provides multiple molecular descriptors of a user's interest. iScreen is the world's first web server that employs the world's largest TCM database for virtual screening and de novo drug design. We believe our web server can lead TCM research into a new era of drug development. The TCM docking and screening server is available at http://iScreen.cmu.edu.tw/.
NASA Astrophysics Data System (ADS)
Kersten, T. P.; Büyüksalih, G.; Tschirschwitz, F.; Kan, T.; Deggim, S.; Kaya, Y.; Baskaraca, A. P.
2017-05-01
Recent advances in contemporary Virtual Reality (VR) technologies are going to have a significant impact on everyday life. Through VR it is possible to virtually explore a computer-generated environment as a different reality, and to immerse oneself in the past or in a virtual museum without leaving the current real-life situation. For the ultimate VR experience, the user should see only the virtual world. Currently, the user must wear a VR headset which fits around the head and over the eyes to visually separate themselves from the physical world. Via the headset, images are fed to the eyes through two small lenses. Cultural heritage monuments are ideally suited both for thorough multi-dimensional geometric documentation and for realistic interactive visualisation in immersive VR applications. Additionally, the game industry offers tools for interactive visualisation of objects to motivate users to virtually visit objects and places. In this paper the generation of a virtual 3D model of the Selimiye mosque in the city of Edirne, Turkey, and its processing for data integration into the game engine Unity is presented. The project has been carried out as a co-operation between BİMTAŞ, a company of the Greater Municipality of Istanbul, Turkey, and the Photogrammetry & Laser Scanning Lab of the HafenCity University Hamburg, Germany, to demonstrate an immersive and interactive visualisation using the new VR system HTC Vive. The workflow from data acquisition to VR visualisation, including the necessary programming for navigation, is described. Furthermore, the possible use (including simultaneous multi-user environments) of such a VR visualisation for a CH monument is discussed in this contribution.
Using Hierarchical Virtual Screening To Combat Drug Resistance of the HIV-1 Protease.
Li, Nan; Ainsworth, Richard I; Ding, Bo; Hou, Tingjun; Wang, Wei
2015-07-27
Human immunodeficiency virus (HIV) protease inhibitors (PIs) are important components of highly active anti-retroviral therapy (HAART) that block the catalytic site of HIV protease, thus preventing maturation of the HIV virion. However, with two decades of PI prescriptions in clinical practice, drug-resistant HIV mutants have now been found for all of the PI drugs. Therefore, the continuous development of new PI drugs is crucial both to combat the existing drug-resistant HIV strains and to provide treatments for future patients. Here we propose an HIV PI drug design strategy to select candidate PIs with binding energy distributions dominated by interactions with conserved protease residues in both wild-type and various drug-resistant mutants. On the basis of this strategy, we have constructed a virtual screening pipeline including combinatorial library construction, combinatorial docking, MM/GBSA-based rescoring, and reranking on the basis of the binding energy distribution. We have tested our strategy on lopinavir by modifying its two functional groups. From an initial 751,689 candidate molecules, 18 candidate inhibitors were selected using the pipeline for experimental validation. IC50 measurements and drug resistance predictions successfully identified two ligands with both HIV protease inhibitor activity and an improved drug resistance profile on 2382 HIV mutants. This study provides a proof of concept for the integration of MM/GBSA energy analysis and drug resistance information at the stage of virtual screening and sheds light on future HIV drug design and the use of virtual screening to combat drug resistance.
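The reranking step described above, i.e. favouring ligands whose binding energy comes mostly from conserved residues, can be sketched roughly as follows (a toy illustration, not the authors' pipeline; the per-residue MM/GBSA energy decomposition, the ligand IDs and the residue labels are all assumed inputs):

```python
def conserved_energy_fraction(per_residue_energy, conserved):
    """Fraction of the total interaction energy (negative = favourable)
    contributed by conserved protease residues."""
    total = sum(per_residue_energy.values())
    if total == 0:
        return 0.0
    conserved_part = sum(e for res, e in per_residue_energy.items()
                         if res in conserved)
    return conserved_part / total

def rerank_candidates(candidates, conserved):
    """Order candidate ligand IDs so that those whose binding energy
    distribution is dominated by conserved residues come first.

    candidates: {ligand_id: {residue: energy}} from an assumed
    per-residue MM/GBSA decomposition."""
    return sorted(candidates,
                  key=lambda lig: conserved_energy_fraction(
                      candidates[lig], conserved),
                  reverse=True)
```

A ligand deriving 80% of its favourable energy from conserved residues would thus outrank one deriving only 20%, regardless of their absolute binding affinities.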
Gallicchio, Emilio; Deng, Nanjie; He, Peng; Wickstrom, Lauren; Perryman, Alexander L.; Santiago, Daniel N.; Forli, Stefano; Olson, Arthur J.; Levy, Ronald M.
2014-01-01
As part of the SAMPL4 blind challenge, filtered AutoDock Vina ligand docking predictions and large-scale binding energy distribution analysis method (BEDAM) binding free energy calculations have been applied to the virtual screening of a focused library of candidate binders to the LEDGF site of the HIV integrase protein. The computational protocol leveraged docking and high-level atomistic models to improve enrichment. The enrichment factor of our blind predictions ranked best among all of the computational submissions, and second best overall. This work represents, to our knowledge, the first example of the application of an all-atom physics-based binding free energy model to large-scale virtual screening. A total of 285 parallel Hamiltonian replica exchange molecular dynamics absolute protein-ligand binding free energy simulations were conducted starting from docked poses. The setup of the simulations was fully automated, and the calculations were distributed on multiple computing resources and completed in a 6-week period. The accuracy of the docked poses and the inclusion of intramolecular strain and entropic losses in the binding free energy estimates were the major factors behind the success of the method. Lack of sufficient time and computing resources to investigate additional protonation states of the ligands was a major cause of mispredictions. The experiment demonstrated the applicability of binding free energy modeling to improve hit rates in challenging virtual screening of focused ligand libraries during lead optimization. PMID:24504704
Spyrakis, Francesca; Benedetti, Paolo; Decherchi, Sergio; Rocchia, Walter; Cavalli, Andrea; Alcaro, Stefano; Ortuso, Francesco; Baroni, Massimo; Cruciani, Gabriele
2015-10-26
The importance of taking into account protein flexibility in drug design and virtual ligand screening (VS) has been widely debated in the literature, and molecular dynamics (MD) has been recognized as one of the most powerful tools for investigating intrinsic protein dynamics. Nevertheless, deciphering the amount of information hidden in MD simulations and recognizing a significant minimal set of states to be used in virtual screening experiments can be quite complicated. Here we present an integrated MD-FLAP (molecular dynamics-fingerprints for ligand and proteins) approach, comprising a pipeline of molecular dynamics, clustering and linear discriminant analysis, for enhancing accuracy and efficacy in VS campaigns. We first extracted a limited number of representative structures from tens of nanoseconds of MD trajectories by means of the k-medoids clustering algorithm as implemented in the BiKi Life Science Suite ( http://www.bikitech.com [accessed July 21, 2015]). Then, instead of applying arbitrary selection criteria, that is, RMSD, pharmacophore properties, or enrichment performances, we allowed the linear discriminant analysis algorithm implemented in FLAP ( http://www.moldiscovery.com [accessed July 21, 2015]) to automatically choose the best performing conformational states among medoids and X-ray structures. Retrospective virtual screenings confirmed that ensemble receptor protocols outperform single rigid receptor approaches, proved that computationally generated conformations comprise the same quantity/quality of information included in X-ray structures, and pointed to the MD-FLAP approach as a valuable tool for improving VS performances.
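The first stage of the MD-FLAP pipeline, extracting a small set of representative snapshots from an MD trajectory by k-medoids clustering, can be sketched as below (a self-contained toy version; the actual work used the implementation in the BiKi Life Science Suite, and the distance matrix, e.g. pairwise RMSD between frames, is assumed precomputed):

```python
import random

def k_medoids(dist, k, iters=100, seed=0):
    """Plain k-medoids on a precomputed distance matrix (e.g. pairwise
    RMSD between MD snapshots); returns the indices of the k medoid
    frames, which serve as the representative conformations."""
    rng = random.Random(seed)
    n = len(dist)
    medoids = rng.sample(range(n), k)
    for _ in range(iters):
        # assign every frame to its nearest medoid
        clusters = {m: [] for m in medoids}
        for i in range(n):
            clusters[min(medoids, key=lambda m: dist[i][m])].append(i)
        # move each medoid to the member minimising intra-cluster distance
        new = [min(members, key=lambda c: sum(dist[c][j] for j in members))
               for members in clusters.values() if members]
        if set(new) == set(medoids):
            break
        medoids = new
    return sorted(medoids)
```

The returned medoid frames would then be handed to the discriminant-analysis step for automatic selection of the best-performing conformations.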
Kaserer, Teresa; Temml, Veronika; Kutil, Zsofia; Vanek, Tomas; Landa, Premysl; Schuster, Daniela
2015-01-01
Computational methods can be applied in drug development for the identification of novel lead candidates, but also for the prediction of pharmacokinetic properties and potential adverse effects, thereby helping to prioritize and identify the most promising compounds. In principle, several techniques are available for this purpose; however, which one is the most suitable for a specific research objective still requires further investigation. Within this study, the performance of several programs, representing common virtual screening methods, was compared in a prospective manner. First, we selected top-ranked virtual screening hits from the three methods pharmacophore modeling, shape-based modeling, and docking. For comparison, these hits were then additionally predicted by external pharmacophore- and 2D similarity-based bioactivity profiling tools. Subsequently, the biological activities of the selected hits were assessed in vitro, which allowed for evaluating and comparing the prospective performance of the applied tools. Although all methods performed well, considerable differences were observed concerning hit rates, true positive and true negative hits, and hitlist composition. Our results suggest that a rational selection of the applied method represents a powerful strategy to maximize the success of a research project, tightly linked to its aims. We employed cyclooxygenase as an application example; however, the focus of this study lay in highlighting the differences in virtual screening tool performance and not in the identification of novel COX inhibitors. Copyright © 2015 The Authors. Published by Elsevier Masson SAS. All rights reserved.
Song, Ming-Ke; Liu, Hong; Jiang, Hua-Liang; Yue, Jian-Min; Hu, Guo-Yuan
2006-02-15
14-Benzoyltalatisamine is a potent and selective blocker of the delayed rectifier K+ channel that was identified in a computational virtual screening study. The compound was found to block the K+ channel from the extracellular side. However, it is unclear whether 14-benzoyltalatisamine shares the same block mechanism with tetraethylammonium (TEA). In order to elucidate how the hit compound found by the virtual screening interacts with the outer vestibule of the K+ channel, the effects of 14-benzoyltalatisamine and TEA on the delayed rectifier K+ current of rat dissociated hippocampal neurons were compared using whole-cell voltage-clamp recording. External application of 14-benzoyltalatisamine and TEA reversibly inhibited the current with IC50 values of 10.1+/-2.2 microM and 1.05+/-0.21 mM, respectively. 14-Benzoyltalatisamine exerted voltage-dependent inhibition, markedly accelerated the decay of the current, and caused a significant hyperpolarizing shift of the steady-state activation curve, whereas TEA caused voltage-independent inhibition without affecting the kinetic parameters of the current. The blockade by 14-benzoyltalatisamine, but not by TEA, was significantly diminished in a high-K+ (60 mM) external solution. The potency of 14-benzoyltalatisamine was markedly reduced in the presence of 15 mM TEA. The results suggest that 14-benzoyltalatisamine binds to the external pore entry of the delayed rectifier K+ channel with partial insertion into the selectivity filter, in conformity with the prediction of the molecular docking model in the virtual screening.
Investigation of tracking systems properties in CAVE-type virtual reality systems
NASA Astrophysics Data System (ADS)
Szymaniak, Magda; Mazikowski, Adam; Meironke, Michał
2017-08-01
In recent years, many scientific and industrial centers in the world have developed virtual reality systems or laboratories. One of the most advanced solutions is the Immersive 3D Visualization Lab (I3DVL), a CAVE-type (Cave Automatic Virtual Environment) laboratory. It contains two CAVE-type installations: a six-screen installation arranged in the form of a cube, and a four-screen installation, a simplified version of the former. The user's feeling of "immersion" and interaction with the virtual world depend on many factors, in particular on the accuracy of the system tracking the user. In this paper the properties of the tracking systems applied in I3DVL were investigated. Two parameters were selected for analysis: the accuracy of the tracking system and the range of detection of markers by the tracking system in the space of the CAVE. Measurements of system accuracy were performed for the six-screen installation, equipped with four tracking cameras, for three axes: X, Y, Z. Rotation around the Y axis was also analyzed. The measured tracking system shows good linear and rotational accuracy. The biggest issue was the range of the monitoring of markers inside the CAVE: it turned out that the tracking system loses sight of the markers in the corners of the installation. For comparison, for the simplified version of the CAVE (the four-screen installation), equipped with eight tracking cameras, this problem did not occur. The obtained results will allow for improvement of the CAVE quality.
Condorcet and borda count fusion method for ligand-based virtual screening.
Ahmed, Ali; Saeed, Faisal; Salim, Naomie; Abdo, Ammar
2014-01-01
It is known that no individual similarity measure will always give the best recall of active molecule structures for all types of activity classes. Recently, it has been shown that the effectiveness of ligand-based virtual screening approaches can be enhanced by using data fusion. Data fusion can be implemented using two different approaches: group fusion and similarity fusion. Similarity fusion involves searching using multiple similarity measures. The similarity scores, or rankings, for each similarity measure are combined to obtain the final ranking of the compounds in the database. The Condorcet fusion method was examined. This approach combines the outputs of similarity searches from eleven association and distance similarity coefficients, and the winning measure for each class of molecules, based on Condorcet fusion, was chosen as the best method of searching. The recall of retrieved active molecules at the top 5% and a significance test are used to evaluate the proposed method. The MDL Drug Data Report (MDDR), Maximum Unbiased Validation (MUV) and Directory of Useful Decoys (DUD) data sets were used for the experiments and were represented by 2D fingerprints. Simulated virtual screening experiments with the standard data sets show that the use of Condorcet fusion provides a very simple way of improving ligand-based virtual screening, especially when the active molecules being sought have a low degree of structural heterogeneity. However, the effectiveness of Condorcet fusion increased only slightly when structurally diverse sets of actives were being sought.
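The Condorcet step, picking a consensus ordering from several similarity rankings by counting pairwise "elections", can be sketched as follows (a minimal illustration with plain rank lists, not the authors' code; real inputs would be rankings of the same compound set under different similarity coefficients):

```python
from itertools import combinations

def condorcet_fusion(rankings):
    """Fuse several ranked lists of compound IDs into one consensus ranking.

    rankings: list of lists, each ordering the same compound IDs from most
    to least similar under one similarity coefficient. A compound scores one
    point for every pairwise contest it wins, i.e. for every other compound
    that a majority of the measures rank below it."""
    compounds = set(rankings[0])
    # rank position of each compound under each measure (lower = better)
    positions = [{c: i for i, c in enumerate(r)} for r in rankings]
    wins = {c: 0 for c in compounds}
    for a, b in combinations(compounds, 2):
        votes_a = sum(p[a] < p[b] for p in positions)
        if votes_a * 2 > len(positions):
            wins[a] += 1
        elif votes_a * 2 < len(positions):
            wins[b] += 1
    # consensus: most pairwise wins first
    return sorted(compounds, key=lambda c: -wins[c])
```

For example, if two of three measures rank compound A above B, A wins that pairwise contest and moves ahead of B in the fused list.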
Novel Mycosin Protease MycP1 Inhibitors Identified by Virtual Screening and 4D Fingerprints
2015-01-01
The rise of drug-resistant Mycobacterium tuberculosis lends urgency to the need for new drugs for the treatment of tuberculosis (TB). The identification of a serine protease, mycosin protease-1 (MycP1), as the crucial agent in hydrolyzing the virulence factor, ESX-secretion-associated protein B (EspB), potentially opens the door to new tuberculosis treatment options. Using the crystal structure of mycobacterial MycP1 in the apo form, we applied an iterative ligand- and structure-based virtual screening (VS) strategy to identify novel, nonpeptide, small-molecule inhibitors against MycP1 protease. Screening of ∼485,000 ligands from databases at the Genomics Research Institute (GRI) at the University of Cincinnati and the National Cancer Institute (NCI) using our VS approach, which integrated a pharmacophore model and consensus molecular shape patterns of active ligands (4D fingerprints), identified 81 putative inhibitors, and in vitro testing subsequently confirmed two of them as active inhibitors. Thereafter, the lead structures of each VS round were used to generate a new 4D fingerprint that enabled virtual rescreening of the chemical libraries. Finally, the iterative process identified a number of diverse scaffolds as lead compounds that were tested and found to have micromolar IC50 values against the MycP1 target. This study validated the efficiency of the SABRE 4D fingerprints as a means of identifying novel lead compounds in each screening round of the databases. Together, these results underscored the value of using a combination of in silico iterative ligand- and structure-based virtual screening of chemical libraries with experimental validation for the identification of promising structural scaffolds, such as the MycP1 inhibitors. PMID:24628123
NASA Astrophysics Data System (ADS)
Cawood, A.; Bond, C. E.; Howell, J.; Totake, Y.
2016-12-01
Virtual outcrops derived from techniques such as LiDAR and SfM (digital photogrammetry) provide a viable and potentially powerful addition or alternative to traditional field studies, given the large amounts of raw data that can be acquired rapidly and safely. The use of these digital representations of outcrops as a source of geological data has increased greatly in the past decade, and as such, the accuracy and precision of these new acquisition methods applied to geological problems have been addressed by a number of authors. Little work has been done, however, on integrating virtual outcrops into fundamental structural geology workflows and on systematically studying the fidelity of the data derived from them. Here, we use the classic Stackpole Quay syncline outcrop in South Wales to quantitatively evaluate the accuracy of three virtual outcrop models (LiDAR, aerial and terrestrial digital photogrammetry) compared to data collected directly in the field. Using these structural data, we have built 2D and 3D geological models which make predictions of fold geometries. We examine the fidelity to outcrop geology of virtual outcrops generated using different acquisition techniques, and how this affects model building and final outcomes. Finally, we utilize newly acquired data to deterministically test model validity. Based upon these results, we find that acquisition of digital imagery by UAS (unmanned aerial system) yields highly accurate virtual outcrops when compared to terrestrial methods, allowing the construction of robust data-driven predictive models. Careful planning, survey design and choice of a suitable acquisition method are, however, of key importance for best results.
Virtual environment architecture for rapid application development
NASA Technical Reports Server (NTRS)
Grinstein, Georges G.; Southard, David A.; Lee, J. P.
1993-01-01
We describe the MITRE Virtual Environment Architecture (VEA), a product of nearly two years of investigations and prototypes of virtual environment technology. This paper discusses the requirements for rapid prototyping, and an architecture we are developing to support virtual environment construction. VEA supports rapid application development by providing a variety of pre-built modules that can be reconfigured for each application session. The modules supply interfaces for several types of interactive I/O devices, in addition to large-screen or head-mounted displays.
Feinstein, Wei P; Brylinski, Michal
2015-01-01
Computational approaches have emerged as an instrumental methodology in modern research. For example, virtual screening by molecular docking is routinely used in computer-aided drug discovery. One of the critical parameters for ligand docking is the size of a search space used to identify low-energy binding poses of drug candidates. Currently available docking packages often come with a default protocol for calculating the box size, however, many of these procedures have not been systematically evaluated. In this study, we investigate how the docking accuracy of AutoDock Vina is affected by the selection of a search space. We propose a new procedure for calculating the optimal docking box size that maximizes the accuracy of binding pose prediction against a non-redundant and representative dataset of 3,659 protein-ligand complexes selected from the Protein Data Bank. Subsequently, we use the Directory of Useful Decoys, Enhanced to demonstrate that the optimized docking box size also yields an improved ranking in virtual screening. Binding pockets in both datasets are derived from the experimental complex structures and, additionally, predicted by eFindSite. A systematic analysis of ligand binding poses generated by AutoDock Vina shows that the highest accuracy is achieved when the dimensions of the search space are 2.9 times larger than the radius of gyration of a docking compound. Subsequent virtual screening benchmarks demonstrate that this optimized docking box size also improves compound ranking. For instance, using predicted ligand binding sites, the average enrichment factor calculated for the top 1 % (10 %) of the screening library is 8.20 (3.28) for the optimized protocol, compared to 7.67 (3.19) for the default procedure. Depending on the evaluation metric, the optimal docking box size gives better ranking in virtual screening for about two-thirds of target proteins. 
This fully automated procedure can be used to optimize docking protocols in order to improve the ranking accuracy in production virtual screening simulations. Importantly, the optimized search space systematically yields better results than the default method not only for experimental pockets, but also for those predicted from protein structures. A script for calculating the optimal docking box size is freely available at www.brylinski.org/content/docking-box-size. Graphical Abstract: We developed a procedure to optimize the box size in molecular docking calculations. The left panel shows the predicted binding pose of NADP (green sticks) compared to the experimental complex structure of human aldose reductase (blue sticks) using a default protocol. The right panel shows the docking accuracy using an optimized box size.
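The headline rule, a cubic search space whose edge is 2.9 times the ligand's radius of gyration, can be sketched as below (a toy version using an unweighted radius of gyration over atom coordinates; the paper's exact treatment of atom weighting may differ):

```python
import math

def radius_of_gyration(coords):
    """Unweighted radius of gyration of a set of 3D atom coordinates,
    given as (x, y, z) tuples."""
    n = len(coords)
    cx = sum(x for x, _, _ in coords) / n
    cy = sum(y for _, y, _ in coords) / n
    cz = sum(z for _, _, z in coords) / n
    sq = sum((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
             for x, y, z in coords)
    return math.sqrt(sq / n)

def optimal_box_edge(coords, scale=2.9):
    """Edge length of the cubic docking search space, following the
    reported optimum of 2.9 x Rg of the docked compound."""
    return scale * radius_of_gyration(coords)
```

The resulting edge length would be passed to the docking program as the size of its search box, centred on the binding pocket.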
Chaput, Ludovic; Martinez-Sanz, Juan; Saettel, Nicolas; Mouawad, Liliane
2016-01-01
In a structure-based virtual screening, the choice of the docking program is essential for the success of hit identification. Benchmarks are meant to help in guiding this choice, especially when undertaken on a large variety of protein targets. Here, the performance of four popular virtual screening programs, Gold, Glide, Surflex and FlexX, is compared using the Directory of Useful Decoys-Enhanced database (DUD-E), which includes 102 targets with an average of 224 ligands per target and 50 decoys per ligand, generated to avoid biases in the benchmarking. Then, a relationship between these program performances and the properties of the targets or the small molecules was investigated. The comparison was based on two metrics, with three different parameters each. The BEDROC scores with α = 80.5 indicated that, on the overall database, Glide succeeded (score > 0.5) for 30 targets, Gold for 27, FlexX for 14 and Surflex for 11. The performance did not depend on the hydrophobicity or the openness of the protein cavities, nor on the families to which the proteins belong. However, despite the care in the construction of the DUD-E database, the small differences that remain between the actives and the decoys likely explain the successes of Gold, Surflex and FlexX. Moreover, the similarity between the actives of a target and its crystal structure ligand seems to be at the basis of the good performance of Glide. When all targets with significant biases are removed from the benchmarking, a subset of 47 targets remains, for which Glide succeeded for only 5 targets, Gold for 4, and FlexX and Surflex for 2. The dramatic drop in performance of all four programs when the biases are removed shows that we should beware of virtual screening benchmarks, because good performances may be due to wrong reasons.
Therefore, benchmarking would hardly provide guidelines for virtual screening experiments, despite the tendency that is maintained, i.e., that Glide and Gold display better performance than FlexX and Surflex. We recommend always using several programs and combining their results. Graphical Abstract: Summary of the results obtained by virtual screening with the four programs, Glide, Gold, Surflex and FlexX, on the 102 targets of the DUD-E database. The percentage of targets with successful results, i.e., with BEDROC(α = 80.5) > 0.5, is shown in blue when the entire database is considered, and in red when targets with biased chemical libraries are removed.
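The BEDROC early-recognition metric used in the benchmark can be computed from the ranks of the actives with the standard Truchon-Bayly formulation, sketched here as a self-contained function (the α = 80.5 default mirrors the paper's choice; this is an illustrative implementation, not the one used in the study):

```python
import math

def bedroc(active_ranks, n_total, alpha=80.5):
    """BEDROC early-recognition score (Truchon-Bayly formulation).

    active_ranks: 1-based positions of the active compounds in the
    ranked screening list of n_total compounds. Scores close to 1 mean
    the actives are concentrated at the very top of the ranking."""
    n = len(active_ranks)
    ra = n / n_total
    # exponentially weighted sum over active ranks (RIE numerator)
    s = sum(math.exp(-alpha * r / n_total) for r in active_ranks)
    rie = (s / n) / ((1.0 / n_total)
                     * (1 - math.exp(-alpha))
                     / (math.exp(alpha / n_total) - 1))
    # map RIE onto the [0, 1] BEDROC scale
    return (rie * ra * math.sinh(alpha / 2)
            / (math.cosh(alpha / 2) - math.cosh(alpha / 2 - alpha * ra))
            + 1.0 / (1 - math.exp(alpha * (1 - ra))))
```

With α = 80.5, roughly the first 2% of the ranked list dominates the score, which is why it is favoured over plain AUC for early-recognition benchmarks.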
An XML Representation for Crew Procedures
NASA Technical Reports Server (NTRS)
Simpson, Richard C.
2005-01-01
NASA ensures safe operation of complex systems through the use of formally-documented procedures, which encode the operational knowledge of the system as derived from system experts. Crew members use procedure documentation on the ground for training purposes and on-board space shuttle and space station to guide their activities. Investigators at JSC are developing a new representation for procedures that is content-based (as opposed to display-based). Instead of specifying how a procedure should look on the printed page, the content-based representation will identify the components of a procedure and (more importantly) how the components are related (e.g., how the activities within a procedure are sequenced; what resources need to be available for each activity). This approach will allow different sets of rules to be created for displaying procedures on a computer screen, on a hand-held personal digital assistant (PDA), verbally, or on a printed page, and will also allow intelligent reasoning processes to automatically interpret and use procedure definitions. During his NASA fellowship, Dr. Simpson examined how various industries represent procedures (also called business processes or workflows), in areas such as manufacturing, accounting, shipping, or customer service. A useful method for designing and evaluating workflow representation languages is by determining their ability to encode various workflow patterns, which depict abstract relationships between the components of a procedure removed from the context of a specific procedure or industry. Investigators have used this type of analysis to evaluate how well-suited existing workflow representation languages are for various industries based on the workflow patterns that commonly arise across industry-specific procedures. 
Based on this type of analysis, it is already clear that existing workflow representations capture discrete flow of control (i.e., when one activity should start and stop based on when other activities start and stop), but do not capture the flow of data, materials, resources or priorities. Existing workflow representation languages are also limited to representing sequences of discrete activities, and cannot encode procedures involving continuous flow of information or materials between activities.
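The content-based idea above, storing activities, their ordering constraints and required resources as data from which any display can be derived, can be sketched as below (a hypothetical minimal encoding, not NASA's actual representation; the activity names and resources are invented for illustration):

```python
from graphlib import TopologicalSorter

# Hypothetical content-based procedure: each activity records which
# activities must precede it and which resources it needs. Display rules
# (screen, PDA, print, verbal) can all be derived from this one structure.
procedure = {
    "power_up":  {"after": [],                        "resources": ["panel A"]},
    "self_test": {"after": ["power_up"],              "resources": ["console"]},
    "calibrate": {"after": ["power_up"],              "resources": ["sensor kit"]},
    "report":    {"after": ["self_test", "calibrate"], "resources": []},
}

def execution_order(proc):
    """One valid activity sequence implied by the ordering constraints."""
    ts = TopologicalSorter({name: spec["after"] for name, spec in proc.items()})
    return list(ts.static_order())
```

Because sequencing is encoded as a dependency graph rather than a printed order, the same definition supports linear print layouts, interactive displays, or automated reasoning over resource availability.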
Hydrogen storage materials discovery via high throughput ball milling and gas sorption.
Li, Bin; Kaye, Steven S; Riley, Conor; Greenberg, Doron; Galang, Daniel; Bailey, Mark S
2012-06-11
The lack of a high capacity hydrogen storage material is a major barrier to the implementation of the hydrogen economy. To accelerate discovery of such materials, we have developed a high-throughput workflow for screening of hydrogen storage materials in which candidate materials are synthesized and characterized via highly parallel ball mills and volumetric gas sorption instruments, respectively. The workflow was used to identify mixed imides with significantly enhanced absorption rates relative to Li2Mg(NH)2. The most promising material, 2LiNH2:MgH2 + 5 atom % LiBH4 + 0.5 atom % La, exhibits the best balance of absorption rate, capacity, and cycle-life, absorbing >4 wt % H2 in 1 h at 120 °C after 11 absorption-desorption cycles.
Becságh, Péter; Szakács, Orsolya
2014-10-01
During a diagnostic workflow for detecting sequence alterations, it is sometimes important to design an algorithm that combines screening and direct tests. Normally the use of a direct test, mainly sequencing, is limited. There is an increased need for effective screening tests that remain "closed tube" during the whole process, thereby decreasing the risk of PCR product contamination. The aim of this study was to design such a closed-tube, detection-probe-based screening assay to detect different kinds of sequence alterations in the exon 11 region of the human c-kit gene. Within this region there are several possible deletions and single nucleotide changes. During assay setup, several probe chemistry formats were screened and tested. After some optimization steps, the TaqMan probe format was selected.
Office-Based Three-Dimensional Printing Workflow for Craniomaxillofacial Fracture Repair.
Elegbede, Adekunle; Diaconu, Silviu C; McNichols, Colton H L; Seu, Michelle; Rasko, Yvonne M; Grant, Michael P; Nam, Arthur J
2018-03-08
Three-dimensional printing of patient-specific models is being used in various aspects of craniomaxillofacial reconstruction. Printing is typically outsourced to off-site vendors, with the main disadvantages being increased costs and time for production. Office-based 3-dimensional printing has been proposed as a means to reduce costs and delays, but remains largely underused because of the perception among surgeons that it is futuristic, highly technical, and prohibitively expensive. The goal of this report is to demonstrate the feasibility and ease of incorporating in-office 3-dimensional printing into the standard workflow for facial fracture repair. Patients with complex mandible fractures requiring open repair were identified. Open-source software was used to create virtual 3-dimensional skeletal models of the initial injury pattern, and then of the ideally reduced fractures, based on preoperative computed tomography (CT) scan images. The virtual 3-dimensional skeletal models were then printed in our office using a commercially available 3-dimensional printer and bioplastic filament. The 3-dimensional skeletal models were used as templates to bend and shape titanium plates that were subsequently used for intraoperative fixation. Average print time was 6 hours. Excluding the 1-time cost of the 3-dimensional printer of $2500, roughly the cost of a single commercially produced model, the average material cost to print 1 model mandible was $4.30. Postoperative CT imaging demonstrated precise, predicted reduction in all patients. Office-based 3-dimensional printing of skeletal models can be routinely used in repair of facial fractures in an efficient and cost-effective manner.
A central aim of EPA’s ToxCast project is to use in vitro high-throughput screening (HTS) profiles to build predictive models of in vivo toxicity. Where assays lack metabolic capability, such efforts may need to anticipate the role of metabolic activation (or deactivation). A wo...
Wegh, Robin S; Berendsen, Bjorn J A; Driessen-Van Lankveld, Wilma D M; Pikkemaat, Mariël G; Zuidema, Tina; Van Ginkel, Leen A
2017-11-01
A non-targeted workflow is reported for the isolation and identification of antimicrobial active compounds using bioassay-directed screening and LC coupled to high-resolution MS. Suspect samples are extracted using a generic protocol and fractionated using two different LC conditions (A and B). The behaviour of the bioactive compound under these different conditions yields information about the physicochemical properties of the compound and introduces variations in co-eluting compounds in the fractions, which is essential for peak picking and identification. The fractions containing the active compound(s) obtained with conditions A and B are selected using a microbiological effect-based bioassay. The selected bioactive fractions from A and B are analysed using LC combined with high-resolution MS. Selection of relevant signals is automatically carried out by selecting all signals present in both bioactive fractions A and B, yielding tremendous data reduction. The method was assessed using two spiked feed samples and subsequently applied to two feed samples containing an unidentified compound showing microbial growth inhibition. In all cases, the identity of the compound causing microbiological inhibition was successfully confirmed.
Siow, Hwee-Leng; Lim, Theam Soon; Gan, Chee-Yuen
2017-01-01
The main objective of this study was to develop an efficient workflow to discover α-amylase inhibitory peptides from cumin seed. A total of 56 unknown peptides were initially found in the cumin seed protein hydrolysate. They were subjected to 2 different in silico screenings, and 6 peptides were shortlisted. The peptides were then subjected to in vitro selection using a phage display technique, and 3 clones (CSP3, CSP4 and CSP6) showed high affinity in binding α-amylase. These clones were subjected to the inhibitory test, and only CSP4 and CSP6 exhibited high inhibitory activity. These peptides were therefore chemically synthesized for validation purposes. CSP4 inhibited bacterial and human salivary α-amylases with IC50 values of 0.11 and 0.04 μmol, respectively, whereas the values for CSP6 were about 0.10 and 0.15 μmol, respectively. The results showed that the strengths of each protocol were successfully combined to enhance α-amylase inhibitory peptide discovery. Copyright © 2016 Elsevier Ltd. All rights reserved.
Ortalli, Margherita; Attard, Luciano; Vanino, Elisa; Gaibani, Paolo; Vocale, Caterina; Rossini, Giada; Cagarelli, Roberto; Pierro, Anna; Billi, Patrizia; Mastroianni, Antonio; Di Cesare, Simona; Codeluppi, Mauro; Franceschini, Erica; Melchionda, Fraia; Gramiccia, Marina; Scalone, Aldo; Gentilomi, Giovanna A.; Landini, Maria P.
2017-01-01
The diagnosis of visceral leishmaniasis (VL) remains challenging, due to the limited sensitivity of microscopy, the poor performance of serological methods in immunocompromised patients and the lack of standardization of molecular tests. The aim of this study was to implement a combined diagnostic workflow by integrating serological and molecular tests with standardized clinical criteria. Between July 2013 and June 2015, the proposed workflow was applied to specimens obtained from 94 in-patients with clinical suspicion of VL in the Emilia-Romagna region, Northern Italy. Serological tests and molecular techniques were employed. Twenty-one adult patients (22%) had a confirmed diagnosis of VL by clinical criteria, serology and/or real-time polymerase chain reaction; 4 of these patients were HIV-positive. Molecular tests exhibited higher sensitivity than serological tests for the diagnosis of VL. In our experience, the rK39 immunochromatographic test was insufficiently sensitive for use as a screening test for the diagnosis of VL caused by L. infantum in Italy. However, as molecular tests are yet not standardized, further studies are required to identify an optimal screening test for Mediterranean VL. PMID:28832646
Howard, Barbara J; Sturner, Raymond
2017-12-01
To describe the benefits of, and problems with, screening for and addressing developmental and behavioral problems in primary care, and to present an online clinical process support system as a solution. Screening faces various implementation barriers, including time costs, accuracy, workflow, and knowledge of tools. In addition, training of clinicians in dealing with identified issues is lacking. Patients prefer computerized screening and disclose more through it. An online clinical process support system (CHADIS) shows promise in addressing these issues. Use of a comprehensive panel of online pre-visit screens, linked decision support providing moment-of-care training, and post-visit activities and resources for patient-specific education, monitoring, and care coordination is an efficient way to make the entire process of screening and follow-up care feasible in primary care. CHADIS fulfills these requirements and provides Maintenance of Certification credit to physicians as well as added income for screening efforts.
An interactive web-based application for Comprehensive Analysis of RNAi-screen Data.
Dutta, Bhaskar; Azhir, Alaleh; Merino, Louis-Henri; Guo, Yongjian; Revanur, Swetha; Madhamshettiwar, Piyush B; Germain, Ronald N; Smith, Jennifer A; Simpson, Kaylene J; Martin, Scott E; Buehler, Eugen; Fraser, Iain D C
2016-02-23
RNAi screens are widely used in functional genomics. Although the screen data can be susceptible to a number of experimental biases, many of these can be corrected by computational analysis. For this purpose, here we have developed a web-based platform for integrated analysis and visualization of RNAi screen data named CARD (for Comprehensive Analysis of RNAi Data; available at https://card.niaid.nih.gov). CARD allows the user to seamlessly carry out sequential steps in a rigorous data analysis workflow, including normalization, off-target analysis, integration of gene expression data, optimal thresholds for hit selection and network/pathway analysis. To evaluate the utility of CARD, we describe analysis of three genome-scale siRNA screens and demonstrate: (i) a significant increase both in selection of subsequently validated hits and in rejection of false positives, (ii) an increased overlap of hits from independent screens of the same biology and (iii) insight to microRNA (miRNA) activity based on siRNA seed enrichment.
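The normalization and hit-selection stages of such an analysis can be sketched in a few lines of Python. The robust (median/MAD) z-score and fixed threshold below are generic illustrations of these steps, not CARD's actual algorithms, and the plate data are invented.

```python
from statistics import median

def robust_z(readouts):
    """Median/MAD-based z-scores, a common robust normalization
    for per-plate screen readouts."""
    med = median(readouts)
    mad = median(abs(x - med) for x in readouts)
    return [0.6745 * (x - med) / mad for x in readouts]

def select_hits(genes, zscores, threshold=3.0):
    """Call hits whose robust z-score magnitude exceeds the threshold."""
    return [g for g, z in zip(genes, zscores) if abs(z) >= threshold]

# Invented single-plate readouts; geneD is the strong outlier.
plate = {"geneA": 100.0, "geneB": 105.0, "geneC": 98.0, "geneD": 250.0}
z = robust_z(list(plate.values()))
hits = select_hits(list(plate.keys()), z)
```

In practice a threshold would be chosen per screen (CARD reports optimal thresholds for hit selection), and off-target and expression filters would follow this step.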
De Simone, Angela; Mancini, Francesca; Cosconati, Sandro; Marinelli, Luciana; La Pietra, Valeria; Novellino, Ettore; Andrisano, Vincenza
2013-01-25
In the present work, a human recombinant BACE1 immobilized enzyme reactor (hrBACE1-IMER) has been applied to the sensitive, fast screening of 38 compounds selected through a virtual screening approach. The hrBACE1-IMER was inserted into a liquid chromatograph coupled with a fluorescence detector. A fluorogenic peptide substrate (M-2420), containing the β-secretase site of the Swedish mutation of APP, was injected and cleaved in the on-line HPLC-hrBACE1-IMER system, giving rise to the fluorescent product. The compounds of the library were tested for their ability to inhibit BACE1 in the immobilized format and to reduce the area of the chromatographic peak of the fluorescent enzymatic product. The results were validated in solution by using two different FRET methods. Owing to the efficient virtual screening methodology, more than fifty percent of the selected compounds showed measurable inhibitory activity. One of the most active compounds (a bis-indanone derivative) was characterized in terms of IC50 and Ki on the hrBACE1-IMER. Thus, the hrBACE1-IMER has been confirmed as a valid tool for the throughput screening of different chemical entities with potency below 30 μM, for fast hit selection and for determination of the mode of action. Copyright © 2012 Elsevier B.V. All rights reserved.
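The readout in this kind of on-line assay, inhibition expressed as the reduction of the product's chromatographic peak area relative to an uninhibited control, reduces to a simple calculation. The function and peak-area values below are illustrative assumptions, not taken from the paper.

```python
def percent_inhibition(area_control, area_inhibited):
    """Inhibition (%) from the reduction of the enzymatic product's
    peak area relative to an uninhibited control injection."""
    return 100.0 * (1.0 - area_inhibited / area_control)

# Invented peak areas: the test compound reduces the product peak by 60%.
inhibition = percent_inhibition(area_control=1000.0, area_inhibited=400.0)
```

Repeating this at several inhibitor concentrations gives the dose-response curve from which an IC50 is fitted.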
Modeling of luminance distribution in CAVE-type virtual reality systems
NASA Astrophysics Data System (ADS)
Meironke, Michał; Mazikowski, Adam
2017-08-01
At present, CAVE-type (Cave Automatic Virtual Environment) installations are among the most advanced virtual reality systems. Such systems usually consist of four, five or six projection screens, in the case of six screens arranged in the form of a cube. Providing the user with a high level of immersion in such systems depends largely on the optical properties of the system. The modeling of physical phenomena now plays a major role in most fields of science and technology; it allows the operation of a device to be simulated without making any changes to the physical construction. In this paper, the distribution of luminance in CAVE-type virtual reality systems was modeled. Calculations were performed for a model of a 6-walled CAVE-type installation, based on the Immersive 3D Visualization Laboratory situated at the Faculty of Electronics, Telecommunications and Informatics at the Gdańsk University of Technology. Tests were carried out for two different scattering distributions of the screen material in order to check how these characteristics influence the luminance distribution of the whole CAVE. The basic assumptions and simplifications of the modeled CAVE-type installation are presented, together with the results and a brief discussion of their usefulness and that of the developed model.
Virtual screening for potential inhibitors of bacterial MurC and MurD ligases.
Tomašić, Tihomir; Kovač, Andreja; Klebe, Gerhard; Blanot, Didier; Gobec, Stanislav; Kikelj, Danijel; Mašič, Lucija Peterlin
2012-03-01
Mur ligases are bacterial enzymes involved in the cytoplasmic steps of peptidoglycan biosynthesis and are viable targets for antibacterial drug discovery. We have performed virtual screening for potential ATP-competitive inhibitors targeting MurC and MurD ligases, using a protocol of consecutive hierarchical filters. Selected compounds were evaluated for inhibition of MurC and MurD ligases, and weak inhibitors possessing dual inhibitory activity have been identified. These compounds represent new scaffolds for further optimisation towards multiple Mur ligase inhibitors with improved inhibitory potency.
Zhuang, Chunlin; Narayanapillai, Sreekanth; Zhang, Wannian; Sham, Yuk Yin; Xing, Chengguo
2014-02-13
In this study, rapid structure-based virtual screening and hit-based substructure search were utilized to identify small molecules that disrupt the Keap1-Nrf2 interaction. Special emphasis was placed on maximizing the exploration of the chemical diversity of the initial hits while economically establishing an informative structure-activity relationship (SAR) for novel scaffolds. Our most potent noncovalent inhibitor exhibits threefold improved cellular Nrf2 activation compared with the most active noncovalent Keap1 inhibitor known to date.
Virtual screening of Indonesian flavonoid as neuraminidase inhibitor of influenza a subtype H5N1
NASA Astrophysics Data System (ADS)
Parikesit, A. A.; Ardiansah, B.; Handayani, D. M.; Tambunan, U. S. F.; Kerami, D.
2016-02-01
Highly Pathogenic Avian Influenza (HPAI) H5N1 poses a significant threat to animal and human health worldwide. The number of H5N1 infections in Indonesia was the highest during 2005-2013, with a mortality rate of up to 83%. Mutations in H5N1 strains have made them resistant to commercial antiviral agents such as oseltamivir and zanamivir, so more potent antiviral agents are needed. In this study, virtual screening of Indonesian flavonoids as neuraminidase inhibitors of H5N1 was conducted. A total of 491 flavonoid compounds obtained from HerbalDB were screened. Molecular docking was performed using MOE 2008.10. This screening identified Guajavin B as the best ligand.
NASA Astrophysics Data System (ADS)
Wang, Yang; Yu, Jianqun; Yu, Yajun
2018-05-01
To solve the problems in DEM simulations of the screening process of a swing-bar sieve, in this paper we propose the real-virtual boundary method to build the geometrical model of the screen deck on a swing-bar sieve. The motion of the swing-bar sieve is modelled by planar multi-body kinematics. A coupled model of the discrete element method (DEM) with multi-body kinematics (MBK) is presented to simulate the flowing and passing processes of soybean particles on the screen deck. Comparison of the simulated results with experimental results for the screening process of the LA-LK laboratory-scale swing-bar sieve verifies the feasibility and validity of the real-virtual boundary method and the coupled DEM-MBK model proposed in this paper. This work provides a basis for the optimization design of swing-bar sieves with circular apertures and complex motion.
Albright, Glenn; Bryan, Craig; Adam, Cyrille; McMillan, Jeremiah; Shockley, Kristen
Primary health care professionals are in an excellent position to identify, screen, and conduct brief interventions for patients with mental health and substance use disorders. However, discomfort in initiating conversations about behavioral health, time concerns, lack of knowledge about screening tools, and treatment resources are barriers. This study examines the impact of an online simulation where users practice role-playing with emotionally responsive virtual patients to learn motivational interviewing strategies to better manage screening, brief interventions, and referral conversations. Baseline data were collected from 227 participants who were then randomly assigned into the treatment or wait-list control groups. Treatment group participants then completed the simulation, postsimulation survey, and 3-month follow-up survey. Results showed significant increases in knowledge/skill to identify and engage in collaborative decision making with patients. Results strongly suggest that role-play simulation experiences can be an effective means of teaching screening and brief intervention.
Shanahan, C W; Sorensen-Alawad, A; Carney, B L; Persand, I; Cruz, A; Botticelli, M; Pressman, K; Adams, W G; Brolin, M; Alford, D P
2014-01-01
The Massachusetts Screening, Brief Intervention and Referral to Treatment (MASBIRT) Program, a substance use screening program in general medical settings, created a web-based, point-of-care (POC) application, the MASBIRT Portal (the "Portal"), to meet program goals. We report on the development and implementation of the Portal. Five-year program process outcomes recorded by an independent evaluator are described, along with an anonymous survey of Health Educators' (HEs) adoption, perceptions and Portal use based on a modified version of the Technology Readiness Index. [8] Specific management team members, selected based on their roles in program leadership, development and implementation of the Portal and supervision of HEs, participated in semi-structured, qualitative interviews. At the conclusion of the program, 73% (24/33) of the HEs completed a survey on their experience using the Portal. HEs reported that the Portal made recording screening information easy (96%); improved planning of their workday (83%); facilitated POC data collection (84%); decreased time dedicated to data entry (100%); and improved job satisfaction (59%). The top two barriers to use were "no or limited wireless connectivity" (46%) and "the tablet was too heavy/bulky to carry" (29%). Qualitative management team interviews identified strategies for successful HIT implementation: the importance of engaging HEs in outlining specifications and workflow needs, collaborative testing prior to implementation, and clear agreement on data collection purpose, quality requirements and staff roles. Overall, HEs perceived the Portal favorably with regard to its time-saving ability and improved workflow. Lessons learned included identifying core requirements early during system development and the need for managers to institute and enforce consistent behavioral work norms. Barriers and HEs' views of technology impacted the utilization of the MASBIRT Portal.
Further research is needed to determine best approaches for HIT system implementation in general medical settings.
Pharmacophore screening of the protein data bank for specific binding site chemistry.
Campagna-Slater, Valérie; Arrowsmith, Andrew G; Zhao, Yong; Schapira, Matthieu
2010-03-22
A simple computational approach was developed to screen the Protein Data Bank (PDB) for putative pockets possessing a specific binding site chemistry and geometry. The method employs two commonly used 3D screening technologies, namely identification of cavities in protein structures and pharmacophore screening of chemical libraries. For each protein structure, a pocket finding algorithm is used to extract potential binding sites containing the correct types of residues, which are then stored in a large SDF-formatted virtual library; pharmacophore filters describing the desired binding site chemistry and geometry are then applied to screen this virtual library and identify pockets matching the specified structural chemistry. As an example, this approach was used to screen all human protein structures in the PDB and identify sites having chemistry similar to that of known methyl-lysine binding domains that recognize chromatin methylation marks. The selected genes include known readers of the histone code as well as novel binding pockets that may be involved in epigenetic signaling. Putative allosteric sites were identified on the structures of TP53BP1, L3MBTL3, CHEK1, KDM4A, and CREBBP.
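A toy version of the second stage, filtering extracted pockets against a pharmacophore that specifies feature types and pairwise distances, might look like the following. The feature names, coordinates and tolerance are invented for illustration and do not reproduce the authors' screening technology.

```python
from itertools import permutations
import math

def matches_pharmacophore(pocket, pharmacophore, tol=1.0):
    """pocket: list of (feature_type, (x, y, z)) pseudo-atoms.
    pharmacophore: (list of required feature types,
                    dict {(i, j): target distance in angstroms}).
    Returns True if some assignment of pocket features to pharmacophore
    points satisfies every distance constraint within tol."""
    types, constraints = pharmacophore
    for combo in permutations(pocket, len(types)):
        if not all(f[0] == t for f, t in zip(combo, types)):
            continue  # wrong chemistry for this assignment
        if all(abs(math.dist(combo[i][1], combo[j][1]) - d) <= tol
               for (i, j), d in constraints.items()):
            return True  # geometry also matches
    return False

# Invented pocket with three pseudo-atom features.
pocket = [("aromatic", (0.0, 0.0, 0.0)),
          ("acceptor", (4.0, 0.0, 0.0)),
          ("donor",    (0.0, 6.0, 0.0))]
# Require an aromatic and an acceptor feature ~4.5 A apart.
pharm = (["aromatic", "acceptor"], {(0, 1): 4.5})
hit = matches_pharmacophore(pocket, pharm)
```

A real implementation would operate on feature definitions derived from pocket residues and use a proper pharmacophore toolkit, but the pass/fail logic is of this form.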
Colorectal Cancer Screening (PDQ®)—Patient Version
There are five types of tests that are used to screen for colorectal cancer: fecal occult blood test, sigmoidoscopy, colonoscopy, virtual colonoscopy, and DNA stool test. Learn more about these and other tests in this expert-reviewed summary.
Xu, Zhenzhen; Li, Jianzhong; Chen, Ailiang; Ma, Xin; Yang, Shuming
2018-05-03
The retrospectivity (the ability to retrospect to a previously unknown compound in raw data) is very meaningful for food safety and risk assessment when facing newly emerging drugs. Screening based only on accurate mass and retention time may lead to false positive and false negative results, so a new retrospective, reliable platform is desirable. Different concentration levels of standards with and without matrix were analyzed using ion mobility (IM)-quadrupole-time-of-flight (Q-TOF) to collect retrospective accurate mass, retention time, drift time and tandem MS evidence for identification in a single experiment. The isomer separation ability of IM and the four-dimensional (4D) feature abundance quantification ability were evaluated for veterinary drugs for the first time. The sensitivity of the IM-Q-TOF workflow was clearly higher than that of the traditional database searching algorithm [the find-by-formula (FbF) function] for Q-TOF. In addition, the IM-Q-TOF workflow contained most of the results from FbF and removed the false positive results. Some isomers were separated by IM, and the 4D feature abundance quantitation removed interferences with similar accurate mass and showed good linearity. A new retrospective, multi-evidence platform was built for veterinary drug screening in a single experiment. The sensitivity was significantly improved and the data can be used for quantification. The platform showed its potential for use in food safety and risk assessment. This article is protected by copyright. All rights reserved.
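Accurate-mass matching of this kind reduces, at its core, to comparing features within a ppm tolerance. The sketch below shows such a match on invented m/z values; it is a generic illustration, not the vendor workflow used in the paper, and real matching would also compare retention and drift times.

```python
def match_features(features_a, features_b, ppm_tol=5.0):
    """Each feature is an (mz, intensity) pair. Return the m/z values
    from list A that have a counterpart in list B within ppm_tol,
    i.e. the features detected under both conditions."""
    shared = []
    for mz_a, _ in features_a:
        tol = mz_a * ppm_tol / 1e6  # ppm tolerance in absolute Da
        if any(abs(mz_a - mz_b) <= tol for mz_b, _ in features_b):
            shared.append(mz_a)
    return shared

# Invented feature lists from two measurement conditions.
frac_a = [(285.0794, 1.2e5), (301.1410, 3.4e4), (450.2000, 9.0e3)]
frac_b = [(285.0795, 8.8e4), (512.3300, 1.1e4)]
shared = match_features(frac_a, frac_b)
```

The same intersection logic underlies the data reduction in the bioassay-directed LC-MS workflow cited earlier in this list (Wegh et al.), where only signals present in bioactive fractions from both LC conditions are retained.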
Wei, Ning-Ning; Hamza, Adel
2014-01-27
We present an efficient and rational ligand/structure shape-based virtual screening approach combining our previous ligand shape-based similarity method SABRE (shape-approach-based routines enhanced) with the 3D shape of the receptor binding site. Our approach exploits the pharmacological preferences of a number of known active ligands to take advantage of structural diversities and chemical similarities, using a linear combination of weighted molecular shape densities. Furthermore, the algorithm generates a consensus molecular-shape pattern recognition that is used to filter and place the candidate structure into the binding pocket. The descriptor pool used to construct the consensus molecular-shape pattern consists of four-dimensional (4D) fingerprints generated from the distribution of conformer states available to a molecule and the 3D shapes of a set of active ligands computed using the SABRE software. The virtual screening efficiency of SABRE was validated using the Database of Useful Decoys (DUD) and the filtered (WOMBAT) version of 10 DUD targets. The ligand/structure shape-based similarity SABRE algorithm outperforms several other widely used virtual screening methods that use data fusion of multiple screening tools (2D and 3D fingerprints) and demonstrates a superior early retrieval rate of active compounds (EF(0.1%) = 69.0% and EF(1%) = 98.7%) from a large ligand database (∼95,000 structures). Therefore, our similarity approach can be of particular use for identifying active compounds that are similar to reference molecules and for predicting activity against other targets (chemogenomics). An academic license for the SABRE program is available on request.
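The enrichment factors quoted above follow the standard early-retrieval definition, EF(x%) = (fraction of actives among the top x% of the ranked list) / (fraction of actives in the whole database), which can be computed directly from a ranked hit list. The ranking below is invented for illustration.

```python
def enrichment_factor(ranked_labels, fraction):
    """ranked_labels: 1 for active, 0 for decoy, best-scored first.
    fraction: portion of the ranked list to inspect, e.g. 0.01 for EF(1%)."""
    n_top = max(1, int(len(ranked_labels) * fraction))
    hits_top = sum(ranked_labels[:n_top])
    total_actives = sum(ranked_labels)
    # Hit rate in the top slice divided by the overall hit rate.
    return (hits_top / n_top) / (total_actives / len(ranked_labels))

# Invented ranking: 3 actives among 10 compounds; one active ranked first.
ranked = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]
ef_10 = enrichment_factor(ranked, 0.10)  # EF at the top 10% (1 compound)
```

A random ranking gives EF near 1, so values of 69% active retrieval at 0.1% of the database, as reported for SABRE, correspond to very large enrichment.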
Ko, Gene M; Garg, Rajni; Bailey, Barbara A; Kumar, Sunil
2016-01-01
Quantitative structure-activity relationship (QSAR) models can be used as a predictive tool for virtual screening of chemical libraries to identify novel drug candidates. The aims of this paper were to report the results of a study performed for descriptor selection, QSAR model development, and virtual screening for identifying novel HIV-1 integrase inhibitor drug candidates. First, three evolutionary algorithms were compared for descriptor selection: differential evolution-binary particle swarm optimization (DE-BPSO), binary particle swarm optimization, and genetic algorithms. Next, three QSAR models were developed from an ensemble of multiple linear regression, partial least squares, and extremely randomized trees models. A comparison of the performances of the three evolutionary algorithms showed that DE-BPSO offers a significant improvement over the other two algorithms. The QSAR models developed in this study were used in consensus as a predictive tool for virtual screening of the NCI Open Database containing 265,242 compounds to identify potential novel HIV-1 integrase inhibitors. Six compounds were predicted to be highly active (pIC50 > 6) by each of the three models. The use of a hybrid evolutionary algorithm (DE-BPSO) for descriptor selection and QSAR model development in drug design is a novel approach. Consensus modeling may provide better predictivity by taking into account a broader range of chemical properties within the data set conducive for inhibition that may be missed by an individual model. The six compounds identified provide novel drug candidate leads in the design of next-generation HIV-1 integrase inhibitors targeting drug-resistant mutant viruses.
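The consensus step, keeping only compounds that every model in the ensemble predicts as highly active (pIC50 > 6), can be sketched as follows. The stand-in linear models and descriptor values are invented; they are not the paper's MLR/PLS/extremely-randomized-trees ensemble.

```python
def consensus_hits(compounds, models, cutoff=6.0):
    """Keep compounds whose predicted pIC50 exceeds the cutoff
    under every model; report the mean prediction for each hit."""
    hits = []
    for name, descriptors in compounds.items():
        preds = [m(descriptors) for m in models]
        if all(p > cutoff for p in preds):
            hits.append((name, sum(preds) / len(preds)))
    return hits

# Two invented stand-in models mapping a descriptor dict to a pIC50.
models = [lambda d: 0.5 * d["logp"] + 4.0,
          lambda d: 0.4 * d["logp"] + 4.5]
compounds = {"cmpd1": {"logp": 5.0}, "cmpd2": {"logp": 2.0}}
hits = consensus_hits(compounds, models)
```

Requiring agreement across models trades recall for precision: a compound favored by only one model's view of the chemistry is filtered out, which is the rationale the abstract gives for consensus modeling.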
A proposal for an open source graphical environment for simulating x-ray optics
NASA Astrophysics Data System (ADS)
Sanchez del Rio, Manuel; Rebuffi, Luca; Demsar, Janez; Canestrari, Niccolo; Chubar, Oleg
2014-09-01
A new graphic environment to drive X-ray optics simulation packages such as SHADOW and SRW is proposed. The aim is to simulate a virtual experiment, including describing the electron beam, simulating the emitted radiation, the optics, the scattering by the sample, and radiation detection. Python is chosen as the common interaction language. The ingredients of the new application, a glossary of variables for optical components, the selection of visualization tools, and the integration of all these components in a high-level workflow environment built on Orange are presented.
Dual-Energy CT: New Horizon in Medical Imaging
Goo, Jin Mo
2017-01-01
Dual-energy CT has remained underutilized over the past decade probably due to a cumbersome workflow issue and current technical limitations. Clinical radiologists should be made aware of the potential clinical benefits of dual-energy CT over single-energy CT. To accomplish this aim, the basic principle, current acquisition methods with advantages and disadvantages, and various material-specific imaging methods as clinical applications of dual-energy CT should be addressed in detail. Current dual-energy CT acquisition methods include dual tubes with or without beam filtration, rapid voltage switching, dual-layer detector, split filter technique, and sequential scanning. Dual-energy material-specific imaging methods include virtual monoenergetic or monochromatic imaging, effective atomic number map, virtual non-contrast or unenhanced imaging, virtual non-calcium imaging, iodine map, inhaled xenon map, uric acid imaging, automatic bone removal, and lung vessels analysis. In this review, we focus on dual-energy CT imaging including related issues of radiation exposure to patients, scanning and post-processing options, and potential clinical benefits mainly to improve the understanding of clinical radiologists and thus, expand the clinical use of dual-energy CT; in addition, we briefly describe the current technical limitations of dual-energy CT and the current developments of photon-counting detector. PMID:28670151
NASA Astrophysics Data System (ADS)
2018-01-01
The large amount of data generated by modern space missions calls for a change of organization of data distribution and access procedures. Although long-term archives exist for telescopic and space-borne observations, high-level functions need to be developed on top of these repositories to make Planetary Science and Heliophysics data more accessible and to favor interoperability. Results of simulations and reference laboratory data also need to be integrated to support and interpret the observations. Interoperable software and interfaces have recently been developed in many scientific domains. The Virtual Observatory (VO) interoperable standards developed for Astronomy by the International Virtual Observatory Alliance (IVOA) can be adapted to Planetary Sciences, as demonstrated by the VESPA (Virtual European Solar and Planetary Access) team within the Europlanet-H2020-RI project. Other communities have developed their own standards: GIS (Geographic Information System) for Earth and planetary surface tools, SPASE (Space Physics Archive Search and Extract) for space plasma, PDS4 (NASA Planetary Data System, version 4) and IPDA (International Planetary Data Alliance) for planetary mission archives, etc., and an effort to make them all interoperable is starting, including automated workflows to process related data from different sources.
Virtual Planning of a Complex Three-Part Bimaxillary Osteotomy
Anghinoni, Marilena Laura
2017-01-01
In maxillofacial surgery, every patient presents special problems requiring careful evaluation. Conventional methods for studying the deformities are still reliable, but the advent of three-dimensional (3D) imaging, especially computed tomography (CT) scanning and laser scanning of casts, has created the opportunity to better understand the skeletal support and the soft tissue structures. Nowadays, virtual technologies are increasingly employed in maxillofacial surgery and have demonstrated precision and reliability. However, in complex surgical procedures, these new technologies are still controversial. Especially in the less frequent cases of three-part maxillary surgery, experience is limited, and the scientific literature cannot give clear support. This paper presents the case of a young patient affected by a complex long-face dentofacial deformity treated by bimaxillary surgery with three-part segmentation of the maxilla. The operator performed the surgical study with a completely virtual workflow. Pre- and postoperative CT scans and optical scans of plaster models were collected and compared. Every postoperative maxillary piece was superimposed on the presurgical one, and the differences were examined in a color-coded map. Only mild differences were found near the osteotomy lines, while the bony surface and the teeth demonstrated excellent coincidence. PMID:29318057
Transient Science from Diverse Surveys
NASA Astrophysics Data System (ADS)
Mahabal, A.; Crichton, D.; Djorgovski, S. G.; Donalek, C.; Drake, A.; Graham, M.; Law, E.
2016-12-01
Over the last several years we have moved closer to being able to make digital movies of the non-static sky with wide-field synoptic telescopes operating at a variety of depths, resolutions, and wavelengths. For optimal combined use of these datasets, it is crucial that they speak and understand the same language and are thus interoperable. Initial steps towards such interoperability (e.g. the footprint service) were taken during the two five-year Virtual Observatory projects, viz. the National Virtual Observatory (NVO) and later the Virtual Astronomical Observatory (VAO). Now, with far bigger datasets and in an era of resource abundance thanks to cloud-based workflows, we show how the movement of both data and resources is required, rather than just one or the other, to combine diverse datasets for applications such as real-time astronomical transient characterization. Taking the specific example of ElectroMagnetic (EM) follow-up of Gravitational Wave events and EM transients (such as CRTS, but also other optical and non-optical surveys), we discuss the requirements for rapid and flexible response. We show how the same methodology is applicable to Earth Science data, with its datasets differing in spatial and temporal resolution as well as in time-span.
Combination of surface and borehole seismic data for robust target-oriented imaging
NASA Astrophysics Data System (ADS)
Liu, Yi; van der Neut, Joost; Arntsen, Børge; Wapenaar, Kees
2016-05-01
A novel application of seismic interferometry (SI) and Marchenko imaging using both surface and borehole data is presented. A series of redatuming schemes is proposed to combine both data sets for robust deep local imaging in the presence of velocity uncertainties. The redatuming schemes create a virtual acquisition geometry where both sources and receivers lie at the horizontal borehole level, thus only a local velocity model near the borehole is needed for imaging, and erroneous velocities in the shallow area have no effect on imaging around the borehole level. By joining the advantages of SI and Marchenko imaging, a macrovelocity model is no longer required and the proposed schemes use only single-component data. Furthermore, the schemes result in a set of virtual data that have fewer spurious events and internal multiples than previous virtual source redatuming methods. Two numerical examples are shown to illustrate the workflow and to demonstrate the benefits of the method. One is a synthetic model and the other is a realistic model of a field in the North Sea. In both tests, improved local images near the boreholes are obtained using the redatumed data without accurate velocities, because the redatumed data are close to the target.
The Pathologist 2.0: An Update on Digital Pathology in Veterinary Medicine.
Bertram, Christof A; Klopfleisch, Robert
2017-09-01
Using light microscopy to describe the microarchitecture of normal and diseased tissues has changed very little since the middle of the 19th century. While the premise of histologic analysis remains intact, our relationship with the microscope is changing dramatically. Digital pathology offers new forms of visualization, and delivery of images is facilitated in unprecedented ways. This new technology can untether us entirely from our light microscopes, with many pathologists already performing their jobs using virtual microscopy. Several veterinary colleges have integrated virtual microscopy in their curriculum, and some diagnostic histopathology labs are switching to virtual microscopy as their main tool for the assessment of histologic specimens. Considering recent technical advancements of slide scanner and viewing software, digital pathology should now be considered a serious alternative to traditional light microscopy. This review therefore intends to give an overview of the current digital pathology technologies and their potential in all fields of veterinary pathology (ie, research, diagnostic service, and education). A future integration of digital pathology in the veterinary pathologist's workflow seems to be inevitable, and therefore it is proposed that trainees should be taught in digital pathology to keep up with the unavoidable digitization of the profession.
Resilient workflows for computational mechanics platforms
NASA Astrophysics Data System (ADS)
Nguyên, Toàn; Trifan, Laurentiu; Désidéri, Jean-Antoine
2010-06-01
Workflow management systems have recently been the focus of much interest and of many research and deployment efforts for scientific applications worldwide [26, 27]. Their ability to abstract applications by wrapping application codes has also underlined the usefulness of such systems for multidiscipline applications [23, 24]. When complex applications need to provide seamless interfaces hiding the technicalities of the computing infrastructures, their high-level modeling, monitoring and execution functionalities help give production teams seamless and effective facilities [25, 31, 33]. Software integration infrastructures based on programming environments such as Python, MATLAB and Scilab have also provided evidence of the usefulness of such approaches for the tight coupling of multidiscipline application codes [22, 24]. Further, high-performance computing based on multi-core, multi-cluster infrastructures opens new opportunities for more accurate, more extensive and more robust multidiscipline simulations in the decades to come [28]. This supports the goal of full flight-dynamics simulation for 3D aircraft models within the next decade, opening the way to virtual flight tests and certification of aircraft in the future [23, 24, 29].
Using the iPlant collaborative discovery environment.
Oliver, Shannon L; Lenards, Andrew J; Barthelson, Roger A; Merchant, Nirav; McKay, Sheldon J
2013-06-01
The iPlant Collaborative is an academic consortium whose mission is to develop an informatics and social infrastructure to address the "grand challenges" in plant biology. Its cyberinfrastructure supports the computational needs of the research community and facilitates solving major challenges in plant science. The Discovery Environment provides a powerful and rich graphical interface to the iPlant Collaborative cyberinfrastructure by creating an accessible virtual workbench that enables all levels of expertise, ranging from students to traditional biology researchers and computational experts, to explore, analyze, and share their data. By providing access to iPlant's robust data-management system and high-performance computing resources, the Discovery Environment also creates a unified space in which researchers can access scalable tools. Researchers can use available Applications (Apps) to execute analyses on their data, as well as customize or integrate their own tools to better meet the specific needs of their research. These Apps can also be used in workflows that automate more complicated analyses. This module describes how to use the main features of the Discovery Environment, using bioinformatics workflows for high-throughput sequence data as examples. © 2013 by John Wiley & Sons, Inc.
Virtual Environment Training: Auxiliary Machinery Room (AMR) Watchstation Trainer.
ERIC Educational Resources Information Center
Hriber, Dennis C.; And Others
1993-01-01
Describes a project implemented at Newport News Shipbuilding that used Virtual Environment Training to improve the performance of submarine crewmen. Highlights include development of the Auxiliary Machine Room (AMR) Watchstation Trainer; Digital Video Interactive (DVI); screen layout; test design and evaluation; user reactions; authoring language;…
Virtual reality interventions for rehabilitation: considerations for developing protocols.
Boechler, Patricia; Krol, Andrea; Raso, Jim; Blois, Terry
2009-01-01
This paper is a preliminary report on a work in progress that explores the existence of practice effects in early use of virtual reality environments for rehabilitation purposes, and the effects of increases in level of difficulty as defined by the rate of on-screen objects.
This virtual FIFRA SAP meeting will discuss questions on Continuing Development of Alternative High-Throughput Screens to Determine Endocrine Disruption, focusing on the Androgen Receptor, Steroidogenesis, and Thyroid Pathways.
Droplet microfluidic technology for single-cell high-throughput screening.
Brouzes, Eric; Medkova, Martina; Savenelli, Neal; Marran, Dave; Twardowski, Mariusz; Hutchison, J Brian; Rothberg, Jonathan M; Link, Darren R; Perrimon, Norbert; Samuels, Michael L
2009-08-25
We present a droplet-based microfluidic technology that enables high-throughput screening of single mammalian cells. This integrated platform allows for the encapsulation of single cells and reagents in independent aqueous microdroplets (1 pL to 10 nL volumes) dispersed in an immiscible carrier oil and enables the digital manipulation of these reactors at very high throughput. Here, we validate a full droplet screening workflow by conducting a droplet-based cytotoxicity screen. To perform this screen, we first developed a droplet viability assay that permits the quantitative scoring of cell viability and growth within intact droplets. Next, we demonstrated the high viability of encapsulated human monocytic U937 cells over a period of 4 days. Finally, we developed an optically coded droplet library enabling the identification of droplet composition during the assay read-out. Using the integrated droplet technology, we screened a drug library for its cytotoxic effect against U937 cells. Taken together, our droplet microfluidic platform is modular, robust, uses no moving parts, and has a wide range of potential applications including high-throughput single-cell analyses, combinatorial screening, and facilitating small-sample analyses.
Lin, Shih-Hung; Huang, Kao-Jean; Weng, Ching-Feng; Shiuan, David
2015-01-01
Cholesterol plays an important role in living cells. However, a very high level of cholesterol may lead to atherosclerosis. HMG-CoA (3-hydroxy-3-methylglutaryl coenzyme A) reductase is the key enzyme in the cholesterol biosynthesis pathway, and statin-like drugs are inhibitors of human HMG-CoA reductase (hHMGR). The present study aimed to virtually screen for potential hHMGR inhibitors from natural products to discover hypolipidemic drug candidates with fewer side effects and lower toxicity. We used the 3D structure 1HWK from the PDB (Protein Data Bank) of hHMGR as the target to screen for strongly bound compounds from the traditional Chinese medicine database. Many interesting molecules, including polyphenolic compounds, polysubstituted heterocyclics, and linear lipophilic alcohols, were identified, and their ADMET (absorption, distribution, metabolism, excretion, toxicity) properties were predicted. Finally, four compounds were obtained for the in vitro validation experiments. The results indicated that curcumin and salvianolic acid C can effectively inhibit hHMGR, with IC50 (half maximal inhibitory concentration) values of 4.3 µM and 8 µM, respectively. The present study also demonstrated the feasibility of discovering new drug candidates through structure-based virtual screening.
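The IC50 values reported in the abstract can be put on the logarithmic scale commonly used to compare inhibitor potencies (pIC50 = -log10 of IC50 in mol/L). A minimal sketch of that conversion, using the curcumin and salvianolic acid C values above:

```python
import math

def pic50(ic50_micromolar):
    """Convert an IC50 given in µM to pIC50 = -log10(IC50 in mol/L)."""
    return -math.log10(ic50_micromolar * 1e-6)

# IC50 values reported in the abstract
curcumin = round(pic50(4.3), 2)            # 4.3 µM
salvianolic_acid_c = round(pic50(8.0), 2)  # 8 µM
```

On this scale a larger pIC50 means a more potent inhibitor, so curcumin (pIC50 ≈ 5.4) is slightly more potent than salvianolic acid C (pIC50 ≈ 5.1), consistent with its lower IC50.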
Kellenberger, Esther; Foata, Nicolas; Rognan, Didier
2008-05-01
Structure-based virtual screening is a promising tool to identify putative targets for a specific ligand. Instead of docking multiple ligands into a single protein cavity, a single ligand is docked into a collection of binding sites. In such inverse screening, hits are in fact targets, prioritized within the pool of best-ranked proteins. The target rate depends on specificity and promiscuity in protein-ligand interactions and, to a considerable extent, on the effectiveness of the scoring function, which is still the Achilles' heel of molecular docking. In the present retrospective study, virtual screening of the sc-PDB target library by GOLD docking was carried out for four compounds (biotin, 4-hydroxy-tamoxifen, 6-hydroxy-1,6-dihydropurine ribonucleoside, and methotrexate) with known sc-PDB targets, and several ranking protocols based on the GOLD fitness score and topological molecular interaction fingerprint (IFP) comparison were evaluated. For the four investigated ligands, the fusion of the GOLD fitness and two IFP scores allowed the recovery of most targets, including rare proteins that are not readily amenable to statistical analysis, while filtering out most false-positive entries. The current survey suggests that selecting a small number of targets (<20) for experimental evaluation is achievable with a pure structure-based approach.
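The general idea of fusing a docking score with IFP similarities can be illustrated with a simple rank-sum consensus in plain Python. This is only a sketch of that idea: the target names and score values below are invented, and the actual protocols evaluated in the study may weight or combine the three metrics differently.

```python
def ranks(scores):
    """Rank items by score; rank 1 = best, higher score = better."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {name: i + 1 for i, name in enumerate(ordered)}

def rank_sum_fusion(*metrics):
    """Fuse several {target: score} dicts by summing per-metric ranks."""
    rank_maps = [ranks(m) for m in metrics]
    return {t: sum(r[t] for r in rank_maps) for t in metrics[0]}

# Invented example: GOLD fitness plus two IFP Tanimoto similarities
gold = {"streptavidin": 80.1, "albumin": 78.3, "kinase_X": 75.0}
ifp_1 = {"streptavidin": 0.91, "albumin": 0.40, "kinase_X": 0.55}
ifp_2 = {"streptavidin": 0.88, "albumin": 0.35, "kinase_X": 0.60}

fused = rank_sum_fusion(gold, ifp_1, ifp_2)
best = min(fused, key=fused.get)  # lowest rank sum = consensus top target
```

A target that ranks well on all three metrics ends up with the lowest rank sum even if no single metric would have placed it first, which is why fusion tends to suppress false positives that score well on one metric only.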
Manoharan, Prabu; Ghoshal, Nanda
2018-05-01
Traditional structure-based virtual screening methods to identify drug-like small molecules for BACE1 have so far been unsuccessful. The location of BACE1, poor blood-brain barrier permeability, and P-glycoprotein (Pgp) susceptibility of the inhibitors make the task even more difficult. Fragment-based drug design is well suited for efficient optimization of initial hit molecules for a target like BACE1. We have developed a fragment-based virtual screening approach to identify and optimize fragment molecules as a starting point. This method combines the shape, electrostatic, and pharmacophoric features of known fragment molecules bound in protein co-crystal structures, and aims to identify both chemically and energetically feasible small fragment ligands that bind to the BACE1 active site. The two top-ranked fragment hits were subjected to a 53 ns MD simulation. Principal component analysis and free energy landscape analysis reveal that the new ligands show the characteristic features of established BACE1 inhibitors. The method employed in this study may serve for the development of potential lead molecules for BACE1-directed Alzheimer's disease therapeutics.
Kaserer, Teresa; Beck, Katharina R; Akram, Muhammad; Odermatt, Alex; Schuster, Daniela
2015-12-19
Computational methods are well-established tools in the drug discovery process and can be employed for a variety of tasks. Common applications include lead identification and scaffold hopping, as well as lead optimization by structure-activity relationship analysis and selectivity profiling. In addition, compound-target interactions associated with potentially harmful effects can be identified and investigated. This review focuses on pharmacophore-based virtual screening campaigns specifically addressing the target class of hydroxysteroid dehydrogenases. Many members of this enzyme family are associated with specific pathological conditions, and pharmacological modulation of their activity may represent promising therapeutic strategies. On the other hand, unintended interference with their biological functions, e.g., upon inhibition by xenobiotics, can disrupt steroid hormone-mediated effects, thereby contributing to the development and progression of major diseases. Besides a general introduction to pharmacophore modeling and pharmacophore-based virtual screening, exemplary case studies from the field of short-chain dehydrogenase/reductase (SDR) research are presented. These success stories highlight the suitability of pharmacophore modeling for the various application fields and suggest its application also in future studies.
Zhang, Aiqian; Mu, Yunsong; Wu, Fengchang
2017-04-01
Chiral organophosphates (OPs) have been used widely around the world, yet very little is known about their binding mechanisms with biological macromolecules. An in-depth understanding of the stereoselectivity of human AChE and the discovery of the bioactive enantiomers of OPs can decrease the health risks of these chiral chemicals. In the present study, a flexible molecular docking approach was used to investigate the different binding modes of twelve phosphorus enantiomers. A pharmacophore model was then developed on the basis of the bioactive conformations of these compounds. After virtual screening, twenty-four potential bioactive compounds were found, of which three (ethyl p-nitrophenyl phenylphosphonate (EPN), 1-naphthaleneacetic anhydride, and N,4-dimethyl-N-phenyl-benzenesulfonamide) were tested in different in vitro assays. The S-isomer of EPN was found to exhibit greater inhibitory activity towards human AChE than the corresponding R-isomer. These findings affirm that stereochemistry plays a crucial role in virtual screening and provide new insight into the design of organophosphorus pesticides that are safer for human health. Copyright © 2017 Elsevier Inc. All rights reserved.
2011-01-01
Background: Data fusion methods are widely used in virtual screening, and make the implicit assumption that the more often a molecule is retrieved in multiple similarity searches, the more likely it is to be active. This paper tests the correctness of this assumption. Results: Sets of 25 searches using either the same reference structure and 25 different similarity measures (similarity fusion), or 25 different reference structures and the same similarity measure (group fusion), show that large numbers of unique molecules are retrieved by just a single search, but that the numbers of unique molecules decrease very rapidly as more searches are considered. This rapid decrease is accompanied by a rapid increase in the fraction of retrieved molecules that are active. There is an approximately log-log relationship between the number of different molecules retrieved and the number of searches carried out, and a rationale for this power-law behaviour is provided. Conclusions: Using multiple searches provides a simple way of increasing the precision of a similarity search, and thus provides a justification for the use of data fusion methods in virtual screening. PMID:21824430
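The qualitative behaviour described (many molecules retrieved by only one search, with precision rising sharply among molecules retrieved repeatedly) can be reproduced in a toy simulation. All pool sizes and retrieval counts below are arbitrary choices made to mimic group fusion over 25 searches; this is not the paper's dataset or protocol.

```python
import random
from collections import Counter

random.seed(0)
n_searches = 25
actives = set(range(100))        # pseudo-actives: likely to recur across searches
decoys = set(range(100, 1000))   # pseudo-inactives: rarely recur

retrieval_counts = Counter()
for _ in range(n_searches):
    hits = set(random.sample(sorted(actives), 30))   # each search recovers many actives
    hits |= set(random.sample(sorted(decoys), 20))   # plus a few scattered decoys
    retrieval_counts.update(hits)

# how many distinct molecules were retrieved by exactly m searches?
by_m = Counter(retrieval_counts.values())

# precision among molecules retrieved in at least 10 of the 25 searches
frequent = [mol for mol, c in retrieval_counts.items() if c >= 10]
precision = sum(mol in actives for mol in frequent) / len(frequent) if frequent else 0.0
```

In this toy setting the number of molecules retrieved exactly once dominates and drops quickly with m, while the molecules retrieved many times are almost entirely actives, mirroring the trend the paper reports.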
Docking and Virtual Screening Strategies for GPCR Drug Discovery.
Beuming, Thijs; Lenselink, Bart; Pala, Daniele; McRobb, Fiona; Repasky, Matt; Sherman, Woody
2015-01-01
Progress in structure determination of G protein-coupled receptors (GPCRs) has made it possible to apply structure-based drug design (SBDD) methods to this pharmaceutically important target class. The quality of GPCR structures available for SBDD projects falls on a spectrum, ranging from high-resolution crystal structures (<2 Å), where all water molecules in the binding pocket are resolved, to lower-resolution structures (>3 Å), where some protein residues are not resolved, and finally to homology models built using distantly related templates. Each GPCR project involves a distinct set of opportunities and challenges, and requires different approaches to model the interaction between the receptor and the ligands. In this review we discuss docking and virtual screening against GPCRs, and highlight several refinement and post-processing steps that can be used to improve the accuracy of these calculations. Several examples are discussed that illustrate specific steps that can be taken to improve docking and virtual screening accuracy. While GPCRs are a unique target class, many of the methods and strategies outlined in this review are general and therefore applicable to other protein families.
Kong, Xiangqian; Qin, Jie; Li, Zeng; Vultur, Adina; Tong, Linjiang; Feng, Enguang; Rajan, Geena; Liu, Shien; Lu, Junyan; Liang, Zhongjie; Zheng, Mingyue; Zhu, Weiliang; Jiang, Hualiang; Herlyn, Meenhard; Liu, Hong; Marmorstein, Ronen; Luo, Cheng
2012-01-01
Oncogenic mutations in critical nodes of cellular signaling pathways have been associated with tumorigenesis and progression. The B-Raf protein kinase, a key hub in the canonical MAPK signaling cascade, is mutated in a broad range of human cancers and especially in malignant melanoma. The most prevalent B-RafV600E mutant exhibits elevated kinase activity and results in constitutive activation of the MAPK pathway, thus making it a promising drug target for cancer therapy. Herein, we described the development of novel B-RafV600E selective inhibitors via multi-step virtual screening and hierarchical hit optimization. Nine hit compounds with low micromolar IC50 values were identified as B-RafV600E inhibitors through virtual screening. Subsequent scaffold-based analogue searching and medicinal chemistry efforts significantly improved both the inhibitor potency and oncogene selectivity. In particular, compounds 22f and 22q possess nanomolar IC50 values with selectivity for B-RafV600E in vitro and exclusive cytotoxicity against B-RafV600E harboring cancer cells. PMID:22875039
Schuster, Daniela; Nashev, Lyubomir G; Kirchmair, Johannes; Laggner, Christian; Wolber, Gerhard; Langer, Thierry; Odermatt, Alex
2008-07-24
17Beta-hydroxysteroid dehydrogenase type 1 (17beta-HSD1) plays a pivotal role in the local synthesis of the most potent estrogen, estradiol. Its expression is a prognostic marker for the outcome of patients with breast cancer, and inhibition of 17beta-HSD1 is currently under consideration for breast cancer prevention and treatment. We aimed to identify nonsteroidal 17beta-HSD1 inhibitor scaffolds by virtual screening with pharmacophore models built from crystal structures containing steroidal compounds. The most promising model was validated by comparing predicted and experimentally determined inhibitory activities of several flavonoids. Subsequently, a virtual library of nonsteroidal compounds was screened against the 3D pharmacophore. Analysis of 14 selected compounds yielded four that inhibited the activity of human 17beta-HSD1 (IC50 below 50 µM). Specificity assessment of the identified 17beta-HSD1 inhibitors emphasized the importance of including related short-chain dehydrogenase/reductase (SDR) members to analyze off-target effects. Compound 29 displayed at least 10-fold selectivity over the related SDR enzymes tested.
Kellogg, Glen E; Fornabaio, Micaela; Chen, Deliang L; Abraham, Donald J; Spyrakis, Francesca; Cozzini, Pietro; Mozzarelli, Andrea
2006-05-01
Computational tools utilizing a unique empirical modeling system based on the hydrophobic effect and the measurement of logP(o/w) (the partition coefficient for solvent transfer between 1-octanol and water) are described. The associated force field, Hydropathic INTeractions (HINT), contains much rich information about non-covalent interactions in the biological environment because of its basis in an experiment that measures interactions in solution. HINT is shown to be the core of an evolving virtual screening system that is capable of taking into account a number of factors often ignored such as entropy, effects of solvent molecules at the active site, and the ionization states of acidic and basic residues and ligand functional groups. The outline of a comprehensive modeling system for virtual screening that incorporates these features is described. In addition, a detailed description of the Computational Titration algorithm is provided. As an example, three complexes of dihydrofolate reductase (DHFR) are analyzed with our system and these results are compared with the experimental free energies of binding.
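HINT's empirical grounding is the octanol/water partition coefficient, which relates directly to a solvent-transfer free energy via ΔG = -2.303·RT·logP. The sketch below illustrates only that standard thermodynamic relationship, not the HINT force field itself; the example logP value is arbitrary.

```python
R = 8.314  # gas constant, J/(mol*K)

def transfer_free_energy_kj(logp, temperature_k=298.15):
    """Water -> octanol transfer free energy implied by logP, in kJ/mol.

    A more positive logP (a more hydrophobic solute) gives a more
    negative, i.e. more favourable, transfer free energy.
    """
    return -2.303 * R * temperature_k * logp / 1000.0

# e.g. a moderately hydrophobic ligand with logP = 3
dg = transfer_free_energy_kj(3.0)  # roughly -17 kJ/mol at 25 °C
```

Each logP unit thus corresponds to about 5.7 kJ/mol of transfer free energy at room temperature, which is why logP-based terms carry real energetic information about non-covalent interactions in solution.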
Virtual Screening Approach of Bacterial Peptide Deformylase Inhibitors Results in New Antibiotics.
Merzoug, Amina; Chikhi, Abdelouahab; Bensegueni, Abderrahmane; Boucherit, Hanane; Okay, Sezer
2018-03-01
The increasing resistance of bacteria to antibacterial therapy poses an enormous health problem and renders the development of new antibacterial agents with novel mechanisms of action an urgent need. Peptide deformylase, a metalloenzyme which catalytically removes the N-formyl group from the N-terminal methionine of newly synthesized polypeptides, is an important target in antibacterial drug discovery. In this study, we report the structure-based virtual screening of the ZINC database to discover potential hits as bacterial peptide deformylase inhibitors with higher affinity than GSK1322322, a previously known inhibitor. After virtual screening, fifteen of the top predicted hits were purchased and evaluated in vitro for their antibacterial activities against one Gram-positive (Staphylococcus aureus) and three Gram-negative (Escherichia coli, Pseudomonas aeruginosa and Klebsiella pneumoniae) bacteria at different concentrations by the disc diffusion method. Of these, three compounds, ZINC00039650, ZINC03872971 and ZINC00126407, exhibited significant zones of inhibition. The results obtained were confirmed using the dilution method. Thus, these proposed compounds may aid the development of more efficient antibacterial agents. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Teaching Basic Field Skills Using Screen-Based Virtual Reality Landscapes
NASA Astrophysics Data System (ADS)
Houghton, J.; Robinson, A.; Gordon, C.; Lloyd, G. E. E.; Morgan, D. J.
2016-12-01
We are using screen-based virtual reality landscapes, created using the Unity 3D game engine, to augment the training geoscience students receive in preparing for fieldwork. Students explore these landscapes as they would real ones, interacting with virtual outcrops to collect data, determine location, and map the geology. Skills for conducting field geological surveys - collecting, plotting and interpreting data; time management and decision making - are introduced interactively and intuitively. As with real landscapes, the virtual landscapes are open-ended terrains with embedded data. This means the game does not structure student interaction with the information; it is through experience that students learn the best methods to work successfully and efficiently. These virtual landscapes are not replacements for geological fieldwork but rather virtual spaces between classroom and field in which to train and reinforce essential skills. Importantly, these virtual landscapes offer accessible parallel provision for students unable to visit, or fully partake in visiting, the field. The project has received positive feedback from both staff and students. Results show students find it easier to focus on learning these basic field skills in a classroom rather than a field setting, and that they make the same mistakes as when learning in the field, validating the realistic nature of the virtual experience and providing an opportunity to learn from these mistakes. The approach also saves time, and therefore resources, in the field, as basic skills are already embedded. 70% of students report increased confidence with how to map boundaries and 80% have found the virtual training a useful experience. We are also developing landscapes based on real places with 3D photogrammetric outcrops, and a virtual urban landscape in which Engineering Geology students can conduct a site investigation.
This project is a collaboration between the University of Leeds and Leeds College of Art, UK, and all our virtual landscapes are freely available online at www.see.leeds.ac.uk/virtual-landscapes/.
Developing a Virtual Museum for the Ancient Wine Trade in Eastern Mediterranean
NASA Astrophysics Data System (ADS)
Kazanis, S.; Kontogianni, G.; Chliverou, R.; Georgopoulos, A.
2017-08-01
Digital technologies for representing cultural heritage assets of any size are already maturing. Technological progress has greatly enhanced the art of virtual representation and, as a consequence, it is all the more appealing to the general public and especially to younger generations. The game industry has played a significant role towards this end and has led to the development of edutainment applications. The digital workflow implemented for developing such an application is presented in this paper. A virtual museum has been designed and developed with the intention to convey the history of trade in the Eastern Mediterranean area, focusing on the Aegean Sea and five productive port cities, during a period of more than 500 years. Image-based modeling methodology was preferred to ensure accuracy and reliability. The setup in the museum environment, the difficulties encountered and the solutions adopted are discussed, while the processing of the images and the production and finishing of the 3D models are described in detail. The virtual museum and edutainment application, MEDWINET, has been designed and developed with the intention to convey the essential information about wine production and trade routes in the Eastern Mediterranean basin. The user is able to examine the 3D models of the amphorae while learning about their production and use for trade through the centuries. The application has been evaluated and the results are also discussed.
2011-01-01
Background: Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines that come pre-packaged with pre-configured software. Results: We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports the use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. Conclusion: The CloVR VM and associated architecture lower the barrier to entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing. PMID:21878105
The Virtual Watershed Observatory: Cyberinfrastructure for Model-Data Integration and Access
NASA Astrophysics Data System (ADS)
Duffy, C.; Leonard, L. N.; Giles, L.; Bhatt, G.; Yu, X.
2011-12-01
The Virtual Watershed Observatory (VWO) is a concept where scientists, water managers, educators and the general public can create a virtual observatory from integrated hydrologic model results, national databases and historical or real-time observations via web services. In this paper, we propose a prototype for automated and virtualized web services software using national data products for climate reanalysis, soils, geology, terrain and land cover. The VWO has the broad purpose of making accessible water resource simulations, real-time data assimilation, calibration and archival at the scale of HUC 12 (Hydrologic Unit Code) watersheds anywhere in the continental US. Our prototype for model-data integration focuses on creating tools for fast data storage from selected national databases, as well as the computational resources necessary for a dynamic, distributed watershed simulation. The paper describes cyberinfrastructure tools and a workflow that attempt to resolve the problem of model-data accessibility and scalability, such that individuals, research teams, managers and educators can create a VWO in a desired context. Examples are given for the NSF-funded Shale Hills Critical Zone Observatory and the European Critical Zone Observatories within the SoilTrEC project. In the future, the implementation of VWO services will benefit from the development of a cloud cyberinfrastructure as the prototype evolves to data- and model-intensive computation for continental-scale water resource predictions.
NASA Astrophysics Data System (ADS)
Weihusen, Andreas; Ritter, Felix; Kröger, Tim; Preusser, Tobias; Zidowitz, Stephan; Peitgen, Heinz-Otto
2007-03-01
Image guided radiofrequency (RF) ablation has taken a significant part in the clinical routine as a minimally invasive method for the treatment of focal liver malignancies. Medical imaging is used in all parts of the clinical workflow of an RF ablation, incorporating treatment planning, interventional targeting and result assessment. This paper describes a software application, which has been designed to support the RF ablation workflow under consideration of the requirements of clinical routine, such as easy user interaction and a high degree of robust and fast automatic procedures, in order to keep the physician from spending too much time at the computer. The application therefore provides a collection of specialized image processing and visualization methods for treatment planning and result assessment. The algorithms are adapted to CT as well as to MR imaging. The planning support contains semi-automatic methods for the segmentation of liver tumors and the surrounding vascular system as well as an interactive virtual positioning of RF applicators and a concluding numerical estimation of the achievable heat distribution. The assessment of the ablation result is supported by the segmentation of the coagulative necrosis and an interactive registration of pre- and post-interventional image data for the comparison of tumor and necrosis segmentation masks. An automatic quantification of surface distances is performed to verify the embedding of the tumor area into the thermal lesion area. The visualization methods support representations in the commonly used orthogonal 2D view as well as in 3D scenes.
Loroch, Stefan; Schommartz, Tim; Brune, Wolfram; Zahedi, René Peiman; Sickmann, Albert
2015-05-01
Quantitative proteomics and phosphoproteomics have become key disciplines in understanding cellular processes. Fundamental research can be done using cell culture, providing researchers with virtually infinite sample amounts. In contrast, clinical, pre-clinical and biomedical research is often restricted to minute sample amounts and requires an efficient analysis with only micrograms of protein. To address this issue, we generated a highly sensitive workflow for combined LC-MS-based quantitative proteomics and phosphoproteomics by refining an ERLIC-based 2D phosphoproteomics workflow into an ERLIC-based 3D workflow covering the global proteome as well. The resulting 3D strategy was successfully used for an in-depth quantitative analysis of both the proteome and the phosphoproteome of murine cytomegalovirus-infected mouse fibroblasts, a model system for host cell manipulation by a virus. In a 2-plex SILAC experiment with 150 μg of a tryptic digest per condition, the 3D strategy enabled the quantification of ~75% more proteins and even ~134% more peptides compared to the 2D strategy. Additionally, we could quantify ~50% more phosphoproteins by non-phosphorylated peptides, concurrently yielding insights into changes on the levels of protein expression and phosphorylation. Besides its sensitivity, our novel three-dimensional ERLIC strategy has the potential for semi-automated sample processing, making it a suitable approach for clinical, pre-clinical and biomedical research. Copyright © 2015. Published by Elsevier B.V.
SimITK: rapid ITK prototyping using the Simulink visual programming environment
NASA Astrophysics Data System (ADS)
Dickinson, A. W. L.; Mousavi, P.; Gobbi, D. G.; Abolmaesumi, P.
2011-03-01
The Insight Segmentation and Registration Toolkit (ITK) is a long-established software package used for image analysis, visualization, and image-guided surgery applications. This package is a collection of C++ libraries that can pose usability problems for users without C++ programming experience. To bridge the gap between the programming complexities and the required learning curve of ITK, we present a higher-level visual programming environment that represents ITK methods and classes by wrapping them into "blocks" within MATLAB's visual programming environment, Simulink. These blocks can be connected to form workflows: visual schematics that closely represent the structure of a C++ program. Due to the heavily templated nature of ITK's C++, direct interaction between Simulink and ITK requires an intermediary to convert their respective datatypes and allow intercommunication. We have developed a "Virtual Block" that serves as an intermediate wrapper around the ITK class and is responsible for resolving the templated datatypes used by ITK to native types used by Simulink. Presently, the wrapping procedure for SimITK is semi-automatic in that it requires XML descriptions of the ITK classes as a starting point, as this data is used to create all other necessary integration files. The generation of all source code and object code from the XML is done automatically by a CMake build script that yields Simulink blocks as the final result. An example 3D segmentation workflow using cranial CT data as well as a 3D MR-to-CT registration workflow are presented as a proof of concept.
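The XML-driven wrapping step can be illustrated with a toy example. The XML schema and the type-mapping table below are hypothetical stand-ins for SimITK's actual class descriptions and CMake-generated integration code; the sketch only shows the core idea of resolving templated ITK parameter types to native types a Simulink block can expose as ports:

```python
import xml.etree.ElementTree as ET

# Hypothetical, minimal stand-in for the XML class descriptions that
# drive SimITK's semi-automatic wrapping; the real schema and the
# CMake-driven code generation are considerably more involved.
CLASS_XML = """
<itk_class name="ThresholdImageFilter">
  <template_arg name="TImage" type="itk::Image&lt;float,3&gt;"/>
  <parameter name="Lower" type="double"/>
  <parameter name="Upper" type="double"/>
</itk_class>
"""

# Map templated ITK types to native types a Simulink block can carry
# (an illustrative table, not SimITK's actual mapping).
NATIVE = {"itk::Image<float,3>": "single [3-D array]", "double": "double"}

def block_ports(xml_text):
    """Parse one class description and return the block name plus its
    parameter ports resolved to native Simulink-side types."""
    root = ET.fromstring(xml_text)
    ports = {p.get("name"): NATIVE[p.get("type")]
             for p in root.iter("parameter")}
    return root.get("name"), ports

name, ports = block_ports(CLASS_XML)
print(name, ports)  # ThresholdImageFilter {'Lower': 'double', 'Upper': 'double'}
```

In the real toolchain this resolution step is what the "Virtual Block" performs at the boundary between the templated C++ side and Simulink's fixed datatypes.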
Study on user interface of pathology picture archiving and communication system.
Kim, Dasueran; Kang, Peter; Yun, Jungmin; Park, Sung-Hye; Seo, Jeong-Wook; Park, Peom
2014-01-01
It is necessary to improve the pathology workflow. A workflow task analysis was performed using a pathology picture archiving and communication system (pathology PACS) in order to propose a user interface for the Pathology PACS that takes user experience into account. An interface analysis of the Pathology PACS at Seoul National University Hospital and a task analysis of the pathology workflow were performed by observing recorded video. Based on the results obtained, a user interface for the Pathology PACS was proposed. Hierarchical task analysis of the Pathology PACS identified 17 tasks: 1) pre-operation, 2) text, 3) images, 4) medical record viewer, 5) screen transition, 6) pathology identification number input, 7) admission date input, 8) diagnosis doctor, 9) diagnosis code, 10) diagnosis, 11) pathology identification number check box, 12) presence or absence of images, 13) search, 14) clear, 15) Excel save, 16) search results, and 17) re-search. Frequently used menu items were also identified and schematized. A user interface for the Pathology PACS that considers user experience could thus be proposed as a preliminary step, and this study may contribute to the development of medical information systems based on user experience and usability.
Integration of virtualized worker nodes in standard batch systems
NASA Astrophysics Data System (ADS)
Büge, Volker; Hessling, Hermann; Kemp, Yves; Kunze, Marcel; Oberst, Oliver; Quast, Günter; Scheurer, Armin; Synge, Owen
2010-04-01
Current experiments in HEP use only a limited number of operating system flavours. Their software might be validated on only a single OS platform. Resource providers might prefer other operating systems for the installation of the batch infrastructure. This is especially the case if a cluster is shared with other communities, or with communities that have stricter security requirements. One solution would be to statically divide the cluster into separate sub-clusters. In such a scenario, no opportunistic distribution of the load can be achieved, resulting in poor overall utilization efficiency. Another approach is to make the batch system aware of virtualization and to provide each community with its favoured operating system in a virtual machine. Here, the scheduler has full flexibility, resulting in better overall efficiency of the resources. In our contribution, we present a lightweight concept for the integration of virtual worker nodes into standard batch systems. The virtual machines are started on the worker nodes just before jobs are executed there. No meta-scheduling is introduced. We demonstrate two prototype implementations, one based on the Sun Grid Engine (SGE), the other using Maui/Torque as a batch system. Both solutions support local as well as Grid job submission. The hypervisors currently used are Xen and KVM; a port to another system is easily envisaged. To better handle different virtual machines on the physical host, the management solution VmImageManager is being developed. We will present first experiences from running the two prototype implementations. In a last part, we show the potential future use of this lightweight concept when integrated into high-level (i.e. Grid) workflows.
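The pattern described above (boot a per-job virtual machine on the worker node, run the job inside it, tear the VM down afterwards) can be sketched as a prolog/epilog wrapper. The command names below are placeholders, not the project's actual tooling; the real prototypes hook into SGE or Maui/Torque and drive Xen or KVM management tools directly. A pluggable `run` callable stands in for command execution so the logic can be exercised without a hypervisor:

```python
def run_job_in_vm(job_cmd, run, image="sl5-worker.img", vm="job-vm"):
    """Prolog/epilog sketch: start a per-job VM, execute the job in the
    guest OS, and always destroy the VM afterwards. `run` executes a
    command list and returns an exit code; vm-start/vm-exec/vm-destroy
    are hypothetical placeholders for hypervisor management commands.
    """
    run(["vm-start", "--image", image, "--name", vm])   # prolog: boot guest
    try:
        return run(["vm-exec", vm] + job_cmd)           # job runs in guest OS
    finally:
        run(["vm-destroy", vm])                         # epilog: free the node

# Dry run with a recording executor instead of real VM tooling:
log = []
exit_code = run_job_in_vm(["analysis.sh"], lambda cmd: log.append(cmd) or 0)
```

Because no meta-scheduler is introduced, this is the whole extent of the batch-system change: the scheduler still sees ordinary jobs, and the VM lifecycle is confined to the wrapper.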
Interactive Immersive Virtualmuseum: Digital Documentation for Virtual Interaction
NASA Astrophysics Data System (ADS)
Clini, P.; Ruggeri, L.; Angeloni, R.; Sasso, M.
2018-05-01
Thanks to their playful and educational approach, Virtual Museum systems are very effective for the communication of Cultural Heritage. Among the latest technologies, Immersive Virtual Reality is probably the most appealing and potentially effective for this purpose; nevertheless, due to poor user-system interaction, caused by the incomplete maturity of technology specific to museum applications, immersive installations are still quite uncommon in museums. This paper explores the possibilities offered by this technology and presents a workflow that, starting from digital documentation, makes it possible to interact with archaeological finds or any other cultural heritage inside different kinds of immersive virtual reality spaces. Two case studies are presented: the National Archaeological Museum of Marche in Ancona and the 3D reconstruction of the Roman Forum of Fanum Fortunae. The two approaches differ not only conceptually but also in content: while the Archaeological Museum is represented in the application simply using spherical panoramas to give the perception of the third dimension, the Roman Forum is a 3D model that allows visitors to move in the virtual space as in the real one. In both cases, the acquisition phase of the artefacts is central; artefacts are digitized with the photogrammetric Structure from Motion technique and then integrated inside the immersive virtual space using a PC with an HTC Vive system that allows the user to interact with the 3D models, turning the manipulation of objects into a fun and exciting experience. The challenge, taking advantage of the latest opportunities made available by photogrammetry and ICT, is to enrich visitors' experience in the real museum by making possible the interaction with perishable, damaged or lost objects, and public access to inaccessible or no longer existing places, promoting in this way the preservation of fragile sites.
Damoiseaux, Robert
2014-05-01
The Molecular Screening Shared Resource (MSSR) offers a comprehensive range of leading-edge high throughput screening (HTS) services including drug discovery, chemical and functional genomics, and novel methods for nano and environmental toxicology. The MSSR is an open access environment with investigators from UCLA as well as from the entire globe. Industrial clients are equally welcome as are non-profit entities. The MSSR is a fee-for-service entity and does not retain intellectual property. In conjunction with the Center for Environmental Implications of Nanotechnology, the MSSR is unique in its dedicated and ongoing efforts towards high throughput toxicity testing of nanomaterials. In addition, the MSSR engages in technology development eliminating bottlenecks from the HTS workflow and enabling novel assays and readouts currently not available.
Special Section: New Ways to Detect Colon Cancer 3-D virtual screening now being used
... two together," recalls Arie Kaufman, chairman of the computer science department at New York's Stony Brook University. Dr. Kaufman is one of the world's leading researchers in the high-tech medical fields of biomedical visualization, computer graphics, virtual reality, and multimedia. The year was ...
USDA-ARS's Scientific Manuscript database
Molecular field topology analysis, scaffold hopping, and molecular docking were used as complementary computational tools for the design of repellents for Aedes aegypti, the insect vector for yellow fever, West Nile fever, and dengue fever. A large number of analogues were evaluated by virtual scree...
Lagarde, Nathalie; Zagury, Jean-François; Montes, Matthieu
2015-07-27
Virtual screening methods are commonly used nowadays in drug discovery processes. However, to ensure their reliability, they have to be carefully evaluated. The evaluation of these methods is often realized in a retrospective way, notably by studying the enrichment of benchmarking data sets. To this purpose, numerous benchmarking data sets were developed over the years, and the resulting improvements led to the availability of high quality benchmarking data sets. However, some points still have to be considered in the selection of the active compounds, decoys, and protein structures to obtain optimal benchmarking data sets.
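Retrospective evaluation by "studying the enrichment" of a benchmarking set usually reduces to a simple ratio: how over-represented the actives are at the top of the ranked screening list relative to their overall proportion. A minimal sketch, using a hypothetical ranked list rather than data from any benchmark mentioned here:

```python
def enrichment_factor(ranked_is_active, fraction):
    """Enrichment factor at a given fraction of a ranked screening list.

    EF = (actives in top fraction / compounds in top fraction)
         / (total actives / total compounds)
    A value of 1 means no better than random ranking.
    """
    n = len(ranked_is_active)
    n_top = max(1, int(n * fraction))
    top_actives = sum(ranked_is_active[:n_top])
    total_actives = sum(ranked_is_active)
    if total_actives == 0:
        return 0.0
    return (top_actives / n_top) / (total_actives / n)

# Hypothetical ranked list: 1 = active, 0 = decoy, best-scored first.
ranking = [1, 1, 0, 1, 0, 0, 0, 0, 1, 0]
print(enrichment_factor(ranking, 0.2))  # top 2 of 10, both active -> 2.5
```

Benchmark-design issues like analogue bias or artificial enrichment (as analysed in the MUV work) distort exactly this kind of metric, which is why the composition of actives and decoys matters as much as the metric itself.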
NASA Astrophysics Data System (ADS)
Alawa, Karam A.; Sayed, Mohamed; Arboleda, Alejandro; Durkee, Heather A.; Aguilar, Mariela C.; Lee, Richard K.
2017-02-01
Glaucoma is the leading cause of irreversible blindness worldwide. Due to its wide prevalence, effective screening tools are necessary. The purpose of this project is to design and evaluate a system that enables portable, cost-effective, smartphone-based visual field screening based on frequency doubling technology. The system comprises an Android smartphone to display frequency doubling stimuli and handle processing, a Bluetooth remote for user input, and a virtual reality headset to simulate the exam. The LG Nexus 5 smartphone and BoboVR Z3 virtual reality headset were used for their screen size and lens configuration, respectively. The system is capable of running the C-20, N-30, 24-2, and 30-2 testing patterns. Unlike the existing system, the smartphone FDT tests both eyes concurrently by showing the same background to both eyes but displaying the stimulus to only one eye at a time. Both the Humphrey Zeiss FDT and the smartphone FDT were tested on five subjects without a history of ocular disease using the C-20 testing pattern. The smartphone FDT successfully produced frequency doubling stimuli at the correct spatial and temporal frequency. Subjects could not tell which eye was being tested. All five subjects preferred the smartphone FDT to the Humphrey Zeiss FDT due to comfort and ease of use. The smartphone FDT is a low-cost, portable visual field screening device that can be used as a screening tool for glaucoma.
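The frequency-doubling illusion underlying FDT perimetry arises from a low-spatial-frequency sinusoidal grating undergoing counterphase flicker at a high temporal rate. As a rough sketch of such a stimulus, with illustrative parameter values rather than those of the Humphrey Zeiss device or the smartphone implementation described above:

```python
import math

def fdt_luminance(x_deg, t_s, mean=0.5, contrast=0.9,
                  spatial_freq=0.25, temporal_freq=25.0):
    """Luminance of a counterphase-flickering sinusoidal grating,
    normalized to a 0..1 display range.

    A low spatial frequency (here ~0.25 cyc/deg) flickered at a high
    temporal rate (here ~25 Hz) produces the frequency-doubling
    illusion; the numbers are illustrative assumptions only.
    """
    return mean * (1.0 + contrast
                   * math.sin(2 * math.pi * spatial_freq * x_deg)
                   * math.sin(2 * math.pi * temporal_freq * t_s))

# One frame: sample the grating across 10 degrees at t = 5 ms.
frame = [fdt_luminance(x_deg, 0.005) for x_deg in range(11)]
```

Separating the spatial and temporal sinusoids like this is also what makes per-eye presentation straightforward: the background term (`mean`) can be rendered to both eyes while the modulated term is drawn for one eye only.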
Sakamoto, Takashi; Mitsuzaki, Katsuhiko; Utsunomiya, Daisuke; Matsuda, Katsuhiko; Yamamura, Sadahiro; Urata, Joji; Kawakami, Megumi; Yamashita, Yasuyuki
2012-09-01
Although the screening of small, flat polyps is clinically important, the role of CT colonography (CTC) screening in their detection has not been thoroughly investigated. To evaluate the detection capability and usefulness of CTC in the screening of flat and polypoid lesions, we compared CTC with optical colonoscopy findings as the gold standard. We evaluated the CTC detection capability for flat colorectal polyps with a flat surface and a height not exceeding 3 mm (n = 42) against that for conventional polypoid lesions (n = 418), stratified by polyp diameter. Four types of reconstruction images, including multiplanar reconstruction, volume rendering, virtual gross pathology, and virtual endoscopic images, were used for visual analysis, and their polyp-visualization performance was compared. Detection sensitivity for flat polyps was 31.3%, 44.4%, and 87.5% for lesions measuring 2-3 mm, 4-5 mm, and ≥6 mm, respectively; the corresponding sensitivities for polypoid lesions were 47.6%, 79.0%, and 91.7%. The overall sensitivity for flat lesions (47.6%) was significantly lower than that for polypoid lesions (64.1%). Virtual endoscopic imaging gave the best visualization of the four reconstructions. Colon cancers were detected in eight patients by optical colonoscopy, and CTC detected the cancers in all eight patients. CTC using 64-row multidetector CT is useful in colon cancer screening for detecting colorectal polyps, although the detection of small, flat lesions remains challenging.
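The reported per-size sensitivities are plain detection ratios against the colonoscopy gold standard. A small sanity-check sketch (the abstract does not give per-category detection counts, so 20 of 42 is inferred here from the reported 47.6% overall flat-lesion sensitivity):

```python
def sensitivity(detected, total):
    """Detection sensitivity as a percentage: true positives over all
    gold-standard lesions in the category."""
    return 100.0 * detected / total

# Inferred, not reported: 47.6% of the 42 flat polyps corresponds to
# roughly 20 detections.
flat_detected, flat_total = 20, 42
print(round(sensitivity(flat_detected, flat_total), 1))  # 47.6
```

The same ratio applied per diameter stratum is what yields the 31.3% / 44.4% / 87.5% figures for flat lesions above.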
Gómez-Bombarelli, Rafael; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D; Duvenaud, David; Maclaurin, Dougal; Blood-Forsythe, Martin A; Chae, Hyun Sik; Einzinger, Markus; Ha, Dong-Gwang; Wu, Tony; Markopoulos, Georgios; Jeon, Soonok; Kang, Hosuk; Miyazaki, Hiroshi; Numata, Masaki; Kim, Sunghan; Huang, Wenliang; Hong, Seong Ik; Baldo, Marc; Adams, Ryan P; Aspuru-Guzik, Alán
2016-10-01
Virtual screening is becoming a ground-breaking tool for molecular discovery due to the exponential growth of available computer time and constant improvement of simulation and machine learning techniques. We report an integrated organic functional material design process that incorporates theoretical insight, quantum chemistry, cheminformatics, machine learning, industrial expertise, organic synthesis, molecular characterization, device fabrication and optoelectronic testing. After exploring a search space of 1.6 million molecules and screening over 400,000 of them using time-dependent density functional theory, we identified thousands of promising novel organic light-emitting diode molecules across the visible spectrum. Our team collaboratively selected the best candidates from this set. The experimentally determined external quantum efficiencies for these synthesized candidates were as large as 22%.
The EVER-EST Virtual Research Environment for the European Volcano Supersites
NASA Astrophysics Data System (ADS)
Salvi, S.; Trasatti, E.; Rubbia, G.; Romaniello, V.; Marelli, F.
2017-12-01
EVER-EST (European Virtual Environment for Research - Earth Science Themes) is a European H2020 project (2015-2018) aimed at creating a Virtual Research Environment (VRE) for the Earth Sciences. The VRE is intended to enhance the ability to collaborate and share knowledge and experience among scientists. One of the innovations of the project is the exploitation of the "Research Object" concept (http://www.rohub.org). Research Objects encapsulate not only data and publications, but also algorithms, codes, results, and workflows that can be stored, shared and re-used. Four scientific communities are involved in the EVER-EST project: land monitoring, natural hazards, marine biology, and the GEO Geohazard Supersites community (http://www.earthobservations.org/gsnl.php). The latter is represented in the project by INGV and the University of Iceland, and has provided user requirements to tailor the VRE to the common needs of the worldwide Supersite communities. To develop and test the VRE, we have defined user scenarios and created Research Objects embedding research activities and workflows on the Permanent Supersites Campi Flegrei, Mount Etna and the Icelandic Volcanoes (http://vm1.everest.psnc.pl/supersites/). While these Supersites are test sites for the platform, during the last year of the project other Supersites may also be involved to demonstrate the added value of the collaborative environment in research activities supporting Disaster Risk Reduction. Using the VRE, scientists are able to collaborate with colleagues located in different parts of the world in a simple and effective way. This includes being able to remotely access and share data, research results and ideas, to carry out training sessions and discussions, to compare different results and models, and to synthesize many different pieces of information into a single consensus product to be disseminated to end-users.
A further need of the Supersite scientists, which EVER-EST can fulfil especially in less developed countries, is access to computing resources and software codes for data processing and modelling, as well as tutoring in data analysis and interpretation. Examples and results illustrating the effective use of the VRE will be presented at the conference.
Virtual Laboratories and Virtual Worlds
NASA Astrophysics Data System (ADS)
Hut, Piet
2008-05-01
Since we cannot put stars in a laboratory, astrophysicists had to wait until the invention of computers before becoming laboratory scientists. For half a century now, we have been conducting experiments in our virtual laboratories. However, we ourselves have remained behind the keyboard, with the screen of the monitor separating us from the world we are simulating. Recently, 3D online technology, developed first for games but now deployed in virtual worlds like Second Life, has begun to make it possible for astrophysicists to enter their virtual labs themselves, in virtual form as avatars. This has several advantages, from new possibilities for exploring the results of simulations to a shared presence in a virtual lab with remote collaborators on different continents. I will report my experiences with the use of Qwaq Forums, a virtual world developed by a new company (see http://www.qwaq.com).
[Virtual microscopy in pathology teaching and postgraduate training (continuing education)].
Sinn, H P; Andrulis, M; Mogler, C; Schirmacher, P
2008-11-01
As with conventional microscopy, virtual microscopy permits histological tissue sections to be viewed on a computer screen with a free choice of viewing areas and a wide range of magnifications. This, combined with the possibility of linking virtual microscopy to E-learning courses, makes virtual microscopy an ideal tool for teaching and postgraduate training in pathology. Uses of virtual microscopy in pathology teaching include blended learning, with digital teaching slides presented on the Internet in parallel with their presentation in the histology lab, extending student access to histology slides beyond the lab. Other uses are student self-learning on the Internet, as well as the presentation of virtual slides in the classroom, with or without replacing real microscopes. Successful integration of virtual microscopy depends on its embedding in the virtual classroom and the creation of interactive E-learning content. Applications derived from this include the use of virtual microscopy in video clips, podcasts and SCORM modules, and the presentation of virtual microscopy using interactive whiteboards in the classroom.
Seamless 3D interaction for virtual tables, projection planes, and CAVEs
NASA Astrophysics Data System (ADS)
Encarnacao, L. M.; Bimber, Oliver; Schmalstieg, Dieter; Barton, Robert J., III
2000-08-01
The Virtual Table presents stereoscopic graphics to a user in a workbench-like setting. This device shares with other large-screen display technologies (such as data walls and surround-screen projection systems) the lack of human-centered, unencumbered user interfaces and 3D interaction technologies. Such shortcomings severely limit the application of virtual reality (VR) technology to time-critical tasks as well as to deployment scenarios that involve heterogeneous groups of end-users without high levels of computer familiarity and expertise. Traditionally, such scenarios are common in planning-related application areas such as mission rehearsal and command and control. For these applications, a high degree of flexibility with respect to the system requirements (display and I/O devices), as well as the ability to seamlessly and intuitively switch between different interaction modalities, is sought. Conventional VR techniques may be insufficient to meet this challenge. This paper presents novel approaches for human-centered interfaces to virtual environments, focusing on the Virtual Table visual input device. It introduces new paradigms for 3D interaction in virtual environments (VE) for a variety of application areas based on pen-and-clipboard, mirror-in-hand, and magic-lens metaphors, and introduces new concepts for combining VR and augmented reality (AR) techniques. It finally describes approaches toward hybrid and distributed multi-user interaction environments, and concludes by hypothesizing on possible use cases for defense applications.
Marcano-Belisario, José S; Gupta, Ajay K; O'Donoghue, John; Ramchandani, Paul; Morrison, Cecily; Car, Josip
2017-05-10
Mobile devices may facilitate depression screening in the waiting area of antenatal clinics. This presents implementation challenges, of which we focused on two: survey layout and technology deployment. We assessed the feasibility of using tablet computers to administer a socio-demographic survey, the Whooley questions and the Edinburgh Postnatal Depression Scale (EPDS) to 530 pregnant women attending National Health Service (NHS) antenatal clinics across England. We randomised participants to one of two layout versions of these surveys: (i) a scrolling layout, where each survey was presented on a single screen; or (ii) a paging layout, where only one question appeared on the screen at any given time. Overall, 85.10% of eligible pregnant women agreed to take part. Of these, 90.95% completed the study procedures. Approximately 23% of participants answered Yes to at least one Whooley question, and approximately 13% of them scored 10 points or more on the EPDS. We observed no association between survey layout and the responses given to the Whooley questions, the median EPDS scores, the number of participants at increased risk of self-harm, or the number of participants asking for technical assistance. However, we observed a difference in the number of participants at each EPDS scoring interval (p = 0.008), which provides an indication of a woman's risk of depression. A scrolling layout resulted in faster completion times (median = 4 min 46 s) than a paging layout (median = 5 min 33 s) (p = 0.024). However, the clinical significance of this difference (47.5 s) is yet to be determined. Tablet computers can be used for depression screening in the waiting area of antenatal clinics. This requires careful consideration of clinical workflows and technology-related issues such as connectivity and security. The association between survey layout and EPDS scoring intervals needs to be explored further to determine whether it corresponds to a survey layout effect.
Future research needs to evaluate the effect of this type of antenatal depression screening on clinical outcomes and clinic workflows. This study was registered in ClinicalTrials.gov under the identifier NCT02516982 on 20 July 2015.
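For context, EPDS scoring itself is simple: ten items, each scored 0-3, summed to a 0-30 total, with the study above flagging scores of 10 or more as indicating increased risk. A minimal sketch with hypothetical responses (the interval boundaries used for the scoring-interval analysis are not given in the abstract, so only the single cutoff is shown):

```python
def epds_score(responses):
    """Total Edinburgh Postnatal Depression Scale score.

    The EPDS has 10 items, each scored 0-3, giving a total of 0-30.
    """
    assert len(responses) == 10 and all(0 <= r <= 3 for r in responses)
    return sum(responses)

def at_increased_risk(score, cutoff=10):
    """Screen-positive flag using the cutoff applied in the study:
    a score of 10 or more."""
    return score >= cutoff

answers = [1, 0, 2, 1, 1, 0, 2, 1, 1, 2]  # hypothetical participant
score = epds_score(answers)
print(score, at_increased_risk(score))  # 11 True
```

Because the score is an ordinal total, comparing layouts by scoring interval (as the study did) is more informative than comparing only the binary screen-positive rate, which is why the interval-level difference warrants follow-up.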