Sample records for alignment search toolkit

  1. Toolkit for Evaluating Alignment of Instructional and Assessment Materials to the Common Core State Standards

    ERIC Educational Resources Information Center

    Achieve, Inc., 2014

    2014-01-01

    In joint partnership, Achieve, The Council of Chief State School Officers, and Student Achievement Partners have developed a Toolkit for Evaluating the Alignment of Instructional and Assessment Materials to the Common Core State Standards. The Toolkit is a set of interrelated, freely available instruments for evaluating alignment to the CCSS; each…

  2. The MIGenAS integrated bioinformatics toolkit for web-based sequence analysis

    PubMed Central

    Rampp, Markus; Soddemann, Thomas; Lederer, Hermann

    2006-01-01

    We describe a versatile and extensible integrated bioinformatics toolkit for the analysis of biological sequences over the Internet. The web portal offers convenient interactive access to a growing pool of chainable bioinformatics software tools and databases that are centrally installed and maintained by the RZG. Currently, supported tasks comprise sequence similarity searches in public or user-supplied databases, computation and validation of multiple sequence alignments, phylogenetic analysis and protein–structure prediction. Individual tools can be seamlessly chained into pipelines allowing the user to conveniently process complex workflows without the necessity to take care of any format conversions or tedious parsing of intermediate results. The toolkit is part of the Max-Planck Integrated Gene Analysis System (MIGenAS) of the Max Planck Society available at (click ‘Start Toolkit’). PMID:16844980

  3. Windows .NET Network Distributed Basic Local Alignment Search Toolkit (W.ND-BLAST)

    PubMed Central

    Dowd, Scot E; Zaragoza, Joaquin; Rodriguez, Javier R; Oliver, Melvin J; Payton, Paxton R

    2005-01-01

    Background BLAST is one of the most common and useful tools for genetic research. This paper describes a software application we have termed Windows .NET Distributed Basic Local Alignment Search Toolkit (W.ND-BLAST), which enhances the BLAST utility by improving usability, fault recovery, and scalability in a Windows desktop environment. Our goal was to develop an easy-to-use, fault-tolerant, high-throughput BLAST solution that incorporates a comprehensive BLAST result viewer with curation and annotation functionality. Results W.ND-BLAST is a comprehensive Windows-based software toolkit that targets researchers, including those with minimal computer skills, and provides the ability to increase the performance of BLAST by distributing BLAST queries to any number of Windows-based machines across local area networks (LANs). W.ND-BLAST provides intuitive graphical user interfaces (GUIs) for BLAST database creation, BLAST execution, BLAST output evaluation and BLAST result exportation. This software also provides several layers of fault tolerance and fault recovery to prevent loss of data if nodes or master machines fail. This paper lays out the functionality of W.ND-BLAST. W.ND-BLAST displays close to 100% performance efficiency when distributing tasks to 12 remote computers of the same performance class. A high-throughput BLAST job which took 662.68 minutes (11 hours) on one average machine was completed in 44.97 minutes when distributed to 17 nodes, which included lower-performance-class machines. Finally, the comprehensive high-throughput BLAST Output Viewer (BOV) and Annotation Engine components provide exportation of BLAST hits to text files, annotated FASTA files, tables, or association files. Conclusion W.ND-BLAST provides an interactive tool that allows scientists to easily utilize their available computing resources for high-throughput and comprehensive sequence analyses. The install package for W.ND-BLAST is freely downloadable from . With registration the software is free; installation, networking, and usage instructions are provided, as well as a support forum. PMID:15819992
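
    The fan-out/collect pattern that W.ND-BLAST implements over a LAN can be sketched in a few lines outside the Windows GUI. The following Python sketch is not part of W.ND-BLAST; the chunk size, worker count, database name and file names are illustrative assumptions. It splits a query FASTA into chunks and runs NCBI BLAST+ blastp jobs concurrently, then concatenates the tabular results.

        # Illustrative sketch of fanning out BLAST queries, in the spirit of W.ND-BLAST.
        # Assumes NCBI BLAST+ (blastp) is installed and a database named "nr_local" exists;
        # the chunking and worker count are arbitrary choices, not the toolkit's defaults.
        import subprocess
        from concurrent.futures import ThreadPoolExecutor
        from pathlib import Path

        def split_fasta(path, chunk_size=50):
            """Yield lists of FASTA records, chunk_size sequences per chunk."""
            records, current = [], []
            for line in Path(path).read_text().splitlines():
                if line.startswith(">") and current:
                    records.append(current)
                    current = []
                current.append(line)
            if current:
                records.append(current)
            for i in range(0, len(records), chunk_size):
                yield records[i:i + chunk_size]

        def run_blast(chunk, index):
            query = Path(f"chunk_{index}.fasta")
            query.write_text("\n".join("\n".join(rec) for rec in chunk) + "\n")
            out = f"chunk_{index}.tsv"
            subprocess.run(
                ["blastp", "-query", str(query), "-db", "nr_local",
                 "-outfmt", "6", "-out", out],
                check=True,
            )
            return out

        with ThreadPoolExecutor(max_workers=4) as pool:
            outputs = list(pool.map(run_blast,
                                    split_fasta("queries.fasta"),
                                    range(10**6)))

        # Concatenate the per-chunk tabular results into one report.
        Path("all_hits.tsv").write_text(
            "".join(Path(o).read_text() for o in outputs))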

  4. Matlab based Toolkits used to Interface with Optical Design Software for NASA's James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Howard, Joseph

    2007-01-01

    The viewgraph presentation provides an introduction to the James Webb Space Telescope (JWST). The first part provides a brief overview of the Matlab toolkits for CodeV, OSLO, and Zemax. The toolkit overview examines purpose, layout, how Matlab gets data from CodeV, function layout, and using cvHELP. The second part provides examples of use with JWST, including wavefront sensitivities and alignment simulations.

  5. AlignerBoost: A Generalized Software Toolkit for Boosting Next-Gen Sequencing Mapping Accuracy Using a Bayesian-Based Mapping Quality Framework.

    PubMed

    Zheng, Qi; Grice, Elizabeth A

    2016-10-01

    Accurate mapping of next-generation sequencing (NGS) reads to reference genomes is crucial for almost all NGS applications and downstream analyses. Various repetitive elements in human and other higher eukaryotic genomes contribute in large part to ambiguously (non-uniquely) mapped reads. Most available NGS aligners attempt to address this by either removing all non-uniquely mapping reads, or reporting one random or "best" hit based on simple heuristics. Accurate estimation of the mapping quality of NGS reads is therefore critical albeit completely lacking at present. Here we developed a generalized software toolkit "AlignerBoost", which utilizes a Bayesian-based framework to accurately estimate mapping quality of ambiguously mapped NGS reads. We tested AlignerBoost with both simulated and real DNA-seq and RNA-seq datasets at various thresholds. In most cases, but especially for reads falling within repetitive regions, AlignerBoost dramatically increases the mapping precision of modern NGS aligners without significantly compromising the sensitivity even without mapping quality filters. When using higher mapping quality cutoffs, AlignerBoost achieves a much lower false mapping rate while exhibiting comparable or higher sensitivity compared to the aligner default modes, therefore significantly boosting the detection power of NGS aligners even using extreme thresholds. AlignerBoost is also SNP-aware, and higher quality alignments can be achieved if provided with known SNPs. AlignerBoost's algorithm is computationally efficient, and can process one million alignments within 30 seconds on a typical desktop computer. AlignerBoost is implemented as a uniform Java application and is freely available at https://github.com/Grice-Lab/AlignerBoost.
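
    The Bayesian idea behind re-estimating mapping quality can be illustrated with a small calculation. The sketch below is an illustrative model only, not AlignerBoost's published algorithm; the score-to-likelihood conversion, temperature and cap are hypothetical choices. It converts alignment scores of candidate loci into posterior probabilities and a Phred-scaled mapping quality.

        # Illustrative Bayesian mapping-quality calculation (not AlignerBoost's exact model).
        # Each candidate locus i gets a likelihood proportional to exp(score_i / T); a flat
        # prior is assumed, so the posterior is the normalized likelihood. The Phred-scaled
        # mapping quality is -10*log10(1 - P(best locus)).
        import math

        def mapping_quality(scores, temperature=2.0, cap=60):
            """Posterior-based MAPQ for a read with several candidate alignments."""
            weights = [math.exp(s / temperature) for s in scores]
            total = sum(weights)
            posterior_best = max(weights) / total
            if posterior_best >= 1.0:
                return cap
            mapq = -10.0 * math.log10(1.0 - posterior_best)
            return min(int(round(mapq)), cap)

        # A read with one strong and two weak candidate loci gets a high MAPQ;
        # two equally good candidates collapse to MAPQ ~ 3 (posterior ~ 0.5).
        print(mapping_quality([60, 31, 28]))   # confidently placed
        print(mapping_quality([60, 60]))       # ambiguous placement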

  6. bioWidgets: data interaction components for genomics.

    PubMed

    Fischer, S; Crabtree, J; Brunk, B; Gibson, M; Overton, G C

    1999-10-01

    The presentation of genomics data in a perspicuous visual format is critical for its rapid interpretation and validation. Relatively few public database developers have the resources to implement sophisticated front-end user interfaces themselves. Accordingly, these developers would benefit from a reusable toolkit of user interface and data visualization components. We have designed the bioWidget toolkit as a set of JavaBean components. It includes a wide array of user interface components and defines an architecture for assembling applications. The toolkit is founded on established software engineering design patterns and principles, including componentry, Model-View-Controller, factored models and schema neutrality. As a proof of concept, we have used the bioWidget toolkit to create three extendible applications: AnnotView, BlastView and AlignView.

  7. H-BLAST: a fast protein sequence alignment toolkit on heterogeneous computers with GPUs.

    PubMed

    Ye, Weicai; Chen, Ying; Zhang, Yongdong; Xu, Yuesheng

    2017-04-15

    Sequence alignment is a fundamental problem in bioinformatics. BLAST is a routinely used tool for this purpose, with over 118 000 citations in the past two decades. As the size of bio-sequence databases grows exponentially, the computational speed of alignment software must be improved. We develop the heterogeneous BLAST (H-BLAST), a fast parallel search tool for a heterogeneous computer that couples CPUs and GPUs, to accelerate BLASTX and BLASTP, basic tools of NCBI-BLAST. H-BLAST employs a locally decoupled seed-extension algorithm for better performance on GPUs, and offers a performance tuning mechanism for better efficiency across various CPU and GPU combinations. H-BLAST produces alignment results identical to those of NCBI-BLAST, and its computational speed is much faster than that of NCBI-BLAST. Speedups achieved by H-BLAST over sequential NCBI-BLASTP (resp. NCBI-BLASTX) range mostly from 4 to 10 (resp. 5 to 7.2). With 2 CPU threads and 2 GPUs, H-BLAST can be faster than 16-threaded NCBI-BLASTX. Furthermore, H-BLAST is 1.5-4 times faster than GPU-BLAST. https://github.com/Yeyke/H-BLAST.git. yux06@syr.edu. Supplementary data are available at Bioinformatics online.

  8. GCView: the genomic context viewer for protein homology searches

    PubMed Central

    Grin, Iwan; Linke, Dirk

    2011-01-01

    Genomic neighborhood can provide important insights into evolution and function of a protein or gene. When looking at operons, changes in operon structure and composition can only be revealed by looking at the operon as a whole. To facilitate the analysis of the genomic context of a query in multiple organisms we have developed Genomic Context Viewer (GCView). GCView accepts results from one or multiple protein homology searches such as BLASTp as input. For each hit, the neighboring protein-coding genes are extracted, the regions of homology are labeled for each input and the results are presented as a clear, interactive graphical output. It is also possible to add more searches to iteratively refine the output. GCView groups outputs by the hits for different proteins. This allows for easy comparison of different operon compositions and structures. The tool is embedded in the framework of the Bioinformatics Toolkit of the Max-Planck Institute for Developmental Biology (MPI Toolkit). Job results from the homology search tools inside the MPI Toolkit can be forwarded to GCView and results can be subsequently analyzed by sequence analysis tools. Results are stored online, allowing for later reinspection. GCView is freely available at http://toolkit.tuebingen.mpg.de/gcview. PMID:21609955
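
    The genomic-context step itself is straightforward to prototype. The sketch below is a simplified stand-in for GCView, not its code; the GFF3 file name, the "locus_tag" attribute convention, the two-genes-per-side window and the example locus tag are assumptions. It pulls the protein-coding genes neighbouring a BLASTp hit out of a GFF3 annotation so they can be compared across organisms.

        # Simplified genomic-context extraction in the spirit of GCView (not its code).
        # Given the locus tag of a BLASTp hit, return the flanking genes from a GFF3 file.
        def load_genes(gff3_path):
            genes = []
            with open(gff3_path) as handle:
                for line in handle:
                    if line.startswith("#"):
                        continue
                    cols = line.rstrip("\n").split("\t")
                    if len(cols) == 9 and cols[2] == "gene":
                        attrs = dict(field.split("=", 1)
                                     for field in cols[8].split(";") if "=" in field)
                        genes.append({"seqid": cols[0], "start": int(cols[3]),
                                      "end": int(cols[4]), "strand": cols[6],
                                      "locus_tag": attrs.get("locus_tag", "")})
            # Sort by contig and coordinate so neighbours are adjacent in the list.
            genes.sort(key=lambda g: (g["seqid"], g["start"]))
            return genes

        def genomic_context(genes, hit_locus_tag, flank=2):
            """Return the hit gene plus `flank` genes on either side of it."""
            for i, gene in enumerate(genes):
                if gene["locus_tag"] == hit_locus_tag:
                    return genes[max(0, i - flank): i + flank + 1]
            return []

        genes = load_genes("genome.gff3")            # hypothetical annotation file
        for neighbour in genomic_context(genes, "b0344", flank=2):   # example locus tag
            print(neighbour["locus_tag"], neighbour["strand"],
                  neighbour["start"], neighbour["end"])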

  9. AlignerBoost: A Generalized Software Toolkit for Boosting Next-Gen Sequencing Mapping Accuracy Using a Bayesian-Based Mapping Quality Framework

    PubMed Central

    Zheng, Qi; Grice, Elizabeth A.

    2016-01-01

    Accurate mapping of next-generation sequencing (NGS) reads to reference genomes is crucial for almost all NGS applications and downstream analyses. Various repetitive elements in human and other higher eukaryotic genomes contribute in large part to ambiguously (non-uniquely) mapped reads. Most available NGS aligners attempt to address this by either removing all non-uniquely mapping reads, or reporting one random or "best" hit based on simple heuristics. Accurate estimation of the mapping quality of NGS reads is therefore critical albeit completely lacking at present. Here we developed a generalized software toolkit "AlignerBoost", which utilizes a Bayesian-based framework to accurately estimate mapping quality of ambiguously mapped NGS reads. We tested AlignerBoost with both simulated and real DNA-seq and RNA-seq datasets at various thresholds. In most cases, but especially for reads falling within repetitive regions, AlignerBoost dramatically increases the mapping precision of modern NGS aligners without significantly compromising the sensitivity even without mapping quality filters. When using higher mapping quality cutoffs, AlignerBoost achieves a much lower false mapping rate while exhibiting comparable or higher sensitivity compared to the aligner default modes, therefore significantly boosting the detection power of NGS aligners even using extreme thresholds. AlignerBoost is also SNP-aware, and higher quality alignments can be achieved if provided with known SNPs. AlignerBoost’s algorithm is computationally efficient, and can process one million alignments within 30 seconds on a typical desktop computer. AlignerBoost is implemented as a uniform Java application and is freely available at https://github.com/Grice-Lab/AlignerBoost. PMID:27706155

  10. WIND Toolkit Offshore Summary Dataset

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draxl, Caroline; Musial, Walt; Scott, George

    This dataset contains summary statistics for offshore wind resources for the continental United States derived from the Wind Integration National Dataset (WIND) Toolkit. These data are available in two formats: GDB - Compressed geodatabases containing statistical summaries aligned with lease blocks (aliquots) stored in a GIS format. These data are partitioned into Pacific, Atlantic, and Gulf resource regions. HDF5 - Statistical summaries of all points in the Pacific, Atlantic, and Gulf offshore regions. These data are located on the original WIND Toolkit grid and have not been reassigned or downsampled to lease blocks. These data were developed under contract by NREL for the Bureau of Ocean Energy Management (BOEM).
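
    Reading the HDF5-formatted summaries is a typical h5py task. The sketch below shows only a generic inspection pattern for such a file; the file name and dataset paths are hypothetical, and the actual group/dataset layout must be taken from the WIND Toolkit documentation.

        # Generic HDF5 inspection sketch for a summary file like the WIND Toolkit
        # offshore data. The file name and dataset names below are hypothetical.
        import h5py

        with h5py.File("offshore_summary_atlantic.h5", "r") as f:
            # Walk the file and report every dataset's shape and dtype.
            def describe(name, obj):
                if isinstance(obj, h5py.Dataset):
                    print(name, obj.shape, obj.dtype)
            f.visititems(describe)

            # If a mean wind speed dataset exists at this (assumed) path, load a slice.
            if "mean_wind_speed_100m" in f:
                speeds = f["mean_wind_speed_100m"][:1000]
                print("first 1000 grid points, mean:", speeds.mean())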

  11. NASA Indexing Benchmarks: Evaluating Text Search Engines

    NASA Technical Reports Server (NTRS)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and search and retrieve them in a convenient manner. This study develops criteria for analytically comparing the index and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the focused search engines. This toolkit is highly configurable and has the ability to run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium sized data collections, but show weaknesses when used for collections 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.
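
    A minimal version of such a benchmark harness can be expressed in a few lines. The sketch below is not the NASA toolkit itself; the Engine object is a hypothetical adapter that each real search engine would be wrapped in, so the same timing code applies to all of them.

        # Minimal indexing/search benchmarking harness sketch (not the NASA toolkit).
        # `engine` is a hypothetical adapter exposing index(docs) and search(query).
        import time
        import statistics

        def benchmark(engine, documents, queries):
            t0 = time.perf_counter()
            engine.index(documents)
            index_seconds = time.perf_counter() - t0

            latencies = []
            for query in queries:
                t0 = time.perf_counter()
                engine.search(query)
                latencies.append(time.perf_counter() - t0)

            return {"index_seconds": index_seconds,
                    "median_query_ms": 1000 * statistics.median(latencies),
                    "p95_query_ms": 1000 * sorted(latencies)[int(0.95 * len(latencies))]}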

  12. phylo-node: A molecular phylogenetic toolkit using Node.js.

    PubMed

    O'Halloran, Damien M

    2017-01-01

    Node.js is an open-source and cross-platform environment that provides a JavaScript codebase for back-end server-side applications. JavaScript has been used to develop very fast and user-friendly front-end tools for bioinformatic and phylogenetic analyses. However, no such toolkits are available using Node.js to conduct comprehensive molecular phylogenetic analysis. To address this problem, I have developed phylo-node, a stable and scalable toolkit built with Node.js that allows the user to perform diverse molecular and phylogenetic tasks. phylo-node can execute the analysis and process the resulting outputs from a suite of software options that provides tools for read processing and genome alignment, sequence retrieval, multiple sequence alignment, primer design, evolutionary modeling, and phylogeny reconstruction. Furthermore, phylo-node enables the user to deploy server-dependent applications, and also provides simple integration and interoperation with other Node modules and languages using Node inheritance patterns, and a customized piping module to support the production of diverse pipelines. phylo-node is open-source and freely available to all users without sign-up or login requirements. All source code and user guidelines are openly available at the GitHub repository: https://github.com/dohalloran/phylo-node.

  13. Multimethod evaluation of the VA's peer-to-peer Toolkit for patient-centered medical home implementation.

    PubMed

    Luck, Jeff; Bowman, Candice; York, Laura; Midboe, Amanda; Taylor, Thomas; Gale, Randall; Asch, Steven

    2014-07-01

    Effective implementation of the patient-centered medical home (PCMH) in primary care practices requires training and other resources, such as online toolkits, to share strategies and materials. The Veterans Health Administration (VA) developed an online Toolkit of user-sourced tools to support teams implementing its Patient Aligned Care Team (PACT) medical home model. To present findings from an evaluation of the PACT Toolkit, including use, variation across facilities, effect of social marketing, and factors influencing use. The Toolkit is an online repository of ready-to-use tools created by VA clinic staff that physicians, nurses, and other team members may share, download, and adopt in order to more effectively implement PCMH principles and improve local performance on VA metrics. Multimethod evaluation using: (1) website usage analytics, (2) an online survey of the PACT community of practice's use of the Toolkit, and (3) key informant interviews. Survey respondents were PACT team members and coaches (n = 544) at 136 VA facilities. Interview respondents were Toolkit users and non-users (n = 32). For survey data, multivariable logistic models were used to predict Toolkit awareness and use. Interviews and open-text survey comments were coded using a "common themes" framework. The Consolidated Framework for Implementation Research (CFIR) guided data collection and analyses. The Toolkit was used by 6,745 staff in the first 19 months of availability. Among members of the target audience, 80 % had heard of the Toolkit, and of those, 70 % had visited the website. Tools had been implemented at 65 % of facilities. Qualitative findings revealed a range of user perspectives from enthusiastic support to lack of sufficient time to browse the Toolkit. An online Toolkit to support PCMH implementation was used at VA facilities nationwide. Other complex health care organizations may benefit from adopting similar online peer-to-peer resource libraries.

  14. BuddySuite: Command-Line Toolkits for Manipulating Sequences, Alignments, and Phylogenetic Trees.

    PubMed

    Bond, Stephen R; Keat, Karl E; Barreira, Sofia N; Baxevanis, Andreas D

    2017-06-01

    The ability to manipulate sequence, alignment, and phylogenetic tree files has become an increasingly important skill in the life sciences, whether to generate summary information or to prepare data for further downstream analysis. The command line can be an extremely powerful environment for interacting with these resources, but only if the user has the appropriate general-purpose tools on hand. BuddySuite is a collection of four independent yet interrelated command-line toolkits that facilitate each step in the workflow of sequence discovery, curation, alignment, and phylogenetic reconstruction. Most common sequence, alignment, and tree file formats are automatically detected and parsed, and over 100 tools have been implemented for manipulating these data. The project has been engineered to easily accommodate the addition of new tools, is written in the popular programming language Python, and is hosted on the Python Package Index and GitHub to maximize accessibility. Documentation for each BuddySuite tool, including usage examples, is available at http://tiny.cc/buddysuite_wiki. All software is open source and freely available through http://research.nhgri.nih.gov/software/BuddySuite.

  15. When paradigms collide at the road rail interface: evaluation of a sociotechnical systems theory design toolkit for cognitive work analysis.

    PubMed

    Read, Gemma J M; Salmon, Paul M; Lenné, Michael G

    2016-09-01

    The Cognitive Work Analysis Design Toolkit (CWA-DT) is a recently developed approach that provides guidance and tools to assist in applying the outputs of CWA to design processes to incorporate the values and principles of sociotechnical systems theory. In this paper, the CWA-DT is evaluated based on an application to improve safety at rail level crossings. The evaluation considered the extent to which the CWA-DT met pre-defined methodological criteria and aligned with sociotechnical values and principles. Both process and outcome measures were taken based on the ratings of workshop participants and human factors experts. Overall, workshop participants were positive about the process and indicated that it met the methodological criteria and sociotechnical values. However, expert ratings suggested that the CWA-DT achieved only limited success in producing RLX designs that fully aligned with the sociotechnical approach. Discussion about the appropriateness of the sociotechnical approach in a public safety context is provided. Practitioner Summary: Human factors and ergonomics practitioners need evidence of the effectiveness of methods. A design toolkit for cognitive work analysis, incorporating values and principles from sociotechnical systems theory, was applied to create innovative designs for rail level crossings. Evaluation results based on the application are provided and discussed.

  16. Kekule.js: An Open Source JavaScript Chemoinformatics Toolkit.

    PubMed

    Jiang, Chen; Jin, Xi; Dong, Ying; Chen, Ming

    2016-06-27

    Kekule.js is an open-source, object-oriented JavaScript toolkit for chemoinformatics. It provides methods for many common tasks in molecular informatics, including chemical data input/output (I/O), two- and three-dimensional (2D/3D) rendering of chemical structure, stereo identification, ring perception, structure comparison, and substructure search. Encapsulated widgets to display and edit chemical structures directly in web context are also supplied. Developed with web standards, the toolkit is ideal for building chemoinformatics applications over the Internet. Moreover, it is highly platform-independent and can also be used in desktop or mobile environments. Some initial applications, such as plugins for inputting chemical structures on the web and uses in chemistry education, have been developed based on the toolkit.

  17. T-BAS: Tree-Based Alignment Selector toolkit for phylogenetic-based placement, alignment downloads and metadata visualization: an example with the Pezizomycotina tree of life.

    PubMed

    Carbone, Ignazio; White, James B; Miadlikowska, Jolanta; Arnold, A Elizabeth; Miller, Mark A; Kauff, Frank; U'Ren, Jana M; May, Georgiana; Lutzoni, François

    2017-04-15

    High-quality phylogenetic placement of sequence data has the potential to greatly accelerate studies of the diversity, systematics, ecology and functional biology of diverse groups. We developed the Tree-Based Alignment Selector (T-BAS) toolkit to allow evolutionary placement and visualization of diverse DNA sequences representing unknown taxa within a robust phylogenetic context, and to permit the downloading of highly curated, single- and multi-locus alignments for specific clades. In its initial form, T-BAS v1.0 uses a core phylogeny of 979 taxa (including 23 outgroup taxa, as well as 61 orders, 175 families and 496 genera) representing all 13 classes of the largest subphylum of Fungi, Pezizomycotina (Ascomycota), based on sequence alignments for six loci (nr5.8S, nrLSU, nrSSU, mtSSU, RPB1, RPB2). T-BAS v1.0 has three main uses: (i) Users may download alignments and voucher tables for members of the Pezizomycotina directly from the reference tree, facilitating systematics studies of focal clades. (ii) Users may upload sequence files with reads representing unknown taxa and place these on the phylogeny using either BLAST or phylogeny-based approaches, and then use the displayed tree to select reference taxa to include when downloading alignments. The placement of unknowns can be performed for large numbers of Sanger sequences obtained from fungal cultures and for alignable, short reads of environmental amplicons. (iii) User-customizable metadata can be visualized on the tree. T-BAS Version 1.0 is available online at http://tbas.hpc.ncsu.edu. Registration is required to access the CIPRES Science Gateway and NSF XSEDE's large computational resources. icarbon@ncsu.edu. Supplementary data are available at Bioinformatics online.

  18. The effectiveness of toolkits as knowledge translation strategies for integrating evidence into clinical care: a systematic review

    PubMed Central

    Yamada, Janet; Shorkey, Allyson; Barwick, Melanie; Widger, Kimberley; Stevens, Bonnie J

    2015-01-01

    Objectives The aim of this systematic review was to evaluate the effectiveness of toolkits as a knowledge translation (KT) strategy for facilitating the implementation of evidence into clinical care. Toolkits include multiple resources for educating and/or facilitating behaviour change. Design Systematic review of the literature on toolkits. Methods A search was conducted on MEDLINE, EMBASE, PsycINFO and CINAHL. Studies were included if they evaluated the effectiveness of a toolkit to support the integration of evidence into clinical care, and if the KT goal(s) of the study were to inform, share knowledge, build awareness, change practice, change behaviour, and/or clinical outcomes in healthcare settings, inform policy, or to commercialise an innovation. Screening of studies, assessment of methodological quality and data extraction for the included studies were conducted by at least two reviewers. Results 39 relevant studies were included for full review; 8 were rated as moderate to strong methodologically with clinical outcomes that could be somewhat attributed to the toolkit. Three of the eight studies evaluated the toolkit as a single KT intervention, while five embedded the toolkit into a multistrategy intervention. Six of the eight toolkits were partially or mostly effective in changing clinical outcomes and six studies reported on implementation outcomes. The types of resources embedded within toolkits varied but included predominantly educational materials. Conclusions Future toolkits should be informed by high-quality evidence and theory, and should be evaluated using rigorous study designs to explain the factors underlying their effectiveness and successful implementation. PMID:25869686

  19. Atlas Toolkit: Fast registration of 3D morphological datasets in the absence of landmarks

    PubMed Central

    Grocott, Timothy; Thomas, Paul; Münsterberg, Andrea E.

    2016-01-01

    Image registration is a gateway technology for Developmental Systems Biology, enabling computational analysis of related datasets within a shared coordinate system. Many registration tools rely on landmarks to ensure that datasets are correctly aligned; yet suitable landmarks are not present in many datasets. Atlas Toolkit is a Fiji/ImageJ plugin collection offering elastic group-wise registration of 3D morphological datasets, guided by segmentation of the interesting morphology. We demonstrate the method by combinatorial mapping of cell signalling events in the developing eyes of chick embryos, and use the integrated datasets to predictively enumerate Gene Regulatory Network states. PMID:26864723

  20. Atlas Toolkit: Fast registration of 3D morphological datasets in the absence of landmarks.

    PubMed

    Grocott, Timothy; Thomas, Paul; Münsterberg, Andrea E

    2016-02-11

    Image registration is a gateway technology for Developmental Systems Biology, enabling computational analysis of related datasets within a shared coordinate system. Many registration tools rely on landmarks to ensure that datasets are correctly aligned; yet suitable landmarks are not present in many datasets. Atlas Toolkit is a Fiji/ImageJ plugin collection offering elastic group-wise registration of 3D morphological datasets, guided by segmentation of the interesting morphology. We demonstrate the method by combinatorial mapping of cell signalling events in the developing eyes of chick embryos, and use the integrated datasets to predictively enumerate Gene Regulatory Network states.

  1. A Prototype Search Toolkit

    NASA Astrophysics Data System (ADS)

    Knepper, Margaret M.; Fox, Kevin L.; Frieder, Ophir

    Information overload is now a reality. We no longer worry about obtaining a sufficient volume of data; we now are concerned with sifting and understanding the massive volumes of data available to us. To do so, we developed an integrated information processing toolkit that provides the user with a variety of ways to view their information. The views include keyword search results, a domain specific ranking system that allows for adaptively capturing topic vocabularies to customize and focus the search results, navigation pages for browsing, and a geospatial and temporal component to visualize results in time and space, and provide “what if” scenario playing. Integrating the information from different tools and sources gives the user additional information and another way to analyze the data. An example of the integration is illustrated on reports of the avian influenza (bird flu).

  2. PsyToolkit: a software package for programming psychological experiments using Linux.

    PubMed

    Stoet, Gijsbert

    2010-11-01

    PsyToolkit is a set of software tools for programming psychological experiments on Linux computers. Given that PsyToolkit is freely available under the Gnu Public License, open source, and designed such that it can easily be modified and extended for individual needs, it is suitable not only for technically oriented Linux users, but also for students, researchers on small budgets, and universities in developing countries. The software includes a high-level scripting language, a library for the programming language C, and a questionnaire presenter. The software easily integrates with other open source tools, such as the statistical software package R. PsyToolkit is designed to work with external hardware (including IoLab and Cedrus response keyboards and two common digital input/output boards) and to support millisecond timing precision. Four in-depth examples explain the basic functionality of PsyToolkit. Example 1 demonstrates a stimulus-response compatibility experiment. Example 2 demonstrates a novel mouse-controlled visual search experiment. Example 3 shows how to control light emitting diodes using PsyToolkit, and Example 4 shows how to build a light-detection sensor. The last two examples explain the electronic hardware setup such that they can even be used with other software packages.

  3. South Carolina Course Alignment Project: Best Practices Report

    ERIC Educational Resources Information Center

    Chadwick, Kristine; Ward, Terri; Hopper-Moore, Greg

    2014-01-01

    To facilitate a more seamless transition from high school to postsecondary education, high schools and colleges need to build new relationships and examine educational programming on both sides of the critical juncture between the senior year in high school and the first year in college. This South Carolina College and Career Readiness Toolkit was…

  4. Information Literacy at University: A Toolkit for Readiness and Measuring Impact

    ERIC Educational Resources Information Center

    Hulett, Heather; Corbin, Jenny; Karasmanis, Sharon; Robertson, Tracy; Salisbury, Fiona; Peseta, Tai

    2013-01-01

    La Trobe University Library has embarked on an institution-wide project with the objective of enabling students to engage with scholarly and credible information from the first year. This initiative by the library is in response to La Trobe curriculum reform. In particular, it aligns information literacy with the inquiry/research graduate…

  5. GESearch: An Interactive GUI Tool for Identifying Gene Expression Signature.

    PubMed

    Ye, Ning; Yin, Hengfu; Liu, Jingjing; Dai, Xiaogang; Yin, Tongming

    2015-01-01

    The huge amount of gene expression data generated by microarray and next-generation sequencing technologies present challenges to exploit their biological meanings. When searching for the coexpression genes, the data mining process is largely affected by selection of algorithms. Thus, it is highly desirable to provide multiple options of algorithms in the user-friendly analytical toolkit to explore the gene expression signatures. For this purpose, we developed GESearch, an interactive graphical user interface (GUI) toolkit, which is written in MATLAB and supports a variety of gene expression data files. This analytical toolkit provides four models, including the mean, the regression, the delegate, and the ensemble models, to identify the coexpression genes, and enables the users to filter data and to select gene expression patterns by browsing the display window or by importing knowledge-based genes. Subsequently, the utility of this analytical toolkit is demonstrated by analyzing two sets of real-life microarray datasets from cell-cycle experiments. Overall, we have developed an interactive GUI toolkit that allows for choosing multiple algorithms for analyzing the gene expression signatures.
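
    A tiny example of the "mean model" idea, correlating every gene against the average profile of a set of seed genes, is given below. This is an illustrative reconstruction in Python rather than GESearch's MATLAB code, and the threshold, data shapes and toy data are arbitrary assumptions.

        # Illustrative "mean model" coexpression search (a reconstruction, not GESearch code).
        # expression: genes x samples matrix; seed_rows: indices of known signature genes.
        import numpy as np

        def mean_model_coexpression(expression, seed_rows, threshold=0.8):
            """Return (row index, r) pairs whose Pearson correlation with the
            seed-mean profile exceeds `threshold`, best first."""
            seed_profile = expression[seed_rows].mean(axis=0)
            hits = []
            for i, profile in enumerate(expression):
                r = np.corrcoef(profile, seed_profile)[0, 1]
                if r >= threshold:
                    hits.append((i, float(r)))
            return sorted(hits, key=lambda pair: -pair[1])

        rng = np.random.default_rng(0)
        expr = rng.normal(size=(500, 12))                     # 500 genes, 12 conditions (toy data)
        expr[42] = expr[7] + rng.normal(scale=0.1, size=12)   # plant one coexpressed pair
        print(mean_model_coexpression(expr, seed_rows=[7])[:3])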

  6. mTM-align: a server for fast protein structure database search and multiple protein structure alignment.

    PubMed

    Dong, Runze; Pan, Shuo; Peng, Zhenling; Zhang, Yang; Yang, Jianyi

    2018-05-21

    With the rapid increase of the number of protein structures in the Protein Data Bank, it becomes urgent to develop algorithms for efficient protein structure comparisons. In this article, we present the mTM-align server, which consists of two closely related modules: one for structure database search and the other for multiple structure alignment. The database search is speeded up based on a heuristic algorithm and a hierarchical organization of the structures in the database. The multiple structure alignment is performed using the recently developed algorithm mTM-align. Benchmark tests demonstrate that our algorithms outperform other peer methods for both modules, in terms of speed and accuracy. One of the unique features of the server is the interplay between database search and multiple structure alignment. The server provides service not only for performing fast database search, but also for making accurate multiple structure alignment with the structures found by the search. For the database search, it takes about 2-5 min for a structure of a medium size (∼300 residues). For the multiple structure alignment, it takes a few seconds for ∼10 structures of medium sizes. The server is freely available at: http://yanglab.nankai.edu.cn/mTM-align/.

  7. Optical Modeling Activities for NASA's James Webb Space Telescope (JWST): V. Operational Alignment Updates

    NASA Technical Reports Server (NTRS)

    Howard, Joseph M.; Ha, Kong Q.; Shiri, Ron; Smith, J. Scott; Mosier, Gary; Muheim, Danniella

    2008-01-01

    This paper is part five of a series on the ongoing optical modeling activities for the James Webb Space Telescope (JWST). The first two papers discussed modeling JWST on-orbit performance using wavefront sensitivities to predict line of sight motion induced blur, and stability during thermal transients. The third paper investigates the aberrations resulting from alignment and figure compensation of the controllable degrees of freedom (primary and secondary mirrors), which may be encountered during ground alignment and on-orbit commissioning of the observatory, and the fourth introduced the software toolkits used to perform much of the optical analysis for JWST. The work here models observatory operations by simulating line-of-sight image motion and alignment drifts over a two-week period. Alignment updates are then simulated using wavefront sensing and control processes to calculate and perform the corrections. A single model environment in Matlab is used for evaluating the predicted performance of the observatory during these operations.

  8. Octopus-toolkit: a workflow to automate mining of public epigenomic and transcriptomic next-generation sequencing data

    PubMed Central

    Kim, Taemook; Seo, Hogyu David; Hennighausen, Lothar; Lee, Daeyoup

    2018-01-01

    Octopus-toolkit is a stand-alone application for retrieving and processing large sets of next-generation sequencing (NGS) data with a single step. Octopus-toolkit is an automated set-up-and-analysis pipeline utilizing the Aspera, SRA Toolkit, FastQC, Trimmomatic, HISAT2, STAR, Samtools, and HOMER applications. All the applications are installed on the user's computer when the program starts. Upon installation, it can automatically retrieve original files of various epigenomic and transcriptomic data sets, including ChIP-seq, ATAC-seq, DNase-seq, MeDIP-seq, MNase-seq and RNA-seq, from the Gene Expression Omnibus data repository. The downloaded files can then be sequentially processed to generate BAM and BigWig files, which are used for advanced analyses and visualization. Currently, it can process NGS data from popular model organisms such as human (Homo sapiens), mouse (Mus musculus), dog (Canis lupus familiaris), plant (Arabidopsis thaliana), zebrafish (Danio rerio), fruit fly (Drosophila melanogaster), worm (Caenorhabditis elegans), and budding yeast (Saccharomyces cerevisiae). With the processed files from Octopus-toolkit, the meta-analysis of various data sets, motif searches for DNA-binding proteins, and the identification of differentially expressed genes and/or protein-binding sites can be easily conducted with few commands by users. Overall, Octopus-toolkit facilitates the systematic and integrative analysis of available epigenomic and transcriptomic NGS big data. PMID:29420797
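
    The retrieval-and-processing chain the toolkit automates can also be scripted directly. The sketch below is a hand-rolled stand-in, not Octopus-toolkit itself; the SRA accession, HISAT2 index prefix and thread count are placeholders. It strings together the same publicly available command-line tools for a single RNA-seq run.

        # Hand-rolled sketch of the kind of chain Octopus-toolkit automates (not its code).
        # Requires SRA Toolkit, FastQC, HISAT2 and samtools on PATH. The accession,
        # HISAT2 index prefix and thread count are placeholder values.
        import subprocess

        ACCESSION = "SRR000001"          # placeholder SRA run accession
        INDEX = "grch38/genome"          # placeholder HISAT2 index prefix
        THREADS = "4"

        def run(cmd):
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True)

        run(["prefetch", ACCESSION])                             # fetch the .sra file
        run(["fasterq-dump", "--threads", THREADS, ACCESSION])   # convert to FASTQ
        run(["fastqc", f"{ACCESSION}.fastq"])                    # quality control report
        run(["hisat2", "-p", THREADS, "-x", INDEX,
             "-U", f"{ACCESSION}.fastq", "-S", f"{ACCESSION}.sam"])
        run(["samtools", "sort", "-@", THREADS,
             "-o", f"{ACCESSION}.bam", f"{ACCESSION}.sam"])
        run(["samtools", "index", f"{ACCESSION}.bam"])           # BAM + index ready for viewing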

  9. BamTools: a C++ API and toolkit for analyzing and managing BAM files.

    PubMed

    Barnett, Derek W; Garrison, Erik K; Quinlan, Aaron R; Strömberg, Michael P; Marth, Gabor T

    2011-06-15

    Analysis of genomic sequencing data requires efficient, easy-to-use access to alignment results and flexible data management tools (e.g. filtering, merging, sorting, etc.). However, the enormous amount of data produced by current sequencing technologies is typically stored in compressed, binary formats that are not easily handled by the text-based parsers commonly used in bioinformatics research. We introduce a software suite for programmers and end users that facilitates research analysis and data management using BAM files. BamTools provides both the first C++ API publicly available for BAM file support as well as a command-line toolkit. BamTools was written in C++, and is supported on Linux, Mac OSX and MS Windows. Source code and documentation are freely available at http://github.org/pezmaster31/bamtools.
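
    For Python users, the same style of programmatic BAM access is available through pysam rather than the BamTools C++ API. The short sketch below uses pysam, a different library shown here only as an analogous example; the input file name and the MAPQ cutoff are assumptions, and the BAM is assumed to be coordinate-sorted and indexed.

        # Analogous BAM filtering with pysam (a different library from BamTools, shown
        # only to illustrate the same kind of programmatic access).
        import pysam

        MIN_MAPQ = 30   # arbitrary mapping-quality cutoff

        with pysam.AlignmentFile("alignments.bam", "rb") as bam_in, \
             pysam.AlignmentFile("filtered.bam", "wb", template=bam_in) as bam_out:
            kept = total = 0
            for read in bam_in.fetch():            # requires the .bai index
                total += 1
                if not read.is_unmapped and read.mapping_quality >= MIN_MAPQ:
                    bam_out.write(read)
                    kept += 1

        pysam.index("filtered.bam")                # build an index for the filtered file
        print(f"kept {kept} of {total} alignments with MAPQ >= {MIN_MAPQ}")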

  10. PDDL4J: a planning domain description library for java

    NASA Astrophysics Data System (ADS)

    Pellier, D.; Fiorino, H.

    2018-01-01

    PDDL4J (Planning Domain Description Library for Java) is an open source toolkit for Java cross-platform developers meant (1) to provide state-of-the-art planners based on the PDDL language, and (2) to facilitate research work on new planners. In this article, we present an overview of Automated Planning concepts and languages. We present some planning systems and their most significant applications. Then, we detail the PDDL4J toolkit with an emphasis on the available informative structures, heuristics and search algorithms.

  11. Aligning Biomolecular Networks Using Modular Graph Kernels

    NASA Astrophysics Data System (ADS)

    Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant

    Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offer a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pair-wise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.
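
    The decompose-and-compare strategy can be illustrated compactly. The following sketch is an illustrative simplification, not BiNA's kernel: the 2-hop ego neighbourhoods, the degree-histogram kernel and the toy random graphs are all assumptions standing in for real interaction networks and the more sophisticated graph kernels the toolkit implements.

        # Illustrative decompose-and-compare network alignment (a simplification of the
        # approach; BiNA's actual graph kernels are more sophisticated).
        import networkx as nx
        import numpy as np

        def ego_subgraphs(graph, radius=2):
            return {node: nx.ego_graph(graph, node, radius=radius) for node in graph}

        def degree_histogram_kernel(g1, g2, max_degree=20):
            """Dot product of normalized, truncated degree histograms of two subgraphs."""
            def hist(g):
                h = np.zeros(max_degree + 1)
                for _, degree in g.degree():
                    h[min(degree, max_degree)] += 1
                return h / max(h.sum(), 1.0)
            return float(np.dot(hist(g1), hist(g2)))

        def best_matches(graph_a, graph_b, top=5):
            subs_a, subs_b = ego_subgraphs(graph_a), ego_subgraphs(graph_b)
            scores = [(a, b, degree_histogram_kernel(sa, sb))
                      for a, sa in subs_a.items() for b, sb in subs_b.items()]
            return sorted(scores, key=lambda t: -t[2])[:top]

        # Toy graphs standing in for two species' protein-protein interaction networks.
        species_a = nx.erdos_renyi_graph(30, 0.15, seed=1)
        species_b = nx.erdos_renyi_graph(30, 0.15, seed=2)
        for a, b, score in best_matches(species_a, species_b):
            print(f"node {a} <-> node {b}: kernel score {score:.3f}")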

  12. Robust Requirements Tracing via Internet Search Technology: Improving an IV and V Technique. Phase 2

    NASA Technical Reports Server (NTRS)

    Hayes, Jane; Dekhtyar, Alex

    2004-01-01

    There are three major objectives to this phase of the work. (1) Improvement of Information Retrieval (IR) methods for Independent Verification and Validation (IV&V) requirements tracing. Information Retrieval methods are typically developed for very large (order of millions - tens of millions and more documents) document collections and therefore, most successfully used methods somewhat sacrifice precision and recall in order to achieve efficiency. At the same time typical IR systems treat all user queries as independent of each other and assume that relevance of documents to queries is subjective for each user. The IV&V requirements tracing problem has a much smaller data set to operate on, even for large software development projects; the set of queries is predetermined by the high-level specification document and individual requirements considered as query input to IR methods are not necessarily independent from each other. Namely, knowledge about the links for one requirement may be helpful in determining the links of another requirement. Finally, while the final decision on the exact form of the traceability matrix still belongs to the IV&V analyst, his/her decisions are much less arbitrary than those of an Internet search engine user. All this suggests that the information available to us in the framework of the IV&V tracing problem can be successfully leveraged to enhance standard IR techniques, which in turn would lead to increased recall and precision. We developed several new methods during Phase II; (2) IV&V requirements tracing IR toolkit. Based on the methods developed in Phase I and their improvements developed in Phase II, we built a toolkit of IR methods for IV&V requirements tracing. The toolkit has been integrated, at the data level, with SAIC's SuperTracePlus (STP) tool; (3) Toolkit testing. We tested the methods included in the IV&V requirements tracing IR toolkit on a number of projects.
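
    The baseline IR approach that such a toolkit builds on can be sketched with a standard vector-space model. The example below is a generic TF-IDF/cosine baseline, not the Phase II methods or the SuperTracePlus integration, and the requirement texts are invented placeholders. It ranks candidate low-level requirements for each high-level requirement.

        # Generic TF-IDF / cosine-similarity tracing baseline (not the Phase II methods).
        # High- and low-level requirement texts below are invented placeholders.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        high_level = [
            "The system shall log all operator commands with a timestamp.",
            "The system shall encrypt telemetry before transmission.",
        ]
        low_level = [
            "Each command entered at the console is written to the audit log.",
            "Audit log entries include a UTC timestamp.",
            "Telemetry frames are encrypted with AES-256 prior to downlink.",
        ]

        vectorizer = TfidfVectorizer(stop_words="english")
        matrix = vectorizer.fit_transform(high_level + low_level)
        similarity = cosine_similarity(matrix[:len(high_level)], matrix[len(high_level):])

        # For each high-level requirement, list candidate traces above a cutoff.
        for i, row in enumerate(similarity):
            candidates = sorted(((j, s) for j, s in enumerate(row) if s > 0.1),
                                key=lambda pair: -pair[1])
            print(high_level[i])
            for j, score in candidates:
                print(f"    -> {low_level[j]}  (cosine {score:.2f})")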

  13. pypet: A Python Toolkit for Data Management of Parameter Explorations

    PubMed Central

    Meyer, Robert; Obermayer, Klaus

    2016-01-01

    pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines. PMID:27610080
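
    The core pattern, keeping every parameter combination stored next to its result, can be imitated with ordinary Python and pandas, as in the sketch below. This is a conceptual illustration only and does not use pypet's own API; the HDF5 file name and the toy model are assumptions.

        # Conceptual illustration of linked parameter/result storage (not pypet's API).
        # A toy model is evaluated over a sampled parameter space and both parameters
        # and results land in one HDF5 file; pandas' HDF5 support requires PyTables.
        import itertools
        import pandas as pd

        def toy_model(rate, coupling):
            """Stand-in simulation: returns a single summary number."""
            return rate * coupling - coupling ** 2

        grid = itertools.product([0.1, 0.5, 1.0], [0.2, 0.4, 0.8, 1.6])
        rows = [{"rate": rate, "coupling": coupling,
                 "output": toy_model(rate, coupling)} for rate, coupling in grid]

        results = pd.DataFrame(rows)
        results.to_hdf("exploration.h5", key="runs", mode="w")   # parameters + results together
        print(pd.read_hdf("exploration.h5", key="runs").head())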

  14. pypet: A Python Toolkit for Data Management of Parameter Explorations.

    PubMed

    Meyer, Robert; Obermayer, Klaus

    2016-01-01

    pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines.

  15. BamTools: a C++ API and toolkit for analyzing and managing BAM files

    PubMed Central

    Barnett, Derek W.; Garrison, Erik K.; Quinlan, Aaron R.; Strömberg, Michael P.; Marth, Gabor T.

    2011-01-01

    Motivation: Analysis of genomic sequencing data requires efficient, easy-to-use access to alignment results and flexible data management tools (e.g. filtering, merging, sorting, etc.). However, the enormous amount of data produced by current sequencing technologies is typically stored in compressed, binary formats that are not easily handled by the text-based parsers commonly used in bioinformatics research. Results: We introduce a software suite for programmers and end users that facilitates research analysis and data management using BAM files. BamTools provides both the first C++ API publicly available for BAM file support as well as a command-line toolkit. Availability: BamTools was written in C++, and is supported on Linux, Mac OSX and MS Windows. Source code and documentation are freely available at http://github.org/pezmaster31/bamtools. Contact: barnetde@bc.edu PMID:21493652

  16. Tripal v1.1: a standards-based toolkit for construction of online genetic and genomic databases.

    PubMed

    Sanderson, Lacey-Anne; Ficklin, Stephen P; Cheng, Chun-Huai; Jung, Sook; Feltus, Frank A; Bett, Kirstin E; Main, Dorrie

    2013-01-01

    Tripal is an open-source freely available toolkit for construction of online genomic and genetic databases. It aims to facilitate development of community-driven biological websites by integrating the GMOD Chado database schema with Drupal, a popular website creation and content management software. Tripal provides a suite of tools for interaction with a Chado database and display of content therein. The tools are designed to be generic to support the various ways in which data may be stored in Chado. Previous releases of Tripal have supported organisms, genomic libraries, biological stocks, stock collections and genomic features, their alignments and annotations. Also, Tripal and its extension modules provided loaders for commonly used file formats such as FASTA, GFF, OBO, GAF, BLAST XML, KEGG heir files and InterProScan XML. Default generic templates were provided for common views of biological data, which could be customized using an open Application Programming Interface to change the way data are displayed. Here, we report additional tools and functionality that are part of release v1.1 of Tripal. These include (i) a new bulk loader that allows a site curator to import data stored in a custom tab delimited format; (ii) full support of every Chado table for Drupal Views (a powerful tool allowing site developers to construct novel displays and search pages); (iii) new modules including 'Feature Map', 'Genetic', 'Publication', 'Project', 'Contact' and the 'Natural Diversity' modules. Tutorials, mailing lists, download and set-up instructions, extension modules and other documentation can be found at the Tripal website located at http://tripal.info. DATABASE URL: http://tripal.info/.

  17. Tripal v1.1: a standards-based toolkit for construction of online genetic and genomic databases

    PubMed Central

    Sanderson, Lacey-Anne; Ficklin, Stephen P.; Cheng, Chun-Huai; Jung, Sook; Feltus, Frank A.; Bett, Kirstin E.; Main, Dorrie

    2013-01-01

    Tripal is an open-source freely available toolkit for construction of online genomic and genetic databases. It aims to facilitate development of community-driven biological websites by integrating the GMOD Chado database schema with Drupal, a popular website creation and content management software. Tripal provides a suite of tools for interaction with a Chado database and display of content therein. The tools are designed to be generic to support the various ways in which data may be stored in Chado. Previous releases of Tripal have supported organisms, genomic libraries, biological stocks, stock collections and genomic features, their alignments and annotations. Also, Tripal and its extension modules provided loaders for commonly used file formats such as FASTA, GFF, OBO, GAF, BLAST XML, KEGG heir files and InterProScan XML. Default generic templates were provided for common views of biological data, which could be customized using an open Application Programming Interface to change the way data are displayed. Here, we report additional tools and functionality that are part of release v1.1 of Tripal. These include (i) a new bulk loader that allows a site curator to import data stored in a custom tab delimited format; (ii) full support of every Chado table for Drupal Views (a powerful tool allowing site developers to construct novel displays and search pages); (iii) new modules including ‘Feature Map’, ‘Genetic’, ‘Publication’, ‘Project’, ‘Contact’ and the ‘Natural Diversity’ modules. Tutorials, mailing lists, download and set-up instructions, extension modules and other documentation can be found at the Tripal website located at http://tripal.info. Database URL: http://tripal.info/ PMID:24163125

  18. About 311

    Science.gov Websites


  19. Backtracking behaviour in lost ants: an additional strategy in their navigational toolkit

    PubMed Central

    Wystrach, Antoine; Schwarz, Sebastian; Baniel, Alice; Cheng, Ken

    2013-01-01

    Ants use multiple sources of information to navigate, but do not integrate all this information into a unified representation of the world. Rather, the available information appears to serve three distinct main navigational systems: path integration, systematic search and the use of learnt information—mainly via vision. Here, we report on an additional behaviour that suggests a supplemental system in the ant's navigational toolkit: ‘backtracking’. Homing ants, having almost reached their nest but, suddenly displaced to unfamiliar areas, did not show the characteristic undirected headings of systematic searches. Instead, these ants backtracked in the compass direction opposite to the path that they had just travelled. The ecological function of this behaviour is clear as we show it increases the chances of returning to familiar terrain. Importantly, the mechanistic implications of this behaviour stress an extra level of cognitive complexity in ant navigation. Our results imply: (i) the presence of a type of ‘memory of the current trip’ allowing lost ants to take into account the familiar view recently experienced, and (ii) direct sharing of information across different navigational systems. We propose a revised architecture of the ant's navigational toolkit illustrating how the different systems may interact to produce adaptive behaviours. PMID:23966644

  20. P21 Common Core Toolkit: A Guide to Aligning the Common Core State Standards with the Framework for 21st Century Skills

    ERIC Educational Resources Information Center

    Partnership for 21st Century Skills, 2011

    2011-01-01

    Standards drive critical elements of the American educational system--the curricula that schools follow, the textbooks students read, and the tests they take. Similarly, standards establish the levels of performance that students, teachers and schools are expected to meet. To succeed in the 21st century, all students will need to perform to high…

  1. A basic analysis toolkit for biological sequences

    PubMed Central

    Giancarlo, Raffaele; Siragusa, Alessandro; Siragusa, Enrico; Utro, Filippo

    2007-01-01

    This paper presents a software library, nicknamed BATS, for some basic sequence analysis tasks. Namely, local alignments, via approximate string matching, and global alignments, via longest common subsequence and alignments with affine and concave gap cost functions. Moreover, it also supports filtering operations to select strings from a set and establish their statistical significance, via z-score computation. None of the algorithms is new, but although they are generally regarded as fundamental for sequence analysis, they have not been implemented in a single and consistent software package, as we do here. Therefore, our main contribution is to fill this gap between algorithmic theory and practice by providing an extensible and easy to use software library that includes algorithms for the mentioned string matching and alignment problems. The library consists of C/C++ library functions as well as Perl library functions. It can be interfaced with Bioperl and can also be used as a stand-alone system with a GUI. The software is available at under the GNU GPL. PMID:17877802
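
    One of the global-alignment primitives mentioned, the longest common subsequence, is compact enough to show in full. The dynamic-programming sketch below is a textbook implementation in Python rather than the C/C++ and Perl code shipped with BATS.

        # Textbook longest-common-subsequence dynamic program (illustrative; BATS itself
        # ships C/C++ and Perl implementations of this and related alignment routines).
        def longest_common_subsequence(a, b):
            n, m = len(a), len(b)
            # table[i][j] = LCS length of a[:i] and b[:j]
            table = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    if a[i - 1] == b[j - 1]:
                        table[i][j] = table[i - 1][j - 1] + 1
                    else:
                        table[i][j] = max(table[i - 1][j], table[i][j - 1])
            # Trace back through the table to recover one LCS string.
            i, j, out = n, m, []
            while i > 0 and j > 0:
                if a[i - 1] == b[j - 1]:
                    out.append(a[i - 1])
                    i, j = i - 1, j - 1
                elif table[i - 1][j] >= table[i][j - 1]:
                    i -= 1
                else:
                    j -= 1
            return "".join(reversed(out))

        print(longest_common_subsequence("ACCGGTATT", "ACGGATT"))  # -> "ACGGATT"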

  2. Unipro UGENE: a unified bioinformatics toolkit.

    PubMed

    Okonechnikov, Konstantin; Golosova, Olga; Fursov, Mikhail

    2012-04-15

    Unipro UGENE is a multiplatform open-source software with the main goal of assisting molecular biologists without much expertise in bioinformatics to manage, analyze and visualize their data. UGENE integrates widely used bioinformatics tools within a common user interface. The toolkit supports multiple biological data formats and allows the retrieval of data from remote data sources. It provides visualization modules for biological objects such as annotated genome sequences, Next Generation Sequencing (NGS) assembly data, multiple sequence alignments, phylogenetic trees and 3D structures. Most of the integrated algorithms are tuned for maximum performance by the usage of multithreading and special processor instructions. UGENE includes a visual environment for creating reusable workflows that can be launched on local resources or in a High Performance Computing (HPC) environment. UGENE is written in C++ using the Qt framework. The built-in plugin system and structured UGENE API make it possible to extend the toolkit with new functionality. UGENE binaries are freely available for MS Windows, Linux and Mac OS X at http://ugene.unipro.ru/download.html. UGENE code is licensed under the GPLv2; the information about the code licensing and copyright of integrated tools can be found in the LICENSE.3rd_party file provided with the source bundle.

  3. DTS: Building custom, intelligent schedulers

    NASA Technical Reports Server (NTRS)

    Hansson, Othar; Mayer, Andrew

    1994-01-01

    DTS is a decision-theoretic scheduler, built on top of a flexible toolkit -- this paper focuses on how the toolkit might be reused in future NASA mission schedulers. The toolkit includes a user-customizable scheduling interface, and a 'Just-For-You' optimization engine. The customizable interface is built on two metaphors: objects and dynamic graphs. Objects help to structure problem specifications and related data, while dynamic graphs simplify the specification of graphical schedule editors (such as Gantt charts). The interface can be used with any 'back-end' scheduler, through dynamically-loaded code, interprocess communication, or a shared database. The 'Just-For-You' optimization engine includes user-specific utility functions, automatically compiled heuristic evaluations, and a postprocessing facility for enforcing scheduling policies. The optimization engine is based on BPS, the Bayesian Problem-Solver (1,2), which introduced a similar approach to solving single-agent and adversarial graph search problems.

  4. Genetic Testing Registry

    MedlinePlus


  5. MBAT: a scalable informatics system for unifying digital atlasing workflows.

    PubMed

    Lee, Daren; Ruffins, Seth; Ng, Queenie; Sane, Nikhil; Anderson, Steve; Toga, Arthur

    2010-12-22

    Digital atlases provide a common semantic and spatial coordinate system that can be leveraged to compare, contrast, and correlate data from disparate sources. As the quality and amount of biological data continue to advance and grow, searching, referencing, and comparing this data with a researcher's own data is essential. However, the integration process is cumbersome and time-consuming due to misaligned data, implicitly defined associations, and incompatible data sources. This work addresses these challenges by providing a unified and adaptable environment to accelerate the workflow to gather, align, and analyze the data. The MouseBIRN Atlasing Toolkit (MBAT) project was developed as a cross-platform, free open-source application that unifies and accelerates the digital atlas workflow. A tiered, plug-in architecture was designed for the neuroinformatics and genomics goals of the project to provide a modular and extensible design. MBAT provides the ability to use a single query to search and retrieve data from multiple data sources, align image data using the user's preferred registration method, composite data from multiple sources in a common space, and link relevant informatics information to the current view of the data or atlas. The workspaces leverage tool plug-ins to extend and allow future extensions of the basic workspace functionality. A wide variety of tool plug-ins were developed that integrate pre-existing as well as newly created technology into each workspace. Novel atlasing features were also developed, such as supporting multiple label sets, dynamic selection and grouping of labels, and synchronized, context-driven display of ontological data. MBAT empowers researchers to discover correlations among disparate data by providing a unified environment for bringing together distributed reference resources, a user's image data, and biological atlases into the same spatial or semantic context. Through its extensible tiered plug-in architecture, MBAT allows researchers to customize all platform components to quickly achieve personalized workflows.

  6. National Center for Biotechnology Information

    MedlinePlus


  7. MCScanX: a toolkit for detection and evolutionary analysis of gene synteny and collinearity

    PubMed Central

    Wang, Yupeng; Tang, Haibao; DeBarry, Jeremy D.; Tan, Xu; Li, Jingping; Wang, Xiyin; Lee, Tae-ho; Jin, Huizhe; Marler, Barry; Guo, Hui; Kissinger, Jessica C.; Paterson, Andrew H.

    2012-01-01

    MCScan is an algorithm able to scan multiple genomes or subgenomes in order to identify putative homologous chromosomal regions, and align these regions using genes as anchors. The MCScanX toolkit implements an adjusted MCScan algorithm for detection of synteny and collinearity that extends the original software by incorporating 14 utility programs for visualization of results and additional downstream analyses. Applications of MCScanX to several sequenced plant genomes and gene families are shown as examples. MCScanX can be used to effectively analyze chromosome structural changes, and reveal the history of gene family expansions that might contribute to the adaptation of lineages and taxa. An integrated view of various modes of gene duplication can supplement the traditional gene tree analysis in specific families. The source code and documentation of MCScanX are freely available at http://chibba.pgml.uga.edu/mcscan2/. PMID:22217600
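
    The core idea of chaining gene anchors into collinear blocks can be pictured as a longest-increasing-subsequence step. The Python sketch below is illustrative only (it is not MCScanX code) and runs on a hypothetical list of anchor pairs, each giving the position of a homologous gene pair in two genomes.

        import bisect

        # A collinear chain is a subset of anchors increasing in both coordinates; after
        # sorting by the first coordinate, this reduces to a longest increasing
        # subsequence on the second coordinate.
        def longest_collinear_chain(anchors):
            anchors = sorted(anchors)        # sort by position in genome A
            tails = []                       # tails[k] = smallest B-position ending a chain of length k+1
            for _, b in anchors:
                i = bisect.bisect_left(tails, b)
                if i == len(tails):
                    tails.append(b)
                else:
                    tails[i] = b
            return len(tails)

        anchors = [(1, 3), (2, 1), (3, 4), (4, 6), (5, 5), (6, 7)]
        print(longest_collinear_chain(anchors))   # prints 4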

  8. Embedding strategies for effective use of information from multiple sequence alignments.

    PubMed Central

    Henikoff, S.; Henikoff, J. G.

    1997-01-01

    We describe a new strategy for utilizing multiple sequence alignment information to detect distant relationships in searches of sequence databases. A single sequence representing a protein family is enriched by replacing conserved regions with position-specific scoring matrices (PSSMs) or consensus residues derived from multiple alignments of family members. In comprehensive tests of these and other family representations, PSSM-embedded queries produced the best results overall when used with a special version of the Smith-Waterman searching algorithm. Moreover, embedding consensus residues instead of PSSMs improved performance with readily available single sequence query searching programs, such as BLAST and FASTA. Embedding PSSMs or consensus residues into a representative sequence improves searching performance by extracting multiple alignment information from motif regions while retaining single sequence information where alignment is uncertain. PMID:9070452
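
    A minimal sketch of the consensus-embedding idea follows (illustrative Python on a toy alignment, not the authors' code): within a conserved block of the multiple alignment, the representative query's residues are replaced by the column consensus, and the original query is kept elsewhere.

        from collections import Counter

        # Toy multiple alignment, rows in common alignment coordinates; the first row
        # plays the role of the representative query sequence (hypothetical data).
        msa = ["MRILATGQW",
               "MKVLATGQY",
               "MKVLATGQW"]
        query = msa[0]
        conserved_blocks = [(0, 6)]          # alignment columns treated as conserved

        def column_consensus(rows, j):
            counts = Counter(r[j] for r in rows if r[j] != '-')
            return counts.most_common(1)[0][0]

        def embed_consensus(query, rows, blocks):
            out = list(query)
            for start, end in blocks:
                for j in range(start, end):
                    out[j] = column_consensus(rows, j)
            return ''.join(out)

        print(embed_consensus(query, msa, conserved_blocks))  # MKVLATGQW: two residues swapped for consensus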

  9. BEAUTY: an enhanced BLAST-based search tool that integrates multiple biological information resources into sequence similarity search results.

    PubMed

    Worley, K C; Wiese, B A; Smith, R F

    1995-09-01

    BEAUTY (BLAST enhanced alignment utility) is an enhanced version of the NCBI's BLAST data base search tool that facilitates identification of the functions of matched sequences. We have created new data bases of conserved regions and functional domains for protein sequences in NCBI's Entrez data base, and BEAUTY allows this information to be incorporated directly into BLAST search results. A Conserved Regions Data Base, containing the locations of conserved regions within Entrez protein sequences, was constructed by (1) clustering the entire data base into families, (2) aligning each family using our PIMA multiple sequence alignment program, and (3) scanning the multiple alignments to locate the conserved regions within each aligned sequence. A separate Annotated Domains Data Base was constructed by extracting the locations of all annotated domains and sites from sequences represented in the Entrez, PROSITE, BLOCKS, and PRINTS data bases. BEAUTY performs a BLAST search of those Entrez sequences with conserved regions and/or annotated domains. BEAUTY then uses the information from the Conserved Regions and Annotated Domains data bases to generate, for each matched sequence, a schematic display that allows one to directly compare the relative locations of (1) the conserved regions, (2) annotated domains and sites, and (3) the locally aligned regions matched in the BLAST search. In addition, BEAUTY search results include World-Wide Web hypertext links to a number of external data bases that provide a variety of additional types of information on the function of matched sequences. This convenient integration of protein families, conserved regions, annotated domains, alignment displays, and World-Wide Web resources greatly enhances the biological informativeness of sequence similarity searches. BEAUTY searches can be performed remotely on our system using the "BCM Search Launcher" World-Wide Web pages (URL is <http://gc.bcm.tmc.edu:8088/search-launcher/launcher.html>).

  10. Implied alignment: a synapomorphy-based multiple-sequence alignment method and its use in cladogram search

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    A method to align sequence data based on parsimonious synapomorphy schemes generated by direct optimization (DO; earlier termed optimization alignment) is proposed. DO directly diagnoses sequence data on cladograms without an intervening multiple-alignment step, thereby creating topology-specific, dynamic homology statements. Hence, no multiple-alignment is required to generate cladograms. Unlike general and globally optimal multiple-alignment procedures, the method described here, implied alignment (IA), takes these dynamic homologies and traces them back through a single cladogram, linking the unaligned sequence positions in the terminal taxa via DO transformation series. These "lines of correspondence" link ancestor-descendent states and, when displayed as linearly arrayed columns without hypothetical ancestors, are largely indistinguishable from standard multiple alignment. Since this method is based on synapomorphy, the treatment of certain classes of insertion-deletion (indel) events may be different from that of other alignment procedures. As with all alignment methods, results are dependent on parameter assumptions such as indel cost and transversion:transition ratios. Such an IA could be used as a basis for phylogenetic search, but this would be questionable, since the homologies derived from the implied alignment depend on its natal cladogram, and any variance between DO and IA + Search is due to the heuristic approach. The utility of this procedure in heuristic cladogram searches using DO and the improvement of heuristic cladogram cost calculations are discussed. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.

  11. DynaMIT: the dynamic motif integration toolkit

    PubMed Central

    Dassi, Erik; Quattrone, Alessandro

    2016-01-01

    De-novo motif search is a frequently applied bioinformatics procedure to identify and prioritize recurrent elements in sequence sets for biological investigation, such as the ones derived from high-throughput differential expression experiments. Several algorithms have been developed to perform motif search, employing widely different approaches and often giving divergent results. In order to maximize the power of these investigations and ultimately be able to draft solid biological hypotheses, there is a need to apply multiple tools to the same sequences and to merge the obtained results. However, motif reporting formats and statistical evaluation methods currently make such an integration task difficult to perform and mostly restricted to specific scenarios. We thus introduce here the Dynamic Motif Integration Toolkit (DynaMIT), an extremely flexible platform that allows users to identify motifs with multiple algorithms, integrate them by means of a user-selected strategy and visualize the results in several ways; furthermore, the platform is user-extendible in all its aspects. DynaMIT is freely available at http://cibioltg.bitbucket.org. PMID:26253738
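
    One way to picture the integration step is to group motif instances reported by different tools whenever their predicted positions overlap. The Python sketch below is illustrative only (DynaMIT offers several integration strategies; this is not its code), and the tool names and hits are hypothetical.

        def merge_overlapping(instances):
            """instances: list of (start, end, tool, motif). Returns clusters of
            position-overlapping instances, in positional order."""
            instances = sorted(instances)
            clusters, current, current_end = [], [], None
            for start, end, tool, motif in instances:
                if current and start <= current_end:
                    current.append((start, end, tool, motif))
                    current_end = max(current_end, end)
                else:
                    if current:
                        clusters.append(current)
                    current, current_end = [(start, end, tool, motif)], end
            if current:
                clusters.append(current)
            return clusters

        hits = [(10, 18, "toolA", "GATAAG"), (12, 20, "toolB", "ATAAGG"), (40, 46, "toolA", "CCAAT")]
        for cluster in merge_overlapping(hits):
            print(cluster)     # two clusters: one merged pair, one singleton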

  12. Development of a toolkit and glossary to aid in the adaptation of health technology assessment (HTA) reports for use in different contexts.

    PubMed

    Chase, D; Rosten, C; Turner, S; Hicks, N; Milne, R

    2009-11-01

    To develop a health technology assessment (HTA) adaptation toolkit and glossary of adaptation terms for use by HTA agencies within EU member states to support them in adapting HTA reports written for other contexts. The toolkit and glossary were developed by a partnership of 28 HTA agencies and networks across Europe (EUnetHTA work package 5), led by the UK National Coordinating Centre for Health Technology Assessment (NCCHTA). Methods employed for the two resources were literature searching, a survey of adaptation experience, two rounds of a Delphi survey, meetings of the partnership and drawing on the expertise and experience of the partnership, two rounds of review, and two rounds of quality assurance testing. All partners were requested to provide input into each stage of development. The resulting toolkit is a collection of resources, in the form of checklists of questions on relevance, reliability and transferability of data and information, and links to useful websites, that help the user assess whether data and information in existing HTA reports can be adapted for a different setting. The toolkit is designed for the adaptation of evidence synthesis rather than primary research. The accompanying glossary provides descriptions of meanings for HTA adaptation terms from HTA agencies across Europe. It seeks to highlight differences in the use and understanding of each word by HTA agencies. The toolkit and glossary are available for use by all HTA agencies and can be accessed via www.eunethta.net/. These resources have been developed to help HTA agencies make better use of HTA reports produced elsewhere. They can be used by policy-makers and clinicians to aid in understanding HTA reports written for other contexts. The main implication of this work is that there is the potential for the adaptation of HTA reports and, if utilised, this should release resources to enable the development of further HTA reports. Recommendations for the further development of the toolkit include the potential to develop an interactive web-based version and to extend the toolkit to facilitate the adaptation of HTA reports on diagnostic testing and screening.

  13. Automated Ontology Alignment with Fuselets for Community of Interest (COI) Integration

    DTIC Science & Technology

    2008-09-01

    Report excerpts: Figure 7, Federated Search Example; Figure 8, Federated Search Example Revisited. Integrating information from various sources through a single query is the traditional federated search problem, where the sources don't... For the data sources in the example, the ontologies align in a fairly straightforward manner.

  14. Sierra Toolkit Manual Version 4.48.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierra Toolkit Team

    This report provides documentation for the SIERRA Toolkit (STK) modules. STK modules are intended to provide infrastructure that assists the development of computational engineering software such as finite-element analysis applications. STK includes modules for unstructured-mesh data structures, reading/writing mesh files, geometric proximity search, and various utilities. This document contains a chapter for each module, and each chapter contains overview descriptions and usage examples. Usage examples are primarily code listings which are generated from working test programs that are included in the STK code-base. A goal of this approach is to ensure that the usage examples will not fall out of date.

  15. GIRAF: a method for fast search and flexible alignment of ligand binding interfaces in proteins at atomic resolution

    PubMed Central

    Kinjo, Akira R.; Nakamura, Haruki

    2012-01-01

    Comparison and classification of protein structures are fundamental means to understand protein functions. Due to the computational difficulty and the ever-increasing amount of structural data, however, it is in general not feasible to perform exhaustive all-against-all structure comparisons necessary for comprehensive classifications. To efficiently handle such situations, we have previously proposed a method, now called GIRAF. We herein describe further improvements in the GIRAF protein structure search and alignment method. The GIRAF method achieves extremely efficient search of similar structures of ligand binding sites of proteins by exploiting database indexing of structural features of local coordinate frames. In addition, it produces refined atom-wise alignments by iterative applications of the Hungarian method to the bipartite graph defined for a pair of superimposed structures. By combining the refined alignments based on different local coordinate frames, it is made possible to align structures involving domain movements. We provide detailed accounts for the database design, the search and alignment algorithms as well as some benchmark results. PMID:27493524
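
    The atom-wise refinement step described above applies the Hungarian method to a bipartite graph of superimposed atoms. A minimal sketch of that idea, using scipy's assignment solver on hypothetical coordinates (not the GIRAF code), is shown below.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        # Toy coordinates of atoms from two superimposed binding sites (hypothetical data).
        site_a = np.array([[0.0, 0.0, 0.0], [1.5, 0.2, 0.0], [3.1, 0.0, 0.1]])
        site_b = np.array([[0.1, 0.1, 0.0], [3.0, 0.1, 0.0], [1.4, 0.3, 0.1]])

        # Cost matrix = pairwise Euclidean distances; the Hungarian method picks the
        # one-to-one atom correspondence with minimum total distance.
        cost = np.linalg.norm(site_a[:, None, :] - site_b[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(cost)
        for i, j in zip(rows, cols):
            print(f"atom {i} in site A <-> atom {j} in site B (d = {cost[i, j]:.2f})")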

  16. IBiSA_Tools: A Computational Toolkit for Ion-Binding State Analysis in Molecular Dynamics Trajectories of Ion Channels.

    PubMed

    Kasahara, Kota; Kinoshita, Kengo

    2016-01-01

    Ion conduction mechanisms of ion channels are a long-standing conundrum. Although the molecular dynamics (MD) method has been extensively used to simulate ion conduction dynamics at the atomic level, analysis and interpretation of MD results are not straightforward due to complexity of the dynamics. In our previous reports, we proposed an analytical method called ion-binding state analysis to scrutinize and summarize ion conduction mechanisms by taking advantage of a variety of analytical protocols, e.g., the complex network analysis, sequence alignment, and hierarchical clustering. This approach effectively revealed the ion conduction mechanisms and their dependence on the conditions, i.e., ion concentration and membrane voltage. Here, we present an easy-to-use computational toolkit for ion-binding state analysis, called IBiSA_tools. This toolkit consists of a C++ program and a series of Python and R scripts. From the trajectory file of MD simulations and a structure file, users can generate several images and statistics of ion conduction processes. A complex network named ion-binding state graph is generated in a standard graph format (graph modeling language; GML), which can be visualized by standard network analyzers such as Cytoscape. As a tutorial, a trajectory of a 50 ns MD simulation of the Kv1.2 channel is also distributed with the toolkit. Users can trace the entire process of ion-binding state analysis step by step. The novel method for analysis of ion conduction mechanisms of ion channels can be easily used by means of IBiSA_tools. This software is distributed under an open source license at the following URL: http://www.ritsumei.ac.jp/~ktkshr/ibisa_tools/.
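
    Since the toolkit reports the ion-binding state graph in GML, a small sketch of producing such a graph with networkx may help picture the data structure. The states, transition counts and file name below are hypothetical, and this is not IBiSA_tools output.

        import networkx as nx

        # Toy ion-binding state graph: nodes are binding-site occupancy states observed
        # along an MD trajectory; edge weights count observed transitions.
        G = nx.DiGraph()
        G.add_edge("S0:S2", "S1:S3", weight=42)
        G.add_edge("S1:S3", "S0:S2", weight=37)
        G.add_edge("S1:S3", "S2:S4", weight=11)

        nx.write_gml(G, "ion_binding_states.gml")  # GML file loadable in Cytoscape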

  17. Query-seeded iterative sequence similarity searching improves selectivity 5–20-fold

    PubMed Central

    Li, Weizhong; Lopez, Rodrigo

    2017-01-01

    Abstract Iterative similarity search programs, like psiblast, jackhmmer, and psisearch, are much more sensitive than pairwise similarity search methods like blast and ssearch because they build a position specific scoring model (a PSSM or HMM) that captures the pattern of sequence conservation characteristic to a protein family. But models are subject to contamination; once an unrelated sequence has been added to the model, homologs of the unrelated sequence will also produce high scores, and the model can diverge from the original protein family. Examination of alignment errors during psiblast PSSM contamination suggested a simple strategy for dramatically reducing PSSM contamination. psiblast PSSMs are built from the query-based multiple sequence alignment (MSA) implied by the pairwise alignments between the query model (PSSM, HMM) and the subject sequences in the library. When the original query sequence residues are inserted into gapped positions in the aligned subject sequence, the resulting PSSM rarely produces alignment over-extensions or alignments to unrelated sequences. This simple step, which tends to anchor the PSSM to the original query sequence and slightly increase target percent identity, can reduce the frequency of false-positive alignments more than 20-fold compared with psiblast and jackhmmer, with little loss in search sensitivity. PMID:27923999
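
    The key trick, reinserting the original query residues at gapped positions of each aligned subject before building the query-anchored MSA, can be sketched directly. The Python below is illustrative only (toy pairwise alignment, not the psiblast/psisearch code).

        def seed_with_query(query_aln, subject_aln):
            """Given one pairwise alignment (query row and subject row of equal length,
            '-' = gap), return the subject in query coordinates, with query residues
            substituted wherever the subject is gapped; query-gap columns are dropped."""
            seeded = []
            for q, s in zip(query_aln, subject_aln):
                if q == '-':
                    continue                    # insertion relative to the query
                seeded.append(s if s != '-' else q)
            return ''.join(seeded)

        query_aln   = "MKT-AYIAKQR"
        subject_aln = "MRTWAY--KQR"
        print(seed_with_query(query_aln, subject_aln))  # MRTAYIAKQR: gapped positions filled from the query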

  18. The Bio-Community Perl toolkit for microbial ecology.

    PubMed

    Angly, Florent E; Fields, Christopher J; Tyson, Gene W

    2014-07-01

    The development of bioinformatic solutions for microbial ecology in Perl is limited by the lack of modules to represent and manipulate microbial community profiles from amplicon and meta-omics studies. Here we introduce Bio-Community, an open-source, collaborative toolkit that extends BioPerl. Bio-Community interfaces with commonly used programs using various file formats, including BIOM, and provides operations such as rarefaction and taxonomic summaries. Bio-Community will help bioinformaticians to quickly piece together custom analysis pipelines and develop novel software. Availability and implementation: Bio-Community is cross-platform Perl code available from http://search.cpan.org/dist/Bio-Community under the Perl license. A readme file describes software installation and how to contribute. © The Author 2014. Published by Oxford University Press.
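
    Rarefaction, one of the operations mentioned, simply subsamples each community to a common depth without replacement. The sketch below is plain Python on a hypothetical community, not the Bio-Community Perl API.

        import random

        def rarefy(counts, depth, seed=42):
            """Subsample a community (taxon -> count) to a fixed number of reads
            without replacement; returns a new taxon -> count dict."""
            pool = [taxon for taxon, n in counts.items() for _ in range(n)]
            if depth > len(pool):
                raise ValueError("requested depth exceeds community size")
            random.seed(seed)
            out = {}
            for taxon in random.sample(pool, depth):
                out[taxon] = out.get(taxon, 0) + 1
            return out

        community = {"Bacteroides": 120, "Prevotella": 45, "Faecalibacterium": 35}
        print(rarefy(community, depth=100))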

  19. Machine learning for a Toolkit for Image Mining

    NASA Technical Reports Server (NTRS)

    Delanoy, Richard L.

    1995-01-01

    A prototype user environment is described that enables a user with very limited computer skills to collaborate with a computer algorithm to develop search tools (agents) that can be used for image analysis, creating metadata for tagging images, searching for images in an image database on the basis of image content, or as a component of computer vision algorithms. Agents are learned in an ongoing, two-way dialogue between the user and the algorithm. The user points to mistakes made in classification. The algorithm, in response, attempts to discover which image attributes are discriminating between objects of interest and clutter. It then builds a candidate agent and applies it to an input image, producing an 'interest' image highlighting features that are consistent with the set of objects and clutter indicated by the user. The dialogue repeats until the user is satisfied. The prototype environment, called the Toolkit for Image Mining (TIM), is currently capable of learning spectral and textural patterns. Learning exhibits rapid convergence to reasonable levels of performance and, when thoroughly trained, appears to be competitive in discrimination accuracy with other classification techniques.

  20. Internet Librarian '98: Proceedings of the Internet Librarian Conference (2nd, Monterey, California, November 1-5, 1998).

    ERIC Educational Resources Information Center

    Nixon, Carol, Comp.; Dengler, M. Heide, Comp.; McHenry, Mare L., Comp.

    This proceedings contains 56 papers, presentation summaries, and/or slide presentations pertaining to the Internet, World Wide Web, intranets, and library systems. Topics include: Web databases in medium sized libraries; Dow Jones Intranet Toolkit; the future of online; Web searching and Internet basics; digital archiving; evolution of the online…

  1. Mansonella ozzardi mitogenome and pseudogene characterisation provides new perspectives on filarial parasite systematics and CO-1 barcoding.

    PubMed

    Crainey, James Lee; Marín, Michel Abanto; Silva, Túllio Romão Ribeiro da; de Medeiros, Jansen Fernandes; Pessoa, Felipe Arley Costa; Santos, Yago Vinícius; Vicente, Ana Carolina Paulo; Luz, Sérgio Luiz Bessa

    2018-04-18

    Despite the broad distribution of M. ozzardi in Latin America and the Caribbean, there is still very little DNA sequence data available to study this neglected parasite's epidemiology. Mitochondrial DNA (mtDNA) sequences, especially the cytochrome oxidase (CO1) gene's barcoding region, have been targeted successfully for filarial diagnostics and for epidemiological, ecological and evolutionary studies. MtDNA-based studies can, however, be compromised by unrecognised mitochondrial pseudogenes, such as Numts. Here, we have used shotgun Illumina-HiSeq sequencing to recover the first complete Mansonella genus mitogenome and to identify several mitochondrial-origin pseudogenes. Mitogenome phylogenetic analysis placed M. ozzardi in the Onchocercidae "ONC5" clade and suggested that Mansonella parasites are more closely related to Wuchereria and Brugia genera parasites than they are to Loa genus parasites. DNA sequence alignments, BLAST searches and conceptual translations have been used to complement phylogenetic analysis, showing that M. ozzardi from the Amazon and Caribbean regions are near-identical and that previously reported Peruvian M. ozzardi CO1 reference sequences are probably of pseudogene origin. In addition to adding a much-needed resource to the Mansonella genus's molecular tool-kit and providing evidence that some M. ozzardi CO1 sequence deposits are pseudogenes, our results suggest that all Neotropical M. ozzardi parasites are closely related.

  2. Robust, Globally Consistent, and Fully-automatic Multi-image Registration and Montage Synthesis for 3-D Multi-channel Images

    PubMed Central

    Tsai, Chia-Ling; Lister, James P.; Bjornsson, Christopher J; Smith, Karen; Shain, William; Barnes, Carol A.; Roysam, Badrinath

    2013-01-01

    The need to map regions of brain tissue that are much wider than the field of view of the microscope arises frequently. One common approach is to collect a series of overlapping partial views, and align them to synthesize a montage covering the entire region of interest. We present a method that advances this approach in multiple ways. Our method (1) produces a globally consistent joint registration of an unorganized collection of 3-D multi-channel images with or without stage micrometer data; (2) produces accurate registrations withstanding changes in scale, rotation, translation and shear by using a 3-D affine transformation model; (3) achieves complete automation, and does not require any parameter settings; (4) handles low and variable overlaps (5 – 15%) between adjacent images, minimizing the number of images required to cover a tissue region; (5) has the self-diagnostic ability to recognize registration failures instead of delivering incorrect results; (6) can handle a broad range of biological images by exploiting generic alignment cues from multiple fluorescence channels without requiring segmentation; and (7) is computationally efficient enough to run on desktop computers regardless of the number of images. The algorithm was tested with several tissue samples of at least 50 image tiles, involving over 5,000 image pairs. It correctly registered all image pairs with an overlap greater than 7%, correctly recognized all failures, and successfully joint-registered all images for all tissue samples studied. This algorithm is disseminated freely to the community as included with the FARSIGHT toolkit for microscopy (www.farsight-toolkit.org). PMID:21361958

  3. A novel mutation in PRPF31, causative of autosomal dominant retinitis pigmentosa, using the BGISEQ-500 sequencer.

    PubMed

    Zheng, Yu; Wang, Hai-Lin; Li, Jian-Kang; Xu, Li; Tellier, Laurent; Li, Xiao-Lin; Huang, Xiao-Yan; Li, Wei; Niu, Tong-Tong; Yang, Huan-Ming; Zhang, Jian-Guo; Liu, Dong-Ning

    2018-01-01

    To study the genes responsible for retinitis pigmentosa. A total of 15 Chinese families with retinitis pigmentosa, containing 94 sporadically afflicted cases, were recruited. The targeted sequences were captured using the Target_Eye_365_V3 chip and sequenced using the BGISEQ-500 sequencer, according to the manufacturer's instructions. Data were aligned to UCSC Genome Browser build hg19, using the Burrows-Wheeler Aligner MEM algorithm. Local realignment was performed with the Genome Analysis Toolkit (GATK v.3.3.0) IndelRealigner, and variants were called with the Genome Analysis Toolkit HaplotypeCaller, without any use of imputation. Variants were filtered against a panel derived from 1000 Genomes Project, 1000G_ASN, ESP6500, ExAC and dbSNP138. In all members of Family ONE and Family TWO with available DNA samples, the genetic variant was validated using Sanger sequencing. A novel, pathogenic variant of retinitis pigmentosa, c.357_358delAA (p.Ser119SerfsX5) was identified in PRPF31 in 2 of 15 autosomal-dominant retinitis pigmentosa (ADRP) families, as well as in one, sporadic case. Sanger sequencing was performed upon probands, as well as upon other family members. This novel, pathogenic genotype co-segregated with retinitis pigmentosa phenotype in these two families. ADRP is a subtype of retinitis pigmentosa, defined by its genotype, which accounts for 20%-40% of the retinitis pigmentosa patients. Our study thus expands the spectrum of PRPF31 mutations known to occur in ADRP, and provides further demonstration of the applicability of the BGISEQ500 sequencer for genomics research.
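
    A minimal sketch of the final filtering step, dropping candidate variants that are already common in population panels, is given below. It is illustrative Python over hypothetical allele-frequency annotations and a hypothetical cutoff, not the pipeline's actual commands.

        # Keep only rare candidate variants: drop anything whose allele frequency in any
        # annotated population panel exceeds a cutoff. Records and frequencies are toy data.
        PANELS = ("1000g_af", "esp6500_af", "exac_af")

        def filter_rare(variants, max_af=0.005):
            kept = []
            for v in variants:
                if all(v.get(panel, 0.0) <= max_af for panel in PANELS):
                    kept.append(v)
            return kept

        variants = [
            {"id": "c.357_358delAA", "1000g_af": 0.0, "exac_af": 0.0},
            {"id": "common_variant", "1000g_af": 0.12, "exac_af": 0.10},
        ]
        print([v["id"] for v in filter_rare(variants)])   # ['c.357_358delAA']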

  4. A review of midwifery in Mongolia utilising the 'Strengthening Midwifery Toolkit'.

    PubMed

    Kildea, Sue; Larsson, Margareta; Govind, Salik

    2012-12-01

    The World Health Organization (WHO) developed the 'Strengthening Midwifery Toolkit' in response to an international emphasis on increasing midwifery's role in providing maternal newborn health services. It was used to assist a review of midwifery in Mongolia. A rapid situational assessment included site visits to eight health facilities and four educational institutions, resulting in 30 key informant interviews and six focus group discussions (67 midwives and students). A desk review of pertinent documents (n=19) was undertaken. Data collected included assessments of: midwife competency (n=96), scope of practice (n=2), health facilities (n=8), educational institutions (n=4), legislation and regulation (n=1), and midwifery (n=4), Feldsher-Nurse (n=4) and Bachelor-Nurse (n=1) curricula. Stakeholders in Mongolia are committed to strengthening midwifery across the country to better align with international standards. This requires a long-term investment in reorientating the health workforce and educational institutions, regulatory changes, educational investment, and job description changes that will impact on other maternal newborn health service providers. Additional support and incentives for providers in rural and remote areas are needed; investment in health facilities to enable appropriate infection control, and adequate provision of essential equipment and drugs, are also important strategies needed to protect staff. Maternity emergency training is required across the country. The Midwifery Toolkit was adapted to suit the local context and provided an excellent framework for this review. Copyright © 2011 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.

  5. G-LoSA for Prediction of Protein-Ligand Binding Sites and Structures.

    PubMed

    Lee, Hui Sun; Im, Wonpil

    2017-01-01

    Recent advances in high-throughput structure determination and computational protein structure prediction have significantly enriched the universe of protein structure. However, there is still a large gap between the number of available protein structures and that of proteins with annotated function in high accuracy. Computational structure-based protein function prediction has emerged to reduce this knowledge gap. The identification of a ligand binding site and its structure is critical to the determination of a protein's molecular function. We present a computational methodology for predicting small molecule ligand binding site and ligand structure using G-LoSA, our protein local structure alignment and similarity measurement tool. All the computational procedures described here can be easily implemented using G-LoSA Toolkit, a package of standalone software programs and preprocessed PDB structure libraries. G-LoSA and G-LoSA Toolkit are freely available to academic users at http://compbio.lehigh.edu/GLoSA . We also illustrate a case study to show the potential of our template-based approach harnessing G-LoSA for protein function prediction.

  6. SW#db: GPU-Accelerated Exact Sequence Similarity Database Search.

    PubMed

    Korpar, Matija; Šošić, Martin; Blažeka, Dino; Šikić, Mile

    2015-01-01

    In recent years we have witnessed a growth in sequencing yield, the number of samples sequenced, and, as a result, the growth of publicly maintained sequence databases. The increase in available data has put high demands on protein similarity search algorithms, with two opposing goals: how to keep the running times acceptable while maintaining a high-enough level of sensitivity. The most time-consuming step of similarity search is the local alignment between query and database sequences. This step is usually performed using exact local alignment algorithms such as Smith-Waterman. Due to its quadratic time complexity, alignments of a query to the whole database are usually too slow. Therefore, the majority of protein similarity search methods apply heuristics before the exact local alignment to reduce the number of possible candidate sequences in the database. However, there is still a need for the alignment of a query sequence to a reduced database. In this paper we present the SW#db tool and a library for fast exact similarity search. Although its running times, as a standalone tool, are comparable to the running times of BLAST, it is primarily intended to be used for the exact local alignment phase in which the database of sequences has already been reduced. It uses both GPU and CPU parallelization and was 4-5 times faster than SSEARCH, 6-25 times faster than CUDASW++ and more than 20 times faster than SSW at the time of writing, using multiple queries on Swiss-prot and Uniref90 databases.
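
    For reference, the exact local-alignment step that dominates the running time follows the classic Smith-Waterman recurrence. The compact, unoptimized Python sketch below uses a simple match/mismatch scheme with a linear gap penalty; it is illustrative only and is not the GPU/CPU-parallel SW#db code.

        def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
            """Best local alignment score between sequences a and b (linear gap penalty)."""
            rows, cols = len(a) + 1, len(b) + 1
            H = [[0] * cols for _ in range(rows)]
            best = 0
            for i in range(1, rows):
                for j in range(1, cols):
                    diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
                    best = max(best, H[i][j])
            return best

        print(smith_waterman_score("HEAGAWGHEE", "PAWHEAE"))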

  7. Improved alignment evaluation and optimization : final report.

    DOT National Transportation Integrated Search

    2007-09-11

    This report outlines the development of an enhanced highway alignment evaluation and optimization : model. A GIS-based software tool is prepared for alignment optimization that uses genetic algorithms for : optimal search. The software is capable of ...

  8. Searching for gravitational waves from compact binaries with precessing spins

    NASA Astrophysics Data System (ADS)

    Harry, Ian; Privitera, Stephen; Bohé, Alejandro; Buonanno, Alessandra

    2016-07-01

    Current searches for gravitational waves from compact-object binaries with the LIGO and Virgo observatories employ waveform models with spins aligned (or antialigned) with the orbital angular momentum. Here, we derive a new statistic to search for compact objects carrying generic (precessing) spins. Applying this statistic, we construct banks of both aligned- and generic-spin templates for binary black holes and neutron star-black hole binaries, and compare the effectualness of these banks towards simulated populations of generic-spin systems. We then use these banks in a pipeline analysis of Gaussian noise to measure the increase in background incurred by using generic- instead of aligned-spin banks. Although the generic-spin banks have roughly a factor of ten more templates than the aligned-spin banks, we find an overall improvement in signal recovery at a fixed false-alarm rate for systems with high-mass ratio and highly precessing spins. This gain in sensitivity comes at a small loss of sensitivity (≲4 %) for systems that are already well covered by aligned-spin templates. Since the observation of even a single binary merger with misaligned spins could provide unique astrophysical insights into the formation of these sources, we recommend that the method described here be developed further to mount a viable search for generic-spin binary mergers in LIGO/Virgo data.

  9. Improve homology search sensitivity of PacBio data by correcting frameshifts.

    PubMed

    Du, Nan; Sun, Yanni

    2016-09-01

    Single-molecule, real-time sequencing (SMRT) developed by Pacific BioSciences produces longer reads than secondary generation sequencing technologies such as Illumina. The long read length enables PacBio sequencing to close gaps in genome assembly, reveal structural variations, and identify gene isoforms with higher accuracy in transcriptomic sequencing. However, PacBio data has a high sequencing error rate and most of the errors are insertion or deletion errors. During alignment-based homology search, insertion or deletion errors in genes will cause frameshifts and may only lead to marginal alignment scores and short alignments. As a result, it is hard to distinguish true alignments from random alignments and the ambiguity will incur errors in structural and functional annotation. Existing frameshift correction tools are designed for data with much lower error rate and are not optimized for PacBio data. As an increasing number of groups are using SMRT, there is an urgent need for dedicated homology search tools for PacBio data. In this work, we introduce Frame-Pro, a profile homology search tool for PacBio reads. Our tool corrects sequencing errors and also outputs the profile alignments of the corrected sequences against characterized protein families. We applied our tool to both simulated and real PacBio data. The results showed that our method enables more sensitive homology search, especially for PacBio data sets of low sequencing coverage. In addition, we can correct more errors when compared with a popular error correction tool that does not rely on hybrid sequencing. The source code is freely available at https://sourceforge.net/projects/frame-pro/. Contact: yannisun@msu.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  10. PF2fit: Polar Fast Fourier Matched Alignment of Atomistic Structures with 3D Electron Microscopy Maps.

    PubMed

    Bettadapura, Radhakrishna; Rasheed, Muhibur; Vollrath, Antje; Bajaj, Chandrajit

    2015-10-01

    There continue to be increasing occurrences of both atomistic structure models in the PDB (possibly reconstructed from X-ray diffraction or NMR data), and 3D reconstructed cryo-electron microscopy (3D EM) maps (albeit at coarser resolution) of the same or homologous molecule or molecular assembly, deposited in the EMDB. To obtain the best possible structural model of the molecule at the best achievable resolution, and without any missing gaps, one typically aligns (match and fits) the atomistic structure model with the 3D EM map. We discuss a new algorithm and generalized framework, named PF2fit (Polar Fast Fourier Fitting) for the best possible structural alignment of atomistic structures with 3D EM. While PF2fit enables only a rigid, six dimensional (6D) alignment method, it augments prior work on 6D X-ray structure and 3D EM alignment in multiple ways: Scoring. PF2fit includes a new scoring scheme that, in addition to rewarding overlaps between the volumes occupied by the atomistic structure and 3D EM map, rewards overlaps between the volumes complementary to them. We quantitatively demonstrate how this new complementary scoring scheme improves upon existing approaches. PF2fit also includes two scoring functions, the non-uniform exterior penalty and the skeleton-secondary structure score, and implements the scattering potential score as an alternative to traditional Gaussian blurring. Search. PF2fit utilizes a fast polar Fourier search scheme, whose main advantage is the ability to search over uniformly and adaptively sampled subsets of the space of rigid-body motions. PF2fit also implements a new reranking search and scoring methodology that considerably improves alignment metrics in results obtained from the initial search.
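
    The complementary scoring idea, rewarding overlap of the occupied volumes and of the complementary (empty) volumes, can be written down directly on boolean voxel grids. The numpy sketch below is illustrative only; the weights and toy grids are assumptions, and this is not the PF2fit scoring code.

        import numpy as np

        def complementary_overlap_score(structure_mask, em_mask, w_occ=1.0, w_comp=0.25):
            """structure_mask, em_mask: boolean 3-D arrays on a common grid marking voxels
            occupied by the placed atomistic model and by the thresholded EM map."""
            occupied_overlap = np.logical_and(structure_mask, em_mask).sum()
            complement_overlap = np.logical_and(~structure_mask, ~em_mask).sum()
            return w_occ * occupied_overlap + w_comp * complement_overlap

        # Toy 8x8x8 grids: a cubic "density" blob and a slightly shifted model placement.
        em = np.zeros((8, 8, 8), dtype=bool); em[2:6, 2:6, 2:6] = True
        model = np.zeros_like(em);            model[3:7, 2:6, 2:6] = True
        print(complementary_overlap_score(model, em))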

  11. PF2 fit: Polar Fast Fourier Matched Alignment of Atomistic Structures with 3D Electron Microscopy Maps

    PubMed Central

    Bettadapura, Radhakrishna; Rasheed, Muhibur; Vollrath, Antje; Bajaj, Chandrajit

    2015-01-01

    There continue to be increasing occurrences of both atomistic structure models in the PDB (possibly reconstructed from X-ray diffraction or NMR data), and 3D reconstructed cryo-electron microscopy (3D EM) maps (albeit at coarser resolution) of the same or homologous molecule or molecular assembly, deposited in the EMDB. To obtain the best possible structural model of the molecule at the best achievable resolution, and without any missing gaps, one typically aligns (match and fits) the atomistic structure model with the 3D EM map. We discuss a new algorithm and generalized framework, named PF2 fit (Polar Fast Fourier Fitting) for the best possible structural alignment of atomistic structures with 3D EM. While PF2 fit enables only a rigid, six dimensional (6D) alignment method, it augments prior work on 6D X-ray structure and 3D EM alignment in multiple ways: Scoring. PF2 fit includes a new scoring scheme that, in addition to rewarding overlaps between the volumes occupied by the atomistic structure and 3D EM map, rewards overlaps between the volumes complementary to them. We quantitatively demonstrate how this new complementary scoring scheme improves upon existing approaches. PF2 fit also includes two scoring functions, the non-uniform exterior penalty and the skeleton-secondary structure score, and implements the scattering potential score as an alternative to traditional Gaussian blurring. Search. PF2 fit utilizes a fast polar Fourier search scheme, whose main advantage is the ability to search over uniformly and adaptively sampled subsets of the space of rigid-body motions. PF2 fit also implements a new reranking search and scoring methodology that considerably improves alignment metrics in results obtained from the initial search. PMID:26469938

  12. Highly Asynchronous VisitOr Queue Graph Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pearce, R.

    2012-10-01

    HAVOQGT is a C++ framework that can be used to create highly parallel graph traversal algorithms. The framework stores the graph and algorithmic data structures on external memory that is typically mapped to high performance locally attached NAND FLASH arrays. The framework supports a vertex-centered visitor programming model. The framework has been used to implement breadth-first search, connected components, and single source shortest path.
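
    The vertex-centered visitor model can be pictured as a queue of small visitor objects, each deciding at its target vertex whether to update state and spawn visitors to the neighbours. The Python sketch below illustrates the pattern for breadth-first search; it is not HAVOQGT's C++ API and ignores the external-memory and parallel aspects.

        from collections import deque

        def visitor_bfs(adjacency, source):
            """Each visitor carries a proposed level for one vertex; it 'succeeds' only
            if it improves that vertex, then spawns visitors to the neighbours."""
            level = {v: float("inf") for v in adjacency}
            queue = deque([(source, 0)])          # visitor = (target vertex, proposed level)
            while queue:
                vertex, proposed = queue.popleft()
                if proposed >= level[vertex]:
                    continue                      # visit rejected: no improvement
                level[vertex] = proposed          # visit accepted: update vertex state
                for neighbour in adjacency[vertex]:
                    queue.append((neighbour, proposed + 1))
            return level

        graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
        print(visitor_bfs(graph, 0))              # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}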

  13. NYC311 Mobile App | City of New York

    Science.gov Websites

    The NYC311 app is available for free for iPhone.

  14. Flare Frequency Distribution at Low Energies in GALEX UV

    NASA Astrophysics Data System (ADS)

    Million, Chase; Fleming, Scott W.; Osten, Rachel A.; Brasseur, Clara; Bianchi, Luciana; Shiao, Bernie; Loyd, R. O. Parke; Shkolnik, Evgenya L.

    2018-06-01

    The gPhoton toolkit and database of GALEX photon events permits measurement of flares with energies as small as ~10^27 ergs in the two GALEX UV bandpasses. Following a previously reported search for flaring on several thousand M dwarfs observed by GALEX, we present initial results on the flare frequency as a function of energy and stellar type at energies < 10^32 ergs.

  15. Contrast Analysis for Side-Looking Sonar

    DTIC Science & Technology

    2013-09-30

    bound for shadow depth that can be used to validate modeling tools such as SWAT (Shallow Water Acoustics Toolkit). • Adaptive Postprocessing: Tune image...

  16. Prevention literacy: community-based advocacy for access and ownership of the HIV prevention toolkit.

    PubMed

    Parker, Richard G; Perez-Brumer, Amaya; Garcia, Jonathan; Gavigan, Kelly; Ramirez, Ana; Milnor, Jack; Terto, Veriano

    2016-01-01

    Critical technological advances have yielded a toolkit of HIV prevention strategies. This literature review sought to provide contextual and historical reflection needed to bridge the conceptual gap between clinical efficacy and community effectiveness (i.e. knowledge and usage) of existing HIV prevention options, especially in resource-poor settings. Between January 2015 and October 2015, we reviewed scholarly and grey literatures to define treatment literacy and health literacy and assess the current need for literacy related to HIV prevention. The review included searches in electronic databases including MEDLINE, PsycINFO, PubMed, and Google Scholar. Permutations of the following search terms were used: "treatment literacy," "treatment education," "health literacy," and "prevention literacy." Through an iterative process of analyses and searches, titles and/or abstracts and reference lists of retrieved articles were reviewed for additional articles, and historical content analyses of grey literature and websites were additionally conducted. Treatment literacy was a well-established concept developed in the global South, which was later partially adopted by international agencies such as the World Health Organization. Treatment literacy emerged as more effective antiretroviral therapies became available. Developed from popular pedagogy and grassroots efforts during an intense struggle for treatment access, treatment literacy addressed the need to extend access to underserved communities and low-income settings that might otherwise be excluded from access. In contrast, prevention literacy is absent in the recent surge of new biomedical prevention strategies; prevention literacy was scarcely referenced and undertheorized in the available literature. Prevention efforts today include multimodal techniques, which jointly comprise a toolkit of biomedical, behavioural, and structural/environmental approaches. However, linkages to community advocacy and mobilization efforts are limited and unsustainable. Success of prevention efforts depends on equity of access, community-based ownership, and multilevel support structures to enable usage and sustainability. For existing HIV prevention efforts to be effective in "real-world" settings, with limited resources, reflection on historical lessons and contextual realities (i.e. policies, financial constraints, and biomedical patents) indicated the need to extend principles developed for treatment access and treatment literacy, to support prevention literacy and prevention access as an integral part of the global response to HIV.

  17. Reasoning over taxonomic change: exploring alignments for the Perelleschus use case.

    PubMed

    Franz, Nico M; Chen, Mingmin; Yu, Shizhuo; Kianmajd, Parisa; Bowers, Shawn; Ludäscher, Bertram

    2015-01-01

    Classifications and phylogenetic inferences of organismal groups change in light of new insights. Over time these changes can result in an imperfect tracking of taxonomic perspectives through the re-/use of Code-compliant or informal names. To mitigate these limitations, we introduce a novel approach for aligning taxonomies through the interaction of human experts and logic reasoners. We explore the performance of this approach with the Perelleschus use case of Franz & Cardona-Duque (2013). The use case includes six taxonomies published from 1936 to 2013, 54 taxonomic concepts (i.e., circumscriptions of names individuated according to their respective source publications), and 75 expert-asserted Region Connection Calculus articulations (e.g., congruence, proper inclusion, overlap, or exclusion). An Open Source reasoning toolkit is used to analyze 13 paired Perelleschus taxonomy alignments under heterogeneous constraints and interpretations. The reasoning workflow optimizes the logical consistency and expressiveness of the input and infers the set of maximally informative relations among the entailed taxonomic concepts. The latter are then used to produce merge visualizations that represent all congruent and non-congruent taxonomic elements among the aligned input trees. In this small use case with 6-53 input concepts per alignment, the information gained through the reasoning process is on average one order of magnitude greater than in the input. The approach offers scalable solutions for tracking provenance among succeeding taxonomic perspectives that may have differential biases in naming conventions, phylogenetic resolution, ingroup and outgroup sampling, or ostensive (member-referencing) versus intensional (property-referencing) concepts and articulations.

  18. A novel mutation in PRPF31, causative of autosomal dominant retinitis pigmentosa, using the BGISEQ-500 sequencer

    PubMed Central

    Zheng, Yu; Wang, Hai-Lin; Li, Jian-Kang; Xu, Li; Tellier, Laurent; Li, Xiao-Lin; Huang, Xiao-Yan; Li, Wei; Niu, Tong-Tong; Yang, Huan-Ming; Zhang, Jian-Guo; Liu, Dong-Ning

    2018-01-01

    AIM To study the genes responsible for retinitis pigmentosa. METHODS A total of 15 Chinese families with retinitis pigmentosa, containing 94 sporadically afflicted cases, were recruited. The targeted sequences were captured using the Target_Eye_365_V3 chip and sequenced using the BGISEQ-500 sequencer, according to the manufacturer's instructions. Data were aligned to UCSC Genome Browser build hg19, using the Burrows-Wheeler Aligner MEM algorithm. Local realignment was performed with the Genome Analysis Toolkit (GATK v.3.3.0) IndelRealigner, and variants were called with the Genome Analysis Toolkit HaplotypeCaller, without any use of imputation. Variants were filtered against a panel derived from 1000 Genomes Project, 1000G_ASN, ESP6500, ExAC and dbSNP138. In all members of Family ONE and Family TWO with available DNA samples, the genetic variant was validated using Sanger sequencing. RESULTS A novel, pathogenic variant of retinitis pigmentosa, c.357_358delAA (p.Ser119SerfsX5) was identified in PRPF31 in 2 of 15 autosomal-dominant retinitis pigmentosa (ADRP) families, as well as in one, sporadic case. Sanger sequencing was performed upon probands, as well as upon other family members. This novel, pathogenic genotype co-segregated with retinitis pigmentosa phenotype in these two families. CONCLUSION ADRP is a subtype of retinitis pigmentosa, defined by its genotype, which accounts for 20%-40% of the retinitis pigmentosa patients. Our study thus expands the spectrum of PRPF31 mutations known to occur in ADRP, and provides further demonstration of the applicability of the BGISEQ500 sequencer for genomics research. PMID:29375987

  19. SANSparallel: interactive homology search against Uniprot

    PubMed Central

    Somervuo, Panu; Holm, Liisa

    2015-01-01

    Proteins evolve by mutations and natural selection. The network of sequence similarities is a rich source for mining homologous relationships that inform on protein structure and function. There are many servers available to browse the network of homology relationships but one has to wait up to a minute for results. The SANSparallel webserver provides protein sequence database searches with immediate response and professional alignment visualization by third-party software. The output is a list, pairwise alignment or stacked alignment of sequence-similar proteins from Uniprot, UniRef90/50, Swissprot or Protein Data Bank. The stacked alignments are viewed in Jalview or as sequence logos. The database search uses the suffix array neighborhood search (SANS) method, which has been re-implemented as a client-server, improved and parallelized. The method is extremely fast and as sensitive as BLAST above 50% sequence identity. Benchmarks show that the method is highly competitive compared to previously published fast database search programs: UBLAST, DIAMOND, LAST, LAMBDA, RAPSEARCH2 and BLAT. The web server can be accessed interactively or programmatically at http://ekhidna2.biocenter.helsinki.fi/cgi-bin/sans/sans.cgi. It can be used to make protein functional annotation pipelines more efficient, and it is useful in interactive exploration of the detailed evidence supporting the annotation of particular proteins of interest. PMID:25855811
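
    The underlying suffix-array idea — sort all suffixes of the database once, then treat a query's lexicographic neighbours as candidate homologs — can be sketched briefly. The Python below is a naive, illustrative toy (quadratic construction, made-up "database"), not the SANSparallel client-server code.

        import bisect

        def build_suffix_array(text):
            """Naive suffix array: indices of all suffixes of `text` in lexicographic order."""
            return sorted(range(len(text)), key=lambda i: text[i:])

        def neighborhood(text, suffix_array, query, k=3):
            """Return the k database suffixes lexicographically closest to `query`."""
            suffixes = [text[i:] for i in suffix_array]
            pos = bisect.bisect_left(suffixes, query)
            lo, hi = max(0, pos - k // 2), min(len(suffixes), pos + (k + 1) // 2)
            return suffixes[lo:hi]

        db = "MKTAYIAKQR$MRTAYLAKQE$MAAAKQR$"   # concatenated toy protein "database"
        sa = build_suffix_array(db)
        print(neighborhood(db, sa, "TAYIA"))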

  20. A scalable architecture for extracting, aligning, linking, and visualizing multi-Int data

    NASA Astrophysics Data System (ADS)

    Knoblock, Craig A.; Szekely, Pedro

    2015-05-01

    An analyst today has a tremendous amount of data available, but each of the various data sources typically exists in their own silos, so an analyst has limited ability to see an integrated view of the data and has little or no access to contextual information that could help in understanding the data. We have developed the Domain-Insight Graph (DIG) system, an innovative architecture for extracting, aligning, linking, and visualizing massive amounts of domain-specific content from unstructured sources. Under the DARPA Memex program we have already successfully applied this architecture to multiple application domains, including the enormous international problem of human trafficking, where we extracted, aligned and linked data from 50 million online Web pages. DIG builds on our Karma data integration toolkit, which makes it easy to rapidly integrate structured data from a variety of sources, including databases, spreadsheets, XML, JSON, and Web services. The ability to integrate Web services allows Karma to pull in live data from the various social media sites, such as Twitter, Instagram, and OpenStreetMaps. DIG then indexes the integrated data and provides an easy to use interface for query, visualization, and analysis.

  1. Business intelligence from social media: a study from the VAST Box Office Challenge.

    PubMed

    Lu, Yafeng; Wang, Feng; Maciejewski, Ross

    2014-01-01

    With over 16 million tweets per hour, 600 new blog posts per minute, and 400 million active users on Facebook, businesses have begun searching for ways to turn real-time consumer-based posts into actionable intelligence. The goal is to extract information from this noisy, unstructured data and use it for trend analysis and prediction. Current practices support the idea that visual analytics (VA) can help enable the effective analysis of such data. However, empirical evidence demonstrating the effectiveness of a VA solution is still lacking. A proposed VA toolkit extracts data from Bitly and Twitter to predict movie revenue and ratings. Results from the 2013 VAST Box Office Challenge demonstrate the benefit of an interactive environment for predictive analysis, compared to a purely statistical modeling approach. The VA approach used by the toolkit is generalizable to other domains involving social media data, such as sales forecasting and advertisement analysis.

  2. ISRNA: an integrative online toolkit for short reads from high-throughput sequencing data.

    PubMed

    Luo, Guan-Zheng; Yang, Wei; Ma, Ying-Ke; Wang, Xiu-Jie

    2014-02-01

    Integrative Short Reads NAvigator (ISRNA) is an online toolkit for analyzing high-throughput small RNA sequencing data. Besides the high-speed genome mapping function, ISRNA provides statistics for genomic location, length distribution and nucleotide composition bias analysis of sequence reads. Number of reads mapped to known microRNAs and other classes of short non-coding RNAs, coverage of short reads on genes, expression abundance of sequence reads as well as some other analysis functions are also supported. The versatile search functions enable users to select sequence reads according to their sub-sequences, expression abundance, genomic location, relationship to genes, etc. A specialized genome browser is integrated to visualize the genomic distribution of short reads. ISRNA also supports management and comparison among multiple datasets. ISRNA is implemented in Java/C++/Perl/MySQL and can be freely accessed at http://omicslab.genetics.ac.cn/ISRNA/.
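
    Two of the simpler statistics mentioned, read length distribution and nucleotide composition, are easy to sketch for a handful of reads. The Python below uses hypothetical small-RNA reads and is not the ISRNA implementation.

        from collections import Counter

        reads = ["TGAGGTAGTAGGTTGTATAGTT",    # toy small-RNA reads (hypothetical)
                 "TGAGGTAGTAGGTTGTGTGGTT",
                 "ACTGGCCTTGGAGCT"]

        length_distribution = Counter(len(r) for r in reads)
        nucleotide_composition = Counter(base for r in reads for base in r)
        total = sum(nucleotide_composition.values())

        print("length distribution:", dict(length_distribution))
        print("composition:", {b: round(n / total, 3) for b, n in nucleotide_composition.items()})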

  3. Search algorithm complexity modeling with application to image alignment and matching

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2014-05-01

    Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate search efficiency in application domains which involve searching over a hypothesis space of reference templates or models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over a geometric misalignment transform hypothesis space. We present numerical results validating the modeling assumptions and derived formulation.
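
    The penetration rate described above can be illustrated with a small Python sketch; the bin priors and the ranking rule are toy assumptions rather than the paper's derivation, and the analytic expectation is included only to check the simulation:

        # Illustrative only: the penetration rate is the expected fraction of a
        # hypothesis space that a prioritized search examines before reaching the
        # correct hypothesis.
        import random

        def analytic_penetration_rate(bin_priors):
            """Closed-form expectation: sum over bins of prior * (rank / number of bins)."""
            n = len(bin_priors)
            order = sorted(range(n), key=lambda i: -bin_priors[i])
            return sum(bin_priors[b] * (r + 1) / n for r, b in enumerate(order))

        def simulated_penetration_rate(bin_priors, trials=100_000, seed=0):
            rng = random.Random(seed)
            n = len(bin_priors)
            # Rank bins by prior matching probability, highest first, as a
            # prioritized hierarchical grid search would.
            order = sorted(range(n), key=lambda i: -bin_priors[i])
            rank_of_bin = {b: r for r, b in enumerate(order)}
            searched = 0.0
            for _ in range(trials):
                true_bin = rng.choices(range(n), weights=bin_priors)[0]
                # The search stops after examining every bin ranked at or above
                # the true bin.
                searched += (rank_of_bin[true_bin] + 1) / n
            return searched / trials

        if __name__ == "__main__":
            priors = [0.4, 0.25, 0.15, 0.1, 0.05, 0.05]   # toy bin probabilities
            print(f"analytic:  {analytic_penetration_rate(priors):.3f}")
            print(f"simulated: {simulated_penetration_rate(priors):.3f}")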

  4. Heuristics for multiobjective multiple sequence alignment.

    PubMed

    Abbasi, Maryam; Paquete, Luís; Pereira, Francisco B

    2016-07-15

    Aligning multiple sequences arises in many tasks in Bioinformatics. However, the alignments produced by the current software packages are highly dependent on the parameter settings, such as the relative importance of opening gaps with respect to the increase of similarity. Choosing only one parameter setting may introduce an undesirable bias in further steps of the analysis and give overly simplistic interpretations. In this work, we reformulate multiple sequence alignment from a multiobjective point of view. The goal is to generate several sequence alignments that represent a trade-off between maximizing the substitution score and minimizing the number of indels/gaps in the sum-of-pairs score function. This trade-off gives the practitioner further information about the similarity of the sequences, from which she can analyse and choose the most plausible alignment. We introduce several heuristic approaches, based on local search procedures, that compute a set of sequence alignments, which are representative of the trade-off between the two objectives (substitution score and indels). Several algorithm design options are discussed and analysed, with particular emphasis on the influence of the starting alignment and neighborhood search definitions on the overall performance. A perturbation technique is proposed to improve the local search, which provides a wide range of high-quality alignments. The proposed approach is tested experimentally on a wide range of instances. We performed several experiments with sequences obtained from the benchmark database BAliBASE 3.0. To evaluate the quality of the results, we calculate the hypervolume indicator of the set of score vectors returned by the algorithms. The results obtained allow us to identify reasonably good choices of parameters for our approach. Further, we compared our method in terms of the correctly aligned pairs ratio and the correctly aligned columns ratio with respect to reference alignments. Experimental results show that our approaches can obtain better results than TCoffee and Clustal Omega in terms of the first ratio.
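
    The hypervolume indicator used for evaluation can be sketched for the bi-objective case (maximize substitution score, minimize gaps); the reference point and the toy score vectors below are assumptions, not values from the paper:

        # Illustrative only: 2-D hypervolume of a set of alignment score vectors
        # (substitution score to maximize, gap count to minimize) relative to a
        # worst-case reference point.

        def hypervolume_2d(points, reference):
            """Area dominated by `points` with respect to `reference`.

            Each point is (substitution_score, gap_count); higher score and lower
            gap count are better.
            """
            ref_score, ref_gaps = reference
            # Keep only points that improve on the reference in both objectives.
            pts = [(s, g) for s, g in points if s > ref_score and g < ref_gaps]
            # Sweep by substitution score, descending; skip dominated points.
            pts.sort(key=lambda p: (-p[0], p[1]))
            volume = 0.0
            best_gaps = ref_gaps
            for score, gaps in pts:
                if gaps < best_gaps:                      # non-dominated point
                    volume += (score - ref_score) * (best_gaps - gaps)
                    best_gaps = gaps
            return volume

        if __name__ == "__main__":
            front = [(120, 30), (110, 22), (95, 15), (80, 11)]   # toy trade-off front
            print(hypervolume_2d(front, reference=(0, 60)))      # -> 5465.0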

  5. Gravitational wave searches for aligned-spin binary neutron stars using nonspinning templates

    NASA Astrophysics Data System (ADS)

    Cho, Hee-Suk; Lee, Chang-Hwan

    2018-01-01

    We study gravitational wave searches for merging binary neutron stars (NSs). We use nonspinning template waveforms to search for the signals emitted from aligned-spin NS-NS binaries, in which the spins of the NSs are aligned with the orbital angular momentum. We use the TaylorF2 waveform model, which can generate inspiral waveforms emitted from aligned-spin compact binaries. We employ the single effective spin parameter χeff to represent the effect of the two component spins (χ1, χ2) on the wave function. For a target system, we choose a binary consisting of equal component masses of 1.4 M⊙ and consider spins up to χi = 0.4. We investigate fitting factors of the nonspinning templates to evaluate their efficiency in gravitational wave searches for the aligned-spin NS-NS binaries. We find that the templates can achieve fitting factors exceeding 0.97 only for signals in the range -0.2 ≲ χeff ≲ 0. Therefore, we demonstrate the necessity of using aligned-spin templates in order not to lose the signals outside that range. We also show how much the recovered total mass can be biased from the true value depending on the spin of the signal.
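
    The fitting factor is the match (normalized overlap) between a signal and a template, maximized over the template bank. Below is a hedged Python/NumPy sketch with toy chirping sinusoids standing in for TaylorF2 waveforms; the waveform family and parameter grid are assumptions made purely for illustration:

        # Illustrative only: fitting factor as the maximum normalized inner product
        # of a signal against a bank of templates.
        import numpy as np

        def toy_waveform(f0, chirp, duration=1.0, rate=4096):
            """Toy linearly chirping sinusoid standing in for an inspiral waveform."""
            t = np.arange(0.0, duration, 1.0 / rate)
            return np.sin(2.0 * np.pi * (f0 * t + 0.5 * chirp * t**2))

        def overlap(a, b):
            """Normalized inner product of two waveforms of equal length."""
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        def fitting_factor(signal, template_bank):
            """Maximum overlap of the signal against every template in the bank."""
            return max(overlap(signal, h) for h in template_bank)

        if __name__ == "__main__":
            # "Signal" carries an extra chirp that the toy "nonspinning" bank lacks.
            signal = toy_waveform(f0=30.0, chirp=55.0)
            bank = [toy_waveform(f0=30.0, chirp=c) for c in np.linspace(40.0, 50.0, 21)]
            print(f"fitting factor: {fitting_factor(signal, bank):.3f}")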

  6. De novo identification of highly diverged protein repeats by probabilistic consistency.

    PubMed

    Biegert, A; Söding, J

    2008-03-15

    An estimated 25% of all eukaryotic proteins contain repeats, which underlines the importance of duplication for evolving new protein functions. Internal repeats often correspond to structural or functional units in proteins. Methods capable of identifying diverged repeated segments or domains at the sequence level can therefore assist in predicting domain structures, inferring hypotheses about function and mechanism, and investigating the evolution of proteins from smaller fragments. We present HHrepID, a method for the de novo identification of repeats in protein sequences. It is able to detect the sequence signature of structural repeats in many proteins that were not previously known to possess internal sequence symmetry, such as outer membrane beta-barrels. HHrepID uses HMM-HMM comparison to exploit evolutionary information in the form of multiple sequence alignments of homologs. In contrast to a previous method, the new method (1) generates a multiple alignment of repeats; (2) utilizes the transitive nature of homology through a novel merging procedure with fully probabilistic treatment of alignments; (3) improves alignment quality through an algorithm that maximizes the expected accuracy; (4) is able to identify different kinds of repeats within complex architectures by a probabilistic domain boundary detection method; and (5) improves sensitivity through a new approach to assess statistical significance. Server: http://toolkit.tuebingen.mpg.de/hhrepid; Executables: ftp://ftp.tuebingen.mpg.de/pub/protevo/HHrepID

  7. The Model Analyst’s Toolkit: Scientific Model Development, Analysis, and Validation

    DTIC Science & Technology

    2013-11-20

    Indexed excerpt fragments: "Granger causality F-test validation"; "Dynamic time warping for uneven temporal relationships: Many causal relationships are imperfectly..."; "...mapping for dynamic feedback models: Granger causality and DTW can identify causal relationships and consider complex temporal factors. However, many..."; "...variant of the tf-idf algorithm (Manning, Raghavan, Schutze et al., 2008), typically used in search engines, to 'score' features. The (-log tf) in..."

  8. VaST: A variability search toolkit

    NASA Astrophysics Data System (ADS)

    Sokolovsky, K. V.; Lebedev, A. A.

    2018-01-01

    Variability Search Toolkit (VaST) is a software package designed to find variable objects in a series of sky images. It can be run from a script or interactively using its graphical interface. VaST relies on source list matching as opposed to image subtraction. SExtractor is used to generate source lists and perform aperture or PSF-fitting photometry (with PSFEx). Variability indices that characterize scatter and smoothness of a lightcurve are computed for all objects. Candidate variables are identified as objects having high variability index values compared to other objects of similar brightness. The two distinguishing features of VaST are its ability to perform accurate aperture photometry of images obtained with non-linear detectors and handle complex image distortions. The software has been successfully applied to images obtained with telescopes ranging from 0.08 to 2.5 m in diameter equipped with a variety of detectors including CCD, CMOS, MIC and photographic plates. About 1800 variable stars have been discovered with VaST. It is used as a transient detection engine in the New Milky Way (NMW) nova patrol. The code is written in C and can be easily compiled on the majority of UNIX-like systems. VaST is free software available at http://scan.sai.msu.ru/vast/.
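
    Candidate selection by comparing a lightcurve's scatter with that of objects of similar mean brightness can be sketched as follows; this reproduces only the idea of one variability index, and the binning and threshold are assumptions, not VaST's implementation:

        # Illustrative only: flag objects whose magnitude scatter is high compared
        # with other objects of similar mean brightness.
        import statistics
        from collections import defaultdict

        def candidate_variables(lightcurves, bin_width=0.5, sigma_factor=3.0):
            """lightcurves: dict mapping object id -> list of magnitudes."""
            stats = {obj: (statistics.mean(mags), statistics.stdev(mags))
                     for obj, mags in lightcurves.items() if len(mags) > 2}
            # Group objects into mean-magnitude bins.
            bins = defaultdict(list)
            for obj, (mean_mag, scatter) in stats.items():
                bins[round(mean_mag / bin_width)].append(scatter)
            candidates = []
            for obj, (mean_mag, scatter) in stats.items():
                typical = statistics.median(bins[round(mean_mag / bin_width)])
                if scatter > sigma_factor * typical:
                    candidates.append(obj)
            return candidates

        if __name__ == "__main__":
            curves = {
                "star_1": [12.01, 12.02, 11.99, 12.00, 12.01],
                "star_2": [12.05, 12.04, 12.06, 12.05, 12.04],
                "var_1":  [12.00, 12.40, 11.70, 12.30, 11.80],   # obviously variable
            }
            print(candidate_variables(curves))                   # -> ['var_1']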

  9. TOBA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, Travis

    2007-01-26

    Toba is an extensible personal information retrieval system. It supports various plugins that the user uses to create and store bits of information. It comes configured to store meeting notes, task items, issues, and business development opportunities. Plugins could be written to support almost any kind of digital information, so with the right plugins Toba could become a full-fledged contact manager, project management application, programmer's toolkit, or almost any other type of data storage/search/retrieval application imaginable. Toba comes with a built-in command-line interface and, via a plugin, a full scripting language (Jython). The information stored can be searched by keyword or through SQL queries.

  10. Medicaid information technology architecture: an overview.

    PubMed

    Friedman, Richard H

    2006-01-01

    The Medicaid Information Technology Architecture (MITA) is a roadmap and tool-kit for States to transform their Medicaid Management Information System (MMIS) into an enterprise-wide, beneficiary-centric system. MITA will enable State Medicaid agencies to align their information technology (IT) opportunities with their evolving business needs. It also addresses long-standing issues of interoperability, adaptability, and data sharing, including clinical data, across organizational boundaries by creating models based on nationally accepted technical standards. Perhaps most significantly, MITA allows State Medicaid Programs to actively participate in the DHHS Secretary's vision of a transparent health care market that utilizes electronic health records (EHRs), ePrescribing and personal health records (PHRs).

  11. An Automated Ab Initio Framework for Identifying New Ferroelectrics

    NASA Astrophysics Data System (ADS)

    Smidt, Tess; Reyes-Lillo, Sebastian E.; Jain, Anubhav; Neaton, Jeffrey B.

    Ferroelectric materials have a wide range of technological applications including non-volatile RAM and optoelectronics. In this work, we present an automated first-principles search for ferroelectrics. We integrate density functional theory, crystal structure databases, symmetry tools, workflow software, and a custom analysis toolkit to build a library of known and proposed ferroelectrics. We screen thousands of candidates using symmetry relations between nonpolar and polar structure pairs. We use two search strategies: (1) polar-nonpolar pairs with the same composition and (2) polar-nonpolar structure type pairs. Results are automatically parsed, stored in a database, and made accessible via a web interface showing distortion animations and plots of polarization and total energy as a function of distortion. We benchmark our results against experimental data, present new ferroelectric candidates found through our search, and discuss future work on expanding this search methodology to other material classes such as anti-ferroelectrics and multiferroics.
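
    The first search strategy can be illustrated with a short sketch that pairs same-composition entries whose point groups differ in polarity; the ten polar crystallographic point groups are hard-coded, and the candidate entries are toy assumptions, not the actual workflow code:

        # Illustrative only: pair same-composition structures where one member has a
        # polar point group and the other does not (search strategy 1 in spirit).
        from collections import defaultdict

        POLAR_POINT_GROUPS = {"1", "2", "m", "mm2", "3", "3m", "4", "4mm", "6", "6mm"}

        def polar_nonpolar_pairs(entries):
            """entries: list of (composition, point_group, structure_id)."""
            by_composition = defaultdict(list)
            for composition, point_group, structure_id in entries:
                by_composition[composition].append((point_group, structure_id))
            pairs = []
            for composition, members in by_composition.items():
                polar = [m for m in members if m[0] in POLAR_POINT_GROUPS]
                nonpolar = [m for m in members if m[0] not in POLAR_POINT_GROUPS]
                for p in polar:
                    for q in nonpolar:
                        pairs.append((composition, p[1], q[1]))
            return pairs

        if __name__ == "__main__":
            candidates = [
                ("BaTiO3", "4mm", "struct-001"),    # polar (tetragonal phase)
                ("BaTiO3", "m-3m", "struct-002"),   # nonpolar (cubic phase)
                ("SiO2", "622", "struct-003"),      # nonpolar, no polar partner
            ]
            print(polar_nonpolar_pairs(candidates))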

  12. SANSparallel: interactive homology search against Uniprot.

    PubMed

    Somervuo, Panu; Holm, Liisa

    2015-07-01

    Proteins evolve by mutations and natural selection. The network of sequence similarities is a rich source for mining homologous relationships that inform on protein structure and function. There are many servers available to browse the network of homology relationships but one has to wait up to a minute for results. The SANSparallel webserver provides protein sequence database searches with immediate response and professional alignment visualization by third-party software. The output is a list, pairwise alignment or stacked alignment of sequence-similar proteins from Uniprot, UniRef90/50, Swissprot or Protein Data Bank. The stacked alignments are viewed in Jalview or as sequence logos. The database search uses the suffix array neighborhood search (SANS) method, which has been re-implemented as a client-server, improved and parallelized. The method is extremely fast and as sensitive as BLAST above 50% sequence identity. Benchmarks show that the method is highly competitive compared to previously published fast database search programs: UBLAST, DIAMOND, LAST, LAMBDA, RAPSEARCH2 and BLAT. The web server can be accessed interactively or programmatically at http://ekhidna2.biocenter.helsinki.fi/cgi-bin/sans/sans.cgi. It can be used to make protein functional annotation pipelines more efficient, and it is useful in interactive exploration of the detailed evidence supporting the annotation of particular proteins of interest. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
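
    The idea of a suffix-array lookup can be sketched as follows; SANS itself is far more elaborate (parallel, approximate, protein-aware), and the toy database and separator character here are assumptions:

        # Illustrative only: build a suffix array over concatenated database
        # sequences and locate exact occurrences of a query word by binary search.
        import bisect

        def build_suffix_array(text):
            """Suffix start positions, sorted by the suffixes they begin."""
            return sorted(range(len(text)), key=lambda i: text[i:])

        def find_matches(text, sa, pattern):
            """Positions where `pattern` occurs, via binary search over the suffixes."""
            suffixes = [text[i:] for i in sa]      # fine for a sketch; not memory-lean
            lo = bisect.bisect_left(suffixes, pattern)
            hi = bisect.bisect_right(suffixes, pattern + "\uffff")
            return sorted(sa[lo:hi])

        if __name__ == "__main__":
            database = "MKTAYIAKQR$MKLVINGKTLKG$"  # two toy protein sequences, '$'-separated
            sa = build_suffix_array(database)
            print(find_matches(database, sa, "KT"))  # -> [1, 18]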

  13. A Grassroots Remote Sensing Toolkit Using Live Coding, Smartphones, Kites and Lightweight Drones.

    PubMed

    Anderson, K; Griffiths, D; DeBell, L; Hancock, S; Duffy, J P; Shutler, J D; Reinhardt, W J; Griffiths, A

    2016-01-01

    This manuscript describes the development of an Android-based smartphone application for capturing aerial photographs and spatial metadata automatically, for use in grassroots mapping applications. The aim of the project was to exploit the plethora of on-board sensors within modern smartphones (accelerometer, GPS, compass, camera) to generate ready-to-use spatial data from lightweight aerial platforms such as drones or kites. A visual coding 'scheme blocks' framework was used to build the application ('app'), so that users could customise their own data capture tools in the field. The paper reports on the coding framework, then shows the results of test flights from kites and lightweight drones and finally shows how open-source geospatial toolkits were used to generate geographical information system (GIS)-ready GeoTIFF images from the metadata stored by the app. Two Android smartphones were used in testing: a high-specification OnePlus One handset and a lower-cost Acer Liquid Z3 handset, to test the operational limits of the app on phones with different sensor sets. We demonstrate that best results were obtained when the phone was attached to a stable single line kite or to a gliding drone. Results show that engine or motor vibrations from powered aircraft required dampening to ensure capture of high quality images. We demonstrate how the products generated from the open-source processing workflow are easily used in GIS. The app can be downloaded freely from the Google store by searching for 'UAV toolkit' (UAV toolkit 2016), and used wherever an Android smartphone and aerial platform are available to deliver rapid spatial data (e.g. in supporting decision-making in humanitarian disaster-relief zones, in teaching or for grassroots remote sensing and democratic mapping).

  14. The Role of Celestial Compass Information in Cataglyphis Ants during Learning Walks and for Neuroplasticity in the Central Complex and Mushroom Bodies

    PubMed Central

    Grob, Robin; Fleischmann, Pauline N.; Grübel, Kornelia; Wehner, Rüdiger; Rössler, Wolfgang

    2017-01-01

    Central place foragers are faced with the challenge to learn the position of their nest entrance in its surroundings, in order to find their way back home every time they go out to search for food. To acquire navigational information at the beginning of their foraging career, Cataglyphis noda performs learning walks during the transition from interior worker to forager. These small loops around the nest entrance are repeatedly interrupted by strikingly accurate back turns during which the ants stop and precisely gaze back to the nest entrance—presumably to learn the landmark panorama of the nest surroundings. However, as at this point the complete navigational toolkit is not yet available, the ants are in need of a reference system for the compass component of the path integrator to align their nest entrance-directed gazes. In order to find this directional reference system, we systematically manipulated the skylight information received by ants during learning walks in their natural habitat, as it has been previously suggested that the celestial compass, as part of the path integrator, might provide such a reference system. High-speed video analyses of distinct learning walk elements revealed that even exclusion from the skylight polarization pattern, UV-light spectrum and the position of the sun did not alter the accuracy of the look back to the nest behavior. We therefore conclude that C. noda uses a different reference system to initially align their gaze directions. However, a comparison of neuroanatomical changes in the central complex and the mushroom bodies before and after learning walks revealed that exposure to UV light together with a naturally changing polarization pattern was essential to induce neuroplasticity in these high-order sensory integration centers of the ant brain. This suggests a crucial role of celestial information, in particular a changing polarization pattern, in initially calibrating the celestial compass system. PMID:29184487

  15. The Role of Celestial Compass Information in Cataglyphis Ants during Learning Walks and for Neuroplasticity in the Central Complex and Mushroom Bodies.

    PubMed

    Grob, Robin; Fleischmann, Pauline N; Grübel, Kornelia; Wehner, Rüdiger; Rössler, Wolfgang

    2017-01-01

    Central place foragers are faced with the challenge to learn the position of their nest entrance in its surroundings, in order to find their way back home every time they go out to search for food. To acquire navigational information at the beginning of their foraging career, Cataglyphis noda performs learning walks during the transition from interior worker to forager. These small loops around the nest entrance are repeatedly interrupted by strikingly accurate back turns during which the ants stop and precisely gaze back to the nest entrance-presumably to learn the landmark panorama of the nest surroundings. However, as at this point the complete navigational toolkit is not yet available, the ants are in need of a reference system for the compass component of the path integrator to align their nest entrance-directed gazes. In order to find this directional reference system, we systematically manipulated the skylight information received by ants during learning walks in their natural habitat, as it has been previously suggested that the celestial compass, as part of the path integrator, might provide such a reference system. High-speed video analyses of distinct learning walk elements revealed that even exclusion from the skylight polarization pattern, UV-light spectrum and the position of the sun did not alter the accuracy of the look back to the nest behavior. We therefore conclude that C. noda uses a different reference system to initially align their gaze directions. However, a comparison of neuroanatomical changes in the central complex and the mushroom bodies before and after learning walks revealed that exposure to UV light together with a naturally changing polarization pattern was essential to induce neuroplasticity in these high-order sensory integration centers of the ant brain. This suggests a crucial role of celestial information, in particular a changing polarization pattern, in initially calibrating the celestial compass system.

  16. Analysis of a Spatial Point Pattern: Examining the Damage to Pavement and Pipes in Santa Clara Valley Resulting from the Loma Prieta Earthquake

    USGS Publications Warehouse

    Phelps, G.A.

    2008-01-01

    This report describes some simple spatial statistical methods to explore the relationships of scattered points to geologic or other features, represented by points, lines, or areas. It also describes statistical methods to search for linear trends and clustered patterns within the scattered point data. Scattered points are often contained within irregularly shaped study areas, necessitating the use of methods largely unexplored in the point pattern literature. The methods take advantage of the power of modern GIS toolkits to numerically approximate the null hypothesis of randomly located data within an irregular study area. Observed distributions can then be compared with the null distribution of a set of randomly located points. The methods are non-parametric and are applicable to irregularly shaped study areas. Patterns within the point data are examined by comparing the distribution of the orientation of the set of vectors defined by each pair of points within the data with the equivalent distribution for a random set of points within the study area. A simple model is proposed to describe linear or clustered structure within scattered data. A scattered data set of damage to pavement and pipes, recorded after the 1989 Loma Prieta earthquake, is used as an example to demonstrate the analytical techniques. The damage is found to be preferentially located nearer a set of mapped lineaments than randomly scattered damage, suggesting range-front faulting along the base of the Santa Cruz Mountains is related to both the earthquake damage and the mapped lineaments. The damage also exhibits two non-random patterns: a single cluster of damage centered in the town of Los Gatos, California, and a linear alignment of damage along the range front of the Santa Cruz Mountains, California. The linear alignment of damage is strongest between 45° and 50° northwest. This agrees well with the mean trend of the mapped lineaments, measured as 49° northwest.
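
    The Monte Carlo comparison of pair orientations against randomly located points in an irregular study area can be sketched as follows; the polygon, the angular window, and the toy damage coordinates are assumptions, not the report's data:

        # Illustrative only: test whether point-pair orientations are preferentially
        # aligned by comparing against random points inside the same irregular area.
        import math
        import random

        def point_in_polygon(x, y, polygon):
            """Ray-casting point-in-polygon test for a simple polygon."""
            inside = False
            n = len(polygon)
            for i in range(n):
                x1, y1 = polygon[i]
                x2, y2 = polygon[(i + 1) % n]
                if (y1 > y) != (y2 > y):
                    x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                    if x < x_cross:
                        inside = not inside
            return inside

        def random_points(polygon, n, rng):
            """Rejection-sample n points uniformly inside the polygon."""
            xs, ys = zip(*polygon)
            points = []
            while len(points) < n:
                x = rng.uniform(min(xs), max(xs))
                y = rng.uniform(min(ys), max(ys))
                if point_in_polygon(x, y, polygon):
                    points.append((x, y))
            return points

        def aligned_fraction(points, lo_deg=40.0, hi_deg=55.0):
            """Fraction of point-pair orientations falling in [lo_deg, hi_deg)."""
            count = total = 0
            for i in range(len(points)):
                for j in range(i + 1, len(points)):
                    dx = points[j][0] - points[i][0]
                    dy = points[j][1] - points[i][1]
                    angle = math.degrees(math.atan2(dy, dx)) % 180.0
                    count += lo_deg <= angle < hi_deg
                    total += 1
            return count / total

        if __name__ == "__main__":
            rng = random.Random(42)
            study_area = [(0, 0), (10, 0), (12, 6), (5, 9), (-1, 4)]   # irregular polygon
            # Toy "damage" points roughly aligned along ~41 degrees.
            damage = [(1 + 0.08 * k, 1 + 0.07 * k + rng.gauss(0, 0.3)) for k in range(40)]
            observed = aligned_fraction(damage)
            null = [aligned_fraction(random_points(study_area, len(damage), rng))
                    for _ in range(200)]
            p_value = sum(f >= observed for f in null) / len(null)
            print(f"observed aligned fraction {observed:.2f}, Monte Carlo p = {p_value:.3f}")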

  17. Reasoning over Taxonomic Change: Exploring Alignments for the Perelleschus Use Case

    PubMed Central

    Franz, Nico M.; Chen, Mingmin; Yu, Shizhuo; Kianmajd, Parisa; Bowers, Shawn; Ludäscher, Bertram

    2015-01-01

    Classifications and phylogenetic inferences of organismal groups change in light of new insights. Over time these changes can result in an imperfect tracking of taxonomic perspectives through the re-/use of Code-compliant or informal names. To mitigate these limitations, we introduce a novel approach for aligning taxonomies through the interaction of human experts and logic reasoners. We explore the performance of this approach with the Perelleschus use case of Franz & Cardona-Duque (2013). The use case includes six taxonomies published from 1936 to 2013, 54 taxonomic concepts (i.e., circumscriptions of names individuated according to their respective source publications), and 75 expert-asserted Region Connection Calculus articulations (e.g., congruence, proper inclusion, overlap, or exclusion). An Open Source reasoning toolkit is used to analyze 13 paired Perelleschus taxonomy alignments under heterogeneous constraints and interpretations. The reasoning workflow optimizes the logical consistency and expressiveness of the input and infers the set of maximally informative relations among the entailed taxonomic concepts. The latter are then used to produce merge visualizations that represent all congruent and non-congruent taxonomic elements among the aligned input trees. In this small use case with 6-53 input concepts per alignment, the information gained through the reasoning process is on average one order of magnitude greater than in the input. The approach offers scalable solutions for tracking provenance among succeeding taxonomic perspectives that may have differential biases in naming conventions, phylogenetic resolution, ingroup and outgroup sampling, or ostensive (member-referencing) versus intensional (property-referencing) concepts and articulations. PMID:25700173

  18. Accelerating Smith-Waterman Alignment for Protein Database Search Using Frequency Distance Filtration Scheme Based on CPU-GPU Collaborative System.

    PubMed

    Liu, Yu; Hong, Yang; Lin, Chun-Yuan; Hung, Che-Lun

    2015-01-01

    The Smith-Waterman (SW) algorithm has been widely utilized for searching biological sequence databases in bioinformatics. Recently, several works have adopted graphics cards with Graphic Processing Units (GPUs) and their associated CUDA model to enhance the performance of SW computations. However, these works mainly focused on protein database search using the intertask parallelization technique, and only used the GPU to perform the SW computations one by one. Hence, in this paper, we propose an efficient SW alignment method, called CUDA-SWfr, for protein database search using the intratask parallelization technique based on a CPU-GPU collaborative system. Before doing the SW computations on the GPU, a procedure is applied on the CPU using the frequency distance filtration scheme (FDFS) to eliminate unnecessary alignments. The experimental results indicate that CUDA-SWfr runs 9.6 times and 96 times faster than the CPU-based SW method without and with FDFS, respectively.
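
    The frequency distance filtration step can be sketched in Python: compare amino-acid composition vectors of the query and each database sequence, and skip the expensive Smith-Waterman alignment when the compositions are too dissimilar. The distance threshold and toy sequences are assumptions, not the CUDA-SWfr implementation:

        # Illustrative only: cheap composition-based filtration before full alignment.
        from collections import Counter

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

        def frequency_vector(seq):
            counts = Counter(seq)
            return [counts.get(aa, 0) for aa in AMINO_ACIDS]

        def frequency_distance(seq_a, seq_b):
            """L1 distance between amino-acid count vectors (cheap to compute)."""
            va, vb = frequency_vector(seq_a), frequency_vector(seq_b)
            return sum(abs(a - b) for a, b in zip(va, vb))

        def filtered_candidates(query, database, max_distance):
            """Database entries that survive filtration and go on to full SW."""
            return [name for name, seq in database.items()
                    if frequency_distance(query, seq) <= max_distance]

        if __name__ == "__main__":
            db = {
                "seq1": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
                "seq2": "MKVLINGKTLKGEITVE",
                "seq3": "GATTACAGATTACA",     # composition very unlike the query; filtered out
            }
            query = "MKTAYIAKQRQISFVK"
            print(filtered_candidates(query, db, max_distance=20))   # -> ['seq1', 'seq2']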

  19. CLAST: CUDA implemented large-scale alignment search tool.

    PubMed

    Yano, Masahiro; Mori, Hiroshi; Akiyama, Yutaka; Yamada, Takuji; Kurokawa, Ken

    2014-12-11

    Metagenomics is a powerful methodology to study microbial communities, but it is highly dependent on nucleotide sequence similarity searching against sequence databases. Metagenomic analyses with next-generation sequencing technologies produce enormous numbers of reads from microbial communities, and many reads are derived from microbes whose genomes have not yet been sequenced, limiting the usefulness of existing sequence similarity search tools. Therefore, there is a clear need for a sequence similarity search tool that can rapidly detect weak similarity in large datasets. We developed a tool, which we named CLAST (CUDA implemented large-scale alignment search tool), that enables analyses of millions of reads and thousands of reference genome sequences, and runs on NVIDIA Fermi architecture graphics processing units. CLAST has four main advantages over existing alignment tools. First, CLAST was capable of identifying sequence similarities ~80.8 times faster than BLAST and 9.6 times faster than BLAT. Second, CLAST executes global alignment as the default (local alignment is also an option), enabling CLAST to assign reads to taxonomic and functional groups based on evolutionarily distant nucleotide sequences with high accuracy. Third, CLAST does not need a preprocessed sequence database like Burrows-Wheeler Transform-based tools, and this enables CLAST to incorporate large, frequently updated sequence databases. Fourth, CLAST requires <2 GB of main memory, making it possible to run CLAST on a standard desktop computer or server node. CLAST achieved very high speed (similar to the Burrows-Wheeler Transform-based Bowtie 2 for long reads) and sensitivity (equal to BLAST, BLAT, and FR-HIT) without the need for extensive database preprocessing or a specialized computing platform. Our results demonstrate that CLAST has the potential to be one of the most powerful and realistic approaches to analyze the massive amount of sequence data from next-generation sequencing technologies.

  20. The Climate Resilience Toolkit: Central gateway for risk assessment and resilience planning at all governance scales

    NASA Astrophysics Data System (ADS)

    Herring, D.; Lipschultz, F.

    2016-12-01

    As people and organizations grapple with a changing climate amid a range of other factors simultaneously shifting, there is a need for credible, legitimate & salient scientific information in useful formats. In addition, an assessment framework is needed to guide the process of planning and implementing projects that allow communities and businesses to adapt to specific changing conditions, while also building overall resilience to future change. We will discuss how the U.S. Climate Resilience Toolkit (CRT) can improve people's ability to understand and manage their climate-related risks and opportunities, and help them make their communities and businesses more resilient. In close coordination with the U.S. Climate Data Initiative, the CRT is continually evolving to offer actionable authoritative information, relevant tools, and subject matter expertise from across the U.S. federal government in one easy-to-use location. The Toolkit's "Climate Explorer" is designed to help people understand potential climate conditions over the course of this century. It offers easy access to downloadable maps, graphs, and data tables of observed and projected temperature, precipitation and other decision-relevant climate variables dating back to 1950 and out to 2100. Since climate is only one of many changing factors affecting decisions about the future, it also ties climate information to a wide range of relevant variables to help users explore vulnerabilities and impacts. New topic areas have been added, such as "Fisheries," "Regions," and "Built Environment" sections that feature case studies and personal experiences in making adaptation decisions. A curated "Reports" section is integrated with semantic web capabilities to help users locate the most relevant information sources. As part of the USGCRP's sustained assessment process, the CRT is aligning with other federal activities, such as the upcoming 4th National Climate Assessment.

  1. PyEvolve: a toolkit for statistical modelling of molecular evolution.

    PubMed

    Butterfield, Andrew; Vedagiri, Vivek; Lang, Edward; Lawrence, Cath; Wakefield, Matthew J; Isaev, Alexander; Huttley, Gavin A

    2004-01-05

    Examining the distribution of variation has proven an extremely profitable technique in the effort to identify sequences of biological significance. Most approaches in the field, however, evaluate only the conserved portions of sequences - ignoring the biological significance of sequence differences. A suite of sophisticated likelihood based statistical models from the field of molecular evolution provides the basis for extracting the information from the full distribution of sequence variation. The number of different problems to which phylogeny-based maximum likelihood calculations can be applied is extensive. Available software packages that can perform likelihood calculations suffer from a lack of flexibility and scalability, or employ error-prone approaches to model parameterisation. Here we describe the implementation of PyEvolve, a toolkit for the application of existing, and development of new, statistical methods for molecular evolution. We present the object architecture and design schema of PyEvolve, which includes an adaptable multi-level parallelisation schema. The approach for defining new methods is illustrated by implementing a novel dinucleotide model of substitution that includes a parameter for mutation of methylated CpGs, which required 8 lines of standard Python code to define. Benchmarking was performed using either a dinucleotide or codon substitution model applied to an alignment of BRCA1 sequences from 20 mammals, or a 10 species subset. Up to five-fold parallel performance gains over serial were recorded. Compared to leading alternative software, PyEvolve exhibited significantly better real-world performance for parameter rich models with a large data set, reducing the time required for optimisation from approximately 10 days to approximately 6 hours. PyEvolve provides flexible functionality that can be used either for statistical modelling of molecular evolution, or the development of new methods in the field. The toolkit can be used interactively or by writing and executing scripts. The toolkit uses efficient processes for specifying the parameterisation of statistical models, and implements numerous optimisations that make highly parameter rich likelihood functions solvable within hours on multi-CPU hardware. PyEvolve can be readily adapted in response to changing computational demands and hardware configurations to maximise performance. PyEvolve is released under the GPL and can be downloaded from http://cbis.anu.edu.au/software.
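
    The kind of likelihood calculation such a toolkit wraps can be sketched with the Jukes-Cantor model and a crude grid search over the branch length; this is not PyEvolve's API, and the sequences are toy assumptions:

        # Illustrative only: log-likelihood of a pairwise DNA alignment under the
        # Jukes-Cantor (JC69) substitution model, maximized over the branch length.
        import math

        def jc69_log_likelihood(seq_a, seq_b, distance):
            """Log-likelihood of two aligned sequences separated by `distance`
            expected substitutions per site."""
            e = math.exp(-4.0 * distance / 3.0)
            p_same = 0.25 + 0.75 * e        # probability the base is unchanged
            p_diff = 0.25 - 0.25 * e        # probability of one specific other base
            log_l = 0.0
            for a, b in zip(seq_a, seq_b):
                if a == "-" or b == "-":
                    continue                # this sketch ignores gapped sites
                log_l += math.log(0.25)     # stationary frequency of the first base
                log_l += math.log(p_same if a == b else p_diff)
            return log_l

        def best_distance(seq_a, seq_b):
            grid = [i / 1000.0 for i in range(1, 2000)]
            return max(grid, key=lambda d: jc69_log_likelihood(seq_a, seq_b, d))

        if __name__ == "__main__":
            s1 = "ACGTACGTACGTACGTACGT"
            s2 = "ACGTACCTACGTACTTACGT"      # two mismatches in twenty sites
            d = best_distance(s1, s2)
            print(f"ML branch length ~ {d:.3f} (analytic value is about 0.107)")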

  2. ITEP: an integrated toolkit for exploration of microbial pan-genomes.

    PubMed

    Benedict, Matthew N; Henriksen, James R; Metcalf, William W; Whitaker, Rachel J; Price, Nathan D

    2014-01-03

    Comparative genomics is a powerful approach for studying variation in physiological traits as well as the evolution and ecology of microorganisms. Recent technological advances have enabled sequencing large numbers of related genomes in a single project, requiring computational tools for their integrated analysis. In particular, accurate annotations and identification of gene presence and absence are critical for understanding and modeling the cellular physiology of newly sequenced genomes. Although many tools are available to compare the gene contents of related genomes, new tools are necessary to enable close examination and curation of protein families from large numbers of closely related organisms, to integrate curation with the analysis of gain and loss, and to generate metabolic networks linking the annotations to observed phenotypes. We have developed ITEP, an Integrated Toolkit for Exploration of microbial Pan-genomes, to curate protein families, compute similarities to externally-defined domains, analyze gene gain and loss, and generate draft metabolic networks from one or more curated reference network reconstructions in groups of related microbial species among which the combination of core and variable genes constitutes their "pan-genomes". The ITEP toolkit consists of: (1) a series of modular command-line scripts for identification, comparison, curation, and analysis of protein families and their distribution across many genomes; (2) a set of Python libraries for programmatic access to the same data; and (3) pre-packaged scripts to perform common analysis workflows on a collection of genomes. ITEP's capabilities include de novo protein family prediction, ortholog detection, analysis of functional domains, identification of core and variable genes and gene regions, sequence alignments and tree generation, annotation curation, and the integration of cross-genome analysis and metabolic networks for study of metabolic network evolution. ITEP is a powerful, flexible toolkit for generation and curation of protein families. ITEP's modular design allows for straightforward extension as analysis methods and tools evolve. By integrating comparative genomics with the development of draft metabolic networks, ITEP harnesses the power of comparative genomics to build confidence in links between genotype and phenotype and helps disambiguate gene annotations when they are evaluated in both evolutionary and metabolic network contexts.
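
    The core/variable partition of protein families can be sketched from a presence/absence table; ITEP's pipeline does far more (clustering, curation, metabolic networks), and the table below is a toy assumption:

        # Illustrative only: classify families as core (in every genome) or variable.

        def partition_families(presence):
            """presence: dict mapping family -> set of genome names containing it."""
            genomes = set().union(*presence.values())
            core = {fam for fam, gs in presence.items() if gs == genomes}
            variable = {fam for fam, gs in presence.items() if gs != genomes}
            return core, variable

        if __name__ == "__main__":
            table = {
                "famA": {"genome1", "genome2", "genome3"},
                "famB": {"genome1", "genome3"},
                "famC": {"genome2"},
            }
            core, variable = partition_families(table)
            print("core:", sorted(core))           # -> ['famA']
            print("variable:", sorted(variable))   # -> ['famB', 'famC']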

  3. Prevention literacy: community-based advocacy for access and ownership of the HIV prevention toolkit

    PubMed Central

    Parker, Richard G; Perez-Brumer, Amaya; Garcia, Jonathan; Gavigan, Kelly; Ramirez, Ana; Milnor, Jack; Terto, Veriano

    2016-01-01

    Introduction Critical technological advances have yielded a toolkit of HIV prevention strategies. This literature review sought to provide contextual and historical reflection needed to bridge the conceptual gap between clinical efficacy and community effectiveness (i.e. knowledge and usage) of existing HIV prevention options, especially in resource-poor settings. Methods Between January 2015 and October 2015, we reviewed scholarly and grey literatures to define treatment literacy and health literacy and assess the current need for literacy related to HIV prevention. The review included searches in electronic databases including MEDLINE, PsycINFO, PubMed, and Google Scholar. Permutations of the following search terms were used: “treatment literacy,” “treatment education,” “health literacy,” and “prevention literacy.” Through an iterative process of analyses and searches, titles and/or abstracts and reference lists of retrieved articles were reviewed for additional articles, and historical content analyses of grey literature and websites were additionally conducted. Results and discussion Treatment literacy was a well-established concept developed in the global South, which was later partially adopted by international agencies such as the World Health Organization. Treatment literacy emerged as more effective antiretroviral therapies became available. Developed from popular pedagogy and grassroots efforts during an intense struggle for treatment access, treatment literacy addressed the need to extend access to underserved communities and low-income settings that might otherwise be excluded from access. In contrast, prevention literacy is absent in the recent surge of new biomedical prevention strategies; prevention literacy was scarcely referenced and undertheorized in the available literature. Prevention efforts today include multimodal techniques, which jointly comprise a toolkit of biomedical, behavioural, and structural/environmental approaches. However, linkages to community advocacy and mobilization efforts are limited and unsustainable. Success of prevention efforts depends on equity of access, community-based ownership, and multilevel support structures to enable usage and sustainability. Conclusions For existing HIV prevention efforts to be effective in “real-world” settings, with limited resources, reflection on historical lessons and contextual realities (i.e. policies, financial constraints, and biomedical patents) indicated the need to extend principles developed for treatment access and treatment literacy, to support prevention literacy and prevention access as an integral part of the global response to HIV. PMID:27702430

  4. BIO::Phylo-phyloinformatic analysis using perl.

    PubMed

    Vos, Rutger A; Caravas, Jason; Hartmann, Klaas; Jensen, Mark A; Miller, Chase

    2011-02-27

    Phyloinformatic analyses involve large amounts of data and metadata of complex structure. Collecting, processing, analyzing, visualizing and summarizing these data and metadata should be done in steps that can be automated and reproduced. This requires flexible, modular toolkits that can represent, manipulate and persist phylogenetic data and metadata as objects with programmable interfaces. This paper presents Bio::Phylo, a Perl5 toolkit for phyloinformatic analysis. It implements classes and methods that are compatible with the well-known BioPerl toolkit, but is independent from it (making it easy to install) and features a richer API and a data model that is better able to manage the complex relationships between different fundamental data and metadata objects in phylogenetics. It supports commonly used file formats for phylogenetic data including the novel NeXML standard, which allows rich annotations of phylogenetic data to be stored and shared. Bio::Phylo can interact with BioPerl, thereby giving access to the file formats that BioPerl supports. Many methods for data simulation, transformation and manipulation, the analysis of tree shape, and tree visualization are provided. Bio::Phylo is composed of 59 richly documented Perl5 modules. It has been deployed successfully on a variety of computer architectures (including various Linux distributions, Mac OS X versions, Windows, Cygwin and UNIX-like systems). It is available as open source (GPL) software from http://search.cpan.org/dist/Bio-Phylo.

  5. The Virtual Physiological Human ToolKit.

    PubMed

    Cooper, Jonathan; Cervenansky, Frederic; De Fabritiis, Gianni; Fenner, John; Friboulet, Denis; Giorgino, Toni; Manos, Steven; Martelli, Yves; Villà-Freixa, Jordi; Zasada, Stefan; Lloyd, Sharon; McCormack, Keith; Coveney, Peter V

    2010-08-28

    The Virtual Physiological Human (VPH) is a major European e-Science initiative intended to support the development of patient-specific computer models and their application in personalized and predictive healthcare. The VPH Network of Excellence (VPH-NoE) project is tasked with facilitating interaction between the various VPH projects and addressing issues of common concern. A key deliverable is the 'VPH ToolKit'--a collection of tools, methodologies and services to support and enable VPH research, integrating and extending existing work across Europe towards greater interoperability and sustainability. Owing to the diverse nature of the field, a single monolithic 'toolkit' is incapable of addressing the needs of the VPH. Rather, the VPH ToolKit should be considered more as a 'toolbox' of relevant technologies, interacting around a common set of standards. The latter apply to the information used by tools, including any data and the VPH models themselves, and also to the naming and categorizing of entities and concepts involved. Furthermore, the technologies and methodologies available need to be widely disseminated, and relevant tools and services easily found by researchers. The VPH-NoE has thus created an online resource for the VPH community to meet this need. It consists of a database of tools, methods and services for VPH research, with a Web front-end. This has facilities for searching the database, for adding or updating entries, and for providing user feedback on entries. Anyone is welcome to contribute.

  6. BIO::Phylo-phyloinformatic analysis using perl

    PubMed Central

    2011-01-01

    Background Phyloinformatic analyses involve large amounts of data and metadata of complex structure. Collecting, processing, analyzing, visualizing and summarizing these data and metadata should be done in steps that can be automated and reproduced. This requires flexible, modular toolkits that can represent, manipulate and persist phylogenetic data and metadata as objects with programmable interfaces. Results This paper presents Bio::Phylo, a Perl5 toolkit for phyloinformatic analysis. It implements classes and methods that are compatible with the well-known BioPerl toolkit, but is independent from it (making it easy to install) and features a richer API and a data model that is better able to manage the complex relationships between different fundamental data and metadata objects in phylogenetics. It supports commonly used file formats for phylogenetic data including the novel NeXML standard, which allows rich annotations of phylogenetic data to be stored and shared. Bio::Phylo can interact with BioPerl, thereby giving access to the file formats that BioPerl supports. Many methods for data simulation, transformation and manipulation, the analysis of tree shape, and tree visualization are provided. Conclusions Bio::Phylo is composed of 59 richly documented Perl5 modules. It has been deployed successfully on a variety of computer architectures (including various Linux distributions, Mac OS X versions, Windows, Cygwin and UNIX-like systems). It is available as open source (GPL) software from http://search.cpan.org/dist/Bio-Phylo PMID:21352572

  7. Development of a web-based toolkit to support improvement of care coordination in primary care.

    PubMed

    Ganz, David A; Barnard, Jenny M; Smith, Nina Z Y; Miake-Lye, Isomi M; Delevan, Deborah M; Simon, Alissa; Rose, Danielle E; Stockdale, Susan E; Chang, Evelyn T; Noël, Polly H; Finley, Erin P; Lee, Martin L; Zulman, Donna M; Cordasco, Kristina M; Rubenstein, Lisa V

    2018-05-23

    Promising practices for the coordination of chronic care exist, but how to select and share these practices to support quality improvement within a healthcare system is uncertain. This study describes an approach for selecting high-quality tools for an online care coordination toolkit to be used in Veterans Health Administration (VA) primary care practices. We evaluated tools in three steps: (1) an initial screening to identify tools relevant to care coordination in VA primary care, (2) a two-clinician expert review process assessing tool characteristics (e.g. frequency of problem addressed, linkage to patients' experience of care, effect on practice workflow, and sustainability with existing resources) and assigning each tool a summary rating, and (3) semi-structured interviews with VA patients and frontline clinicians and staff. Of 300 potentially relevant tools identified by searching online resources, 65, 38, and 18 remained after steps one, two and three, respectively. The 18 tools cover seven topics: managing referrals to specialty care, medication management, patient after-visit summary, patient activation materials, agenda setting, patient pre-visit packet, and provider contact information for patients. The final toolkit provides access to the 18 tools, as well as detailed information about the tools' expected benefits and the resources required for tool implementation. Future care coordination efforts can benefit from systematically reviewing available tools to identify those that are high quality and relevant.

  8. Two Influential Primate Classifications Logically Aligned

    PubMed Central

    Franz, Nico M.; Pier, Naomi M.; Reeder, Deeann M.; Chen, Mingmin; Yu, Shizhuo; Kianmajd, Parisa; Bowers, Shawn; Ludäscher, Bertram

    2016-01-01

    Classifications and phylogenies of perceived natural entities change in the light of new evidence. Taxonomic changes, translated into Code-compliant names, frequently lead to name:meaning dissociations across succeeding treatments. Classification standards such as the Mammal Species of the World (MSW) may experience significant levels of taxonomic change from one edition to the next, with potential costs to long-term, large-scale information integration. This circumstance challenges the biodiversity and phylogenetic data communities to express taxonomic congruence and incongruence in ways that both humans and machines can process, that is, to logically represent taxonomic alignments across multiple classifications. We demonstrate that such alignments are feasible for two classifications of primates corresponding to the second and third MSW editions. Our approach has three main components: (i) use of taxonomic concept labels, that is name sec. author (where sec. means according to), to assemble each concept hierarchy separately via parent/child relationships; (ii) articulation of select concepts across the two hierarchies with user-provided Region Connection Calculus (RCC-5) relationships; and (iii) the use of an Answer Set Programming toolkit to infer and visualize logically consistent alignments of these input constraints. Our use case entails the Primates sec. Groves (1993; MSW2–317 taxonomic concepts; 233 at the species level) and Primates sec. Groves (2005; MSW3–483 taxonomic concepts; 376 at the species level). Using 402 RCC-5 input articulations, the reasoning process yields a single, consistent alignment and 153,111 Maximally Informative Relations that constitute a comprehensive meaning resolution map for every concept pair in the Primates sec. MSW2/MSW3. The complete alignment, and various partitions thereof, facilitate quantitative analyses of name:meaning dissociation, revealing that nearly one in three taxonomic names are not reliable across treatments—in the sense of the same name identifying congruent taxonomic meanings. The RCC-5 alignment approach is potentially widely applicable in systematics and can achieve scalable, precise resolution of semantically evolving name usages in synthetic, next-generation biodiversity, and phylogeny data platforms. PMID:27009895

  9. Finding Protein and Nucleotide Similarities with FASTA

    PubMed Central

    Pearson, William R.

    2016-01-01

    The FASTA programs provide a comprehensive set of rapid similarity searching tools ( fasta36, fastx36, tfastx36, fasty36, tfasty36), similar to those provided by the BLAST package, as well as programs for slower, optimal, local and global similarity searches ( ssearch36, ggsearch36) and for searching with short peptides and oligonucleotides ( fasts36, fastm36). The FASTA programs use an empirical strategy for estimating statistical significance that accommodates a range of similarity scoring matrices and gap penalties, improving alignment boundary accuracy and search sensitivity (Unit 3.5). The FASTA programs can produce “BLAST-like” alignment and tabular output, for ease of integration into existing analysis pipelines, and can search small, representative databases, and then report results for a larger set of sequences, using links from the smaller dataset. The FASTA programs work with a wide variety of database formats, including mySQL and postgreSQL databases (Unit 9.4). The programs also provide a strategy for integrating domain and active site annotations into alignments and highlighting the mutational state of functionally critical residues. These protocols describe how to use the FASTA programs to characterize protein and DNA sequences, using protein:protein, protein:DNA, and DNA:DNA comparisons. PMID:27010337

  10. Finding Protein and Nucleotide Similarities with FASTA.

    PubMed

    Pearson, William R

    2016-03-24

    The FASTA programs provide a comprehensive set of rapid similarity searching tools (fasta36, fastx36, tfastx36, fasty36, tfasty36), similar to those provided by the BLAST package, as well as programs for slower, optimal, local, and global similarity searches (ssearch36, ggsearch36), and for searching with short peptides and oligonucleotides (fasts36, fastm36). The FASTA programs use an empirical strategy for estimating statistical significance that accommodates a range of similarity scoring matrices and gap penalties, improving alignment boundary accuracy and search sensitivity. The FASTA programs can produce "BLAST-like" alignment and tabular output, for ease of integration into existing analysis pipelines, and can search small, representative databases, and then report results for a larger set of sequences, using links from the smaller dataset. The FASTA programs work with a wide variety of database formats, including mySQL and postgreSQL databases. The programs also provide a strategy for integrating domain and active site annotations into alignments and highlighting the mutational state of functionally critical residues. These protocols describe how to use the FASTA programs to characterize protein and DNA sequences, using protein:protein, protein:DNA, and DNA:DNA comparisons. Copyright © 2016 John Wiley & Sons, Inc.

  11. muBLASTP: database-indexed protein sequence search on multicore CPUs.

    PubMed

    Zhang, Jing; Misra, Sanchit; Wang, Hao; Feng, Wu-Chun

    2016-11-04

    The Basic Local Alignment Search Tool (BLAST) is a fundamental program in the life sciences that searches databases for sequences that are most similar to a query sequence. Currently, the BLAST algorithm utilizes a query-indexed approach. Although many approaches suggest that sequence search with a database index can achieve much higher throughput (e.g., BLAT, SSAHA, and CAFE), they cannot deliver the same level of sensitivity as the query-indexed BLAST, i.e., NCBI BLAST, or they can only support nucleotide sequence search, e.g., MegaBLAST. Due to different challenges and characteristics between query indexing and database indexing, the existing techniques for query-indexed search cannot be applied to database-indexed search. muBLASTP, a novel database-indexed BLAST for protein sequence search, delivers hits identical to those returned by NCBI BLAST. On Intel Haswell multicore CPUs, for a single query, the single-threaded muBLASTP achieves up to a 4.41-fold speedup for alignment stages, and up to a 1.75-fold end-to-end speedup over single-threaded NCBI BLAST. For a batch of queries, the multithreaded muBLASTP achieves up to a 5.7-fold speedup for alignment stages, and up to a 4.56-fold end-to-end speedup over multithreaded NCBI BLAST. With a newly designed index structure for the protein database and associated optimizations to the BLASTP algorithm, we re-factored BLASTP for modern multicore processors, achieving much higher throughput with an acceptable memory footprint for the database index.
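
    The database-indexed seeding idea can be sketched as an inverted index from k-mer words of every database protein to their positions, so each query word is looked up directly rather than by scanning the database; the word length, the absence of neighbor words, and the lack of extension are simplifications relative to muBLASTP and NCBI BLAST:

        # Illustrative only: database-indexed word lookup for protein sequences.
        from collections import defaultdict

        def build_database_index(database, k=3):
            """database: dict mapping sequence id -> protein sequence."""
            index = defaultdict(list)
            for seq_id, seq in database.items():
                for pos in range(len(seq) - k + 1):
                    index[seq[pos:pos + k]].append((seq_id, pos))
            return index

        def seed_hits(query, index, k=3):
            """All (query position, sequence id, database position) word matches."""
            hits = []
            for qpos in range(len(query) - k + 1):
                for seq_id, dpos in index.get(query[qpos:qpos + k], ()):
                    hits.append((qpos, seq_id, dpos))
            return hits

        if __name__ == "__main__":
            db = {"P1": "MKTAYIAKQRQISFVKSHFSRQ", "P2": "MKVLINGKTAYIAK"}
            index = build_database_index(db)
            for hit in seed_hits("TAYIAKQR", index):
                print(hit)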

  12. CAZymes Analysis Toolkit (CAT): Web service for searching and analyzing carbohydrate-active enzymes in a newly sequenced organism using CAZy database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karpinets, Tatiana V; Park, Byung; Syed, Mustafa H

    2010-01-01

    The Carbohydrate-Active Enzyme (CAZy) database provides a rich set of manually annotated enzymes that degrade, modify, or create glycosidic bonds. Despite rich and invaluable information stored in the database, software tools utilizing this information for annotation of newly sequenced genomes by CAZy families are limited. We have employed two annotation approaches to fill the gap between manually curated high-quality protein sequences collected in the CAZy database and the growing number of other protein sequences produced by genome or metagenome sequencing projects. The first approach is based on a similarity search against the entire non-redundant sequences of the CAZy database. The second approach performs annotation using links or correspondences between the CAZy families and protein family domains. The links were discovered using the association rule learning algorithm applied to sequences from the CAZy database. The approaches complement each other and in combination achieved high specificity and sensitivity when cross-evaluated with the manually curated genomes of Clostridium thermocellum ATCC 27405 and Saccharophagus degradans 2-40. The capability of the proposed framework to predict the function of unknown protein domains (DUF) and of hypothetical proteins in the genome of Neurospora crassa is demonstrated. The framework is implemented as a Web service, the CAZymes Analysis Toolkit (CAT), and is available at http://cricket.ornl.gov/cgi-bin/cat.cgi.

  13. CAZymes Analysis Toolkit (CAT): web service for searching and analyzing carbohydrate-active enzymes in a newly sequenced organism using CAZy database.

    PubMed

    Park, Byung H; Karpinets, Tatiana V; Syed, Mustafa H; Leuze, Michael R; Uberbacher, Edward C

    2010-12-01

    The Carbohydrate-Active Enzyme (CAZy) database provides a rich set of manually annotated enzymes that degrade, modify, or create glycosidic bonds. Despite rich and invaluable information stored in the database, software tools utilizing this information for annotation of newly sequenced genomes by CAZy families are limited. We have employed two annotation approaches to fill the gap between manually curated high-quality protein sequences collected in the CAZy database and the growing number of other protein sequences produced by genome or metagenome sequencing projects. The first approach is based on a similarity search against the entire nonredundant sequences of the CAZy database. The second approach performs annotation using links or correspondences between the CAZy families and protein family domains. The links were discovered using the association rule learning algorithm applied to sequences from the CAZy database. The approaches complement each other and in combination achieved high specificity and sensitivity when cross-evaluated with the manually curated genomes of Clostridium thermocellum ATCC 27405 and Saccharophagus degradans 2-40. The capability of the proposed framework to predict the function of unknown protein domains and of hypothetical proteins in the genome of Neurospora crassa is demonstrated. The framework is implemented as a Web service, the CAZymes Analysis Toolkit, and is available at http://cricket.ornl.gov/cgi-bin/cat.cgi.
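
    The association-rule step that links protein family domains to CAZy families can be sketched with simple support and confidence counts; the thresholds and the toy annotations are assumptions, not CAT's training procedure:

        # Illustrative only: derive "domain implies CAZy family" rules from annotated
        # training proteins by counting support and confidence.
        from collections import defaultdict

        def domain_family_rules(annotations, min_support=2, min_confidence=0.8):
            """annotations: list of (set of domains, CAZy family) per training protein."""
            pair_counts = defaultdict(int)       # (domain, family) -> co-occurrences
            domain_counts = defaultdict(int)     # domain -> occurrences
            for domains, family in annotations:
                for domain in domains:
                    domain_counts[domain] += 1
                    pair_counts[(domain, family)] += 1
            rules = {}
            for (domain, family), count in pair_counts.items():
                confidence = count / domain_counts[domain]
                if count >= min_support and confidence >= min_confidence:
                    rules[domain] = (family, confidence)
            return rules

        if __name__ == "__main__":
            training = [
                ({"Glyco_hydro_5", "CBM_2"}, "GH5"),
                ({"Glyco_hydro_5"}, "GH5"),
                ({"Glyco_hydro_9"}, "GH9"),
                ({"CBM_2", "Glyco_hydro_9"}, "GH9"),
            ]
            for domain, (family, conf) in domain_family_rules(training).items():
                print(f"{domain} -> {family} (confidence {conf:.2f})")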

  14. A qualitative study of clinic and community member perspectives on intervention toolkits: "Unless the toolkit is used it won't help solve the problem".

    PubMed

    Davis, Melinda M; Howk, Sonya; Spurlock, Margaret; McGinnis, Paul B; Cohen, Deborah J; Fagnan, Lyle J

    2017-07-18

    Intervention toolkits are common products of grant-funded research in public health and primary care settings. Toolkits are designed to address the knowledge translation gap by speeding implementation and dissemination of research into practice. However, few studies describe characteristics of effective intervention toolkits and their implementation. Therefore, we conducted this study to explore what clinic and community-based users want in intervention toolkits and to identify the factors that support application in practice. In this qualitative descriptive study we conducted focus groups and interviews with a purposive sample of community health coalition members, public health experts, and primary care professionals between November 2010 and January 2012. The transdisciplinary research team used thematic analysis to identify themes and a cross-case comparative analysis to explore variation by participant role and toolkit experience. Ninety-six participants representing primary care (n = 54, 56%) and community settings (n = 42, 44%) participated in 18 sessions (13 focus groups, five key informant interviews). Participants ranged from those naïve through expert in toolkit development; many reported limited application of toolkits in actual practice. Participants wanted toolkits targeted at the right audience and demonstrated to be effective. Well organized toolkits, often with a quick start guide, with tools that were easy to tailor and apply were desired. Irrespective of perceived quality, participants experienced with practice change emphasized that leadership, staff buy-in, and facilitative support were essential for intervention toolkits to be translated into changes in clinic or public health practice. Given the emphasis on toolkits in supporting implementation and dissemination of research and clinical guidelines, studies are warranted to determine when and how toolkits are used. Funders, policy makers, researchers, and leaders in primary care and public health are encouraged to allocate resources to foster both toolkit development and implementation. Support, through practice facilitation and organizational leadership, are critical for translating knowledge from intervention toolkits into practice.

  15. Heuristic reusable dynamic programming: efficient updates of local sequence alignment.

    PubMed

    Hong, Changjin; Tewfik, Ahmed H

    2009-01-01

    Recomputation of previously evaluated similarity results between biological sequences becomes inevitable when researchers realize errors in their sequenced data or when they have to compare nearly similar sequences, e.g., in a family of proteins. We present an efficient scheme for updating local sequence alignments with an affine gap model. In principle, using the previous matching result between two amino acid sequences, we perform a forward-backward alignment to generate heuristic search bands which are bounded by a set of suboptimal paths. Given a correctly updated sequence, we initially predict a new score of the alignment path for each contour to select the best candidates among them. Then, we run the Smith-Waterman algorithm in this confined space. Furthermore, we show that our heuristic alignment for an updated sequence can be further accelerated by using reusable dynamic programming (rDP), our prior work. In this study, we successfully validate the "relative node tolerance bound" (RNTB) in the pruned searching space. Furthermore, we improve the computational performance by quantifying the successful RNTB tolerance probability and switch to rDP on perturbation-resilient columns only. In our searching space derived by a threshold value of 90 percent of the optimal alignment score, we find that 98.3 percent of contours contain correctly updated paths. We also find that our method consumes only 25.36 percent of the runtime cost of the sparse dynamic programming (sDP) method, and only 2.55 percent of that of normal dynamic programming with the Smith-Waterman algorithm.
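
    As a rough illustration of confining the dynamic programming to a band around a previously computed path, the sketch below runs a plain Smith-Waterman restricted to cells near the main diagonal. It uses a linear gap penalty for brevity; the method described above additionally uses an affine gap model, contours bounded by suboptimal paths, and reusable dynamic programming, none of which are reproduced here.

```python
# Banded Smith-Waterman with a simple linear gap penalty (illustration only).
def banded_smith_waterman(a, b, band=5, match=2, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for i in range(1, n + 1):
        # only fill cells within +/- band of the main diagonal
        lo, hi = max(1, i - band), min(m, i + band)
        for j in range(lo, hi + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, H[i - 1][j - 1] + s, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(banded_smith_waterman("HEAGAWGHEE", "PAWHEAE"))
```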

  16. Protein Identification Using Top-Down Spectra*

    PubMed Central

    Liu, Xiaowen; Sirotkin, Yakov; Shen, Yufeng; Anderson, Gordon; Tsai, Yihsuan S.; Ting, Ying S.; Goodlett, David R.; Smith, Richard D.; Bafna, Vineet; Pevzner, Pavel A.

    2012-01-01

    In the last two years, because of advances in protein separation and mass spectrometry, top-down mass spectrometry moved from analyzing single proteins to analyzing complex samples and identifying hundreds and even thousands of proteins. However, computational tools for database search of top-down spectra against protein databases are still in their infancy. We describe MS-Align+, a fast algorithm for top-down protein identification based on spectral alignment that enables searches for unexpected post-translational modifications. We also propose a method for evaluating statistical significance of top-down protein identifications and further benchmark various software tools on two top-down data sets from Saccharomyces cerevisiae and Salmonella typhimurium. We demonstrate that MS-Align+ significantly increases the number of identified spectra as compared with MASCOT and OMSSA on both data sets. Although MS-Align+ and ProSightPC have similar performance on the Salmonella typhimurium data set, MS-Align+ outperforms ProSightPC on the (more complex) Saccharomyces cerevisiae data set. PMID:22027200

  17. Cloud4Psi: cloud computing for 3D protein structure similarity searching.

    PubMed

    Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Kłapciński, Artur

    2014-10-01

    Popular methods for 3D protein structure similarity searching, especially those that generate high-quality alignments such as Combinatorial Extension (CE) and Flexible structure Alignment by Chaining Aligned fragment pairs allowing Twists (FATCAT), are still time consuming. As a consequence, performing similarity searching against large repositories of structural data requires increased computational resources that are not always available. Cloud computing provides huge amounts of computational power that can be provisioned on a pay-as-you-go basis. We have developed a cloud-based system that allows scaling of the similarity searching process vertically and horizontally. Cloud4Psi (Cloud for Protein Similarity) was tested in the Microsoft Azure cloud environment and provided good, almost linearly proportional acceleration when scaled out onto many computational units. Cloud4Psi is available as Software as a Service for testing purposes at: http://cloud4psi.cloudapp.net/. For source code and software availability, please visit the Cloud4Psi project home page at http://zti.polsl.pl/dmrozek/science/cloud4psi.htm. © The Author 2014. Published by Oxford University Press.

  18. Cloud4Psi: cloud computing for 3D protein structure similarity searching

    PubMed Central

    Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Kłapciński, Artur

    2014-01-01

    Summary: Popular methods for 3D protein structure similarity searching, especially those that generate high-quality alignments such as Combinatorial Extension (CE) and Flexible structure Alignment by Chaining Aligned fragment pairs allowing Twists (FATCAT), are still time consuming. As a consequence, performing similarity searching against large repositories of structural data requires increased computational resources that are not always available. Cloud computing provides huge amounts of computational power that can be provisioned on a pay-as-you-go basis. We have developed a cloud-based system that allows scaling of the similarity searching process vertically and horizontally. Cloud4Psi (Cloud for Protein Similarity) was tested in the Microsoft Azure cloud environment and provided good, almost linearly proportional acceleration when scaled out onto many computational units. Availability and implementation: Cloud4Psi is available as Software as a Service for testing purposes at: http://cloud4psi.cloudapp.net/. For source code and software availability, please visit the Cloud4Psi project home page at http://zti.polsl.pl/dmrozek/science/cloud4psi.htm. Contact: dariusz.mrozek@polsl.pl PMID:24930141

  19. Stroke

    MedlinePlus

  20. Varicose Veins

    MedlinePlus

  1. Tribal Green Building Toolkit

    EPA Pesticide Factsheets

    This Tribal Green Building Toolkit (Toolkit) is designed to help tribal officials, community members, planners, developers, and architects develop and adopt building codes to support green building practices. Anyone can use this toolkit!

  2. The DLESE Evaluation Toolkit Project

    NASA Astrophysics Data System (ADS)

    Buhr, S. M.; Barker, L. J.; Marlino, M.

    2002-12-01

    The Evaluation Toolkit and Community project is a new Digital Library for Earth System Education (DLESE) collection designed to raise awareness of project evaluation within the geoscience education community, and to enable principal investigators, teachers, and evaluators to implement project evaluation more readily. This new resource is grounded in the needs of geoscience educators, and will provide a virtual home for a geoscience education evaluation community. The goals of the project are to 1) provide a robust collection of evaluation resources useful for Earth systems educators, 2) establish a forum and community for evaluation dialogue within DLESE, and 3) disseminate the resources through the DLESE infrastructure and through professional society workshops and proceedings. Collaboration and expertise in education, geoscience and evaluation are necessary if we are to conduct the best possible geoscience education. The Toolkit allows users to engage in evaluation at whichever level best suits their needs, get more evaluation professional development if desired, and access the expertise of other segments of the community. To date, a test web site has been built and populated, initial community feedback from the DLESE and broader community is being garnered, and we have begun to heighten awareness of geoscience education evaluation within our community. The web site contains features that allow users to access professional development about evaluation, search and find evaluation resources, submit resources, find or offer evaluation services, sign up for upcoming workshops, take the user survey, and submit calendar items. The evaluation resource matrix currently contains resources that have met our initial review. The resources are currently organized by type; they will become searchable on multiple dimensions of project type, audience, objectives and evaluation resource type as efforts to develop a collection-specific search engine mature. The peer review criteria and process for ensuring that the site contains robust and useful resources has been drafted and received initial feedback from the project advisory board, which consists of members of every segment of the target audience. The review criteria are based upon DLESE peer review criteria, the MERLOT digital library peer review criteria, digital resource evaluation criteria, and evaluation best practices. In geoscience education, as in most endeavors, improvements are made by asking questions and acting upon information about successes and failures; project evaluation can be thought of as the systematic process of asking these questions and gathering the right information. The Evaluation Toolkit seeks to help principal investigators, teachers, and evaluators use the evaluation process to improve our projects and our field.

  3. Find an Interventional Radiologist

    MedlinePlus

  4. Society of Interventional Radiology

    MedlinePlus

  5. Hereditary Hemorrhagic Telangiectasia - HHT

    MedlinePlus

  6. Child Abuse - Multiple Languages

    MedlinePlus

    Healthy Living Toolkit: Violence In the Home - English PDF

  7. Wind Integration National Dataset Toolkit | Grid Modernization | NREL

    Science.gov Websites

    The Wind Integration National Dataset (WIND) Toolkit is an update and expansion of the Eastern Wind Integration Data Set and includes meteorological conditions and turbine power data.

  8. Chronic pelvic pain (pelvic congestion syndrome)

    MedlinePlus

  9. The RAND Online Measure Repository for Evaluating Psychological Health and Traumatic Brain Injury Programs. The RAND Toolkit, Volume 2

    DTIC Science & Technology

    2014-01-01

    tempo may raise the risk for mental health challenges. During this time, the U.S. Department of Defense (DoD) has implemented numerous programs to...and were based on the constraints of each electronic database. However, most searches were variations on a basic three-category format: The first...

  10. Optical Modeling Activities for NASA's James Webb Space Telescope (JWST). 4; Overview and Introduction of Matlab Based Toolkits used to Interface with Optical Design Software

    NASA Technical Reports Server (NTRS)

    Howard, Joseph

    2007-01-01

    This is part four of a series on the ongoing optical modeling activities for James Webb Space Telescope (JWST). The first two discussed modeling JWST on-orbit performance using wavefront sensitivities to predict line of sight motion induced blur, and stability during thermal transients. The third investigates the aberrations resulting from alignment and figure compensation of the controllable degrees of freedom (primary and secondary mirrors), which may be encountered during ground alignment and on-orbit commissioning of the observatory. The work here introduces some of the math software tools used to perform the work of the previous three papers of this series. NASA has recently approved these in-house tools for public release as open source, so this presentation also serves as a quick tutorial on their use. The tools are collections of functions written in Matlab, which interface with optical design software (CodeV, OSLO, and Zemax) using either COM or DDE communication protocol. The functions are discussed, and examples are given.

  11. Optical modeling activities for NASA's James Webb Space Telescope (JWST): IV. Overview and introduction of MATLAB based toolkits used to interface with optical design software

    NASA Astrophysics Data System (ADS)

    Howard, Joseph M.

    2007-09-01

    This paper is part four of a series on the ongoing optical modeling activities for the James Webb Space Telescope (JWST). The first two papers discussed modeling JWST on-orbit performance using wavefront sensitivities to predict line of sight motion induced blur, and stability during thermal transients. The third paper investigates the aberrations resulting from alignment and figure compensation of the controllable degrees of freedom (primary and secondary mirrors), which may be encountered during ground alignment and on-orbit commissioning of the observatory. The work here introduces some of the math software tools used to perform the work of the previous three papers of this series. NASA has recently approved these in-house tools for public release as open source, so this presentation also serves as a quick tutorial on their use. The tools are collections of functions written for use in MATLAB to interface with optical design software (CODE V, OSLO, and ZEMAX) using either COM or DDE communication protocol. The functions are discussed, and examples are given.

  12. jCompoundMapper: An open source Java library and command-line tool for chemical fingerprints

    PubMed Central

    2011-01-01

    Background The decomposition of a chemical graph is a convenient approach to encode information of the corresponding organic compound. While several commercial toolkits exist to encode molecules as so-called fingerprints, only a few open source implementations are available. The aim of this work is to introduce a library for exactly defined molecular decompositions, with a strong focus on the application of these features in machine learning and data mining. It provides several options such as search depth, distance cut-offs, atom- and pharmacophore typing. Furthermore, it provides the functionality to combine, to compare, or to export the fingerprints into several formats. Results We provide a Java 1.6 library for the decomposition of chemical graphs based on the open source Chemistry Development Kit toolkit. We reimplemented popular fingerprinting algorithms such as depth-first search fingerprints, extended connectivity fingerprints, autocorrelation fingerprints (e.g. CATS2D), radial fingerprints (e.g. Molprint2D), geometrical Molprint, atom pairs, and pharmacophore fingerprints. We also implemented custom fingerprints such as the all-shortest path fingerprint that only includes the subset of shortest paths from the full set of paths of the depth-first search fingerprint. As an application of jCompoundMapper, we provide a command-line executable binary. We measured the conversion speed and number of features for each encoding and described the composition of the features in detail. The quality of the encodings was tested using the default parametrizations in combination with a support vector machine on the Sutherland QSAR data sets. Additionally, we benchmarked the fingerprint encodings on the large-scale Ames toxicity benchmark using a large-scale linear support vector machine. The results were promising and could often compete with literature results. On the large Ames benchmark, for example, we obtained an AUC ROC performance of 0.87 with a reimplementation of the extended connectivity fingerprint. This result is comparable to the performance achieved by a non-linear support vector machine using state-of-the-art descriptors. On the Sutherland QSAR data set, the best fingerprint encodings showed a comparable or better performance on 5 of the 8 benchmarks when compared against the results of the best descriptors published in the paper of Sutherland et al. Conclusions jCompoundMapper is a library for chemical graph fingerprints with several tweaking possibilities and exporting options for open source data mining toolkits. The quality of the data mining results, the conversion speed, the LGPL software license, the command-line interface, and the exporters should be useful for many applications in cheminformatics like benchmarks against literature methods, comparison of data mining algorithms, similarity searching, and similarity-based data mining. PMID:21219648

  13. A Qualitative Evaluation of Web-Based Cancer Care Quality Improvement Toolkit Use in the Veterans Health Administration.

    PubMed

    Bowman, Candice; Luck, Jeff; Gale, Randall C; Smith, Nina; York, Laura S; Asch, Steven

    2015-01-01

    Disease severity, complexity, and patient burden highlight cancer care as a target for quality improvement (QI) interventions. The Veterans Health Administration (VHA) implemented a series of disease-specific online cancer care QI toolkits. To describe characteristics of the toolkits, target users, and VHA cancer care facilities that influenced toolkit access and use and assess whether such resources were beneficial for users. Deductive content analysis of detailed notes from 94 telephone interviews with individuals from 48 VHA facilities. We evaluated toolkit access and use across cancer types, participation in learning collaboratives, and affiliation with VHA cancer care facilities. The presence of champions was identified as a strong facilitator of toolkit use, and learning collaboratives were important for spreading information about toolkit availability. Identified barriers included lack of personnel and financial resources and complicated approval processes to support tool use. Online cancer care toolkits are well received across cancer specialties and provider types. Clinicians, administrators, and QI staff may benefit from the availability of toolkits as they become more reliant on rapid access to strategies that support comprehensive delivery of evidence-based care. Toolkits should be considered as a complement to other QI approaches.

  14. cuBLASTP: Fine-Grained Parallelization of Protein Sequence Search on CPU+GPU.

    PubMed

    Zhang, Jing; Wang, Hao; Feng, Wu-Chun

    2017-01-01

    BLAST, short for Basic Local Alignment Search Tool, is a ubiquitous tool used in the life sciences for pairwise sequence search. However, with the advent of next-generation sequencing (NGS), whether at the outset or downstream from NGS, the exponential growth of sequence databases is outstripping our ability to analyze the data. While recent studies have utilized the graphics processing unit (GPU) to speedup the BLAST algorithm for searching protein sequences (i.e., BLASTP), these studies use coarse-grained parallelism, where one sequence alignment is mapped to only one thread. Such an approach does not efficiently utilize the capabilities of a GPU, particularly due to the irregularity of BLASTP in both execution paths and memory-access patterns. To address the above shortcomings, we present a fine-grained approach to parallelize BLASTP, where each individual phase of sequence search is mapped to many threads on a GPU. This approach, which we refer to as cuBLASTP, reorders data-access patterns and reduces divergent branches of the most time-consuming phases (i.e., hit detection and ungapped extension). In addition, cuBLASTP optimizes the remaining phases (i.e., gapped extension and alignment with trace back) on a multicore CPU and overlaps their execution with the phases running on the GPU.
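
    The "hit detection" phase mentioned above is conceptually simple: index all k-letter words of one sequence and look up the words of the other. The sketch below shows that phase on a CPU in plain Python; cuBLASTP's contribution is mapping this phase (and ungapped extension) to many fine-grained GPU threads with reordered data access, which a scalar sketch cannot capture.

```python
# Plain CPU illustration of BLASTP-style word hit detection (k-mer seeding).
from collections import defaultdict

def find_hits(query, subject, k=3):
    index = defaultdict(list)
    for j in range(len(subject) - k + 1):
        index[subject[j:j + k]].append(j)
    hits = []
    for i in range(len(query) - k + 1):
        for j in index.get(query[i:i + k], []):
            hits.append((i, j))   # each hit seeds a later ungapped extension
    return hits

print(find_hits("MKVLAAGICK", "GICKMKVLTT"))
```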

  15. galaxie--CGI scripts for sequence identification through automated phylogenetic analysis.

    PubMed

    Nilsson, R Henrik; Larsson, Karl-Henrik; Ursing, Björn M

    2004-06-12

    The prevalent use of similarity searches like BLAST to identify sequences and species implicitly assumes that the reference database has extensive sequence sampling. This is often not the case, limiting the reliability of the outcome as a basis for sequence identification. Phylogenetic inference outperforms similarity searches in retrieving correct phylogenies and consequently sequence identities, and a project was initiated to design a freely available script package for sequence identification through automated Web-based phylogenetic analysis. Three CGI scripts were designed to facilitate qualified sequence identification from a Web interface. Query sequences are aligned to pre-made alignments or to alignments made by ClustalW with entries retrieved from a BLAST search. The subsequent phylogenetic analysis is based on the PHYLIP package for inferring neighbor-joining and parsimony trees. The scripts are highly configurable. A service installation and a version for local use are found at http://andromeda.botany.gu.se/galaxiewelcome.html and http://galaxie.cgb.ki.se

  16. Every Place Counts Leadership Academy : transportation toolkit quick guide

    DOT National Transportation Integrated Search

    2016-12-01

    This is a quick guide to the Transportation Toolkit. The Transportation Toolkit is meant to explain the transportation process to members of the public with no prior knowledge of transportation. The Toolkit is meant to demystify transportation and he...

  17. Object Toolkit Version 4.3 User’s Manual

    DTIC Science & Technology

    2016-12-31

    Object Toolkit is a finite-element model builder specifically designed for creating representations of spacecraft ... Nascap-2k and EPIC, the user is not required to purchase or learn expensive finite element generators to create system models.

  18. RadSearch: a RIS/PACS integrated query tool

    NASA Astrophysics Data System (ADS)

    Tsao, Sinchai; Documet, Jorge; Moin, Paymann; Wang, Kevin; Liu, Brent J.

    2008-03-01

    Radiology Information Systems (RIS) contain a wealth of information that can be used for research, education, and practice management. However, the sheer amount of information available makes querying specific data difficult and time consuming. Previous work has shown that a clinical RIS database and its RIS text reports can be extracted, duplicated and indexed for searches while complying with HIPAA and IRB requirements. This project's intent is to provide a software tool, the RadSearch Toolkit, to allow intelligent indexing and parsing of RIS reports for easy yet powerful searches. In addition, the project aims to seamlessly query and retrieve associated images from the Picture Archiving and Communication System (PACS) in situations where an integrated RIS/PACS is in place - even subselecting individual series, such as in an MRI study. RadSearch's application of simple text parsing techniques to index text-based radiology reports will allow the search engine to quickly return relevant results. This powerful combination will be useful in both private practice and academic settings; administrators can easily obtain complex practice management information such as referral patterns; researchers can conduct retrospective studies with specific, multiple criteria; teaching institutions can quickly and effectively create thorough teaching files.
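
    A toy version of the text parsing and indexing step is sketched below: tokenize each report and build an inverted index from terms to accession numbers, so that multi-term queries reduce to set intersections. The reports and accession numbers are made up, and this is not the RadSearch implementation.

```python
# Toy inverted index over free-text radiology reports (illustrative only).
import re
from collections import defaultdict

def build_index(reports):
    """reports: dict mapping accession number -> report text."""
    index = defaultdict(set)
    for acc, text in reports.items():
        for token in re.findall(r"[a-z0-9]+", text.lower()):
            index[token].add(acc)
    return index

def search(index, *terms):
    """Return accession numbers whose reports contain all query terms."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

reports = {
    "ACC001": "MRI brain: no acute infarct.",
    "ACC002": "CT chest: small right pleural effusion.",
}
idx = build_index(reports)
print(search(idx, "pleural", "effusion"))   # expected: {'ACC002'}
```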

  19. Tree decomposition based fast search of RNA structures including pseudoknots in genomes.

    PubMed

    Song, Yinglei; Liu, Chunmei; Malmberg, Russell; Pan, Fangfang; Cai, Liming

    2005-01-01

    Searching genomes for RNA secondary structure with computational methods has become an important approach to the annotation of non-coding RNAs. However, due to the lack of efficient algorithms for accurate RNA structure-sequence alignment, computer programs capable of quickly and effectively searching genomes for RNA secondary structures have not been available. In this paper, a novel RNA structure profiling model is introduced based on the notion of a conformational graph to specify the consensus structure of an RNA family. Tree decomposition yields a small tree width t for such conformational graphs (e.g., t = 2 for stem loops and only a slight increase for pseudoknots). Within this modelling framework, the optimal alignment of a sequence to the structure model corresponds to finding a maximum valued isomorphic subgraph and consequently can be accomplished through dynamic programming on the tree decomposition of the conformational graph in time O(k^t N^2), where k is a small parameter and N is the size of the profiled RNA structure. Experiments show that the application of the alignment algorithm to search in genomes yields the same search accuracy as methods based on a covariance model with a significant reduction in computation time. In particular, very accurate searches for tmRNAs in bacterial genomes and for telomerase RNAs in yeast genomes can be accomplished in days, as opposed to the months required by other methods. The tree decomposition based searching tool is free upon request and can be downloaded at our site http://www.uga.edu/RNA-informatics/software/index.php.

  20. A Novel Partial Sequence Alignment Tool for Finding Large Deletions

    PubMed Central

    Aruk, Taner; Ustek, Duran; Kursun, Olcay

    2012-01-01

    Finding large deletions in genome sequences has become increasingly useful in bioinformatics, such as in clinical research and diagnosis. Although there are a number of publicly available next-generation sequencing mapping and sequence alignment programs, these software packages do not correctly align fragments containing deletions larger than one kb. We present a fast alignment software package, BinaryPartialAlign, that can be used by wet lab scientists to find long structural variations in their experiments. For BinaryPartialAlign, we make use of the Smith-Waterman (SW) algorithm with a binary-search-based approach for alignment with large gaps that we call partial alignment. The BinaryPartialAlign implementation is compared with other straightforward applications of SW. Simulation results on mtDNA fragments demonstrate the effectiveness (runtime and accuracy) of the proposed method. PMID:22566777

  1. "Handy Manny" and the Emergent Literacy Technology Toolkit

    ERIC Educational Resources Information Center

    Hourcade, Jack J.; Parette, Howard P., Jr.; Boeckmann, Nichole; Blum, Craig

    2010-01-01

    This paper outlines the use of a technology toolkit to support emergent literacy curriculum and instruction in early childhood education settings. Components of the toolkit include hardware and software that can facilitate key emergent literacy skills. Implementation of the comprehensive technology toolkit enhances the development of these…

  2. Risk of Resource Failure and Toolkit Variation in Small-Scale Farmers and Herders

    PubMed Central

    Collard, Mark; Ruttle, April; Buchanan, Briggs; O’Brien, Michael J.

    2012-01-01

    Recent work suggests that global variation in toolkit structure among hunter-gatherers is driven by risk of resource failure such that as risk of resource failure increases, toolkits become more diverse and complex. Here we report a study in which we investigated whether the toolkits of small-scale farmers and herders are influenced by risk of resource failure in the same way. In the study, we applied simple linear and multiple regression analysis to data from 45 small-scale food-producing groups to test the risk hypothesis. Our results were not consistent with the hypothesis; none of the risk variables we examined had a significant impact on toolkit diversity or on toolkit complexity. It appears, therefore, that the drivers of toolkit structure differ between hunter-gatherers and small-scale food-producers. PMID:22844421

  3. VIP Barcoding: composition vector-based software for rapid species identification based on DNA barcoding.

    PubMed

    Fan, Long; Hui, Jerome H L; Yu, Zu Guo; Chu, Ka Hou

    2014-07-01

    Species identification based on short sequences of DNA markers, that is, DNA barcoding, has emerged as an integral part of modern taxonomy. However, software for the analysis of large and multilocus barcoding data sets is scarce. The Basic Local Alignment Search Tool (BLAST) is currently the fastest tool capable of handling large databases (e.g. >5000 sequences), but its accuracy is a concern and has been criticized for its local optimization. However, current more accurate software requires sequence alignment or complex calculations, which are time-consuming when dealing with large data sets during data preprocessing or during the search stage. Therefore, it is imperative to develop a practical program for both accurate and scalable species identification for DNA barcoding. In this context, we present VIP Barcoding: a user-friendly software in graphical user interface for rapid DNA barcoding. It adopts a hybrid, two-stage algorithm. First, an alignment-free composition vector (CV) method is utilized to reduce searching space by screening a reference database. The alignment-based K2P distance nearest-neighbour method is then employed to analyse the smaller data set generated in the first stage. In comparison with other software, we demonstrate that VIP Barcoding has (i) higher accuracy than Blastn and several alignment-free methods and (ii) higher scalability than alignment-based distance methods and character-based methods. These results suggest that this platform is able to deal with both large-scale and multilocus barcoding data with accuracy and can contribute to DNA barcoding for modern taxonomy. VIP Barcoding is free and available at http://msl.sls.cuhk.edu.hk/vipbarcoding/. © 2014 John Wiley & Sons Ltd.
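
    The two-stage idea can be sketched as follows: an alignment-free k-mer composition vector screens the reference set, and the Kimura 2-parameter (K2P) distance then re-ranks the shortlist. This is a simplified stand-in, not VIP Barcoding's code; it assumes the shortlisted references are already aligned to the query (gaps are only skipped, not modeled), and the reference sequences are invented.

```python
# Two-stage barcoding sketch: k-mer screening, then K2P distance on the shortlist.
import math
from collections import Counter

def kmer_vector(seq, k=4):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def k2p(a, b):
    """Kimura 2-parameter distance between two aligned sequences."""
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    n = len(pairs)
    p = sum((x, y) in TRANSITIONS for x, y in pairs) / n                 # transitions
    q = sum(x != y and (x, y) not in TRANSITIONS for x, y in pairs) / n  # transversions
    return -0.5 * math.log((1 - 2 * p - q) * math.sqrt(1 - 2 * q))

def identify(query, references, shortlist=2):
    qv = kmer_vector(query)
    ranked = sorted(references, key=lambda r: cosine(qv, kmer_vector(r[1])), reverse=True)
    return min(ranked[:shortlist], key=lambda r: k2p(query, r[1]))[0]

refs = [("species_A", "ACGTACGTACGTACGTACGT"), ("species_B", "ACGTTCGTACGAACGTACGA")]
print(identify("ACGTACGTACGTACGAACGT", refs))   # expected: species_A
```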

  4. Risk-Informed External Hazards Analysis for Seismic and Flooding Phenomena for a Generic PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parisi, Carlo; Prescott, Steve; Ma, Zhegang

    This report describes the activities performed during FY2017 for the US-DOE Light Water Reactor Sustainability Risk-Informed Safety Margin Characterization (LWRS-RISMC) Industry Application #2. The scope of Industry Application #2 is to deliver a risk-informed external hazards safety analysis for a representative nuclear power plant. Following the advancements made during the previous FYs (toolkit identification, model development), FY2017 focused on increasing the level of realism of the analysis and improving the tools and the coupling methodologies. In particular, the following objectives were achieved: calculation of building pounding and its effects on component seismic fragility; development of SAPHIRE code PRA models for a three-loop Westinghouse PWR; set-up of a methodology for performing static-dynamic PRA coupling between the SAPHIRE and EMRALD codes; coupling of RELAP5-3D/RAVEN for performing Best-Estimate Plus Uncertainty analysis and automatic limit surface search; and execution of sample calculations demonstrating the capabilities of the toolkit in performing risk-informed external hazards safety analyses.

  5. Two Influential Primate Classifications Logically Aligned.

    PubMed

    Franz, Nico M; Pier, Naomi M; Reeder, Deeann M; Chen, Mingmin; Yu, Shizhuo; Kianmajd, Parisa; Bowers, Shawn; Ludäscher, Bertram

    2016-07-01

    Classifications and phylogenies of perceived natural entities change in the light of new evidence. Taxonomic changes, translated into Code-compliant names, frequently lead to name:meaning dissociations across succeeding treatments. Classification standards such as the Mammal Species of the World (MSW) may experience significant levels of taxonomic change from one edition to the next, with potential costs to long-term, large-scale information integration. This circumstance challenges the biodiversity and phylogenetic data communities to express taxonomic congruence and incongruence in ways that both humans and machines can process, that is, to logically represent taxonomic alignments across multiple classifications. We demonstrate that such alignments are feasible for two classifications of primates corresponding to the second and third MSW editions. Our approach has three main components: (i) use of taxonomic concept labels, that is, name sec. author (where sec. means according to), to assemble each concept hierarchy separately via parent/child relationships; (ii) articulation of select concepts across the two hierarchies with user-provided Region Connection Calculus (RCC-5) relationships; and (iii) the use of an Answer Set Programming toolkit to infer and visualize logically consistent alignments of these input constraints. Our use case entails the Primates sec. Groves (1993; MSW2; 317 taxonomic concepts; 233 at the species level) and Primates sec. Groves (2005; MSW3; 483 taxonomic concepts; 376 at the species level). Using 402 RCC-5 input articulations, the reasoning process yields a single, consistent alignment and 153,111 Maximally Informative Relations that constitute a comprehensive meaning resolution map for every concept pair in the Primates sec. MSW2/MSW3. The complete alignment, and various partitions thereof, facilitate quantitative analyses of name:meaning dissociation, revealing that nearly one in three taxonomic names are not reliable across treatments, in the sense of the same name identifying congruent taxonomic meanings. The RCC-5 alignment approach is potentially widely applicable in systematics and can achieve scalable, precise resolution of semantically evolving name usages in synthetic, next-generation biodiversity, and phylogeny data platforms. © The Author(s) 2016. Published by Oxford University Press on behalf of the Society of Systematic Biologists.

  6. acdc – Automated Contamination Detection and Confidence estimation for single-cell genome data

    DOE PAGES

    Lux, Markus; Kruger, Jan; Rinke, Christian; ...

    2016-12-20

    A major obstacle in single-cell sequencing is sample contamination with foreign DNA. To guarantee clean genome assemblies and to prevent the introduction of contamination into public databases, considerable quality control efforts are put into post-sequencing analysis. Contamination screening generally relies on reference-based methods such as database alignment or marker gene search, which limits the set of detectable contaminants to organisms with closely related reference species. As genomic coverage in the tree of life is highly fragmented, there is an urgent need for a reference-free methodology for contaminant identification in sequence data. We present acdc, a tool specifically developed to aid the quality control process of genomic sequence data. By combining supervised and unsupervised methods, it reliably detects both known and de novo contaminants. First, 16S rRNA gene prediction and the inclusion of ultrafast exact alignment techniques allow sequence classification using existing knowledge from databases. Second, reference-free inspection is enabled by the use of state-of-the-art machine learning techniques that include fast, non-linear dimensionality reduction of oligonucleotide signatures and subsequent clustering algorithms that automatically estimate the number of clusters. The latter also enables the removal of any contaminant, yielding a clean sample. Furthermore, given the data complexity and the ill-posedness of clustering, acdc employs bootstrapping techniques to provide statistically profound confidence values. Tested on a large number of samples from diverse sequencing projects, our software is able to quickly and accurately identify contamination. Results are displayed in an interactive user interface. Acdc can be run from the web as well as from a dedicated command line application, which allows easy integration into large sequencing project analysis workflows. Acdc can reliably detect contamination in single-cell genome data. In addition to database-driven detection, it complements existing tools by its unsupervised techniques, which allow for the detection of de novo contaminants. Our contribution has the potential to drastically reduce the amount of resources put into these processes, particularly in the context of limited availability of reference species. As single-cell genome data continues to grow rapidly, acdc adds to the toolkit of crucial quality assurance tools.
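
    A heavily simplified version of the reference-free route is sketched below: compute a tetranucleotide frequency signature per contig and split the signatures into two clusters, treating the minority cluster as a contamination candidate. It assumes numpy and scikit-learn are available; acdc itself adds 16S-based classification, non-linear dimensionality reduction, automatic estimation of the number of clusters, and bootstrapped confidence values, none of which appear here.

```python
# Reference-free contamination screen, greatly simplified (not acdc's algorithm).
from itertools import product
import numpy as np
from sklearn.cluster import KMeans

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]
KMER_INDEX = {k: i for i, k in enumerate(KMERS)}

def signature(contig):
    """Normalized tetranucleotide frequency vector of one contig."""
    v = np.zeros(len(KMERS))
    for i in range(len(contig) - 3):
        idx = KMER_INDEX.get(contig[i:i + 4])
        if idx is not None:          # skip k-mers containing N or other symbols
            v[idx] += 1
    total = v.sum()
    return v / total if total else v

def split_contigs(contigs):
    X = np.array([signature(c) for c in contigs])
    # two clusters as a crude target-vs-contaminant split
    return KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

contigs = ["ATATATATATGATATATTAT" * 5,
           "ATATATTTATATAAATATAT" * 5,
           "GCGCGGCCGCGCGGCCGCGC" * 5]
print(split_contigs(contigs))   # the GC-rich contig falls into its own cluster
```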

  7. acdc – Automated Contamination Detection and Confidence estimation for single-cell genome data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lux, Markus; Kruger, Jan; Rinke, Christian

    A major obstacle in single-cell sequencing is sample contamination with foreign DNA. To guarantee clean genome assemblies and to prevent the introduction of contamination into public databases, considerable quality control efforts are put into post-sequencing analysis. Contamination screening generally relies on reference-based methods such as database alignment or marker gene search, which limits the set of detectable contaminants to organisms with closely related reference species. As genomic coverage in the tree of life is highly fragmented, there is an urgent need for a reference-free methodology for contaminant identification in sequence data. We present acdc, a tool specifically developed to aid the quality control process of genomic sequence data. By combining supervised and unsupervised methods, it reliably detects both known and de novo contaminants. First, 16S rRNA gene prediction and the inclusion of ultrafast exact alignment techniques allow sequence classification using existing knowledge from databases. Second, reference-free inspection is enabled by the use of state-of-the-art machine learning techniques that include fast, non-linear dimensionality reduction of oligonucleotide signatures and subsequent clustering algorithms that automatically estimate the number of clusters. The latter also enables the removal of any contaminant, yielding a clean sample. Furthermore, given the data complexity and the ill-posedness of clustering, acdc employs bootstrapping techniques to provide statistically profound confidence values. Tested on a large number of samples from diverse sequencing projects, our software is able to quickly and accurately identify contamination. Results are displayed in an interactive user interface. Acdc can be run from the web as well as from a dedicated command line application, which allows easy integration into large sequencing project analysis workflows. Acdc can reliably detect contamination in single-cell genome data. In addition to database-driven detection, it complements existing tools by its unsupervised techniques, which allow for the detection of de novo contaminants. Our contribution has the potential to drastically reduce the amount of resources put into these processes, particularly in the context of limited availability of reference species. As single-cell genome data continues to grow rapidly, acdc adds to the toolkit of crucial quality assurance tools.

  8. Evaluation of an Extension-Delivered Resource for Accelerating Progress in Childhood Obesity Prevention: The BEPA-Toolkit

    ERIC Educational Resources Information Center

    Gunter, Katherine B.; Abi Nader, Patrick; Armington, Amanda; Hicks, John C.; John, Deborah

    2017-01-01

    The Balanced Energy Physical Activity Toolkit, or the BEPA-Toolkit, supports physical activity (PA) programming via Extension in elementary schools. In a pilot study, we evaluated the effectiveness of the BEPA-Toolkit as used by teachers through Supplemental Nutrition Assistance Program Education partnerships. We surveyed teachers (n = 57)…

  10. RNAi screening of developmental toolkit genes: a search for novel wing genes in the red flour beetle, Tribolium castaneum.

    PubMed

    Linz, David M; Tomoyasu, Yoshinori

    2015-01-01

    The amazing array of diversity among insect wings offers a powerful opportunity to study the mechanisms guiding morphological evolution. Studies in Drosophila (the fruit fly) have identified dozens of genes important for wing development. These genes are often called candidate genes, serving as an ideal starting point to study wing development in other insects. However, we also need to explore beyond the candidate genes to gain a more comprehensive view of insect wing evolution. As a first step away from the traditional candidate genes, we utilized Tribolium (the red flour beetle) as a model and assessed the potential involvement of a group of developmental toolkit genes (embryonic patterning genes) in beetle wing development. We hypothesized that the highly pleiotropic nature of these developmental genes would increase the likelihood of finding novel wing genes in Tribolium. Through the RNA interference screening, we found that Tc-cactus has a less characterized (but potentially evolutionarily conserved) role in wing development. We also found that the odd-skipped family genes are essential for the formation of the thoracic pleural plates, including the recently discovered wing serial homologs in Tribolium. In addition, we obtained several novel insights into the function of these developmental genes, such as the involvement of mille-pattes and Tc-odd-paired in metamorphosis. Despite these findings, no gene we examined was found to have novel wing-related roles unique in Tribolium. These results suggest a relatively conserved nature of developmental toolkit genes and highlight the limited degree to which these genes are co-opted during insect wing evolution.

  11. Accelerated Profile HMM Searches

    PubMed Central

    Eddy, Sean R.

    2011-01-01

    Profile hidden Markov models (profile HMMs) and probabilistic inference methods have made important contributions to the theory of sequence database homology search. However, practical use of profile HMM methods has been hindered by the computational expense of existing software implementations. Here I describe an acceleration heuristic for profile HMMs, the “multiple segment Viterbi” (MSV) algorithm. The MSV algorithm computes an optimal sum of multiple ungapped local alignment segments using a striped vector-parallel approach previously described for fast Smith/Waterman alignment. MSV scores follow the same statistical distribution as gapped optimal local alignment scores, allowing rapid evaluation of significance of an MSV score and thus facilitating its use as a heuristic filter. I also describe a 20-fold acceleration of the standard profile HMM Forward/Backward algorithms using a method I call “sparse rescaling”. These methods are assembled in a pipeline in which high-scoring MSV hits are passed on for reanalysis with the full HMM Forward/Backward algorithm. This accelerated pipeline is implemented in the freely available HMMER3 software package. Performance benchmarks show that the use of the heuristic MSV filter sacrifices negligible sensitivity compared to unaccelerated profile HMM searches. HMMER3 is substantially more sensitive and 100- to 1000-fold faster than HMMER2. HMMER3 is now about as fast as BLAST for protein searches. PMID:22039361
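
    The building block of the MSV score is an ungapped local alignment segment, which on a single diagonal is just a maximum-subarray problem. The toy sketch below scores the best single ungapped segment with a match/mismatch scheme; HMMER3's MSV instead uses profile-HMM match-state scores, sums multiple segments, and runs in striped SIMD vectors, so this is only a conceptual illustration.

```python
# Best single ungapped local alignment segment (Kadane's algorithm per diagonal).
def best_ungapped_segment(query, target, match=2, mismatch=-3):
    best = 0
    # enumerate diagonals by their offset d = j - i
    for d in range(-(len(query) - 1), len(target)):
        run = 0
        for i in range(len(query)):
            j = i + d
            if 0 <= j < len(target):
                s = match if query[i] == target[j] else mismatch
                run = max(0, run + s)     # restart the segment when it goes negative
                best = max(best, run)
    return best

print(best_ungapped_segment("HMMERFILTER", "XXFILTERXX"))   # 6 matches -> score 12
```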

  12. Self-synchronization for spread spectrum audio watermarks after time scale modification

    NASA Astrophysics Data System (ADS)

    Nadeau, Andrew; Sharma, Gaurav

    2014-02-01

    De-synchronizing operations such as insertion, deletion, and warping pose significant challenges for watermarking. Because these operations are not typical for classical communications, watermarking techniques such as spread spectrum can perform poorly. Conversely, specialized synchronization solutions can be challenging to analyze and optimize. This paper addresses desynchronization for blind spread spectrum watermarks, detected without reference to any unmodified signal, using the robustness properties of short blocks. Synchronization relies on dynamic time warping to search over block alignments to find a sequence with maximum correlation to the watermark. This differs from synchronization schemes that must first locate invariant features of the original signal, or estimate and reverse desynchronization before detection. Without these extra synchronization steps, analysis of the proposed scheme builds on classical SS concepts and allows characterization of the relationship between the size of the search space (number of detection alignment tests) and intrinsic robustness (the continuous search-space region covered by each individual detection test). The critical metrics that determine the search space, robustness, and performance are the time-frequency resolution of the watermarking transform and the block-length resolution of the alignment. Simultaneous robustness to (a) MP3 compression, (b) insertion/deletion, and (c) time-scale modification is also demonstrated for a practical audio watermarking scheme developed in the proposed framework.
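
    For readers unfamiliar with dynamic time warping, the sketch below shows the classic DTW recurrence between two 1-D sequences. It is only meant to illustrate the alignment search the scheme builds on: the detector described above warps block boundaries to maximize correlation with the spread-spectrum watermark, whereas this toy version minimizes a sample-wise distance.

```python
# Classic dynamic time warping (DTW) cost between two 1-D sequences.
import numpy as np

def dtw(x, y):
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # allow a matched step or a one-sided (insert/delete) step
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

a = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0])
b = np.array([0.0, 1.0, 1.5, 2.0, 3.0, 3.0, 2.0, 1.0])   # a time-stretched variant
print(dtw(a, b))
```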

  13. Implementing a user-driven online quality improvement toolkit for cancer care.

    PubMed

    Luck, Jeff; York, Laura S; Bowman, Candice; Gale, Randall C; Smith, Nina; Asch, Steven M

    2015-05-01

    Peer-to-peer collaboration within integrated health systems requires a mechanism for sharing quality improvement lessons. The Veterans Health Administration (VA) developed online compendia of tools linked to specific cancer quality indicators. We evaluated awareness and use of the toolkits, variation across facilities, impact of social marketing, and factors influencing toolkit use. A diffusion of innovations conceptual framework guided the collection of user activity data from the Toolkit Series SharePoint site and an online survey of potential Lung Cancer Care Toolkit users. The VA Toolkit Series site had 5,088 unique visitors in its first 22 months; 5% of users accounted for 40% of page views. Social marketing communications were correlated with site usage. Of survey respondents (n = 355), 54% had visited the site, of whom 24% downloaded at least one tool. Respondents' awareness of the lung cancer quality performance of their facility, and facility participation in quality improvement collaboratives, were positively associated with Toolkit Series site use. Facility-level lung cancer tool implementation varied widely across tool types. The VA Toolkit Series achieved widespread use and a high degree of user engagement, although use varied widely across facilities. The most active users were aware of and active in cancer care quality improvement. Toolkit use seemed to be reinforced by other quality improvement activities. A combination of user-driven tool creation and centralized toolkit development seemed to be effective for leveraging health information technology to spread disease-specific quality improvement tools within an integrated health care system. Copyright © 2015 by American Society of Clinical Oncology.

  14. Parasail: SIMD C library for global, semi-global, and local pairwise sequence alignments.

    PubMed

    Daily, Jeff

    2016-02-10

    Sequence alignment algorithms are a key component of many bioinformatics applications. Though various fast Smith-Waterman local sequence alignment implementations have been developed for x86 CPUs, most are embedded into larger database search tools. In addition, fast implementations of Needleman-Wunsch global sequence alignment and its semi-global variants are not as widespread. This article presents the first software library for local, global, and semi-global pairwise intra-sequence alignments and improves the performance of previous intra-sequence implementations. A faster intra-sequence local pairwise alignment implementation is described and benchmarked, including new global and semi-global variants. Using a 375 residue query sequence a speed of 136 billion cell updates per second (GCUPS) was achieved on a dual Intel Xeon E5-2670 24-core processor system, the highest reported for an implementation based on Farrar's 'striped' approach. Rognes's SWIPE optimal database search application is still generally the fastest available at 1.2 to at best 2.4 times faster than Parasail for sequences shorter than 500 amino acids. However, Parasail was faster for longer sequences. For global alignments, Parasail's prefix scan implementation is generally the fastest, faster even than Farrar's 'striped' approach, however the opal library is faster for single-threaded applications. The software library is designed for 64 bit Linux, OS X, or Windows on processors with SSE2, SSE41, or AVX2. Source code is available from https://github.com/jeffdaily/parasail under the Battelle BSD-style license. Applications that require optimal alignment scores could benefit from the improved performance. For the first time, SIMD global, semi-global, and local alignments are available in a stand-alone C library.
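
    For context, the global-alignment recurrence that SIMD libraries of this kind vectorize is shown below in plain Python with a linear gap penalty. This is only the textbook Needleman-Wunsch score computation; it is not Parasail's API and omits affine gaps, traceback, and the striped vector layout that gives such libraries their speed.

```python
# Textbook Needleman-Wunsch global alignment score (linear gap penalty).
def needleman_wunsch_score(a, b, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    prev = [j * gap for j in range(m + 1)]        # row 0: leading gaps in a
    for i in range(1, n + 1):
        cur = [i * gap] + [0] * m                 # column 0: leading gaps in b
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            cur[j] = max(prev[j - 1] + s, prev[j] + gap, cur[j - 1] + gap)
        prev = cur
    return prev[m]

print(needleman_wunsch_score("GATTACA", "GCATGCU"))
```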

  15. Network Science Research Laboratory (NSRL) Discrete Event Toolkit

    DTIC Science & Technology

    2016-01-01

    Network Science Research Laboratory (NSRL) Discrete Event Toolkit, by Theron Trout and Andrew J Toth, Computational and Information Sciences Directorate, US Army Research Laboratory (ARL-TR-7579, January 2016).

  16. 'Healthy gums do matter': A case study of clinical leadership within primary dental care.

    PubMed

    Moore, D; Saleem, S; Hawthorn, E; Pealing, R; Ashley, M; Bridgman, C

    2015-09-25

    The Health and Social Care Act 2012 heralded wide reaching reforms intended to place clinicians at the heart of the health service. For NHS general dental practice, the conduits for this clinical leadership are the NHS England local professional networks. In Greater Manchester, the local professional network has developed and piloted a clinician led quality improvement project: 'Healthy Gums DO Matter, a Practitioner's Toolkit'. Used as a case study, the project highlighted the following facilitators to clinical leadership in dentistry: supportive environment; mentoring and transformational leadership; alignment of project goals with national policy; funding allowance; cross-boundary collaboration; determination; altruism; and support from wider academic and specialist colleagues. Barriers to clinical leadership identified were: the hierarchical nature of healthcare, territorialism and competing clinical commitments.

  17. GLAD: a system for developing and deploying large-scale bioinformatics grid.

    PubMed

    Teo, Yong-Meng; Wang, Xianbing; Ng, Yew-Kwong

    2005-03-01

    Grid computing is used to solve large-scale bioinformatics problems involving gigabyte-scale databases by distributing the computation across multiple platforms. Until now, developing bioinformatics grid applications has been extremely tedious: one must design and implement the component algorithms and parallelization techniques for different classes of problems, and access remotely located sequence database files of varying formats across the grid. In this study, we propose a grid programming toolkit, GLAD (Grid Life sciences Applications Developer), which facilitates the development and deployment of bioinformatics applications on a grid. GLAD has been developed using ALiCE (Adaptive scaLable Internet-based Computing Engine), a Java-based grid middleware that exploits task-based parallelism. Two benchmark bioinformatics applications, distributed sequence comparison and distributed progressive multiple sequence alignment, have been developed using GLAD.

  18. Plastid: nucleotide-resolution analysis of next-generation sequencing and genomics data.

    PubMed

    Dunn, Joshua G; Weissman, Jonathan S

    2016-11-22

    Next-generation sequencing (NGS) informs many biological questions with unprecedented depth and nucleotide resolution. These assays have created a need for analytical tools that enable users to manipulate data nucleotide-by-nucleotide robustly and easily. Furthermore, because many NGS assays encode information jointly within multiple properties of read alignments - for example, in ribosome profiling, the locations of ribosomes are jointly encoded in alignment coordinates and length - analytical tools are often required to extract the biological meaning from the alignments before analysis. Many assay-specific pipelines exist for this purpose, but there remains a need for user-friendly, generalized, nucleotide-resolution tools that are not limited to specific experimental regimes or analytical workflows. Plastid is a Python library designed specifically for nucleotide-resolution analysis of genomics and NGS data. As such, Plastid is designed to extract assay-specific information from read alignments while retaining generality and extensibility to novel NGS assays. Plastid represents NGS and other biological data as arrays of values associated with genomic or transcriptomic positions, and contains configurable tools to convert data from a variety of sources to such arrays. Plastid also includes numerous tools to manipulate even discontinuous genomic features, such as spliced transcripts, with nucleotide precision. Plastid automatically handles conversion between genomic and feature-centric coordinates, accounting for splicing and strand, freeing users of burdensome accounting. Finally, Plastid's data models use consistent and familiar biological idioms, enabling even beginners to develop sophisticated analytical workflows with minimal effort. Plastid is a versatile toolkit that has been used to analyze data from multiple NGS assays, including RNA-seq, ribosome profiling, and DMS-seq. It forms the genomic engine of our ORF annotation tool, ORF-RATER, and is readily adapted to novel NGS assays. Examples, tutorials, and extensive documentation can be found at https://plastid.readthedocs.io .
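
    The core idea of position-wise analysis is to turn read alignments into an array of counts indexed by genomic position. The toy sketch below does this with made-up alignment coordinates and numpy; it is not Plastid's API, which additionally handles spliced features, strand, and assay-specific mapping rules such as ribosome P-site offsets.

```python
# Toy coverage array: per-nucleotide read counts from (start, end) alignments.
import numpy as np

def coverage_array(alignments, chrom_length):
    """alignments: iterable of half-open (start, end) intervals on one chromosome."""
    cov = np.zeros(chrom_length, dtype=int)
    for start, end in alignments:
        cov[start:end] += 1
    return cov

reads = [(2, 10), (5, 15), (5, 15), (12, 20)]
cov = coverage_array(reads, 25)
print(cov)                     # per-nucleotide counts
print(int(cov[5:10].sum()))    # total counts over a feature of interest
```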

  19. COACH: profile-profile alignment of protein families using hidden Markov models.

    PubMed

    Edgar, Robert C; Sjölander, Kimmen

    2004-05-22

    Alignments of two multiple-sequence alignments, or statistical models of such alignments (profiles), have important applications in computational biology. The increased amount of information in a profile versus a single sequence can lead to more accurate alignments and more sensitive homolog detection in database searches. Several profile-profile alignment methods have been proposed and have been shown to improve sensitivity and alignment quality compared with sequence-sequence methods (such as BLAST) and profile-sequence methods (e.g. PSI-BLAST). Here we present a new approach to profile-profile alignment we call Comparison of Alignments by Constructing Hidden Markov Models (HMMs) (COACH). COACH aligns two multiple sequence alignments by constructing a profile HMM from one alignment and aligning the other to that HMM. We compare the alignment accuracy of COACH with two recently published methods: Yona and Levitt's prof_sim and Sadreyev and Grishin's COMPASS. On two sets of reference alignments selected from the FSSP database, we find that COACH is able, on average, to produce alignments giving the best coverage or the fewest errors, depending on the chosen parameter settings. COACH is freely available from www.drive5.com/lobster

  20. Establishing homologies in protein sequences

    NASA Technical Reports Server (NTRS)

    Dayhoff, M. O.; Barker, W. C.; Hunt, L. T.

    1983-01-01

    Computer-based statistical techniques used to determine homologies between proteins occurring in different species are reviewed. The technique is based on comparison of two protein sequences, either by relating all segments of a given length in one sequence to all segments of the second or by finding the best alignment of the two sequences. Approaches discussed include selection using printed tabulations, identification of very similar sequences, and computer searches of a database. The use of the SEARCH, RELATE, and ALIGN programs (Dayhoff, 1979) is explained; sample data are presented in graphs, diagrams, and tables and the construction of scoring matrices is considered.

  1. Approximate matching of regular expressions.

    PubMed

    Myers, E W; Miller, W

    1989-01-01

    Given a sequence A and regular expression R, the approximate regular expression matching problem is to find a sequence matching R whose optimal alignment with A is the highest scoring of all such sequences. This paper develops an algorithm to solve the problem in time O(MN), where M and N are the lengths of A and R. Thus, the time requirement is asymptotically no worse than for the simpler problem of aligning two fixed sequences. Our method is superior to an earlier algorithm by Wagner and Seiferas in several ways. First, it treats real-valued costs, in addition to integer costs, with no loss of asymptotic efficiency. Second, it requires only O(N) space to deliver just the score of the best alignment. Finally, its structure permits implementation techniques that make it extremely fast in practice. We extend the method to accommodate gap penalties, as required for typical applications in molecular biology, and further refine it to search for sub-strings of A that strongly align with a sequence in R, as required for typical data base searches. We also show how to deliver an optimal alignment between A and R in only O(N + log M) space using O(MN log M) time. Finally, an O(MN(M + N) + N2log N) time algorithm is presented for alignment scoring schemes where the cost of a gap is an arbitrary increasing function of its length.
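    To make the dynamic-programming flavor concrete, the following sketch simplifies the problem to a fixed pattern rather than a regular expression: it computes, in O(MN) time, the best score of the pattern against any substring of A. The scoring values and sequences are chosen arbitrarily for illustration.

      # Minimal sketch: O(M*N) dynamic programming for approximate matching of a fixed
      # pattern P against any substring of A (a simplification of the regular-expression
      # case treated in the paper). Scores: +1 match, -1 mismatch, -1 per gap symbol.
      def best_approximate_match(A, P, match=1, mismatch=-1, gap=-1):
          M, N = len(A), len(P)
          # prev[j] = best score ending at A[:i], P[:j], with a free start anywhere in A
          prev = [j * gap for j in range(N + 1)]
          best = float("-inf")
          for i in range(1, M + 1):
              cur = [0]                                  # free start anywhere in A
              for j in range(1, N + 1):
                  s = match if A[i - 1] == P[j - 1] else mismatch
                  cur.append(max(prev[j - 1] + s,        # align A[i-1] with P[j-1]
                                 prev[j] + gap,          # gap in P
                                 cur[j - 1] + gap))      # gap in A
              best = max(best, cur[N])                   # pattern fully consumed
              prev = cur
          return best

      print(best_approximate_match("GGCACGTTAGG", "ACGTA"))   # 4: one gap away from a perfect match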

  2. TOOLKIT, Version 2. 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, E.; Bagot, B.; McNeill, R.L.

    1990-05-09

    The purpose of this User's Guide is to show by example many of the features of Toolkit II. Some examples will be copies of screens as they appear while running the Toolkit. Other examples will show what the user should enter in various situations; in these instances, what the computer asserts will be in boldface and what the user responds will be in regular type. The User's Guide is divided into four sections. The first section, "FOCUS Databases", will give a broad overview of the Focus administrative databases that are available on the VAX; easy-to-use reports are available for most of them in the Toolkit. The second section, "Getting Started", will cover the steps necessary to log onto the Computer Center VAX cluster and how to start Focus and the Toolkit. The third section, "Using the Toolkit", will discuss some of the features in the Toolkit -- the available reports and how to access them, as well as some utilities. The fourth section, "Helpful Hints", will cover some useful facts about the VAX and Focus as well as some of the more common problems that can occur. The Toolkit is not set in concrete but is continually being revised and improved. If you have any opinions as to changes that you would like to see made to the Toolkit or new features that you would like included, please let us know. Since we do try to respond to the needs of the user and make periodic improvements to the Toolkit, this User's Guide may not correspond exactly to what is available in the computer. In general, changes are made to provide new options or features; rarely is an existing feature deleted.

  3. Local Foods, Local Places Toolkit

    EPA Pesticide Factsheets

    Toolkit to help communities that want to use local foods to spur revitalization. The toolkit gives step-by-step instructions to help communities plan and host a workshop and create an action plan to implement it.

  4. Provider perceptions of an integrated primary care quality improvement strategy: The PPAQ toolkit.

    PubMed

    Beehler, Gregory P; Lilienthal, Kaitlin R

    2017-02-01

    The Primary Care Behavioral Health (PCBH) model of integrated primary care is challenging to implement with high fidelity. The Primary Care Behavioral Health Provider Adherence Questionnaire (PPAQ) was designed to assess provider adherence to essential model components and has recently been adapted into a quality improvement toolkit. The aim of this pilot project was to gather preliminary feedback on providers' perceptions of the acceptability and utility of the PPAQ toolkit for making beneficial practice changes. Twelve mental health providers working in Department of Veterans Affairs integrated primary care clinics participated in semistructured interviews to gather quantitative and qualitative data. Descriptive statistics and qualitative content analysis were used to analyze data. Providers identified several positive features of the PPAQ toolkit organization and structure that resulted in high ratings of acceptability, while also identifying several toolkit components in need of modification to improve usability. Toolkit content was considered highly representative of the PCBH model and therefore could be used as a diagnostic self-assessment of model adherence. The toolkit was considered to be high in applicability to providers regardless of their degree of prior professional preparation or current clinical setting. Additionally, providers identified several system-level contextual factors that could impact the usefulness of the toolkit. These findings suggest that frontline mental health providers working in PCBH settings may be receptive to using an adherence-focused toolkit for ongoing quality improvement. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. rasbhari: Optimizing Spaced Seeds for Database Searching, Read Mapping and Alignment-Free Sequence Comparison.

    PubMed

    Hahn, Lars; Leimeister, Chris-André; Ounit, Rachid; Lonardi, Stefano; Morgenstern, Burkhard

    2016-10-01

    Many algorithms for sequence analysis rely on word matching or word statistics. Often, these approaches can be improved if binary patterns representing match and don't-care positions are used as a filter, such that only those positions of words are considered that correspond to the match positions of the patterns. The performance of these approaches, however, depends on the underlying patterns. Herein, we show that the overlap complexity of a pattern set, a measure introduced by Ilie and Ilie, is closely related to the variance of the number of matches between two evolutionarily related sequences with respect to this pattern set. We propose a modified hill-climbing algorithm to optimize pattern sets for database searching, read mapping and alignment-free sequence comparison of nucleic-acid sequences; our implementation of this algorithm is called rasbhari. Depending on the application at hand, rasbhari can either minimize the overlap complexity of pattern sets, maximize their sensitivity in database searching or minimize the variance of the number of pattern-based matches in alignment-free sequence comparison. We show that, for database searching, rasbhari generates pattern sets with slightly higher sensitivity than existing approaches. In our Spaced Words approach to alignment-free sequence comparison, pattern sets calculated with rasbhari led to more accurate estimates of phylogenetic distances than the randomly generated pattern sets that we previously used. Finally, we used rasbhari to generate patterns for short read classification with CLARK-S. Here too, the sensitivity of the results could be improved, compared to the default patterns of the program. We integrated rasbhari into Spaced Words; the source code of rasbhari is freely available at http://rasbhari.gobics.de/.
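    A minimal sketch of spaced-word matching with a hard-coded binary pattern follows (sequences and pattern are invented for illustration; selecting and optimizing the pattern set is precisely what rasbhari does).

      # Small sketch of spaced-word matching: a binary pattern marks match ('1') and
      # don't-care ('0') positions; two windows match if they agree at every '1'.
      def spaced_word(seq, pos, pattern):
          return "".join(seq[pos + k] for k, p in enumerate(pattern) if p == "1")

      def spaced_matches(s1, s2, pattern="1101001"):
          w = len(pattern)
          words2 = {}
          for j in range(len(s2) - w + 1):
              words2.setdefault(spaced_word(s2, j, pattern), []).append(j)
          hits = []
          for i in range(len(s1) - w + 1):
              for j in words2.get(spaced_word(s1, i, pattern), []):
                  hits.append((i, j))
          return hits

      # [(1, 1)]: the only window pair agreeing at all match positions of the pattern
      print(spaced_matches("ACGTACGTAC", "ACGAACGTTC"))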

  6. Fidelity of Implementation and Instructional Alignment in Response to Intervention Research

    ERIC Educational Resources Information Center

    Hill, David R.; King, Seth A.; Lemons, Christopher J.; Partanen, Jane N.

    2012-01-01

    In this review, we explore the extent to which researchers evaluating the efficacy of Tier 2 elementary reading interventions within the framework of Response to Intervention reported on fidelity of implementation and alignment of instruction between tiers. A literature search identified 22 empirical studies from which conclusions were drawn.…

  7. Green Infrastructure Modeling Toolkit

    EPA Pesticide Factsheets

    EPA's Green Infrastructure Modeling Toolkit bundles five EPA green infrastructure models and tools, along with communication materials, and can be used as a teaching tool and a quick reference resource when making GI implementation decisions.

  8. iPhos: a toolkit to streamline the alkaline phosphatase-assisted comprehensive LC-MS phosphoproteome investigation

    PubMed Central

    2014-01-01

    Background Comprehensive characterization of the phosphoproteome in living cells is critical in signal transduction research. However, the low abundance of phosphopeptides among the total proteome in cells remains an obstacle in mass spectrometry-based proteomic analysis. As a solution, an alternative analytic strategy was developed to confidently identify phosphorylated peptides by using alkaline phosphatase (AP) treatment combined with high-resolution mass spectrometry. Although the process is workable, the key integration steps along the pipeline were mostly done by tedious manual work. Results We developed a software toolkit, iPhos, to facilitate and streamline the work-flow of AP-assisted phosphoproteome characterization. The iPhos toolkit includes one assister and three modules. The iPhos Peak Extraction Assister automates the batch mode peak extraction for multiple liquid chromatography mass spectrometry (LC-MS) runs. iPhos Module-1 can process the peak lists extracted from the LC-MS analyses derived from the original and dephosphorylated samples to mine out potential phosphorylated peptide signals based on the mass shift caused by the loss of integer multiples of the phosphate group. iPhos Module-2 provides customized inclusion lists with peak retention time windows for subsequent targeted LC-MS/MS experiments. Finally, iPhos Module-3 facilitates linking the peptide identifications from protein search engines to the quantification results from pattern-based label-free quantification tools. We further demonstrated the utility of the iPhos toolkit on data from human metastatic lung cancer cells (CL1-5). Conclusions In the comparison study of the control group of CL1-5 cell lysates and the treatment group of dasatinib-treated CL1-5 cell lysates, we demonstrated the applicability of the iPhos toolkit and reported the experimental results based on the iPhos-facilitated phosphoproteome investigation. We also compared the strategy with a pure DDA-based LC-MS/MS phosphoproteome investigation. The results of iPhos-facilitated targeted LC-MS/MS analysis convey more thorough and confident phosphopeptide identification than those of the pure DDA-based analysis. PMID:25521246
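    The mass-shift filtering idea behind Module-1 can be sketched as follows (this is not the iPhos code; the peak masses and tolerance are invented, and 79.96633 Da is the standard monoisotopic mass of HPO3).

      # Sketch of the mass-shift filter idea: a peptide carrying k phosphate groups loses
      # k * 79.96633 Da (HPO3) after alkaline phosphatase treatment, so pair peaks whose
      # masses differ by that amount.
      PHOSPHO = 79.96633  # monoisotopic mass of HPO3 in Da

      def phospho_candidates(original, dephosphorylated, max_sites=3, tol=0.01):
          """Return (original_mass, dephos_mass, n_sites) for peaks consistent with
          losing an integer number of phosphate groups."""
          hits = []
          for m_orig in original:
              for m_deph in dephosphorylated:
                  for k in range(1, max_sites + 1):
                      if abs((m_orig - m_deph) - k * PHOSPHO) <= tol:
                          hits.append((m_orig, m_deph, k))
          return hits

      # Toy peak lists (Da); values are illustrative only.
      orig = [1120.531, 1530.702, 2087.911]
      deph = [1040.565, 1530.702, 1927.978]
      print(phospho_candidates(orig, deph))   # mono- and di-phosphorylated candidates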

  9. Effect of gravito-inertial cues on the coding of orientation in pre-attentive vision.

    PubMed

    Stivalet, P; Marendaz, C; Barraclough, L; Mourareau, C

    1995-01-01

    To see if the spatial reference frame used by pre-attentive vision is specified in a retino-centered frame or in a reference frame integrating visual and nonvisual information (vestibular and somatosensory), subjects were centrifuged in a non-pendular cabin and were asked to search for a target distinguishable from distractors by difference in orientation (Treisman's "pop-out" paradigm [1]). In a control condition, in which subjects were sitting immobilized but not centrifuged, this task gave an asymmetric search pattern: Search was rapid and pre-attentional except when the target was aligned with the horizontal retinal/head axis, in which case search was slow and attentional (2). Results using a centrifuge showed that slow/serial search patterns were obtained when the target was aligned with the subjective horizontal axis (and not with the horizontal retinal/head axis). These data suggest that a multisensory reference frame is used in pre-attentive vision. The results are interpreted in terms of Riccio and Stoffregen's "ecological theory" of orientation in which the vertical and horizontal axes constitute independent reference frames (3).

  10. College Women's Health

    MedlinePlus

    College Women's Social Media Toolkit: use the Social Media Toolkit to share health tips with your campus. The toolkit includes resources for young women, including sample social media messages, flyers, and blog posts.

  11. Every Place Counts Leadership Academy transportation toolkit

    DOT National Transportation Integrated Search

    2016-12-01

    The Transportation Toolkit is meant to explain the transportation process to members of the public with no prior knowledge of transportation. The Toolkit is meant to demystify transportation and help people engage in local transportation decision-mak...

  12. Pybel: a Python wrapper for the OpenBabel cheminformatics toolkit

    PubMed Central

    O'Boyle, Noel M; Morley, Chris; Hutchison, Geoffrey R

    2008-01-01

    Background Scripting languages such as Python are ideally suited to common programming tasks in cheminformatics such as data analysis and parsing information from files. However, for reasons of efficiency, cheminformatics toolkits such as the OpenBabel toolkit are often implemented in compiled languages such as C++. We describe Pybel, a Python module that provides access to the OpenBabel toolkit. Results Pybel wraps the direct toolkit bindings to simplify common tasks such as reading and writing molecular files and calculating fingerprints. Extensive use is made of Python iterators to simplify loops such as that over all the molecules in a file. A Pybel Molecule can be easily interconverted to an OpenBabel OBMol to access those methods or attributes not wrapped by Pybel. Conclusion Pybel allows cheminformaticians to rapidly develop Python scripts that manipulate chemical information. It is open source, available cross-platform, and offers the power of the OpenBabel toolkit to Python programmers. PMID:18328109

  13. Pybel: a Python wrapper for the OpenBabel cheminformatics toolkit.

    PubMed

    O'Boyle, Noel M; Morley, Chris; Hutchison, Geoffrey R

    2008-03-09

    Scripting languages such as Python are ideally suited to common programming tasks in cheminformatics such as data analysis and parsing information from files. However, for reasons of efficiency, cheminformatics toolkits such as the OpenBabel toolkit are often implemented in compiled languages such as C++. We describe Pybel, a Python module that provides access to the OpenBabel toolkit. Pybel wraps the direct toolkit bindings to simplify common tasks such as reading and writing molecular files and calculating fingerprints. Extensive use is made of Python iterators to simplify loops such as that over all the molecules in a file. A Pybel Molecule can be easily interconverted to an OpenBabel OBMol to access those methods or attributes not wrapped by Pybel. Pybel allows cheminformaticians to rapidly develop Python scripts that manipulate chemical information. It is open source, available cross-platform, and offers the power of the OpenBabel toolkit to Python programmers.
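    A short usage sketch based on Pybel's documented examples follows (the import path shown assumes OpenBabel 3.x; older releases use a top-level "import pybel", and the SDF file name is hypothetical).

      # Short usage sketch of Pybel (import path assumes OpenBabel 3.x).
      from openbabel import pybel

      ethanol = pybel.readstring("smi", "CCO")          # parse a SMILES string
      print(ethanol.molwt)                              # molecular weight
      print(ethanol.write("can").strip())               # canonical SMILES

      fp1 = ethanol.calcfp()                            # default fingerprint
      fp2 = pybel.readstring("smi", "CCN").calcfp()
      print(fp1 | fp2)                                  # Tanimoto similarity

      # Iterate over molecules in a file; drop to the underlying OBMol when needed.
      # for mol in pybel.readfile("sdf", "library.sdf"):   # hypothetical file name
      #     heavy_atoms = mol.OBMol.NumHvyAtoms()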

  14. Expanding the species and chemical diversity of Penicillium section Cinnamopurpurea

    USDA-ARS?s Scientific Manuscript database

    A set of isolates genetically similar to or potentially conspecific with an unidentified Penicillium isolate NRRL 735, was assembled using a Basic Local Alignment Search Tool (BLAST) search of internal transcribed spacer (ITS) similarity among described (GenBank) and undescribed Penicillium isolates...

  15. WEB-server for search of a periodicity in amino acid and nucleotide sequences

    NASA Astrophysics Data System (ADS)

    Frenkel, F. E.; Skryabin, K. G.; Korotkov, E. V.

    2017-12-01

    A new web server (http://victoria.biengi.ac.ru/splinter/login.php) was designed and developed to search for periodicity in nucleotide and amino acid sequences. The web server is based on a new mathematical method for constructing multiple alignments, founded on the optimization of position weight matrices and on two-dimensional dynamic programming. This approach allows the construction of multiple alignments of weakly similar amino acid and nucleotide sequences that have accumulated more than 1.5 substitutions per amino acid or nucleotide, without performing pairwise sequence comparisons. The article explains the principles of the web server's operation and presents two examples of analyzing amino acid and nucleotide sequences, as well as the information that can be obtained using the web server.

  16. Finding similar nucleotide sequences using network BLAST searches.

    PubMed

    Ladunga, Istvan

    2009-06-01

    The Basic Local Alignment Search Tool (BLAST) is a keystone of bioinformatics due to its performance and user-friendliness. Beginner and intermediate users will learn how to design and submit blastn and Megablast searches on the Web pages at the National Center for Biotechnology Information. We map nucleic acid sequences to genomes, find identical or similar mRNA, expressed sequence tag, and noncoding RNA sequences, and run Megablast searches, which are much faster than blastn. Understanding results is assisted by taxonomy reports, genomic views, and multiple alignments. We interpret expected frequency thresholds, biological significance, and statistical significance. Weak hits provide no evidence, but hints for further analyses. We find genes that may code for homologous proteins by translated BLAST. We reduce false positives by filtering out low-complexity regions. Parsed BLAST results can be integrated into analysis pipelines. Links in the output connect to Entrez, PUBMED, structural, sequence, interaction, and expression databases. This facilitates integration with a wide spectrum of biological knowledge.
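    For readers who prefer a programmatic route, a hedged sketch using Biopython's NCBIWWW and NCBIXML modules follows (this is not part of the web protocol described in the record; it requires network access, and the query shown is just an example nucleotide string).

      # Programmatic counterpart to the web searches described above, sketched with
      # Biopython's NCBI interface (requires network access).
      from Bio.Blast import NCBIWWW, NCBIXML

      query = "AGCTTTTCATTCTGACTGCAACGGGCAATATGTCTCTGTGTGGATTAAAAAAAGAGTGTCTGATAGCAGC"
      handle = NCBIWWW.qblast("blastn", "nt", query, megablast=True, expect=1e-5)
      record = NCBIXML.read(handle)

      # Report the top hits with their E-values and identity counts.
      for alignment in record.alignments[:5]:
          hsp = alignment.hsps[0]
          print(alignment.title[:60], hsp.expect, hsp.identities, "/", hsp.align_length)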

  17. Shape-Based Virtual Screening with Volumetric Aligned Molecular Shapes

    PubMed Central

    Koes, David Ryan; Camacho, Carlos J.

    2014-01-01

    Shape-based virtual screening is an established and effective method for identifying small molecules that are similar in shape and function to a reference ligand. We describe a new method of shape-based virtual screening, volumetric aligned molecular shapes (VAMS). VAMS uses efficient data structures to encode and search molecular shapes. We demonstrate that VAMS is an effective method for shape-based virtual screening and that it can be successfully used as a pre-filter to accelerate more computationally demanding search algorithms. Unique to VAMS is a novel minimum/maximum shape constraint query for precisely specifying the desired molecular shape. Shape constraint searches in VAMS are particularly efficient and millions of shapes can be searched in a fraction of a second. We compare the performance of VAMS with two other shape-based virtual screening algorithms on a benchmark of 102 protein targets consisting of more than 32 million molecular shapes, and find that VAMS provides a competitive trade-off between run-time performance and virtual screening performance. PMID:25049193

  18. BrainLiner: A Neuroinformatics Platform for Sharing Time-Aligned Brain-Behavior Data

    PubMed Central

    Takemiya, Makoto; Majima, Kei; Tsukamoto, Mitsuaki; Kamitani, Yukiyasu

    2016-01-01

    Data-driven neuroscience aims to find statistical relationships between brain activity and task behavior from large-scale datasets. To facilitate high-throughput data processing and modeling, we created BrainLiner as a web platform for sharing time-aligned, brain-behavior data. Using an HDF5-based data format, BrainLiner treats brain activity and data related to behavior with the same salience, aligning both behavioral and brain activity data on a common time axis. This facilitates learning the relationship between behavior and brain activity. Using a common data file format also simplifies data processing and analyses. Properties describing data are unambiguously defined using a schema, allowing machine-readable definition of data. The BrainLiner platform allows users to upload and download data, as well as to explore and search for data from the web platform. A WebGL-based data explorer can visualize highly detailed neurophysiological data from within the web browser, and a data-driven search feature allows users to search for similar time windows of data. This increases transparency, and allows for visual inspection of neural coding. BrainLiner thus provides an essential set of tools for data sharing and data-driven modeling. PMID:26858636

  19. Capturing Petascale Application Characteristics with the Sequoia Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vetter, Jeffrey S; Bhatia, Nikhil; Grobelny, Eric M

    2005-01-01

    Characterization of the computation, communication, memory, and I/O demands of current scientific applications is crucial for identifying which technologies will enable petascale scientific computing. In this paper, we present the Sequoia Toolkit for characterizing HPC applications. The Sequoia Toolkit consists of the Sequoia trace capture library and the Sequoia Event Analysis Library, or SEAL, which facilitates the development of tools for analyzing Sequoia event traces. Using the Sequoia Toolkit, we have characterized the behavior of application runs with up to 2048 application processes. To illustrate the use of the Sequoia Toolkit, we present a preliminary characterization of LAMMPS, a molecular dynamics application of great interest to the computational biology community.

  20. An integrative data analysis platform for gene set analysis and knowledge discovery in a data warehouse framework.

    PubMed

    Chen, Yi-An; Tripathi, Lokesh P; Mizuguchi, Kenji

    2016-01-01

    Data analysis is one of the most critical and challenging steps in drug discovery and disease biology. A user-friendly resource to visualize and analyse high-throughput data provides a powerful medium for both experimental and computational biologists to understand vastly different biological data types and obtain a concise, simplified and meaningful output for better knowledge discovery. We have previously developed TargetMine, an integrated data warehouse optimized for target prioritization. Here we describe how upgraded and newly modelled data types in TargetMine can now survey the wider biological and chemical data space, relevant to drug discovery and development. To enhance the scope of TargetMine from target prioritization to broad-based knowledge discovery, we have also developed a new auxiliary toolkit to assist with data analysis and visualization in TargetMine. This toolkit features interactive data analysis tools to query and analyse the biological data compiled within the TargetMine data warehouse. The enhanced system enables users to discover new hypotheses interactively by performing complicated searches with no programming and obtaining the results in an easy to comprehend output format. Database URL: http://targetmine.mizuguchilab.org. © The Author(s) 2016. Published by Oxford University Press.

  1. An integrative data analysis platform for gene set analysis and knowledge discovery in a data warehouse framework

    PubMed Central

    Chen, Yi-An; Tripathi, Lokesh P.; Mizuguchi, Kenji

    2016-01-01

    Data analysis is one of the most critical and challenging steps in drug discovery and disease biology. A user-friendly resource to visualize and analyse high-throughput data provides a powerful medium for both experimental and computational biologists to understand vastly different biological data types and obtain a concise, simplified and meaningful output for better knowledge discovery. We have previously developed TargetMine, an integrated data warehouse optimized for target prioritization. Here we describe how upgraded and newly modelled data types in TargetMine can now survey the wider biological and chemical data space, relevant to drug discovery and development. To enhance the scope of TargetMine from target prioritization to broad-based knowledge discovery, we have also developed a new auxiliary toolkit to assist with data analysis and visualization in TargetMine. This toolkit features interactive data analysis tools to query and analyse the biological data compiled within the TargetMine data warehouse. The enhanced system enables users to discover new hypotheses interactively by performing complicated searches with no programming and obtaining the results in an easy to comprehend output format. Database URL: http://targetmine.mizuguchilab.org PMID:26989145

  2. Evaluation of State Plans and the Livestock Emergency Response Plan (LERP).

    PubMed

    Schaffer, Amy M; Burton, Kenneth R

    The Livestock Emergency Response Plan (LERP) was published in 2014 as a toolkit to assist state agricultural emergency planners in writing or modifying state foreign animal disease/high-consequence disease (FAD/HCD) plans. This research serves as a follow-up to and expands on an initial survey conducted in 2011 by the Department of Homeland Security Office of Health Affairs, Food, Ag, and Veterinary Defense Branch. The purpose of this project is to describe the status of current state animal disease response plans in relation to how closely their content, order, and terminology relate to that described in the LERP template. The analysis was compared to the 2011 study to identify advances, trends, continued areas for increased alignment, and fulfillment of planning gaps in individual state plans. While vast improvements were made in the status of state animal disease response plans from 2011 to 2016, there is nonetheless significant room for enhancing consistency between and identifying gaps in FAD/HCD plans. As awareness of the LERP toolkit grows, the authors hope its use as a template by the states will expand accordingly, thereby increasing consistency between plans and more thoroughly addressing challenges in an FAD/HCD outbreak. The results of this study support the need for curriculum planning resources at the state level. Development of a training curriculum and planning workshops for state agriculture emergency planners will produce a consistent planning philosophy and skill set among state planners-another means of indirectly addressing current planning gaps in agricultural emergency response.

  3. Spice Tools Supporting Planetary Remote Sensing

    NASA Astrophysics Data System (ADS)

    Acton, C.; Bachman, N.; Semenov, B.; Wright, E.

    2016-06-01

    NASA's "SPICE"* ancillary information system has gradually become the de facto international standard for providing scientists the fundamental observation geometry needed to perform photogrammetry, map making and other kinds of planetary science data analysis. SPICE provides position and orientation ephemerides of both the robotic spacecraft and the target body; target body size and shape data; instrument mounting alignment and field-of-view geometry; reference frame specifications; and underlying time system conversions. SPICE comprises not only data, but also a large suite of software, known as the SPICE Toolkit, used to access those data and subsequently compute derived quantities such as instrument viewing latitude/longitude, lighting angles, altitude, etc. In existence since the days of the Magellan mission to Venus, the SPICE system has continuously grown to better meet the needs of scientists and engineers. For example, originally the SPICE Toolkit was offered only in Fortran 77, but is now available in C, IDL, MATLAB, and Java Native Interface. SPICE calculations were originally available only using APIs (subroutines), but can now be executed using a client-server interface to a geometry engine. Originally SPICE "products" were only available in numeric form, but now SPICE data visualization is also available. The SPICE components are free of cost, license and export restrictions. Substantial tutorials and programming lessons help new users learn to employ SPICE calculations in their own programs. The SPICE system is implemented and maintained by the Navigation and Ancillary Information Facility (NAIF), a component of NASA's Planetary Data System (PDS). * Spacecraft, Planet, Instrument, Camera-matrix, Events
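    A brief sketch using SpiceyPy, a community Python wrapper around the SPICE Toolkit, follows (the record itself lists Fortran 77, C, IDL, MATLAB, and Java Native Interface; the meta-kernel file name below is hypothetical and must point at real kernels for the calls to succeed).

      # Sketch using SpiceyPy, a community Python wrapper around the SPICE Toolkit.
      # "mission_meta.tm" is a hypothetical meta-kernel listing the needed SPK/LSK/PCK files.
      import spiceypy as spice

      spice.furnsh("mission_meta.tm")                        # load kernels
      et = spice.str2et("2016-06-01T12:00:00")               # UTC -> ephemeris time

      # Position of Mars as seen from Earth, J2000 frame, light-time + stellar aberration.
      pos, light_time = spice.spkpos("MARS", et, "J2000", "LT+S", "EARTH")
      print(pos, light_time)

      spice.kclear()                                         # unload all kernels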

  4. A Toolkit for ARB to Integrate Custom Databases and Externally Built Phylogenies

    DOE PAGES

    Essinger, Steven D.; Reichenberger, Erin; Morrison, Calvin; ...

    2015-01-21

    Researchers are perpetually amassing biological sequence data. The computational approaches employed by ecologists for organizing this data (e.g. alignment, phylogeny, etc.) typically scale nonlinearly in execution time with the size of the dataset. This often serves as a bottleneck for processing experimental data since many molecular studies are characterized by massive datasets. To keep up with experimental data demands, ecologists are forced to choose between continually upgrading expensive in-house computer hardware or outsourcing the most demanding computations to the cloud. Outsourcing is attractive since it is the least expensive option, but does not necessarily allow direct user interaction with the data for exploratory analysis. Desktop analytical tools such as ARB are indispensable for this purpose, but they do not necessarily offer a convenient solution for the coordination and integration of datasets between local and outsourced destinations. Therefore, researchers are currently left with an undesirable tradeoff between computational throughput and analytical capability. To mitigate this tradeoff we introduce a software package to leverage the utility of the interactive exploratory tools offered by ARB with the computational throughput of cloud-based resources. Our pipeline serves as middleware between the desktop and the cloud allowing researchers to form local custom databases containing sequences and metadata from multiple resources and a method for linking data outsourced for computation back to the local database. Furthermore, a tutorial implementation of the toolkit is provided in the supporting information, S1 Tutorial.

  5. A Toolkit for ARB to Integrate Custom Databases and Externally Built Phylogenies

    PubMed Central

    Essinger, Steven D.; Reichenberger, Erin; Morrison, Calvin; Blackwood, Christopher B.; Rosen, Gail L.

    2015-01-01

    Researchers are perpetually amassing biological sequence data. The computational approaches employed by ecologists for organizing this data (e.g. alignment, phylogeny, etc.) typically scale nonlinearly in execution time with the size of the dataset. This often serves as a bottleneck for processing experimental data since many molecular studies are characterized by massive datasets. To keep up with experimental data demands, ecologists are forced to choose between continually upgrading expensive in-house computer hardware or outsourcing the most demanding computations to the cloud. Outsourcing is attractive since it is the least expensive option, but does not necessarily allow direct user interaction with the data for exploratory analysis. Desktop analytical tools such as ARB are indispensable for this purpose, but they do not necessarily offer a convenient solution for the coordination and integration of datasets between local and outsourced destinations. Therefore, researchers are currently left with an undesirable tradeoff between computational throughput and analytical capability. To mitigate this tradeoff we introduce a software package to leverage the utility of the interactive exploratory tools offered by ARB with the computational throughput of cloud-based resources. Our pipeline serves as middleware between the desktop and the cloud allowing researchers to form local custom databases containing sequences and metadata from multiple resources and a method for linking data outsourced for computation back to the local database. A tutorial implementation of the toolkit is provided in the supporting information, S1 Tutorial. Availability: http://www.ece.drexel.edu/gailr/EESI/tutorial.php. PMID:25607539

  6. A graphical user interface (GUI) toolkit for the calculation of three-dimensional (3D) multi-phase biological effective dose (BED) distributions including statistical analyses.

    PubMed

    Kauweloa, Kevin I; Gutierrez, Alonso N; Stathakis, Sotirios; Papanikolaou, Niko; Mavroidis, Panayiotis

    2016-07-01

    A toolkit has been developed for calculating the 3-dimensional biological effective dose (BED) distributions in multi-phase, external beam radiotherapy treatments such as those applied in liver stereotactic body radiation therapy (SBRT) and in multi-prescription treatments. This toolkit also provides a wide range of statistical results related to dose and BED distributions. MATLAB 2010a, version 7.10 was used to create this GUI toolkit. The input data consist of the dose distribution matrices, organ contour coordinates, and treatment planning parameters from the treatment planning system (TPS). The toolkit has the capability of calculating the multi-phase BED distributions using different formulas (denoted as true and approximate). Following the calculations of the BED distributions, the dose and BED distributions can be viewed in different projections (e.g. coronal, sagittal and transverse). The different elements of this toolkit are presented and the important steps for the execution of its calculations are illustrated. The toolkit is applied to data from brain, head & neck, and prostate cancer patients who received primary and boost phases, in order to demonstrate its capability in calculating BED distributions, as well as in measuring the inaccuracy and imprecision of the approximate BED distributions. Finally, the clinical situations in which the use of the present toolkit would have a significant clinical impact are indicated. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
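    For reference, the conventional linear-quadratic expression for BED, summed voxel-wise over treatment phases, is shown below; the abstract does not state whether this corresponds to the toolkit's "true" or "approximate" multi-phase formula, so treat it as the standard textbook form only.

      % Conventional linear-quadratic BED, evaluated per voxel and summed over treatment
      % phases i, each delivering n_i fractions of dose d_i.
      \[
        \mathrm{BED} \;=\; \sum_{i} n_i\, d_i \left( 1 + \frac{d_i}{\alpha/\beta} \right)
      \]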

  7. High-speed all-optical DNA local sequence alignment based on a three-dimensional artificial neural network.

    PubMed

    Maleki, Ehsan; Babashah, Hossein; Koohi, Somayyeh; Kavehvash, Zahra

    2017-07-01

    This paper presents an optical processing approach for exploring a large number of genome sequences. Specifically, we propose an optical correlator for global alignment and an extended moiré matching technique for local analysis of spatially coded DNA, whose output is fed to a novel three-dimensional artificial neural network for local DNA alignment. All-optical implementation of the proposed 3D artificial neural network is developed and its accuracy is verified in Zemax. Thanks to its parallel processing capability, the proposed structure performs local alignment of 4 million sequences of 150 base pairs in a few seconds, which is much faster than its electrical counterparts, such as the basic local alignment search tool.

  8. A Grassroots Remote Sensing Toolkit Using Live Coding, Smartphones, Kites and Lightweight Drones

    PubMed Central

    Anderson, K.; Griffiths, D.; DeBell, L.; Hancock, S.; Duffy, J. P.; Shutler, J. D.; Reinhardt, W. J.; Griffiths, A.

    2016-01-01

    This manuscript describes the development of an android-based smartphone application for capturing aerial photographs and spatial metadata automatically, for use in grassroots mapping applications. The aim of the project was to exploit the plethora of on-board sensors within modern smartphones (accelerometer, GPS, compass, camera) to generate ready-to-use spatial data from lightweight aerial platforms such as drones or kites. A visual coding ‘scheme blocks’ framework was used to build the application (‘app’), so that users could customise their own data capture tools in the field. The paper reports on the coding framework, then shows the results of test flights from kites and lightweight drones and finally shows how open-source geospatial toolkits were used to generate geographical information system (GIS)-ready GeoTIFF images from the metadata stored by the app. Two Android smartphones were used in testing–a high specification OnePlus One handset and a lower cost Acer Liquid Z3 handset, to test the operational limits of the app on phones with different sensor sets. We demonstrate that best results were obtained when the phone was attached to a stable single line kite or to a gliding drone. Results show that engine or motor vibrations from powered aircraft required dampening to ensure capture of high quality images. We demonstrate how the products generated from the open-source processing workflow are easily used in GIS. The app can be downloaded freely from the Google store by searching for ‘UAV toolkit’ (UAV toolkit 2016), and used wherever an Android smartphone and aerial platform are available to deliver rapid spatial data (e.g. in supporting decision-making in humanitarian disaster-relief zones, in teaching or for grassroots remote sensing and democratic mapping). PMID:27144310

  9. Supervised learning of tools for content-based search of image databases

    NASA Astrophysics Data System (ADS)

    Delanoy, Richard L.

    1996-03-01

    A computer environment, called the Toolkit for Image Mining (TIM), is being developed with the goal of enabling users with diverse interests and varied computer skills to create search tools for content-based image retrieval and other pattern matching tasks. Search tools are generated using a simple paradigm of supervised learning that is based on the user pointing at mistakes of classification made by the current search tool. As mistakes are identified, a learning algorithm uses the identified mistakes to build up a model of the user's intentions, construct a new search tool, apply the search tool to a test image, display the match results as feedback to the user, and accept new inputs from the user. Search tools are constructed in the form of functional templates, which are generalized matched filters capable of knowledge-based image processing. The ability of this system to learn the user's intentions from experience contrasts with other existing approaches to content-based image retrieval that base searches on the characteristics of a single input example or on a predefined and semantically constrained textual query. Currently, TIM is capable of learning spectral and textural patterns, but should be adaptable to the learning of shapes, as well. Possible applications of TIM include not only content-based image retrieval, but also quantitative image analysis, the generation of metadata for annotating images, data prioritization or data reduction in bandwidth-limited situations, and the construction of components for larger, more complex computer vision algorithms.

  10. The Comprehension Toolkit

    ERIC Educational Resources Information Center

    Harvey, Stephanie; Goudvis, Anne

    2005-01-01

    "The Comprehension Toolkit" focuses on reading, writing, talking, listening, and investigating, to deepen understanding of nonfiction texts. With a focus on strategic thinking, this toolkit's lessons provide a foundation for developing independent readers and learners. It also provides an alternative to the traditional assign and correct…

  11. JAVA Stereo Display Toolkit

    NASA Technical Reports Server (NTRS)

    Edmonds, Karina

    2008-01-01

    This toolkit provides a common interface for displaying graphical user interface (GUI) components in stereo using either specialized stereo display hardware (e.g., liquid crystal shutter or polarized glasses) or anaglyph display (red/blue glasses) on standard workstation displays. An application using this toolkit will work without modification in either environment, allowing stereo software to reach a wider audience without sacrificing high-quality display on dedicated hardware. The toolkit is written in Java for use with the Swing GUI Toolkit and has cross-platform compatibility. It hooks into the graphics system, allowing any standard Swing component to be displayed in stereo. It uses the OpenGL graphics library to control the stereo hardware and to perform the rendering. It also supports anaglyph and special stereo hardware using the same API (application-program interface), and has the ability to simulate color stereo in anaglyph mode by combining the red band of the left image with the green/blue bands of the right image. This is a low-level toolkit that accomplishes simply the display of components (including the JadeDisplay image display component). It does not include higher-level functions such as disparity adjustment, 3D cursor, or overlays, all of which can be built using this toolkit.
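    The color-anaglyph combination described above can be sketched in a few lines (NumPy/Python here rather than the toolkit's Java; array shapes are illustrative).

      # Sketch of the color-anaglyph combination: red channel from the left image,
      # green and blue channels from the right image.
      import numpy as np

      def color_anaglyph(left_rgb, right_rgb):
          """left_rgb, right_rgb: uint8 arrays of shape (H, W, 3)."""
          out = np.empty_like(left_rgb)
          out[..., 0] = left_rgb[..., 0]      # red from the left eye's image
          out[..., 1:] = right_rgb[..., 1:]   # green/blue from the right eye's image
          return out

      left = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
      right = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
      print(color_anaglyph(left, right).shape)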

  12. Code Parallelization with CAPO: A User Manual

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry; Biegel, Bryan (Technical Monitor)

    2001-01-01

    A software tool has been developed to assist the parallelization of scientific codes. This tool, CAPO, extends an existing parallelization toolkit, CAPTools, developed at the University of Greenwich, to generate OpenMP parallel codes for shared memory architectures. This is an interactive toolkit to transform a serial Fortran application code into an equivalent parallel version of the software in a small fraction of the time normally required for a manual parallelization. We first discuss the way in which loop types are categorized and how efficient OpenMP directives can be defined and inserted into the existing code using the in-depth interprocedural analysis. The use of the toolkit on a number of application codes, ranging from benchmarks to real-world applications, is presented. This demonstrates the great potential of using the toolkit to quickly parallelize serial programs as well as the good performance achievable on a large number of processors. The second part of the document gives references to the parameters and the graphic user interface implemented in the toolkit. Finally, a set of tutorials is included for hands-on experience with this toolkit.

  13. GuiTope: an application for mapping random-sequence peptides to protein sequences.

    PubMed

    Halperin, Rebecca F; Stafford, Phillip; Emery, Jack S; Navalkar, Krupa Arun; Johnston, Stephen Albert

    2012-01-03

    Random-sequence peptide libraries are a commonly used tool to identify novel ligands for binding antibodies, other proteins, and small molecules. It is often of interest to compare the selected peptide sequences to the natural protein binding partners to infer the exact binding site or the importance of particular residues. The ability to search a set of sequences for similarity to a set of peptides may sometimes enable the prediction of an antibody epitope or a novel binding partner. We have developed a software application designed specifically for this task. GuiTope provides a graphical user interface for aligning peptide sequences to protein sequences. All alignment parameters are accessible to the user including the ability to specify the amino acid frequency in the peptide library; these frequencies often differ significantly from those assumed by popular alignment programs. It also includes a novel feature to align di-peptide inversions, which we have found improves the accuracy of antibody epitope prediction from peptide microarray data and shows utility in analyzing phage display datasets. Finally, GuiTope can randomly select peptides from a given library to estimate a null distribution of scores and calculate statistical significance. GuiTope provides a convenient method for comparing selected peptide sequences to protein sequences, including flexible alignment parameters, novel alignment features, ability to search a database, and statistical significance of results. The software is available as an executable (for PC) at http://www.immunosignature.com/software and ongoing updates and source code will be available at sourceforge.net.
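    The significance step described above can be sketched as follows (Python, with an identity-count score standing in for GuiTope's actual scoring, which also supports custom residue frequencies and di-peptide inversions; the sequences are invented).

      # Sketch of the significance step: score a selected peptide against a protein, then
      # compare with a null distribution built from random peptides drawn with
      # user-specified residue frequencies. The identity-count score is a placeholder.
      import random

      AA = "ACDEFGHIKLMNPQRSTVWY"

      def best_window_score(peptide, protein):
          k = len(peptide)
          return max(sum(a == b for a, b in zip(peptide, protein[i:i + k]))
                     for i in range(len(protein) - k + 1))

      def empirical_pvalue(peptide, protein, n_random=1000, freqs=None, seed=0):
          rng = random.Random(seed)
          weights = [freqs.get(a, 0.0) for a in AA] if freqs else None
          obs = best_window_score(peptide, protein)
          null = [best_window_score("".join(rng.choices(AA, weights=weights, k=len(peptide))),
                                    protein)
                  for _ in range(n_random)]
          return (1 + sum(s >= obs for s in null)) / (n_random + 1)

      protein = "MSTNPKPQRKTKRNTNRRPQDVKFPGG"      # toy sequence
      print(empirical_pvalue("KPQRK", protein))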

  14. Hydropower Regulatory and Permitting Information Desktop (RAPID) Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levine, Aaron L

    Hydropower Regulatory and Permitting Information Desktop (RAPID) Toolkit presentation from the WPTO FY14-FY16 Peer Review. The toolkit is aimed at regulatory agencies, consultants, project developers, the public, and any other party interested in learning more about the hydropower regulatory process.

  15. Student Success Center Toolkit

    ERIC Educational Resources Information Center

    Jobs For the Future, 2014

    2014-01-01

    "Student Success Center Toolkit" is a compilation of materials organized to assist Student Success Center directors as they staff, launch, operate, and sustain Centers. The toolkit features materials created and used by existing Centers, such as staffing and budgeting templates, launch materials, sample meeting agendas, and fundraising…

  16. Veterinary Immunology Committee Toolkit Workshop 2010: Progress and plans

    USDA-ARS?s Scientific Manuscript database

    The Third Veterinary Immunology Committee (VIC) Toolkit Workshop took place at the Ninth International Veterinary Immunology Symposium (IVIS) in Tokyo, Japan on August 18, 2010. The Workshop built on previous Toolkit Workshops and covered various aspects of reagent development, commercialisation an...

  17. 77 FR 73023 - U.S. Environmental Solutions Toolkit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-07

    ... foreign end-users of environmental technologies that will outline U.S. approaches to a series of environmental problems and highlight participating U.S. vendors of relevant U.S. technologies. The Toolkit will... DEPARTMENT OF COMMERCE International Trade Administration U.S. Environmental Solutions Toolkit...

  18. Development of an educational 'toolkit' for health professionals and their patients with prediabetes: the WAKEUP study (Ways of Addressing Knowledge Education and Understanding in Pre-diabetes).

    PubMed

    Evans, P H; Greaves, C; Winder, R; Fearn-Smith, J; Campbell, J L

    2007-07-01

    To identify key messages about pre-diabetes and to design, develop and pilot an educational toolkit to address the information needs of patients and health professionals. Mixed qualitative methodology within an action research framework. Focus group interviews with patients and health professionals and discussion with an expert reference group aimed to identify the important messages and produce a draft toolkit. Two action research cycles were then conducted in two general practices, during which the draft toolkit was used and video-taped consultations and follow-up patient interviews provided further data. Framework analysis techniques were used to examine the data and to elicit action points for improving the toolkit. The key messages about pre-diabetes concerned the seriousness of the condition, the preventability of progression to diabetes, and the need for lifestyle change. As well as feedback on the acceptability and use of the toolkit, four main themes were identified in the data: knowledge and education needs (of both patients and health professionals); communicating knowledge and motivating change; redesign of practice systems to support pre-diabetes management and the role of the health professional. The toolkit we developed was found to be an acceptable and useful resource for both patients and health practitioners. Three key messages about pre-diabetes were identified. A toolkit of information materials for patients with pre-diabetes and the health professionals and ideas for improving practice systems for managing pre-diabetes were developed and successfully piloted. Further work is needed to establish the best mode of delivery of the WAKEUP toolkit.

  19. Toolkit of Available EPA Green Infrastructure Modeling Software. National Stormwater Calculator

    EPA Science Inventory

    This webinar will present a toolkit consisting of five EPA green infrastructure models and tools, along with communication material. This toolkit can be used as a teaching and quick reference resource for use by planners and developers when making green infrastructure implementat...

  20. Teacher Quality Toolkit

    ERIC Educational Resources Information Center

    Lauer, Patricia A.; Dean, Ceri B.

    2004-01-01

    This Teacher Quality Toolkit aims to support the continuum of teacher learning by providing tools that institutions of higher education, districts, and schools can use to improve both preservice and inservice teacher education. The toolkit incorporates McREL's accumulated knowledge and experience related to teacher quality and standards-based…

  1. 77 FR 73022 - U.S. Environmental Solutions Toolkit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-07

    ... Commerce continues to develop the web- based U.S. Environmental Solutions Toolkit to be used by foreign environmental officials and foreign end-users of environmental technologies that will outline U.S. approaches to... DEPARTMENT OF COMMERCE International Trade Administration U.S. Environmental Solutions Toolkit...

  2. Cinfony – combining Open Source cheminformatics toolkits behind a common interface

    PubMed Central

    O'Boyle, Noel M; Hutchison, Geoffrey R

    2008-01-01

    Background Open Source cheminformatics toolkits such as OpenBabel, the CDK and the RDKit share the same core functionality but support different sets of file formats and forcefields, and calculate different fingerprints and descriptors. Despite their complementary features, using these toolkits in the same program is difficult as they are implemented in different languages (C++ versus Java), have different underlying chemical models and have different application programming interfaces (APIs). Results We describe Cinfony, a Python module that presents a common interface to all three of these toolkits, allowing the user to easily combine methods and results from any of the toolkits. In general, the run time of the Cinfony modules is almost as fast as accessing the underlying toolkits directly from C++ or Java, but Cinfony makes it much easier to carry out common tasks in cheminformatics such as reading file formats and calculating descriptors. Conclusion By providing a simplified interface and improving interoperability, Cinfony makes it easy to combine complementary features of OpenBabel, the CDK and the RDKit. PMID:19055766

  3. Free DICOM de-identification tools in clinical research: functioning and safety of patient privacy.

    PubMed

    Aryanto, K Y E; Oudkerk, M; van Ooijen, P M A

    2015-12-01

    To compare non-commercial DICOM toolkits for their de-identification ability in removing a patient's personal health information (PHI) from a DICOM header. Ten DICOM toolkits were selected for de-identification tests. Tests were performed by using the system's default de-identification profile and, subsequently, the tools' best adjusted settings. We aimed to eliminate fifty elements considered to contain identifying patient information. The tools were also examined for their respective methods of customization. Only one tool was able to de-identify all required elements with the default setting. Not all of the toolkits provide a customizable de-identification profile. Six tools allowed changes by selecting the provided profiles, giving input through a graphical user interface (GUI) or configuration text file, or providing the appropriate command-line arguments. Using adjusted settings, four of those six toolkits were able to perform full de-identification. Only five tools could properly de-identify the defined DICOM elements, and in four cases, only after careful customization. Therefore, free DICOM toolkits should be used with extreme care to prevent the risk of disclosing PHI, especially when using the default configuration. In case optimal security is required, one of the five toolkits is proposed. • Free DICOM toolkits should be carefully used to prevent patient identity disclosure. • Each DICOM tool produces its own specific outcomes from the de-identification process. • In case optimal security is required, using one DICOM toolkit is proposed.
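    As a general illustration of header de-identification (pydicom is not one of the ten toolkits evaluated in the study, and a real de-identification profile covers far more elements than the few shown; the study checked fifty), a minimal sketch follows.

      # Minimal sketch with pydicom: blank a handful of obvious PHI elements before sharing.
      # Treat this as illustration only; it is not a complete de-identification profile.
      import pydicom

      PHI_KEYWORDS = [
          "PatientName", "PatientID", "PatientBirthDate",
          "PatientAddress", "ReferringPhysicianName", "InstitutionName",
      ]

      def blank_phi(path_in, path_out):
          ds = pydicom.dcmread(path_in)
          for keyword in PHI_KEYWORDS:
              if keyword in ds:                    # element present in this header
                  ds.data_element(keyword).value = ""
          ds.remove_private_tags()                 # private tags often carry PHI too
          ds.save_as(path_out)

      # blank_phi("study.dcm", "study_deid.dcm")   # hypothetical file names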

  4. Text-image alignment for historical handwritten documents

    NASA Astrophysics Data System (ADS)

    Zinger, S.; Nerbonne, J.; Schomaker, L.

    2009-01-01

    We describe our work on text-image alignment in context of building a historical document retrieval system. We aim at aligning images of words in handwritten lines with their text transcriptions. The images of handwritten lines are automatically segmented from the scanned pages of historical documents and then manually transcribed. To train automatic routines to detect words in an image of handwritten text, we need a training set - images of words with their transcriptions. We present our results on aligning words from the images of handwritten lines and their corresponding text transcriptions. Alignment based on the longest spaces between portions of handwriting is a baseline. We then show that relative lengths, i.e. proportions of words in their lines, can be used to improve the alignment results considerably. To take into account the relative word length, we define the expressions for the cost function that has to be minimized for aligning text words with their images. We apply right to left alignment as well as alignment based on exhaustive search. The quality assessment of these alignments shows correct results for 69% of words from 100 lines, or 90% of partially correct and correct alignments combined.
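    The relative-length idea can be sketched as follows (a toy illustration, not the paper's cost-function formulation; the word list, line width, and gap positions are invented).

      # Toy sketch of the relative-length idea: place word boundaries along a line image in
      # proportion to cumulative character counts, then snap each predicted boundary to the
      # nearest detected gap between handwriting strokes.
      def align_by_relative_length(words, line_width_px, gap_centers_px):
          total_chars = sum(len(w) for w in words)
          boundaries, cum = [], 0
          for w in words[:-1]:                         # one boundary between each word pair
              cum += len(w)
              predicted = line_width_px * cum / total_chars
              snapped = min(gap_centers_px, key=lambda g: abs(g - predicted))
              boundaries.append(snapped)
          return boundaries

      words = ["amsterdam", "den", "haag", "rotterdam"]
      gaps = [300, 400, 530, 650]                      # candidate inter-word gaps (pixels)
      print(align_by_relative_length(words, 800, gaps))   # [300, 400, 530]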

  5. WATERSHED HEALTH ASSESSMENT TOOLS-INVESTIGATING FISHERIES (WHAT-IF): A MODELING TOOLKIT FOR WATERSHED AND FISHERIES MANAGEMENT

    EPA Science Inventory

    The Watershed Health Assessment Tools-Investigating Fisheries (WHAT-IF) is a decision-analysis modeling toolkit for personal computers that supports watershed and fisheries management. The WHAT-IF toolkit includes a relational database, help-system functions and documentation, a...

  6. Designing and Delivering Intensive Interventions: A Teacher's Toolkit

    ERIC Educational Resources Information Center

    Murray, Christy S.; Coleman, Meghan A.; Vaughn, Sharon; Wanzek, Jeanne; Roberts, Greg

    2012-01-01

    This toolkit provides activities and resources to assist practitioners in designing and delivering intensive interventions in reading and mathematics for K-12 students with significant learning difficulties and disabilities. Grounded in research, this toolkit is based on the Center on Instruction's "Intensive Interventions for Students Struggling…

  7. The National Informal STEM Education Network

    Science.gov Websites

    The National Informal STEM Education Network site lists evaluation and research resources and downloadable activity kits, including the Explore Science: Earth & Space toolkit (2018 digital toolkit available for download) and the Building with Biology activity kit, which offers conversations and activities about synthetic biology.

  8. 78 FR 14773 - U.S. Environmental Solutions Toolkit-Landfill Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-07

    ... and foreign end-users of environmental technologies that will outline U.S. approaches to a series of environmental problems and highlight participating U.S. vendors of relevant U.S. technologies. The Toolkit will... DEPARTMENT OF COMMERCE International Trade Administration U.S. Environmental Solutions Toolkit...

  9. Arioc: high-throughput read alignment with GPU-accelerated exploration of the seed-and-extend search space

    PubMed Central

    Budavari, Tamas; Langmead, Ben; Wheelan, Sarah J.; Salzberg, Steven L.; Szalay, Alexander S.

    2015-01-01

    When computing alignments of DNA sequences to a large genome, a key element in achieving high processing throughput is to prioritize locations in the genome where high-scoring mappings might be expected. We formulated this task as a series of list-processing operations that can be efficiently performed on graphics processing unit (GPU) hardware. We followed this approach in implementing a read aligner called Arioc that uses GPU-based parallel sort and reduction techniques to identify high-priority locations where potential alignments may be found. We then carried out a read-by-read comparison of Arioc’s reported alignments with the alignments found by several leading read aligners. With simulated reads, Arioc has comparable or better accuracy than the other read aligners we tested. With human sequencing reads, Arioc demonstrates significantly greater throughput than the other aligners we evaluated across a wide range of sensitivity settings. The Arioc software is available at https://github.com/RWilton/Arioc. It is released under a BSD open-source license. PMID:25780763

  10. PhAST: pharmacophore alignment search tool.

    PubMed

    Hähnke, Volker; Hofmann, Bettina; Grgat, Tomislav; Proschak, Ewgenij; Steinhilber, Dieter; Schneider, Gisbert

    2009-04-15

    We present a ligand-based virtual screening technique (PhAST) for rapid hit and lead structure searching in large compound databases. Molecules are represented as strings encoding the distribution of pharmacophoric features on the molecular graph. In contrast to other text-based methods using SMILES strings, we introduce a new form of text representation that describes the pharmacophore of molecules. This string representation opens the opportunity for revealing functional similarity between molecules by sequence alignment techniques in analogy to homology searching in protein or nucleic acid sequence databases. We favorably compared PhAST with other current ligand-based virtual screening methods in a retrospective analysis using the BEDROC metric. In a prospective application, PhAST identified two novel inhibitors of 5-lipoxygenase product formation with minimal experimental effort. This outcome demonstrates the applicability of PhAST to drug discovery projects and provides an innovative concept of sequence-based compound screening with substantial scaffold hopping potential. 2008 Wiley Periodicals, Inc.
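    PhAST's actual feature alphabet and scoring scheme are not given in this record. The sketch below only illustrates the general idea of encoding per-atom pharmacophoric features as a string and comparing two such strings with a standard global sequence alignment; the feature labels, substitution scores, and gap penalty are assumptions.

    ```python
    """Illustrative sketch of sequence-based pharmacophore comparison (feature
    alphabet, scores, and gap penalty are assumptions; this is not PhAST itself)."""

    MATCH, MISMATCH, GAP = 2, -1, -2

    def global_alignment_score(a, b):
        """Plain Needleman-Wunsch score between two feature strings."""
        prev = [j * GAP for j in range(len(b) + 1)]
        for i in range(1, len(a) + 1):
            curr = [i * GAP]
            for j in range(1, len(b) + 1):
                diag = prev[j - 1] + (MATCH if a[i - 1] == b[j - 1] else MISMATCH)
                curr.append(max(diag, prev[j] + GAP, curr[j - 1] + GAP))
            prev = curr
        return prev[-1]

    if __name__ == "__main__":
        # Hypothetical per-atom feature strings: D=donor, A=acceptor, H=hydrophobic, R=aromatic
        query = "DHHRRAAHD"
        database = ["DHHRAAHD", "HHHHHHHH", "DARRHHAD"]
        ranked = sorted(database, key=lambda s: global_alignment_score(query, s), reverse=True)
        print(ranked)  # database strings ordered by functional similarity to the query
    ```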

  11. Reducing the number of templates for aligned-spin compact binary coalescence gravitational wave searches using metric-agnostic template nudging

    NASA Astrophysics Data System (ADS)

    Indik, Nathaniel; Fehrmann, Henning; Harke, Franz; Krishnan, Badri; Nielsen, Alex B.

    2018-06-01

    Efficient multidimensional template placement is crucial in computationally intensive matched-filtering searches for gravitational waves (GWs). Here, we implement the neighboring cell algorithm (NCA) to improve the detection volume of an existing compact binary coalescence (CBC) template bank. This algorithm has already been successfully applied for a binary millisecond pulsar search in data from the Fermi satellite. It repositions templates from overdense regions to underdense regions and reduces the number of templates that would have been required by a stochastic method to achieve the same detection volume. Our method is readily generalizable to other CBC parameter spaces. Here we apply this method to the aligned-single-spin neutron star-black hole binary coalescence inspiral-merger-ringdown gravitational wave parameter space. We show that the template nudging algorithm can attain the equivalent effectualness of the stochastic method with 12% fewer templates.

  12. Apodized Pupil Lyot Coronagraphs designs for future segmented space telescopes

    NASA Astrophysics Data System (ADS)

    St. Laurent, Kathryn; Fogarty, Kevin; Zimmerman, Neil; N’Diaye, Mamadou; Stark, Chris; Sivaramakrishnan, Anand; Pueyo, Laurent; Vanderbei, Robert; Soummer, Remi

    2018-01-01

    A coronagraphic starlight suppression system situated on a future flagship space observatory offers a promising avenue to image Earth-like exoplanets and search for biomarkers in their atmospheric spectra. One NASA mission concept that could serve as the platform to realize this scientific breakthrough is the Large UV/Optical/IR Surveyor (LUVOIR). Such a mission would also address a broad range of topics in astrophysics with a multi-wavelength suite of instruments. In support of the community’s assessment of the scientific capability of a LUVOIR mission, the Exoplanet Exploration Program (ExEP) has launched a multi-team technical study: Segmented Coronagraph Design and Analysis (SCDA). The goal of this study is to develop viable coronagraph instrument concepts for a LUVOIR-type mission. Results of the SCDA effort will directly inform the mission concept evaluation being carried out by the LUVOIR Science and Technology Definition Team. The apodized pupil Lyot coronagraph (APLC) is one of several coronagraph design families that the SCDA study is assessing. The APLC is a Lyot-style coronagraph that suppresses starlight through a series of amplitude operations on the on-axis field. Given a suite of seven plausible segmented telescope apertures, we have developed an object-oriented software toolkit to automate the exploration of thousands of APLC design parameter combinations. In the course of exploring this parameter space we have established relationships between APLC throughput and telescope aperture geometry, Lyot stop, inner working angle, bandwidth, and contrast level. In parallel with the parameter space exploration, we have investigated several strategies to improve the robustness of APLC designs to fabrication and alignment errors and integrated a Design Reference Mission framework to evaluate designs with scientific yield metrics.

  13. Southwest Research Institute astronomer Dan Durda checks the alignment of the SWUIS-A Xybion digital

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Southwest Research Institute astronomer Dan Durda checks the alignment of the SWUIS-A Xybion digital camera mounted in the rear cockpit of a NASA Dryden F/A-18B before taking off on an astronomy mission to search for small vulcanoids (asteroids) that may be orbiting between the sun and the planet Mercury.

  14. rVISTA 2.0: Evolutionary Analysis of Transcription Factor Binding Sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loots, G G; Ovcharenko, I

    2004-01-28

    Identifying and characterizing the patterns of DNA cis-regulatory modules represents a challenge that has the potential to reveal the regulatory language the genome uses to dictate transcriptional dynamics. Several studies have demonstrated that regulatory modules are under positive selection and therefore are often conserved between related species. Using this evolutionary principle we have created a comparative tool, rVISTA, for analyzing the regulatory potential of noncoding sequences. The rVISTA tool combines transcription factor binding site (TFBS) predictions, sequence comparisons and cluster analysis to identify noncoding DNA regions that are highly conserved and present in a specific configuration within an alignment. Here we present the newly developed version 2.0 of the rVISTA tool that can process alignments generated by both zPicture and PipMaker alignment programs or use pre-computed pairwise alignments of seven vertebrate genomes available from the ECR Browser. The rVISTA web server is closely interconnected with the TRANSFAC database, allowing users to either search for matrices present in the TRANSFAC library collection or search for user-defined consensus sequences. The rVISTA tool is publicly available at http://rvista.dcode.org/.

  15. SatelliteDL: a Toolkit for Analysis of Heterogeneous Satellite Datasets

    NASA Astrophysics Data System (ADS)

    Galloy, M. D.; Fillmore, D.

    2014-12-01

    SatelliteDL is an IDL toolkit for the analysis of satellite Earth observations from a diverse set of platforms and sensors. The core function of the toolkit is the spatial and temporal alignment of satellite swath and geostationary data. The design features an abstraction layer that allows for easy inclusion of new datasets in a modular way. Our overarching objective is to create utilities that automate the mundane aspects of satellite data analysis, are extensible and maintainable, and do not place limitations on the analysis itself. IDL has a powerful suite of statistical and visualization tools that can be used in conjunction with SatelliteDL. Toward this end we have constructed SatelliteDL to include (1) HTML and LaTeX API document generation, (2) a unit test framework, (3) automatic message and error logs, (4) HTML and LaTeX plot and table generation, and (5) several real-world examples with bundled datasets available for download. For ease of use, datasets, variables and optional workflows may be specified in a flexible format configuration file. Configuration statements may specify, for example, a region and date range, and the creation of images, plots and statistical summary tables for a long list of variables. SatelliteDL enforces data provenance; all data should be traceable and reproducible. The output NetCDF file metadata holds a complete history of the original datasets and their transformations, and a method exists to reconstruct a configuration file from this information. Release 0.1.0 ships with ingest methods for GOES, MODIS, VIIRS and CERES radiance data (L1) as well as select 2D atmosphere products (L2) such as aerosol and cloud (MODIS and VIIRS) and radiant flux (CERES). Future releases will provide ingest methods for ocean and land surface products, gridded and time averaged datasets (L3 Daily, Monthly and Yearly), and support for 3D products such as temperature and water vapor profiles. Emphasis will be on NPP Sensor, Environmental and Climate Data Records as they become available. To obtain SatelliteDL, please visit the project website at http://www.txcorp.com/SatelliteDL

  16. Fast 3D shape screening of large chemical databases through alignment-recycling

    PubMed Central

    Fontaine, Fabien; Bolton, Evan; Borodina, Yulia; Bryant, Stephen H

    2007-01-01

    Background Large chemical databases require fast, efficient, and simple ways of looking for similar structures. Although such tasks are now fairly well resolved for graph-based similarity queries, they remain an issue for 3D approaches, particularly for those based on 3D shape overlays. Inspired by a recent technique developed to compare molecular shapes, we designed a hybrid methodology, alignment-recycling, that enables efficient retrieval and alignment of structures with similar 3D shapes. Results Using a dataset of more than one million PubChem compounds of limited size (< 28 heavy atoms) and flexibility (< 6 rotatable bonds), we obtained a set of a few thousand diverse structures covering entirely the 3D shape space of the conformers of the dataset. Transformation matrices gathered from the overlays between these diverse structures and the 3D conformer dataset allowed us to drastically (100-fold) reduce the CPU time required for shape overlay. The alignment-recycling heuristic produces results consistent with de novo alignment calculation, with better than 80% hit list overlap on average. Conclusion Overlay-based 3D methods are computationally demanding when searching large databases. Alignment-recycling reduces the CPU time to perform shape similarity searches by breaking the alignment problem into three steps: selection of diverse shapes to describe the database shape-space; overlay of the database conformers to the diverse shapes; and non-optimized overlay of query and database conformers using common reference shapes. The precomputation, required by the first two steps, is a significant cost of the method; however, once performed, querying is two orders of magnitude faster. Extensions and variations of this methodology, for example, to handle more flexible and larger small-molecules are discussed. PMID:17880744
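    The key trick, reusing transformation matrices obtained against a small set of reference shapes, can be shown with plain homogeneous transforms. The matrices below are invented for illustration; the shape-overlay optimization that would actually produce them is not reproduced here.

    ```python
    """Sketch of 'alignment recycling': if the query and a database conformer have each
    been overlaid onto the same reference shape, their mutual overlay is obtained by
    composing the stored transforms instead of re-optimizing (matrices here are made up)."""
    import numpy as np

    def homogeneous(rotation, translation):
        t = np.eye(4)
        t[:3, :3] = rotation
        t[:3, 3] = translation
        return t

    def recycle_alignment(t_query_to_ref, t_db_to_ref):
        # Maps query coordinates directly into the database conformer's frame.
        return np.linalg.inv(t_db_to_ref) @ t_query_to_ref

    if __name__ == "__main__":
        rot90_z = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
        t_query_to_ref = homogeneous(rot90_z, [1.0, 0.0, 0.0])    # stored during precomputation
        t_db_to_ref = homogeneous(np.eye(3), [0.0, 2.0, 0.0])     # stored during precomputation
        t_query_to_db = recycle_alignment(t_query_to_ref, t_db_to_ref)
        point = np.array([0.5, 0.5, 0.0, 1.0])                    # a query atom (homogeneous coords)
        print(t_query_to_db @ point)
    ```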

  17. 78 FR 58520 - U.S. Environmental Solutions Toolkit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-24

    ... notice sets forth a request for input from U.S. businesses capable of exporting their goods or services... and foreign end-users of environmental technologies. The Toolkit outlines U.S. approaches to a series of environmental problems and highlights participating U.S. vendors of relevant U.S. technologies. The Toolkit will...

  18. Practitioner Data Use in Schools: Workshop Toolkit. REL 2015-043

    ERIC Educational Resources Information Center

    Bocala, Candice; Henry, Susan F.; Mundry, Susan; Morgan, Claire

    2014-01-01

    The "Practitioner Data Use in Schools: Workshop Toolkit" is designed to help practitioners systematically and accurately use data to inform their teaching practice. The toolkit includes an agenda, slide deck, participant workbook, and facilitator's guide and covers the following topics: developing data literacy, engaging in a cycle of…

  19. Language Access Toolkit: An Organizing and Advocacy Resource for Community-Based Youth Programs

    ERIC Educational Resources Information Center

    Beyersdorf, Mark Ro

    2013-01-01

    Asian American Legal Defense and Education Fund (AALDEF) developed this language access toolkit to share the expertise and experiences of National Asian American Education Advocates Network (NAAEA) member organizations with other community organizations interested in developing language access campaigns. This toolkit includes an overview of…

  20. FY17Q4 Ristra project: Release Version 1.0 of a production toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hungerford, Aimee L.; Daniel, David John

    2017-09-21

    The Next Generation Code project will release Version 1.0 of a production toolkit for multi-physics application development on advanced architectures. Features of this toolkit will include remap and link utilities, control and state manager, setup, visualization and I/O, as well as support for a variety of mesh and particle data representations. Numerical physics packages that operate atop this foundational toolkit will be employed in a multi-physics demonstration problem and released to the community along with results from the demonstration.

  1. Diagnosing turbulence for research aircraft safety using open source toolkits

    NASA Astrophysics Data System (ADS)

    Lang, T. J.; Guy, N.

    Open source software toolkits have been developed and applied to diagnose in-cloud turbulence in the vicinity of Earth science research aircraft, via analysis of ground-based Doppler radar data. Based on multiple retrospective analyses, these toolkits show promise for detecting significant turbulence well prior to cloud penetrations by research aircraft. A pilot study demonstrated the ability to provide mission scientists turbulence estimates in near real time during an actual field campaign, and thus these toolkits are recommended for usage in future cloud-penetrating aircraft field campaigns.
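    The specific toolkits and retrieval methods are not named in this record. The sketch below only illustrates one common ingredient of such diagnostics, flagging radar gates whose Doppler spectrum width exceeds a turbulence threshold within a standoff distance of the planned flight track; the threshold, distances, and coordinates are placeholder values, not those used in the campaigns described.

    ```python
    """Placeholder illustration of a turbulence flag from ground-based Doppler radar
    fields (not the toolkits' algorithm). Threshold and standoff distance are invented."""
    import numpy as np

    SPECTRUM_WIDTH_THRESHOLD = 4.0   # m/s, assumed turbulence proxy threshold
    STANDOFF_KM = 10.0               # assumed alert radius around the flight track

    def turbulence_alerts(gate_xy_km, spectrum_width, track_xy_km):
        gates = np.asarray(gate_xy_km, dtype=float)        # (n_gates, 2)
        widths = np.asarray(spectrum_width, dtype=float)   # (n_gates,)
        track = np.asarray(track_xy_km, dtype=float)       # (n_points, 2)
        # Distance from each gate to the nearest point on the flight track.
        d = np.sqrt(((gates[:, None, :] - track[None, :, :]) ** 2).sum(-1)).min(axis=1)
        mask = (widths >= SPECTRUM_WIDTH_THRESHOLD) & (d <= STANDOFF_KM)
        return [(tuple(g), float(w), float(dist)) for g, w, dist in zip(gates[mask], widths[mask], d[mask])]

    if __name__ == "__main__":
        gates = [(0, 0), (5, 5), (30, 2), (8, -3)]
        widths = [2.0, 5.5, 6.0, 4.2]
        track = [(0, 0), (10, 0), (20, 0)]
        print(turbulence_alerts(gates, widths, track))
    ```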

  2. Software reuse in spacecraft planning and scheduling systems

    NASA Technical Reports Server (NTRS)

    Mclean, David; Tuchman, Alan; Broseghini, Todd; Yen, Wen; Page, Brenda; Johnson, Jay; Bogovich, Lynn; Burkhardt, Chris; Mcintyre, James; Klein, Scott

    1993-01-01

    The use of a software toolkit and development methodology that supports software reuse is described. The toolkit includes source-code-level library modules and stand-alone tools which support such tasks as data reformatting and report generation, simple relational database applications, user interfaces, tactical planning, strategic planning and documentation. The current toolkit is written in C and supports applications that run on IBM PCs under DOS and UNIX-based workstations under OpenLook and Motif. The toolkit is fully integrated for building scheduling systems that reuse AI knowledge base technology. A typical scheduling scenario and three examples of applications that utilize the reuse toolkit will be briefly described. In addition to the tools themselves, a description of the software evolution and reuse methodology that was used is presented.

  3. Development of an evidence-informed leisure time physical activity resource for adults with spinal cord injury: the SCI Get Fit Toolkit.

    PubMed

    Arbour-Nicitopoulos, K P; Martin Ginis, K A; Latimer-Cheung, A E; Bourne, C; Campbell, D; Cappe, S; Ginis, S; Hicks, A L; Pomerleau, P; Smith, K

    2013-06-01

    To systematically develop an evidence-informed leisure time physical activity (LTPA) resource for adults with spinal cord injury (SCI). Canada. The Appraisal of Guidelines, Research and Evaluation (AGREE) II protocol was used to develop a toolkit to teach and encourage adults with SCI how to make smart and informed choices about being physically active. A multidisciplinary expert panel appraised the evidence and generated specific recommendations for the content of the toolkit. Pilot testing was conducted to refine the toolkit's presentation. Recommendations emanating from the consultation process were that the toolkit be a brief, evidence-based resource that contains images of adults with tetraplegia and paraplegia, and links to more detailed online information. The content of the toolkit should include the physical activity guidelines (PAGs) for adults with SCI, activities tailored to manual and power chair users, the benefits of LTPA, and strategies to overcome common LTPA barriers for adults with SCI. The inclusion of action plans and safety tips was also recommended. These recommendations have resulted in the development of an evidence-informed LTPA resource to assist adults with SCI in meeting the PAGs. This toolkit will have important implications for consumers, health care professionals and policy makers for encouraging LTPA in the SCI community.

  4. A framework for measurement and harmonization of pediatric multiple sclerosis etiologic research studies: The Pediatric MS Tool-Kit.

    PubMed

    Magalhaes, Sandra; Banwell, Brenda; Bar-Or, Amit; Fortier, Isabel; Hanwell, Heather E; Lim, Ming; Matt, Georg E; Neuteboom, Rinze F; O'Riordan, David L; Schneider, Paul K; Pugliatti, Maura; Shatenstein, Bryna; Tansey, Catherine M; Wassmer, Evangeline; Wolfson, Christina

    2018-06-01

    While studying the etiology of multiple sclerosis (MS) in children has several methodological advantages over studying etiology in adults, studies are limited by small sample sizes. Using a rigorous methodological process, we developed the Pediatric MS Tool-Kit, a measurement framework that includes a minimal set of core variables to assess etiological risk factors. We solicited input from the International Pediatric MS Study Group to select three risk factors: environmental tobacco smoke (ETS) exposure, sun exposure, and vitamin D intake. To develop the Tool-Kit, we used a Delphi study involving a working group of epidemiologists, neurologists, and content experts from North America and Europe. The Tool-Kit includes six core variables to measure ETS, six to measure sun exposure, and six to measure vitamin D intake. The Tool-Kit can be accessed online (www.maelstrom-research.org/mica/network/tool-kit). The goals of the Tool-Kit are to enhance exposure measurement in newly designed pediatric MS studies and comparability of results across studies, and in the longer term to facilitate harmonization of studies, a methodological approach that can be used to circumvent issues of small sample sizes. We believe the Tool-Kit will prove to be a valuable resource to guide pediatric MS researchers in developing study-specific questionnaires.

  5. Implementation of the Good School Toolkit in Uganda: a quantitative process evaluation of a successful violence prevention program.

    PubMed

    Knight, Louise; Allen, Elizabeth; Mirembe, Angel; Nakuti, Janet; Namy, Sophie; Child, Jennifer C; Sturgess, Joanna; Kyegombe, Nambusi; Walakira, Eddy J; Elbourne, Diana; Naker, Dipak; Devries, Karen M

    2018-05-09

    The Good School Toolkit, a complex behavioural intervention designed by Raising Voices, a Ugandan NGO, reduced past week physical violence from school staff to primary students by an average of 42% in a recent randomised controlled trial. This process evaluation quantitatively examines what was implemented across the twenty-one intervention schools, variations in school prevalence of violence after the intervention, factors that influence exposure to the intervention and factors associated with students' experience of physical violence from staff at study endline. Implementation measures were captured prospectively in the twenty-one intervention schools over four school terms from 2012 to 2014, and Toolkit exposure was captured in the student (n = 1921) and staff (n = 286) endline cross-sectional surveys in 2014. Implementation measures and the prevalence of violence are summarised across schools and are assessed for correlation using Spearman's Rank Correlation Coefficient. Regression models are used to explore individual factors associated with Toolkit exposure and with physical violence at endline. School prevalence of past week physical violence from staff against students ranged from 7% to 65% across schools at endline. Schools with higher mean levels of teacher Toolkit exposure had larger decreases in violence during the study. Students in schools categorised as implementing a 'low' number of program school-led activities reported less exposure to the Toolkit. Higher student Toolkit exposure was associated with decreased odds of experiencing physical violence from staff (OR: 0.76, 95% CI: 0.67-0.86, p-value < 0.001). Girls, students reporting poorer mental health and students in a lower grade were less exposed to the Toolkit. After the intervention, and when adjusting for individual Toolkit exposure, some students remained at increased risk of experiencing violence from staff, including girls, students reporting poorer mental health, students who experienced other violence and those reporting difficulty with self-care. Our results suggest that increasing students' and teachers' exposure to the Good School Toolkit within schools has the potential to bring about further reductions in violence. Effectiveness of the Toolkit may be increased by further targeting and supporting teachers' engagement with girls and students with mental health difficulties. The trial is registered at clinicaltrials.gov, NCT01678846, August 24th 2012.

  6. Development of an embedded instrument for autofocus and polarization alignment of polarization maintaining fiber

    NASA Astrophysics Data System (ADS)

    Feng, Di; Fang, Qimeng; Huang, Huaibo; Zhao, Zhengqi; Song, Ningfang

    2017-12-01

    The development and implementation of a practical instrument, based on an embedded technique, for autofocus and polarization alignment of polarization-maintaining fiber is presented. For focusing efficiency and stability, autofocus is accomplished with an image-based algorithm that combines an image-definition (sharpness) evaluation with a focusing search strategy. To improve alignment accuracy, several image-based alignment-detection algorithms were developed with high calculation speed and strong robustness. The instrument can be operated as a standalone device with real-time processing and convenient operation. The hardware construction, software interface, and image-based algorithms of the main modules are described. Additionally, several image simulation experiments were carried out to analyze the accuracy of the alignment-detection algorithms. Both the simulation and experimental results indicate that the instrument achieves a polarization alignment accuracy of better than ±0.1 deg.
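    The abstract does not specify the sharpness measure or the search strategy used by the instrument. The sketch below assumes a variance-of-Laplacian sharpness score and a simple coarse-to-fine scan over focus positions, purely to illustrate how such an image-based autofocus loop can be structured; the toy camera model stands in for real hardware.

    ```python
    """Illustrative autofocus loop (not the instrument's algorithm): score each focus
    position with a variance-of-Laplacian sharpness measure, then refine around the
    best coarse position. `capture` is a hypothetical camera/stage interface."""
    import numpy as np

    def sharpness(image):
        # Variance of a discrete Laplacian; sharper images have more high-frequency content.
        lap = (-4 * image[1:-1, 1:-1] + image[:-2, 1:-1] + image[2:, 1:-1]
               + image[1:-1, :-2] + image[1:-1, 2:])
        return float(lap.var())

    def autofocus(capture, z_min, z_max, coarse_steps=11, fine_span=2.0, fine_steps=21):
        # Coarse scan of the focus travel range, then a finer scan around the best position.
        coarse = np.linspace(z_min, z_max, coarse_steps)
        z_best = max(coarse, key=lambda z: sharpness(capture(z)))
        fine = np.linspace(z_best - fine_span, z_best + fine_span, fine_steps)
        return max(fine, key=lambda z: sharpness(capture(z)))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        scene = rng.random((64, 64))
        def capture(z, true_focus=12.5):
            # Toy camera model: the image gets progressively blurrier away from true_focus.
            w = abs(z - true_focus)
            blurred = (np.roll(scene, 1, 0) + np.roll(scene, -1, 0)
                       + np.roll(scene, 1, 1) + np.roll(scene, -1, 1)) / 4.0
            return (scene + w * blurred) / (1.0 + w)
        print(round(autofocus(capture, 0.0, 25.0), 2))   # converges near 12.5
    ```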

  7. A Systolic Array-Based FPGA Parallel Architecture for the BLAST Algorithm

    PubMed Central

    Guo, Xinyu; Wang, Hong; Devabhaktuni, Vijay

    2012-01-01

    A design for a systolic array-based field-programmable gate array (FPGA) parallel architecture for the Basic Local Alignment Search Tool (BLAST) algorithm is proposed. BLAST is a heuristic biological sequence alignment algorithm widely used by bioinformatics experts. In contrast to other designs that detect at most one hit per clock cycle, our design applies a Multiple Hits Detection Module, a pipelined systolic array that searches for multiple hits in a single clock cycle. Further, we designed a Hits Combination Block that combines overlapping hits from the systolic array into one hit. These implementations complete the first and second steps of the BLAST architecture and achieve significant speedup compared with previously published architectures. PMID:25969747
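    The FPGA systolic array itself is not modelled here, but the two stages it accelerates, word-hit detection and combination of overlapping hits, can be sketched in software. The word size W and the toy sequences are assumptions for illustration only.

    ```python
    """Software analogue of the hit detection and hit combination stages described above
    (the hardware pipeline is not modelled; W is an assumed word size)."""
    W = 3

    def find_hits(query, subject):
        """Return (query_pos, subject_pos) pairs where a length-W word matches (BLAST step 1)."""
        words = {}
        for i in range(len(query) - W + 1):
            words.setdefault(query[i:i + W], []).append(i)
        hits = []
        for j in range(len(subject) - W + 1):
            for i in words.get(subject[j:j + W], ()):
                hits.append((i, j))
        return hits

    def combine_overlapping(hits):
        """Merge hits on the same diagonal whose words overlap, mirroring the Hits Combination Block."""
        by_diag = {}
        for i, j in sorted(hits, key=lambda h: (h[1] - h[0], h[0])):
            d = j - i
            runs = by_diag.setdefault(d, [])
            if runs and i <= runs[-1][1]:              # overlaps the previous run on this diagonal
                runs[-1] = (runs[-1][0], max(runs[-1][1], i + W - 1))
            else:
                runs.append((i, i + W - 1))
        return by_diag  # diagonal -> list of merged (query_start, query_end) runs

    if __name__ == "__main__":
        print(combine_overlapping(find_hits("ACGTTGCA", "TTACGTTGCATT")))
    ```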

  8. Ligandbook: an online repository for small and drug-like molecule force field parameters.

    PubMed

    Domanski, Jan; Beckstein, Oliver; Iorga, Bogdan I

    2017-06-01

    Ligandbook is a public database and archive for force field parameters of small and drug-like molecules. It is a repository for parameter sets that are part of published work but are not easily available to the community otherwise. Parameter sets can be downloaded and immediately used in molecular dynamics simulations. The sets of parameters are versioned with full histories and carry unique identifiers to facilitate reproducible research. Text-based search on rich metadata and chemical substructure search allow precise identification of desired compounds or functional groups. Ligandbook enables the rapid set up of reproducible molecular dynamics simulations of ligands and protein-ligand complexes. Ligandbook is available online at https://ligandbook.org and supports all modern browsers. Parameters can be searched and downloaded without registration, including access through a programmatic RESTful API. Deposition of files requires free user registration. Ligandbook is implemented in the PHP Symfony2 framework with TCL scripts using the CACTVS toolkit. oliver.beckstein@asu.edu or bogdan.iorga@cnrs.fr ; contact@ligandbook.org . Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.

  9. HSA: a heuristic splice alignment tool.

    PubMed

    Bu, Jingde; Chi, Xuebin; Jin, Zhong

    2013-01-01

    RNA-Seq is a revolutionary transcriptome sequencing technology and a representative application of next-generation sequencing (NGS). With the high-throughput sequencing of RNA-Seq, much more information, such as differential expression and novel splice variants, can be acquired through deep sequence analysis and data mining. However, the short read length poses a great challenge to alignment, especially when reads span two or more exons. A two-step heuristic splice alignment tool was developed in this investigation. First, raw reads are mapped to the reference with an unspliced aligner, BWA; second, reads that remain unmapped are split into three equal short reads (seeds), each seed is aligned to the reference, hits are filtered, the possible split position of the read is searched, and hits are extended to a complete match. Compared with other splice alignment tools such as SOAPsplice and TopHat2, HSA performs better in call rate and efficiency, although its results are somewhat less accurate. HSA is an effective spliced aligner for RNA-Seq read mapping and is available at https://github.com/vlcc/HSA.
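    The second, spliced step can be illustrated with a toy example. The sketch below is not HSA's code: it cuts an unmapped read into three equal seeds, places the outer seeds on the reference by exact substring search, and infers the intron length from the shift between their diagonals. Exact-match seeding and the tiny sequences are simplifications; HSA additionally filters hits and extends them to a complete match to pin down the split position.

    ```python
    """Toy illustration of split-seed splice detection (not HSA's implementation)."""

    def all_occurrences(text, pattern):
        pos, out = text.find(pattern), []
        while pos != -1:
            out.append(pos)
            pos = text.find(pattern, pos + 1)
        return out

    def place_seeds(read, ref):
        third = len(read) // 3
        seeds = [read[:third], read[third:2 * third], read[2 * third:]]
        offsets = [0, third, 2 * third]
        # For each seed, record reference positions shifted back by the seed's offset in the read.
        return [{p - off for p in all_occurrences(ref, s)} for s, off in zip(seeds, offsets)]

    def infer_splice(read, ref):
        starts = place_seeds(read, ref)
        # With a splice junction inside the read, the left and right seeds land on two
        # different diagonals; the diagonal shift equals the intron length.
        for a in sorted(starts[0]):
            for b in sorted(starts[2]):
                if b > a:
                    return {"left_exon_read_start": a, "intron_length": b - a}
        return None

    if __name__ == "__main__":
        exon1, intron, exon2 = "ACGTACCGGTAA", "GTAAGTTTTTTTTTTTTTTAG", "CCTTGGAACGTT"
        ref = exon1 + intron + exon2
        read = exon1[-9:] + exon2[:9]     # an 18-nt read spanning the junction
        print(infer_splice(read, ref))    # intron_length matches len(intron) == 21
    ```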

  10. A Probabilistic Model of Local Sequence Alignment That Simplifies Statistical Significance Estimation

    PubMed Central

    Eddy, Sean R.

    2008-01-01

    Sequence database searches require accurate estimation of the statistical significance of scores. Optimal local sequence alignment scores follow Gumbel distributions, but determining an important parameter of the distribution (λ) requires time-consuming computational simulation. Moreover, optimal alignment scores are less powerful than probabilistic scores that integrate over alignment uncertainty (“Forward” scores), but the expected distribution of Forward scores remains unknown. Here, I conjecture that both expected score distributions have simple, predictable forms when full probabilistic modeling methods are used. For a probabilistic model of local sequence alignment, optimal alignment bit scores (“Viterbi” scores) are Gumbel-distributed with constant λ = log 2, and the high scoring tail of Forward scores is exponential with the same constant λ. Simulation studies support these conjectures over a wide range of profile/sequence comparisons, using 9,318 profile-hidden Markov models from the Pfam database. This enables efficient and accurate determination of expectation values (E-values) for both Viterbi and Forward scores for probabilistic local alignments. PMID:18516236
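    The practical payoff of the conjecture is that E-values can be computed without simulation. The sketch below assumes a Gumbel distribution with fixed slope λ = log 2 for Viterbi bit scores, as stated above; the location parameter μ and the number of database comparisons are placeholder inputs supplied by the caller.

    ```python
    """Illustrative E-value computation under the conjecture above: optimal (Viterbi) bit
    scores follow a Gumbel distribution with constant lambda = log 2, so no simulation is
    needed. mu and the database size are placeholder inputs."""
    import math

    LAMBDA = math.log(2.0)  # fixed slope for probabilistic local-alignment bit scores

    def gumbel_pvalue(bit_score, mu):
        # P(S >= x) for a Gumbel (EVD) with slope LAMBDA and location mu.
        # Use expm1 for numerical accuracy when the probability is tiny.
        return -math.expm1(-math.exp(-LAMBDA * (bit_score - mu)))

    def evalue(bit_score, mu, n_comparisons):
        # Expected number of hits at or above this score over n_comparisons sequences.
        return n_comparisons * gumbel_pvalue(bit_score, mu)

    if __name__ == "__main__":
        mu = -3.0            # placeholder location parameter for one profile
        for score in (10.0, 20.0, 30.0):
            print(score, evalue(score, mu, n_comparisons=20_000))
    ```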

  11. DNA sequence alignment by microhomology sampling during homologous recombination

    PubMed Central

    Qi, Zhi; Redding, Sy; Lee, Ja Yil; Gibb, Bryan; Kwon, YoungHo; Niu, Hengyao; Gaines, William A.; Sung, Patrick

    2015-01-01

    Summary Homologous recombination (HR) mediates the exchange of genetic information between sister or homologous chromatids. During HR, members of the RecA/Rad51 family of recombinases must somehow search through vast quantities of DNA sequence to align and pair ssDNA with a homologous dsDNA template. Here we use single-molecule imaging to visualize Rad51 as it aligns and pairs homologous DNA sequences in real-time. We show that Rad51 uses a length-based recognition mechanism while interrogating dsDNA, enabling robust kinetic selection of 8-nucleotide (nt) tracts of microhomology, which kinetically confines the search to sites with a high probability of being a homologous target. Successful pairing with a 9th nucleotide coincides with an additional reduction in binding free energy and subsequent strand exchange occurs in precise 3-nt steps, reflecting the base triplet organization of the presynaptic complex. These findings provide crucial new insights into the physical and evolutionary underpinnings of DNA recombination. PMID:25684365

  12. ExoLocator--an online view into genetic makeup of vertebrate proteins.

    PubMed

    Khoo, Aik Aun; Ogrizek-Tomas, Mario; Bulovic, Ana; Korpar, Matija; Gürler, Ece; Slijepcevic, Ivan; Šikic, Mile; Mihalek, Ivana

    2014-01-01

    ExoLocator (http://exolocator.eopsf.org) collects in a single place information needed for comparative analysis of protein-coding exons from vertebrate species. The main source of data--the genomic sequences, and the existing exon and homology annotation--is the ENSEMBL database of completed vertebrate genomes. To these, ExoLocator adds the search for ostensibly missing exons in orthologous protein pairs across species, using an extensive computational pipeline to narrow down the search region for the candidate exons and find a suitable template in the other species, as well as state-of-the-art implementations of pairwise alignment algorithms. The resulting complements of exons are organized in a way currently unique to ExoLocator: multiple sequence alignments, both on the nucleotide and on the peptide levels, clearly indicating the exon boundaries. The alignments can be inspected in the web-embedded viewer, downloaded or used on the spot to produce an estimate of conservation within orthologous sets, or functional divergence across paralogues.

  13. Assessing the effectiveness of the Pesticides and Farmworker Health Toolkit: a curriculum for enhancing farmworkers' understanding of pesticide safety concepts.

    PubMed

    LePrevost, Catherine E; Storm, Julia F; Asuaje, Cesar R; Arellano, Consuelo; Cope, W Gregory

    2014-01-01

    Among agricultural workers, migrant and seasonal farmworkers have been recognized as a special risk population because these laborers encounter cultural challenges and linguistic barriers while attempting to maintain their safety and health within their working environments. The crop-specific Pesticides and Farmworker Health Toolkit (Toolkit) is a pesticide safety and health curriculum designed to communicate to farmworkers pesticide hazards commonly found in their working environments and to address Worker Protection Standard (WPS) pesticide training criteria for agricultural workers. The goal of this preliminary study was to test evaluation items for measuring knowledge increases among farmworkers and to assess the effectiveness of the Toolkit in improving farmworkers' knowledge of key WPS and risk communication concepts when the Toolkit lesson was delivered by trained trainers in the field. After receiving training on the curriculum, four participating trainers provided lessons using the Toolkit as part of their regular training responsibilities and orally administered a pre- and post-lesson evaluation instrument to 20 farmworker volunteers who were generally representative of the national farmworker population. Farmworker knowledge of pesticide safety messages significantly (P<.05) increased after participation in the lesson. Further, items with visual alternatives were found to be most useful in discriminating between more and less knowledgeable farmworkers. The pilot study suggests that the Pesticides and Farmworker Health Toolkit is an effective, research-based pesticide safety and health intervention for the at-risk farmworker population and identifies a testing format appropriate for evaluating the Toolkit and other similar interventions for farmworkers in the field.

  14. Educator Toolkits on Second Victim Syndrome, Mindfulness and Meditation, and Positive Psychology: The 2017 Resident Wellness Consensus Summit

    PubMed Central

    Smart, Jon; Zdradzinski, Michael; Roth, Sarah; Gende, Alecia; Conroy, Kylie; Battaglioli, Nicole

    2018-01-01

    Introduction Burnout, depression, and suicidality among residents of all specialties have become a critical focus of attention for the medical education community. Methods As part of the 2017 Resident Wellness Consensus Summit in Las Vegas, Nevada, resident participants from 31 programs collaborated in the Educator Toolkit workgroup. Over a seven-month period leading up to the summit, this workgroup convened virtually in the Wellness Think Tank, an online resident community, to perform a literature review and draft curricular plans on three core wellness topics. These topics were second victim syndrome, mindfulness and meditation, and positive psychology. At the live summit event, the workgroup expanded to include residents outside the Wellness Think Tank to obtain a broader consensus of the evidence-based toolkits for these three topics. Results Three educator toolkits were developed. The second victim syndrome toolkit has four modules, each with a pre-reading material and a leader (educator) guide. In the mindfulness and meditation toolkit, there are three modules with a leader guide in addition to a longitudinal, guided meditation plan. The positive psychology toolkit has two modules, each with a leader guide and a PowerPoint slide set. These toolkits provide educators the necessary resources, reading materials, and lesson plans to implement didactic sessions in their residency curriculum. Conclusion Residents from across the world collaborated and convened to reach a consensus on high-yield—and potentially high-impact—lesson plans that programs can use to promote and improve resident wellness. These lesson plans may stand alone or be incorporated into a larger wellness curriculum. PMID:29560061

  15. Educator Toolkits on Second Victim Syndrome, Mindfulness and Meditation, and Positive Psychology: The 2017 Resident Wellness Consensus Summit.

    PubMed

    Chung, Arlene S; Smart, Jon; Zdradzinski, Michael; Roth, Sarah; Gende, Alecia; Conroy, Kylie; Battaglioli, Nicole

    2018-03-01

    Burnout, depression, and suicidality among residents of all specialties have become a critical focus of attention for the medical education community. As part of the 2017 Resident Wellness Consensus Summit in Las Vegas, Nevada, resident participants from 31 programs collaborated in the Educator Toolkit workgroup. Over a seven-month period leading up to the summit, this workgroup convened virtually in the Wellness Think Tank, an online resident community, to perform a literature review and draft curricular plans on three core wellness topics. These topics were second victim syndrome, mindfulness and meditation, and positive psychology. At the live summit event, the workgroup expanded to include residents outside the Wellness Think Tank to obtain a broader consensus of the evidence-based toolkits for these three topics. Three educator toolkits were developed. The second victim syndrome toolkit has four modules, each with a pre-reading material and a leader (educator) guide. In the mindfulness and meditation toolkit, there are three modules with a leader guide in addition to a longitudinal, guided meditation plan. The positive psychology toolkit has two modules, each with a leader guide and a PowerPoint slide set. These toolkits provide educators the necessary resources, reading materials, and lesson plans to implement didactic sessions in their residency curriculum. Residents from across the world collaborated and convened to reach a consensus on high-yield-and potentially high-impact-lesson plans that programs can use to promote and improve resident wellness. These lesson plans may stand alone or be incorporated into a larger wellness curriculum.

  16. SFESA: a web server for pairwise alignment refinement by secondary structure shifts.

    PubMed

    Tong, Jing; Pei, Jimin; Grishin, Nick V

    2015-09-03

    Protein sequence alignment is essential for a variety of tasks such as homology modeling and active site prediction. Alignment errors remain the main cause of low-quality structure models. A bioinformatics tool to refine alignments is needed to make protein alignments more accurate. We developed the SFESA web server to refine pairwise protein sequence alignments. Compared to the previous version of SFESA, which required a set of 3D coordinates for a protein, the new server will search a sequence database for the closest homolog with an available 3D structure to be used as a template. For each alignment block defined by secondary structure elements in the template, SFESA evaluates alignment variants generated by local shifts and selects the best-scoring alignment variant. A scoring function that combines the sequence score of profile-profile comparison and the structure score of template-derived contact energy is used for evaluation of alignments. PROMALS pairwise alignments refined by SFESA are more accurate than those produced by current advanced alignment methods such as HHpred and CNFpred. In addition, SFESA also improves alignments generated by other software. SFESA is a web-based tool for alignment refinement, designed for researchers to compute, refine, and evaluate pairwise alignments with a combined sequence and structure scoring of alignment blocks. To our knowledge, the SFESA web server is the only tool that refines alignments by evaluating local shifts of secondary structure elements. The SFESA web server is available at http://prodata.swmed.edu/sfesa.
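    SFESA's actual scoring combines profile-profile sequence scores with template-derived contact energies, which are not reproduced here. The sketch below keeps only the structural skeleton of the idea: enumerate small shifts of one alignment block against the fixed template block and keep the best-scoring variant, with a stand-in residue scoring function.

    ```python
    """Skeleton of block-shift refinement (the scoring function is a stand-in, not
    SFESA's combined sequence/structure score)."""

    def residue_score(a, b):
        return 2 if a == b else -1   # placeholder substitution score

    def score_block(query_block, template_block):
        return sum(residue_score(q, t) for q, t in zip(query_block, template_block)
                   if q != "-" and t != "-")

    def refine_block(query, template, start, end, max_shift=2):
        """Slide the query block by -max_shift..+max_shift columns against the fixed
        template block and return the shift with the best score."""
        best_shift, best_score = 0, score_block(query[start:end], template[start:end])
        for s in range(-max_shift, max_shift + 1):
            if s == 0 or start + s < 0 or end + s > len(query):
                continue
            trial = score_block(query[start + s:end + s], template[start:end])
            if trial > best_score:
                best_shift, best_score = s, trial
        return best_shift, best_score

    if __name__ == "__main__":
        template = "HEAGAWGHEE"       # block defined by a secondary structure element (toy data)
        query = "PAHEAGAWGH"          # same motif, misaligned by two columns
        print(refine_block(query, template, start=0, end=8, max_shift=3))  # -> (2, 16)
    ```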

  17. The RAPID Toolkit: Facilitating Utility-Scale Renewable Energy Development

    Science.gov Websites

    The RAPID Toolkit, developed by the National Renewable Energy Laboratory, provides information about federal, state, and local permitting and regulations for utility-scale renewable energy and bulk transmission projects.

  18. Quality Assurance Toolkit for Distance Higher Education Institutions and Programmes

    ERIC Educational Resources Information Center

    Rama, Kondapalli, Ed.; Hope, Andrea, Ed.

    2009-01-01

    The Commonwealth of Learning is proud to partner with the Sri Lankan Ministry of Higher Education and UNESCO to produce this "Quality Assurance Toolkit for Distance Higher Education Institutions and Programmes". The Toolkit has been prepared with three features. First, it is a generic document on quality assurance, complete with a…

  19. 75 FR 35038 - Agency Information Collection Activities; Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-21

    ... components of the RED, as well as the needs of culturally and linguistically diverse patients; (2) To pre-test the revised RED Toolkit in ten varied hospital settings, evaluating how the RED Toolkit is... intensity of technical assistance (TA). (3) To modify the revised RED Toolkit based on pre-testing and to...

  20. Transition Toolkit 3.0: Meeting the Educational Needs of Youth Exposed to the Juvenile Justice System. Third Edition

    ERIC Educational Resources Information Center

    Clark, Heather Griller; Mathur, Sarup; Brock, Leslie; O'Cummings, Mindee; Milligan, DeAngela

    2016-01-01

    The third edition of the National Technical Assistance Center for the Education of Neglected or Delinquent Children and Youth's (NDTAC's) "Transition Toolkit" provides updated information on existing policies, practices, strategies, and resources for transition that build on field experience and research. The "Toolkit" offers…

  1. Problem posing and cultural tailoring: developing an HIV/AIDS health literacy toolkit with the African American community.

    PubMed

    Rikard, R V; Thompson, Maxine S; Head, Rachel; McNeil, Carlotta; White, Caressa

    2012-09-01

    The rate of HIV infection among African Americans is disproportionately higher than for other racial groups in the United States. Previous research suggests that low level of health literacy (HL) is an underlying factor to explain racial disparities in the prevalence and incidence of HIV/AIDS. The present research describes a community and university project to develop a culturally tailored HIV/AIDS HL toolkit in the African American community. Paulo Freire's pedagogical philosophy and problem-posing methodology served as the guiding framework throughout the development process. Developing the HIV/AIDS HL toolkit occurred in a two-stage process. In Stage 1, a nonprofit organization and research team established a collaborative partnership to develop a culturally tailored HIV/AIDS HL toolkit. In Stage 2, African American community members participated in focus groups conducted as Freirian cultural circles to further refine the HIV/AIDS HL toolkit. In both stages, problem posing engaged participants' knowledge, experiences, and concerns to evaluate a working draft toolkit. The discussion and implications highlight how Freire's pedagogical philosophy and methodology enhances the development of culturally tailored health information.

  2. Aligning Metabolic Pathways Exploiting Binary Relation of Reactions.

    PubMed

    Huang, Yiran; Zhong, Cheng; Lin, Hai Xiang; Huang, Jing

    2016-01-01

    Metabolic pathway alignment has been widely used to find one-to-one and/or one-to-many reaction mappings to identify the alternative pathways that have similar functions through different sets of reactions, which has important applications in reconstructing phylogeny and understanding metabolic functions. The existing alignment methods exhaustively search reaction sets, which may become infeasible for large pathways. To address this problem, we present an effective alignment method for accurately extracting reaction mappings between two metabolic pathways. We show that connected relation between reactions can be formalized as binary relation of reactions in metabolic pathways, and the multiplications of zero-one matrices for binary relations of reactions can be accomplished in finite steps. By utilizing the multiplications of zero-one matrices for binary relation of reactions, we efficiently obtain reaction sets in a small number of steps without exhaustive search, and accurately uncover biologically relevant reaction mappings. Furthermore, we introduce a measure of topological similarity of nodes (reactions) by comparing the structural similarity of the k-neighborhood subgraphs of the nodes in aligning metabolic pathways. We employ this similarity metric to improve the accuracy of the alignments. The experimental results on the KEGG database show that when compared with other state-of-the-art methods, in most cases, our method obtains better performance in the node correctness and edge correctness, and the number of the edges of the largest common connected subgraph for one-to-one reaction mappings, and the number of correct one-to-many reaction mappings. Our method is scalable in finding more reaction mappings with better biological relevance in large metabolic pathways.
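    A minimal sketch of the zero-one matrix idea follows; the pathway data, similarity scoring, and mapping extraction used by the method are not reproduced. The binary relation "a product of reaction i is a substrate of reaction j" is stored as a boolean adjacency matrix, and repeated boolean multiplication yields the pairs of reactions connected within k steps without any exhaustive search over reaction sets.

    ```python
    """Minimal sketch of the zero-one matrix formulation of the binary relation between
    reactions. The tiny example pathway is invented for illustration."""
    import numpy as np

    def adjacency(reactions):
        """reactions: list of (substrates, products) sets; M[i, j] = True if i feeds j."""
        n = len(reactions)
        m = np.zeros((n, n), dtype=bool)
        for i, (_, prod_i) in enumerate(reactions):
            for j, (sub_j, _) in enumerate(reactions):
                if i != j and prod_i & sub_j:
                    m[i, j] = True
        return m

    def connected_within(m, k):
        """Boolean matrix power sum: reaction pairs linked by a path of at most k steps."""
        reach, power = m.copy(), m.copy()
        for _ in range(k - 1):
            power = (power.astype(int) @ m.astype(int)) > 0   # one boolean multiplication
            reach |= power
        return reach

    if __name__ == "__main__":
        # Toy 4-reaction chain R0 -> R1 -> R2 -> R3 linked by shared metabolites a, b, c.
        rxns = [({"x"}, {"a"}), ({"a"}, {"b"}), ({"b"}, {"c"}), ({"c"}, {"y"})]
        print(connected_within(adjacency(rxns), k=2).astype(int))
    ```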

  3. Inside Man

    ERIC Educational Resources Information Center

    Riofrio, Richard

    2008-01-01

    The author was on the academic job market in English eight years in a row. The first four times, he applied all over the place, searching for his first tenure-track job. The next four times, he applied selectively, searching for a position more closely aligned with his academic and personal interests. Although each year on the market was…

  4. A flexible motif search technique based on generalized profiles.

    PubMed

    Bucher, P; Karplus, K; Moeri, N; Hofmann, K

    1996-03-01

    A flexible motif search technique is presented which has two major components: (1) a generalized profile syntax serving as a motif definition language; and (2) a motif search method specifically adapted to the problem of finding multiple instances of a motif in the same sequence. The new profile structure, which is the core of the generalized profile syntax, combines the functions of a variety of motif descriptors implemented in other methods, including regular expression-like patterns, weight matrices, previously used profiles, and certain types of hidden Markov models (HMMs). The relationship between generalized profiles and other biomolecular motif descriptors is analyzed in detail, with special attention to HMMs. Generalized profiles are shown to be equivalent to a particular class of HMMs, and conversion procedures in both directions are given. The conversion procedures provide an interpretation for local alignment in the framework of stochastic models, allowing for clear, simple significance tests. A mathematical statement of the motif search problem defines the new method exactly without linking it to a specific algorithmic solution. Part of the definition includes a new definition of disjointness of alignments.
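    Weight matrices are only one of the descriptor types that generalized profiles subsume, but they show the flavor of the search problem: score every window of a sequence and report all matches above a cutoff, i.e., multiple motif instances in the same sequence. The matrix values, background frequencies, and cutoff below are invented for illustration and are not part of the generalized profile syntax itself.

    ```python
    """Sketch of multi-instance motif search with a log-odds weight matrix
    (values and cutoff are invented; generalized profiles are strictly more expressive)."""
    import math

    BACKGROUND = {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}
    # Position-specific probabilities for a toy 4-column motif (consensus ACGT).
    MOTIF = [
        {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
        {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
        {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
        {"A": 0.1, "C": 0.1, "G": 0.1, "T": 0.7},
    ]
    WEIGHTS = [{b: math.log2(col[b] / BACKGROUND[b]) for b in "ACGT"} for col in MOTIF]

    def find_motif_instances(seq, cutoff=3.0):
        hits = []
        for i in range(len(seq) - len(WEIGHTS) + 1):
            score = sum(w[base] for w, base in zip(WEIGHTS, seq[i:i + len(WEIGHTS)]))
            if score >= cutoff:                     # every qualifying window is reported
                hits.append((i, round(score, 2)))
        return hits

    if __name__ == "__main__":
        print(find_motif_instances("TTACGTGGACGTAA"))   # two instances of the motif
    ```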

  5. Deep conservation of cis-regulatory elements in metazoans

    PubMed Central

    Maeso, Ignacio; Irimia, Manuel; Tena, Juan J.; Casares, Fernando; Gómez-Skarmeta, José Luis

    2013-01-01

    Despite the vast morphological variation observed across phyla, animals share multiple basic developmental processes orchestrated by a common ancestral gene toolkit. These genes interact with each other building complex gene regulatory networks (GRNs), which are encoded in the genome by cis-regulatory elements (CREs) that serve as computational units of the network. Although GRN subcircuits involved in ancient developmental processes are expected to be at least partially conserved, identification of CREs that are conserved across phyla has remained elusive. Here, we review recent studies that revealed such deeply conserved CREs do exist, discuss the difficulties associated with their identification and describe new approaches that will facilitate this search. PMID:24218633

  6. MITK-OpenIGTLink for combining open-source toolkits in real-time computer-assisted interventions.

    PubMed

    Klemm, Martin; Kirchner, Thomas; Gröhl, Janek; Cheray, Dominique; Nolden, Marco; Seitel, Alexander; Hoppe, Harald; Maier-Hein, Lena; Franz, Alfred M

    2017-03-01

    Due to rapid developments in the research areas of medical imaging, medical image processing and robotics, computer-assisted interventions (CAI) are becoming an integral part of modern patient care. From a software engineering point of view, these systems are highly complex and research can benefit greatly from reusing software components. This is supported by a number of open-source toolkits for medical imaging and CAI such as the medical imaging interaction toolkit (MITK), the public software library for ultrasound imaging research (PLUS) and 3D Slicer. An independent inter-toolkit communication such as the open image-guided therapy link (OpenIGTLink) can be used to combine the advantages of these toolkits and enable an easier realization of a clinical CAI workflow. MITK-OpenIGTLink is presented as a network interface within MITK that allows easy to use, asynchronous two-way messaging between MITK and clinical devices or other toolkits. Performance and interoperability tests with MITK-OpenIGTLink were carried out considering the whole CAI workflow from data acquisition over processing to visualization. We present how MITK-OpenIGTLink can be applied in different usage scenarios. In performance tests, tracking data were transmitted with a frame rate of up to 1000 Hz and a latency of 2.81 ms. Transmission of images with typical ultrasound (US) and greyscale high-definition (HD) resolutions of [Formula: see text] and [Formula: see text] is possible at up to 512 and 128 Hz, respectively. With the integration of OpenIGTLink into MITK, this protocol is now supported by all established open-source toolkits in the field. This eases interoperability between MITK and toolkits such as PLUS or 3D Slicer and facilitates cross-toolkit research collaborations. MITK and its submodule MITK-OpenIGTLink are provided open source under a BSD-style licence ( http://mitk.org ).

  7. Alarm rationalization: practical experience rationalizing alarm configuration for an accelerator subsystem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kasemir, Kay; Hartman, Steven M

    2009-01-01

    A new alarm system toolkit has been implemented at SNS. The toolkit handles the Central Control Room (CCR) 'annunciator', or audio alarms. For the new alarm system to be effective, the alarms must be meaningful and properly configured. Along with the implementation of the new alarm toolkit, a thorough documentation and rationalization of the alarm configuration is taking place. Requirements and maintenance of a robust alarm configuration have been gathered from system and operations experts. In this paper we present our practical experience with the vacuum system alarm handling configuration of the alarm toolkit.

  8. Demonstration of the Health Literacy Universal Precautions Toolkit

    PubMed Central

    Mabachi, Natabhona M.; Cifuentes, Maribel; Barnard, Juliana; Brega, Angela G.; Albright, Karen; Weiss, Barry D.; Brach, Cindy; West, David

    2016-01-01

    The Agency for Healthcare Research and Quality Health Literacy Universal Precautions Toolkit was developed to help primary care practices assess and make changes to improve communication with and support for patients. Twelve diverse primary care practices implemented assigned tools over a 6-month period. Qualitative results revealed challenges practices experienced during implementation, including competing demands, bureaucratic hurdles, technological challenges, limited quality improvement experience, and limited leadership support. Practices used the Toolkit flexibly and recognized the efficiencies of implementing tools in tandem and in coordination with other quality improvement initiatives. Practices recommended reducing Toolkit density and making specific refinements. PMID:27232681

  9. Demonstration of the Health Literacy Universal Precautions Toolkit: Lessons for Quality Improvement.

    PubMed

    Mabachi, Natabhona M; Cifuentes, Maribel; Barnard, Juliana; Brega, Angela G; Albright, Karen; Weiss, Barry D; Brach, Cindy; West, David

    2016-01-01

    The Agency for Healthcare Research and Quality Health Literacy Universal Precautions Toolkit was developed to help primary care practices assess and make changes to improve communication with and support for patients. Twelve diverse primary care practices implemented assigned tools over a 6-month period. Qualitative results revealed challenges practices experienced during implementation, including competing demands, bureaucratic hurdles, technological challenges, limited quality improvement experience, and limited leadership support. Practices used the Toolkit flexibly and recognized the efficiencies of implementing tools in tandem and in coordination with other quality improvement initiatives. Practices recommended reducing Toolkit density and making specific refinements.

  10. The nursing human resource planning best practice toolkit: creating a best practice resource for nursing managers.

    PubMed

    Vincent, Leslie; Beduz, Mary Agnes

    2010-05-01

    Evidence of acute nursing shortages in urban hospitals has been surfacing since 2000. Further, new graduate nurses account for more than 50% of total nurse turnover in some hospitals and between 35% and 60% of new graduates change workplace during the first year. Critical to organizational success, first line nurse managers must have the knowledge and skills to ensure the accurate projection of nursing resource requirements and to develop proactive recruitment and retention programs that are effective, promote positive nursing socialization, and provide early exposure to the clinical setting. The Nursing Human Resource Planning Best Practice Toolkit project supported the creation of a network of teaching and community hospitals to develop a best practice toolkit in nursing human resource planning targeted at first line nursing managers. The toolkit includes the development of a framework including the conceptual building blocks of planning tools, manager interventions, retention and recruitment and professional practice models. The development of the toolkit involved conducting a review of the literature for best practices in nursing human resource planning, using a mixed method approach to data collection including a survey and extensive interviews of managers and completing a comprehensive scan of human resource practices in the participating organizations. This paper will provide an overview of the process used to develop the toolkit, a description of the toolkit contents and a reflection on the outcomes of the project.

  11. A proposal for a spiritual care assessment toolkit for religious volunteers and volunteer service users.

    PubMed

    Liu, Yi-Jung

    2014-10-01

    Based on the idea that volunteer services in healthcare settings should focus on the service users' best interests and on providing holistic care for the body, mind, and spirit, the aim of this study was to propose an assessment toolkit for assessing the effectiveness of religious volunteers and improving their service. By analyzing and categorizing the results of previous studies, we incorporated effective care goals and methods in the proposed religious and spiritual care assessment toolkit. Two versions of the toolkit were created. The service users' version comprises 10 questions grouped into the following five dimensions: "physical care," "psychological and emotional support," "social relationships," "religious and spiritual care," and "hope restoration." Each question can be answered with "yes" or "no". The volunteers' version contains 14 specific care goals and 31 care methods, in addition to the 10 care dimensions in the residents' version. A small sample of 25 experts was asked to judge the usefulness of each of the toolkit items for evaluating volunteers' effectiveness. Although some experts questioned volunteers' capacity, the main purpose of developing this assessment toolkit is to improve the capacity and effectiveness of the spiritual care that volunteers provide. The toolkit developed in this study may not be applicable to other countries, and it only addresses patients' general spiritual needs. Volunteers should receive special training in caring for people with special needs.

  12. Integrating hidden Markov model and PRAAT: a toolbox for robust automatic speech transcription

    NASA Astrophysics Data System (ADS)

    Kabir, A.; Barker, J.; Giurgiu, M.

    2010-09-01

    An automatic time-aligned phone transcription toolbox for English speech corpora has been developed. The toolbox is particularly useful for generating robust automatic transcriptions and can produce phone-level transcriptions using speaker-independent as well as speaker-dependent models without manual intervention. The system is based on the standard hidden Markov model (HMM) approach and was successfully tested on a large audiovisual speech corpus, the GRID corpus. One of the most powerful features of the toolbox is its flexibility in speech processing: the speech community can import the automatic transcription generated by the HMM Toolkit (HTK) into popular transcription software, PRAAT, and vice versa. The toolbox has been evaluated through statistical analysis on GRID data, which shows that the automatic transcription deviates from manual transcription by an average of 20 ms.
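    The HTK-to-PRAAT hand-off can be sketched as a small format conversion. The code below assumes HTK-style label lines of the form "start end phone" with times in 100-ns units and writes a single-tier Praat TextGrid in the long text format as commonly documented; it is a hedged sketch, not the toolbox's converter, and the output layout should be checked against Praat's documentation before relying on it.

    ```python
    """Hedged sketch of an HTK label -> Praat TextGrid conversion (not the toolbox's code)."""

    def parse_htk_labels(lines):
        # e.g. "0 2400000 sil" -> (0.0, 0.24, "sil"); HTK times are in 100-ns units.
        out = []
        for line in lines:
            start, end, phone = line.split()[:3]
            out.append((int(start) / 1e7, int(end) / 1e7, phone))
        return out

    def to_textgrid(intervals, tier_name="phones"):
        xmin, xmax = intervals[0][0], intervals[-1][1]
        lines = [
            'File type = "ooTextFile"', 'Object class = "TextGrid"', "",
            f"xmin = {xmin}", f"xmax = {xmax}", "tiers? <exists>", "size = 1", "item []:",
            "    item [1]:", '        class = "IntervalTier"', f'        name = "{tier_name}"',
            f"        xmin = {xmin}", f"        xmax = {xmax}",
            f"        intervals: size = {len(intervals)}",
        ]
        for k, (a, b, text) in enumerate(intervals, 1):
            lines += [f"        intervals [{k}]:", f"            xmin = {a}",
                      f"            xmax = {b}", f'            text = "{text}"']
        return "\n".join(lines) + "\n"

    if __name__ == "__main__":
        labels = ["0 2400000 sil", "2400000 3300000 b", "3300000 5200000 ih", "5200000 7000000 n"]
        print(to_textgrid(parse_htk_labels(labels)))
    ```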

  13. Highlighting 2004 award-winning initiatives.

    PubMed

    2005-02-01

    This issue takes a closer look at how five award-winning healthcare organizations are finding--and continually refining--innovative ways to provide high-quality healthcare. One of those organizations is Robert Wood Johnson University Hospital Hamilton, which recently was named the fourth healthcare winner of the annual Malcolm Baldrige National Quality Award. The Joint Commission on Accreditation of Healthcare Organizations recently selected two facilities in the hospital category--Stamford Hospital and Staten Island University Hospital--as recipients of the eighth annual Codman Award for their work in using outcomes measurement to promote quality care. The Reading Hospital and Medical Center received a Cheers Award from the Institute for Safe Medication Practices for its toolkit promoting patient safety. Sentara Healthcare System, top winner of the American Hospital Association's Quest for Quality Award, has been cited for its efforts to align its quality and safety goals with its organizational goals.

  14. Improving student critical thinking skills through a root cause analysis pilot project.

    PubMed

    Tschannen, Dana; Aebersold, Michelle

    2010-08-01

    The Essentials of Baccalaureate Education for Professional Nursing Practice provides a framework for building the baccalaureate education for the twenty-first century. One of the exemplars included in the essentials toolkit is student participation in an actual root cause analysis (RCA) or failure mode and effects analysis. To align with this exemplar, faculty at the University of Michigan School of Nursing developed a pilot RCA project for the senior-level Leadership and Management course. While working collaboratively with faculty and unit liaisons at the University Health System, students completed an RCA on a nursing-sensitive indicator (pain assessment or plan of care compliance). An overview of the pilot project, including the implementation process, is described. Each team of students identified root causes and recommendations for improvement in clinical and documentation practice within the context of the unit. Feedback from both the unit liaisons and the students confirmed the pilot's success.

  15. A Data Audit and Analysis Toolkit To Support Assessment of the First College Year.

    ERIC Educational Resources Information Center

    Paulson, Karen

    This "toolkit" provides a process by which institutions can identify and use information resources to enhance the experiences and outcomes of first-year students. The toolkit contains a "Technical Manual" designed for use by the technical personnel who will be conducting the data audit and associated analyses. Administrators who want more…

  16. Dynamic programming algorithms for biological sequence comparison.

    PubMed

    Pearson, W R; Miller, W

    1992-01-01

    Efficient dynamic programming algorithms are available for a broad class of protein and DNA sequence comparison problems. These algorithms require computer time proportional to the product of the lengths of the two sequences being compared [O(N^2)] but require memory space proportional only to the sum of these lengths [O(N)]. Although the requirement for O(N^2) time limits use of the algorithms to the largest computers when searching protein and DNA sequence databases, many other applications of these algorithms, such as calculation of distances for evolutionary trees and comparison of a new sequence to a library of sequence profiles, are well within the capabilities of desktop computers. In particular, the results of library searches with rapid searching programs, such as FASTA or BLAST, should be confirmed by performing a rigorous optimal alignment. Whereas rapid methods do not overlook significant sequence similarities, FASTA limits the number of gaps that can be inserted into an alignment, so that a rigorous alignment may extend the alignment substantially in some cases. BLAST does not allow gaps in the local regions that it reports; a calculation that allows gaps is very likely to extend the alignment substantially. Although a Monte Carlo evaluation of the statistical significance of a similarity score with a rigorous algorithm is much slower than the heuristic approach used by the RDF2 program, the dynamic programming approach should take less than 1 hr on a 386-based PC or desktop Unix workstation. For descriptive purposes, we have limited our discussion to methods for calculating similarity scores and distances that use gap penalties of the form g = rk. Nevertheless, programs for the more general case (g = q+rk) are readily available. Versions of these programs that run either on Unix workstations, IBM-PC class computers, or the Macintosh can be obtained from either of the authors.
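
    To make the time/space trade-off concrete, the sketch below computes an optimal global alignment score with a linear gap penalty of the form g = rk while keeping only two rows of the dynamic programming matrix, i.e. quadratic time but memory proportional to a single sequence length. It is a minimal illustration rather than the authors' programs; the scoring parameters are arbitrary, and recovering the alignment itself (not just the score) in linear space additionally requires Hirschberg's divide-and-conquer refinement.

```python
# Minimal sketch (not the authors' programs): optimal global alignment score
# with a linear gap penalty g = r*k, using O(N*M) time but only two DP rows.
def global_score(a, b, match=5, mismatch=-4, r=-10):
    prev = [j * r for j in range(len(b) + 1)]        # row for the empty prefix of a
    for i, ca in enumerate(a, start=1):
        curr = [i * r] + [0] * len(b)
        for j, cb in enumerate(b, start=1):
            diag = prev[j - 1] + (match if ca == cb else mismatch)
            curr[j] = max(diag, prev[j] + r, curr[j - 1] + r)
        prev = curr
    return prev[-1]

print(global_score("GATTACA", "GATTTACA"))   # 7 matches and one gap -> 25
```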

  17. ETE: a python Environment for Tree Exploration.

    PubMed

    Huerta-Cepas, Jaime; Dopazo, Joaquín; Gabaldón, Toni

    2010-01-13

    Many bioinformatics analyses, ranging from gene clustering to phylogenetics, produce hierarchical trees as their main result. These are used to represent the relationships among different biological entities, thus facilitating their analysis and interpretation. A number of standalone programs are available that focus on tree visualization or that perform specific analyses on them. However, such applications are rarely suitable for large-scale surveys, in which a higher level of automation is required. Currently, many genome-wide analyses rely on tree-like data representation and hence there is a growing need for scalable tools to handle tree structures at large scale. Here we present the Environment for Tree Exploration (ETE), a python programming toolkit that assists in the automated manipulation, analysis and visualization of hierarchical trees. ETE libraries provide a broad set of tree handling options as well as specific methods to analyze phylogenetic and clustering trees. Among other features, ETE allows for the independent analysis of tree partitions, has support for the extended newick format, provides an integrated node annotation system and permits to link trees to external data such as multiple sequence alignments or numerical arrays. In addition, ETE implements a number of built-in analytical tools, including phylogeny-based orthology prediction and cluster validation techniques. Finally, ETE's programmable tree drawing engine can be used to automate the graphical rendering of trees with customized node-specific visualizations. ETE provides a complete set of methods to manipulate tree data structures that extends current functionality in other bioinformatic toolkits of a more general purpose. ETE is free software and can be downloaded from http://ete.cgenomics.org.
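
    A flavor of the programmatic tree handling described above can be given in a few lines. The sketch below uses the later ete3 release of the toolkit rather than the version distributed at http://ete.cgenomics.org; the newick string and the node feature are illustrative only.

```python
# Minimal sketch using the ete3 package (a later release of the ETE toolkit):
# load a newick tree, annotate leaves, traverse, and extract a partition.
from ete3 import Tree

t = Tree("((A:0.5,B:0.4)Internal:0.1,C:0.9);", format=1)  # format=1 keeps internal names

# Attach an arbitrary feature to each leaf.
for leaf in t.get_leaves():
    leaf.add_feature("sampled", leaf.name != "C")

# Post-order traversal over all nodes.
for node in t.traverse("postorder"):
    print(node.name, node.dist)

# Independent analysis of a tree partition: grab the subtree below a named node.
subtree = t.search_nodes(name="Internal")[0]
print(subtree.get_ascii())
```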

  18. ETE: a python Environment for Tree Exploration

    PubMed Central

    2010-01-01

    Background Many bioinformatics analyses, ranging from gene clustering to phylogenetics, produce hierarchical trees as their main result. These are used to represent the relationships among different biological entities, thus facilitating their analysis and interpretation. A number of standalone programs are available that focus on tree visualization or that perform specific analyses on them. However, such applications are rarely suitable for large-scale surveys, in which a higher level of automation is required. Currently, many genome-wide analyses rely on tree-like data representation and hence there is a growing need for scalable tools to handle tree structures at large scale. Results Here we present the Environment for Tree Exploration (ETE), a python programming toolkit that assists in the automated manipulation, analysis and visualization of hierarchical trees. ETE libraries provide a broad set of tree handling options as well as specific methods to analyze phylogenetic and clustering trees. Among other features, ETE allows for the independent analysis of tree partitions, has support for the extended newick format, provides an integrated node annotation system and permits to link trees to external data such as multiple sequence alignments or numerical arrays. In addition, ETE implements a number of built-in analytical tools, including phylogeny-based orthology prediction and cluster validation techniques. Finally, ETE's programmable tree drawing engine can be used to automate the graphical rendering of trees with customized node-specific visualizations. Conclusions ETE provides a complete set of methods to manipulate tree data structures that extends current functionality in other bioinformatic toolkits of a more general purpose. ETE is free software and can be downloaded from http://ete.cgenomics.org. PMID:20070885

  19. STELLAR: fast and exact local alignments

    PubMed Central

    2011-01-01

    Background Large-scale comparison of genomic sequences requires reliable tools for the search of local alignments. Practical local aligners are in general fast, but heuristic, and hence sometimes miss significant matches. Results We present here the local pairwise aligner STELLAR that has full sensitivity for ε-alignments, i.e. guarantees to report all local alignments of a given minimal length and maximal error rate. The aligner is composed of two steps, filtering and verification. We apply the SWIFT algorithm for lossless filtering, and have developed a new verification strategy that we prove to be exact. Our results on simulated and real genomic data confirm and quantify the conjecture that heuristic tools like BLAST or BLAT miss a large percentage of significant local alignments. Conclusions STELLAR is very practical and fast on very long sequences which makes it a suitable new tool for finding local alignments between genomic sequences under the edit distance model. Binaries are freely available for Linux, Windows, and Mac OS X at http://www.seqan.de/projects/stellar. The source code is freely distributed with the SeqAn C++ library version 1.3 and later at http://www.seqan.de. PMID:22151882
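
    As a small illustration of the ε-alignment guarantee described above (not STELLAR's implementation), the check below states the acceptance criterion for a candidate local alignment; the minimal length and error-rate thresholds are placeholders.

```python
def is_epsilon_match(alignment_length, errors, min_length=100, eps=0.05):
    """Accept a local alignment only if it reaches the minimal length and its
    error rate (edit operations per alignment column) is at most eps; STELLAR
    guarantees that every alignment meeting such a criterion is reported."""
    return alignment_length >= min_length and errors <= eps * alignment_length

print(is_epsilon_match(250, 10))   # True:  4% errors over 250 columns
print(is_epsilon_match(250, 20))   # False: 8% errors exceeds eps = 0.05
```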

  20. Development of an Integrated Human Factors Toolkit

    NASA Technical Reports Server (NTRS)

    Resnick, Marc L.

    2003-01-01

    An effective integration of human abilities and limitations is crucial to the success of all NASA missions. The Integrated Human Factors Toolkit facilitates this integration by assisting system designers and analysts to select the human factors tools that are most appropriate for the needs of each project. The HF Toolkit contains information about a broad variety of human factors tools addressing human requirements in the physical, information processing and human reliability domains. Analysis of each tool includes consideration of the most appropriate design stage, the amount of expertise in human factors that is required, the amount of experience with the tool and the target job tasks that are needed, and other factors that are critical for successful use of the tool. The benefits of the Toolkit include improved safety, reliability and effectiveness of NASA systems throughout the agency. This report outlines the initial stages of development for the Integrated Human Factors Toolkit.

  1. Field tests of a participatory ergonomics toolkit for Total Worker Health

    PubMed Central

    Kernan, Laura; Plaku-Alakbarova, Bora; Robertson, Michelle; Warren, Nicholas; Henning, Robert

    2018-01-01

    Growing interest in Total Worker Health® (TWH) programs to advance worker safety, health and well-being motivated development of a toolkit to guide their implementation. Iterative design of a program toolkit occurred in which participatory ergonomics (PE) served as the primary basis to plan integrated TWH interventions in four diverse organizations. The toolkit provided start-up guides for committee formation and training, and a structured PE process for generating integrated TWH interventions. Process data from program facilitators and participants throughout program implementation were used for iterative toolkit design. Program success depended on organizational commitment to regular design team meetings with a trained facilitator, the availability of subject matter experts on ergonomics and health to support the design process, and retraining whenever committee turnover occurred. A two committee structure (employee Design Team, management Steering Committee) provided advantages over a single, multilevel committee structure, and enhanced the planning, communication, and team-work skills of participants. PMID:28166897

  2. Fall TIPS: strategies to promote adoption and use of a fall prevention toolkit.

    PubMed

    Dykes, Patricia C; Carroll, Diane L; Hurley, Ann; Gersh-Zaremski, Ronna; Kennedy, Ann; Kurowski, Jan; Tierney, Kim; Benoit, Angela; Chang, Frank; Lipsitz, Stuart; Pang, Justine; Tsurkova, Ruslana; Zuyov, Lyubov; Middleton, Blackford

    2009-11-14

    Patient falls are serious problems in hospitals. Risk factors for falls are well understood and nurses routinely assess for fall risk on all hospitalized patients. However, the link from nursing assessment of fall risk, to identification and communication of tailored interventions to prevent falls is yet to be established. The Fall TIPS (Tailoring Interventions for Patient Safety) Toolkit was developed to leverage existing practices and workflows and to employ information technology to improve fall prevention practices. The purpose of this paper is to describe the Fall TIPS Toolkit and to report on strategies used to drive adoption of the Toolkit in four acute care hospitals. Using the IHI "Framework for Spread" as a conceptual model, the research team describes the "spread" of the Fall TIPS Toolkit as means to integrate effective fall prevention practices into the workflow of interdisciplinary caregivers, patients and family members.

  3. Planning changes to health library services on the basis of impact assessment.

    PubMed

    Urquhart, Christine; Thomas, Rhian; Ovens, Jason; Lucking, Wendy; Villa, Jane

    2010-12-01

    Various methods of impact assessment for health library services exist, including a toolkit developed for the UK. The Knowledge, Resource and Information service (KRIS) for health promotion, health service commissioning and public health (Bristol area, UK) commissioned an independent team at Aberystwyth University to provide an impact assessment and evaluation of their services and to provide evidence for future planning. The review aimed to provide an action plan for KRIS through assessing the impact of the current service, extent of satisfaction with existing services and views on desirable improvements. Existing impact toolkit guidance was used, with an adapted impact questionnaire, which was distributed by the KRIS staff to 244 users (response rate 62.3%) in early 2009. The independent team analysed the questionnaire data and presented the findings. Users valued the service (93% considered that relevant information was obtained). The most frequent impacts on work were advice to patients, clients or carers, and advice to colleagues. Literature searching and current awareness services saved staff time. Many users were seeking health promotion materials. The adapted questionnaire worked well in demonstrating the service impacts achieved by KRIS, as well as indicating desirable improvements in service delivery. © 2010 The authors. Health Information and Libraries Journal © 2010 Health Libraries Group.

  4. Building Emergency Contraception Awareness among Adolescents. A Toolkit for Schools and Community-Based Organizations.

    ERIC Educational Resources Information Center

    Simkin, Linda; Radosh, Alice; Nelsesteun, Kari; Silverstein, Stacy

    This toolkit presents emergency contraception (EC) as a method to help adolescent women avoid pregnancy and abortion after unprotected sexual intercourse. The sections of this toolkit are designed to help increase your knowledge of EC and stay up to date. They provide suggestions for increasing EC awareness in the workplace, whether it is a school…

  5. The Customer Flow Toolkit: A Framework for Designing High Quality Customer Services.

    ERIC Educational Resources Information Center

    New York Association of Training and Employment Professionals, Albany.

    This document presents a toolkit to assist staff involved in the design and development of New York's one-stop system. Section 1 describes the preplanning issues to be addressed and the intended outcomes that serve as the framework for creation of the customer flow toolkit. Section 2 outlines the following strategies to assist in designing local…

  6. HIV Prevention in Schools: A Tool Kit for Education Leaders.

    ERIC Educational Resources Information Center

    Office of the Surgeon General (DHHS/PHS), Washington, DC.

    This packet of materials is Phase 1 of a toolkit designed to enlighten education leaders about the need for HIV prevention for youth, especially in communities of color. One element of the toolkit is a VHS videotape that features a brief message from former Surgeon General, Dr. David Satcher. The toolkit also includes a copy of a letter sent to…

  7. Toolkit of Resources for Engaging Parents and Community as Partners in Education. Part I: Building an Understanding of Family and Community Engagement

    ERIC Educational Resources Information Center

    Regional Educational Laboratory Pacific, 2014

    2014-01-01

    This toolkit is designed to guide school staff in strengthening partnerships with families and community members to support student learning. This toolkit offers an integrated approach to family and community engagement, bringing together research, promising practices, and a wide range of useful tools and resources with explanations and directions…

  8. Toolkit for a Workshop on Building a Culture of Data Use. REL 2015-063

    ERIC Educational Resources Information Center

    Gerzon, Nancy; Guckenburg, Sarah

    2015-01-01

    The Culture of Data Use Workshop Toolkit helps school and district teams apply research to practice as they establish and support a culture of data use in their educational setting. The field-tested workshop toolkit guides teams through a set of structured activities to develop an understanding of data-use research in schools and to analyze…

  9. Implementing Project Based Survey Research Skills to Grade Six ELP Students with "The Survey Toolkit" and "TinkerPlots"[R

    ERIC Educational Resources Information Center

    Walsh, Thomas, Jr.

    2011-01-01

    "Survey Toolkit Collecting Information, Analyzing Data and Writing Reports" (Walsh, 2009a) is discussed as a survey research curriculum used by the author's sixth grade students. The report describes the implementation of "The Survey Toolkit" curriculum and "TinkerPlots"[R] software to provide instruction to students learning a project based…

  10. Overview and Meteorological Validation of the Wind Integration National Dataset toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draxl, C.; Hodge, B. M.; Clifton, A.

    2015-04-13

    The Wind Integration National Dataset (WIND) Toolkit described in this report fulfills these requirements, and constitutes a state-of-the-art national wind resource data set covering the contiguous United States from 2007 to 2013 for use in a variety of next-generation wind integration analyses and wind power planning. The toolkit is a wind resource data set, wind forecast data set, and wind power production and forecast data set derived from the Weather Research and Forecasting (WRF) numerical weather prediction model. WIND Toolkit data are available online for over 116,000 land-based and 10,000 offshore sites representing existing and potential wind facilities.

  11. An open source toolkit for medical imaging de-identification.

    PubMed

    González, David Rodríguez; Carpenter, Trevor; van Hemert, Jano I; Wardlaw, Joanna

    2010-08-01

    Medical imaging acquired for clinical purposes can have several legitimate secondary uses in research projects and teaching libraries. No commonly accepted solution for anonymising these images exists because the amount of personal data that should be preserved varies case by case. Our objective is to provide a flexible mechanism for anonymising Digital Imaging and Communications in Medicine (DICOM) data that meets the requirements for deployment in multicentre trials. We reviewed our current de-identification practices and defined the relevant use cases to extract the requirements for the de-identification process. We then used these requirements in the design and implementation of the toolkit. Finally, we tested the toolkit taking as a reference those requirements, including a multicentre deployment. The toolkit successfully anonymised DICOM data from various sources. Furthermore, it was shown that it could forward anonymous data to remote destinations, remove burned-in annotations, and add tracking information to the header. The toolkit also implements the DICOM standard confidentiality mechanism. A DICOM de-identification toolkit that facilitates the enforcement of privacy policies was developed. It is highly extensible, provides the necessary flexibility to account for different de-identification requirements and has a low adoption barrier for new users.
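
    The de-identification steps listed above (blanking identifying header elements, adding tracking information, dropping private tags) can be sketched with the pydicom package. This is an illustrative fragment rather than the toolkit itself: the tag list is far from exhaustive, and removal of burned-in annotations in the pixel data is not shown.

```python
# Illustrative DICOM de-identification sketch using pydicom (not the toolkit
# described above); the tag list and study code are placeholders.
import pydicom

IDENTIFYING_TAGS = ["PatientName", "PatientID", "PatientBirthDate",
                    "PatientAddress", "ReferringPhysicianName"]

def anonymise(in_path, out_path, study_code="STUDY-000"):
    ds = pydicom.dcmread(in_path)
    for tag in IDENTIFYING_TAGS:
        if tag in ds:
            ds.data_element(tag).value = ""   # blank identifying header elements
    ds.PatientID = study_code                 # add tracking information to the header
    ds.remove_private_tags()                  # drop vendor-specific private elements
    ds.save_as(out_path)

anonymise("input.dcm", "anonymised.dcm")
```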

  12. The Reconstruction Toolkit (RTK), an open-source cone-beam CT reconstruction toolkit based on the Insight Toolkit (ITK)

    NASA Astrophysics Data System (ADS)

    Rit, S.; Vila Oliva, M.; Brousmiche, S.; Labarbe, R.; Sarrut, D.; Sharp, G. C.

    2014-03-01

    We propose the Reconstruction Toolkit (RTK, http://www.openrtk.org), an open-source toolkit for fast cone-beam CT reconstruction, based on the Insight Toolkit (ITK) and using GPU code extracted from Plastimatch. RTK is developed by an open consortium (see affiliations) under the non-contaminating Apache 2.0 license. The quality of the platform is daily checked with regression tests in partnership with Kitware, the company supporting ITK. Several features are already available: Elekta, Varian and IBA inputs, multi-threaded Feldkamp-David-Kress reconstruction on CPU and GPU, Parker short scan weighting, multi-threaded CPU and GPU forward projectors, etc. Each feature is either accessible through command line tools or C++ classes that can be included in independent software. A MIDAS community has been opened to share CatPhan datasets of several vendors (Elekta, Varian and IBA). RTK will be used in the upcoming cone-beam CT scanner developed by IBA for proton therapy rooms. Many features are under development: new input format support, iterative reconstruction, hybrid Monte Carlo / deterministic CBCT simulation, etc. RTK has been built to freely share tomographic reconstruction developments between researchers and is open for new contributions.

  13. Development of an Online Toolkit for Measuring Performance in Health Emergency Response Exercises.

    PubMed

    Agboola, Foluso; Bernard, Dorothy; Savoia, Elena; Biddinger, Paul D

    2015-10-01

    Exercises that simulate emergency scenarios are accepted widely as an essential component of a robust Emergency Preparedness program. Unfortunately, the variability in the quality of the exercises conducted, and the lack of standardized processes to measure performance, has limited the value of exercises in measuring preparedness. In order to help health organizations improve the quality and standardization of the performance data they collect during simulated emergencies, a model online exercise evaluation toolkit was developed using performance measures tested in over 60 Emergency Preparedness exercises. The exercise evaluation toolkit contains three major components: (1) a database of measures that can be used to assess performance during an emergency response exercise; (2) a standardized data collection tool (form); and (3) a program that populates the data collection tool with the measures that have been selected by the user from the database. The evaluation toolkit was pilot tested from January through September 2014 in collaboration with 14 partnering organizations representing 10 public health agencies and four health care agencies from eight states across the US. Exercise planners from the partnering organizations were asked to use the toolkit for their exercise evaluation process and were interviewed to provide feedback on the use of the toolkit, the generated evaluation tool, and the usefulness of the data being gathered for the development of the exercise after-action report. Ninety-three percent (93%) of exercise planners reported that they found the online database of performance measures appropriate for the creation of exercise evaluation forms, and they stated that they would use it again for future exercises. Seventy-two percent (72%) liked the exercise evaluation form that was generated from the toolkit, and 93% reported that the data collected by the use of the evaluation form were useful in gauging their organization's performance during the exercise. Seventy-nine percent (79%) of exercise planners preferred the evaluation form generated by the toolkit to other forms of evaluations. Results of this project show that users found the newly developed toolkit to be user friendly and more relevant to measurement of specific public health and health care capabilities than other tools currently available. The developed toolkit may contribute to the further advancement of developing a valid approach to exercise performance measurement.

  14. Dali server update.

    PubMed

    Holm, Liisa; Laakso, Laura M

    2016-07-08

    The Dali server (http://ekhidna2.biocenter.helsinki.fi/dali) is a network service for comparing protein structures in 3D. In favourable cases, comparing 3D structures may reveal biologically interesting similarities that are not detectable by comparing sequences. The Dali server has been running in various places for over 20 years and is used routinely by crystallographers on newly solved structures. The latest update of the server provides enhanced analytics for the study of sequence and structure conservation. The server performs three types of structure comparisons: (i) Protein Data Bank (PDB) search compares one query structure against those in the PDB and returns a list of similar structures; (ii) pairwise comparison compares one query structure against a list of structures specified by the user; and (iii) all against all structure comparison returns a structural similarity matrix, a dendrogram and a multidimensional scaling projection of a set of structures specified by the user. Structural superimpositions are visualized using the Java-free WebGL viewer PV. The structural alignment view is enhanced by sequence similarity searches against Uniprot. The combined structure-sequence alignment information is compressed to a stack of aligned sequence logos. In the stack, each structure is structurally aligned to the query protein and represented by a sequence logo. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Facilitated family presence at resuscitation: effectiveness of a nursing student toolkit.

    PubMed

    Kantrowitz-Gordon, Ira; Bennett, Deborah; Wise Stauffer, Debra; Champ-Gibson, Erla; Fitzgerald, Cynthia; Corbett, Cynthia

    2013-10-01

    Facilitated family presence at resuscitation is endorsed by multiple nursing and specialty practice organizations. Implementation of this practice is not universal so there is a need to increase familiarity and competence with facilitated family presence at resuscitation during this significant life event. One strategy to promote this practice is to use a nursing student toolkit for pre-licensure and graduate nursing students. The toolkit includes short video simulations of facilitated family presence at resuscitation, a PowerPoint presentation of evidence-based practice, and questions to facilitate guided discussion. This study tested the effectiveness of this toolkit in increasing nursing students' knowledge, perceptions, and confidence in facilitated family presence at resuscitation. Nursing students from five universities in the United States completed the Family Presence Risk-Benefit Scale, Family Presence Self-Confidence Scale, and a knowledge test before and after the intervention. Implementing the facilitated family presence at resuscitation toolkit significantly increased nursing students' knowledge, perceptions, and confidence related to facilitated family presence at resuscitation (p<.001). The effect size was large for knowledge (d=.90) and perceptions (d=1.04) and moderate for confidence (d=.51). The facilitated family presence at resuscitation toolkit used in this study had a positive impact on students' knowledge, perception of benefits and risks, and self-confidence in facilitated family presence at resuscitation. The toolkit provides students a structured opportunity to consider the presence of family members at resuscitation prior to encountering this situation in clinical practice. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Behavioral Genetic Toolkits: Toward the Evolutionary Origins of Complex Phenotypes.

    PubMed

    Rittschof, C C; Robinson, G E

    2016-01-01

    The discovery of toolkit genes, which are highly conserved genes that consistently regulate the development of similar morphological phenotypes across diverse species, is one of the most well-known observations in the field of evolutionary developmental biology. Surprisingly, this phenomenon is also relevant for a wide array of behavioral phenotypes, despite the fact that these phenotypes are highly complex and regulated by many genes operating in diverse tissues. In this chapter, we review the use of the toolkit concept in the context of behavior, noting the challenges of comparing behaviors and genes across diverse species, but emphasizing the successes in identifying genetic toolkits for behavior; these successes are largely attributable to the creative research approaches fueled by advances in behavioral genomics. We have two general goals: (1) to acknowledge the groundbreaking progress in this field, which offers new approaches to the difficult but exciting challenge of understanding the evolutionary genetic basis of behaviors, some of the most complex phenotypes known, and (2) to provide a theoretical framework that encompasses the scope of behavioral genetic toolkit studies in order to clearly articulate the research questions relevant to the toolkit concept. We emphasize areas for growth and highlight the emerging approaches that are being used to drive the field forward. Behavioral genetic toolkit research has elevated the use of integrative and comparative approaches in the study of behavior, with potentially broad implications for evolutionary biologists and behavioral ecologists alike. © 2016 Elsevier Inc. All rights reserved.

  17. Image correlation method for DNA sequence alignment.

    PubMed

    Curilem Saldías, Millaray; Villarroel Sassarini, Felipe; Muñoz Poblete, Carlos; Vargas Vásquez, Asticio; Maureira Butler, Iván

    2012-01-01

    The complexity of searches and the volume of genomic data make sequence alignment one of bioinformatics' most active research areas. New alignment approaches have incorporated digital signal processing techniques. Among these, correlation methods are highly sensitive. This paper proposes a novel sequence alignment method based on 2-dimensional images, where each nucleic acid base is represented as a fixed gray intensity pixel. Query and known database sequences are coded to their pixel representation and sequence alignment is handled as object recognition in a scene problem. Query and database become object and scene, respectively. An image correlation process is carried out in order to search for the best match between them. Given that this procedure can be implemented in an optical correlator, the correlation could eventually be accomplished at light speed. This paper shows an initial research stage where results were "digitally" obtained by simulating an optical correlation of DNA sequences represented as images. A total of 303 queries (variable lengths from 50 to 4500 base pairs) and 100 scenes represented by 100 x 100 images each (in total, one million base pair database) were considered for the image correlation analysis. The results showed that correlations reached very high sensitivity (99.01%), specificity (98.99%) and outperformed BLAST when mutation numbers increased. However, digital correlation processes were a hundred times slower than BLAST. We are currently starting an initiative to evaluate the correlation speed of a real experimental optical correlator. By doing this, we expect to fully exploit optical correlation light properties. As the optical correlator works jointly with the computer, digital algorithms should also be optimized. The results presented in this paper are encouraging and support the study of image correlation methods for sequence alignment.
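
    A one-dimensional analogue of the image-correlation idea described above can be written in a few lines: each base is mapped to a fixed gray intensity and the query is located in the database "scene" by picking the offset with the highest mean-centred correlation. The intensity values and sequences below are illustrative, and the optical-correlator aspect is of course not reproduced.

```python
# Toy 1-D analogue of the correlation-based alignment described above:
# bases become gray intensities, the query is located by cross-correlation.
import numpy as np

GRAY = {"A": 0.25, "C": 0.50, "G": 0.75, "T": 1.00}

def encode(seq):
    return np.array([GRAY[b] for b in seq])

def best_match_offset(query, scene):
    q, s = encode(query), encode(scene)
    q = q - q.mean()
    scores = [np.dot(q, s[i:i + len(q)] - s[i:i + len(q)].mean())
              for i in range(len(s) - len(q) + 1)]
    return int(np.argmax(scores))

print(best_match_offset("GATTACA", "TTTTGATTACATTTT"))   # -> 4
```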

  18. Rapid protein alignment in the cloud: HAMOND combines fast DIAMOND alignments with Hadoop parallelism.

    PubMed

    Yu, Jia; Blom, Jochen; Sczyrba, Alexander; Goesmann, Alexander

    2017-09-10

    The introduction of next generation sequencing has caused a steady increase in the amounts of data that have to be processed in modern life science. Sequence alignment plays a key role in the analysis of sequencing data e.g. within whole genome sequencing or metagenome projects. BLAST is a commonly used alignment tool that was the standard approach for more than two decades, but in the last years faster alternatives have been proposed including RapSearch, GHOSTX, and DIAMOND. Here we introduce HAMOND, an application that uses Apache Hadoop to parallelize DIAMOND computation in order to scale-out the calculation of alignments. HAMOND is fault tolerant and scalable by utilizing large cloud computing infrastructures like Amazon Web Services. HAMOND has been tested in comparative genomics analyses and showed promising results both in efficiency and accuracy. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
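
    HAMOND's scatter/gather strategy can be mimicked locally without Hadoop: split the query FASTA into chunks, align each chunk with DIAMOND in parallel, and concatenate the results. The sketch below only illustrates that idea; it assumes a DIAMOND database named "ref" has already been built with diamond makedb, the file names and chunk size are placeholders, and none of HAMOND's fault tolerance or cloud integration is reproduced.

```python
# Scatter/gather sketch (not HAMOND): run DIAMOND blastx on FASTA chunks in
# parallel on one machine, then merge the tabular outputs.
import subprocess
from concurrent.futures import ProcessPoolExecutor

def split_fasta(path, records_per_chunk=10000):
    chunks, current, count = [], [], 0
    with open(path) as fh:
        for line in fh:
            if line.startswith(">") and count == records_per_chunk:
                chunks.append("".join(current)); current, count = [], 0
            if line.startswith(">"):
                count += 1
            current.append(line)
    if current:
        chunks.append("".join(current))
    return chunks

def run_diamond(args):
    idx, chunk = args
    query, out = f"chunk_{idx}.fna", f"chunk_{idx}.tsv"
    with open(query, "w") as fh:
        fh.write(chunk)
    subprocess.run(["diamond", "blastx", "-d", "ref", "-q", query, "-o", out],
                   check=True)
    return out

if __name__ == "__main__":
    chunks = split_fasta("reads.fna")                 # scatter step
    with ProcessPoolExecutor() as pool:
        outputs = list(pool.map(run_diamond, enumerate(chunks)))
    with open("matches.tsv", "w") as merged:          # gather step
        for out in outputs:
            merged.write(open(out).read())
```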

  19. Wind Integration National Dataset (WIND) Toolkit; NREL (National Renewable Energy Laboratory)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draxl, Caroline; Hodge, Bri-Mathias

    A webinar about the Wind Integration National Dataset (WIND) Toolkit was presented by Bri-Mathias Hodge and Caroline Draxl on July 14, 2015. It was hosted by the Southern Alliance for Clean Energy. The toolkit is a grid integration data set that contains meteorological and power data at a 5-minute resolution across the continental United States for 7 years and hourly power forecasts.

  20. Toolkit of Resources for Engaging Families and the Community as Partners in Education: Part 2: Building a Cultural Bridge. REL 2016-151

    ERIC Educational Resources Information Center

    Garcia, Maria Elena; Frunzi, Kay; Dean, Ceri B.; Flores, Nieves; Miller, Kirsten B.

    2016-01-01

    The Toolkit of Resources for Engaging Families and the Community as Partners in Education is a four-part resource that brings together research, promising practices, and useful tools and resources to guide educators in strengthening partnerships with families and community members to support student learning. The toolkit defines family and…

  1. Semantic Web Technologies for Mobile Context-Aware Services

    DTIC Science & Technology

    2006-03-01

    (Recoverable fragments only.) The report describes an architecture for context-aware service provisioning whose components include a communication toolkit (http, e-mail, IM, etc.), a user interaction manager, a platform manager, white and yellow pages, and a MAS administration toolkit. Cited references include [NAICS] North American Industry Classification System, http://www.census.gov/epcd/www/naics.html, and [OS00] Opermann, R., and Specht, M., "A Context..." (title truncated).

  2. Toolkit of Resources for Engaging Parents and Community as Partners in Education. Part 3: Building Trusting Relationships with Families & Community through Effective Communication

    ERIC Educational Resources Information Center

    Regional Educational Laboratory Pacific, 2015

    2015-01-01

    This toolkit is designed to guide school staff in strengthening partnerships with families and community members to support student learning. This toolkit offers an integrated approach to family and community engagement, bringing together research, promising practices, and a wide range of useful tools and resources with explanations and directions…

  3. Designing and evaluating an interprofessional shared decision-making and goal-setting decision aid for patients with diabetes in clinical care--systematic decision aid development and study protocol.

    PubMed

    Yu, Catherine H; Stacey, Dawn; Sale, Joanna; Hall, Susan; Kaplan, David M; Ivers, Noah; Rezmovitz, Jeremy; Leung, Fok-Han; Shah, Baiju R; Straus, Sharon E

    2014-01-22

    Care of patients with diabetes often occurs in the context of other chronic illness. Competing disease priorities and competing patient-physician priorities present challenges in the provision of care for the complex patient. Guideline implementation interventions to date do not acknowledge these intricacies of clinical practice. As a result, patients and providers are left overwhelmed and paralyzed by the sheer volume of recommendations and tasks. An individualized approach to the patient with diabetes and multiple comorbid conditions using shared decision-making (SDM) and goal setting has been advocated as a patient-centred approach that may facilitate prioritization of treatment options. Furthermore, incorporating interprofessional integration into practice may overcome barriers to implementation. However, these strategies have not been taken up extensively in clinical practice. To systematically develop and test an interprofessional SDM and goal-setting toolkit for patients with diabetes and other chronic diseases, following the Knowledge to Action framework. 1. Feasibility study: Individual interviews with primary care physicians, nurses, dietitians, pharmacists, and patients with diabetes will be conducted, exploring their experiences with shared decision-making and priority-setting, including facilitators and barriers, the relevance of a decision aid and toolkit for priority-setting, and how best to integrate it into practice. 2. Toolkit development: Based on this data, an evidence-based multi-component SDM toolkit will be developed. The toolkit will be reviewed by content experts (primary care, endocrinology, geriatricians, nurses, dietitians, pharmacists, patients) for accuracy and comprehensiveness. 3. Heuristic evaluation: A human factors engineer will review the toolkit and identify, list and categorize usability issues by severity. 4. Usability testing: This will be done using cognitive task analysis. 5. Iterative refinement: Throughout the development process, the toolkit will be refined through several iterative cycles of feedback and redesign. Interprofessional shared decision-making regarding priority-setting with the use of a decision aid toolkit may help prioritize care of individuals with multiple comorbid conditions. Adhering to principles of user-centered design, we will develop and refine a toolkit to assess the feasibility of this approach.

  4. Designing and evaluating an interprofessional shared decision-making and goal-setting decision aid for patients with diabetes in clinical care - systematic decision aid development and study protocol

    PubMed Central

    2014-01-01

    Background Care of patients with diabetes often occurs in the context of other chronic illness. Competing disease priorities and competing patient-physician priorities present challenges in the provision of care for the complex patient. Guideline implementation interventions to date do not acknowledge these intricacies of clinical practice. As a result, patients and providers are left overwhelmed and paralyzed by the sheer volume of recommendations and tasks. An individualized approach to the patient with diabetes and multiple comorbid conditions using shared decision-making (SDM) and goal setting has been advocated as a patient-centred approach that may facilitate prioritization of treatment options. Furthermore, incorporating interprofessional integration into practice may overcome barriers to implementation. However, these strategies have not been taken up extensively in clinical practice. Objectives To systematically develop and test an interprofessional SDM and goal-setting toolkit for patients with diabetes and other chronic diseases, following the Knowledge to Action framework. Methods 1. Feasibility study: Individual interviews with primary care physicians, nurses, dietitians, pharmacists, and patients with diabetes will be conducted, exploring their experiences with shared decision-making and priority-setting, including facilitators and barriers, the relevance of a decision aid and toolkit for priority-setting, and how best to integrate it into practice. 2. Toolkit development: Based on this data, an evidence-based multi-component SDM toolkit will be developed. The toolkit will be reviewed by content experts (primary care, endocrinology, geriatricians, nurses, dietitians, pharmacists, patients) for accuracy and comprehensiveness. 3. Heuristic evaluation: A human factors engineer will review the toolkit and identify, list and categorize usability issues by severity. 4. Usability testing: This will be done using cognitive task analysis. 5. Iterative refinement: Throughout the development process, the toolkit will be refined through several iterative cycles of feedback and redesign. Discussion Interprofessional shared decision-making regarding priority-setting with the use of a decision aid toolkit may help prioritize care of individuals with multiple comorbid conditions. Adhering to principles of user-centered design, we will develop and refine a toolkit to assess the feasibility of this approach. PMID:24450385

  5. Plant Genome Resources at the National Center for Biotechnology Information

    PubMed Central

    Wheeler, David L.; Smith-White, Brian; Chetvernin, Vyacheslav; Resenchuk, Sergei; Dombrowski, Susan M.; Pechous, Steven W.; Tatusova, Tatiana; Ostell, James

    2005-01-01

    The National Center for Biotechnology Information (NCBI) integrates data from more than 20 biological databases through a flexible search and retrieval system called Entrez. A core Entrez database, Entrez Nucleotide, includes GenBank and is tightly linked to the NCBI Taxonomy database, the Entrez Protein database, and the scientific literature in PubMed. A suite of more specialized databases for genomes, genes, gene families, gene expression, gene variation, and protein domains dovetails with the core databases to make Entrez a powerful system for genomic research. Linked to the full range of Entrez databases is the NCBI Map Viewer, which displays aligned genetic, physical, and sequence maps for eukaryotic genomes including those of many plants. A specialized plant query page allows maps from all plant genomes covered by the Map Viewer to be searched in tandem to produce a display of aligned maps from several species. PlantBLAST searches against the sequences shown in the Map Viewer allow BLAST alignments to be viewed within a genomic context. In addition, precomputed sequence similarities, such as those for proteins offered by BLAST Link, enable fluid navigation from unannotated to annotated sequences, quickening the pace of discovery. NCBI Web pages for plants, such as Plant Genome Central, complete the system by providing centralized access to NCBI's genomic resources as well as links to organism-specific Web pages beyond NCBI. PMID:16010002
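
    Entrez can also be queried programmatically. The sketch below is not part of the NCBI resources described above: it uses the Biopython Entrez module to run a small Entrez Nucleotide search and fetch one GenBank record; the e-mail address and search term are placeholders.

```python
# Illustrative Entrez query via Biopython (not an NCBI-provided script).
from Bio import Entrez

Entrez.email = "you@example.org"   # NCBI asks clients to identify themselves

# Search Entrez Nucleotide and fetch the first hit in GenBank format.
search = Entrez.read(Entrez.esearch(db="nucleotide",
                                    term="Arabidopsis thaliana[Organism] AND chloroplast",
                                    retmax=1))
if search["IdList"]:
    handle = Entrez.efetch(db="nucleotide", id=search["IdList"][0],
                           rettype="gb", retmode="text")
    print(handle.read()[:500])     # print the start of the GenBank record
```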

  6. Simultaneous phylogeny reconstruction and multiple sequence alignment

    PubMed Central

    Yue, Feng; Shi, Jian; Tang, Jijun

    2009-01-01

    Background A phylogeny is the evolutionary history of a group of organisms. To date, sequence data is still the most used data type for phylogenetic reconstruction. Before any sequences can be used for phylogeny reconstruction, they must be aligned, and the quality of the multiple sequence alignment has been shown to affect the quality of the inferred phylogeny. At the same time, all the current multiple sequence alignment programs use a guide tree to produce the alignment and experiments showed that good guide trees can significantly improve the multiple alignment quality. Results We devise a new algorithm to simultaneously align multiple sequences and search for the phylogenetic tree that leads to the best alignment. We also implemented the algorithm as a C program package, which can handle both DNA and protein data and can take simple cost model as well as complex substitution matrices, such as PAM250 or BLOSUM62. The performance of the new method are compared with those from other popular multiple sequence alignment tools, including the widely used programs such as ClustalW and T-Coffee. Experimental results suggest that this method has good performance in terms of both phylogeny accuracy and alignment quality. Conclusion We present an algorithm to align multiple sequences and reconstruct the phylogenies that minimize the alignment score, which is based on an efficient algorithm to solve the median problems for three sequences. Our extensive experiments suggest that this method is very promising and can produce high quality phylogenies and alignments. PMID:19208110
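
    The scoring schemes mentioned above (a simple cost model versus substitution matrices such as PAM250 or BLOSUM62) can be tried out quickly with off-the-shelf tools. The fragment below is not the authors' C package; it uses Biopython's PairwiseAligner to score one protein pair under both kinds of model, purely to illustrate the scoring options the method accepts. Sequences and gap costs are illustrative.

```python
# Illustration only: scoring a protein pair under BLOSUM62 with affine gaps
# versus a simple match/mismatch cost model, using Biopython.
from Bio import Align
from Bio.Align import substitution_matrices

blosum = Align.PairwiseAligner()
blosum.substitution_matrix = substitution_matrices.load("BLOSUM62")
blosum.open_gap_score, blosum.extend_gap_score = -10, -1

simple = Align.PairwiseAligner()
simple.match_score, simple.mismatch_score = 1, -1
simple.open_gap_score, simple.extend_gap_score = -2, -1

a, b = "MKTAYIAKQR", "MKTAHIAKQR"
print("BLOSUM62 score:", blosum.score(a, b))
print("simple score:  ", simple.score(a, b))
```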

  7. ARYANA: Aligning Reads by Yet Another Approach

    PubMed Central

    2014-01-01

    Motivation Although there are many different algorithms and software tools for aligning sequencing reads, fast gapped sequence search is far from solved. Strong interest in fast alignment is best reflected in the $10^6 prize for the Innocentive competition on aligning a collection of reads to a given database of reference genomes. In addition, de novo assembly of next-generation sequencing long reads requires fast overlap-layout-consensus algorithms which depend on fast and accurate alignment. Contribution We introduce ARYANA, a fast gapped read aligner, developed on the basis of the BWA indexing infrastructure with a completely new alignment engine that makes it significantly faster than three other aligners: Bowtie2, BWA and SeqAlto, with comparable generality and accuracy. Instead of the time-consuming backtracking procedures for handling mismatches, ARYANA comes with the seed-and-extend algorithmic framework and a significantly improved efficiency by integrating novel algorithmic techniques including dynamic seed selection, bidirectional seed extension, reset-free hash tables, and gap-filling dynamic programming. As the read length increases ARYANA's superiority in terms of speed and alignment rate becomes more evident. This is in perfect harmony with the read length trend as the sequencing technologies evolve. The algorithmic platform of ARYANA makes it easy to develop mission-specific aligners for other applications using the ARYANA engine. Availability ARYANA with complete source code can be obtained from http://github.com/aryana-aligner PMID:25252881
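
    The seed-and-extend idea referred to above can be illustrated with a toy aligner: exact k-mer seeds from the read are looked up in a hash index of the reference and each candidate placement is verified by counting mismatches. ARYANA's actual engine (BWT/BWA indexing, dynamic seed selection, bidirectional extension, gap filling) is not reproduced here; the sequences and parameters are illustrative.

```python
# Toy seed-and-extend read alignment (illustration only, not ARYANA).
def build_seed_index(reference, k=11):
    index = {}
    for i in range(len(reference) - k + 1):
        index.setdefault(reference[i:i + k], []).append(i)
    return index

def align_read(read, reference, index, k=11, max_mismatches=3):
    best = None
    for s in range(0, len(read) - k + 1, k):           # non-overlapping seeds
        for pos in index.get(read[s:s + k], []):
            start = pos - s                            # candidate read placement
            if start < 0 or start + len(read) > len(reference):
                continue
            mismatches = sum(r != g for r, g in
                             zip(read, reference[start:start + len(read)]))
            if mismatches <= max_mismatches and (best is None or mismatches < best[1]):
                best = (start, mismatches)
    return best   # (position, mismatches) or None if unmapped

ref = "ACGTACGTTTGACCAGTACGGATCCAGTTTACGATCGATCGGCTA"
idx = build_seed_index(ref)
print(align_read("GACCAGTACGGATCAAGT", ref, idx))      # -> (10, 1)
```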

  8. ARYANA: Aligning Reads by Yet Another Approach.

    PubMed

    Gholami, Milad; Arbabi, Aryan; Sharifi-Zarchi, Ali; Chitsaz, Hamidreza; Sadeghi, Mehdi

    2014-01-01

    Although there are many different algorithms and software tools for aligning sequencing reads, fast gapped sequence search is far from solved. Strong interest in fast alignment is best reflected in the $10^6 prize for the Innocentive competition on aligning a collection of reads to a given database of reference genomes. In addition, de novo assembly of next-generation sequencing long reads requires fast overlap-layout-consensus algorithms which depend on fast and accurate alignment. We introduce ARYANA, a fast gapped read aligner, developed on the basis of the BWA indexing infrastructure with a completely new alignment engine that makes it significantly faster than three other aligners: Bowtie2, BWA and SeqAlto, with comparable generality and accuracy. Instead of the time-consuming backtracking procedures for handling mismatches, ARYANA comes with the seed-and-extend algorithmic framework and a significantly improved efficiency by integrating novel algorithmic techniques including dynamic seed selection, bidirectional seed extension, reset-free hash tables, and gap-filling dynamic programming. As the read length increases ARYANA's superiority in terms of speed and alignment rate becomes more evident. This is in perfect harmony with the read length trend as the sequencing technologies evolve. The algorithmic platform of ARYANA makes it easy to develop mission-specific aligners for other applications using the ARYANA engine. ARYANA with complete source code can be obtained from http://github.com/aryana-aligner.

  9. Local alignment of two-base encoded DNA sequence

    PubMed Central

    Homer, Nils; Merriman, Barry; Nelson, Stanley F

    2009-01-01

    Background DNA sequence comparison is based on optimal local alignment of two sequences using a similarity score. However, some new DNA sequencing technologies do not directly measure the base sequence, but rather an encoded form, such as the two-base encoding considered here. In order to compare such data to a reference sequence, the data must be decoded into sequence. The decoding is deterministic, but the possibility of measurement errors requires searching among all possible error modes and resulting alignments to achieve an optimal balance of fewer errors versus greater sequence similarity. Results We present an extension of the standard dynamic programming method for local alignment, which simultaneously decodes the data and performs the alignment, maximizing a similarity score based on a weighted combination of errors and edits, and allowing an affine gap penalty. We also present simulations that demonstrate the performance characteristics of our two base encoded alignment method and contrast those with standard DNA sequence alignment under the same conditions. Conclusion The new local alignment algorithm for two-base encoded data has substantial power to properly detect and correct measurement errors while identifying underlying sequence variants, and facilitating genome re-sequencing efforts based on this form of sequence data. PMID:19508732
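
    The two-base encoding discussed above can be made concrete with the standard SOLiD di-base code, where the color of each adjacent base pair is the XOR of the two 2-bit base codes. The sketch below only shows encoding and deterministic decoding; the paper's combined decode-and-align dynamic programming is not reproduced.

```python
# Two-base ("color-space") encoding sketch using the standard SOLiD code
# (color = XOR of 2-bit base codes); illustration only.
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = {v: k for k, v in CODE.items()}

def encode_colors(seq):
    """Encode a base sequence into its color sequence (length len(seq) - 1)."""
    return [CODE[a] ^ CODE[b] for a, b in zip(seq, seq[1:])]

def decode_colors(first_base, colors):
    """Decode deterministically given the first base; note that a single color
    error propagates to all downstream bases, which is why alignment must
    weigh measurement errors against genuine sequence edits."""
    bases = [first_base]
    for c in colors:
        bases.append(BASE[CODE[bases[-1]] ^ c])
    return "".join(bases)

colors = encode_colors("ACGTTAG")
print(colors)                          # [1, 3, 1, 0, 3, 2]
print(decode_colors("A", colors))      # "ACGTTAG"
```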

  10. NetCoffee: a fast and accurate global alignment approach to identify functionally conserved proteins in multiple networks.

    PubMed

    Hu, Jialu; Kehr, Birte; Reinert, Knut

    2014-02-15

    Owing to recent advancements in high-throughput technologies, protein-protein interaction networks of more and more species are becoming available in public databases. The question of how to identify functionally conserved proteins across species attracts a lot of attention in computational biology. Network alignments provide a systematic way to solve this problem. However, most existing alignment tools encounter limitations in tackling this problem. Therefore, the demand for faster and more efficient alignment tools is growing. We present a fast and accurate algorithm, NetCoffee, which finds a global alignment of multiple protein-protein interaction networks. NetCoffee searches for a global alignment by maximizing a target function using simulated annealing on a set of weighted bipartite graphs that are constructed using a triplet approach similar to T-Coffee. To assess its performance, NetCoffee was applied to four real datasets. Our results suggest that NetCoffee remedies several limitations of previous algorithms, outperforms all existing alignment tools in terms of speed and nevertheless identifies biologically meaningful alignments. The source code and data are freely available for download under the GNU GPL v3 license at https://code.google.com/p/netcoffee/.
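
    The simulated-annealing search referred to above follows the usual accept-or-reject pattern. The skeleton below is a generic maximiser shown on a toy objective, not NetCoffee's alignment objective or its bipartite-graph construction; the temperature schedule and neighbourhood move are placeholders.

```python
# Generic simulated-annealing maximiser (illustration only, not NetCoffee).
import math, random

def simulated_annealing(initial, neighbour, score, t0=1.0, cooling=0.995, steps=5000):
    state, best = initial, initial
    t = t0
    for _ in range(steps):
        cand = neighbour(state)
        delta = score(cand) - score(state)
        # Always accept improvements; accept worsening moves with probability exp(delta/t).
        if delta >= 0 or random.random() < math.exp(delta / t):
            state = cand
            if score(state) > score(best):
                best = state
        t *= cooling
    return best

# Toy usage: maximise -(x - 3)^2 over real x; the result converges near 3.
print(simulated_annealing(0.0,
                          lambda x: x + random.uniform(-0.5, 0.5),
                          lambda x: -(x - 3) ** 2))
```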

  11. Identification and analysis of multigene families by comparison of exon fingerprints.

    PubMed

    Brown, N P; Whittaker, A J; Newell, W R; Rawlings, C J; Beck, S

    1995-06-02

    Gene families are often recognised by sequence homology using similarity searching to find relationships, however, genomic sequence data provides gene architectural information not used by conventional search methods. In particular, intron positions and phases are expected to be relatively conserved features, because mis-splicing and reading frame shifts should be selected against. A fast search technique capable of detecting possible weak sequence homologies apparent at the intron/exon level of gene organization is presented for comparing spliceosomal genes and gene fragments. FINEX compares strings of exons delimited by intron/exon boundary positions and intron phases (exon fingerprint) using a global dynamic programming algorithm with a combined intron phase identity and exon size dissimilarity score. Exon fingerprints are typically two orders of magnitude smaller than their nucleic acid sequence counterparts giving rise to fast search times: a ranked search against a library of 6755 fingerprints for a typical three exon fingerprint completes in under 30 seconds on an ordinary workstation, while a worst case largest fingerprint of 52 exons completes in just over one minute. The short "sequence" length of exon fingerprints in comparisons is compensated for by the large exon alphabet compounded of intron phase types and a wide range of exon sizes, the latter contributing the most information to alignments. FINEX performs better in some searches than conventional methods, finding matches with similar exon organization, but low sequence homology. A search using a human serum albumin finds all members of the multigene family in the FINEX database at the top of the search ranking, despite very low amino acid percentage identities between family members. The method should complement conventional sequence searching and alignment techniques, offering a means of identifying otherwise hard to detect homologies where genomic data are available.

  12. FoldMiner and LOCK 2: protein structure comparison and motif discovery on the web.

    PubMed

    Shapiro, Jessica; Brutlag, Douglas

    2004-07-01

    The FoldMiner web server (http://foldminer.stanford.edu/) provides remote access to methods for protein structure alignment and unsupervised motif discovery. FoldMiner is unique among such algorithms in that it improves both the motif definition and the sensitivity of a structural similarity search by combining the search and motif discovery methods and using information from each process to enhance the other. In a typical run, a query structure is aligned to all structures in one of several databases of single domain targets in order to identify its structural neighbors and to discover a motif that is the basis for the similarity among the query and statistically significant targets. This process is fully automated, but options for manual refinement of the results are available as well. The server uses the Chime plugin and customized controls to allow for visualization of the motif and of structural superpositions. In addition, we provide an interface to the LOCK 2 algorithm for rapid alignments of a query structure to smaller numbers of user-specified targets.

  13. The VirusBanker database uses a Java program to allow flexible searching through Bunyaviridae sequences.

    PubMed

    Fourment, Mathieu; Gibbs, Mark J

    2008-02-05

    Viruses of the Bunyaviridae have segmented negative-stranded RNA genomes and several of them cause significant disease. Many partial sequences have been obtained from the segments so that GenBank searches give complex results. Sequence databases usually use HTML pages to mediate remote sorting, but this approach can be limiting and may discourage a user from exploring a database. The VirusBanker database contains Bunyaviridae sequences and alignments and is presented as two spreadsheets generated by a Java program that interacts with a MySQL database on a server. Sequences are displayed in rows and may be sorted using information that is displayed in columns and includes data relating to the segment, gene, protein, species, strain, sequence length, terminal sequence and date and country of isolation. Bunyaviridae sequences and alignments may be downloaded from the second spreadsheet with titles defined by the user from the columns, or viewed when passed directly to the sequence editor, Jalview. VirusBanker allows large datasets of aligned nucleotide and protein sequences from the Bunyaviridae to be compiled and winnowed rapidly using criteria that are formulated heuristically.

  14. Mitigation of Angle Tracking Errors Due to Color Dependent Centroid Shifts in SIM-Lite

    NASA Technical Reports Server (NTRS)

    Nemati, Bijan; An, Xin; Goullioud, Renaud; Shao, Michael; Shen, Tsae-Pyng; Wehmeier, Udo J.; Weilert, Mark A.; Wang, Xu; Werne, Thomas A.; Wu, Janet P.; hide

    2010-01-01

    The SIM-Lite astrometric interferometer will search for Earth-size planets in the habitable zones of nearby stars. In this search the interferometer will monitor the astrometric position of candidate stars relative to nearby reference stars over the course of a 5 year mission. The elemental measurement is the angle between a target star and a reference star. This is a two-step process, in which the interferometer will each time need to use its controllable optics to align the starlight in the two arms with each other and with the metrology beams. The sensor for this alignment is an angle tracking CCD camera. Various constraints in the design of the camera subject it to systematic alignment errors when observing a star of one spectrum compared with a star of a different spectrum. This effect is called a Color Dependent Centroid Shift (CDCS) and has been studied extensively with SIM-Lite's SCDU testbed. Here we describe results from the simulation and testing of this error in the SCDU testbed, as well as effective ways that it can be reduced to acceptable levels.

  15. Toolkit of Available EPA Green Infrastructure Modeling ...

    EPA Pesticide Factsheets

    This webinar will present a toolkit consisting of five EPA green infrastructure models and tools, along with communication material. This toolkit can be used as a teaching and quick reference resource for use by planners and developers when making green infrastructure implementation decisions. It can also be used for low impact development design competitions. Models and tools included: Green Infrastructure Wizard (GIWiz), Watershed Management Optimization Support Tool (WMOST), Visualizing Ecosystem Land Management Assessments (VELMA) Model, Storm Water Management Model (SWMM), and the National Stormwater Calculator (SWC).

  16. Designing a ticket to ride with the Cognitive Work Analysis Design Toolkit.

    PubMed

    Read, Gemma J M; Salmon, Paul M; Lenné, Michael G; Jenkins, Daniel P

    2015-01-01

    Cognitive work analysis has been applied in the design of numerous sociotechnical systems. The process used to translate analysis outputs into design concepts, however, is not always clear. Moreover, structured processes for translating the outputs of ergonomics methods into concrete designs are lacking. This paper introduces the Cognitive Work Analysis Design Toolkit (CWA-DT), a design approach which has been developed specifically to provide a structured means of incorporating cognitive work analysis outputs in design using design principles and values derived from sociotechnical systems theory. This paper outlines the CWA-DT and describes its application in a public transport ticketing design case study. Qualitative and quantitative evaluations of the process provide promising early evidence that the toolkit fulfils the evaluation criteria identified for its success, with opportunities for improvement also highlighted. The Cognitive Work Analysis Design Toolkit has been developed to provide ergonomics practitioners with a structured approach for translating the outputs of cognitive work analysis into design solutions. This paper demonstrates an application of the toolkit and provides evaluation findings.

  17. Field tests of a participatory ergonomics toolkit for Total Worker Health.

    PubMed

    Nobrega, Suzanne; Kernan, Laura; Plaku-Alakbarova, Bora; Robertson, Michelle; Warren, Nicholas; Henning, Robert

    2017-04-01

    Growing interest in Total Worker Health® (TWH) programs to advance worker safety, health and well-being motivated development of a toolkit to guide their implementation. Iterative design of a program toolkit occurred in which participatory ergonomics (PE) served as the primary basis to plan integrated TWH interventions in four diverse organizations. The toolkit provided start-up guides for committee formation and training, and a structured PE process for generating integrated TWH interventions. Process data from program facilitators and participants throughout program implementation were used for iterative toolkit design. Program success depended on organizational commitment to regular design team meetings with a trained facilitator, the availability of subject matter experts on ergonomics and health to support the design process, and retraining whenever committee turnover occurred. A two-committee structure (employee Design Team, management Steering Committee) provided advantages over a single, multilevel committee structure, and enhanced the planning, communication, and teamwork skills of participants. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Measures of Success for Earth System Science Education: The DLESE Evaluation Services and the Evaluation Toolkit Collection

    NASA Astrophysics Data System (ADS)

    McCaffrey, M. S.; Buhr, S. M.; Lynds, S.

    2005-12-01

    Increased agency emphasis upon the integration of research and education coupled with the ability to provide students with access to digital background materials, learning activities and primary data sources has begun to revolutionize Earth science education in formal and informal settings. The DLESE Evaluation Services team and the related Evaluation Toolkit collection (http://www.dlese.org/cms/evalservices/) provide services and tools for education project leads and educators. Through the Evaluation Toolkit, educators may access high-quality digital materials to assess students' cognitive gains, examples of alternative assessments, and case studies and exemplars of authentic research. The DLESE Evaluation Services team provides support for those who are developing evaluation plans on an as-requested basis. In addition, the Toolkit provides authoritative peer-reviewed articles about evaluation research techniques and strategies of particular importance to geoscience education. This paper will provide an overview of the DLESE Evaluation Toolkit and discuss challenges and best practices for assessing student learning and evaluating Earth system sciences education in a digital world.

  19. Respiratory Protection Toolkit: Providing Guidance Without Changing Requirements-Can We Make an Impact?

    PubMed

    Bien, Elizabeth Ann; Gillespie, Gordon Lee; Betcher, Cynthia Ann; Thrasher, Terri L; Mingerink, Donna R

    2016-12-01

    International travel and infectious respiratory illnesses worldwide place health care workers (HCWs) at increasing risk of respiratory exposures. To ensure the highest quality safety initiatives, one health care system used a quality improvement model of Plan-Do-Study-Act and guidance from the Occupational Safety and Health Administration's (OSHA) May 2015 Hospital Respiratory Protection Program (RPP) Toolkit to assess a current program. The toolkit aided in identification of opportunities for improvement within their well-designed RPP. One opportunity was requiring respirator use during aerosol-generating procedures for specific infectious illnesses. Observation data demonstrated opportunities to mitigate controllable risks including strap placement, user seal check, and reuse of disposable N95 filtering facepiece respirators. Subsequent interdisciplinary collaboration resulted in other ideas to decrease risks and increase protection from potentially infectious respiratory illnesses. The toolkit's comprehensive document to evaluate the program showed that while the OSHA standards have not changed, the addition of the toolkit can better protect HCWs. © 2016 The Author(s).

  20. Can a workbook work? Examining whether a practitioner evaluation toolkit can promote instrumental use.

    PubMed

    Campbell, Rebecca; Townsend, Stephanie M; Shaw, Jessica; Karim, Nidal; Markowitz, Jenifer

    2015-10-01

    In large-scale, multi-site contexts, developing and disseminating practitioner-oriented evaluation toolkits are an increasingly common strategy for building evaluation capacity. Toolkits explain the evaluation process, present evaluation design choices, and offer step-by-step guidance to practitioners. To date, there has been limited research on whether such resources truly foster the successful design, implementation, and use of evaluation findings. In this paper, we describe a multi-site project in which we developed a practitioner evaluation toolkit and then studied the extent to which the toolkit and accompanying technical assistance was effective in promoting successful completion of local-level evaluations and fostering instrumental use of the findings (i.e., whether programs directly used their findings to improve practice, see Patton, 2008). Forensic nurse practitioners from six geographically dispersed service programs completed methodologically rigorous evaluations; furthermore, all six programs used the findings to create programmatic and community-level changes to improve local practice. Implications for evaluation capacity building are discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Accuracy of Binary Black Hole waveforms for Advanced LIGO searches

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Chu, Tony; Fong, Heather; Brown, Duncan; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela

    2015-04-01

    Coalescing binaries of compact objects are flagship sources for the first direct detection of gravitational waves with LIGO-Virgo observatories. Matched-filtering based detection searches aimed at binaries of black holes will use aligned spin waveforms as filters, and their efficiency hinges on the accuracy of the underlying waveform models. A number of gravitational waveform models are available in the literature, e.g. the Effective-One-Body, Phenomenological, and traditional post-Newtonian ones. While Numerical Relativity (NR) simulations provide for the most accurate modeling of gravitational radiation from compact binaries, their computational cost limits their application in large scale searches. In this talk we assess the accuracy of waveform models in two regions of parameter space, which have only been explored cursorily in the past: the high mass-ratio regime as well as the comparable mass-ratio + high spin regime. Using the SpEC code, six q = 7 simulations with aligned-spins and lasting 60 orbits, and tens of q ∈ [1,3] simulations with high black hole spins were performed. We use them to study the accuracy and intrinsic parameter biases of different waveform families, and assess their viability for Advanced LIGO searches.
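
    Waveform-accuracy studies of this kind are usually summarized with the noise-weighted match between two waveforms, maximized over relative time and phase shifts. The sketch below computes that standard quantity with NumPy; the "waveforms" and noise curve are toy placeholders, not SpEC data or a real detector PSD.

        import numpy as np

        def match(h1_f, h2_f, psd, df):
            """Noise-weighted match between two frequency-domain waveforms,
            maximized over relative time shift (and overall phase via the modulus)."""
            def sigma2(h):
                return 4.0 * df * np.sum(np.abs(h) ** 2 / psd)
            # complex overlap as a function of time shift, evaluated via an inverse FFT
            integrand = 4.0 * df * h1_f * np.conj(h2_f) / psd
            overlap_t = np.fft.ifft(integrand) * len(integrand)
            return np.max(np.abs(overlap_t)) / np.sqrt(sigma2(h1_f) * sigma2(h2_f))

        # placeholder inputs: two slightly de-phased chirp-like frequency series
        f = np.linspace(20.0, 512.0, 4096)
        df = f[1] - f[0]
        psd = 1e-46 * (1.0 + (100.0 / f) ** 4)          # toy noise curve, not a real PSD
        h1 = np.exp(2j * np.pi * 0.0100 * f ** 1.5) / f ** (7.0 / 6.0)
        h2 = np.exp(2j * np.pi * 0.0101 * f ** 1.5) / f ** (7.0 / 6.0)
        print(f"match = {match(h1, h2, psd, df):.4f}")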

  2. Protein structure database search and evolutionary classification.

    PubMed

    Yang, Jinn-Moon; Tung, Chi-Hua

    2006-01-01

    As more protein structures become available and structural genomics efforts provide structural models in a genome-wide strategy, there is a growing need for fast and accurate methods for discovering homologous proteins and evolutionary classifications of newly determined structures. We have developed 3D-BLAST, in part, to address these issues. 3D-BLAST is as fast as BLAST and calculates the statistical significance (E-value) of an alignment to indicate the reliability of the prediction. Using this method, we first identified 23 states of the structural alphabet that represent pattern profiles of the backbone fragments and then used them to represent protein structure databases as structural alphabet sequence databases (SADB). Our method enhanced BLAST as a search method, using a new structural alphabet substitution matrix (SASM) to find the longest common substructures with high-scoring structured segment pairs from an SADB database. Using personal computers with Intel Pentium4 (2.8 GHz) processors, our method searched more than 10 000 protein structures in 1.3 s and achieved a good agreement with search results from detailed structure alignment methods. [3D-BLAST is available at http://3d-blast.life.nctu.edu.tw].
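
    The sketch below illustrates the structural-alphabet idea in miniature: backbone (phi, psi) pairs are mapped to the nearest prototype letter so that structures become 1D strings amenable to sequence tools. The five-letter alphabet and prototype angles are invented; 3D-BLAST's real alphabet has 23 states derived from backbone fragment profiles.

        # Invented 5-letter "structural alphabet": each letter is a prototype
        # (phi, psi) backbone-angle pair.  This is only a conceptual sketch.
        prototypes = {
            "A": (-60.0, -45.0),    # helix-like
            "B": (-120.0, 130.0),   # strand-like
            "C": (-75.0, 150.0),    # polyproline-like
            "D": (60.0, 45.0),      # left-handed
            "E": (-90.0, 0.0),      # turn-like
        }

        def encode(phi_psi):
            """Map each (phi, psi) pair to the nearest prototype letter."""
            letters = []
            for phi, psi in phi_psi:
                best = min(prototypes,
                           key=lambda k: (phi - prototypes[k][0]) ** 2
                                       + (psi - prototypes[k][1]) ** 2)
                letters.append(best)
            return "".join(letters)

        query  = encode([(-58, -47), (-61, -44), (-118, 128), (-122, 133), (-74, 148)])
        target = encode([(-119, 131), (-121, 129), (-76, 152), (-60, -46), (-59, -43)])
        print(query, target)
        # Once structures are strings, ordinary sequence tools (BLAST with a custom
        # substitution matrix, in 3D-BLAST's case) can be used to find similar segments.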

  3. An intuitive graphical webserver for multiple-choice protein sequence search.

    PubMed

    Banky, Daniel; Szalkai, Balazs; Grolmusz, Vince

    2014-04-10

    Every day tens of thousands of sequence searches and sequence alignment queries are submitted to webservers. The capitalized word "BLAST" has become a verb, describing the act of performing sequence search and alignment. However, if one needs to search for sequences that contain, for example, two hydrophobic and three polar residues at five given positions, the query formation on the most frequently used webservers will be difficult. Some servers support the formation of queries with regular expressions, but most of the users are unfamiliar with their syntax. Here we present an intuitive, easily applicable webserver, the Protein Sequence Analysis server, that allows the formation of multiple-choice queries by simply drawing the residues to their positions; if more than one residue is drawn to the same position, they will be stacked on the user interface, indicating the multiple choice at that position. This computer-game-like interface is natural and intuitive, and the coloring of the residues makes it possible to form queries requiring not just certain amino acids in the given positions, but also small nonpolar, negatively charged, hydrophobic, positively charged, or polar ones. The webserver is available at http://psa.pitgroup.org. Copyright © 2014 Elsevier B.V. All rights reserved.
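
    A multiple-choice positional query of the kind described here can be expressed as a regular expression, which is roughly what such a server must do behind its graphical interface. The sketch below builds and applies one such pattern; the query, sequence length and sequences are invented.

        import re

        # A multiple-choice query: position -> allowed residues (1-based positions).
        # Unconstrained positions match any residue.  Query and sequences are invented.
        query = {2: "ILVFM",      # hydrophobic at position 2
                 4: "DE",         # negatively charged at position 4
                 5: "STNQ"}       # polar at position 5
        length = 6

        pattern = "".join(
            f"[{query[i]}]" if i in query else "." for i in range(1, length + 1))
        regex = re.compile(pattern)
        print("pattern:", pattern)          # .[ILVFM].[DE][STNQ].

        for seq in ["MLKDSA", "GVREST", "AAAAAA"]:
            print(seq, "matches" if regex.search(seq) else "no match")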

  4. Computer vision applications for coronagraphic optical alignment and image processing.

    PubMed

    Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A

    2013-05-10

    Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.
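
    The sketch below shows the flavor of the feature-extraction step used in such alignment tasks: threshold an image, label connected components, and take their centroids. It uses scipy.ndimage on a synthetic image and is not the Gemini Planet Imager pipeline.

        import numpy as np
        from scipy import ndimage

        # Synthetic calibration-like image: a few Gaussian spots on a noisy background.
        rng = np.random.default_rng(1)
        img = rng.normal(0.0, 0.05, size=(128, 128))
        yy, xx = np.mgrid[0:128, 0:128]
        for cy, cx in [(30.0, 40.0), (64.0, 90.0), (100.0, 20.0)]:
            img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 2.0 ** 2))

        # Feature extraction: threshold, label connected components, take centroids.
        mask = img > 0.5
        labels, n = ndimage.label(mask)
        centroids = ndimage.center_of_mass(img, labels, index=range(1, n + 1))
        for i, (yc, xc) in enumerate(centroids, start=1):
            print(f"spot {i}: y={yc:.2f}, x={xc:.2f}")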

  5. Automated Neuropsychological Assessment Metrics, Version 4 (ANAM4): Examination of Select Psychometric Properties and Administration Procedures

    DTIC Science & Technology

    2016-12-01

    2017 was approved in August 2016. The supplemental project has two primary objectives: • recommend cognitive assessment tools/approaches (toolkit) from... strategies for use in future military-relevant environments. The supplemental project has two primary deliverables: • proposed Toolkit of cognitive... Task 6: vet final report and cognitive performance recommendations through Steering Committee; Task 7: provide Toolkit report; Months 8-12; Task 8

  6. Toolkit of Resources for Engaging Families and the Community as Partners in Education. Part 1: Building an Understanding of Family and Community Engagement. REL 2016-148

    ERIC Educational Resources Information Center

    Garcia, Maria Elena; Frunzi, Kay; Dean, Ceri B.; Flores, Nieves; Miller, Kirsten B.

    2016-01-01

    The Toolkit of Resources for Engaging Families and the Community as Partners in Education is a four-part resource that brings together research, promising practices, and useful tools and resources to guide educators in strengthening partnerships with families and community members to support student learning. The toolkit defines family and…

  7. Toolkit of Resources for Engaging Families and the Community as Partners in Education: Part 3: Building Trusting Relationships with Families and the Community through Effective Communication. REL 2016-152

    ERIC Educational Resources Information Center

    Garcia, Maria Elena; Frunzi, Kay; Dean, Ceri B.; Flores, Nieves; Miller, Kirsten B.

    2016-01-01

    The Toolkit of Resources for Engaging Families and the Community as Partners in Education is a four-part resource that brings together research, promising practices, and useful tools and resources to guide educators in strengthening partnerships with families and community members to support student learning. The toolkit defines family and…

  8. BRDF profile of Tyvek and its implementation in the Geant4 simulation toolkit.

    PubMed

    Nozka, Libor; Pech, Miroslav; Hiklova, Helena; Mandat, Dusan; Hrabovsky, Miroslav; Schovanek, Petr; Palatka, Miroslav

    2011-02-28

    Diffuse and specular characteristics of the Tyvek 1025-BL material are reported with respect to their implementation in the Geant4 Monte Carlo simulation toolkit. This toolkit incorporates the UNIFIED model. Coefficients defined by the UNIFIED model were calculated from the bidirectional reflectance distribution function (BRDF) profiles measured with a scatterometer for several angles of incidence. Results were amended with profile measurements made by a profilometer.
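
    In the UNIFIED model, reflection at a rough optical surface is split into specular-spike, specular-lobe, backscatter and diffuse (Lambertian) components whose weights sum to one. The sketch below only normalizes invented integrated-BRDF contributions into such weights; it does not reproduce the paper's fitting procedure or Geant4's C++ interface.

        # Invented integrated reflectance contributions for one angle of incidence
        # (arbitrary units); the real values come from fitting measured BRDF profiles.
        components = {
            "specular_spike": 0.08,
            "specular_lobe":  0.35,
            "backscatter":    0.02,
            "diffuse_lobe":   0.95,
        }

        total = sum(components.values())
        constants = {name: value / total for name, value in components.items()}

        # In the UNIFIED model these four weights sum to 1 and, together with the
        # surface-roughness parameter (sigma_alpha), parameterize the reflection.
        for name, c in constants.items():
            print(f"{name:15s} {c:.3f}")
        print("sum =", round(sum(constants.values()), 6))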

  9. SIGKit: Software for Introductory Geophysics Toolkit

    NASA Astrophysics Data System (ADS)

    Kruse, S.; Bank, C. G.; Esmaeili, S.; Jazayeri, S.; Liu, S.; Stoikopoulos, N.

    2017-12-01

    The Software for Introductory Geophysics Toolkit (SIGKit) affords students the opportunity to create model data and perform simple processing of field data for various geophysical methods. SIGkit provides a graphical user interface built with the MATLAB programming language, but can run even without a MATLAB installation. At this time SIGkit allows students to pick first arrivals and match a two-layer model to seismic refraction data; grid total-field magnetic data, extract a profile, and compare this to a synthetic profile; and perform simple processing steps (subtraction of a mean trace, hyperbola fit) to ground-penetrating radar data. We also have preliminary tools for gravity, resistivity, and EM data representation and analysis. SIGkit is being built by students for students, and the intent of the toolkit is to provide an intuitive interface for simple data analysis and understanding of the methods, and act as an entrance to more sophisticated software. The toolkit has been used in introductory courses as well as field courses. First reactions from students are positive. Think-aloud observations of students using the toolkit have helped identify problems and helped shape it. We are planning to compare the learning outcomes of students who have used the toolkit in a field course to students in a previous course to test its effectiveness.
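
    The two-layer refraction exercise mentioned above reduces to two travel-time branches: the direct wave t = x/v1 and the head wave t = x/v2 + 2h*sqrt(v2^2 - v1^2)/(v1*v2). The sketch below evaluates both for an invented model; it is not SIGKit code, which is written in MATLAB.

        import numpy as np

        def two_layer_traveltimes(x, v1, v2, h):
            """First arrivals for a two-layer refraction model.
            The head-wave branch is only physical beyond the critical distance;
            that detail is ignored here for brevity."""
            t_direct = x / v1
            t_refract = x / v2 + 2.0 * h * np.sqrt(v2 ** 2 - v1 ** 2) / (v1 * v2)
            return np.minimum(t_direct, t_refract)

        # invented model: 500 m/s over 1500 m/s, interface at 5 m depth
        v1, v2, h = 500.0, 1500.0, 5.0
        x = np.linspace(1.0, 60.0, 13)                 # geophone offsets (m)
        t = two_layer_traveltimes(x, v1, v2, h)
        crossover = 2.0 * h * np.sqrt((v2 + v1) / (v2 - v1))
        print(f"crossover distance ~ {crossover:.1f} m")
        for xi, ti in zip(x, t):
            print(f"x = {xi:5.1f} m   t = {ti * 1000:6.1f} ms")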

  10. The development of an artificial organic networks toolkit for LabVIEW.

    PubMed

    Ponce, Hiram; Ponce, Pedro; Molina, Arturo

    2015-03-15

    Two of the most challenging problems that scientists and researchers face when they want to experiment with new cutting-edge algorithms are the time it takes to encode them and the difficulty of linking them with other technologies and devices. In that sense, this article introduces the artificial organic networks toolkit for LabVIEW™ (AON-TL) from the implementation point of view. The toolkit is based on the framework provided by the artificial organic networks technique, giving it the potential to add new algorithms in the future based on this technique. Moreover, the toolkit inherits both the rapid prototyping and the easy-to-use characteristics of the LabVIEW™ software (e.g., graphical programming, transparent use of other software and devices, built-in event-driven programming for user interfaces), to make it simple for the end-user. In fact, the article describes the global architecture of the toolkit, with particular emphasis on the software implementation of the so-called artificial hydrocarbon networks algorithm. Lastly, the article includes two case studies for engineering purposes (i.e., sensor characterization) and chemistry applications (i.e., blood-brain barrier partitioning data model) to show the usage of the toolkit and the potential scalability of the artificial organic networks technique. © 2015 Wiley Periodicals, Inc.

  11. BEAUTY-X: enhanced BLAST searches for DNA queries.

    PubMed

    Worley, K C; Culpepper, P; Wiese, B A; Smith, R F

    1998-01-01

    BEAUTY (BLAST Enhanced Alignment Utility) is an enhanced version of the BLAST database search tool that facilitates identification of the functions of matched sequences. Three recent improvements to the BEAUTY program described here make the enhanced output (1) available for DNA queries, (2) available for searches of any protein database, and (3) more up-to-date, with periodic updates of the domain information. BEAUTY searches of the NCBI and EMBL non-redundant protein sequence databases are available from the BCM Search Launcher Web pages (http://gc.bcm.tmc.edu:8088/search-launcher/launcher.html). BEAUTY Post-Processing of submitted search results is available using the BCM Search Launcher Batch Client (version 2.6) (ftp://gc.bcm.tmc.edu/pub/software/search-launcher/). Example figures are available at http://dot.bcm.tmc.edu:9331/papers/beautypp.html. Contact: (kworley,culpep)@bcm.tmc.edu.

  12. The Visualization Toolkit (VTK): Rewriting the rendering code for modern graphics cards

    NASA Astrophysics Data System (ADS)

    Hanwell, Marcus D.; Martin, Kenneth M.; Chaudhary, Aashish; Avila, Lisa S.

    2015-09-01

    The Visualization Toolkit (VTK) is an open source, permissively licensed, cross-platform toolkit for scientific data processing, visualization, and data analysis. It is over two decades old, originally developed for a very different graphics card architecture. Modern graphics cards feature fully programmable, highly parallelized architectures with large core counts. VTK's rendering code was rewritten to take advantage of modern graphics cards, maintaining most of the toolkit's programming interfaces. This offers the opportunity to compare the performance of old and new rendering code on the same systems/cards. Significant improvements in rendering speeds and memory footprints mean that scientific data can be visualized in greater detail than ever before. The widespread use of VTK means that these improvements will reap significant benefits.
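
    The rewrite retained VTK's long-standing pipeline interfaces, of which the following is a minimal, generic example (source to mapper to actor to renderer) using the vtk Python package; it requires a display and is not taken from the paper.

        import vtk

        # Classic VTK pipeline: source -> mapper -> actor -> renderer -> render window.
        sphere = vtk.vtkSphereSource()
        sphere.SetThetaResolution(64)
        sphere.SetPhiResolution(64)

        mapper = vtk.vtkPolyDataMapper()
        mapper.SetInputConnection(sphere.GetOutputPort())

        actor = vtk.vtkActor()
        actor.SetMapper(mapper)

        renderer = vtk.vtkRenderer()
        renderer.AddActor(actor)
        renderer.SetBackground(0.1, 0.1, 0.2)

        window = vtk.vtkRenderWindow()
        window.AddRenderer(renderer)
        window.SetSize(400, 400)

        interactor = vtk.vtkRenderWindowInteractor()
        interactor.SetRenderWindow(window)

        window.Render()
        interactor.Start()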

  13. AutoMicromanager: A microscopy scripting toolkit for LABVIEW and other programming environments

    NASA Astrophysics Data System (ADS)

    Ashcroft, Brian Alan; Oosterkamp, Tjerk

    2010-11-01

    We present a scripting toolkit for the acquisition and analysis of a wide variety of imaging data by integrating the ease of use of various programming environments such as LABVIEW, IGOR PRO, MATLAB, SCILAB, and others. This toolkit is designed to allow the user to quickly program a variety of standard microscopy components for custom microscopy applications allowing much more flexibility than other packages. Included are both programming tools as well as graphical user interface classes allowing a standard, consistent, and easy to maintain scripting environment. This programming toolkit allows easy access to most commonly used cameras, stages, and shutters through the Micromanager project so the scripter can focus on their custom application instead of boilerplate code generation.

  14. AutoMicromanager: a microscopy scripting toolkit for LABVIEW and other programming environments.

    PubMed

    Ashcroft, Brian Alan; Oosterkamp, Tjerk

    2010-11-01

    We present a scripting toolkit for the acquisition and analysis of a wide variety of imaging data by integrating the ease of use of various programming environments such as LABVIEW, IGOR PRO, MATLAB, SCILAB, and others. This toolkit is designed to allow the user to quickly program a variety of standard microscopy components for custom microscopy applications allowing much more flexibility than other packages. Included are both programming tools as well as graphical user interface classes allowing a standard, consistent, and easy to maintain scripting environment. This programming toolkit allows easy access to most commonly used cameras, stages, and shutters through the Micromanager project so the scripter can focus on their custom application instead of boilerplate code generation.

  15. Desensitized Optimal Filtering and Sensor Fusion Toolkit

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.

    2015-01-01

    Analytical Mechanics Associates, Inc., has developed a software toolkit that filters and processes navigational data from multiple sensor sources. A key component of the toolkit is a trajectory optimization technique that reduces the sensitivity of Kalman filters with respect to model parameter uncertainties. The sensor fusion toolkit also integrates recent advances in adaptive Kalman and sigma-point filters for problems with non-Gaussian error statistics. This Phase II effort provides new filtering and sensor fusion techniques in a convenient package that can be used as a stand-alone application for ground support and/or onboard use. Its modular architecture enables ready integration with existing tools. A suite of sensor models and noise distributions, as well as a Monte Carlo analysis capability, is included to enable statistical performance evaluations.
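
    For orientation, the sketch below is a plain scalar Kalman filter, the textbook baseline on which desensitized and sigma-point variants build; the dynamics, noise levels and data are invented and unrelated to the toolkit.

        import numpy as np

        # Estimate a constant scalar x from noisy measurements z_k = x + v_k.
        # Model and noise values are invented.
        rng = np.random.default_rng(42)
        x_true = 3.0
        R = 0.5 ** 2                 # measurement noise variance
        Q = 1e-5                     # (small) process noise variance

        x_hat, P = 0.0, 1.0          # initial estimate and covariance
        for k in range(25):
            z = x_true + rng.normal(0.0, np.sqrt(R))
            # predict (identity dynamics)
            P = P + Q
            # update
            K = P / (P + R)          # Kalman gain
            x_hat = x_hat + K * (z - x_hat)
            P = (1.0 - K) * P

        print(f"estimate {x_hat:.3f} (true {x_true}), final variance {P:.4f}")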

  16. SA-Search: a web tool for protein structure mining based on a Structural Alphabet

    PubMed Central

    Guyon, Frédéric; Camproux, Anne-Claude; Hochez, Joëlle; Tufféry, Pierre

    2004-01-01

    SA-Search is a web tool that can be used to mine for protein structures and extract structural similarities. It is based on a hidden Markov model derived Structural Alphabet (SA) that allows the compression of three-dimensional (3D) protein conformations into a one-dimensional (1D) representation using a limited number of prototype conformations. Using such a representation, classical methods developed for amino acid sequences can be employed. Currently, SA-Search permits the performance of fast 3D similarity searches such as the extraction of exact words using a suffix tree approach, and the search for fuzzy words viewed as a simple 1D sequence alignment problem. SA-Search is available at http://bioserv.rpbs.jussieu.fr/cgi-bin/SA-Search. PMID:15215446
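
    The sketch below conveys the "exact word" idea on structural-alphabet strings using a simple longest-common-substring dynamic program; SA-Search itself uses a suffix tree, and the strings here are invented.

        def longest_common_word(a, b):
            """Longest exact substring shared by two structural-alphabet strings
            (simple dynamic programming; SA-Search uses a suffix tree instead)."""
            best, best_end = 0, 0
            prev = [0] * (len(b) + 1)
            for i in range(1, len(a) + 1):
                cur = [0] * (len(b) + 1)
                for j in range(1, len(b) + 1):
                    if a[i - 1] == b[j - 1]:
                        cur[j] = prev[j - 1] + 1
                        if cur[j] > best:
                            best, best_end = cur[j], i
                prev = cur
            return a[best_end - best:best_end]

        # invented 1D encodings of two protein structures
        s1 = "AAABBCCDDEEAABBCC"
        s2 = "XXBBCCDDEEYYBB"
        print("longest shared structural word:", longest_common_word(s1, s2))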

  17. SA-Search: a web tool for protein structure mining based on a Structural Alphabet.

    PubMed

    Guyon, Frédéric; Camproux, Anne-Claude; Hochez, Joëlle; Tufféry, Pierre

    2004-07-01

    SA-Search is a web tool that can be used to mine for protein structures and extract structural similarities. It is based on a hidden Markov model derived Structural Alphabet (SA) that allows the compression of three-dimensional (3D) protein conformations into a one-dimensional (1D) representation using a limited number of prototype conformations. Using such a representation, classical methods developed for amino acid sequences can be employed. Currently, SA-Search permits the performance of fast 3D similarity searches such as the extraction of exact words using a suffix tree approach, and the search for fuzzy words viewed as a simple 1D sequence alignment problem. SA-Search is available at http://bioserv.rpbs.jussieu.fr/cgi-bin/SA-Search.

  18. PIPI: PTM-Invariant Peptide Identification Using Coding Method.

    PubMed

    Yu, Fengchao; Li, Ning; Yu, Weichuan

    2016-12-02

    In computational proteomics, the identification of peptides with an unlimited number of post-translational modification (PTM) types is a challenging task. The computational cost associated with database search increases exponentially with respect to the number of modified amino acids and linearly with respect to the number of potential PTM types at each amino acid. The problem becomes intractable very quickly if we want to enumerate all possible PTM patterns. To address this issue, one group of methods named restricted tools (including Mascot, Comet, and MS-GF+) only allow a small number of PTM types in the database search process. Alternatively, the other group of methods named unrestricted tools (including MS-Alignment, ProteinProspector, and MODa) avoids enumerating PTM patterns with an alignment-based approach to localizing and characterizing modified amino acids. However, because of the large search space and PTM localization issue, the sensitivity of these unrestricted tools is low. This paper proposes a novel method named PIPI to achieve PTM-invariant peptide identification. PIPI belongs to the category of unrestricted tools. It first codes peptide sequences into Boolean vectors and codes experimental spectra into real-valued vectors. For each coded spectrum, it then searches the coded sequence database to find the top scored peptide sequences as candidates. After that, PIPI uses dynamic programming to localize and characterize modified amino acids in each candidate. We used simulation experiments and real data experiments to evaluate the performance in comparison with restricted tools (i.e., Mascot, Comet, and MS-GF+) and unrestricted tools (i.e., Mascot with error tolerant search, MS-Alignment, ProteinProspector, and MODa). Comparison with restricted tools shows that PIPI has comparable sensitivity and running speed. Comparison with unrestricted tools shows that PIPI has the highest sensitivity except for Mascot with error tolerant search and ProteinProspector. These two tools simplify the task by only considering up to one modified amino acid in each peptide, which results in a higher sensitivity but has difficulty in dealing with multiple modified amino acids. The simulation experiments also show that PIPI has the lowest false discovery proportion, the highest PTM characterization accuracy, and the shortest running time among the unrestricted tools.
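
    The coding idea can be sketched as follows: a peptide becomes a Boolean vector marking the mass bins of its theoretical fragments, a spectrum becomes a real-valued vector over the same bins, and candidates are ranked by a dot-product-like score. The bin width, ion offset and scoring below are simplified placeholders, not PIPI's actual scheme.

        import numpy as np

        # Monoisotopic residue masses (Da), a standard table (subset).
        MASS = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
                "V": 99.06841, "T": 101.04768, "L": 113.08406, "N": 114.04293,
                "D": 115.02694, "K": 128.09496, "E": 129.04259}
        BIN = 1.0005          # coarse m/z bin width (Da); simplified placeholder
        N_BINS = 2000

        def peptide_vector(seq):
            """Boolean vector marking bins hit by b-ion-like prefix masses (simplified)."""
            v = np.zeros(N_BINS, dtype=bool)
            prefix = 1.00794  # placeholder proton-like offset
            for aa in seq[:-1]:
                prefix += MASS[aa]
                v[int(prefix / BIN)] = True
            return v

        def spectrum_vector(peaks):
            """Real-valued vector of peak intensities over the same bins."""
            v = np.zeros(N_BINS)
            for mz, intensity in peaks:
                v[int(mz / BIN)] += intensity
            return v

        peptide = "PEPTLDE"
        spectrum = [(98.06, 30.0), (227.10, 55.0), (324.16, 40.0), (437.24, 80.0)]
        score = spectrum_vector(spectrum)[peptide_vector(peptide)].sum()
        print(f"dot-product-style score for {peptide}: {score:.1f}")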

  19. NeuroPigPen: A Scalable Toolkit for Processing Electrophysiological Signal Data in Neuroscience Applications Using Apache Pig

    PubMed Central

    Sahoo, Satya S.; Wei, Annan; Valdez, Joshua; Wang, Li; Zonjy, Bilal; Tatsuoka, Curtis; Loparo, Kenneth A.; Lhatoo, Samden D.

    2016-01-01

    The recent advances in neurological imaging and sensing technologies have led to rapid increase in the volume, rate of data generation, and variety of neuroscience data. This “neuroscience Big data” represents a significant opportunity for the biomedical research community to design experiments using data with greater timescale, large number of attributes, and statistically significant data size. The results from these new data-driven research techniques can advance our understanding of complex neurological disorders, help model long-term effects of brain injuries, and provide new insights into dynamics of brain networks. However, many existing neuroinformatics data processing and analysis tools were not built to manage large volume of data, which makes it difficult for researchers to effectively leverage this available data to advance their research. We introduce a new toolkit called NeuroPigPen that was developed using Apache Hadoop and Pig data flow language to address the challenges posed by large-scale electrophysiological signal data. NeuroPigPen is a modular toolkit that can process large volumes of electrophysiological signal data, such as Electroencephalogram (EEG), Electrocardiogram (ECG), and blood oxygen levels (SpO2), using a new distributed storage model called Cloudwave Signal Format (CSF) that supports easy partitioning and storage of signal data on commodity hardware. NeuroPigPen was developed with three design principles: (a) Scalability—the ability to efficiently process increasing volumes of data; (b) Adaptability—the toolkit can be deployed across different computing configurations; and (c) Ease of programming—the toolkit can be easily used to compose multi-step data processing pipelines using high-level programming constructs. The NeuroPigPen toolkit was evaluated using 750 GB of electrophysiological signal data over a variety of Hadoop cluster configurations ranging from 3 to 30 Data nodes. The evaluation results demonstrate that the toolkit is highly scalable and adaptable, which makes it suitable for use in neuroscience applications as a scalable data processing toolkit. As part of the ongoing extension of NeuroPigPen, we are developing new modules to support statistical functions to analyze signal data for brain connectivity research. In addition, the toolkit is being extended to allow integration with scientific workflow systems. NeuroPigPen is released under BSD license at: https://sites.google.com/a/case.edu/neuropigpen/. PMID:27375472

  20. NeuroPigPen: A Scalable Toolkit for Processing Electrophysiological Signal Data in Neuroscience Applications Using Apache Pig.

    PubMed

    Sahoo, Satya S; Wei, Annan; Valdez, Joshua; Wang, Li; Zonjy, Bilal; Tatsuoka, Curtis; Loparo, Kenneth A; Lhatoo, Samden D

    2016-01-01

    The recent advances in neurological imaging and sensing technologies have led to rapid increase in the volume, rate of data generation, and variety of neuroscience data. This "neuroscience Big data" represents a significant opportunity for the biomedical research community to design experiments using data with greater timescale, large number of attributes, and statistically significant data size. The results from these new data-driven research techniques can advance our understanding of complex neurological disorders, help model long-term effects of brain injuries, and provide new insights into dynamics of brain networks. However, many existing neuroinformatics data processing and analysis tools were not built to manage large volume of data, which makes it difficult for researchers to effectively leverage this available data to advance their research. We introduce a new toolkit called NeuroPigPen that was developed using Apache Hadoop and Pig data flow language to address the challenges posed by large-scale electrophysiological signal data. NeuroPigPen is a modular toolkit that can process large volumes of electrophysiological signal data, such as Electroencephalogram (EEG), Electrocardiogram (ECG), and blood oxygen levels (SpO2), using a new distributed storage model called Cloudwave Signal Format (CSF) that supports easy partitioning and storage of signal data on commodity hardware. NeuroPigPen was developed with three design principles: (a) Scalability-the ability to efficiently process increasing volumes of data; (b) Adaptability-the toolkit can be deployed across different computing configurations; and (c) Ease of programming-the toolkit can be easily used to compose multi-step data processing pipelines using high-level programming constructs. The NeuroPigPen toolkit was evaluated using 750 GB of electrophysiological signal data over a variety of Hadoop cluster configurations ranging from 3 to 30 Data nodes. The evaluation results demonstrate that the toolkit is highly scalable and adaptable, which makes it suitable for use in neuroscience applications as a scalable data processing toolkit. As part of the ongoing extension of NeuroPigPen, we are developing new modules to support statistical functions to analyze signal data for brain connectivity research. In addition, the toolkit is being extended to allow integration with scientific workflow systems. NeuroPigPen is released under BSD license at: https://sites.google.com/a/case.edu/neuropigpen/.

  1. Next Generation Search Interfaces

    NASA Astrophysics Data System (ADS)

    Roby, W.; Wu, X.; Ly, L.; Goldina, T.

    2015-09-01

    Astronomers are constantly looking for easier ways to access multiple data sets. While much effort is spent on VO, little thought is given to the types of User Interfaces we need to effectively search this sort of data. For instance, an astronomer might need to search Spitzer, WISE, and 2MASS catalogs and images and then see the results presented together in one UI. Moving seamlessly between data sets is key to presenting integrated results. Results need to be viewed using first-class, web-based, integrated FITS viewers, XY Plots, and advanced table display tools. These components should be able to handle very large datasets. Making a powerful Web-based UI that can manage and present multiple searches to the user requires taking advantage of many HTML5 features. AJAX is used to start searches and present results. Push notifications (Server Sent Events) monitor background jobs. Canvas is required for advanced result displays. Lesser-known CSS3 technologies make it all flow seamlessly together. At IPAC, we have been developing our Firefly toolkit for several years. We are now using it to solve this multiple data set, multiple queries, and integrated presentation problem to create a powerful research experience. Firefly was created in IRSA, the NASA/IPAC Infrared Science Archive (http://irsa.ipac.caltech.edu). Firefly is the core for applications serving many project archives, including Spitzer, Planck, WISE, PTF, LSST and others. It is also used in IRSA's new Finder Chart and catalog and image displays.

  2. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines

    PubMed Central

    2011-01-01

    Background Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. Results To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy-evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). Conclusions PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples. PMID:21352538
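
    The sketch below conveys the dataflow idea in plain standard-library Python: reusable, data-coupled components applied stage by stage over a worker pool in adjustable batches. It deliberately does not reproduce PaPy's actual API; the components and records are invented.

        from multiprocessing import Pool

        # Re-usable, data-coupled components (plain functions).
        def parse(record):
            name, seq = record.split(":")
            return name, seq.upper()

        def gc_content(item):
            name, seq = item
            gc = sum(seq.count(base) for base in "GC") / len(seq)
            return name, round(gc, 3)

        def run_pipeline(records, stages, processes=2, batch_size=2):
            """Apply each stage to all items, batch by batch, on a worker pool."""
            items = list(records)
            with Pool(processes=processes) as pool:
                for stage in stages:
                    out = []
                    for start in range(0, len(items), batch_size):
                        out.extend(pool.map(stage, items[start:start + batch_size]))
                    items = out
            return items

        if __name__ == "__main__":
            records = ["seq1:acgtgc", "seq2:ttatgc", "seq3:ggccgg"]
            print(run_pipeline(records, stages=[parse, gc_content]))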

  3. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines.

    PubMed

    Cieślik, Marcin; Mura, Cameron

    2011-02-25

    Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy-evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples.

  4. Facilitating interprofessional evidence-based practice in paediatric rehabilitation: development, implementation and evaluation of an online toolkit for health professionals.

    PubMed

    Glegg, Stephanie M N; Livingstone, Roslyn; Montgomery, Ivonne

    2016-01-01

    Lack of time, competencies, resources and supports is documented as a barrier to evidence-based practice (EBP). This paper introduces a recently developed web-based toolkit designed to assist interprofessional clinicians in implementing EBP within a paediatric rehabilitation setting. EBP theory, models, frameworks and tools were applied or adapted in the development of the online resources, which formed the basis of a larger support strategy incorporating interactive workshops, knowledge broker facilitation and mentoring. The highly accessed toolkit contains flowcharts with embedded information sheets, resources and templates to streamline, quantify and document outcomes throughout the EBP process. Case examples relevant to occupational therapy and physical therapy highlight the utility and application of the toolkit in a clinical paediatric setting. Workshops were highly rated by learners for clinical relevance, presentation level and effectiveness. Eight evidence syntheses have been created and 79 interventions have been evaluated since the strategy's inception in January 2011. The toolkit resources streamlined and supported EBP processes, promoting consistency in quality and presentation of outputs. The online toolkit can be a useful tool to facilitate clinicians' use of EBP in order to meet the needs of the clients and families whom they support. Implications for Rehabilitation A comprehensive online EBP toolkit for interprofessional clinicians is available to streamline the EBP process and to support learning needs regardless of competency level. Multi-method facilitation support, including interactive education, e-learning, clinical librarian services and knowledge brokering, is a valued but cost-restrictive supplement to the implementation of online EBP resources. EBP resources are not one-size-fits-all; targeted appraisal tools, models and frameworks may be integrated to improve their utility for specific sectors, which may limit them for others.

  5. Application development environment for advanced digital workstations

    NASA Astrophysics Data System (ADS)

    Valentino, Daniel J.; Harreld, Michael R.; Liu, Brent J.; Brown, Matthew S.; Huang, Lu J.

    1998-06-01

    One remaining barrier to the clinical acceptance of electronic imaging and information systems is the difficulty in providing intuitive access to the information needed for a specific clinical task (such as reaching a diagnosis or tracking clinical progress). The purpose of this research was to create a development environment that enables the design and implementation of advanced digital imaging workstations. We used formal data and process modeling to identify the diagnostic and quantitative data that radiologists use and the tasks that they typically perform to make clinical decisions. We studied a diverse range of radiology applications, including diagnostic neuroradiology in an academic medical center, pediatric radiology in a children's hospital, screening mammography in a breast cancer center, and thoracic radiology consultation for an oncology clinic. We used object-oriented analysis to develop software toolkits that enable a programmer to rapidly implement applications that closely match clinical tasks. The toolkits support browsing patient information, integrating patient images and reports, manipulating images, and making quantitative measurements on images. Collectively, we refer to these toolkits as the UCLA Digital ViewBox toolkit (ViewBox/Tk). We used the ViewBox/Tk to rapidly prototype and develop a number of diverse medical imaging applications. Our task-based toolkit approach enabled rapid and iterative prototyping of workstations that matched clinical tasks. The toolkit functionality and performance provided a 'hands-on' feeling for manipulating images, and for accessing textual information and reports. The toolkits directly support a new concept for protocol-based reading of diagnostic studies. The design supports the implementation of network-based application services (e.g., prefetching, workflow management, and post-processing) that will facilitate the development of future clinical applications.

  6. Using stakeholder perspectives to develop an ePrescribing toolkit for NHS Hospitals: a questionnaire study.

    PubMed

    Lee, Lisa; Cresswell, Kathrin; Slee, Ann; Slight, Sarah P; Coleman, Jamie; Sheikh, Aziz

    2014-10-01

    To evaluate how an online toolkit may support ePrescribing deployments in National Health Service hospitals, by assessing the type of knowledge-based resources currently sought by key stakeholders. Questionnaire-based survey of attendees at a national ePrescribing symposium. 2013 National ePrescribing Symposium in London, UK. Eighty-four delegates were eligible for inclusion in the survey, of whom 70 completed and returned the questionnaire. Estimate of the usefulness and type of content to be included in an ePrescribing toolkit. Interest in a toolkit designed to support the implementation and use of ePrescribing systems was high (n = 64; 91.4%). As could be expected given the current dearth of such a resource, few respondents (n = 2; 2.9%) had access to or used an ePrescribing toolkit at the time of the survey. Anticipated users for the toolkit included implementation (n = 62; 88.6%) and information technology (n = 61; 87.1%) teams, pharmacists (n = 61; 87.1%), doctors (n = 58; 82.9%) and nurses (n = 56; 80.0%). Summary guidance for every stage of the implementation (n = 48; 68.6%), planning and monitoring tools (n = 47; 67.1%) and case studies of hospitals' experiences (n = 45; 64.3%) were considered the most useful types of content. There is a clear need for reliable and up-to-date knowledge to support ePrescribing system deployments and longer term use. The findings highlight how a toolkit may become a useful instrument for the management of knowledge in the field, not least by allowing the exchange of ideas and shared learning.

  7. The PRIDE (Partnership to Improve Diabetes Education) Toolkit: Development and Evaluation of Novel Literacy and Culturally Sensitive Diabetes Education Materials.

    PubMed

    Wolff, Kathleen; Chambers, Laura; Bumol, Stefan; White, Richard O; Gregory, Becky Pratt; Davis, Dianne; Rothman, Russell L

    2016-02-01

    Patients with low literacy, low numeracy, and/or linguistic needs can experience challenges understanding diabetes information and applying concepts to their self-management. The authors designed a toolkit of education materials that are sensitive to patients' literacy and numeracy levels, language preferences, and cultural norms and that encourage shared goal setting to improve diabetes self-management and health outcomes. The Partnership to Improve Diabetes Education (PRIDE) toolkit was developed to facilitate diabetes self-management education and support. The PRIDE toolkit includes a comprehensive set of 30 interactive education modules in English and Spanish to support diabetes self-management activities. The toolkit builds upon the authors' previously validated Diabetes Literacy and Numeracy Education Toolkit (DLNET) by adding a focus on shared goal setting, addressing the needs of Spanish-speaking patients, and including a broader range of diabetes management topics. Each PRIDE module was evaluated using the Suitability Assessment of Materials (SAM) instrument to determine the material's cultural appropriateness and its sensitivity to the needs of patients with low literacy and low numeracy. Reading grade level was also assessed using the Automated Readability Index (ARI), Coleman-Liau, Flesch-Kincaid, Fry, and SMOG formulas. The average reading grade level of the materials was 5.3 (SD 1.0), with a mean SAM of 91.2 (SD 5.4). All of the 30 modules received a "superior" score (SAM >70%) when evaluated by 2 independent raters. The PRIDE toolkit modules can be used by all members of a multidisciplinary team to assist patients with low literacy and low numeracy in managing their diabetes. © 2015 The Author(s).
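
    One of the readability measures named above, the Automated Readability Index, is ARI = 4.71*(characters/words) + 0.5*(words/sentences) - 21.43. The sketch below applies it with a deliberately crude tokenizer to an invented sample; it is not the toolkit's assessment pipeline.

        import re

        def automated_readability_index(text):
            """ARI = 4.71*(characters/words) + 0.5*(words/sentences) - 21.43,
            using a deliberately crude word/sentence tokenizer."""
            sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
            words = re.findall(r"[A-Za-z0-9']+", text)
            characters = sum(len(w) for w in words)
            return (4.71 * characters / len(words)
                    + 0.5 * len(words) / len(sentences) - 21.43)

        sample = ("Check your blood sugar before meals. "
                  "Write the number in your log book. "
                  "Bring the log book to your next visit.")
        print(f"ARI grade level: {automated_readability_index(sample):.1f}")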

  8. Does the Good Schools Toolkit Reduce Physical, Sexual and Emotional Violence, and Injuries, in Girls and Boys equally? A Cluster-Randomised Controlled Trial.

    PubMed

    Devries, Karen M; Knight, Louise; Allen, Elizabeth; Parkes, Jenny; Kyegombe, Nambusi; Naker, Dipak

    2017-10-01

    We aimed to investigate whether the Good School Toolkit reduced emotional violence, severe physical violence, sexual violence and injuries from school staff to students, as well as emotional, physical and sexual violence between peers, in Ugandan primary schools. We performed a two-arm cluster randomised controlled trial with parallel assignment. Forty-two schools in one district were allocated to intervention (n = 21) or wait-list control (n = 21) arms in 2012. We did cross-sectional baseline and endline surveys in 2012 and 2014, and the Good School Toolkit intervention was implemented for 18 months between surveys. Analyses were by intention to treat and are adjusted for clustering within schools and for baseline school-level proportions of outcomes. The Toolkit was associated with an overall reduction in any form of violence from staff and/or peers in the past week towards both male (aOR = 0.34, 95%CI 0.22-0.53) and female students (aOR = 0.55, 95%CI 0.36-0.84). Injuries as a result of violence from school staff were also lower in male (aOR = 0.36, 95%CI 0.20-0.65) and female students (aOR = 0.51, 95%CI 0.29-0.90). Although the Toolkit seems to be effective at reducing violence in both sexes, there is some suggestion that the Toolkit may have stronger effects in boys than girls. The Toolkit is a promising intervention to reduce a wide range of different forms of violence from school staff and between peers in schools, and should be urgently considered for scale-up. Further research is needed to investigate how the intervention could engage more successfully with girls.

  9. Improving the Effectiveness of Medication Review: Guidance from the Health Literacy Universal Precautions Toolkit.

    PubMed

    Weiss, Barry D; Brega, Angela G; LeBlanc, William G; Mabachi, Natabhona M; Barnard, Juliana; Albright, Karen; Cifuentes, Maribel; Brach, Cindy; West, David R

    2016-01-01

    Although routine medication reviews in primary care practice are recommended to identify drug therapy problems, it is often difficult to get patients to bring all their medications to office visits. The objective of this study was to determine whether the medication review tool in the Agency for Healthcare Research and Quality Health Literacy Universal Precautions Toolkit can help to improve medication reviews in primary care practices. The toolkit's "Brown Bag Medication Review" was implemented in a rural private practice in Missouri and an urban teaching practice in California. Practices recorded outcomes of medication reviews with 45 patients before toolkit implementation and then changed their medication review processes based on guidance in the toolkit. Six months later we conducted interviews with practice staff to identify changes made as a result of implementing the tool, and practices recorded outcomes of medication reviews with 41 additional patients. Data analyses compared differences in whether all medications were brought to visits, the number of medications reviewed, drug therapy problems identified, and changes in medication regimens before and after implementation. Interviews revealed that practices made the changes recommended in the toolkit to encourage patients to bring medications to office visits. Evaluation before and after implementation revealed a 3-fold increase in the percentage of patients who brought all their prescription medications and a 6-fold increase in the number of prescription medications brought to office visits. The percentage of reviews in which drug therapy problems were identified doubled, as did the percentage of medication regimens revised. Use of the Health Literacy Universal Precautions Toolkit can help to identify drug therapy problems. © Copyright 2016 by the American Board of Family Medicine.

  10. Using stakeholder perspectives to develop an ePrescribing toolkit for NHS Hospitals: a questionnaire study

    PubMed Central

    Cresswell, Kathrin; Slee, Ann; Slight, Sarah P; Coleman, Jamie; Sheikh, Aziz

    2014-01-01

    Summary Objective To evaluate how an online toolkit may support ePrescribing deployments in National Health Service hospitals, by assessing the type of knowledge-based resources currently sought by key stakeholders. Design Questionnaire-based survey of attendees at a national ePrescribing symposium. Setting 2013 National ePrescribing Symposium in London, UK. Participants Eighty-four delegates were eligible for inclusion in the survey, of whom 70 completed and returned the questionnaire. Main outcome measures Estimate of the usefulness and type of content to be included in an ePrescribing toolkit. Results Interest in a toolkit designed to support the implementation and use of ePrescribing systems was high (n = 64; 91.4%). As could be expected given the current dearth of such a resource, few respondents (n = 2; 2.9%) had access to or used an ePrescribing toolkit at the time of the survey. Anticipated users for the toolkit included implementation (n = 62; 88.6%) and information technology (n = 61; 87.1%) teams, pharmacists (n = 61; 87.1%), doctors (n = 58; 82.9%) and nurses (n = 56; 80.0%). Summary guidance for every stage of the implementation (n = 48; 68.6%), planning and monitoring tools (n = 47; 67.1%) and case studies of hospitals’ experiences (n = 45; 64.3%) were considered the most useful types of content. Conclusions There is a clear need for reliable and up-to-date knowledge to support ePrescribing system deployments and longer term use. The findings highlight how a toolkit may become a useful instrument for the management of knowledge in the field, not least by allowing the exchange of ideas and shared learning. PMID:25383199

  11. Pcetk: A pDynamo-based Toolkit for Protonation State Calculations in Proteins.

    PubMed

    Feliks, Mikolaj; Field, Martin J

    2015-10-26

    Pcetk (a pDynamo-based continuum electrostatic toolkit) is an open-source, object-oriented toolkit for the calculation of proton binding energetics in proteins. The toolkit is a module of the pDynamo software library, combining the versatility of the Python scripting language and the efficiency of the compiled languages, C and Cython. In the toolkit, we have connected pDynamo to the external Poisson-Boltzmann solver, extended-MEAD. Our goal was to provide a modern and extensible environment for the calculation of protonation states, electrostatic energies, titration curves, and other electrostatic-dependent properties of proteins. Pcetk is freely available under the CeCILL license, which is compatible with the GNU General Public License. The toolkit can be found on the Web at the address http://github.com/mfx9/pcetk. The calculation of protonation states in proteins requires knowledge of pKa values of protonatable groups in aqueous solution. However, for some groups, such as protonatable ligands bound to a protein, the pKa(aq) values are often difficult to obtain from experiment. As a complement to Pcetk, we revisit an earlier computational method for the estimation of pKa(aq) values that has an accuracy of ±0.5 pKa units or better. Finally, we verify the Pcetk module and the method for estimating pKa(aq) values with different model cases.
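
    For a single, independent titratable site the protonation fraction follows the Henderson-Hasselbalch relation 1/(1 + 10^(pH - pKa)); interacting sites in a protein, which Pcetk treats with Poisson-Boltzmann calculations, deviate from this simple curve. The sketch below just evaluates the single-site case for an assumed pKa.

        import numpy as np

        def fraction_protonated(ph, pka):
            """Single-site Henderson-Hasselbalch titration curve."""
            return 1.0 / (1.0 + 10.0 ** (ph - pka))

        pka = 4.4   # e.g. a solvent-exposed glutamate (assumed value for illustration)
        for ph in np.arange(2.0, 8.1, 1.0):
            print(f"pH {ph:4.1f}  protonated fraction {fraction_protonated(ph, pka):.3f}")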

  12. A Toolkit to assess health needs for congenital disorders in low- and middle-income countries: an instrument for public health action.

    PubMed

    Nacul, L C; Stewart, A; Alberg, C; Chowdhury, S; Darlison, M W; Grollman, C; Hall, A; Modell, B; Moorthie, S; Sagoo, G S; Burton, H

    2014-06-01

    In 2010 the World Health Assembly called for action to improve the care and prevention of congenital disorders, noting that technical guidance would be required for this task, especially in low- and middle-income countries. Responding to this call, we have developed a freely available web-accessible Toolkit for assessing health needs for congenital disorders. Materials for the Toolkit website (http://toolkit.phgfoundation.org) were prepared by an iterative process of writing, discussion and modification by the project team, with advice from external experts. A customized database was developed using epidemiological, demographic, socio-economic and health-services data from a range of validated sources. Document-processing and data integration software combines data from the database with a template to generate topic- and country-specific Calculator documents for quantitative analysis. The Toolkit guides users through selection of topics (including both clinical conditions and relevant health services), assembly and evaluation of qualitative and quantitative information, assessment of the potential effects of selected interventions, and planning and prioritization of actions to reduce the risk or prevalence of congenital disorders. The Toolkit enables users without epidemiological or public health expertise to undertake health needs assessment as a prerequisite for strategic planning in relation to congenital disorders in their country or region. © The Author 2013. Published by Oxford University Press on behalf of Faculty of Public Health.

  13. New Careers in Nursing Scholar Alumni Toolkit: Development of an Innovative Resource for Transition to Practice.

    PubMed

    Mauro, Ann Marie P; Escallier, Lori A; Rosario-Sim, Maria G

    2016-01-01

    The transition from student to professional nurse is challenging and may be more difficult for underrepresented minority nurses. The Robert Wood Johnson Foundation New Careers in Nursing (NCIN) program supported development of a toolkit that would serve as a transition-to-practice resource to promote retention of NCIN alumni and other new nurses. Thirteen recent NCIN alumni (54% male, 23% Hispanic/Latino, 23% African Americans) from 3 schools gave preliminary content feedback. An e-mail survey was sent to a convenience sample of 29 recent NCIN alumni who evaluated the draft toolkit using a Likert scale (poor = 1; excellent = 5). Twenty NCIN alumni draft toolkit reviewers (response rate 69%) were primarily female (80%) and Hispanic/Latino (40%). Individual chapters' mean overall rating of 4.67 demonstrated strong validation. Mean scores for overall toolkit content (4.57), usability (4.5), relevance (4.79), and quality (4.71) were also excellent. Qualitative comments were analyzed using thematic content analysis and supported the toolkit's relevance and utility. A multilevel peer review process was also conducted. Peer reviewer feedback resulted in a 6-chapter document that offers resources for successful transition to practice and lays the groundwork for continued professional growth. Future research is needed to determine the ideal time to introduce this resource. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Third Party TMDL Development Toolkit

    EPA Pesticide Factsheets

    Water Environment Federation's toolkit provides basic steps in which an organization or group other than the lead water quality agency takes responsibility for developing the TMDL document and supporting analysis.

  15. Parallel seed-based approach to multiple protein structure similarities detection

    DOE PAGES

    Chapuis, Guillaume; Le Boudic-Jamin, Mathilde; Andonov, Rumen; ...

    2015-01-01

    Finding similarities between protein structures is a crucial task in molecular biology. Most of the existing tools require proteins to be aligned in an order-preserving way and only find single alignments even when multiple similar regions exist. We propose a new seed-based approach that discovers multiple pairs of similar regions. Its computational complexity is polynomial and it comes with a quality guarantee—the returned alignments have both root mean squared deviations (coordinate-based as well as internal-distances based) lower than a given threshold, if such exist. We do not require the alignments to be order preserving (i.e., we consider nonsequential alignments), which makes our algorithm suitable for detecting similar domains when comparing multidomain proteins as well as to detect structural repetitions within a single protein. Because the search space for nonsequential alignments is much larger than for sequential ones, the computational burden is addressed by extensive use of parallel computing techniques: a coarse-grain level parallelism making use of available CPU cores for computation and a fine-grain level parallelism exploiting bit-level concurrency as well as vector instructions.
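
    The quality guarantee above is stated in terms of root mean squared deviation between matched regions. As a minimal, generic sketch of that criterion (not the authors' seed-based parallel algorithm), the following NumPy code computes the coordinate RMSD of two equally sized fragments after optimal rigid superposition with the Kabsch algorithm; the fragment data are made up.

    ```python
    import numpy as np

    def kabsch_rmsd(p, q):
        """Coordinate RMSD of two (N, 3) point sets after optimal rigid superposition."""
        p = p - p.mean(axis=0)                    # centre both point sets
        q = q - q.mean(axis=0)
        u, s, vt = np.linalg.svd(p.T @ q)         # SVD of the covariance matrix
        d = np.sign(np.linalg.det(vt.T @ u.T))    # guard against an improper rotation
        r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T   # optimal rotation (Kabsch)
        return float(np.sqrt(np.mean(np.sum((p @ r.T - q) ** 2, axis=1))))

    # Toy data: a fragment and a rotated, slightly perturbed copy of it.
    rng = np.random.default_rng(0)
    frag = rng.normal(size=(20, 3))
    theta = 0.3
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0, 0.0, 1.0]])
    moved = frag @ rot.T + rng.normal(scale=0.05, size=(20, 3))
    print(f"RMSD after superposition: {kabsch_rmsd(frag, moved):.3f} (arbitrary units)")
    ```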

  16. Probabilistic biological network alignment.

    PubMed

    Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2013-01-01

    Interactions between molecules are probabilistic events. An interaction may or may not happen with some probability, depending on a variety of factors such as the size, abundance, or proximity of the interacting molecules. In this paper, we consider the problem of aligning two biological networks. Unlike existing methods, we allow one of the two networks to contain probabilistic interactions. Allowing interaction probabilities makes the alignment more biologically relevant at the expense of explosive growth in the number of alternative topologies that may arise from different subsets of interactions that take place. We develop a novel method that efficiently and precisely characterizes this massive search space. We represent the topological similarity between pairs of aligned molecules (i.e., proteins) with the help of random variables and compute their expected values. We validate our method showing that, without sacrificing the running time performance, it can produce novel alignments. Our results also demonstrate that our method identifies biologically meaningful mappings under a comprehensive set of criteria used in the literature as well as the statistical coherence measure that we developed to analyze the statistical significance of the similarity of the functions of the aligned protein pairs.
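
    A hedged sketch of the central idea of treating each interaction as an independent Bernoulli random variable: under a fixed mapping between the two networks, the indicator that an aligned edge is conserved succeeds with probability equal to the interaction probability, so the expected number (and variance) of conserved edges follows directly. The edges and probabilities below are invented for illustration; this is not the authors' algorithm.

    ```python
    # Illustration only: expected number of conserved interactions under a fixed
    # 1-to-1 protein mapping, when one network carries interaction probabilities.
    aligned_edges = {              # deterministic edge -> probability of the matching
        ("a1", "a2"): 0.9,         # probabilistic edge (assumed independent Bernoulli)
        ("a1", "a3"): 0.4,
        ("a2", "a4"): 0.7,
    }

    expected = sum(aligned_edges.values())                        # E[conserved count]
    variance = sum(p * (1.0 - p) for p in aligned_edges.values())
    print(f"expected conserved edges = {expected:.2f}, variance = {variance:.2f}")
    ```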

  17. A low-complexity add-on score for protein remote homology search with COMER.

    PubMed

    Margelevicius, Mindaugas

    2018-06-15

    Protein sequence alignment forms the basis for comparative modeling, the most reliable approach to protein structure prediction, among many other applications. Alignment between sequence families, or profile-profile alignment, represents one of the most, if not the most, sensitive means for homology detection but still necessitates improvement. We aim at improving the quality of profile-profile alignments and the sensitivity induced by them by refining profile-profile substitution scores. We have developed a new score that represents an additional component of profile-profile substitution scores. A comprehensive evaluation shows that the new add-on score statistically significantly improves both the sensitivity and the alignment quality of the COMER method. We discuss why the score leads to the improvement and its almost optimal computational complexity that makes it easily implementable in any profile-profile alignment method. An implementation of the add-on score in the open-source COMER software and data are available at https://sourceforge.net/projects/comer. The COMER software is also available on Github at https://github.com/minmarg/comer and as a Docker image (minmar/comer). Supplementary data are available at Bioinformatics online.
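
    The add-on score itself is not reproduced here. As a generic baseline for what a profile-profile substitution score looks like (each profile column being a probability distribution over the 20 amino acids), the sketch below computes a simple log-odds co-emission score against a background distribution; the uniform background and random columns are assumptions for illustration and do not represent COMER's scoring.

    ```python
    import numpy as np

    def column_score(p, q, bg):
        """Log-odds score for aligning two profile columns p and q against background bg."""
        # sum_a p[a] * q[a] / bg[a] compares joint emission of the same residue to chance.
        return float(np.log2(np.sum(p * q / bg)))

    rng = np.random.default_rng(1)
    background = np.full(20, 1.0 / 20.0)   # uniform background over 20 amino acids (assumed)
    p = rng.dirichlet(np.ones(20))         # two made-up profile columns
    q = rng.dirichlet(np.ones(20))
    print(f"column substitution score: {column_score(p, q, background):+.3f} bits")
    ```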

  18. An approach to improve the care of mid-life women through the implementation of a Women’s Health Assessment Tool/Clinical Decision Support toolkit

    PubMed Central

    Silvestrin, Terry M; Steenrod, Anna W; Coyne, Karin S; Gross, David E; Esinduy, Canan B; Kodsi, Angela B; Slifka, Gayle J; Abraham, Lucy; Araiza, Anna L; Bushmakin, Andrew G; Luo, Xuemei

    2016-01-01

    The objectives of this study are to describe the implementation process of the Women’s Health Assessment Tool/Clinical Decision Support toolkit and summarize patients’ and clinicians’ perceptions of the toolkit. The Women’s Health Assessment Tool/Clinical Decision Support toolkit was piloted at three clinical sites over a 4-month period in Washington State to evaluate health outcomes among mid-life women. The implementation involved a multistep process and engagement of multiple stakeholders over 18 months. Two-thirds of patients (n = 76/110) and clinicians (n = 8/12) participating in the pilot completed feedback surveys; five clinicians participated in qualitative interviews. Most patients felt more prepared for their annual visit (69.7%) and that quality of care improved (68.4%), while clinicians reported streamlined patient visits and improved communication with patients. The Women’s Health Assessment Tool/Clinical Decision Support toolkit offers a unique approach to introduce and address some of the key health issues that affect mid-life women. PMID:27558508

  19. Enhancing the sustainability and climate resiliency of health care facilities: a comparison of initiatives and toolkits.

    PubMed

    Balbus, John; Berry, Peter; Brettle, Meagan; Jagnarine-Azan, Shalini; Soares, Agnes; Ugarte, Ciro; Varangu, Linda; Prats, Elena Villalobos

    2016-09-01

    Extreme weather events have revealed the vulnerability of health care facilities and the extent of devastation to the community when they fail. With climate change anticipated to increase extreme weather and its impacts worldwide (severe droughts, floods, heat waves, and related vector-borne diseases), health care officials need to understand and address the vulnerabilities of their health care systems and take action to improve resiliency in ways that also meet sustainability goals. Generally, the health sector is among a country's largest consumers of energy and a significant source of greenhouse gas emissions. Now it has the opportunity to lead climate mitigation while reducing energy, water, and other costs. This Special Report summarizes several initiatives and compares three toolkits for implementing sustainability and resiliency measures for health care facilities: the Canadian Health Care Facility Climate Change Resiliency Toolkit, the U.S. Sustainable and Climate Resilient Health Care Facilities Toolkit, and the PAHO SMART Hospitals Toolkit of the World Health Organization/Pan American Health Organization. These tools and the lessons learned can provide a critical starting point for any health system in the Americas.

  20. A Software Toolkit to Study Systematic Uncertainties of the Physics Models of the Geant4 Simulation Package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genser, Krzysztof; Hatcher, Robert; Kelsey, Michael

    The Geant4 simulation toolkit is used to model interactions between particles and matter. Geant4 employs a set of validated physics models that span a wide range of interaction energies. These models rely on measured cross-sections and phenomenological models with the physically motivated parameters that are tuned to cover many application domains. To study what uncertainties are associated with the Geant4 physics models, we have designed and implemented a comprehensive, modular, user-friendly software toolkit that allows the variation of one or more parameters of one or more Geant4 physics models involved in simulation studies. It also enables analysis of multiple variants of the resulting physics observables of interest in order to estimate the uncertainties associated with the simulation model choices. Based on modern event-processing infrastructure software, the toolkit offers a variety of attractive features, e.g., flexible run-time configurable workflow, comprehensive bookkeeping, and an easy-to-expand collection of analytical components. Design, implementation technology, and key functionalities of the toolkit are presented in this paper and illustrated with selected results.
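
    As a schematic of the parameter-variation workflow described above (not the toolkit's actual interface), the sketch below varies a single model parameter over a grid, runs a placeholder simulation for each variant, and bookkeeps the shift of one observable against the nominal setting. The simulate function and the parameter values are invented stand-ins.

    ```python
    import numpy as np

    def simulate(scale):
        """Placeholder 'physics model': returns one observable for a given parameter value."""
        rng = np.random.default_rng(0)          # common random numbers across variants
        return float(np.mean(rng.exponential(scale=scale, size=10_000)))

    # Vary one model parameter over a grid and bookkeep the observable per variant.
    variants = {f"scale={s:.2f}": simulate(s) for s in (0.90, 1.00, 1.10)}
    nominal = variants["scale=1.00"]
    for name, value in variants.items():
        print(f"{name}: observable = {value:.4f}  shift vs nominal = {value - nominal:+.4f}")
    ```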

  1. CRISPR-Cas9 Toolkit for Actinomycete Genome Editing.

    PubMed

    Tong, Yaojun; Robertsen, Helene Lunde; Blin, Kai; Weber, Tilmann; Lee, Sang Yup

    2018-01-01

    Bacteria of the order Actinomycetales are one of the most important sources of bioactive natural products, which are the source of many drugs. However, many of them still lack efficient genome editing methods, and some strains cannot be manipulated at all. This restricts systematic metabolic engineering approaches for boosting known and discovering novel natural products. To facilitate genome editing in actinomycetes, we developed a highly efficient CRISPR-Cas9 toolkit for actinomycete genome editing. This basic toolkit includes software for spacer (sgRNA) identification, a system for in-frame gene/gene cluster knockout, a system for gene loss-of-function study, a system for generating a random-size deletion library, and a system for gene knockdown. For the latter, a uracil-specific excision reagent (USER) cloning technology was adapted to simplify the CRISPR vector construction process. The application of this toolkit was successfully demonstrated by perturbation of the genomes of Streptomyces coelicolor A3(2) and Streptomyces collinus Tü 365. The CRISPR-Cas9 toolkit and related protocol described here can be widely used for metabolic engineering of actinomycetes.

  2. Mission Analysis, Operations, and Navigation Toolkit Environment (Monte) Version 040

    NASA Technical Reports Server (NTRS)

    Sunseri, Richard F.; Wu, Hsi-Cheng; Evans, Scott E.; Evans, James R.; Drain, Theodore R.; Guevara, Michelle M.

    2012-01-01

    Monte is a software set designed for use in mission design and spacecraft navigation operations. The system can process measurement data, design optimal trajectories and maneuvers, and do orbit determination, all in one application. For the first time, a single software set can be used for mission design and navigation operations. This eliminates problems due to different models and fidelities used in legacy mission design and navigation software. The unique features of Monte 040 include a blowdown thruster model for GRAIL (Gravity Recovery and Interior Laboratory) with associated pressure models, as well as an updated, optimal-search capability (COSMIC) that facilitated mission design for ARTEMIS. Existing legacy software lacked the capabilities necessary for these two missions. There is also a mean orbital element propagator and an osculating to mean element converter that allows long-term orbital stability analysis for the first time in compiled code. The optimized trajectory search tool COSMIC allows users to place constraints and controls on their searches without any restrictions. Constraints may be user-defined and depend on trajectory information either forward or backwards in time. In addition, a long-term orbit stability analysis tool (morbiter) existed previously as a set of scripts on top of Monte. Monte is becoming the primary tool for navigation operations, a core competency at JPL. The mission design capabilities in Monte are becoming mature enough for use in project proposals as well as post-phase A mission design. Monte has three distinct advantages over existing software. First, it is being developed in a modern paradigm: object-oriented C++ and Python. Second, the software has been developed as a toolkit, which allows users to customize their own applications and allows the development team to implement requirements quickly, efficiently, and with minimal bugs. Finally, the software is managed in accordance with the CMMI (Capability Maturity Model Integration), where it has been appraised at maturity level 3.

  3. Retrieval of radiology reports citing critical findings with disease-specific customization.

    PubMed

    Lacson, Ronilda; Sugarbaker, Nathanael; Prevedello, Luciano M; Ivan, Ip; Mar, Wendy; Andriole, Katherine P; Khorasani, Ramin

    2012-01-01

    Communication of critical results from diagnostic procedures between caregivers is a Joint Commission national patient safety goal. Evaluating critical result communication often requires manual analysis of voluminous data, especially when reviewing unstructured textual results of radiologic findings. Information retrieval (IR) tools can facilitate this process by enabling automated retrieval of radiology reports that cite critical imaging findings. However, IR tools that have been developed for one disease or imaging modality often need substantial reconfiguration before they can be utilized for another disease entity. This paper: 1) describes the process of customizing two Natural Language Processing (NLP) and Information Retrieval/Extraction applications - an open-source toolkit, A Nearly New Information Extraction system (ANNIE); and an application developed in-house, Information for Searching Content with an Ontology-Utilizing Toolkit (iSCOUT) - to illustrate the varying levels of customization required for different disease entities and; 2) evaluates each application's performance in identifying and retrieving radiology reports citing critical imaging findings for three distinct diseases, pulmonary nodule, pneumothorax, and pulmonary embolus. Both applications can be utilized for retrieval. iSCOUT and ANNIE had precision values between 0.90-0.98 and recall values between 0.79 and 0.94. ANNIE had consistently higher precision but required more customization. Understanding the customizations involved in utilizing NLP applications for various diseases will enable users to select the most suitable tool for specific tasks.
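
    As a toy illustration of the retrieval task and of how precision and recall figures like those above are computed (this is neither iSCOUT nor ANNIE), the sketch below flags reports that mention one of the three critical findings with a small regular-expression lexicon and scores the output against a hand-labelled reference set. The reports and labels are invented.

    ```python
    import re

    # Invented mini-corpus; the lexicon covers the three findings studied in the paper.
    lexicon = re.compile(r"\b(pneumothorax|pulmonary embol\w+|pulmonary nodule)\b", re.IGNORECASE)

    reports = {
        1: "Small right apical pneumothorax, unchanged from prior study.",
        2: "No acute cardiopulmonary abnormality.",
        3: "Filling defect consistent with pulmonary embolus in the right lower lobe.",
        4: "Stable 4 mm pulmonary nodule; follow-up in 12 months.",
        5: "Segmental PE cannot be excluded.",          # abbreviation the lexicon misses
    }
    reference = {1, 3, 4, 5}                            # reports truly citing a critical finding

    retrieved = {rid for rid, text in reports.items() if lexicon.search(text)}
    tp = len(retrieved & reference)
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(reference)
    print(f"retrieved={sorted(retrieved)}  precision={precision:.2f}  recall={recall:.2f}")
    ```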

  4. Retrieval of Radiology Reports Citing Critical Findings with Disease-Specific Customization

    PubMed Central

    Lacson, Ronilda; Sugarbaker, Nathanael; Prevedello, Luciano M; Ivan, IP; Mar, Wendy; Andriole, Katherine P; Khorasani, Ramin

    2012-01-01

    Background: Communication of critical results from diagnostic procedures between caregivers is a Joint Commission national patient safety goal. Evaluating critical result communication often requires manual analysis of voluminous data, especially when reviewing unstructured textual results of radiologic findings. Information retrieval (IR) tools can facilitate this process by enabling automated retrieval of radiology reports that cite critical imaging findings. However, IR tools that have been developed for one disease or imaging modality often need substantial reconfiguration before they can be utilized for another disease entity. Purpose: This paper: 1) describes the process of customizing two Natural Language Processing (NLP) and Information Retrieval/Extraction applications – an open-source toolkit, A Nearly New Information Extraction system (ANNIE); and an application developed in-house, Information for Searching Content with an Ontology-Utilizing Toolkit (iSCOUT) – to illustrate the varying levels of customization required for different disease entities and; 2) evaluates each application’s performance in identifying and retrieving radiology reports citing critical imaging findings for three distinct diseases, pulmonary nodule, pneumothorax, and pulmonary embolus. Results: Both applications can be utilized for retrieval. iSCOUT and ANNIE had precision values between 0.90-0.98 and recall values between 0.79 and 0.94. ANNIE had consistently higher precision but required more customization. Conclusion: Understanding the customizations involved in utilizing NLP applications for various diseases will enable users to select the most suitable tool for specific tasks. PMID:22934127

  5. The identification of complete domains within protein sequences using accurate E-values for semi-global alignment

    PubMed Central

    Kann, Maricel G.; Sheetlin, Sergey L.; Park, Yonil; Bryant, Stephen H.; Spouge, John L.

    2007-01-01

    The sequencing of complete genomes has created a pressing need for automated annotation of gene function. Because domains are the basic units of protein function and evolution, a gene can be annotated from a domain database by aligning domains to the corresponding protein sequence. Ideally, complete domains are aligned to protein subsequences, in a ‘semi-global alignment’. Local alignment, which aligns pieces of domains to subsequences, is common in high-throughput annotation applications, however. It is a mature technique, with the heuristics and accurate E-values required for screening large databases and evaluating the screening results. Hidden Markov models (HMMs) provide an alternative theoretical framework for semi-global alignment, but their use is limited because they lack heuristic acceleration and accurate E-values. Our new tool, GLOBAL, overcomes some limitations of previous semi-global HMMs: it has accurate E-values and the possibility of the heuristic acceleration required for high-throughput applications. Moreover, according to a standard of truth based on protein structure, two semi-global HMM alignment tools (GLOBAL and HMMer) had comparable performance in identifying complete domains, but distinctly outperformed two tools based on local alignment. When searching for complete protein domains, therefore, GLOBAL avoids disadvantages commonly associated with HMMs, yet maintains their superior retrieval performance. PMID:17596268
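
    The local versus semi-global distinction can be made concrete with a small dynamic-programming sketch: in semi-global alignment the entire domain must be aligned, while unaligned residues flanking the domain in the protein sequence are free. The score-only code below uses linear gap costs and toy scoring values; it illustrates the alignment mode only and is not the GLOBAL tool or its E-value machinery.

    ```python
    def semiglobal_score(domain, seq, match=2, mismatch=-1, gap=-2):
        """Score of aligning the whole domain within seq, with free end gaps in seq."""
        m, n = len(domain), len(seq)
        # dp[i][j]: best score of aligning domain[:i] against seq[:j], where
        # unaligned sequence residues before the domain cost nothing.
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            dp[i][0] = dp[i - 1][0] + gap      # skipping domain residues is penalised
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                sub = match if domain[i - 1] == seq[j - 1] else mismatch
                dp[i][j] = max(dp[i - 1][j - 1] + sub,   # (mis)match
                               dp[i - 1][j] + gap,       # gap in the sequence
                               dp[i][j - 1] + gap)       # gap in the domain
        return max(dp[m])                      # unaligned trailing sequence residues are free

    print(semiglobal_score("AWGHE", "PAWHEAGAWGHEE"))   # -> 10: exact domain match, free flanks
    ```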

  6. Transportation librarian's toolkit

    DOT National Transportation Integrated Search

    2007-12-01

    The Transportation Librarians Toolkit is a product of the Transportation Library Connectivity pooled fund study, TPF- 5(105), a collaborative, grass-roots effort by transportation libraries to enhance information accessibility and professional expert...

  7. The Python ARM Radar Toolkit (Py-ART), a library for working with weather radar data in the Python programming language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helmus, Jonathan J.; Collis, Scott M.

    The Python ARM Radar Toolkit is a package for reading, visualizing, correcting and analysing data from weather radars. Development began to meet the needs of the Atmospheric Radiation Measurement Climate Research Facility and has since expanded to provide a general-purpose framework for working with data from weather radars in the Python programming language. The toolkit is built on top of libraries in the Scientific Python ecosystem including NumPy, SciPy, and matplotlib, and makes use of Cython for interfacing with existing radar libraries written in C and to speed up computationally demanding algorithms. The source code for the toolkit is available on GitHub and is distributed under a BSD license.
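
    A minimal usage sketch for readers unfamiliar with the package is given below. It assumes Py-ART and matplotlib are installed and that a radar file in a supported format is available locally; the file name is a placeholder.

    ```python
    import matplotlib.pyplot as plt
    import pyart

    # Read a radar volume (placeholder file name; any format Py-ART supports works).
    radar = pyart.io.read("example_radar_volume.nc")

    # Plot the lowest sweep of the reflectivity field.
    display = pyart.graph.RadarDisplay(radar)
    fig, ax = plt.subplots(figsize=(6, 5))
    display.plot("reflectivity", sweep=0, ax=ax, vmin=-8, vmax=64)
    plt.show()
    ```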

  8. Integrated Systems Health Management (ISHM) Toolkit

    NASA Technical Reports Server (NTRS)

    Venkatesh, Meera; Kapadia, Ravi; Walker, Mark; Wilkins, Kim

    2013-01-01

    A framework of software components has been implemented to facilitate the development of ISHM systems according to a methodology based on Reliability Centered Maintenance (RCM). This framework is collectively referred to as the Toolkit and was developed using General Atomics' Health MAP (TM) technology. The toolkit is intended to provide assistance to software developers of mission-critical system health monitoring applications in the specification, implementation, configuration, and deployment of such applications. In addition to software tools designed to facilitate these objectives, the toolkit also provides direction to software developers in accordance with an ISHM specification and development methodology. The development tools are based on an RCM approach for the development of ISHM systems. This approach focuses on defining, detecting, and predicting the likelihood of system functional failures and their undesirable consequences.

  9. The Python ARM Radar Toolkit (Py-ART), a library for working with weather radar data in the Python programming language

    DOE PAGES

    Helmus, Jonathan J.; Collis, Scott M.

    2016-07-18

    The Python ARM Radar Toolkit is a package for reading, visualizing, correcting and analysing data from weather radars. Development began to meet the needs of the Atmospheric Radiation Measurement Climate Research Facility and has since expanded to provide a general-purpose framework for working with data from weather radars in the Python programming language. The toolkit is built on top of libraries in the Scientific Python ecosystem including NumPy, SciPy, and matplotlib, and makes use of Cython for interfacing with existing radar libraries written in C and to speed up computationally demanding algorithms. The source code for the toolkit is available on GitHub and is distributed under a BSD license.

  10. Evaluating the efficacy of a structure-derived amino acid substitution matrix in detecting protein homologs by BLAST and PSI-BLAST.

    PubMed

    Goonesekere, Nalin Cw

    2009-01-01

    The large numbers of protein sequences generated by whole genome sequencing projects require rapid and accurate methods of annotation. The detection of homology through computational sequence analysis is a powerful tool in determining the complex evolutionary and functional relationships that exist between proteins. Homology search algorithms employ amino acid substitution matrices to detect similarity between protein sequences. The substitution matrices in common use today are constructed using sequences aligned without reference to protein structure. Here we present amino acid substitution matrices constructed from the alignment of a large number of protein domain structures from the Structural Classification of Proteins (SCOP) database. We show that when incorporated into the homology search algorithms BLAST and PSI-BLAST, the structure-based substitution matrices enhance the efficacy of detecting remote homologs.
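
    A substitution matrix enters a search or alignment tool as the per-residue-pair score. As a small, generic illustration (using the standard BLOSUM62 rather than the structure-derived SCOP matrices described above), the sketch below scores a pairwise protein alignment with Biopython; a custom matrix could be supplied instead as a substitution_matrices.Array object. The sequences and gap penalties are illustrative.

    ```python
    from Bio import Align
    from Bio.Align import substitution_matrices

    # Generic illustration with BLOSUM62; a structure-derived matrix could be
    # loaded as a substitution_matrices.Array and assigned the same way.
    aligner = Align.PairwiseAligner()
    aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
    aligner.open_gap_score = -11       # BLAST-like affine gap penalties (assumed values)
    aligner.extend_gap_score = -1

    score = aligner.score("MKTAYIAKQR", "MKTAHIAKQR")
    print(f"alignment score: {score}")
    ```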

  11. Searching social networks for subgraph patterns

    NASA Astrophysics Data System (ADS)

    Ogaard, Kirk; Kase, Sue; Roy, Heather; Nagi, Rakesh; Sambhoos, Kedar; Sudit, Moises

    2013-06-01

    Software tools for Social Network Analysis (SNA) are being developed which support various types of analysis of social networks extracted from social media websites (e.g., Twitter). Once extracted and stored in a database such social networks are amenable to analysis by SNA software. This data analysis often involves searching for occurrences of various subgraph patterns (i.e., graphical representations of entities and relationships). The authors have developed the Graph Matching Toolkit (GMT) which provides an intuitive Graphical User Interface (GUI) for a heuristic graph matching algorithm called the Truncated Search Tree (TruST) algorithm. GMT is a visual interface for graph matching algorithms processing large social networks. GMT enables an analyst to draw a subgraph pattern by using a mouse to select categories and labels for nodes and links from drop-down menus. GMT then executes the TruST algorithm to find the top five occurrences of the subgraph pattern within the social network stored in the database. GMT was tested using a simulated counter-insurgency dataset consisting of cellular phone communications within a populated area of operations in Iraq. The results indicated GMT (when executing the TruST graph matching algorithm) is a time-efficient approach to searching large social networks. GMT's visual interface to a graph matching algorithm enables intelligence analysts to quickly analyze and summarize the large amounts of data necessary to produce actionable intelligence.
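
    GMT and its TruST heuristic are not publicly reproduced here. As a hedged stand-in for the underlying task, the sketch below finds occurrences of a small labelled subgraph pattern in a larger network using NetworkX's exact subgraph-isomorphism matcher, with matching constrained by a node category attribute; the toy network, categories, and pattern are invented.

    ```python
    import networkx as nx
    from networkx.algorithms import isomorphism

    # A toy network with categorised nodes (person/phone).
    g = nx.Graph()
    g.add_nodes_from([("p1", {"cat": "person"}), ("p2", {"cat": "person"}),
                      ("p3", {"cat": "person"}), ("t1", {"cat": "phone"}),
                      ("t2", {"cat": "phone"})])
    g.add_edges_from([("p1", "t1"), ("p2", "t1"), ("p2", "t2"), ("p3", "t2")])

    # Pattern: two persons linked through a shared phone.
    pattern = nx.Graph()
    pattern.add_nodes_from([("a", {"cat": "person"}), ("b", {"cat": "person"}),
                            ("x", {"cat": "phone"})])
    pattern.add_edges_from([("a", "x"), ("b", "x")])

    matcher = isomorphism.GraphMatcher(
        g, pattern, node_match=isomorphism.categorical_node_match("cat", None))
    for mapping in matcher.subgraph_isomorphisms_iter():
        print(mapping)          # maps network nodes onto pattern nodes
    ```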

  12. The Lean and Environment Toolkit

    EPA Pesticide Factsheets

    This Lean and Environment Toolkit assembles practical experience collected by the U.S. Environmental Protection Agency (EPA) and partner companies and organizations that have experience with coordinating Lean implementation and environmental management.

  13. Factors driving physician-hospital alignment in orthopaedic surgery.

    PubMed

    Page, Alexandra E; Butler, Craig A; Bozic, Kevin J

    2013-06-01

    The relationships between physicians and hospitals are viewed as central to the proposition of delivering high-quality health care at a sustainable cost. Over the last two decades, major changes in the scope, breadth, and complexities of these relationships have emerged. Despite understanding the need for physician-hospital alignment, identification and understanding the incentives and drivers of alignment prove challenging. Our review identifies the primary drivers of physician alignment with hospitals from both the physician and hospital perspectives. Further, we assess the drivers more specific to motivating orthopaedic surgeons to align with hospitals. We performed a comprehensive literature review from 1992 to March 2012 to evaluate published studies and opinions on the issues surrounding physician-hospital alignment. Literature searches were performed in both MEDLINE(®) and Health Business™ Elite. Available literature identifies economic and regulatory shifts in health care and cultural factors as primary drivers of physician-hospital alignment. Specific to orthopaedics, factors driving alignment include the profitability of orthopaedic service lines, the expense of implants, and issues surrounding ambulatory surgery centers and other ancillary services. Evolving healthcare delivery and payment reforms promote increased collaboration between physicians and hospitals. While economic incentives and increasing regulatory demands provide the strongest drivers, cultural changes including physician leadership and changing expectations of work-life balance must be considered when pursuing successful alignment models. Physicians and hospitals view each other as critical to achieving lower-cost, higher-quality health care.

  14. User's manual for the two-dimensional transputer graphics toolkit

    NASA Technical Reports Server (NTRS)

    Ellis, Graham K.

    1988-01-01

    The user manual for the 2-D graphics toolkit for a transputer based parallel processor is presented. The toolkit consists of a package of 2-D display routines that can be used for the simulation visualizations. It supports multiple windows, double buffered screens for animations, and simple graphics transformations such as translation, rotation, and scaling. The display routines are written in occam to take advantage of the multiprocessing features available on transputers. The package is designed to run on a transputer separate from the graphics board.

  15. A Machine Learning and Optimization Toolkit for the Swarm

    DTIC Science & Technology

    2014-11-17

    Akkaya, Ilge; Emoto, Shuhei; ... Dates covered: 2014. Abstract (fragment): ... machine learning methodologies by providing the right interfaces between machine learning tools and ...

  16. A clinical research analytics toolkit for cohort study.

    PubMed

    Yu, Yiqin; Zhu, Yu; Sun, Xingzhi; Tao, Ying; Zhang, Shuo; Xu, Linhao; Pan, Yue

    2012-01-01

    This paper presents a clinical informatics toolkit that can assist physicians to conduct cohort studies effectively and efficiently. The toolkit has three key features: 1) support of procedures defined in epidemiology, 2) recommendation of statistical methods in data analysis, and 3) automatic generation of research reports. On one hand, our system can help physicians control research quality by leveraging the integrated knowledge of epidemiology and medical statistics; on the other hand, it can improve productivity by reducing the complexities for physicians during their cohort studies.

  17. UQTk Version 3.0.3 User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sargsyan, Khachik; Safta, Cosmin; Chowdhary, Kamaljit Singh

    2017-05-01

    The UQ Toolkit (UQTk) is a collection of libraries and tools for the quantification of uncertainty in numerical model predictions. Version 3.0.3 offers intrusive and non-intrusive methods for propagating input uncertainties through computational models, tools for sensitivity analysis, methods for sparse surrogate construction, and Bayesian inference tools for inferring parameters from experimental data. This manual discusses the download and installation process for UQTk, provides pointers to the UQ methods used in the toolkit, and describes some of the examples provided with the toolkit.
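
    A minimal sketch of the non-intrusive idea (plain Monte Carlo with NumPy, not UQTk's polynomial chaos machinery): input uncertainties are represented as random samples, pushed through the model as a black box, and the output statistics summarised. The model and input distributions below are invented placeholders.

    ```python
    import numpy as np

    def model(x1, x2):
        """Placeholder computational model with two uncertain inputs."""
        return np.sin(x1) + 0.5 * x2 ** 2

    rng = np.random.default_rng(42)
    n = 100_000
    x1 = rng.normal(loc=1.0, scale=0.1, size=n)     # uncertain input 1 (assumed Gaussian)
    x2 = rng.uniform(low=0.8, high=1.2, size=n)     # uncertain input 2 (assumed uniform)

    y = model(x1, x2)                               # non-intrusive: the model is a black box
    print(f"mean = {y.mean():.4f}, std = {y.std():.4f}")
    print(f"95% interval = [{np.percentile(y, 2.5):.4f}, {np.percentile(y, 97.5):.4f}]")
    ```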

  18. Searching for SNPs with cloud computing

    PubMed Central

    2009-01-01

    As DNA sequencing outpaces improvements in computer speed, there is a critical need to accelerate tasks like alignment and SNP calling. Crossbow is a cloud-computing software tool that combines the aligner Bowtie and the SNP caller SOAPsnp. Executing in parallel using Hadoop, Crossbow analyzes data comprising 38-fold coverage of the human genome in three hours using a 320-CPU cluster rented from a cloud computing service for about $85. Crossbow is available from http://bowtie-bio.sourceforge.net/crossbow/. PMID:19930550

  19. Food: Too Good to Waste Implementation Guide and Toolkit

    EPA Pesticide Factsheets

    The Food: Too Good to Waste (FTGTW) Implementation Guide and Toolkit is designed for community organizations, local governments, households and others interested in reducing wasteful household food management practices.

  20. The Development and Evaluation of an Online Healthcare Toolkit for Autistic Adults and their Primary Care Providers.

    PubMed

    Nicolaidis, Christina; Raymaker, Dora; McDonald, Katherine; Kapp, Steven; Weiner, Michael; Ashkenazy, Elesia; Gerrity, Martha; Kripke, Clarissa; Platt, Laura; Baggs, Amelia

    2016-10-01

    The healthcare system is ill-equipped to meet the needs of adults on the autism spectrum. Our goal was to use a community-based participatory research (CBPR) approach to develop and evaluate tools to facilitate the primary healthcare of autistic adults. Toolkit development included cognitive interviewing and test-retest reliability studies. Evaluation consisted of a mixed-methods, single-arm pre/post-intervention comparison. A total of 259 autistic adults and 51 primary care providers (PCPs) residing in the United States. The AASPIRE Healthcare toolkit includes the Autism Healthcare Accommodations Tool (AHAT)-a tool that allows patients to create a personalized accommodations report for their PCP-and general healthcare- and autism-related information, worksheets, checklists, and resources for patients and healthcare providers. Satisfaction with patient-provider communication, healthcare self-efficacy, barriers to healthcare, and satisfaction with the toolkit's usability and utility; responses to open-ended questions. Preliminary testing of the AHAT demonstrated strong content validity and adequate test-retest stability. Almost all patient participants (>94 %) felt that the AHAT and the toolkit were easy to use, important, and useful. In pre/post-intervention comparisons, the mean number of barriers decreased (from 4.07 to 2.82, p < 0.0001), healthcare self-efficacy increased (from 37.9 to 39.4, p = 0.02), and satisfaction with PCP communication improved (from 30.9 to 32.6, p = 0.03). Patients stated that the toolkit helped clarify their needs, enabled them to self-advocate and prepare for visits more effectively, and positively influenced provider behavior. Most of the PCPs surveyed read the AHAT (97 %), rated it as moderately or very useful (82 %), and would recommend it to other patients (87 %). The CBPR process resulted in a reliable healthcare accommodation tool and a highly accessible healthcare toolkit. Patients and providers indicated that the tools positively impacted healthcare interactions. The toolkit has the potential to reduce barriers to healthcare and improve healthcare self-efficacy and patient-provider communication.

  1. A patient and public involvement (PPI) toolkit for meaningful and flexible involvement in clinical trials - a work in progress.

    PubMed

    Bagley, Heather J; Short, Hannah; Harman, Nicola L; Hickey, Helen R; Gamble, Carrol L; Woolfall, Kerry; Young, Bridget; Williamson, Paula R

    2016-01-01

    Funders of research are increasingly requiring researchers to involve patients and the public in their research. Patient and public involvement (PPI) in research can potentially help researchers make sure that the design of their research is relevant, that it is participant friendly and ethically sound. Using and sharing PPI resources can benefit those involved in undertaking PPI, but existing PPI resources are not used consistently and this can lead to duplication of effort. This paper describes how we are developing a toolkit to support clinical trials teams in a clinical trials unit. The toolkit will provide a key 'off the shelf' resource to support trial teams with limited resources, in undertaking PPI. Key activities in further developing and maintaining the toolkit are to: ● listen to the views and experience of both research teams and patient and public contributors who use the tools; ● modify the tools based on our experience of using them; ● identify the need for future tools; ● update the toolkit based on any newly identified resources that come to light; ● raise awareness of the toolkit and ● work in collaboration with others to either develop or test out PPI resources in order to reduce duplication of work in PPI. Background Patient and public involvement (PPI) in research is increasingly a funder requirement due to the potential benefits in the design of relevant, participant friendly, ethically sound research. The use and sharing of resources can benefit PPI, but available resources are not consistently used leading to duplication of effort. This paper describes a developing toolkit to support clinical trials teams to undertake effective and meaningful PPI. Methods The first phase in developing the toolkit was to describe which PPI activities should be considered in the pathway of a clinical trial and at what stage these activities should take place. This pathway was informed through review of the type and timing of PPI activities within trials coordinated by the Clinical Trials Research Centre and previously described areas of potential PPI impact in trials. In the second phase, key websites around PPI and identification of resources opportunistically, e.g. in conversation with other trialists or social media, were used to identify resources. Tools were developed where gaps existed. Results A flowchart was developed describing PPI activities that should be considered in the clinical trial pathway and the point at which these activities should happen. Three toolkit domains were identified: planning PPI; supporting PPI; recording and evaluating PPI. Four main activities and corresponding tools were identified under the planning for PPI: developing a plan; identifying patient and public contributors; allocating appropriate costs; and managing expectations. In supporting PPI, tools were developed to review participant information sheets. These tools, which require a summary of potential trial participant characteristics and circumstances help to clarify requirements and expectations of PPI review. For recording and evaluating PPI, the planned PPI interventions should be monitored in terms of impact, and a tool to monitor public contributor experience is in development. Conclusions This toolkit provides a developing 'off the shelf' resource to support trial teams with limited resources in undertaking PPI. 
Key activities in further developing and maintaining the toolkit are to: listen to the views and experience of both research teams and public contributors using the tools, to identify the need for future tools, to modify tools based on experience of their use; to update the toolkit based on any newly identified resources that come to light; to raise awareness of the toolkit and to work in collaboration with others to both develop and test out PPI resources in order to reduce duplication of work in PPI.

  2. Upgrade of the HET segment control system, utilizing state-of-the-art, decentralized and embedded system controllers

    NASA Astrophysics Data System (ADS)

    Häuser, Marco; Richter, Josef; Kriel, Herman; Turbyfill, Amanda; Buetow, Brent; Ward, Michael

    2016-07-01

    Together with the ongoing major instrument upgrade of the Hobby-Eberly Telescope (HET) we present the planned upgrade of the HET Segment Control System (SCS) to SCS2. Because HET's primary mirror is segmented into 91 individual 1-meter hexagonal mirrors, the SCS is essential to maintain the mirror alignment throughout an entire night of observations. SCS2 will complete tip, tilt and piston corrections of each mirror segment at a significantly higher rate than the original SCS. The new motion control hardware will further increase the system's reliability. The initial optical measurements of this array are performed by the Mirror Alignment Recovery System (MARS) and the HET Extra Focal Instrument (HEFI). Once the segments are optically aligned, the inductive edge sensors give sub-micron precise feedback of each segment's positions relative to its adjacent segments. These sensors are part of the Segment Alignment Maintenance System (SAMS) and are responsible for providing information about positional changes due to external influences, such as steep temperature changes and mechanical stress, and for making compensatory calculations while tracking the telescope on sky. SCS2 will use the optical alignment systems and SAMS inputs to command corrections of every segment in a closed loop. The correction period will be roughly 30 seconds, mostly due to the measurement and averaging process of the SAMS algorithm. The segment actuators will be controlled by the custom developed HET Segment MOtion COntroller (SMOCO). It is a direct descendant of University Observatory Munich's embedded, CAN-based system and instrument control tool-kit. To preserve the existing HET hardware layout, each SMOCO will control two adjacent mirror segments. Unlike the original SCS motor controllers, SMOCO is able to drive all six axes of its two segments at the same time. SCS2 will continue to allow for sub-arcsecond precision in tip and tilt as well as sub-micrometer precision in piston. These estimations are based on the current performance of the segment support mechanics. SMOCO's smart motion control allows for on-the-fly correction of the move targets. Since SMOCO uses state-of-the-art motion control electronics and embedded decentralized controllers, we expect reduction in thermal emission as well as less maintenance time.

  3. Lean and Information Technology Toolkit

    EPA Pesticide Factsheets

    The Lean and Information Technology Toolkit is a how-to guide which provides resources to environmental agencies to help them use Lean Startup, Lean process improvement, and Agile tools to streamline and automate processes.

  4. Health Information in Kirundi (Rundi)

    MedlinePlus

    ... Abuse Healthy Living Toolkit: Violence In the Home - English PDF Healthy Living Toolkit: Violence In the Home - ... Parents on Talking to Children About the Flu - English PDF Advice for Parents on Talking to Children ...

  5. 78 FR 14774 - U.S. Environmental Solutions Toolkit-Universal Waste

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-07

    ... following list: (a) Mercury Recycling Technology (b) E-Waste Recycling Technology (c) CRT Recycling Technology (d) Lamp Crushing Systems For purposes of participation in the Toolkit, ``United States exporter...

  6. Scan for Motifs: a webserver for the analysis of post-transcriptional regulatory elements in the 3' untranslated regions (3' UTRs) of mRNAs.

    PubMed

    Biswas, Ambarish; Brown, Chris M

    2014-06-08

    Gene expression in vertebrate cells may be controlled post-transcriptionally through regulatory elements in mRNAs. These are usually located in the untranslated regions (UTRs) of mRNA sequences, particularly the 3'UTRs. Scan for Motifs (SFM) simplifies the process of identifying a wide range of regulatory elements on alignments of vertebrate 3'UTRs. SFM includes identification of both RNA Binding Protein (RBP) sites and targets of miRNAs. In addition to searching pre-computed alignments, the tool provides users the flexibility to search their own sequences or alignments. The regulatory elements may be filtered by expected value cutoffs and are cross-referenced back to their respective sources and literature. The output is an interactive graphical representation, highlighting potential regulatory elements and overlaps between them. The output also provides simple statistics and links to related resources for complementary analyses. The overall process is intuitive and fast. As SFM is a free web-application, the user does not need to install any software or databases. Visualisation of the binding sites of different classes of effectors that bind to 3'UTRs will facilitate the study of regulatory elements in 3' UTRs.
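
    A toy version of the underlying operation (not the SFM web service, which covers many element classes including RBP sites and miRNA targets): scanning 3'UTR sequences for one fixed element, here the AU-rich element pentamer AUUUA written in the DNA alphabet, with Python's re module. The sequences are made up.

    ```python
    import re

    # Toy scan of 3'UTR sequences for the AU-rich element pentamer (AUUUA; ATTTA in DNA).
    are_motif = re.compile(r"ATTTA")

    utrs = {
        "utr_1": "CCGAUUUAGGG".replace("U", "T"),   # RNA written out, converted to DNA
        "utr_2": "GGGCCCAAACCC",
    }

    for name, seq in utrs.items():
        hits = [m.start() for m in are_motif.finditer(seq)]
        print(f"{name}: {len(hits)} hit(s) at positions {hits}")
    ```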

  7. The VirusBanker database uses a Java program to allow flexible searching through Bunyaviridae sequences

    PubMed Central

    Fourment, Mathieu; Gibbs, Mark J

    2008-01-01

    Background Viruses of the Bunyaviridae have segmented negative-stranded RNA genomes and several of them cause significant disease. Many partial sequences have been obtained from the segments so that GenBank searches give complex results. Sequence databases usually use HTML pages to mediate remote sorting, but this approach can be limiting and may discourage a user from exploring a database. Results The VirusBanker database contains Bunyaviridae sequences and alignments and is presented as two spreadsheets generated by a Java program that interacts with a MySQL database on a server. Sequences are displayed in rows and may be sorted using information that is displayed in columns and includes data relating to the segment, gene, protein, species, strain, sequence length, terminal sequence and date and country of isolation. Bunyaviridae sequences and alignments may be downloaded from the second spreadsheet with titles defined by the user from the columns, or viewed when passed directly to the sequence editor, Jalview. Conclusion VirusBanker allows large datasets of aligned nucleotide and protein sequences from the Bunyaviridae to be compiled and winnowed rapidly using criteria that are formulated heuristically. PMID:18251994

  8. Fast and accurate non-sequential protein structure alignment using a new asymmetric linear sum assignment heuristic.

    PubMed

    Brown, Peter; Pullan, Wayne; Yang, Yuedong; Zhou, Yaoqi

    2016-02-01

    The three dimensional tertiary structure of a protein at near atomic level resolution provides insight alluding to its function and evolution. As protein structure decides its functionality, similarity in structure usually implies similarity in function. As such, structure alignment techniques are often useful in the classifications of protein function. Given the rapidly growing rate of new, experimentally determined structures being made available from repositories such as the Protein Data Bank, fast and accurate computational structure comparison tools are required. This paper presents SPalignNS, a non-sequential protein structure alignment tool using a novel asymmetrical greedy search technique. The performance of SPalignNS was evaluated against existing sequential and non-sequential structure alignment methods by performing trials with commonly used datasets. These benchmark datasets used to gauge alignment accuracy include (i) 9538 pairwise alignments implied by the HOMSTRAD database of homologous proteins; (ii) a subset of 64 difficult alignments from set (i) that have low structure similarity; (iii) 199 pairwise alignments of proteins with similar structure but different topology; and (iv) a subset of 20 pairwise alignments from the RIPC set. SPalignNS is shown to achieve greater alignment accuracy (lower or comparable root-mean squared distance with increased structure overlap coverage) for all datasets, and the highest agreement with reference alignments from the challenging dataset (iv) above, when compared with both sequentially constrained alignments and other non-sequential alignments. SPalignNS was implemented in C++. The source code, binary executable, and a web server version are freely available at: http://sparks-lab.org yaoqi.zhou@griffith.edu.au. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. Water Quality Trading Toolkit for Permit Writers

    EPA Pesticide Factsheets

    The Water Quality Trading Toolkit for Permit Writers is EPA’s first “how-to” manual on designing and implementing water quality trading programs. It helps NPDES permitting authorities incorporate trading provisions into permits.

  10. Heart Health for Women

    MedlinePlus

    ... more about how you can participate. Heart Health Social Media Toolkit The FDA Office of Women's Health offers ... informed about heart health. Use the Heart Health Social Media Toolkit to encourage women in your network to ...

  11. Integrating single-cell transcriptomic data across different conditions, technologies, and species.

    PubMed

    Butler, Andrew; Hoffman, Paul; Smibert, Peter; Papalexi, Efthymia; Satija, Rahul

    2018-06-01

    Computational single-cell RNA-seq (scRNA-seq) methods have been successfully applied to experiments representing a single condition, technology, or species to discover and define cellular phenotypes. However, identifying subpopulations of cells that are present across multiple data sets remains challenging. Here, we introduce an analytical strategy for integrating scRNA-seq data sets based on common sources of variation, enabling the identification of shared populations across data sets and downstream comparative analysis. We apply this approach, implemented in our R toolkit Seurat (http://satijalab.org/seurat/), to align scRNA-seq data sets of peripheral blood mononuclear cells under resting and stimulated conditions, hematopoietic progenitors sequenced using two profiling technologies, and pancreatic cell 'atlases' generated from human and mouse islets. In each case, we learn distinct or transitional cell states jointly across data sets, while boosting statistical power through integrated analysis. Our approach facilitates general comparisons of scRNA-seq data sets, potentially deepening our understanding of how distinct cell states respond to perturbation, disease, and evolution.

  12. C-CAT: a computer software used to analyze and select Chinese characters and character components for psychological research.

    PubMed

    Lo, Ming; Hue, Chih-Wei

    2008-11-01

    The Character-Component Analysis Toolkit (C-CAT) software was designed to assist researchers in constructing experimental materials using traditional Chinese characters. The software package contains two sets of character stocks: one suitable for research using literate adults as subjects and one suitable for research using schoolchildren as subjects. The software can identify linguistic properties, such as the number of strokes contained, the character-component pronunciation regularity, and the arrangement of character components within a character. Moreover, it can compute a character's linguistic frequency, neighborhood size, and phonetic validity with respect to a user-selected character stock. It can also search the selected character stock for similar characters or for character components with user-specified linguistic properties.

  13. Current data do not support routine use of patient-specific instrumentation in total knee arthroplasty.

    PubMed

    Voleti, Pramod B; Hamula, Mathew J; Baldwin, Keith D; Lee, Gwo-Chin

    2014-09-01

    The purpose of this systematic review and meta-analysis is to compare patient-specific instrumentation (PSI) versus standard instrumentation for total knee arthroplasty (TKA) with regard to coronal and sagittal alignment, operative time, intraoperative blood loss, and cost. A systematic query in search of relevant studies was performed, and the data published in these studies were extracted and aggregated. In regard to coronal alignment, PSI demonstrated improved accuracy in femorotibial angle (FTA) (P=0.0003), while standard instrumentation demonstrated improved accuracy in hip-knee-ankle angle (HKA) (P=0.02). Importantly, there were no differences between treatment groups in the percentages of FTA or HKA outliers (>3 degrees from target alignment) (P=0.7). Sagittal alignment, operative time, intraoperative blood loss, and cost were also similar between groups (P>0.1 for all comparisons). Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Accelerating Information Retrieval from Profile Hidden Markov Model Databases.

    PubMed

    Tamimi, Ahmad; Ashhab, Yaqoub; Tamimi, Hashem

    2016-01-01

    Profile Hidden Markov Model (Profile-HMM) is an efficient statistical approach to represent protein families. Currently, several databases maintain valuable protein sequence information as profile-HMMs. There is an increasing interest to improve the efficiency of searching Profile-HMM databases to detect sequence-profile or profile-profile homology. However, most efforts to enhance searching efficiency have been focusing on improving the alignment algorithms. Although the performance of these algorithms is fairly acceptable, the growing size of these databases, as well as the increasing demand for using batch query searching approach, are strong motivations that call for further enhancement of information retrieval from profile-HMM databases. This work presents a heuristic method to accelerate the current profile-HMM homology searching approaches. The method works by cluster-based remodeling of the database to reduce the search space, rather than focusing on the alignment algorithms. Using different clustering techniques, 4284 TIGRFAMs profiles were clustered based on their similarities. A representative for each cluster was assigned. To enhance sensitivity, we proposed an extended step that allows overlapping among clusters. A validation benchmark of 6000 randomly selected protein sequences was used to query the clustered profiles. To evaluate the efficiency of our approach, speed and recall values were measured and compared with the sequential search approach. Using hierarchical, k-means, and connected component clustering techniques followed by the extended overlapping step, we obtained an average reduction in time of 41%, and an average recall of 96%. Our results demonstrate that representation of profile-HMMs using a clustering-based approach can significantly accelerate data retrieval from profile-HMM databases.
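
    The reported speed-up comes from comparing a query to cluster representatives first and then searching only the winning cluster's members. As a generic, hedged illustration of that two-stage strategy (with made-up numeric feature vectors and Euclidean distance standing in for profile-HMM alignment scores), the sketch below uses scikit-learn's k-means.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(7)
    profiles = rng.normal(size=(4284, 32))      # stand-in feature vectors, one per profile

    # Stage 1 (offline): cluster the database and keep one representative per cluster.
    k = 50
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(profiles)
    representatives = km.cluster_centers_

    # Stage 2 (query time): compare the query to representatives only, then search
    # within the closest cluster instead of the whole database.
    query = rng.normal(size=32)
    best_cluster = np.argmin(np.linalg.norm(representatives - query, axis=1))
    members = np.where(km.labels_ == best_cluster)[0]
    best_member = members[np.argmin(np.linalg.norm(profiles[members] - query, axis=1))]
    print(f"searched {len(members)} of {len(profiles)} profiles; best match index {best_member}")
    ```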

  15. The connectome viewer toolkit: an open source framework to manage, analyze, and visualize connectomes.

    PubMed

    Gerhard, Stephan; Daducci, Alessandro; Lemkaddem, Alia; Meuli, Reto; Thiran, Jean-Philippe; Hagmann, Patric

    2011-01-01

    Advanced neuroinformatics tools are required for methods of connectome mapping, analysis, and visualization. The inherent multi-modality of connectome datasets poses new challenges for data organization, integration, and sharing. We have designed and implemented the Connectome Viewer Toolkit - a set of free and extensible open source neuroimaging tools written in Python. The key components of the toolkit are as follows: (1) The Connectome File Format is an XML-based container format to standardize multi-modal data integration and structured metadata annotation. (2) The Connectome File Format Library enables management and sharing of connectome files. (3) The Connectome Viewer is an integrated research and development environment for visualization and analysis of multi-modal connectome data. The Connectome Viewer's plugin architecture supports extensions with network analysis packages and an interactive scripting shell, to enable easy development and community contributions. Integration with tools from the scientific Python community allows the leveraging of numerous existing libraries for powerful connectome data mining, exploration, and comparison. We demonstrate the applicability of the Connectome Viewer Toolkit using Diffusion MRI datasets processed by the Connectome Mapper. The Connectome Viewer Toolkit is available from http://www.cmtk.org/

  16. The Connectome Viewer Toolkit: An Open Source Framework to Manage, Analyze, and Visualize Connectomes

    PubMed Central

    Gerhard, Stephan; Daducci, Alessandro; Lemkaddem, Alia; Meuli, Reto; Thiran, Jean-Philippe; Hagmann, Patric

    2011-01-01

    Advanced neuroinformatics tools are required for methods of connectome mapping, analysis, and visualization. The inherent multi-modality of connectome datasets poses new challenges for data organization, integration, and sharing. We have designed and implemented the Connectome Viewer Toolkit – a set of free and extensible open source neuroimaging tools written in Python. The key components of the toolkit are as follows: (1) The Connectome File Format is an XML-based container format to standardize multi-modal data integration and structured metadata annotation. (2) The Connectome File Format Library enables management and sharing of connectome files. (3) The Connectome Viewer is an integrated research and development environment for visualization and analysis of multi-modal connectome data. The Connectome Viewer's plugin architecture supports extensions with network analysis packages and an interactive scripting shell, to enable easy development and community contributions. Integration with tools from the scientific Python community allows the leveraging of numerous existing libraries for powerful connectome data mining, exploration, and comparison. We demonstrate the applicability of the Connectome Viewer Toolkit using Diffusion MRI datasets processed by the Connectome Mapper. The Connectome Viewer Toolkit is available from http://www.cmtk.org/ PMID:21713110

  17. An evaluation capacity building toolkit for principal investigators of undergraduate research experiences: A demonstration of transforming theory into practice.

    PubMed

    Rorrer, Audrey S

    2016-04-01

    This paper describes the approach and process undertaken to develop evaluation capacity among the leaders of a federally funded undergraduate research program. An evaluation toolkit was developed for Computer and Information Sciences and Engineering Research Experiences for Undergraduates (CISE REU) programs to address the ongoing need for evaluation capacity among principal investigators who manage program evaluation. The toolkit was the result of collaboration within the CISE REU community, with the purpose of providing targeted instructional resources and tools for quality program evaluation. Challenges included balancing the desire for standardized assessment with the responsibility to account for individual program contexts. Toolkit contents included instructional materials about evaluation practice, a standardized applicant management tool, and a modulated outcomes measure. The resulting benefits of toolkit deployment were cost-effective, sustainable evaluation tools, a community evaluation forum, and aggregate measurement of key program outcomes for the national program. Lessons learned included the imperative of understanding the evaluation context, engaging stakeholders, and building stakeholder trust. Results from project measures are presented along with a discussion of guidelines for facilitating evaluation capacity building that will serve a variety of contexts. Copyright © 2016. Published by Elsevier Ltd.

  18. Health Care Facilities Resilient to Climate Change Impacts

    PubMed Central

    Paterson, Jaclyn; Berry, Peter; Ebi, Kristie; Varangu, Linda

    2014-01-01

    Climate change will increase the frequency and magnitude of extreme weather events and create risks that will impact health care facilities. Health care facilities will need to assess climate change risks and adopt adaptive management strategies to be resilient, but guidance tools are lacking. In this study, a toolkit was developed for health care facility officials to assess the resiliency of their facility to climate change impacts. A mixed methods approach was used to develop climate change resiliency indicators to inform the development of the toolkit. The toolkit consists of a checklist for officials who work in areas of emergency management, facilities management and health care services and supply chain management, a facilitator’s guide for administering the checklist, and a resource guidebook to inform adaptation. Six health care facilities representing three provinces in Canada piloted the checklist. Senior level officials with expertise in the aforementioned areas were invited to review the checklist, provide feedback during qualitative interviews and review the final toolkit at a stakeholder workshop. The toolkit helps health care facility officials identify gaps in climate change preparedness, direct allocation of adaptation resources and inform strategic planning to increase resiliency to climate change. PMID:25522050

  19. FLASHFLOOD: A 3D Field-based similarity search and alignment method for flexible molecules

    NASA Astrophysics Data System (ADS)

    Pitman, Michael C.; Huber, Wolfgang K.; Horn, Hans; Krämer, Andreas; Rice, Julia E.; Swope, William C.

    2001-07-01

    A three-dimensional field-based similarity search and alignment method for flexible molecules is introduced. The conformational space of a flexible molecule is represented in terms of fragments and torsional angles of allowed conformations. A user-definable property field is used to compute features of fragment pairs. Features are generalizations of CoMMA descriptors (Silverman, B.D. and Platt, D.E., J. Med. Chem., 39 (1996) 2129.) that characterize local regions of the property field by its local moments. The features are invariant under coordinate system transformations. Features taken from a query molecule are used to form alignments with fragment pairs in the database. An assembly algorithm is then used to merge the fragment pairs into full structures, aligned to the query. Key to the method is the use of a context adaptive descriptor scaling procedure as the basis for similarity. This allows the user to tune the weights of the various feature components based on examples relevant to the particular context under investigation. The property fields may range from simple, phenomenological fields, to fields derived from quantum mechanical calculations. We apply the method to the dihydrofolate/methotrexate benchmark system, and show that when one injects relevant contextual information into the descriptor scaling procedure, better results are obtained more efficiently. We also show how the method works and include computer times for a query from a database that represents approximately 23 million conformers of seventeen flexible molecules.
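
    The idea of characterizing a local region of a property field by coordinate-invariant moments can be illustrated with a small sketch. This is not the CoMMA descriptor computation itself; it is a simplified stand-in that uses the total property, the magnitude of the first moment, and the eigenvalues of the second-moment tensor, all of which are unchanged by a rotation of the coordinate frame.

```python
"""Sketch of moment-based, rotation-invariant features for a local region of a
scalar property field (a simplified stand-in for CoMMA-style descriptors)."""
import numpy as np

def local_moment_features(coords, prop, center, radius=4.0):
    """Zeroth moment, first-moment magnitude, and eigenvalues of the
    second-moment tensor of `prop` within `radius` of `center`.
    All returned values are invariant under rotation of the frame."""
    d = coords - center
    mask = np.linalg.norm(d, axis=1) <= radius
    d, w = d[mask], prop[mask]
    m0 = w.sum()                              # total property in the region
    m1 = w @ d                                # first moment (dipole-like)
    m2 = (w[:, None, None] * d[:, :, None] * d[:, None, :]).sum(axis=0)
    eig = np.sort(np.linalg.eigvalsh(m2))     # rotation-invariant spectrum
    return np.concatenate(([m0, np.linalg.norm(m1)], eig))

rng = np.random.default_rng(1)
coords = rng.normal(scale=3.0, size=(40, 3))   # toy atom positions
charge = rng.normal(size=40)                   # toy property values

f = local_moment_features(coords, charge, center=coords[0])

# Invariance check: rotating the whole system leaves the features unchanged.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
f_rot = local_moment_features(coords @ R.T, charge, center=coords[0] @ R.T)
print(np.allclose(f, f_rot))  # True
```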

  20. Model of myosin node aggregation into a contractile ring: the effect of local alignment

    NASA Astrophysics Data System (ADS)

    Ojkic, Nikola; Wu, Jian-Qiu; Vavylonis, Dimitrios

    2011-09-01

    Actomyosin bundles frequently form through aggregation of membrane-bound myosin clusters. One such example is the formation of the contractile ring in fission yeast from a broad band of cortical nodes. Nodes are macromolecular complexes containing several dozen myosin-II molecules and a few formin dimers. The condensation of a broad band of nodes into the contractile ring has been previously described by a search, capture, pull and release (SCPR) model. In SCPR, a random search process mediated by actin filaments nucleated by formins leads to transient actomyosin connections among nodes that pull one another into a ring. The SCPR model reproduces the transport of nodes over long distances and predicts observed clump-formation instabilities in mutants. However, the model does not generate transient linear elements and meshwork structures as observed in some wild-type and mutant cells during ring assembly. As a minimal model of node alignment, we added short-range aligning forces to the SCPR model representing currently unresolved mechanisms that may involve structural components, cross-linking and bundling proteins. We studied the effect of the local node alignment mechanism on ring formation numerically. We varied the new parameters and found viable rings for a realistic range of values. Morphologically, transient structures that form during ring assembly resemble those observed in experiments with wild-type and cdc25-22 cells. Our work supports a hierarchical process of ring self-organization involving components drawn together from distant parts of the cell followed by progressive stabilization.
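
    The added short-range alignment mechanism can be illustrated in isolation with a toy simulation: nodes carry an orientation and rotate toward the mean axis of their neighbours within a cutoff, and a nematic order parameter tracks the emergence of local alignment. This sketch deliberately omits the actin-mediated search, capture, pull and release dynamics of the SCPR model.

```python
"""Toy illustration of a short-range aligning interaction like the one added
to the SCPR model: nodes carry an orientation and, at each step, rotate toward
the mean orientation of neighbours within a cutoff. The actin-mediated
search/capture/pull/release dynamics are omitted entirely."""
import numpy as np

rng = np.random.default_rng(7)
n, cutoff, rate, steps = 60, 1.0, 0.2, 200

pos = rng.uniform(0, 5, size=(n, 2))       # node positions on a patch of cortex
theta = rng.uniform(0, np.pi, size=n)      # node orientations (nematic, mod pi)

def order_parameter(angles):
    """Nematic order: 1 when all orientations share an axis, ~0 when random."""
    return float(np.hypot(np.cos(2 * angles).mean(), np.sin(2 * angles).mean()))

print("initial order:", round(order_parameter(theta), 2))
for _ in range(steps):
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    neighbours = (d < cutoff) & ~np.eye(n, dtype=bool)
    for i in range(n):
        if neighbours[i].any():
            # Torque toward the local mean axis (factor 2 makes it nematic).
            diff = theta[neighbours[i]] - theta[i]
            theta[i] += rate * np.mean(np.sin(2 * diff)) / 2
print("final order:  ", round(order_parameter(theta), 2))
```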

  1. Current Status of VO Compliant Data Service in Japanese Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Shirasaki, Y.; Komiya, Y.; Ohishi, M.; Mizumoto, Y.; Ishihara, Y.; Tsutsumi, J.; Hiyama, T.; Nakamoto, H.; Sakamoto, M.

    2012-09-01

    In recent years, standards for building Virtual Observatory (VO) data services have been established through the efforts of the International Virtual Observatory Alliance (IVOA). We applied these newly established standards (SSAP, TAP) to our VO service toolkit, which was developed to implement the earlier VO standards SIAP and the (now deprecated) SkyNode. The toolkit can be easily installed and provides a GUI for constructing and managing VO services. In this paper, we describe the architecture of our toolkit and how it is used to start hosting a VO service.

  2. Python-based geometry preparation and simulation visualization toolkits for STEPS

    PubMed Central

    Chen, Weiliang; De Schutter, Erik

    2014-01-01

    STEPS is a stochastic reaction-diffusion simulation engine that implements a spatial extension of Gillespie's Stochastic Simulation Algorithm (SSA) in complex tetrahedral geometries. An extensive Python-based interface is provided to STEPS so that it can interact with the large number of scientific packages in Python. However, a gap existed between the interfaces of these packages and the STEPS user interface, where supporting toolkits could reduce the amount of scripting required for research projects. This paper introduces two new supporting toolkits that support geometry preparation and visualization for STEPS simulations. PMID:24782754

  3. Loss of Coolant Accident (LOCA) / Emergency Core Coolant System (ECCS) Evaluation of Risk-Informed Margins Management Strategies for a Representative Pressurized Water Reactor (PWR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szilard, Ronaldo Henriques

    A Risk Informed Safety Margin Characterization (RISMC) toolkit and methodology are proposed for investigating nuclear power plant core and fuel design and safety analysis, including postulated Loss-of-Coolant Accident (LOCA) analysis. This toolkit, built under an integrated evaluation model framework, is named the LOCA toolkit for the US (LOTUS). This demonstration includes coupled analysis of core design, fuel design, thermal hydraulics and systems analysis, using advanced risk analysis tools and methods to investigate a wide range of results.

  4. WIND Toolkit Power Data Site Index

    DOE Data Explorer

    Draxl, Caroline; Mathias-Hodge, Bri

    2016-10-19

    This spreadsheet contains per-site metadata for the WIND Toolkit sites and serves as an index for the raw data hosted on Globus connect (nrel#globus:/globusro/met_data). Aside from the metadata, per site average power and capacity factor are given. This data was prepared by 3TIER under contract by NREL and is public domain. Authoritative documentation on the creation of the underlying dataset is at: Final Report on the Creation of the Wind Integration National Dataset (WIND) Toolkit and API: http://www.nrel.gov/docs/fy16osti/66189.pdf

  5. Search for signatures of magnetically-induced alignment in the arrival directions measured by the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Pierre Auger Collaboration; Abreu, P.; Aglietta, M.; Ahn, E. J.; Albuquerque, I. F. M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Antičić, T.; Anzalone, A.; Aramo, C.; Arganda, E.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Bäcker, T.; Balzer, M.; Barber, K. B.; Barbosa, A. F.; Bardenet, R.; Barroso, S. L. C.; Baughman, B.; Bäuml, J.; Beatty, J. J.; Becker, B. R.; Becker, K. H.; Bellétoile, A.; Bellido, J. A.; Benzvi, S.; Berat, C.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blanco, F.; Blanco, M.; Bleve, C.; Blümer, H.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brogueira, P.; Brown, W. C.; Bruijn, R.; Buchholz, P.; Bueno, A.; Burton, R. E.; Caballero-Mora, K. S.; Caramete, L.; Caruso, R.; Castellina, A.; Catalano, O.; Cataldi, G.; Cazon, L.; Cester, R.; Chauvin, J.; Cheng, S. H.; Chiavassa, A.; Chinellato, J. A.; Chou, A.; Chudoba, J.; Clay, R. W.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cook, H.; Cooper, M. J.; Coppens, J.; Cordier, A.; Coutu, S.; Covault, C. E.; Creusot, A.; Criss, A.; Cronin, J.; Curutiu, A.; Dagoret-Campagne, S.; Dallier, R.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; de Domenico, M.; de Donato, C.; de Jong, S. J.; de La Vega, G.; de Mello Junior, W. J. M.; de Mello Neto, J. R. T.; de Mitri, I.; de Souza, V.; de Vries, K. D.; Decerprit, G.; Del Peral, L.; Del Río, M.; Deligny, O.; Dembinski, H.; Dhital, N.; di Giulio, C.; Diaz, J. C.; Díaz Castro, M. L.; Diep, P. N.; Dobrigkeit, C.; Docters, W.; D'Olivo, J. C.; Dong, P. N.; Dorofeev, A.; Dos Anjos, J. C.; Dova, M. T.; D'Urso, D.; Dutan, I.; Ebr, J.; Engel, R.; Erdmann, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Facal San Luis, P.; Fajardo Tapia, I.; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Ferrero, A.; Fick, B.; Filevich, A.; Filipčič, A.; Fliescher, S.; Fracchiolla, C. E.; Fraenkel, E. D.; Fröhlich, U.; Fuchs, B.; Gaior, R.; Gamarra, R. F.; Gambetta, S.; García, B.; García Gámez, D.; Garcia-Pinto, D.; Gascon, A.; Gemmeke, H.; Gesterling, K.; Ghia, P. L.; Giaccari, U.; Giller, M.; Glass, H.; Gold, M. S.; Golup, G.; Gomez Albarracin, F.; Gómez Berisso, M.; Gonçalves, P.; Gonzalez, D.; Gonzalez, J. G.; Gookin, B.; Góra, D.; Gorgi, A.; Gouffon, P.; Gozzini, S. R.; Grashorn, E.; Grebe, S.; Griffith, N.; Grigat, M.; Grillo, A. F.; Guardincerri, Y.; Guarino, F.; Guedes, G. P.; Guzman, A.; Hague, J. D.; Hansen, P.; Harari, D.; Harmsma, S.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Herve, A. E.; Hojvat, C.; Hollon, N.; Holmes, V. C.; Homola, P.; Hörandel, J. R.; Horneffer, A.; Hrabovský, M.; Huege, T.; Insolia, A.; Ionita, F.; Italiano, A.; Jarne, C.; Jiraskova, S.; Josebachuili, M.; Kadija, K.; Kampert, K. H.; Karhan, P.; Kasper, P.; Kégl, B.; Keilhauer, B.; Keivani, A.; Kelley, J. L.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Knapp, J.; Koang, D.-H.; Kotera, K.; Krohm, N.; Krömer, O.; Kruppke-Hansen, D.; Kuehn, F.; Kuempel, D.; Kulbartz, J. K.; Kunka, N.; La Rosa, G.; Lachaud, C.; Lautridou, P.; Leão, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. A.; Lemiere, A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; López, R.; Lopez Agüera, A.; Louedec, K.; Lozano Bahilo, J.; Lu, L.; Lucero, A.; Ludwig, M.; Lyberis, H.; Maccarone, M. C.; Macolino, C.; Maldera, S.; Mandat, D.; Mantsch, P.; Mariazzi, A. 
G.; Marin, J.; Marin, V.; Maris, I. C.; Marquez Falcon, H. R.; Marsella, G.; Martello, D.; Martin, L.; Martinez, H.; Martínez Bravo, O.; Mathes, H. J.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurizio, D.; Mazur, P. O.; Medina-Tanco, G.; Melissas, M.; Melo, D.; Menichetti, E.; Menshikov, A.; Mertsch, P.; Meurer, C.; Mićanović, S.; Micheletti, M. I.; Miller, W.; Miramonti, L.; Molina-Bueno, L.; Mollerach, S.; Monasor, M.; Monnier Ragaigne, D.; Montanet, F.; Morales, B.; Morello, C.; Moreno, E.; Moreno, J. C.; Morris, C.; Mostafá, M.; Moura, C. A.; Mueller, S.; Muller, M. A.; Müller, G.; Münchmeyer, M.; Mussa, R.; Navarra, G.; Navarro, J. L.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Neuser, J.; Nhung, P. T.; Niemietz, L.; Nierstenhoefer, N.; Nitz, D.; Nosek, D.; Nožka, L.; Nyklicek, M.; Oehlschläger, J.; Olinto, A.; Oliva, P.; Olmos-Gilbaja, V. M.; Ortiz, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Parente, G.; Parizot, E.; Parra, A.; Parsons, R. D.; Pastor, S.; Paul, T.; Pech, M.; PeĶala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Pesce, R.; Petermann, E.; Petrera, S.; Petrinca, P.; Petrolini, A.; Petrov, Y.; Petrovic, J.; Pfendner, C.; Phan, N.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Ponce, V. H.; Pontz, M.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Riggi, S.; Risse, M.; Ristori, P.; Rivera, H.; Rizi, V.; Roberts, J.; Robledo, C.; Rodrigues de Carvalho, W.; Rodriguez, G.; Rodriguez Martino, J.; Rodriguez Rojo, J.; Rodriguez-Cabo, I.; Rodríguez-Frías, M. D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Rouillé-D'Orfeuil, B.; Roulet, E.; Rovero, A. C.; Rühle, C.; Salamida, F.; Salazar, H.; Salina, G.; Sánchez, F.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, B.; Sarkar, S.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, A.; Schmidt, F.; Scholten, O.; Schoorlemmer, H.; Schovancova, J.; Schovánek, P.; Schröder, F.; Schulte, S.; Schuster, D.; Sciutto, S. J.; Scuderi, M.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Silva Lopez, H. H.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Spinka, H.; Squartini, R.; Stanic, S.; Stapleton, J.; Stasielak, J.; Stephan, M.; Strazzeri, E.; Stutz, A.; Suarez, F.; Suomijärvi, T.; Supanitsky, A. D.; Šuša, T.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Tamashiro, A.; Tapia, A.; Tartare, M.; Taşcău, O.; Tavera Ruiz, C. G.; Tcaciuc, R.; Tegolo, D.; Thao, N. T.; Thomas, D.; Tiffenberg, J.; Timmermans, C.; Tiwari, D. K.; Tkaczyk, W.; Todero Peixoto, C. J.; Tomé, B.; Tonachini, A.; Travnicek, P.; Tridapalli, D. B.; Tristram, G.; Trovato, E.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van den Berg, A. M.; Varela, E.; Vargas Cárdenas, B.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Verzi, V.; Vicha, J.; Videla, M.; Villaseñor, L.; Wahlberg, H.; Wahrlich, P.; Wainberg, O.; Walz, D.; Warner, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Westerhoff, S.; Whelan, B. J.; Wieczorek, G.; Wiencke, L.; Wilczyńska, B.; Wilczyński, H.; Will, M.; Williams, C.; Winchen, T.; Winnick, M. G.; Wommer, M.; Wundheiler, B.; Yamamoto, T.; Yapici, T.; Younk, P.; Yuan, G.; Yushkov, A.; Zamorano, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zaw, I.; Zepeda, A.; Zimbres Silva, M.; Ziolkowski, M.

    2012-01-01

    We present the results of an analysis of data recorded at the Pierre Auger Observatory in which we search for groups of directionally-aligned events (or 'multiplets') which exhibit a correlation between arrival direction and the inverse of the energy. These signatures are expected from sets of events coming from the same source after having been deflected by intervening coherent magnetic fields. The observation of several events from the same source would open the possibility to accurately reconstruct the position of the source and also measure the integral of the component of the magnetic field orthogonal to the trajectory of the cosmic rays. We describe the largest multiplets found and compute the probability that they appeared by chance from an isotropic distribution. We find no statistically significant evidence for the presence of multiplets arising from magnetic deflections in the present data.
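
    The multiplet signature being searched for is a correlation between the arrival-direction offset along a deflection axis and the inverse of the energy, since deflection in a coherent magnetic field scales roughly as 1/E. The toy sketch below generates synthetic events with that property and measures the correlation; it is only an illustration of the signature, not the Auger analysis chain.

```python
"""Toy sketch of the multiplet signature: for events deflected by a coherent
magnetic field, the angular offset along the deflection axis scales like 1/E.
Synthetic numbers only; not the Pierre Auger analysis chain."""
import numpy as np

rng = np.random.default_rng(2)

E = rng.uniform(20, 80, size=12)            # energies in EeV (toy values)
deflection_power = 100.0                    # deg * EeV, toy value
# Angular offset along the deflection axis, in degrees, with some scatter.
delta = deflection_power / E + rng.normal(scale=0.3, size=E.size)

# Correlation between angular offset and 1/E; a tight multiplet gives C ~ 1.
C = np.corrcoef(1.0 / E, delta)[0, 1]
print(f"correlation(1/E, offset) = {C:.3f}")
```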

  6. Search for signatures of magnetically-induced alignment in the arrival directions measured by the Pierre Auger Observatory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abreu, P.; /Lisbon, IST; Aglietta, M.

    2011-11-01

    We present the results of an analysis of data recorded at the Pierre Auger Observatory in which we search for groups of directionally-aligned events (or 'multiplets') which exhibit a correlation between arrival direction and the inverse of the energy. These signatures are expected from sets of events coming from the same source after having been deflected by intervening coherent magnetic fields. The observation of several events from the same source would open the possibility to accurately reconstruct the position of the source and also measure the integral of the component of the magnetic field orthogonal to the trajectory of the cosmic rays. We describe the largest multiplets found and compute the probability that they appeared by chance from an isotropic distribution. We find no statistically significant evidence for the presence of multiplets arising from magnetic deflections in the present data.

  7. PARTS: Probabilistic Alignment for RNA joinT Secondary structure prediction

    PubMed Central

    Harmanci, Arif Ozgun; Sharma, Gaurav; Mathews, David H.

    2008-01-01

    A novel method is presented for joint prediction of alignment and common secondary structures of two RNA sequences. The joint consideration of common secondary structures and alignment is accomplished by structural alignment over a search space defined by the newly introduced motif called matched helical regions. The matched helical region formulation generalizes previously employed constraints for structural alignment and thereby better accommodates the structural variability within RNA families. A probabilistic model based on pseudo free energies obtained from precomputed base pairing and alignment probabilities is utilized for scoring structural alignments. Maximum a posteriori (MAP) common secondary structures, sequence alignment and joint posterior probabilities of base pairing are obtained from the model via a dynamic programming algorithm called PARTS. The advantage of the more general structural alignment of PARTS is seen in secondary structure predictions for the RNase P family. For this family, the PARTS MAP predictions of secondary structures and alignment perform significantly better than prior methods that utilize a more restrictive structural alignment model. For the tRNA and 5S rRNA families, the richer structural alignment model of PARTS does not offer a benefit and the method therefore performs comparably with existing alternatives. For all RNA families studied, the posterior probability estimates obtained from PARTS offer an improvement over posterior probability estimates from a single sequence prediction. When considering the base pairings predicted over a threshold value of confidence, the combination of sensitivity and positive predictive value is superior for PARTS than for the single sequence prediction. PARTS source code is available for download under the GNU public license at http://rna.urmc.rochester.edu. PMID:18304945
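
    The scoring device mentioned above, pseudo free energies obtained from precomputed probabilities, commonly amounts to a log transform so that probabilities that multiply become energies that add, which suits dynamic programming. The snippet below illustrates that transform with made-up numbers; the exact functional form and weighting used by PARTS are not reproduced here.

```python
"""Illustration of turning precomputed probabilities into additive pseudo free
energies via a log transform, the general device the abstract describes; the
exact functional form and weighting used by PARTS are not reproduced here."""
import math

def pseudo_energy(p, floor=1e-6):
    """Low probability -> high pseudo energy; probabilities multiply,
    pseudo energies add, which suits dynamic programming."""
    return -math.log(max(p, floor))

# Toy joint score of a candidate structural alignment: sum of pseudo energies
# of its base-pairing and alignment events (probabilities are made up).
base_pair_probs = [0.92, 0.85, 0.40]
alignment_probs = [0.95, 0.70]
score = sum(map(pseudo_energy, base_pair_probs + alignment_probs))
print(f"pseudo free energy of candidate: {score:.3f} (lower is better)")
```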

  8. AntiClustal: Multiple Sequence Alignment by antipole clustering and linear approximate 1-median computation.

    PubMed

    Di Pietro, C; Di Pietro, V; Emmanuele, G; Ferro, A; Maugeri, T; Modica, E; Pigola, G; Pulvirenti, A; Purrello, M; Ragusa, M; Scalia, M; Shasha, D; Travali, S; Zimmitti, V

    2003-01-01

    In this paper we present a new Multiple Sequence Alignment (MSA) algorithm called AntiClustal. The method makes use of the commonly used idea of aligning homologous sequences belonging to classes generated by some clustering algorithm, and then continuing the alignment process in a bottom-up way along a suitable tree structure. The final result is then read at the root of the tree. Multiple sequence alignment within each cluster makes use of progressive alignment with the 1-median (center) of the cluster. The 1-median of a set S of sequences is the element of S which minimizes the average distance from any other sequence in S. Its exact computation requires quadratic time. The basic idea of our proposed algorithm is to make use of a simple and natural algorithmic technique based on randomized tournaments, which has been successfully applied to large-size search problems in general metric spaces. In particular, a clustering algorithm called the Antipole tree and an approximate linear-time 1-median computation are used. Our algorithm, compared with Clustal W, a widely used tool for MSA, shows better running times with fully comparable alignment quality. A successful biological application showing high amino acid conservation during the evolution of Xenopus laevis SOD2 is also cited.
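
    The randomized-tournament idea for approximating the 1-median can be written down compactly: split the set into small random groups, keep each group's exact local 1-median, and repeat on the survivors until one element remains, using only a roughly linear number of distance evaluations. The sketch below uses Hamming distance on toy sequences as a stand-in for an alignment-based distance.

```python
"""Sketch of approximate 1-median selection by randomized tournaments (the
linear-time idea referenced in the abstract), shown here with Hamming distance
on toy sequences rather than a full alignment-based distance."""
import random

def exact_1median(items, dist):
    """Element minimizing the total distance to all others (quadratic cost)."""
    return min(items, key=lambda a: sum(dist(a, b) for b in items))

def tournament_1median(items, dist, group_size=5, seed=0):
    """Repeatedly partition into small random groups and keep each group's
    local 1-median; the survivors play the next round until one remains."""
    rng = random.Random(seed)
    pool = list(items)
    while len(pool) > 1:
        rng.shuffle(pool)
        groups = [pool[i:i + group_size] for i in range(0, len(pool), group_size)]
        pool = [exact_1median(g, dist) for g in groups]
    return pool[0]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

rng = random.Random(1)
seqs = ["".join(rng.choice("ACGT") for _ in range(30)) for _ in range(200)]

approx = tournament_1median(seqs, hamming)
exact = exact_1median(seqs, hamming)

def avg_dist(center):
    return sum(hamming(center, s) for s in seqs) / len(seqs)

print(f"approximate median avg dist {avg_dist(approx):.2f} vs exact {avg_dist(exact):.2f}")
```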

  9. The 2016 ACCP Pharmacotherapy Didactic Curriculum Toolkit.

    PubMed

    Schwinghammer, Terry L; Crannage, Andrew J; Boyce, Eric G; Bradley, Bridget; Christensen, Alyssa; Dunnenberger, Henry M; Fravel, Michelle; Gurgle, Holly; Hammond, Drayton A; Kwon, Jennifer; Slain, Douglas; Wargo, Kurt A

    2016-11-01

    The 2016 American College of Clinical Pharmacy (ACCP) Educational Affairs Committee was charged with updating and contemporizing ACCP's 2009 Pharmacotherapy Didactic Curriculum Toolkit. The toolkit has been designed to guide schools and colleges of pharmacy in developing, maintaining, and modifying their curricula. The 2016 committee reviewed the recent medical literature and other documents to identify disease states that are responsive to drug therapy. Diseases and content topics were organized by organ system, when feasible, and grouped into tiers as defined by practice competency. Tier 1 topics should be taught in a manner that prepares all students to provide collaborative, patient-centered care upon graduation and licensure. Tier 2 topics are generally taught in the professional curriculum, but students may require additional knowledge or skills after graduation (e.g., residency training) to achieve competency in providing direct patient care. Tier 3 topics may not be taught in the professional curriculum; thus, graduates will be required to obtain the necessary knowledge and skills on their own to provide direct patient care, if required in their practice. The 2016 toolkit contains 276 diseases and content topics, of which 87 (32%) are categorized as tier 1, 133 (48%) as tier 2, and 56 (20%) as tier 3. The large number of tier 1 topics will require schools and colleges to use creative pedagogical strategies to achieve the necessary practice competencies. Almost half of the topics (48%) are tier 2, highlighting the importance of postgraduate residency training or equivalent practice experience to competently care for patients with these disorders. The Pharmacotherapy Didactic Curriculum Toolkit will continue to be updated to provide guidance to faculty at schools and colleges of pharmacy as these academic pharmacy institutions regularly evaluate and modify their curricula to keep abreast of scientific advances and associated practice changes. Access the current Pharmacotherapy Didactic Curriculum Toolkit at http://www.accp.com/docs/positions/misc/Toolkit_final.pdf. © 2016 Pharmacotherapy Publications, Inc.

  10. Patient-Centered Personal Health Record and Portal Implementation Toolkit for Ambulatory Clinics: A Feasibility Study.

    PubMed

    Nahm, Eun-Shim; Diblasi, Catherine; Gonzales, Eva; Silver, Kristi; Zhu, Shijun; Sagherian, Knar; Kongs, Katherine

    2017-04-01

    Personal health records and patient portals have been shown to be effective in managing chronic illnesses. Despite recent nationwide implementation efforts, the personal health record and patient portal adoption rates among patients are low, and the lack of support for patients using the programs remains a critical gap in most implementation processes. In this study, we implemented the Patient-Centered Personal Health Record and Patient Portal Implementation Toolkit in a large diabetes/endocrinology center and assessed its preliminary impact on personal health record and patient portal knowledge, self-efficacy, patient-provider communication, and adherence to treatment plans. Patient-Centered Personal Health Record and Patient Portal Implementation Toolkit is composed of Patient-Centered Personal Health Record and Patient Portal Implementation Toolkit-General, clinic-level resources for clinicians, staff, and patients, and Patient-Centered Personal Health Record and Patient Portal Implementation Toolkit Plus, an optional 4-week online resource program for patients ("MyHealthPortal"). First, Patient-Centered Personal Health Record and Patient Portal Implementation Toolkit-General was implemented, and all clinicians and staff were educated about the center's personal health record and patient portal. Then general patient education was initiated, while a randomized controlled trial was conducted to test the preliminary effects of "MyHealthPortal" using a small sample (n = 74) with three observations (baseline and 4 and 12 weeks). The intervention group showed significantly greater improvement than the control group in patient-provider communication at 4 weeks (t56 = 3.00, P = .004). For other variables, the intervention group tended to show greater improvement; however, the differences were not significant. In this preliminary study, Patient-Centered Personal Health Record and Patient Portal Implementation Toolkit showed potential for filling the gap in the current personal health record and patient portal implementation process. Further studies are needed using larger samples in other settings to ascertain if these results are generalizable to other populations.

  11. The Ames MER microscopic imager toolkit

    USGS Publications Warehouse

    Sargent, R.; Deans, Matthew; Kunz, C.; Sims, M.; Herkenhoff, K.

    2005-01-01

    The Mars Exploration Rovers, Spirit and Opportunity, have spent several successful months on Mars, returning gigabytes of images and spectral data to scientists on Earth. One of the instruments on the MER rovers, the Athena Microscopic Imager (MI), is a fixed focus, megapixel camera providing a ±3 mm depth of field and a 31×31 mm field of view at a working distance of 63 mm from the lens to the object being imaged. In order to maximize the science return from this instrument, we developed the Ames MI Toolkit and supported its use during the primary mission. The MI Toolkit is a set of programs that operate on collections of MI images, with the goal of making the data more understandable to the scientists on the ground. Because of the limited depth of field of the camera, and the often highly variable topography of the terrain being imaged, MI images of a given rock are often taken as a stack, with the Instrument Deployment Device (IDD) moving along a computed normal vector, pausing every few millimeters for the MI to acquire an image. The MI Toolkit provides image registration and focal section merging, which combine these images to form a single, maximally in-focus image, while compensating for changes in lighting as well as parallax due to the motion of the camera. The MI Toolkit also provides a 3-D reconstruction of the surface being imaged using stereo and can embed 2-D MI images as texture maps into 3-D meshes produced by other imagers on board the rover to provide context. The 2-D images and 3-D meshes output from the Toolkit are easily viewed by scientists using other mission tools, such as Viz or the MI Browser. This paper describes the MI Toolkit in detail, as well as our experience using it with scientists at JPL during the primary MER mission. © 2005 IEEE.

  12. The Ames MER Microscopic Imager Toolkit

    NASA Technical Reports Server (NTRS)

    Sargent, Randy; Deans, Matthew; Kunz, Clayton; Sims, Michael; Herkenhoff, Ken

    2005-01-01

    The Mars Exploration Rovers, Spirit and Opportunity, have spent several successful months on Mars, returning gigabytes of images and spectral data to scientists on Earth. One of the instruments on the MER rovers, the Athena Microscopic Imager (MI), is a fixed focus, megapixel camera providing a plus or minus 3 mm depth of field and a 31x31 mm field of view at a working distance of 63 mm from the lens to the object being imaged. In order to maximize the science return from this instrument, we developed the Ames MI Toolkit and supported its use during the primary mission. The MI Toolkit is a set of programs that operate on collections of MI images, with the goal of making the data more understandable to the scientists on the ground. Because of the limited depth of field of the camera, and the often highly variable topography of the terrain being imaged, MI images of a given rock are often taken as a stack, with the Instrument Deployment Device (IDD) moving along a computed normal vector, pausing every few millimeters for the MI to acquire an image. The MI Toolkit provides image registration and focal section merging, which combine these images to form a single, maximally in-focus image, while compensating for changes in lighting as well as parallax due to the motion of the camera. The MI Toolkit also provides a 3-D reconstruction of the surface being imaged using stereo and can embed 2-D MI images as texture maps into 3-D meshes produced by other imagers on board the rover to provide context. The 2-D images and 3-D meshes output from the Toolkit are easily viewed by scientists using other mission tools, such as Viz or the MI Browser. This paper describes the MI Toolkit in detail, as well as our experience using it with scientists at JPL during the primary MER mission.
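
    The focal section merging step described above can be sketched as a per-pixel focus-stacking rule: for every pixel, keep the slice of the image stack with the greatest local sharpness. The code below is only that merge step, with a Laplacian-based sharpness measure on synthetic images; the MI Toolkit additionally registers the slices and compensates for lighting changes and parallax.

```python
"""Minimal focus-stacking sketch: keep, per pixel, the slice of an image stack
with the largest local sharpness. The real MI Toolkit additionally registers
the slices and compensates for lighting and parallax; this is only the merge."""
import numpy as np
from scipy.ndimage import gaussian_filter, laplace, uniform_filter

def focal_merge(stack, window=9):
    """stack: (n_slices, H, W) grayscale images, each sharp at a different depth."""
    # Local sharpness: smoothed squared Laplacian response of each slice.
    sharp = np.stack([uniform_filter(laplace(s.astype(float)) ** 2, window)
                      for s in stack])
    best = np.argmax(sharp, axis=0)          # index of sharpest slice per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# Toy stack: two slices of the same scene, each blurred on a different half.
rng = np.random.default_rng(3)
truth = rng.random((64, 64))
slice1 = truth.copy()
slice1[:, 32:] = gaussian_filter(truth, 3)[:, 32:]
slice2 = truth.copy()
slice2[:, :32] = gaussian_filter(truth, 3)[:, :32]

merged = focal_merge(np.stack([slice1, slice2]))
print("mean absolute error vs. ground truth:", float(np.abs(merged - truth).mean()))
```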

  13. Development and Demonstration of a Networked Telepathology 3-D Imaging, Databasing, and Communication System

    DTIC Science & Technology

    1996-10-01

    aligned using an octree search algorithm combined with cross correlation analysis. Successive 4x downsampling with optional and specifiable neighborhood...desired and the search engine embedded in the OODBMS will find the requested imagery and queue it to the user for further analysis. This application was...obtained during Hoffmann-LaRoche production pathology imaging performed at UMICH. Versant works well and is easy to use; 3) Pathology Image Analysis

  14. mRAISE: an alternative algorithmic approach to ligand-based virtual screening

    NASA Astrophysics Data System (ADS)

    von Behren, Mathias M.; Bietz, Stefan; Nittinger, Eva; Rarey, Matthias

    2016-08-01

    Ligand-based virtual screening is a well-established method for finding new lead molecules in today's drug discovery process. To be applicable in day-to-day practice, such methods have to face multiple challenges. The most important is the reliability of the results, which can be shown and compared in retrospective studies. Furthermore, in the case of 3D methods, they need to provide biologically relevant molecular alignments of the ligands that can be further investigated by a medicinal chemist. Last but not least, they have to be able to screen large databases in reasonable time. Many algorithms for ligand-based virtual screening have been proposed in the past, most of them based on pairwise comparisons. Here, a new method called mRAISE is introduced. Based on structural alignments, it uses a descriptor-based bitmap search engine (RAISE) to achieve efficiency. Alignments created on the fly by the search engine are evaluated with an independent shape-based scoring function that is also used for ranking of compounds. The ranking as well as the alignment quality of the method are evaluated and compared to other state-of-the-art methods. On the commonly used Directory of Useful Decoys dataset, mRAISE achieves an average area under the ROC curve of 0.76, an average enrichment factor at 1% of 20.2 and an average hit rate at 1% of 55.5. With these results, mRAISE is always among the top-performing methods for which comparison data are available. To assess the quality of the alignments calculated by ligand-based virtual screening methods, we introduce a new dataset containing 180 prealigned ligands for 11 diverse targets. Within the top ten ranked conformations, the alignment closest to the X-ray structure calculated with mRAISE has a root-mean-square deviation of less than 2.0 Å for 80.8% of alignment pairs and achieves a median of less than 2.0 Å for eight of the 11 cases. The dataset used to rate the quality of the calculated alignments is freely available at http://www.zbh.uni-hamburg.de/mraise-dataset.html. The table of all PDB codes contained in the ensembles can be found in the supplementary material. The software tool mRAISE is freely available for evaluation purposes and academic use (see http://www.zbh.uni-hamburg.de/raise).
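
    The alignment-quality figure quoted above (the fraction of pairs whose best top-ten pose falls under 2.0 Å RMSD, plus the median of those best values) is straightforward bookkeeping once the RMSD values are in hand. The sketch below shows that bookkeeping on synthetic RMSD values; it does not reproduce mRAISE or the superposition computation itself.

```python
"""Sketch of the alignment-quality bookkeeping described above: for each
alignment pair, take the smallest RMSD among the top-ranked poses and report
the fraction of cases below a 2.0 A threshold, plus the median.
Synthetic RMSD values only."""
import numpy as np

rng = np.random.default_rng(4)

# rmsd[i, j]: RMSD (in Angstrom) of the j-th ranked pose for alignment pair i.
rmsd = rng.gamma(shape=2.0, scale=1.0, size=(180, 10))

best_of_top10 = rmsd.min(axis=1)
success = float((best_of_top10 < 2.0).mean())
print(f"pairs with best-of-top-10 RMSD < 2.0 A: {success:.1%}")
print(f"median best RMSD: {np.median(best_of_top10):.2f} A")
```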

  15. Ridesharing options analysis and practitioners' toolkit

    DOT National Transportation Integrated Search

    2010-12-01

    The purpose of this toolkit is to elaborate upon the recent changes in ridesharing, introduce the wide variety that exists in ridesharing programs today, and the developments in technology and funding availability that create greater incentives for p...

  16. Energy Savings Performance Contract Energy Sales Agreement Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    FEMP developed the Energy Savings Performance Contracting Energy Sales Agreement (ESPC ESA) Toolkit to provide federal agency contracting officers and other acquisition team members with information that will facilitate the timely execution of ESPC ESA projects.

  17. Engineering and algorithm design for an image processing Api: a technical report on ITK--the Insight Toolkit.

    PubMed

    Yoo, Terry S; Ackerman, Michael J; Lorensen, William E; Schroeder, Will; Chalana, Vikram; Aylward, Stephen; Metaxas, Dimitris; Whitaker, Ross

    2002-01-01

    We present the detailed planning and execution of the Insight Toolkit (ITK), an application programmer's interface (API) for the segmentation and registration of medical image data. This public resource has been developed through the NLM Visible Human Project and is in beta test as an open-source software offering under cost-free licensing. The toolkit concentrates on 3D medical data segmentation and registration algorithms, multimodal and multiresolution capabilities, and portable, platform-independent support for Windows and Linux/Unix systems. This toolkit was built using current practices in software engineering. Specifically, we embraced the concept of generic programming during the development of these tools, working extensively with C++ templates and the freedom and flexibility they allow. Software development tools for distributed consortium-based code development have been created and are also publicly available. We discuss our assumptions, design decisions, and some lessons learned.

  18. The Biological Reference Repository (BioR): a rapid and flexible system for genomics annotation.

    PubMed

    Kocher, Jean-Pierre A; Quest, Daniel J; Duffy, Patrick; Meiners, Michael A; Moore, Raymond M; Rider, David; Hossain, Asif; Hart, Steven N; Dinu, Valentin

    2014-07-01

    The Biological Reference Repository (BioR) is a toolkit for annotating variants. BioR stores public and user-specific annotation sources in indexed JSON-encoded flat files (catalogs). The BioR toolkit provides the functionality to combine and retrieve annotation from these catalogs via the command-line interface. Several catalogs from commonly used annotation sources and instructions for creating user-specific catalogs are provided. Commands from the toolkit can be combined with other UNIX commands for advanced annotation processing. We also provide instructions for the development of custom annotation pipelines. The package is implemented in Java and makes use of external tools written in Java and Perl. The toolkit can be executed on Mac OS X 10.5 and above or any Linux distribution. The BioR application, quickstart, and user guide documents and many biological examples are available at http://bioinformaticstools.mayo.edu. © The Author 2014. Published by Oxford University Press.

  19. The GeoViz Toolkit: Using component-oriented coordination methods for geographic visualization and analysis

    PubMed Central

    Hardisty, Frank; Robinson, Anthony C.

    2010-01-01

    In this paper we present the GeoViz Toolkit, an open-source, internet-delivered program for geographic visualization and analysis that features a diverse set of software components which can be flexibly combined by users who do not have programming expertise. The design and architecture of the GeoViz Toolkit allows us to address three key research challenges in geovisualization: allowing end users to create their own geovisualization and analysis component set on-the-fly, integrating geovisualization methods with spatial analysis methods, and making geovisualization applications sharable between users. Each of these tasks necessitates a robust yet flexible approach to inter-tool coordination. The coordination strategy we developed for the GeoViz Toolkit, called Introspective Observer Coordination, leverages and combines key advances in software engineering from the last decade: automatic introspection of objects, software design patterns, and reflective invocation of methods. PMID:21731423
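
    The coordination strategy named above combines automatic introspection of objects with reflective wiring of event producers to listeners. The Python sketch below illustrates that general idea with a simple method-name convention; the GeoViz Toolkit itself is written in Java and relies on bean introspection, design patterns, and reflective invocation, so the class and method names here are assumptions for illustration only.

```python
"""Hedged sketch of the 'introspective observer coordination' idea: a
coordinator inspects components at runtime, finds producers and listeners of
the same named event (here by method-name convention), and wires them together
reflectively. Not the GeoViz Toolkit's actual (Java) implementation."""

class Coordinator:
    def __init__(self, components):
        self.components = components

    def wire(self, event):
        """Connect every 'fire_<event>' producer to every 'on_<event>' listener."""
        listeners = [getattr(c, f"on_{event}") for c in self.components
                     if hasattr(c, f"on_{event}")]
        for c in self.components:
            if hasattr(c, f"fire_{event}"):
                c._listeners = listeners   # reflective hookup, no hard-coded links

class MapView:
    def fire_selection(self, ids):
        for callback in getattr(self, "_listeners", []):
            callback(ids)

class ScatterPlot:
    def on_selection(self, ids):
        print("scatter plot highlights", ids)

class HistogramView:
    def on_selection(self, ids):
        print("histogram highlights", ids)

parts = [MapView(), ScatterPlot(), HistogramView()]
Coordinator(parts).wire("selection")
parts[0].fire_selection([3, 7, 12])
```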

  20. Security Hardened Cyber Components for Nuclear Power Plants: Phase I SBIR Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franusich, Michael D.

    SpiralGen, Inc. built a proof-of-concept toolkit for enhancing the cyber security of nuclear power plants and other critical infrastructure with high-assurance instrumentation and control code. The toolkit is based on technology from the DARPA High-Assurance Cyber Military Systems (HACMS) program, which has focused on applying the science of formal methods to the formidable set of problems involved in securing cyber physical systems. The primary challenges beyond HACMS in developing this toolkit were to make the new technology usable by control system engineers and compatible with the regulatory and commercial constraints of the nuclear power industry. The toolkit, packaged as a Simulink add-on, allows a system designer to assemble a high-assurance component from formally specified and proven blocks and generate provably correct control and monitor code for that subsystem.

  1. Critical Care Organizations: Business of Critical Care and Value/Performance Building.

    PubMed

    Leung, Sharon; Gregg, Sara R; Coopersmith, Craig M; Layon, A Joseph; Oropello, John; Brown, Daniel R; Pastores, Stephen M; Kvetan, Vladimir

    2018-01-01

    New, value-based regulations and reimbursement structures are creating historic care management challenges, thinning the margins and threatening the viability of hospitals and health systems. The Society of Critical Care Medicine convened a taskforce of Academic Leaders in Critical Care Medicine on February 22, 2016, during the 45th Critical Care Congress to develop a toolkit drawing on the experience of successful leaders of critical care organizations in North America for advancing critical care organizations (Appendix 1). The goal of this article was to provide a roadmap and call attention to key factors that adult critical care medicine leadership in both academic and nonacademic settings should consider when planning for value-based care. Relevant medical literature was accessed through a literature search. Material published by federal health agencies and other specialty organizations was also reviewed. Collaboratively and iteratively, taskforce members corresponded by electronic mail and held monthly conference calls to finalize this report. The business and value/performance critical care organization building section was composed of leaders of critical care organizations with expertise in critical care administration, healthcare management, and clinical practice. Two phases of critical care organization care integration are described: "horizontal," within the system and regionalization of care as an initial phase, and "vertical," with a post-ICU and postacute care continuum as a succeeding phase. The tools required for the clinical and financial transformation are provided, including the essential prerequisites of forming a critical care organization; the manner in which a critical care organization can help manage transformational domains is considered. Lastly, how to achieve organizational health system support for critical care organization implementation is discussed. A critical care organization that incorporates functional clinical horizontal and vertical integration for ICU patients and survivors, aligns strategy and operations with those of the parent health system, and encompasses knowledge on finance and risk will be better positioned to succeed in the value-based world.

  2. Fast and accurate reference-free alignment of subtomograms.

    PubMed

    Chen, Yuxiang; Pfeffer, Stefan; Hrabe, Thomas; Schuller, Jan Michael; Förster, Friedrich

    2013-06-01

    In cryoelectron tomography, alignment and averaging of subtomograms, each depicting the same macromolecule, improves the resolution compared to the individual subtomograms. Major challenges of subtomogram alignment are noise enhancement due to overfitting, the bias of an initial reference in the iterative alignment process, and the computational cost of processing increasingly large amounts of data. Here, we propose an efficient and accurate alignment algorithm via a generalized convolution theorem, which allows computation of a constrained correlation function using spherical harmonics. This formulation increases the computational speed of rotational matching dramatically compared to rotation search in Cartesian space, without sacrificing accuracy, in contrast to other spherical harmonic based approaches. Using this sampling method, a reference-free alignment procedure is proposed to tackle reference bias and overfitting, which also includes contrast transfer function correction by Wiener filtering. Application of the method to simulated data allowed us to obtain resolutions near the ground truth. For two experimental datasets, ribosomes from yeast lysate and purified 20S proteasomes, we achieved reconstructions of approximately 20 Å and 16 Å, respectively. The software is ready-to-use and made public to the community. Copyright © 2013 Elsevier Inc. All rights reserved.
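
    The dramatic speedup claimed above is measured against exhaustive rotation search in Cartesian space, which is easy to sketch: rotate one volume over a grid of angles, score normalized cross-correlation against the other, and keep the best angle. The toy code below shows only that baseline on synthetic volumes and a single rotation axis; the constrained, spherical-harmonic correlation of the proposed method is not reproduced here.

```python
"""Baseline the paper accelerates: exhaustive rotational matching of two
volumes by rotating one over an angle grid and scoring normalized
cross-correlation in Cartesian space. Toy volumes, one rotation axis,
and a coarse grid only."""
import numpy as np
from scipy.ndimage import gaussian_filter, rotate

def ncc(a, b):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

rng = np.random.default_rng(5)
ref = gaussian_filter(rng.random((24, 24, 24)), 2)          # toy "macromolecule"
sub = rotate(ref, angle=40, axes=(0, 1), reshape=False)     # rotated copy
sub += rng.normal(scale=0.01, size=sub.shape)               # a little noise

angles = np.arange(0, 180, 5)
scores = [ncc(rotate(sub, -a, axes=(0, 1), reshape=False), ref) for a in angles]
best = angles[int(np.argmax(scores))]
print(f"recovered in-plane rotation: {best} deg (true value: 40 deg)")
```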

  3. LS-align: an atom-level, flexible ligand structural alignment algorithm for high-throughput virtual screening.

    PubMed

    Hu, Jun; Liu, Zi; Yu, Dong-Jun; Zhang, Yang

    2018-02-15

    Sequence-order independent structural comparison, also called structural alignment, of small ligand molecules is often needed for computer-aided virtual drug screening. Although many ligand structure alignment programs have been proposed, most of them build the alignments based on rigid-body shape comparison, which cannot provide atom-specific alignment information nor allow structural variation; both abilities are critical to efficient high-throughput virtual screening. We propose a novel ligand comparison algorithm, LS-align, to generate fast and accurate atom-level structural alignments of ligand molecules through an iterative heuristic search of a target function that combines inter-atom distance with mass and chemical bond comparisons. LS-align contains two modules, Rigid-LS-align and Flexi-LS-align, designed for rigid-body and flexible alignments, respectively, where a ligand-size-independent, statistics-based scoring function is developed to evaluate the similarity of ligand molecules relative to random ligand pairs. Large-scale benchmark tests are performed on prioritizing chemical ligands of 102 protein targets involving 1,415,871 candidate compounds from the DUD-E (Database of Useful Decoys: Enhanced) database, where LS-align achieves an average enrichment factor (EF) of 22.0 at the 1% cutoff and an AUC score of 0.75, which are significantly higher than those of other state-of-the-art methods. Detailed data analyses show that the advanced performance is mainly attributed to the design of the target function, which combines structural and chemical information to enhance the sensitivity of recognizing subtle differences between ligand molecules, and to the introduction of structural flexibility, which helps capture the conformational changes induced by ligand-receptor binding interactions. These data demonstrate a new avenue to improve virtual screening efficiency through the development of sensitive ligand structural alignments. LS-align is available at http://zhanglab.ccmb.med.umich.edu/LS-align/. Contact: njyudj@njust.edu.cn or zhng@umich.edu. Supplementary data are available at Bioinformatics online.
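
    The two headline numbers, enrichment factor at a 1% cutoff and area under the ROC curve, can be computed from a ranked list of scores with known active/decoy labels. The sketch below does exactly that on synthetic scores; it is independent of LS-align itself.

```python
"""Sketch of the two screening metrics quoted above: enrichment factor at 1%
(EF1%) and ROC AUC, computed from scores with known active/decoy labels.
Synthetic scores only."""
import numpy as np

rng = np.random.default_rng(6)

n_active, n_decoy = 200, 19800
scores = np.concatenate([rng.normal(1.0, 1.0, n_active),    # actives score higher
                         rng.normal(0.0, 1.0, n_decoy)])
labels = np.concatenate([np.ones(n_active), np.zeros(n_decoy)])

order = np.argsort(-scores)                 # best-scoring compounds first
ranked = labels[order]

top = int(round(0.01 * len(ranked)))        # top 1% of the library
ef1 = ranked[:top].mean() / labels.mean()   # active rate in top 1% vs overall

# ROC AUC via the rank-sum (Mann-Whitney) identity.
ranks = np.empty(len(scores))
ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
auc = (ranks[labels == 1].sum() - n_active * (n_active + 1) / 2) / (n_active * n_decoy)

print(f"EF at 1%: {ef1:.1f}, AUC: {auc:.3f}")
```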

  4. Margins of safety provided by COSHH Essentials and the ILO Chemical Control Toolkit.

    PubMed

    Jones, Rachael M; Nicas, Mark

    2006-03-01

    COSHH Essentials, developed by the UK Health and Safety Executive, and the Chemical Control Toolkit (Toolkit) proposed by the International Labor Organization, are 'control banding' approaches to workplace risk management intended for use by proprietors of small and medium-sized businesses. Both systems group chemical substances into hazard bands based on toxicological endpoint and potency. COSHH Essentials uses the European Union's Risk-phrases (R-phrases), whereas the Toolkit uses R-phrases and the Globally Harmonized System (GHS) of Classification and Labeling of Chemicals. Each hazard band is associated with a range of airborne concentrations, termed exposure bands, which are to be attained by the implementation of recommended control technologies. Here we analyze the margin of safety afforded by the systems and, for each hazard band, define the minimal margin as the ratio of the minimum airborne concentration that produced the toxicological endpoint of interest in experimental animals to the maximum concentration in workplace air permitted by the exposure band. We found that the minimal margins were always <100, with some ranging to <1, and inversely related to molecular weight. The Toolkit-GHS system generally produced margins equal to or larger than those of COSHH Essentials, suggesting that the Toolkit-GHS system is more protective of worker health. Although these systems predict exposures comparable with current occupational exposure limits, we argue that the minimal margins are better indicators of health protection. Further, given the small margins observed, we feel it is important that revisions of these systems provide the exposure bands to users, so as to permit evaluation of control technology capture efficiency.
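
    The minimal margin defined in this study is a simple ratio: the lowest airborne concentration that produced the toxicological endpoint in animals divided by the highest concentration the exposure band permits, with ppm band ceilings converted to mg/m3 using the molecular weight. The sketch below uses illustrative numbers only, not actual COSHH Essentials band limits, and shows the inverse dependence on molecular weight noted above.

```python
"""Sketch of the 'minimal margin of safety' defined above: the lowest airborne
concentration producing the toxicological effect in animal studies divided by
the highest concentration the exposure band permits. A band ceiling given in
ppm is converted to mg/m3 with the molar gas volume at 25 C; all numbers are
illustrative, not actual COSHH Essentials band limits."""

MOLAR_VOLUME_L = 24.45  # litres per mole of gas at 25 C and 1 atm

def minimal_margin(effect_mg_m3, band_ceiling_ppm, mol_weight):
    ceiling_mg_m3 = band_ceiling_ppm * mol_weight / MOLAR_VOLUME_L
    return effect_mg_m3 / ceiling_mg_m3

# Same hypothetical effect level, two molecular weights: the margin shrinks as
# molecular weight grows, the inverse relation noted in the abstract.
for mw in (60.0, 180.0):
    m = minimal_margin(effect_mg_m3=500.0, band_ceiling_ppm=50.0, mol_weight=mw)
    print(f"MW {mw:5.1f} g/mol -> minimal margin {m:.1f}")
```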

  5. Effects of a Short Video-Based Resident-as-Teacher Training Toolkit on Resident Teaching.

    PubMed

    Ricciotti, Hope A; Freret, Taylor S; Aluko, Ashley; McKeon, Bri Anne; Haviland, Miriam J; Newman, Lori R

    2017-10-01

    To pilot a short video-based resident-as-teacher training toolkit and assess its effect on resident teaching skills in clinical settings. A video-based resident-as-teacher training toolkit was previously developed by educational experts at Beth Israel Deaconess Medical Center, Harvard Medical School. Residents were recruited from two academic hospitals, watched two videos from the toolkit ("Clinical Teaching Skills" and "Effective Clinical Supervision"), and completed an accompanying self-study guide. A novel assessment instrument for evaluating the effect of the toolkit on teaching was created through a modified Delphi process. Before and after the intervention, residents were observed leading a clinical teaching encounter and scored using the 15-item assessment instrument. The primary outcome of interest was the change in number of skills exhibited, which was assessed using the Wilcoxon signed-rank test. Twenty-eight residents from two academic hospitals were enrolled, and 20 (71%) completed all phases of the study. More than one third of residents who volunteered to participate reported no prior formal teacher training. After completing two training modules, residents demonstrated a significant increase in the median number of teaching skills exhibited in a clinical teaching encounter, from 7.5 (interquartile range 6.5-9.5) to 10.0 (interquartile range 9.0-11.5; P<.001). Of the 15 teaching skills assessed, there were significant improvements in asking for the learner's perspective (P=.01), providing feedback (P=.005), and encouraging questions (P=.046). Using a resident-as-teacher video-based toolkit was associated with improvements in teaching skills in residents from multiple specialties.

  6. Innovations and Challenges of Implementing a Glucose Gel Toolkit for Neonatal Hypoglycemia.

    PubMed

    Hammer, Denise; Pohl, Carla; Jacobs, Peggy J; Kaufman, Susan; Drury, Brenda

    2018-05-24

    Transient neonatal hypoglycemia occurs most commonly in newborns who are small for gestational age, large for gestational age, infants of diabetic mothers, and late preterm infants. An exact blood glucose value has not been determined for neonatal hypoglycemia, and it is important to note that poor neurologic outcomes can occur if hypoglycemia is left untreated. Interventions that separate mothers and newborns, as well as use of formula to treat hypoglycemia, have the potential to disrupt exclusive breastfeeding. To determine whether implementation of a toolkit that includes 40% glucose gel and is designed to support staff in adopting the practice change for management of newborns at risk for hypoglycemia, in an obstetric unit with a level 2 nursery, will decrease admissions to the Intermediate Care Nursery and increase exclusive breastfeeding. This descriptive study used a retrospective chart review for pre/postimplementation of the Management of Newborns at Risk for Hypoglycemia Toolkit (Toolkit) using a convenience sample of at-risk newborns in the first 2 days of life to evaluate the proposed outcomes. Following implementation of the Toolkit, at-risk newborns had a clinically but not statistically significant 6.5% increase in exclusive breastfeeding and a clinically but not statistically significant 5% decrease in admissions to the Intermediate Care Nursery. The Toolkit was designed for ease of staff use and to improve outcomes for the at-risk newborn. Future research includes replication at other level 2 and level 1 obstetric centers and investigation into the number of 40% glucose gel doses that can safely be administered.

  7. Improved quality of care for patients infected or colonised with ESBL-producing Enterobacteriaceae in a French teaching hospital: impact of an interventional prospective study and development of specific tools.

    PubMed

    Mondain, Véronique; Lieutier, Florence; Pulcini, Céline; Degand, Nicolas; Landraud, Luce; Ruimy, Raymond; Fosse, Thierry; Roger, Pierre Marie

    2018-05-01

    The increasing incidence of ESBL-producing Enterobacteriaceae (ESBL-E) in France prompted the publication of national recommendations in 2010. Based on these, we developed a toolkit and a warning system to optimise management of ESBL-E infected or colonised patients in both community and hospital settings. The impact of this initiative on quality of care was assessed in a teaching hospital. The ESBL toolkit was developed in 2011 during multidisciplinary meetings involving a regional network of hospital, private clinic and laboratory staff in Southeastern France. It includes antibiotic treatment protocols, a check list, mail templates and a patient information sheet focusing on infection control. Upon identification of ESBL-E, the warning system involves alerting the attending physician and the infectious disease (ID) advisor, with immediate, advice-based implementation of the toolkit. The procedure and toolkit were tested in our teaching hospital. Patient management was compared before and after implementation of the toolkit over two 3-month periods (July-October 2010 and 2012). Implementation of the ESBL-E warning system and ESBL-E toolkit was tested for 87 patients in 2010 and 92 patients in 2012, resulting in improved patient management: expert advice sought and followed (16 vs 97%), information provided to the patient's general practitioner (18 vs 63%) and coding of the condition in the patient's medical file (17 vs 59%), respectively. Our multidisciplinary strategy improved quality of care for in-patients infected or colonised with ESBL-E, increasing compliance with national recommendations.

  8. RISMC Toolkit and Methodology Research and Development Plan for External Hazards Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coleman, Justin Leigh

    This report includes the description and development plan for a Risk Informed Safety Margins Characterization (RISMC) toolkit and methodology that will evaluate multihazard risk in an integrated manner to support the operating nuclear fleet.

  9. Resource Toolkit for Working with Education Service Providers

    ERIC Educational Resources Information Center

    National Association of Charter School Authorizers (NJ1), 2005

    2005-01-01

    This resource toolkit for working with education service providers contains four sections. Section 1, "Roles, Responsibilities, and Relationships," contains: (1) "Purchasing Services from an Educational Management Organization," excerpted from "The Charter School Administrative and Governance Guide" (Massachusetts Dept. of…

  10. Fast online and index-based algorithms for approximate search of RNA sequence-structure patterns

    PubMed Central

    2013-01-01

    Background It is well known that the search for homologous RNAs is more effective if both sequence and structure information is incorporated into the search. However, current tools for searching with RNA sequence-structure patterns cannot fully handle mutations occurring on both these levels or are simply not fast enough for searching large sequence databases because of the high computational costs of the underlying sequence-structure alignment problem. Results We present new fast index-based and online algorithms for approximate matching of RNA sequence-structure patterns supporting a full set of edit operations on single bases and base pairs. Our methods efficiently compute semi-global alignments of structural RNA patterns and substrings of the target sequence whose costs satisfy a user-defined sequence-structure edit distance threshold. For this purpose, we introduce a new computing scheme to optimally reuse the entries of the required dynamic programming matrices for all substrings and combine it with a technique for avoiding the alignment computation of non-matching substrings. Our new index-based methods exploit suffix arrays preprocessed from the target database and achieve running times that are sublinear in the size of the searched sequences. To support the description of RNA molecules that fold into complex secondary structures with multiple ordered sequence-structure patterns, we use fast algorithms for the local or global chaining of approximate sequence-structure pattern matches. The chaining step removes spurious matches from the set of intermediate results, in particular of patterns with little specificity. In benchmark experiments on the Rfam database, our improved online algorithm is faster than the best previous method by up to a factor of 45. Our best new index-based algorithm achieves a speedup factor of 560. Conclusions The presented methods achieve considerable speedups compared to the best previous method. This, together with the expected sublinear running time of the presented index-based algorithms, allows for the first time approximate matching of RNA sequence-structure patterns in large sequence databases. Beyond the algorithmic contributions, we provide with RaligNAtor a robust and well documented open-source software package implementing the algorithms presented in this manuscript. The RaligNAtor software is available at http://www.zbh.uni-hamburg.de/ralignator. PMID:23865810
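
    A minimal sketch of the core matching idea described above, simplified to plain sequence edit distance (no base-pair operations): report every end position in the target at which the pattern aligns semi-globally within a user-defined edit-distance threshold. RaligNAtor's actual algorithms additionally score structure edit operations and reuse dynamic programming entries across substrings; the function and example sequences below are illustrative assumptions only.

      # Semi-global matching under a sequence-only edit-distance model.
      def semi_global_matches(pattern: str, target: str, threshold: int):
          m = len(pattern)
          # prev[i] = best cost of aligning pattern[:i] against a substring
          # of the target ending at the previous position (start is free).
          prev = list(range(m + 1))
          hits = []
          for j, t in enumerate(target, start=1):
              curr = [0] * (m + 1)                       # a match may start here
              for i in range(1, m + 1):
                  cost = 0 if pattern[i - 1] == t else 1
                  curr[i] = min(prev[i - 1] + cost,      # (mis)match
                                prev[i] + 1,             # extra base in target
                                curr[i - 1] + 1)         # pattern base missing
              if curr[m] <= threshold:
                  hits.append((j, curr[m]))              # (end position, distance)
              prev = curr
          return hits

      print(semi_global_matches("ACGU", "GGACGUUACCU", threshold=1))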

  11. Improving safety on rural local and tribal roads safety toolkit.

    DOT National Transportation Integrated Search

    2014-08-01

    Rural roadway safety is an important issue for communities throughout the country and presents a challenge for state, local, and Tribal agencies. The Improving Safety on Rural Local and Tribal Roads Safety Toolkit was created to help rural local ...

  12. A methodological toolkit for field assessments of artisanally mined alluvial diamond deposits

    USGS Publications Warehouse

    Chirico, Peter G.; Malpeli, Katherine C.

    2014-01-01

    This toolkit provides a standardized checklist of critical issues relevant to artisanal mining-related field research. An integrated sociophysical geographic approach to collecting data at artisanal mine sites is outlined. The implementation and results of a multistakeholder approach to data collection, carried out in the assessment of Guinea’s artisanally mined diamond deposits, also are summarized. This toolkit, based on recent and successful field campaigns in West Africa, has been developed as a reference document to assist other government agencies or organizations in collecting the data necessary for artisanal diamond mining or similar natural resource assessments.

  13. TRSkit: A Simple Digital Library Toolkit

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Esler, Sandra L.

    1997-01-01

    This paper introduces TRSkit, a simple and effective toolkit for building digital libraries on the World Wide Web. The toolkit was developed for the creation of the Langley Technical Report Server and the NASA Technical Report Server, but is applicable to most simple distribution paradigms. TRSkit contains a handful of freely available software components designed to be run under the UNIX operating system and served via the World Wide Web. The intended customer is the person who must continuously and synchronously distribute anywhere from hundreds to hundreds of thousands of information units and does not have extensive resources to devote to the problem.

  14. Toolkit for testing scientific CCD cameras

    NASA Astrophysics Data System (ADS)

    Uzycki, Janusz; Mankiewicz, Lech; Molak, Marcin; Wrochna, Grzegorz

    2006-03-01

    The CCD Toolkit (1) is a software tool for testing CCD cameras which allows users to measure important characteristics of a camera such as readout noise, total gain, dark current, 'hot' pixels, useful area, etc. The application performs a statistical analysis of images saved in the FITS format commonly used in astronomy. The graphical interface is based on the ROOT package, which offers high functionality and flexibility. The program was developed to ensure compatibility with both Windows and Linux. The CCD Toolkit was created for the "Pi of the Sky" project collaboration (2).
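
    As an illustration of two of the characteristics listed above (readout noise and total gain), the following hedged sketch applies the standard photon-transfer estimate to two bias frames and two flat-field frames read from FITS files. The file names and the use of astropy are assumptions; the CCD Toolkit itself is built on the ROOT package rather than on this code.

      import numpy as np
      from astropy.io import fits

      def load(name):
          return fits.getdata(name).astype(float)

      bias1, bias2 = load("bias1.fits"), load("bias2.fits")
      flat1, flat2 = load("flat1.fits"), load("flat2.fits")

      # Differencing two nominally identical frames cancels fixed-pattern
      # structure; the per-frame variance is half the difference variance.
      bias_var = np.var(bias1 - bias2) / 2.0
      flat_var = np.var(flat1 - flat2) / 2.0

      signal = (flat1.mean() + flat2.mean() - bias1.mean() - bias2.mean()) / 2.0
      gain = signal / (flat_var - bias_var)          # e-/ADU
      read_noise = gain * np.sqrt(bias_var)          # electrons (rms)

      print(f"gain = {gain:.2f} e-/ADU, readout noise = {read_noise:.2f} e-")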

  15. RAVE—a Detector-independent vertex reconstruction toolkit

    NASA Astrophysics Data System (ADS)

    Waltenberger, Wolfgang; Mitaroff, Winfried; Moser, Fabian

    2007-10-01

    A detector-independent toolkit for vertex reconstruction (RAVE) is being developed, along with a standalone framework (VERTIGO) for testing, analyzing and debugging. The core algorithms represent the state of the art for geometric vertex finding and fitting by both linear (Kalman filter) and robust estimation methods. Main design goals are ease of use, flexibility for embedding into existing software frameworks, extensibility, and openness. The implementation is based on modern object-oriented techniques, is coded in C++ with interfaces for Java and Python, and follows an open-source approach. A beta release is available. VERTIGO = "vertex reconstruction toolkit and interface to generic objects".

  16. Methodology for the development of a taxonomy and toolkit to evaluate health-related habits and lifestyle (eVITAL)

    PubMed Central

    2010-01-01

    Background Chronic diseases cause an ever-increasing percentage of morbidity and mortality, but many have modifiable risk factors. Many behaviors that predispose or protect an individual to chronic disease are interrelated, and therefore are best approached using an integrated model of health and the longevity paradigm, using years lived without disability as the endpoint. Findings This study used a 4-phase mixed qualitative design to create a taxonomy and related online toolkit for the evaluation of health-related habits. Core members of a working group conducted a literature review and created a framing document that defined relevant constructs. This document was revised, first by a working group and then by a series of multidisciplinary expert groups. The working group and expert panels also designed a systematic evaluation of health behaviors and risks, which was computerized and evaluated for feasibility. A demonstration study of the toolkit was performed in 11 healthy volunteers. Discussion In this protocol, we used forms of the community intelligence approach, including frame analysis, feasibility, and demonstration, to develop a clinical taxonomy and an online toolkit with standardized procedures for screening and evaluation of multiple domains of health, with a focus on longevity and the goal of integrating the toolkit into routine clinical practice. Trial Registration IMSERSO registry 200700012672 PMID:20334642

  17. Implementing a Breastfeeding Toolkit for Nursing Education.

    PubMed

    Folker-Maglaya, Catherine; Pylman, Maureen E; Couch, Kimberly A; Spatz, Diane L; Marzalik, Penny R

    All health professional organizations recommend exclusive breastfeeding for at least 6 months, with continued breastfeeding for 1 year or more after birth. Women cite lack of support from health professionals as a barrier to breastfeeding. Meanwhile, breastfeeding education is not considered essential to basic nursing education and students are not adequately prepared to support breastfeeding women. Therefore, a toolkit of comprehensive evidence-based breastfeeding educational materials was developed to provide essential breastfeeding knowledge. A study was performed to determine the effectiveness of the breastfeeding toolkit education in an associate degree nursing program. A pretest/posttest survey design with intervention and comparison groups was used. One hundred fourteen students completed pre- and posttests. Student knowledge was measured using a 12-item survey derived with minor modifications from Marzalik's 2004 instrument measuring breastfeeding knowledge. When pre- and posttest scores were compared within groups, both groups' knowledge scores increased. A change score was calculated, and the intervention group had a significantly higher mean change score. When regression analysis was used to control for the pretest score, belonging to the intervention group increased student scores but not significantly. The toolkit was developed to provide a curriculum that demonstrates enhanced learning to prepare nursing students for practice. The toolkit could be used in other settings, such as to educate staff nurses working with childbearing families.

  18. The Automatic Parallelisation of Scientific Application Codes Using a Computer Aided Parallelisation Toolkit

    NASA Technical Reports Server (NTRS)

    Ierotheou, C.; Johnson, S.; Leggett, P.; Cross, M.; Evans, E.; Jin, Hao-Qiang; Frumkin, M.; Yan, J.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. Historically, the lack of a programming standard for using directives and rather limited performance due to poor scalability have affected the take-up of this programming model. Significant progress has been made in hardware and software technologies; as a result, the performance of parallel programs using compiler directives has also improved. The introduction of an industrial standard for shared-memory programming with directives, OpenMP, has also addressed the issue of portability. In this study, we have extended the computer aided parallelization toolkit (developed at the University of Greenwich) to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline the way in which loop types are categorized and how efficient OpenMP directives can be defined and placed using the in-depth interprocedural analysis that is carried out by the toolkit. We also discuss the application of the toolkit on the NAS Parallel Benchmarks and a number of real-world application codes. This work not only demonstrates the great potential of using the toolkit to quickly parallelize serial programs but also the good performance achievable on up to 300 processors for hybrid message passing and directive-based parallelizations.

  19. Tailored prevention of inpatient falls: development and usability testing of the fall TIPS toolkit.

    PubMed

    Zuyev, Lyubov; Benoit, Angela N; Chang, Frank Y; Dykes, Patricia C

    2011-02-01

    Patient falls and fall-related injuries are serious problems in hospitals. The Fall TIPS application aims to prevent patient falls by translating routine nursing fall risk assessment into a decision support intervention that communicates fall risk status and creates a tailored evidence-based plan of care that is accessible to the care team, patients, and family members. In our design and implementation of the Fall TIPS toolkit, we used the Spiral Software Development Life Cycle model. The toolkit can generate three output tools: a bed poster, a plan of care, and a patient education handout. A preliminary design of the application was based on initial requirements defined by project leaders and informed by focus groups with end users. The preliminary design partially simulated the paper version of the Morse Fall Scale currently used in hospitals involved in the research study. Strengths and weaknesses of the first prototype were identified by heuristic evaluation. Usability testing was performed at the sites where the research study is being implemented. Suggestions from end users who participated in usability testing were either incorporated directly into the toolkit and output tools, incorporated with slight modifications, or will be addressed during training. The next step is implementation of the fall prevention toolkit on the pilot testing units.

  20. Testing Video and Social Media for Engaging Users of the U.S. Climate Resilience Toolkit

    NASA Astrophysics Data System (ADS)

    Green, C. J.; Gardiner, N.; Niepold, F., III; Esposito, C.

    2015-12-01

    We developed a custom video production style and a method for analyzing social media behavior so that we may deliberately build and track audience growth for decision-support tools and case studies within the U.S. Climate Resilience Toolkit. The new style of video focuses quickly on decision processes; its 30s format is well-suited for deployment through social media. We measured both traffic and engagement with video using Google Analytics. Each video included an embedded tag, allowing us to measure viewers' behavior: whether or not they entered the toolkit website; the duration of their session on the website; and the number of pages they visited in that session. Results showed that video promotion was more effective on Facebook than Twitter. Facebook links generated twice the number of visits to the toolkit. Videos also increased Facebook interaction overall. Because most Facebook users are return visitors, this campaign did not substantially draw new site visitors. We continue to research and apply these methods in a targeted engagement and outreach campaign that utilizes the theory of social diffusion and social influence strategies to grow our audience of "influential" decision-makers and people within their social networks. Our goal is to increase access and use of the U.S. Climate Resilience Toolkit.

  1. Genetic Engineering of Bee Gut Microbiome Bacteria with a Toolkit for Modular Assembly of Broad-Host-Range Plasmids.

    PubMed

    Leonard, Sean P; Perutka, Jiri; Powell, J Elijah; Geng, Peng; Richhart, Darby D; Byrom, Michelle; Kar, Shaunak; Davies, Bryan W; Ellington, Andrew D; Moran, Nancy A; Barrick, Jeffrey E

    2018-05-18

    Engineering the bacteria present in animal microbiomes promises to lead to breakthroughs in medicine and agriculture, but progress is hampered by a dearth of tools for genetically modifying the diverse species that comprise these communities. Here we present a toolkit of genetic parts for the modular construction of broad-host-range plasmids built around the RSF1010 replicon. Golden Gate assembly of parts in this toolkit can be used to rapidly test various antibiotic resistance markers, promoters, fluorescent reporters, and other coding sequences in newly isolated bacteria. We demonstrate the utility of this toolkit in multiple species of Proteobacteria that are native to the gut microbiomes of honey bees (Apis mellifera) and bumble bees (Bombus sp.). Expressing fluorescent proteins in Snodgrassella alvi, Gilliamella apicola, Bartonella apis, and Serratia strains enables us to visualize how these bacteria colonize the bee gut. We also demonstrate CRISPRi repression in B. apis and use Cas9-facilitated knockout of an S. alvi adhesion gene to show that it is important for colonization of the gut. Beyond characterizing how the gut microbiome influences the health of these prominent pollinators, this bee microbiome toolkit (BTK) will be useful for engineering bacteria found in other natural microbial communities.

  2. Searching for management approaches to reduce HAI transmission (SMART): a study protocol.

    PubMed

    McAlearney, Ann Scheck; Hefner, Jennifer L; Sieck, Cynthia J; Walker, Daniel M; Aldrich, Alison M; Sova, Lindsey N; Gaughan, Alice A; Slevin, Caitlin M; Hebert, Courtney; Hade, Erinn; Buck, Jacalyn; Grove, Michele; Huerta, Timothy R

    2017-06-28

    Healthcare-associated infections (HAIs) impact patients' lives through prolonged hospitalization, morbidity, and death, resulting in significant costs to both health systems and society. Central line-associated bloodstream infections (CLABSIs) and catheter-associated urinary tract infections (CAUTIs) are two of the most preventable HAIs. As a result, these HAIs have been the focus of significant efforts to identify evidence-based clinical strategies to reduce infection rates. The Comprehensive Unit-based Safety Program (CUSP) provides a formal model for translating CLABSI-reduction evidence into practice. Yet, a national demonstration project found organizations experienced variable levels of success using CUSP to reduce CLABSIs. In addition, in Fiscal year 2019, Medicare will expand use of CLABSI and CAUTI metrics beyond ICUs to the entire hospital for reimbursement purposes. As a result, hospitals need guidance about how to successfully translate HAI-reduction efforts such as CUSP to non-ICU settings (clinical practice), and how to shape context (management practice)-including culture and management strategies-to proactively support clinical teams. Using a mixed-methods approach to evaluate the contribution of management factors to successful HAI-reduction efforts, our study aims to: (1) Develop valid and reliable measures of structural management practices associated with the recommended CLABSI Management Strategies for use as a survey (HAI Management Practice Guideline Survey) to support HAI-reduction efforts in both medical/surgical units and ICUs; (2) Develop, validate, and then deploy the HAI Management Practice Guideline Survey, first across Ohio hospitals, then nationwide, to determine the positive predictive value of the measurement instrument as it relates to CLABSI- and CAUTI-prevention; and (3) Integrate findings into a Management Practices Toolkit for HAI reduction that includes an organization-specific data dashboard for monitoring progress and an implementation program for toolkit use, and disseminate that Toolkit nationwide. Providing hospitals with the tools they need to successfully measure management structures that support clinical care provides a powerful approach that can be leveraged to reduce the incidence of HAIs experienced by patients. This study is critical to providing the information necessary to successfully "make health care safer" by providing guidance on how contextual factors within a healthcare setting can improve patient safety across hospitals.

  3. Evaluation of chiller modeling approaches and their usability for fault detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreedharan, Priya

    Selecting the model is an important and essential step in model-based fault detection and diagnosis (FDD). Several factors must be considered in model evaluation, including accuracy, training data requirements, calibration effort, generality, and computational requirements. All modeling approaches fall somewhere between pure first-principles models and empirical models. The objective of this study was to evaluate different modeling approaches for their applicability to model-based FDD of vapor compression air conditioning units, which are commonly known as chillers. Three different models were studied: two are based on first principles and the third is empirical in nature. The first-principles models are the Gordon and Ng Universal Chiller model (2nd generation) and a modified version of the ASHRAE Primary Toolkit model. The DOE-2 chiller model as implemented in CoolTools™ was selected for the empirical category. The models were compared in terms of their ability to reproduce the observed performance of an older chiller operating in a commercial building, and a newer chiller in a laboratory. The DOE-2 and Gordon-Ng models were calibrated by linear regression, while a direct-search method was used to calibrate the Toolkit model. The CoolTools package contains a library of calibrated DOE-2 curves for a variety of different chillers, and was used to calibrate the building chiller to the DOE-2 model. All three models displayed similar levels of accuracy. Of the first principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit Model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.
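
    The abstract's point that the Gordon-Ng model is linear in the parameters is what allows calibration by ordinary least squares and straightforward uncertainty estimates. The sketch below shows that calibration step only, with placeholder regressors and synthetic data; it does not reproduce the actual Gordon-Ng regressor terms, which are built from evaporator/condenser temperatures and the evaporator load.

      import numpy as np

      def calibrate_linear_model(X, y):
          """Least-squares fit plus rough 1-sigma parameter uncertainties."""
          beta, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
          dof = X.shape[0] - rank
          sigma2 = residuals[0] / dof if residuals.size and dof > 0 else float("nan")
          cov = sigma2 * np.linalg.inv(X.T @ X)      # parameter covariance
          return beta, np.sqrt(np.diag(cov))

      # Synthetic example with three placeholder regressors.
      rng = np.random.default_rng(0)
      X = rng.uniform(0.5, 2.0, size=(200, 3))
      y = X @ np.array([1.5, -0.8, 0.3]) + rng.normal(0, 0.05, size=200)

      beta, beta_err = calibrate_linear_model(X, y)
      print("estimated parameters:", beta.round(3))
      print("approx. 1-sigma errors:", beta_err.round(3))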

  4. Open source software integrated into data services of Japanese planetary explorations

    NASA Astrophysics Data System (ADS)

    Yamamoto, Y.; Ishihara, Y.; Otake, H.; Imai, K.; Masuda, K.

    2015-12-01

    Scientific data obtained by Japanese scientific satellites and lunar and planetary explorations are archived in DARTS (Data ARchives and Transmission System). DARTS provides the data through a simple method such as HTTP directory listing for long-term preservation, while also aiming to provide rich web applications for ease of access, built with modern web technologies based on open source software. This presentation showcases the availability of open source software through our services. KADIAS is a web-based application to search, analyze, and obtain scientific data measured by SELENE (Kaguya), a Japanese lunar orbiter. KADIAS uses OpenLayers to display maps distributed via Web Map Service (WMS). As the WMS server, the open source software MapServer is adopted. KAGUYA 3D GIS (KAGUYA 3D Moon NAVI) provides a virtual globe for the SELENE data. The main purpose of this application is public outreach. The NASA World Wind Java SDK is used for development. C3 (Cross-Cutting Comparisons) is a tool to compare data from various observations and simulations. It uses Highcharts to draw graphs on web browsers. FLOW is a tool to simulate the field-of-view of an instrument onboard a spacecraft. This tool itself is open source software developed by JAXA/ISAS, and the license is the BSD 3-Clause License. The SPICE Toolkit is essential to compile FLOW. The SPICE Toolkit is also open source software, developed by NASA/JPL, and its website distributes data for many spacecraft. Nowadays, open source software is an indispensable tool for integrating DARTS services.

  5. Alignment of high-throughput sequencing data inside in-memory databases.

    PubMed

    Firnkorn, Daniel; Knaup-Gregori, Petra; Lorenzo Bermejo, Justo; Ganzinger, Matthias

    2014-01-01

    In the era of high-throughput DNA sequencing techniques, performant analysis of DNA sequences is of high importance. Computer-supported DNA analysis is still an intensive, time-consuming task. In this paper we explore the potential of a new In-Memory database technology by using SAP's High Performance Analytic Appliance (HANA). We focus on read alignment as one of the first steps in DNA sequence analysis. In particular, we examined the widely used Burrows-Wheeler Aligner (BWA) and implemented stored procedures in both HANA and the free database system MySQL to compare execution time and memory management. To ensure that the results are comparable, MySQL was run in memory as well, utilizing its integrated memory engine for database table creation. We implemented stored procedures for exact and inexact searching of DNA reads within the reference genome GRCh37. Due to technical restrictions in SAP HANA concerning recursion, the inexact matching problem could not be implemented on this platform. Hence, the performance comparison between HANA and MySQL was based on the execution time of the exact search procedures. Here, HANA was approximately 27 times faster than MySQL, which means that there is high potential in the new In-Memory concepts, leading to further developments of DNA analysis procedures in the future.
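
    To make the exact-search task concrete, the sketch below matches short reads against a reference string using a simple k-mer index in plain Python. It illustrates only the operation that the stored procedures implement; it does not use HANA, MySQL, or the BWA index, and the reference and reads are toy examples.

      from collections import defaultdict

      def build_index(reference: str, k: int):
          index = defaultdict(list)
          for pos in range(len(reference) - k + 1):
              index[reference[pos:pos + k]].append(pos)
          return index

      def exact_matches(read: str, reference: str, index, k: int):
          """Return every start position at which the read occurs exactly."""
          hits = []
          for pos in index.get(read[:k], []):              # seed on the first k-mer
              if reference[pos:pos + len(read)] == read:   # verify the full read
                  hits.append(pos)
          return hits

      reference = "ACGTACGTGGTACCACGT"
      k = 4
      index = build_index(reference, k)
      for read in ["ACGT", "GGTACC", "TTTT"]:
          print(read, exact_matches(read, reference, index, k))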

  6. Scoping review and evaluation of SMS/text messaging platforms for mHealth projects or clinical interventions.

    PubMed

    Iribarren, Sarah J; Brown, William; Giguere, Rebecca; Stone, Patricia; Schnall, Rebecca; Staggers, Nancy; Carballo-Diéguez, Alex

    2017-05-01

    Mobile technology supporting text messaging interventions (TMIs) continues to evolve, presenting challenges for researchers and healthcare professionals who need to choose software solutions to best meet their program needs. The objective of this review was to systematically identify and compare text messaging platforms and to summarize their advantages and disadvantages as described in peer-reviewed literature. A scoping review was conducted using four steps: 1) identify currently available platforms through online searches and in mHealth repositories; 2) expand evaluation criteria of an mHealth mobile messaging toolkit and integrate prior user experiences as researchers; 3) evaluate each platform's functions and features based on the expanded criteria and a vendor survey; and 4) assess the documentation of platform use in the peer-review literature. Platforms meeting inclusion criteria were assessed independently by three reviewers and discussed until consensus was reached. The PRISMA guidelines were followed to report findings. Of the 1041 potentially relevant search results, 27 platforms met inclusion criteria. Most were excluded because they were not platforms (e.g., guides, toolkits, reports, or SMS gateways). Of the 27 platforms, only 12 were identified in existing mHealth repositories, 10 from Google searches, while five were found in both. The expanded evaluation criteria included 22 items. Results indicate no uniform presentation of platform features and functions, often making these difficult to discern. Fourteen of the platforms were reported as open source, 10 focused on health care and 16 were tailored to meet needs of low resource settings (not mutually exclusive). Fifteen platforms had do-it-yourself setup (programming not required) while the remainder required coding/programming skills or setups could be built to specification by the vendor. Frequently described features included data security and access to the platform via cloud-based systems. Pay structures and reported targeted end-users varied. Peer-reviewed publications listed only 6 of the 27 platforms across 21 publications. The majority of these articles reported the name of the platform used but did not describe advantages or disadvantages. Searching for and comparing mHealth platforms for TMIs remains a challenge. The results of this review can serve as a resource for researchers and healthcare professionals wanting to integrate TMIs into health interventions. Steps to identify, compare and assess advantages and disadvantages are outlined for consideration. Expanded evaluation criteria can be used by future researchers. Continued and more comprehensive platform tools should be integrated into mHealth repositories. Detailed descriptions of platform advantages and disadvantages are needed when mHealth researchers publish findings to expand the body of research on TMI tools for healthcare. Standardized descriptions and features are recommended for vendor sites. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Scoping Review and Evaluation of SMS/text Messaging Platforms for mHealth Projects or Clinical Interventions

    PubMed Central

    Iribarren, Sarah; Brown, William; Giguere, Rebecca; Stone, Patricia; Schnall, Rebecca; Staggers, Nancy; Carballo-Diéguez, Alex

    2017-01-01

    Objectives Mobile technology supporting text messaging interventions (TMIs) continues to evolve, presenting challenges for researchers and healthcare professionals who need to choose software solutions to best meet their program needs. The objective of this review was to systematically identify and compare text messaging platforms and to summarize their advantages and disadvantages as described in peer-reviewed literature. Methods A scoping review was conducted using four steps: 1) identify currently available platforms through online searches and in mHealth repositories; 2) expand evaluation criteria of an mHealth mobile messaging toolkit and prior user experiences as researchers; 3) evaluate each platform’s functions and features based on the expanded criteria and a vendor survey; and 4) assess the documentation of platform use in the peer-review literature. Platforms meeting inclusion criteria were assessed independently by three reviewers and discussed until consensus was reached. The PRISMA guidelines were followed to report findings. Results Of the 1041 potentially relevant search results, 27 platforms met inclusion criteria. Most were excluded because they were not platforms (e.g., guides, toolkits, reports, or SMS gateways). Of the 27 platforms, only 12 were identified in existing mHealth repositories, 10 from Google searches, while five were found in both. The expanded evaluation criteria included 22 items. Results indicate no uniform presentation of platform features and functions, often making these difficult to discern. Fourteen of the platforms were reported as open source, 10 focused on health care and 16 were tailored to meet needs of low resource settings (not mutually exclusive). Fifteen platforms had do-it-yourself setup (programming not required) while the remainder required coding/programming skills or setups could be built to specification by the vendor. Frequently described features included data security and access to the platform via cloud-based systems. Pay structures and reported targeted end-users varied. Peer-reviewed publications listed only 6 of the 27 platforms across 21 publications. The majority of these articles reported the name of the platform used but did not describe advantages or disadvantages. Conclusions Searching for and comparing mHealth platforms for TMIs remains a challenge. The results of this review can serve as a resource for researchers and healthcare professionals wanting to integrate TMIs into health interventions. Steps to identify, compare and assess advantages and disadvantages are outlined for consideration. Expanded evaluation criteria can be used by future researchers. Continued and more comprehensive platform tools should be integrated into mHealth repositories. Detailed descriptions of platform advantages and disadvantages are needed when mHealth researchers publish findings to expand the body of research on texting-based tools for healthcare. Standardized descriptions and features are recommended for vendor sites. PMID:28347445

  8. Marine Debris and Plastic Source Reduction Toolkit

    EPA Pesticide Factsheets

    Many plastic food service ware items originate on college and university campuses—in cafeterias, snack rooms, cafés, and eateries with take-out dining options. This Campus Toolkit is a detailed “how to” guide for reducing plastic waste on college campuses.

  9. 77 FR 73023 - U.S. Environmental Solutions Toolkit

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-07

    ... officials and foreign end-users of environmental technologies that will outline U.S. approaches to solving environmental problems and to U.S. companies that can export related technologies. The... DEPARTMENT OF COMMERCE, International Trade Administration, U.S. Environmental Solutions Toolkit...

  10. THE EPANET PROGRAMMER'S TOOLKIT FOR ANALYSIS OF WATER DISTRIBUTION SYSTEMS

    EPA Science Inventory

    The EPANET Programmer's Toolkit is a collection of functions that helps simplify computer programming of water distribution network analyses. The functions can be used to read in a pipe network description file, modify selected component properties, run multiple hydraulic and wa...
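
    A hedged sketch of the typical call sequence the toolkit supports (open a pipe network description file, run the hydraulic analysis, write the report, close), using the legacy ENopen/ENsolveH/ENreport/ENclose entry points via ctypes. The shared-library file name and the input file are assumptions, and the error handling is simplified.

      import ctypes

      # Library name is an assumption; on Windows this might be "epanet2.dll".
      lib = ctypes.CDLL("libepanet2.so")

      def check(status: int, call: str):
          # A non-zero status is an EPANET warning or error code; for this
          # sketch both are treated as fatal.
          if status != 0:
              raise RuntimeError(f"{call} returned EPANET status {status}")

      check(lib.ENopen(b"net1.inp", b"net1.rpt", b""), "ENopen")
      try:
          check(lib.ENsolveH(), "ENsolveH")   # complete hydraulic simulation
          check(lib.ENreport(), "ENreport")   # write results to the report file
      finally:
          lib.ENclose()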

  11. Intensity modulated operating mode of the rotating gamma system.

    PubMed

    Sengupta, Bishwambhar; Gulyas, Laszlo; Medlin, Donald; Koroknai, Tibor; Takacs, David; Filep, Gyorgy; Panko, Peter; Godo, Bence; Hollo, Tamas; Zheng, Xiao Ran; Fedorcsak, Imre; Dobai, Jozsef; Bognar, Laszlo; Takacs, Endre

    2018-05-01

    The purpose of this work was to explore two novel operation modalities of the rotating gamma systems (RGS) that could expand its clinical application to lesions in close proximity to critical organs at risk (OAR). The approach taken in this study consists of two components. First, a Geant4-based Monte Carlo (MC) simulation toolkit is used to model the dosimetric properties of the RGS Vertex 360™ for the normal, intensity modulated radiosurgery (IMRS), and speed modulated radiosurgery (SMRS) operation modalities. Second, the RGS Vertex 360™ at the Rotating Gamma Institute in Debrecen, Hungary is used to collect experimental data for the normal and IMRS operation modes. An ion chamber is used to record measurements of the absolute dose. The dose profiles are measured using Gafchromic EBT3 films positioned within a spherical water equivalent phantom. A strong dosimetric agreement between the measured and simulated dose profiles and penumbra was found for both the normal and IMRS operation modes for all collimator sizes (4, 8, 14, and 18 mm diameter). The simulated falloff and maximum dose regions agree better with the experimental results for the 4 and 8 mm diameter collimators. Although the falloff regions align well in the 14 and 18 mm collimators, the maximum dose regions have a larger difference. For the IMRS operation mode, the simulated and experimental dose distributions are ellipsoidal, where the short axis aligns with the blocked angles. Similarly, the simulated dose distributions for the SMRS operation mode also adopt an ellipsoidal shape, where the short axis aligns with the angles where the orbital speed is highest. For both modalities, the dose distribution is highly constrained with a sharper penumbra along the short axes. Dose modulation of the RGS can be achieved with the IMRS and SMRS modes. By providing a highly constrained dose distribution with a sharp penumbra, both modes could be clinically applicable for the treatment of lesions in close proximity to critical OARs. © 2018 American Association of Physicists in Medicine.
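
    One of the quantities compared between the film measurements and the Geant4 simulation is the sharpness of the penumbra. The hedged sketch below computes a simple penumbra width (the lateral distance over which a one-dimensional profile falls from 80% to 20% of its maximum) from a synthetic profile; real input would be a scanned EBT3 film profile or the simulated dose grid, and the 80%-20% definition is an assumption.

      import numpy as np

      def penumbra_width(x_mm, dose):
          d = dose / dose.max()
          centre = x_mm[np.argmax(d)]
          right = x_mm > centre
          # Interpolate where the right-hand falloff crosses 80% and 20%.
          x80 = np.interp(0.8, d[right][::-1], x_mm[right][::-1])
          x20 = np.interp(0.2, d[right][::-1], x_mm[right][::-1])
          return x20 - x80

      x = np.linspace(-20, 20, 401)                      # mm
      profile = 1.0 / (1.0 + np.exp(np.abs(x) - 7.0))    # synthetic flat-top profile
      print(f"penumbra width = {penumbra_width(x, profile):.2f} mm")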

  12. Accuracy of Assessment of Eligibility for Early Medical Abortion by Community Health Workers in Ethiopia, India and South Africa.

    PubMed

    Johnston, Heidi Bart; Ganatra, Bela; Nguyen, My Huong; Habib, Ndema; Afework, Mesganaw Fantahun; Harries, Jane; Iyengar, Kirti; Moodley, Jennifer; Lema, Hailu Yeneneh; Constant, Deborah; Sen, Swapnaleen

    2016-01-01

    To assess the accuracy of assessment of eligibility for early medical abortion by community health workers using a simple checklist toolkit. Diagnostic accuracy study. Ethiopia, India and South Africa. Two hundred seventeen women in Ethiopia, 258 in India and 236 in South Africa were enrolled into the study. A checklist toolkit to determine eligibility for early medical abortion was validated by comparing results of clinician and community health worker assessment of eligibility using the checklist toolkit with the reference standard exam. Accuracy was over 90% and the negative likelihood ratio <0.1 at all three sites when used by clinician assessors. Positive likelihood ratios were 4.3 in Ethiopia, 5.8 in India and 6.3 in South Africa. When used by community health workers, the overall accuracy of the toolkit was 92% in Ethiopia, 80% in India and 77% in South Africa; negative likelihood ratios were 0.08 in Ethiopia, 0.25 in India and 0.22 in South Africa; and positive likelihood ratios were 5.9 in Ethiopia and 2.0 in both India and South Africa. The checklist toolkit, as used by clinicians, was excellent at ruling out participants who were not eligible, and moderately effective at ruling in participants who were eligible for medical abortion. Results were promising when used by community health workers, particularly in Ethiopia, where they had more prior experience with the use of diagnostic aids and longer professional training. The checklist toolkit assessments resulted in some participants being wrongly assessed as eligible for medical abortion, which is an area of concern. Further research is needed to streamline the components of the tool, explore optimal duration and content of training for community health workers, and test feasibility and acceptability.
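
    The accuracy and likelihood ratios quoted above follow directly from a 2x2 table of checklist result versus the reference-standard exam. The sketch below shows the arithmetic with hypothetical counts, not the study data.

      def diagnostic_summary(tp, fp, fn, tn):
          sensitivity = tp / (tp + fn)
          specificity = tn / (tn + fp)
          accuracy = (tp + tn) / (tp + fp + fn + tn)
          lr_positive = sensitivity / (1 - specificity)
          lr_negative = (1 - sensitivity) / specificity
          return accuracy, lr_positive, lr_negative

      # Hypothetical counts for illustration only.
      acc, lr_pos, lr_neg = diagnostic_summary(tp=90, fp=10, fn=5, tn=45)
      print(f"accuracy={acc:.2f}  LR+={lr_pos:.1f}  LR-={lr_neg:.2f}")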

  13. An evaluation of a toolkit for the early detection, management, and control of carbapenemase-producing Enterobacteriaceae: a survey of acute hospital trusts in England.

    PubMed

    Coope, C M; Verlander, N Q; Schneider, A; Hopkins, S; Welfare, W; Johnson, A P; Patel, B; Oliver, I

    2018-03-09

    Following hospital outbreaks of carbapenemase-producing Enterobacteriaceae (CPE), Public Health England published a toolkit in December 2013 to promote the early detection, management, and control of CPE colonization and infection in acute hospital settings. To examine awareness, uptake, implementation and usefulness of the CPE toolkit and identify potential barriers and facilitators to its adoption in order to inform future guidance. A cross-sectional survey of National Health Service (NHS) acute trusts was conducted in May 2016. Descriptive analysis and multivariable regression models were conducted, and narrative responses were analysed thematically and informed using behaviour change theory. Most (92%) acute trusts had a written CPE plan. Fewer (75%) reported consistent compliance with screening and isolation of CPE risk patients. Lower prioritization and weaker senior management support for CPE prevention were associated with poorer compliance. Awareness of the CPE toolkit was high and all trusts with patients infected or colonized with CPE had used the toolkit either as provided (32%), or to inform (65%) their own local CPE plan. Despite this, many respondents (80%) did not believe that the CPE toolkit guidance offered an effective means to prevent CPE or was practical to follow. CPE prevention and control requires robust IPC measures. Successful implementation can be hindered by a complex set of factors related to their practical execution, insufficient resources and a lack of confidence in the effectiveness of the guidance. Future CPE guidance would benefit from substantive user involvement, processes for ongoing feedback, and regular guidance updates. Copyright © 2018 The Healthcare Infection Society. All rights reserved.

  14. A randomized controlled trial to test the efficacy of the SCI Get Fit Toolkit on leisure-time physical activity behaviour and social-cognitive processes in adults with spinal cord injury.

    PubMed

    Arbour-Nicitopoulos, Kelly P; Sweet, Shane N; Lamontagne, Marie-Eve; Ginis, Kathleen A Martin; Jeske, Samantha; Routhier, François; Latimer-Cheung, Amy E

    2017-01-01

    Single blind, two-group randomized controlled trial. To evaluate the efficacy of the SCI Get Fit Toolkit delivered online on theoretical constructs and moderate-to-vigorous physical activity (MVPA) among adults with SCI. Ontario and Quebec, Canada. Inactive, English- and French-speaking Canadian adults with traumatic SCI with Internet access, and no self-reported cognitive or memory impairments. Participants (N=90; mean age=48.12±11.29 years; 79% male) were randomized to view the SCI Get Fit Toolkit or the Physical Activity Guidelines for adults with SCI (PAG-SCI) online. Primary (intentions) and secondary (outcome expectancies, self-efficacy, planning and MVPA behaviour) outcomes were assessed over a 1-month period. Of the 90 participants randomized, 77 were included in the analyses. Participants viewed the experimental stimuli only briefly, reading the 4-page toolkit for approximately 2.5 min longer than the 1-page guideline document. No condition effects were found for intentions, outcome expectancies, self-efficacy, and planning (ΔR²⩽0.03). Individuals in the toolkit condition were more likely to participate in at least one bout of 20 min of MVPA behaviour at 1-week post-intervention compared to individuals in the guidelines condition (OR=3.54, 95% CI: 0.95-13.17). However, no differences were found when examining change in weekly minutes of MVPA or comparing whether participants met the PAG-SCI. No firm conclusions can be made regarding the impact of the SCI Get Fit Toolkit in comparison to the PAG-SCI on social cognitions and MVPA behaviour. The limited online access to this resource may partially explain these null findings.

  15. Development and formative evaluation of the e-Health Implementation Toolkit (e-HIT).

    PubMed

    Murray, Elizabeth; May, Carl; Mair, Frances

    2010-10-18

    The use of Information and Communication Technology (ICT) or e-Health is seen as essential for a modern, cost-effective health service. However, there are well documented problems with implementation of e-Health initiatives, despite the existence of a great deal of research into how best to implement e-Health (an example of the gap between research and practice). This paper reports on the development and formative evaluation of an e-Health Implementation Toolkit (e-HIT) which aims to summarise and synthesise new and existing research on implementation of e-Health initiatives, and present it to senior managers in a user-friendly format. The content of the e-HIT was derived by combining data from a systematic review of reviews of barriers and facilitators to implementation of e-Health initiatives with qualitative data derived from interviews of "implementers", that is people who had been charged with implementing an e-Health initiative. These data were summarised, synthesised and combined with the constructs from the Normalisation Process Model. The software for the toolkit was developed by a commercial company (RocketScience). Formative evaluation was undertaken by obtaining user feedback. There are three components to the toolkit--a section on background and instructions for use aimed at novice users; the toolkit itself; and the report generated by completing the toolkit. It is available to download from http://www.ucl.ac.uk/pcph/research/ehealth/documents/e-HIT.xls. The e-HIT shows potential as a tool for enhancing future e-Health implementations. Further work is needed to make it fully web-enabled, and to determine its predictive potential for future implementations.

  16. Development and formative evaluation of the e-Health Implementation Toolkit (e-HIT)

    PubMed Central

    2010-01-01

    Background The use of Information and Communication Technology (ICT) or e-Health is seen as essential for a modern, cost-effective health service. However, there are well documented problems with implementation of e-Health initiatives, despite the existence of a great deal of research into how best to implement e-Health (an example of the gap between research and practice). This paper reports on the development and formative evaluation of an e-Health Implementation Toolkit (e-HIT) which aims to summarise and synthesise new and existing research on implementation of e-Health initiatives, and present it to senior managers in a user-friendly format. Results The content of the e-HIT was derived by combining data from a systematic review of reviews of barriers and facilitators to implementation of e-Health initiatives with qualitative data derived from interviews of "implementers", that is people who had been charged with implementing an e-Health initiative. These data were summarised, synthesised and combined with the constructs from the Normalisation Process Model. The software for the toolkit was developed by a commercial company (RocketScience). Formative evaluation was undertaken by obtaining user feedback. There are three components to the toolkit - a section on background and instructions for use aimed at novice users; the toolkit itself; and the report generated by completing the toolkit. It is available to download from http://www.ucl.ac.uk/pcph/research/ehealth/documents/e-HIT.xls Conclusions The e-HIT shows potential as a tool for enhancing future e-Health implementations. Further work is needed to make it fully web-enabled, and to determine its predictive potential for future implementations. PMID:20955594

  17. PAGANI Toolkit: Parallel graph-theoretical analysis package for brain network big data.

    PubMed

    Du, Haixiao; Xia, Mingrui; Zhao, Kang; Liao, Xuhong; Yang, Huazhong; Wang, Yu; He, Yong

    2018-05-01

    The recent collection of unprecedented quantities of neuroimaging data with high spatial resolution has led to brain network big data. However, a toolkit for fast and scalable computational solutions is still lacking. Here, we developed the PArallel Graph-theoretical ANalysIs (PAGANI) Toolkit based on a hybrid central processing unit-graphics processing unit (CPU-GPU) framework with a graphical user interface to facilitate the mapping and characterization of high-resolution brain networks. Specifically, the toolkit provides flexible parameters for users to customize computations of graph metrics in brain network analyses. As an empirical example, the PAGANI Toolkit was applied to individual voxel-based brain networks with ∼200,000 nodes that were derived from a resting-state fMRI dataset of 624 healthy young adults from the Human Connectome Project. Using a personal computer, this toolbox completed all computations in ∼27 h for one subject, which is markedly less than the 118 h required with a single-thread implementation. The voxel-based functional brain networks exhibited prominent small-world characteristics and densely connected hubs, which were mainly located in the medial and lateral fronto-parietal cortices. Moreover, the female group had significantly higher modularity and nodal betweenness centrality mainly in the medial/lateral fronto-parietal and occipital cortices than the male group. Significant correlations between the intelligence quotient and nodal metrics were also observed in several frontal regions. Collectively, the PAGANI Toolkit shows high computational performance and good scalability for analyzing connectome big data and provides a friendly interface without the complicated configuration of computing environments, thereby facilitating high-resolution connectomics research in health and disease. © 2018 Wiley Periodicals, Inc.
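
    For readers unfamiliar with the graph metrics mentioned above, the sketch below computes nodal betweenness centrality and modularity on a small random graph with NetworkX. This is a single-threaded illustration of the metrics themselves, not the toolkit's CPU-GPU parallel implementation, which targets voxel-wise networks with roughly 200,000 nodes.

      import networkx as nx
      from networkx.algorithms import community

      G = nx.gnp_random_graph(200, 0.05, seed=42)        # toy "brain network"

      betweenness = nx.betweenness_centrality(G)
      hubs = sorted(betweenness, key=betweenness.get, reverse=True)[:5]
      print("highest-betweenness nodes:", hubs)

      partition = community.greedy_modularity_communities(G)
      Q = community.modularity(G, partition)
      print(f"communities: {len(partition)}, modularity Q = {Q:.3f}")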

  18. B-HIT - A Tool for Harvesting and Indexing Biodiversity Data

    PubMed Central

    Barker, Katharine; Braak, Kyle; Cawsey, E. Margaret; Coddington, Jonathan; Robertson, Tim; Whitacre, Jamie

    2015-01-01

    With the rapidly growing number of data publishers, the process of harvesting and indexing information to offer advanced search and discovery becomes a critical bottleneck in globally distributed primary biodiversity data infrastructures. The Global Biodiversity Information Facility (GBIF) implemented a Harvesting and Indexing Toolkit (HIT), which largely automates data harvesting activities for hundreds of collection and observational data providers. The team of the Botanic Garden and Botanical Museum Berlin-Dahlem has extended this well-established system with a range of additional functions, including improved processing of multiple taxon identifications, the ability to represent associations between specimen and observation units, new data quality control and new reporting capabilities. The open source software B-HIT can be freely installed and used for setting up thematic networks serving the demands of particular user groups. PMID:26544980

  19. B-HIT - A Tool for Harvesting and Indexing Biodiversity Data.

    PubMed

    Kelbert, Patricia; Droege, Gabriele; Barker, Katharine; Braak, Kyle; Cawsey, E Margaret; Coddington, Jonathan; Robertson, Tim; Whitacre, Jamie; Güntsch, Anton

    2015-01-01

    With the rapidly growing number of data publishers, the process of harvesting and indexing information to offer advanced search and discovery becomes a critical bottleneck in globally distributed primary biodiversity data infrastructures. The Global Biodiversity Information Facility (GBIF) implemented a Harvesting and Indexing Toolkit (HIT), which largely automates data harvesting activities for hundreds of collection and observational data providers. The team of the Botanic Garden and Botanical Museum Berlin-Dahlem has extended this well-established system with a range of additional functions, including improved processing of multiple taxon identifications, the ability to represent associations between specimen and observation units, new data quality control and new reporting capabilities. The open source software B-HIT can be freely installed and used for setting up thematic networks serving the demands of particular user groups.

  20. GWFASTA: server for FASTA search in eukaryotic and microbial genomes.

    PubMed

    Issac, Biju; Raghava, G P S

    2002-09-01

    Similarity searches are a powerful method for solving important biological problems such as database scanning, evolutionary studies, gene prediction, and protein structure prediction. FASTA is a widely used sequence comparison tool for rapid database scanning. Here we describe the GWFASTA server that was developed to assist the FASTA user in similarity searches against partially and/or completely sequenced genomes. GWFASTA consists of more than 60 microbial genomes, eight eukaryote genomes, and proteomes of annotated genomes. In fact, it provides the maximum number of databases for similarity searching from a single platform. GWFASTA allows the submission of more than one sequence as a single query for a FASTA search. It also provides integrated post-processing of FASTA output, including compositional analysis of proteins, multiple sequence alignment, and phylogenetic analysis. Furthermore, it summarizes the search results organism-wise for prokaryotes and chromosome-wise for eukaryotes. Thus, the integration of different tools for sequence analyses makes GWFASTA a powerful tool for biologists.
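
    As an illustration of one of the post-processing steps mentioned above (compositional analysis of proteins), the sketch below parses a FASTA file with a minimal reader and reports the most frequent residues per sequence. It is not the GWFASTA pipeline itself, and the file name is an assumption.

      from collections import Counter

      def read_fasta(path):
          header, seq = None, []
          with open(path) as handle:
              for line in handle:
                  line = line.strip()
                  if line.startswith(">"):
                      if header is not None:
                          yield header, "".join(seq)
                      header, seq = line[1:], []
                  elif line:
                      seq.append(line)
          if header is not None:
              yield header, "".join(seq)

      for header, sequence in read_fasta("query_proteins.fasta"):
          counts = Counter(sequence.upper())
          total = sum(counts.values())
          top = {aa: round(100 * n / total, 1) for aa, n in counts.most_common(5)}
          print(header, f"length={total}", "top residues (%):", top)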

  1. DNA Commission of the International Society for Forensic Genetics: revised and extended guidelines for mitochondrial DNA typing.

    PubMed

    Parson, W; Gusmão, L; Hares, D R; Irwin, J A; Mayr, W R; Morling, N; Pokorak, E; Prinz, M; Salas, A; Schneider, P M; Parsons, T J

    2014-11-01

    The DNA Commission of the International Society of Forensic Genetics (ISFG) regularly publishes guidelines and recommendations concerning the application of DNA polymorphisms to the question of human identification. Previous recommendations published in 2000 addressed the analysis and interpretation of mitochondrial DNA (mtDNA) in forensic casework. While the foundations set forth in the earlier recommendations still apply, new approaches to the quality control, alignment and nomenclature of mitochondrial sequences, as well as the establishment of mtDNA reference population databases, have been developed. Here, we describe these developments and discuss their application to both mtDNA casework and mtDNA reference population databasing applications. While the generation of mtDNA for forensic casework has always been guided by specific standards, it is now well-established that data of the same quality are required for the mtDNA reference population data used to assess the statistical weight of the evidence. As a result, we introduce guidelines regarding sequence generation, as well as quality control measures based on the known worldwide mtDNA phylogeny, that can be applied to ensure the highest quality population data possible. For both casework and reference population databasing applications, the alignment and nomenclature of haplotypes is revised here and the phylogenetic alignment proffered as acceptable standard. In addition, the interpretation of heteroplasmy in the forensic context is updated, and the utility of alignment-free database searches for unbiased probability estimates is highlighted. Finally, we discuss statistical issues and define minimal standards for mtDNA database searches. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
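
    A much-simplified sketch of the database-search step discussed above: estimate the frequency of a questioned haplotype by counting matching profiles in a reference population database, with each haplotype coded as a set of differences from the reference sequence. A real alignment-free search must also reconcile alternative notations of the same variant, which this toy comparison does not attempt; the profiles below are illustrative only.

      def haplotype_frequency(query, database):
          query_set = frozenset(query)
          matches = sum(1 for profile in database if frozenset(profile) == query_set)
          # Point estimate only; casework reporting would add a confidence bound.
          return matches, matches / len(database)

      database = [
          {"263G", "315.1C", "16189C"},
          {"263G", "315.1C", "16189C"},
          {"73G", "263G", "315.1C"},
          {"263G", "315.1C", "16189C", "16519C"},
      ]
      query = {"263G", "315.1C", "16189C"}
      matches, freq = haplotype_frequency(query, database)
      print(f"{matches} matches in {len(database)} profiles (frequency {freq:.2f})")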

  2. SPARSE: quadratic time simultaneous alignment and folding of RNAs without sequence-based heuristics.

    PubMed

    Will, Sebastian; Otto, Christina; Miladi, Milad; Möhl, Mathias; Backofen, Rolf

    2015-08-01

    RNA-Seq experiments have revealed a multitude of novel ncRNAs. The gold standard for their analysis based on simultaneous alignment and folding suffers from extreme time complexity of O(n^6). Subsequently, numerous faster 'Sankoff-style' approaches have been suggested. Commonly, the performance of such methods relies on sequence-based heuristics that restrict the search space to optimal or near-optimal sequence alignments; however, the accuracy of sequence-based methods breaks down for RNAs with sequence identities below 60%. Alignment approaches like LocARNA, which do not require sequence-based heuristics, have been limited to high complexity (quartic time, O(n^4)). Breaking this barrier, we introduce the novel Sankoff-style algorithm 'sparsified prediction and alignment of RNAs based on their structure ensembles (SPARSE)', which runs in quadratic time without sequence-based heuristics. To achieve this low complexity, on par with sequence alignment algorithms, SPARSE features strong sparsification based on structural properties of the RNA ensembles. Following PMcomp, SPARSE gains further speed-up from lightweight energy computation. Although all existing lightweight Sankoff-style methods restrict Sankoff's original model by disallowing loop deletions and insertions, SPARSE transfers the Sankoff algorithm to the lightweight energy model completely for the first time. Compared with LocARNA, SPARSE achieves similar alignment and better folding quality in significantly less time (speedup: 3.7). At similar run-time, it aligns low sequence identity instances substantially more accurately than RAF, which uses sequence-based heuristics. © The Author 2015. Published by Oxford University Press.

  3. SINA: accurate high-throughput multiple sequence alignment of ribosomal RNA genes.

    PubMed

    Pruesse, Elmar; Peplies, Jörg; Glöckner, Frank Oliver

    2012-07-15

    In the analysis of homologous sequences, computation of multiple sequence alignments (MSAs) has become a bottleneck. This is especially troublesome for marker genes like the ribosomal RNA (rRNA) where already millions of sequences are publicly available and individual studies can easily produce hundreds of thousands of new sequences. Methods have been developed to cope with such numbers, but further improvements are needed to meet accuracy requirements. In this study, we present the SILVA Incremental Aligner (SINA) used to align the rRNA gene databases provided by the SILVA ribosomal RNA project. SINA uses a combination of k-mer searching and partial order alignment (POA) to maintain very high alignment accuracy while satisfying high throughput performance demands. SINA was evaluated in comparison with the commonly used high throughput MSA programs PyNAST and mothur. The three BRAliBase III benchmark MSAs could be reproduced with 99.3%, 97.6% and 96.1% accuracy. A larger benchmark MSA comprising 38 772 sequences could be reproduced with 98.9% and 99.3% accuracy using reference MSAs comprising 1000 and 5000 sequences. SINA was able to achieve higher accuracy than PyNAST and mothur in all performed benchmarks. Alignment of up to 500 sequences using the latest SILVA SSU/LSU Ref datasets as reference MSA is offered at http://www.arb-silva.de/aligner. This page also links to Linux binaries, user manual and tutorial. SINA is made available under a personal use license.
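
    The k-mer searching step mentioned above can be pictured as ranking reference sequences by the number of k-mers they share with the query, so that the closest references guide the subsequent alignment. The sketch below shows only that ranking step; the partial order alignment itself is not reproduced, and the sequences are toy examples.

      def kmer_set(seq: str, k: int):
          return {seq[i:i + k] for i in range(len(seq) - k + 1)}

      def rank_references(query: str, references: dict, k: int = 8, top: int = 3):
          q = kmer_set(query, k)
          scored = [(len(q & kmer_set(seq, k)), name) for name, seq in references.items()]
          return sorted(scored, reverse=True)[:top]

      references = {
          "ref_A": "ACGGUUAGCCAUGGCUAAGCUUACGGAUCC",
          "ref_B": "ACGGUUAGCCAUGGCUAAGCAUACGGAUCC",
          "ref_C": "UUUUCCCCGGGGAAAAUUUUCCCCGGGGAA",
      }
      query = "ACGGUUAGCCAUGGCUAAGCUUACGGAUCA"
      print(rank_references(query, references))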

  4. Parasail: SIMD C library for global, semi-global, and local pairwise sequence alignments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daily, Jeffrey A.

    Sequence alignment algorithms are a key component of many bioinformatics applications. Though various fast Smith-Waterman local sequence alignment implementations have been developed for x86 CPUs, most are embedded into larger database search tools. In addition, fast implementations of Needleman-Wunsch global sequence alignment and its semi-global variants are not as widespread. This article presents the first software library for local, global, and semi-global pairwise intra-sequence alignments and improves the performance of previous intra-sequence implementations. As a result, a faster intra-sequence pairwise alignment implementation is described and benchmarked. Using a 375 residue query sequence, a speed of 136 billion cell updates per second (GCUPS) was achieved on a dual Intel Xeon E5-2670 12-core processor system, the highest reported for an implementation based on Farrar's 'striped' approach. When using only a single thread, parasail was 1.7 times faster than Rognes's SWIPE. For many score matrices, parasail is faster than BLAST. The software library is designed for 64-bit Linux, OS X, or Windows on processors with SSE2, SSE4.1, or AVX2. Source code is available from https://github.com/jeffdaily/parasail under the Battelle BSD-style license. In conclusion, applications that require optimal alignment scores could benefit from the improved performance. For the first time, SIMD global, semi-global, and local alignments are available in a stand-alone C library.
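
    For readers unfamiliar with the recurrence that libraries such as parasail vectorize, the following plain-Python sketch computes the Smith-Waterman local alignment score with linear gap penalties. The library itself also supports affine gaps and striped SIMD layouts; this sketch only illustrates the underlying dynamic program, and the scoring values are arbitrary.

        def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
            """Return the best local alignment score between strings a and b."""
            rows, cols = len(a) + 1, len(b) + 1
            # H[i][j] holds the best score of a local alignment ending at a[i-1], b[j-1].
            H = [[0] * cols for _ in range(rows)]
            best = 0
            for i in range(1, rows):
                for j in range(1, cols):
                    diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
                    best = max(best, H[i][j])
            return best

        # Toy usage: score a short local match.
        print(smith_waterman_score("GGTTGACTA", "TGTTACGG"))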

  5. Parasail: SIMD C library for global, semi-global, and local pairwise sequence alignments

    DOE PAGES

    Daily, Jeffrey A.

    2016-02-10

    Sequence alignment algorithms are a key component of many bioinformatics applications. Though various fast Smith-Waterman local sequence alignment implementations have been developed for x86 CPUs, most are embedded into larger database search tools. In addition, fast implementations of Needleman-Wunsch global sequence alignment and its semi-global variants are not as widespread. This article presents the first software library for local, global, and semi-global pairwise intra-sequence alignments and improves the performance of previous intra-sequence implementations. As a result, a faster intra-sequence pairwise alignment implementation is described and benchmarked. Using a 375 residue query sequence, a speed of 136 billion cell updates per second (GCUPS) was achieved on a dual Intel Xeon E5-2670 12-core processor system, the highest reported for an implementation based on Farrar's 'striped' approach. When using only a single thread, parasail was 1.7 times faster than Rognes's SWIPE. For many score matrices, parasail is faster than BLAST. The software library is designed for 64-bit Linux, OS X, or Windows on processors with SSE2, SSE4.1, or AVX2. Source code is available from https://github.com/jeffdaily/parasail under the Battelle BSD-style license. In conclusion, applications that require optimal alignment scores could benefit from the improved performance. For the first time, SIMD global, semi-global, and local alignments are available in a stand-alone C library.

  6. 42: DESIGNING A DASHBOARD FOR EVALUATION THE BUSINESS-IT ALIGNMENT STRATEGIES IN HOSPITALS ORGANIZATIONS

    PubMed Central

    Piri, Zakieh; Raef, Behnaz; Moftian, Nazila; Dehghani, Mohamad; Khara, Rouhallah

    2017-01-01

    Background and aims Business-IT alignment evaluation is one of the most important issues that managers should monitor and make decisions about. Dashboard software combines data and graphical indicators to deliver at-a-glance summaries of information, allowing users to view the state of their business and respond quickly. The aim of this study was to design a dashboard to assess business-IT alignment strategies for hospital organizations in Tehran University of Medical Sciences. Methods This is a functional-developmental study. Initially, we searched related databases (PubMed and ProQuest) to determine the key performance indicators of business-IT alignment and to select the best model for dashboard design. After selecting the Luftman model, the key indicators were extracted for designing the dashboard model. In the next stage, an electronic questionnaire was designed based on the extracted indicators and sent to hospital managers and IT administrators. Collected data were analyzed in Excel 2015 and displayed on the dashboard page. Results The number of key performance indicators was 39. After identifying the technical requirements, the dashboard was designed in Excel. The overall business-IT alignment rate in the hospitals was 3.12. Amir-aalam hospital had the highest business-IT alignment rate (3.55) and Vali-asr hospital had the lowest rate (2.80). Conclusion Using dashboard software improves the alignment and reduces the time and energy required compared with doing this process manually.

  7. Keep It Sacred | National Native Network

    Science.gov Websites

    A website of the National Native Network listing resources including Tribal Public Health Data Toolkits, a Smoke-Free Policy Toolkit, success stories, a resource library, webinar and newsletter archives, podcasts, and fact sheets on cancers, diabetes, and other health problems.

  8. Literacy Toolkit

    ERIC Educational Resources Information Center

    Center for Best Practices in Early Childhood Education, 2005

    2005-01-01

    The toolkit contains print and electronic resources, including (1) "eMERGing Literacy and Technology: Working Together", a 492-page curriculum guide; (2) "LitTECH Interactive Presents: The Beginning of Literacy", a DVD that provides an overview linking technology to the concepts of emerging literacy; (3) "Your Preschool Classroom Computer Center:…

  9. A Software Toolkit to Study Systematic Uncertainties of the Physics Models of the Geant4 Simulation Package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genser, Krzysztof; Hatcher, Robert; Perdue, Gabriel

    2016-11-10

    The Geant4 toolkit is used to model interactions between particles and matter. Geant4 employs a set of validated physics models that span a wide range of interaction energies. These models are tuned to cover a large variety of possible applications. This raises the critical question of what uncertainties are associated with the Geant4 physics model, or group of models, involved in a simulation project. To address the challenge, we have designed and implemented a comprehensive, modular, user-friendly software toolkit that allows the variation of one or more parameters of one or more Geant4 physics models involved in simulation studies. It also enables analysis of multiple variants of the resulting physics observables of interest in order to estimate the uncertainties associated with the simulation model choices. Key functionalities of the toolkit are presented in this paper and are illustrated with selected results.

  10. Toolkit for US colleges/schools of pharmacy to prepare learners for careers in academia.

    PubMed

    Haines, Seena L; Summa, Maria A; Peeters, Michael J; Dy-Boarman, Eliza A; Boyle, Jaclyn A; Clifford, Kalin M; Willson, Megan N

    2017-09-01

    The objective of this article is to provide an academic toolkit for use by colleges/schools of pharmacy to prepare student pharmacists/residents for academic careers. Through the American Association of Colleges of Pharmacy (AACP) Section of Pharmacy Practice, the Student Resident Engagement Task Force (SRETF) collated teaching materials used by colleges/schools of pharmacy from a previously reported national survey. The SRETF developed a toolkit for student pharmacists/residents interested in academic pharmacy. Eighteen institutions provided materials; five provided materials describing didactic coursework; over fifteen provided materials for an academia-focused Advanced Pharmacy Practice Experience (APPE), while one provided materials for an APPE teaching-research elective. SRETF members created a syllabus template and sample lesson plan by integrating submitted resources. Submissions still needed to complete the toolkit include examples of curricular tracks and certificate programs. Pharmacy faculty vacancies still exist in pharmacy education. Engaging student pharmacists/residents with the academic pillars of teaching, scholarship, and service is critical for the future success of the academy. Published by Elsevier Inc.

  11. Hydrogen quantitative risk assessment workshop proceedings.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groth, Katrina M.; Harris, Aaron P.

    2013-09-01

    The Quantitative Risk Assessment (QRA) Toolkit Introduction Workshop was held at Energetics on June 11-12. The workshop was co-hosted by Sandia National Laboratories (Sandia) and HySafe, the International Association for Hydrogen Safety. The objective of the workshop was twofold: (1) Present a hydrogen-specific methodology and toolkit (currently under development) for conducting QRA to support the development of codes and standards and safety assessments of hydrogen-fueled vehicles and fueling stations, and (2) Obtain feedback on the needs of early-stage users (hydrogen as well as potential leveraging for Compressed Natural Gas [CNG] and Liquefied Natural Gas [LNG]) and set priorities for "Version 1" of the toolkit in the context of the commercial evolution of hydrogen fuel cell electric vehicles (FCEV). The workshop consisted of an introduction and three technical sessions: Risk Informed Development and Approach; CNG/LNG Applications; and Introduction of a Hydrogen Specific QRA Toolkit.

  12. Assessing Coverage of Population-Based and Targeted Fortification Programs with the Use of the Fortification Assessment Coverage Toolkit (FACT): Background, Toolkit Development, and Supplement Overview.

    PubMed

    Friesen, Valerie M; Aaron, Grant J; Myatt, Mark; Neufeld, Lynnette M

    2017-05-01

    Food fortification is a widely used approach to increase micronutrient intake in the diet. High coverage is essential for achieving impact. Data on coverage is limited in many countries, and tools to assess coverage of fortification programs have not been standardized. In 2013, the Global Alliance for Improved Nutrition developed the Fortification Assessment Coverage Toolkit (FACT) to carry out coverage assessments in both population-based (i.e., staple foods and/or condiments) and targeted (e.g., infant and young child) fortification programs. The toolkit was designed to generate evidence on program coverage and the use of fortified foods to provide timely and programmatically relevant information for decision making. This supplement presents results from FACT surveys that assessed the coverage of population-based and targeted food fortification programs across 14 countries. It then discusses the policy and program implications of the findings for the potential for impact and program improvement.

  13. Robust prediction of consensus secondary structures using averaged base pairing probability matrices.

    PubMed

    Kiryu, Hisanori; Kin, Taishin; Asai, Kiyoshi

    2007-02-15

    Recent transcriptomic studies have revealed the existence of a considerable number of non-protein-coding RNA transcripts in higher eukaryotic cells. To investigate the functional roles of these transcripts, it is of great interest to find conserved secondary structures from multiple alignments on a genomic scale. Since multiple alignments are often created using alignment programs that neglect the special conservation patterns of RNA secondary structures for computational efficiency, alignment failures can cause potential risks of overlooking conserved stem structures. We investigated the dependence of the accuracy of secondary structure prediction on the quality of alignments. We compared three algorithms that maximize the expected accuracy of secondary structures as well as other frequently used algorithms. We found that one of our algorithms, called McCaskill-MEA, was more robust against alignment failures than others. The McCaskill-MEA method first computes the base pairing probability matrices for all the sequences in the alignment and then obtains the base pairing probability matrix of the alignment by averaging over these matrices. The consensus secondary structure is predicted from this matrix such that the expected accuracy of the prediction is maximized. We show that the McCaskill-MEA method performs better than other methods, particularly when the alignment quality is low and when the alignment consists of many sequences. Our model has a parameter that controls the sensitivity and specificity of predictions. We discussed the uses of that parameter for multi-step screening procedures to search for conserved secondary structures and for assigning confidence values to the predicted base pairs. The C++ source code that implements the McCaskill-MEA algorithm and the test dataset used in this paper are available at http://www.ncrna.org/papers/McCaskillMEA/. Supplementary data are available at Bioinformatics online.
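
    The core averaging step described above can be sketched in a few lines: given per-sequence base-pairing probability matrices already mapped to alignment columns, average them and report column pairs whose mean probability clears a threshold. This is only an illustration of the averaging idea with assumed inputs; the actual McCaskill-MEA method maximizes expected accuracy over the averaged matrix rather than simply thresholding it.

        import numpy as np

        def averaged_pairing_matrix(per_sequence_matrices):
            """Average base-pairing probability matrices given in alignment coordinates."""
            return np.mean(np.stack(per_sequence_matrices), axis=0)

        def confident_pairs(avg_matrix, threshold=0.5):
            """Return column pairs (i < j) whose averaged pairing probability exceeds threshold."""
            n = avg_matrix.shape[0]
            return [(i, j) for i in range(n) for j in range(i + 1, n)
                    if avg_matrix[i, j] > threshold]

        # Toy usage with two 4-column matrices (values are made up).
        m1 = np.array([[0, 0, 0, 0.9], [0, 0, 0.7, 0], [0, 0.7, 0, 0], [0.9, 0, 0, 0]])
        m2 = np.array([[0, 0, 0, 0.8], [0, 0, 0.2, 0], [0, 0.2, 0, 0], [0.8, 0, 0, 0]])
        avg = averaged_pairing_matrix([m1, m2])
        print(confident_pairs(avg))   # [(0, 3)] with the default threshold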

  14. An efficient multi-resolution GA approach to dental image alignment

    NASA Astrophysics Data System (ADS)

    Nassar, Diaa Eldin; Ogirala, Mythili; Adjeroh, Donald; Ammar, Hany

    2006-02-01

    Automating the process of postmortem identification of individuals using dental records is receiving increased attention in forensic science, especially with the large volume of victims encountered in mass disasters. Dental radiograph alignment is a key step required for automating the dental identification process. In this paper, we address the problem of dental radiograph alignment using a Multi-Resolution Genetic Algorithm (MR-GA) approach. We use location and orientation information of edge points as features; we assume that affine transformations suffice to restore geometric discrepancies between two images of a tooth; we efficiently search the 6D space of affine parameters using GA progressively across multi-resolution image versions; and we use a Hausdorff distance measure to compute the similarity between a reference tooth and a query tooth subject to a possible alignment transform. Testing results based on 52 teeth-pair images suggest that our algorithm converges to reasonable solutions in more than 85% of the test cases, with most of the error in the remaining cases due to excessive misalignments.
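
    As a rough illustration of the search described above, the sketch below evolves 2D affine parameters with a simple genetic-algorithm-style mutation loop and scores candidates by a directed Hausdorff distance between edge point sets. It is a toy, single-resolution sketch under assumed representations, not the paper's multi-resolution algorithm; all names and parameter values are ours.

        import random
        import math

        def apply_affine(points, p):
            """Apply affine transform (a, b, c, d, tx, ty) to a list of (x, y) points."""
            a, b, c, d, tx, ty = p
            return [(a * x + b * y + tx, c * x + d * y + ty) for x, y in points]

        def directed_hausdorff(src, dst):
            """Max over src points of the distance to the nearest dst point."""
            return max(min(math.dist(s, t) for t in dst) for s in src)

        def ga_align(query_pts, ref_pts, pop_size=30, generations=200, sigma=0.05):
            """Evolve affine parameters that map query edge points onto reference edge points."""
            identity = (1.0, 0.0, 0.0, 1.0, 0.0, 0.0)
            population = [tuple(v + random.gauss(0, sigma) for v in identity)
                          for _ in range(pop_size)]
            for _ in range(generations):
                scored = sorted(population,
                                key=lambda p: directed_hausdorff(apply_affine(query_pts, p), ref_pts))
                parents = scored[:pop_size // 2]
                children = [tuple(v + random.gauss(0, sigma) for v in random.choice(parents))
                            for _ in range(pop_size - len(parents))]
                population = parents + children
            return min(population,
                       key=lambda p: directed_hausdorff(apply_affine(query_pts, p), ref_pts))

        # Toy usage: recover a small translation between two point sets.
        ref = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
        query = [(x - 0.1, y + 0.2) for x, y in ref]
        print(ga_align(query, ref))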

  15. Perspective: The missing link in academic career planning and development: pursuit of meaningful and aligned work.

    PubMed

    Lieff, Susan J

    2009-10-01

    Retention of faculty in academic medicine is a growing challenge. It has been suggested that inattention to the humanistic values of the faculty is contributing to this problem. Professional development should consider faculty members' search for meaning, purpose, and professional fulfillment and should support the development of an ability to reflect on these issues. Ensuring the alignment of academic physicians' inner direction with their outer context is critical to professional fulfillment and effectiveness. Personal reflection on the synergy of one's strengths, passions, and values can help faculty members define meaningful work so as to enable clearer career decision making. The premise of this article is that an awareness of and the pursuit of meaningful work and its alignment with the academic context are important considerations in the professional fulfillment and retention of academic faculty. A conceptual framework for understanding meaningful work and alignment and ways in which that framework can be applied and taught in development programs are presented and discussed.

  16. PSI/TM-Coffee: a web server for fast and accurate multiple sequence alignments of regular and transmembrane proteins using homology extension on reduced databases.

    PubMed

    Floden, Evan W; Tommaso, Paolo D; Chatzou, Maria; Magis, Cedrik; Notredame, Cedric; Chang, Jia-Ming

    2016-07-08

    The PSI/TM-Coffee web server performs multiple sequence alignment (MSA) of proteins by combining homology extension with a consistency based alignment approach. Homology extension is performed with Position Specific Iterative (PSI) BLAST searches against a choice of redundant and non-redundant databases. The main novelty of this server is to allow databases of reduced complexity to rapidly perform homology extension. This server also gives the possibility to use transmembrane proteins (TMPs) reference databases to allow even faster homology extension on this important category of proteins. Aside from an MSA, the server also outputs topological prediction of TMPs using the HMMTOP algorithm. Previous benchmarking of the method has shown this approach outperforms the most accurate alignment methods such as MSAProbs, Kalign, PROMALS, MAFFT, ProbCons and PRALINE™. The web server is available at http://tcoffee.crg.cat/tmcoffee. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  17. Perspectives of healthcare providers and HIV-affected individuals and couples during the development of a Safer Conception Counseling Toolkit in Kenya: stigma, fears, and recommendations for the delivery of services.

    PubMed

    Mmeje, Okeoma; Njoroge, Betty; Akama, Eliud; Leddy, Anna; Breitnauer, Brooke; Darbes, Lynae; Brown, Joelle

    2016-01-01

    Reproduction is important to many HIV-affected individuals and couples and healthcare providers (HCPs) are responsible for providing resources to help them safely conceive while minimizing the risk of sexual and perinatal HIV transmission. In order to fulfill their reproductive goals, HIV-affected individuals and their partners need access to information regarding safer methods of conception. The objective of this qualitative study was to develop a Safer Conception Counseling Toolkit that can be used to train HCPs and counsel HIV-affected individuals and couples in HIV care and treatment clinics in Kenya. We conducted a two-phased qualitative study among HCPs and HIV-affected individuals and couples from eight HIV care and treatment sites in Kisumu, Kenya. We conducted in-depth interviews (IDIs) and focus group discussions (FGDs) to assess the perspectives of HCPs and HIV-affected individuals and couples in order to develop and refine the content of the Toolkit. Subsequently, IDIs were conducted among HCPs who were trained using the Toolkit and FGDs among HIV-affected individuals and couples who were counseled with the Toolkit. HIV-related stigma, fears, and recommendations for delivery of safer conception counseling were assessed during the discussions. One hundred and six individuals participated in FGDs and IDIs; 29 HCPs, 49 HIV-affected women and men, and 14 HIV-serodiscordant couples. Participants indicated that a safer conception counseling and training program for HCPs is needed and that routine provision of safer conception counseling may promote maternal and child health by enhancing reproductive autonomy among HIV-affected couples. They also reported that the Toolkit may help dispel the stigma and fears associated with reproduction in HIV-affected couples, while supporting them in achieving their reproductive goals. Additional research is needed to evaluate the Safer Conception Toolkit in order to support its implementation and use in HIV care and treatment programs in Kenya and other HIV endemic regions of sub-Saharan Africa.

  18. Land surface Verification Toolkit (LVT)

    NASA Technical Reports Server (NTRS)

    Kumar, Sujay V.

    2017-01-01

    LVT is a framework developed to provide an automated, consolidated environment for systematic land surface model evaluation. It includes support for a range of in situ, remote-sensing, and other model and reanalysis products, and supports the analysis of outputs from various LIS subsystems, including LIS-DA, LIS-OPT, and LIS-UE. Note: The Land Information System Verification Toolkit (LVT) is a NASA software tool designed to enable the evaluation, analysis and comparison of outputs generated by the Land Information System (LIS). The LVT software is released under the terms and conditions of the NASA Open Source Agreement (NOSA), Version 1.1 or later.

  19. MAVEN Data Analysis and Visualization Toolkits

    NASA Astrophysics Data System (ADS)

    Harter, B., Jr.; DeWolfe, A. W.; Brain, D.; Chaffin, M.

    2017-12-01

    The Mars Atmospheric and Volatile Evolution (MAVEN) mission has been collecting data at Mars since September 2014. The MAVEN Science Data Center has developed software toolkits for analyzing and visualizing the science data. Our Data Intercomparison and Visualization Development Effort (DIVIDE) toolkit is written in IDL, and utilizes the widely used "tplot" IDL libraries. Recently, we have converted DIVIDE into Python in an effort to increase the accessibility of the MAVEN data. This conversion also necessitated the development of a Python version of the tplot libraries, which we have dubbed "PyTplot". PyTplot is generalized to work with missions beyond MAVEN, and our software is available on Github.

  20. An Industrial Physics Toolkit

    NASA Astrophysics Data System (ADS)

    Cummings, Bill

    2004-03-01

    Physicists possess many skills highly valued in industrial companies. However, with the exception of a decreasing number of positions in long range research at large companies, job openings in industry rarely say "Physicist Required." One key to a successful industrial career is to know what subset of your physics skills is most highly valued by a given industry and to continue to build these skills while working. This combination of skills from both academic and industrial experience becomes your "Industrial Physics Toolkit" and is a transferable resource when you change positions or companies. This presentation will describe how one builds and sells your own "Industrial Physics Toolkit" using concrete examples from the speaker's industrial experience.

  1. WIRM: An Open Source Toolkit for Building Biomedical Web Applications

    PubMed Central

    Jakobovits, Rex M.; Rosse, Cornelius; Brinkley, James F.

    2002-01-01

    This article describes an innovative software toolkit that allows the creation of web applications that facilitate the acquisition, integration, and dissemination of multimedia biomedical data over the web, thereby reducing the cost of knowledge sharing. There is a lack of high-level web application development tools suitable for use by researchers, clinicians, and educators who are not skilled programmers. Our Web Interfacing Repository Manager (WIRM) is a software toolkit that reduces the complexity of building custom biomedical web applications. WIRM’s visual modeling tools enable domain experts to describe the structure of their knowledge, from which WIRM automatically generates full-featured, customizable content management systems. PMID:12386108

  2. RAPID Toolkit Creates Smooth Flow Toward New Projects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levine, Aaron; Young, Katherine

    2016-07-01

    Uncertainty about the duration and outcome of the permitting process has historically been seen as a deterrent to investment in renewable energy projects, including new hydropower projects. What if the process were clearer, smoother, faster? That's the purpose of the Regulatory and Permitting Information Desktop (RAPID) Toolkit, developed by the National Renewable Energy Laboratory (NREL) with funding from the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy and the Western Governors' Association. Now, the RAPID Toolkit is being expanded to include information about developing and permitting hydropower projects, with initial outreach and information gathering occurring during 2015.

  3. New Software Developments for Quality Mesh Generation and Optimization from Biomedical Imaging Data

    PubMed Central

    Yu, Zeyun; Wang, Jun; Gao, Zhanheng; Xu, Ming; Hoshijima, Masahiko

    2013-01-01

    In this paper we present a new software toolkit for generating and optimizing surface and volumetric meshes from three-dimensional (3D) biomedical imaging data, targeted at image-based finite element analysis of some biomedical activities in a single material domain. Our toolkit includes a series of geometric processing algorithms including surface re-meshing and quality-guaranteed tetrahedral mesh generation and optimization. All methods described have been encapsulated into a user-friendly graphical interface for easy manipulation and informative visualization of biomedical images and mesh models. Numerous examples are presented to demonstrate the effectiveness and efficiency of the described methods and toolkit. PMID:24252469

  4. Open source tools and toolkits for bioinformatics: significance, and where are we?

    PubMed

    Stajich, Jason E; Lapp, Hilmar

    2006-09-01

    This review summarizes important work in open-source bioinformatics software that has occurred over the past couple of years. The survey is intended to illustrate how programs and toolkits whose source code has been developed or released under an Open Source license have changed informatics-heavy areas of life science research. Rather than creating a comprehensive list of all tools developed over the last 2-3 years, we use a few selected projects encompassing toolkit libraries, analysis tools, data analysis environments and interoperability standards to show how freely available and modifiable open-source software can serve as the foundation for building important applications, analysis workflows and resources.

  5. The PyRosetta Toolkit: a graphical user interface for the Rosetta software suite.

    PubMed

    Adolf-Bryfogle, Jared; Dunbrack, Roland L

    2013-01-01

    The Rosetta Molecular Modeling suite is a command-line-only collection of applications that enable high-resolution modeling and design of proteins and other molecules. Although extremely useful, Rosetta can be difficult to learn for scientists with little computational or programming experience. To that end, we have created a Graphical User Interface (GUI) for Rosetta, called the PyRosetta Toolkit, for creating and running protocols in Rosetta for common molecular modeling and protein design tasks and for analyzing the results of Rosetta calculations. The program is highly extensible so that developers can add new protocols and analysis tools to the PyRosetta Toolkit GUI.

  6. CUDASW++ 3.0: accelerating Smith-Waterman protein database search by coupling CPU and GPU SIMD instructions.

    PubMed

    Liu, Yongchao; Wirawan, Adrianto; Schmidt, Bertil

    2013-04-04

    The maximal sensitivity for local alignments makes the Smith-Waterman algorithm a popular choice for protein sequence database search based on pairwise alignment. However, the algorithm is compute-intensive due to a quadratic time complexity. Corresponding runtimes are further compounded by the rapid growth of sequence databases. We present CUDASW++ 3.0, a fast Smith-Waterman protein database search algorithm, which couples CPU and GPU SIMD instructions and carries out concurrent CPU and GPU computations. For the CPU computation, this algorithm employs SSE-based vector execution units as accelerators. For the GPU computation, we have investigated for the first time a GPU SIMD parallelization, which employs CUDA PTX SIMD video instructions to gain more data parallelism beyond the SIMT execution model. Moreover, sequence alignment workloads are automatically distributed over CPUs and GPUs based on their respective compute capabilities. Evaluation on the Swiss-Prot database shows that CUDASW++ 3.0 gains a performance improvement over CUDASW++ 2.0 of up to 2.9 and 3.2 times, with a maximum performance of 119.0 and 185.6 GCUPS, on a single-GPU GeForce GTX 680 and a dual-GPU GeForce GTX 690 graphics card, respectively. In addition, our algorithm has demonstrated significant speedups over other top-performing tools: SWIPE and BLAST+. CUDASW++ 3.0 is written in CUDA C++ and PTX assembly languages, targeting GPUs based on the Kepler architecture. This algorithm obtains significant speedups over its predecessor, CUDASW++ 2.0, by benefiting from the use of CPU and GPU SIMD instructions as well as the concurrent execution on CPUs and GPUs. The source code and the simulated data are available at http://cudasw.sourceforge.net.
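
    The abstract mentions distributing alignment workloads over CPUs and GPUs in proportion to their compute capabilities. A minimal sketch of such a static split is shown below; the device names and capability numbers are invented for illustration and this is not CUDASW++'s actual scheduling code.

        def split_workload(num_sequences, capabilities):
            """Assign sequence counts to devices in proportion to their relative capability."""
            total = sum(capabilities.values())
            shares = {dev: int(num_sequences * cap / total) for dev, cap in capabilities.items()}
            # Give any remainder from integer truncation to the fastest device.
            fastest = max(capabilities, key=capabilities.get)
            shares[fastest] += num_sequences - sum(shares.values())
            return shares

        # Hypothetical relative throughputs (e.g. measured GCUPS per device).
        capabilities = {"cpu_sse": 25.0, "gpu_0": 60.0, "gpu_1": 60.0}
        print(split_workload(100000, capabilities))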

  7. Implementing a search for gravitational waves from binary black holes with nonprecessing spin

    NASA Astrophysics Data System (ADS)

    Capano, Collin; Harry, Ian; Privitera, Stephen; Buonanno, Alessandra

    2016-06-01

    Searching for gravitational waves (GWs) from binary black holes (BBHs) with LIGO and Virgo involves matched-filtering data against a set of representative signal waveforms—a template bank—chosen to cover the full signal space of interest with as few template waveforms as possible. Although the component black holes may have significant angular momenta (spin), previous searches for BBHs have filtered LIGO and Virgo data using only waveforms where both component spins are zero. This leads to a loss of signal-to-noise ratio for signals where this is not the case. Combining the best available template placement techniques and waveform models, we construct a template bank of GW signals from BBHs with component spins χ1,2 ∈ [-0.99, 0.99] aligned with the orbital angular momentum, component masses m1,2 ∈ [2, 48] M⊙, and total mass Mtotal ≤ 50 M⊙. Using effective-one-body waveforms with spin effects, we show that less than 3% of the maximum signal-to-noise ratio (SNR) of these signals is lost due to the discreteness of the bank, using the early Advanced LIGO noise curve. We use simulated Advanced LIGO noise to compare the sensitivity of this bank to a nonspinning bank covering the same parameter space. In doing so, we consider the competing effects between improved SNR and signal-based vetoes and the increase in the rate of false alarms of the aligned-spin bank due to covering a larger parameter space. We find that the aligned-spin bank can be a factor of 1.3-5 more sensitive than a nonspinning bank to BBHs with dimensionless spins > +0.6 and component masses ≳ 20 M⊙. Even larger gains are obtained for systems with equally high spins but smaller component masses.
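
    Matched filtering against a template bank, as described above, reduces to computing a noise-weighted correlation between the data and each template and maximizing over time shifts. The sketch below shows a simplified time-domain SNR for whitened data with unit-variance noise; real searches use the detector's power spectral density, phase maximization, and normalization conventions that are omitted here, and the toy signal is entirely made up.

        import numpy as np

        def matched_filter_snr(data, template):
            """Peak signal-to-noise ratio of a template in whitened, unit-variance data."""
            n = len(data)
            # Correlate data with the template over all circular time shifts via FFTs.
            d_f = np.fft.rfft(data)
            h_f = np.fft.rfft(template, n)
            correlation = np.fft.irfft(d_f * np.conj(h_f), n)
            sigma = np.sqrt(np.sum(template ** 2))   # template normalization
            return np.max(np.abs(correlation)) / sigma

        # Toy usage: a sine-Gaussian "template" buried in white noise.
        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 4096)
        template = np.sin(2 * np.pi * 60 * t) * np.exp(-((t - 0.5) / 0.05) ** 2)
        data = rng.standard_normal(t.size)
        data += np.roll(template, 1000)          # inject the signal at an arbitrary offset
        print(round(matched_filter_snr(data, template), 1))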

  8. CUDA compatible GPU cards as efficient hardware accelerators for Smith-Waterman sequence alignment

    PubMed Central

    Manavski, Svetlin A; Valle, Giorgio

    2008-01-01

    Background Searching for similarities in protein and DNA databases has become a routine procedure in Molecular Biology. The Smith-Waterman algorithm has been available for more than 25 years. It is based on a dynamic programming approach that explores all the possible alignments between two sequences; as a result it returns the optimal local alignment. Unfortunately, the computational cost is very high, requiring a number of operations proportional to the product of the length of two sequences. Furthermore, the exponential growth of protein and DNA databases makes the Smith-Waterman algorithm unrealistic for searching similarities in large sets of sequences. For these reasons heuristic approaches such as those implemented in FASTA and BLAST tend to be preferred, allowing faster execution times at the cost of reduced sensitivity. The main motivation of our work is to exploit the huge computational power of commonly available graphic cards, to develop high performance solutions for sequence alignment. Results In this paper we present what we believe is the fastest solution of the exact Smith-Waterman algorithm running on commodity hardware. It is implemented in the recently released CUDA programming environment by NVidia. CUDA allows direct access to the hardware primitives of the last-generation Graphics Processing Units (GPU) G80. Speeds of more than 3.5 GCUPS (Giga Cell Updates Per Second) are achieved on a workstation running two GeForce 8800 GTX. Exhaustive tests have been done to compare our implementation to SSEARCH and BLAST, running on a 3 GHz Intel Pentium IV processor. Our solution was also compared to a recently published GPU implementation and to a Single Instruction Multiple Data (SIMD) solution. These tests show that our implementation performs from 2 to 30 times faster than any other previous attempt available on commodity hardware. Conclusions The results show that graphic cards are now sufficiently advanced to be used as efficient hardware accelerators for sequence alignment. Their performance is better than any alternative available on commodity hardware platforms. The solution presented in this paper allows large scale alignments to be performed at low cost, using the exact Smith-Waterman algorithm instead of the largely adopted heuristic approaches. PMID:18387198
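
    GCUPS (giga cell updates per second), the throughput unit quoted here and in several of the other records, is simply the number of dynamic-programming matrix cells evaluated per second. A small helper makes the arithmetic explicit; the example numbers below are invented for illustration and are not taken from any of the cited papers.

        def gcups(query_length, database_residues, seconds):
            """Giga cell updates per second for a pairwise dynamic-programming search."""
            return (query_length * database_residues) / seconds / 1e9

        # Illustrative only: a 200-residue query against a 1e8-residue database in 2 s.
        print(gcups(200, 1e8, 2.0))   # 10.0 GCUPS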

  9. A Toolkit to Implement Graduate Attributes in Geography Curricula

    ERIC Educational Resources Information Center

    Spronken-Smith, Rachel; McLean, Angela; Smith, Nell; Bond, Carol; Jenkins, Martin; Marshall, Stephen; Frielick, Stanley

    2016-01-01

    This article uses findings from a project on engagement with graduate outcomes across higher education institutions in New Zealand to produce a toolkit for implementing graduate attributes in geography curricula. Key facets include strong leadership; academic developers to facilitate conversations about graduate attributes and teaching towards…

  10. Developments in Geometric Metadata and Tools at the PDS Ring-Moon Systems Node

    NASA Astrophysics Data System (ADS)

    Showalter, M. R.; Ballard, L.; French, R. S.; Gordon, M. K.; Tiscareno, M. S.

    2018-04-01

    Object-Oriented Python/SPICE (OOPS) is an overlay on the SPICE toolkit that vastly simplifies and speeds up geometry calculations for planetary data products. This toolkit is the basis for much of the development at the PDS Ring-Moon Systems Node.

  11. Teacher Quality Toolkit. 2nd Edition

    ERIC Educational Resources Information Center

    Lauer, Patricia A.; Dean, Ceri B.; Martin-Glenn, Mya L.; Asensio, Margaret L.

    2005-01-01

    The Teacher Quality Toolkit addresses the continuum of teacher learning by providing tools that can be used to improve both preservice, and inservice teacher education. Each chapter provides self assessment tools that can guide progress toward improved teacher quality and describes resources for designing exemplary programs and practices. Chapters…

  12. WHU at TREC KBA Vital Filtering Track 2014

    DTIC Science & Technology

    2014-11-01

    Our approach is to view the problem as a classification problem and use the Stanford NLP Toolkit to extract necessary information. Various kinds of features are leveraged to ... profile of an entity.

  13. Basic Internet Software Toolkit.

    ERIC Educational Resources Information Center

    Buchanan, Larry

    1998-01-01

    Once schools are connected to the Internet, the next step is getting network workstations configured for Internet access. This article describes a basic toolkit comprising software currently available on the Internet for free or modest cost. Lists URLs for Web browser, Telnet, FTP, file decompression, portable document format (PDF) reader,…

  14. Field trials of a novel toolkit for evaluating 'intangible' values-related dimensions of projects.

    PubMed

    Burford, Gemma; Velasco, Ismael; Janoušková, Svatava; Zahradnik, Martin; Hak, Tomas; Podger, Dimity; Piggot, Georgia; Harder, Marie K

    2013-02-01

    A novel toolkit has been developed, using an original approach to develop its components, for the purpose of evaluating 'soft' outcomes and processes that have previously been generally considered 'intangible': those which are specifically values based. This represents a step-wise, significant, change in provision for the assessment of values-based achievements that are of absolutely key importance to most civil society organisations (CSOs) and values-based businesses, and fills a known gap in evaluation practice. In this paper, we demonstrate the significance and rigour of the toolkit by presenting an evaluation of it in three diverse scenarios where different CSOs use it to co-evaluate locally relevant outcomes and processes to obtain results which are both meaningful to them and potentially comparable across organisations. A key strength of the toolkit is its original use of a prior generated, peer-elicited 'menu' of values-based indicators which provides a framework for user CSOs to localise. Principles of participatory, process-based and utilisation-focused evaluation are embedded in this toolkit and shown to be critical to its success, achieving high face-validity and wide applicability. The emerging contribution of this next-generation evaluation tool to other fields, such as environmental values, development and environmental sustainable development, shared values, business, education and organisational change is outlined. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Practical computational toolkits for dendrimers and dendrons structure design.

    PubMed

    Martinho, Nuno; Silva, Liana C; Florindo, Helena F; Brocchini, Steve; Barata, Teresa; Zloh, Mire

    2017-09-01

    Dendrimers and dendrons offer an excellent platform for developing novel drug delivery systems and medicines. The rational design and further development of these repetitively branched systems are restricted by difficulties in scalable synthesis and structural determination, which can be overcome by judicious use of molecular modelling and molecular simulations. A major difficulty to utilise in silico studies to design dendrimers lies in the laborious generation of their structures. Current modelling tools utilise automated assembly of simpler dendrimers or the inefficient manual assembly of monomer precursors to generate more complicated dendrimer structures. Herein we describe two novel graphical user interface toolkits written in Python that provide an improved degree of automation for rapid assembly of dendrimers and generation of their 2D and 3D structures. Our first toolkit uses the RDkit library, SMILES nomenclature of monomers and SMARTS reaction nomenclature to generate SMILES and mol files of dendrimers without 3D coordinates. These files are used for simple graphical representations and storing their structures in databases. The second toolkit assembles complex topology dendrimers from monomers to construct 3D dendrimer structures to be used as starting points for simulation using existing and widely available software and force fields. Both tools were validated for ease-of-use to prototype dendrimer structure and the second toolkit was especially relevant for dendrimers of high complexity and size.
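
    The first toolkit described above builds dendrimer SMILES by applying SMARTS-encoded reactions to monomer SMILES with the RDKit library. A minimal sketch of one such growth step is shown below; the amide-coupling SMARTS, the toy monomer SMILES, and the single-step function are our own simplified assumptions, not the authors' actual templates or code.

        from rdkit import Chem
        from rdkit.Chem import AllChem

        # Assumed, simplified amide coupling: carboxylic acid + primary amine -> amide.
        coupling = AllChem.ReactionFromSmarts(
            "[C:1](=[O:2])[OX2H1].[NX3;H2:3]>>[C:1](=[O:2])[N:3]")

        def grow_one_step(core_smiles, branch_smiles):
            """Apply the coupling once per matching acid/amine site and collect product SMILES."""
            core = Chem.MolFromSmiles(core_smiles)
            branch = Chem.MolFromSmiles(branch_smiles)
            smiles = set()
            for products in coupling.RunReactants((core, branch)):
                product = products[0]
                Chem.SanitizeMol(product)
                smiles.add(Chem.MolToSmiles(product))
            return sorted(smiles)

        # Toy monomers (not from the paper): a diacid core and an amino diol branch.
        print(grow_one_step("OC(=O)CCC(=O)O", "NCC(CO)CO"))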

  16. Practical computational toolkits for dendrimers and dendrons structure design

    NASA Astrophysics Data System (ADS)

    Martinho, Nuno; Silva, Liana C.; Florindo, Helena F.; Brocchini, Steve; Barata, Teresa; Zloh, Mire

    2017-09-01

    Dendrimers and dendrons offer an excellent platform for developing novel drug delivery systems and medicines. The rational design and further development of these repetitively branched systems are restricted by difficulties in scalable synthesis and structural determination, which can be overcome by judicious use of molecular modelling and molecular simulations. A major difficulty to utilise in silico studies to design dendrimers lies in the laborious generation of their structures. Current modelling tools utilise automated assembly of simpler dendrimers or the inefficient manual assembly of monomer precursors to generate more complicated dendrimer structures. Herein we describe two novel graphical user interface toolkits written in Python that provide an improved degree of automation for rapid assembly of dendrimers and generation of their 2D and 3D structures. Our first toolkit uses the RDkit library, SMILES nomenclature of monomers and SMARTS reaction nomenclature to generate SMILES and mol files of dendrimers without 3D coordinates. These files are used for simple graphical representations and storing their structures in databases. The second toolkit assembles complex topology dendrimers from monomers to construct 3D dendrimer structures to be used as starting points for simulation using existing and widely available software and force fields. Both tools were validated for ease-of-use to prototype dendrimer structure and the second toolkit was especially relevant for dendrimers of high complexity and size.

  17. The ProteoRed MIAPE web toolkit: A User-friendly Framework to Connect and Share Proteomics Standards*

    PubMed Central

    Medina-Aunon, J. Alberto; Martínez-Bartolomé, Salvador; López-García, Miguel A.; Salazar, Emilio; Navajas, Rosana; Jones, Andrew R.; Paradela, Alberto; Albar, Juan P.

    2011-01-01

    The development of the HUPO-PSI's (Proteomics Standards Initiative) standard data formats and MIAPE (Minimum Information About a Proteomics Experiment) guidelines should improve proteomics data sharing within the scientific community. Proteomics journals have encouraged the use of these standards and guidelines to improve the quality of experimental reporting and ease the evaluation and publication of manuscripts. However, there is an evident lack of bioinformatics tools specifically designed to create and edit standard file formats and reports, or embed them within proteomics workflows. In this article, we describe a new web-based software suite (The ProteoRed MIAPE web toolkit) that performs several complementary roles related to proteomic data standards. First, it can verify that the reports fulfill the minimum information requirements of the corresponding MIAPE modules, highlighting inconsistencies or missing information. Second, the toolkit can convert several XML-based data standards directly into human readable MIAPE reports stored within the ProteoRed MIAPE repository. Finally, it can also perform the reverse operation, allowing users to export from MIAPE reports into XML files for computational processing, data sharing, or public database submission. The toolkit is thus the first application capable of automatically linking the PSI's MIAPE modules with the corresponding XML data exchange standards, enabling bidirectional conversions. This toolkit is freely available at http://www.proteored.org/MIAPE/. PMID:21983993

  18. The Liquid Argon Software Toolkit (LArSoft): Goals, Status and Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pordes, Rush; Snider, Erica

    LArSoft is a toolkit that provides a software infrastructure and algorithms for the simulation, reconstruction and analysis of events in Liquid Argon Time Projection Chambers (LArTPCs). It is used by the ArgoNeuT, LArIAT, MicroBooNE, DUNE (including 35ton prototype and ProtoDUNE) and SBND experiments. The LArSoft collaboration provides an environment for the development, use, and sharing of code across experiments. The ultimate goal is to develop fully automatic processes for reconstruction and analysis of LArTPC events. The toolkit is based on the art framework and has a well-defined architecture to interface to other packages, including to GEANT4 and GENIE simulation software and the Pandora software development kit for pattern recognition. It is designed to facilitate and support the evolution of algorithms including their transition to new computing platforms. The development of the toolkit is driven by the scientific stakeholders involved. The core infrastructure includes standard definitions of types and constants, means to input experiment geometries as well as meta- and event-data in several formats, and relevant general utilities. Examples of algorithms experiments have contributed to date are: photon propagation; particle identification; hit finding, track finding and fitting; electromagnetic shower identification and reconstruction. We report on the status of the toolkit and plans for future work.

  19. Organizational Context Matters: A Research Toolkit for Conducting Standardized Case Studies of Integrated Care Initiatives

    PubMed Central

    Grudniewicz, Agnes; Gray, Carolyn Steele; Wodchis, Walter P.; Carswell, Peter; Baker, G. Ross

    2017-01-01

    Introduction: The variable success of integrated care initiatives has led experts to recommend tailoring design and implementation to the organizational context. Yet, organizational contexts are rarely described, understood, or measured with sufficient depth and breadth in empirical studies or in practice. We thus lack knowledge of when and specifically how organizational contexts matter. To facilitate the accumulation of evidence, we developed a research toolkit for conducting case studies using standardized measures of the (inter-)organizational context for integrating care. Theory and Methods: We used a multi-method approach to develop the research toolkit: (1) development and validation of the Context and Capabilities for Integrating Care (CCIC) Framework, (2) identification, assessment, and selection of survey instruments, (3) development of document review methods, (4) development of interview guide resources, and (5) pilot testing of the document review guidelines, consolidated survey, and interview guide. Results: The toolkit provides a framework and measurement tools that examine 18 organizational and inter-organizational factors that affect the implementation and success of integrated care initiatives. Discussion and Conclusion: The toolkit can be used to characterize and compare organizational contexts across cases and enable comparison of results across studies. This information can enhance our understanding of the influence of organizational contexts, support the transfer of best practices, and help explain why some integrated care initiatives succeed and some fail. PMID:28970750

  20. Understanding Disabilities in American Indian and Alaska Native Communities. Toolkit Guide.

    ERIC Educational Resources Information Center

    National Council on Disability, Washington, DC.

    This "toolkit" document is intended to provide a culturally appropriate set of resources to address the unique political and legal concerns of people with disabilities in American Indian/Alaska Native (AI/AN) communities. It provides information on education, health, vocational rehabilitation (VR), independent living, model approaches, and…

  1. Object Toolkit Version 4.2 Users Manual

    DTIC Science & Technology

    2014-10-31

    When building an object for MEM, Object Toolkit has an Orbit menu that allows the user to specify and edit a heliocentric or geocentric orbit through a Geocentric Orbit dialog box (illustrated in Figures 39 and 133 of the manual).

  2. Educating Globally Competent Citizens: A Toolkit. Second Edition

    ERIC Educational Resources Information Center

    Elliott-Gower, Steven; Falk, Dennis R.; Shapiro, Martin

    2012-01-01

    Educating Globally Competent Citizens, a product of AASCU's American Democracy Project and its Global Engagement Initiative, introduces readers to a set of global challenges facing society based on the Center for Strategic and International Studies' 7 Revolutions. The toolkit is designed to aid faculty in incorporating global challenges into new…

  3. 75 FR 53969 - Office of Community Services: Notice To Award an Expansion Supplement

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-02

    ... related to job creation and new initiatives that target careers in energy efficiency and other...; and (3) collects, develops, and disseminates resources related to job creation and careers related to... through professional consultations and peer assistance sessions; and online toolkit(s). The T/TA CAP will...

  4. The Student Writing Toolkit: Enhancing Undergraduate Teaching of Scientific Writing in the Biological Sciences

    ERIC Educational Resources Information Center

    Dirrigl, Frank J., Jr.; Noe, Mark

    2014-01-01

    Teaching scientific writing in biology classes is challenging for both students and instructors. This article offers and reviews several useful "toolkit" items that improve student writing. These include sentence and paper-length templates, funnelling and compartmentalisation, and preparing compendiums of corrections. In addition,…

  5. The Archivists' Toolkit: Another Step toward Streamlined Archival Processing

    ERIC Educational Resources Information Center

    Westbrook, Bradley D.; Mandell, Lee; Shepherd, Kelcy; Stevens, Brian; Varghese, Jason

    2006-01-01

    The Archivists' Toolkit is a software application currently in development and designed to support the creation and management of archival information. This article summarizes the development of the application, including some of the problems the application is designed to resolve. Primary emphasis is placed on describing the application's…

  6. Healthy People 2010: Oral Health Toolkit

    ERIC Educational Resources Information Center

    Isman, Beverly

    2007-01-01

    The purpose of this Toolkit is to provide guidance, technical tools, and resources to help states, territories, tribes and communities develop and implement successful oral health components of Healthy People 2010 plans as well as other oral health plans. These plans are useful for: (1) promoting, implementing and tracking oral health objectives;…

  7. Using Toolkits to Achieve STEM Enterprise Learning Outcomes

    ERIC Educational Resources Information Center

    Watts, Carys A.; Wray, Katie

    2012-01-01

    Purpose: The purpose of this paper is to evaluate the effectiveness of using several commercial tools in science, technology, engineering and maths (STEM) subjects for enterprise education at Newcastle University, UK. Design/methodology/approach: The paper provides an overview of existing toolkit use in higher education, before reviewing where and…

  8. BAT - The Bayesian analysis toolkit

    NASA Astrophysics Data System (ADS)

    Caldwell, Allen; Kollár, Daniel; Kröninger, Kevin

    2009-11-01

    We describe the development of a new toolkit for data analysis. The analysis package is based on Bayes' Theorem, and is realized with the use of Markov Chain Monte Carlo. This gives access to the full posterior probability distribution. Parameter estimation, limit setting and uncertainty propagation are implemented in a straightforward manner.
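
    BAT's core idea, sampling the full posterior with Markov Chain Monte Carlo, can be illustrated with a few lines of a random-walk Metropolis sampler. BAT itself is a C++ toolkit; this standalone Python sketch with a made-up one-dimensional Gaussian posterior only shows the algorithmic idea, not BAT's interface.

        import math
        import random

        def log_posterior(theta):
            """Toy log-posterior: a standard normal in one parameter."""
            return -0.5 * theta ** 2

        def metropolis(log_post, n_samples=20000, step=1.0, start=0.0):
            """Random-walk Metropolis sampling of a one-dimensional posterior."""
            samples, current = [], start
            current_lp = log_post(current)
            for _ in range(n_samples):
                proposal = current + random.gauss(0.0, step)
                proposal_lp = log_post(proposal)
                # Accept with probability min(1, posterior ratio).
                if random.random() < math.exp(min(0.0, proposal_lp - current_lp)):
                    current, current_lp = proposal, proposal_lp
                samples.append(current)
            return samples

        chain = metropolis(log_posterior)
        print(round(sum(chain) / len(chain), 2))   # should be close to 0 for the toy posterior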

  9. A Toolkit for Teacher Engagement

    ERIC Educational Resources Information Center

    Grantmakers for Education, 2014

    2014-01-01

    Teachers are critical to the success of education grantmaking strategies, yet in talking with them we discovered that the world of philanthropy is often a mystery. GFE's Toolkit for Teacher Engagement aims to assist funders in authentically and effectively involving teachers in the education reform and innovation process. Built directly from the…

  10. The Mentoring Toolkit 2.0: Resources for Developing Programs for Incarcerated Youth. Guide

    ERIC Educational Resources Information Center

    Zaugg, Nathan; Jarjoura, Roger

    2017-01-01

    "The Mentoring Toolkit 2.0: Resources for Developing Programs for Incarcerated Youth" provides information, program descriptions, and links to important resources that can assist juvenile correctional facilities and other organizations to design effective mentoring programs for neglected and delinquent youth, particularly those who are…

  11. Making Schools the Model for Healthier Environments Toolkit: What It Is

    ERIC Educational Resources Information Center

    Robert Wood Johnson Foundation, 2012

    2012-01-01

    Healthy students perform better. Poor nutrition and inadequate physical activity can affect not only academic achievement, but also other factors such as absenteeism, classroom behavior, ability to concentrate, self-esteem, cognitive performance, and test scores. This toolkit provides information to help make schools the model for healthier…

  12. Policy to Performance Toolkit: Transitioning Adults to Opportunity

    ERIC Educational Resources Information Center

    Alamprese, Judith A.; Limardo, Chrys

    2012-01-01

    The "Policy to Performance Toolkit" is designed to provide state adult education staff and key stakeholders with guidance and tools to use in developing, implementing, and monitoring state policies and their associated practices that support an effective state adult basic education (ABE) to postsecondary education and training transition…

  13. Excellence in Teaching End-of-Life Care. A New Multimedia Toolkit for Nurse Educators.

    ERIC Educational Resources Information Center

    Wilkie, Diana J.; Judge, Kay M.; Wells, Marjorie J.; Berkley, Ila Meredith

    2001-01-01

    Describes a multimedia toolkit for teaching palliative care in nursing, which contains modules on end-of-life topics: comfort, connections, ethics, grief, impact, and well-being. Other contents include myths, definitions, pre- and postassessments, teaching materials, case studies, learning activities, and resources. (SK)

  14. Designing a Bioengine for Detection and Analysis of Base String on an Affected Sequence in High-Concentration Regions

    PubMed Central

    Mandal, Bijoy Kumar; Kim, Tai-hoon

    2013-01-01

    We design an algorithm for a bioengine, a program that enables searching for optimal alignments between two sequences: a host sequence (normal plant) and a query sequence (virus). Searching for homologues has become a routine operation on biological sequences, here performed over the 4 × 4 nucleotide combinations with different subsequence lengths (word sizes). The program takes advantage of the high degree of homology between such sequences to construct an alignment of the matching regions. A main aim is to detect overlapping reading frames. The program also enables selection of highly infected clones, the highest-matching regions with minimal gap or mismatch zones, and unique virus clone matches. It is a small, portable, interactive, front-end program intended to find regions of matching between the host sequence and query subsequences. All operations are carried out in fractions of a second, depending on the required task and on the sequence length. PMID:24000321

  15. Word aligned bitmap compression method, data structure, and apparatus

    DOEpatents

    Wu, Kesheng; Shoshani, Arie; Otoo, Ekow

    2004-12-14

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique is comprised of a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiencies in constructing compressed bitmaps. Combined together, this technique may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
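
    The word-aligned idea behind WAH can be illustrated with a drastically simplified encoder: break the bitmap into word-sized groups and emit either a literal group or a fill token that counts consecutive identical all-zero or all-one groups. The 8-bit "word" size and the token layout below are our own simplifications for readability, not the patented WAH format.

        def encode(bits, word=8):
            """Compress a bit string into literal and fill tokens over word-sized groups."""
            groups = [bits[i:i + word].ljust(word, "0") for i in range(0, len(bits), word)]
            tokens, i = [], 0
            while i < len(groups):
                g = groups[i]
                if set(g) <= {"0"} or set(g) <= {"1"}:
                    # Count how many identical all-zero or all-one groups follow.
                    run = 1
                    while i + run < len(groups) and groups[i + run] == g:
                        run += 1
                    tokens.append(("fill", g[0], run))
                    i += run
                else:
                    tokens.append(("literal", g))
                    i += 1
            return tokens

        def decode(tokens, word=8):
            """Invert encode(); trailing padding zeros are not removed."""
            return "".join(t[1] * word * t[2] if t[0] == "fill" else t[1] for t in tokens)

        bitmap = "00000000" * 40 + "00010010" + "11111111" * 3
        tokens = encode(bitmap)
        print(tokens)
        print(decode(tokens) == bitmap)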

  16. MetPetDB: A database for metamorphic geochemistry

    NASA Astrophysics Data System (ADS)

    Spear, Frank S.; Hallett, Benjamin; Pyle, Joseph M.; Adalı, Sibel; Szymanski, Boleslaw K.; Waters, Anthony; Linder, Zak; Pearce, Shawn O.; Fyffe, Matthew; Goldfarb, Dennis; Glickenhouse, Nickolas; Buletti, Heather

    2009-12-01

    We present a data model for the initial implementation of MetPetDB, a geochemical database specific to metamorphic rock samples. The database is designed around the concept of preservation of spatial relationships, at all scales, of chemical analyses and their textural setting. Objects in the database (samples) represent physical rock samples; each sample may contain one or more subsamples with associated geochemical and image data. Samples, subsamples, geochemical data, and images are described with attributes (some required, some optional); these attributes also serve as search delimiters. All data in the database are classified as published (i.e., archived or published data), public or private. Public and published data may be freely searched and downloaded. All private data is owned; permission to view, edit, download and otherwise manipulate private data may be granted only by the data owner; all such editing operations are recorded by the database to create a data version log. The sharing of data permissions among a group of collaborators researching a common sample is done by the sample owner through the project manager. User interaction with MetPetDB is hosted by a web-based platform based upon the Java servlet application programming interface, with the PostgreSQL relational database. The database web portal includes modules that allow the user to interact with the database: registered users may save and download public and published data, upload private data, create projects, and assign permission levels to project collaborators. An Image Viewer module provides for spatial integration of image and geochemical data. A toolkit consisting of plotting and geochemical calculation software for data analysis and a mobile application for viewing the public and published data is being developed. Future issues to address include population of the database, integration with other geochemical databases, development of the analysis toolkit, creation of data models for derivative data, and building a community-wide user base. It is believed that this and other geochemical databases will enable more productive collaborations, generate more efficient research efforts, and foster new developments in basic research in the field of solid earth geochemistry.

  17. Single-Copy Genes as Molecular Markers for Phylogenomic Studies in Seed Plants

    PubMed Central

    De La Torre, Amanda R.; Sterck, Lieven; Cánovas, Francisco M.; Avila, Concepción; Merino, Irene; Cabezas, José Antonio; Cervera, María Teresa; Ingvarsson, Pär K.

    2017-01-01

    Phylogenetic relationships among seed plant taxa, especially within the gymnosperms, remain contested. In contrast to angiosperms, for which several genomic, transcriptomic and phylogenetic resources are available, there are few, if any, molecular markers that allow broad comparisons among gymnosperm species. With few gymnosperm genomes available, recently obtained gymnosperm transcriptomes are a valuable resource for identifying single-copy gene families as molecular markers for phylogenomic analysis in seed plants. Taking advantage of an increasing number of available genomes and transcriptomes, we identified single-copy genes in a broad collection of seed plants and used these to infer phylogenetic relationships between major seed plant taxa. This study aims to extend the current phylogenetic toolkit for seed plants, assess its ability to resolve the seed plant phylogeny, and discuss potential factors affecting phylogenetic reconstruction. In total, we identified 3,072 single-copy genes in 31 gymnosperms and 2,156 single-copy genes in 34 angiosperms. All studied seed plants shared 1,469 single-copy genes, which are generally involved in functions such as DNA metabolism, cell cycle, and photosynthesis. A selected set of 106 single-copy genes provided good resolution of the seed plant phylogeny except for gnetophytes. Although some of our analyses support a sister relationship between gnetophytes and other gymnosperms, phylogenetic trees from concatenated alignments without 3rd codon positions and from amino acid alignments under the CAT + GTR model support gnetophytes as a sister group to Pinaceae. Our phylogenomic analyses demonstrate that, in general, single-copy genes can uncover both recent and deep divergences of the seed plant phylogeny. PMID:28460034
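    As a simple illustration of the kind of filter implied by these counts, the sketch below keeps only gene families present in exactly one copy in every species; the family identifiers and copy numbers are toy placeholders, not the study's data.

        # Toy example of selecting shared single-copy gene families.
        copy_counts = {
            # family id: {species: number of gene copies}
            "OG0001": {"Pinus": 1, "Picea": 1, "Ginkgo": 1, "Arabidopsis": 1},
            "OG0002": {"Pinus": 2, "Picea": 1, "Ginkgo": 1, "Arabidopsis": 1},  # duplicated
            "OG0003": {"Pinus": 1, "Picea": 1, "Ginkgo": 0, "Arabidopsis": 1},  # missing
        }

        def shared_single_copy(families, species):
            """Return families present in exactly one copy in every listed species."""
            return [
                fam for fam, counts in families.items()
                if all(counts.get(sp, 0) == 1 for sp in species)
            ]

        print(shared_single_copy(copy_counts, ["Pinus", "Picea", "Ginkgo", "Arabidopsis"]))
        # ['OG0001']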

  18. Modeling and prediction of human word search behavior in interactive machine translation

    NASA Astrophysics Data System (ADS)

    Ji, Duo; Yu, Bai; Ma, Bin; Ye, Na

    2017-12-01

    As a form of computer-aided translation, interactive machine translation reduces the repetitive and mechanical operations of manual translation through a variety of methods, thereby improving translation efficiency, and it plays an important role in practical translation work. In this paper, we take users' frequent word searches during the translation process as the object of study and reformulate this behavior as a translation-selection problem for the current segment. The paper presents a prediction model that jointly exploits an alignment model, a translation model, and a language model of word-search behavior. The model achieves highly accurate prediction of word searches and reduces the switching between mouse and keyboard operations during translation.
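    The abstract does not give the model's exact form, but a common way to combine these three components is a weighted log-linear score over candidate target words; the sketch below illustrates that idea with toy probability tables and hypothetical function names, not the paper's actual model.

        # Illustrative log-linear combination of translation, alignment, and
        # language-model scores for ranking likely word searches (toy values).
        import math

        # P(target word | source word) -- lexical translation probabilities
        translation = {"maison": {"house": 0.7, "home": 0.25, "household": 0.05}}
        # P(target word | previous target word) -- a bigram language model
        language = {"the": {"house": 0.04, "home": 0.03, "household": 0.001}}

        def alignment_score(src_pos, tgt_pos, src_len, tgt_len):
            # Prefer target positions near the diagonal of the alignment matrix.
            return math.exp(-abs(src_pos / src_len - tgt_pos / tgt_len))

        def predict_search_word(src_word, src_pos, src_len, prev_tgt, tgt_pos, tgt_len,
                                weights=(1.0, 0.5, 0.8)):
            """Rank candidate target words the user is likely to search for."""
            w_tm, w_am, w_lm = weights
            scores = {}
            for cand, p_tm in translation.get(src_word, {}).items():
                p_lm = language.get(prev_tgt, {}).get(cand, 1e-6)
                scores[cand] = (w_tm * math.log(p_tm)
                                + w_am * math.log(alignment_score(src_pos, tgt_pos, src_len, tgt_len))
                                + w_lm * math.log(p_lm))
            return sorted(scores, key=scores.get, reverse=True)

        # Predict what the translator will look up for "maison" after typing "the".
        print(predict_search_word("maison", 2, 5, "the", 2, 5))  # ['house', 'home', 'household']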

  19. HIA: a genome mapper using hybrid index-based sequence alignment.

    PubMed

    Choi, Jongpill; Park, Kiejung; Cho, Seong Beom; Chung, Myungguen

    2015-01-01

    A number of alignment tools have been developed to align sequencing reads to the human reference genome. The scale of data from next-generation sequencing (NGS) experiments, however, is increasing rapidly. Recent studies based on NGS technology have routinely produced exome or whole-genome sequences from several hundred or several thousand samples. To accommodate the growing need to analyze very large NGS data sets, faster, more sensitive, and more accurate mapping tools are necessary. HIA uses two indices: a hash table and a suffix array. The hash table performs direct lookup of a q-gram, and the suffix array performs very fast lookup of variable-length strings by exploiting binary search. We observed that combining the hash table and suffix array (a hybrid index) is much faster than the suffix array alone for finding a substring in the reference sequence. Here, we define a matching region (MR) as a longest common substring between the reference and a read, and candidate alignment regions (CARs) as lists of MRs that lie close to one another. The hybrid index is used to find CARs between the reference and a read. We found that aligning only the unmatched regions within a CAR is much faster than aligning the whole CAR. In benchmark analysis, HIA outperformed the other aligners in mapping speed without significant loss of mapping accuracy. Our experiments show that the hybrid of hash table and suffix array is useful, in terms of speed, for mapping NGS reads to the human reference genome sequence. In conclusion, our tool is appropriate for aligning the massive data sets generated by NGS.
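    The following is a small, self-contained sketch of the hybrid-index lookup described above, applied to a toy reference string: a hash table maps each q-gram to its interval in the suffix array, and binary search then extends the match one character at a time. Names such as build_index and longest_match are illustrative, not HIA's actual interface.

        # Hybrid index sketch: a q-gram hash table narrows a suffix-array interval,
        # then binary search extends the match (illustrative names, toy data).
        from bisect import bisect_left, bisect_right

        Q = 3  # q-gram length used for the direct hash lookup

        def build_index(ref):
            """Return (suffix array, q-gram -> suffix-array interval) for ref."""
            sa = sorted(range(len(ref)), key=lambda i: ref[i:])
            buckets = {}
            for rank, pos in enumerate(sa):
                gram = ref[pos:pos + Q]
                if len(gram) == Q:
                    lo, _ = buckets.get(gram, (rank, rank))
                    buckets[gram] = (lo, rank + 1)   # half-open interval in sa
            return sa, buckets

        def longest_match(ref, sa, buckets, read):
            """Longest prefix of `read` found in `ref` via the hybrid index."""
            interval = buckets.get(read[:Q])
            if interval is None:
                return None, 0
            lo, hi = interval
            positions = [sa[i] for i in range(lo, hi)]
            suffixes = [ref[p:] for p in positions]   # already lexicographically sorted
            length = Q
            while length < len(read):
                prefix = read[:length + 1]
                left = bisect_left(suffixes, prefix)
                right = bisect_right(suffixes, prefix + "\xff")
                if left == right:
                    break
                suffixes, positions = suffixes[left:right], positions[left:right]
                length += 1
            return positions[0], length

        ref = "ACGTACGGACGT"
        sa, buckets = build_index(ref)
        print(longest_match(ref, sa, buckets, "ACGGAC"))  # (4, 6): 6-bp match at position 4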

  20. A Toolkit for Forward/Inverse Problems in Electrocardiography within the SCIRun Problem Solving Environment

    PubMed Central

    Burton, Brett M; Tate, Jess D; Erem, Burak; Swenson, Darrell J; Wang, Dafang F; Steffen, Michael; Brooks, Dana H; van Dam, Peter M; Macleod, Rob S

    2012-01-01

    Computational modeling in electrocardiography often requires the examination of cardiac forward and inverse problems in order to non-invasively analyze physiological events that are otherwise inaccessible or unethical to explore. The study of these models can be performed in the open-source SCIRun problem solving environment developed at the Center for Integrative Biomedical Computing (CIBC). A new toolkit within SCIRun provides researchers with essential frameworks for constructing and manipulating electrocardiographic forward and inverse models in a highly efficient and interactive way. The toolkit contains sample networks, tutorials and documentation which direct users through SCIRun-specific approaches in the assembly and execution of these specific problems. PMID:22254301
