Sample records for search tool analysis

  1. Combining the Bourne-Shell, sed and awk in the UNIX Environment for Language Analysis.

    ERIC Educational Resources Information Center

    Schmitt, Lothar M.; Christianson, Kiel T.

    This document describes how to construct tools for language analysis in research and teaching using the Bourne-shell, sed, and awk, three search tools, in the UNIX operating system. Applications include: searches for words, phrases, grammatical patterns, and phonemic patterns in text; statistical analysis of text in regard to such searches,…
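
    As a rough illustration of the kind of word, phrase, and pattern searches described above, the sketch below does the same job in Python rather than with the Bourne-shell, sed, and awk; the input file name and target phrase are hypothetical examples.

```python
# Illustrative only: a Python stand-in for the shell/sed/awk-style pipelines the
# record describes -- counting word frequencies and occurrences of a target phrase.
# The file name "corpus.txt" and the phrase are hypothetical.
import re
from collections import Counter

def word_frequencies(path):
    """Return a Counter of lower-cased word frequencies in the file at `path`."""
    with open(path, encoding="utf-8") as fh:
        words = re.findall(r"[A-Za-z']+", fh.read().lower())
    return Counter(words)

def phrase_count(path, phrase):
    """Count non-overlapping, case-insensitive occurrences of `phrase`."""
    with open(path, encoding="utf-8") as fh:
        return len(re.findall(re.escape(phrase.lower()), fh.read().lower()))

if __name__ == "__main__":
    freqs = word_frequencies("corpus.txt")            # hypothetical input file
    print(freqs.most_common(10))                      # ten most frequent words
    print(phrase_count("corpus.txt", "for example"))  # frequency of one phrase
```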

  2. Web Usage Mining Analysis of Federated Search Tools for Egyptian Scholars

    ERIC Educational Resources Information Center

    Mohamed, Khaled A.; Hassan, Ahmed

    2008-01-01

    Purpose: This paper aims to examine the behaviour of the Egyptian scholars while accessing electronic resources through two federated search tools. The main purpose of this article is to provide guidance for federated search tool technicians and support teams about user issues, including the need for training. Design/methodology/approach: Log…

  3. The Theory of Planned Behaviour Applied to Search Engines as a Learning Tool

    ERIC Educational Resources Information Center

    Liaw, Shu-Sheng

    2004-01-01

    Search engines have been developed for helping learners to seek online information. Based on theory of planned behaviour approach, this research intends to investigate the behaviour of using search engines as a learning tool. After factor analysis, the results suggest that perceived satisfaction of search engine, search engines as an information…

  4. The EMBL-EBI bioinformatics web and programmatic tools framework.

    PubMed

    Li, Weizhong; Cowley, Andrew; Uludag, Mahmut; Gur, Tamer; McWilliam, Hamish; Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Lopez, Rodrigo

    2015-07-01

    Since 2009 the EMBL-EBI Job Dispatcher framework has provided free access to a range of mainstream sequence analysis applications. These include sequence similarity search services (https://www.ebi.ac.uk/Tools/sss/) such as BLAST, FASTA and PSI-Search, multiple sequence alignment tools (https://www.ebi.ac.uk/Tools/msa/) such as Clustal Omega, MAFFT and T-Coffee, and other sequence analysis tools (https://www.ebi.ac.uk/Tools/pfa/) such as InterProScan. Through these services users can search mainstream sequence databases such as ENA, UniProt and Ensembl Genomes, utilising a uniform web interface or systematically through Web Services interfaces (https://www.ebi.ac.uk/Tools/webservices/) using common programming languages, and obtain enriched results with novel visualisations. Integration with EBI Search (https://www.ebi.ac.uk/ebisearch/) and the dbfetch retrieval service (https://www.ebi.ac.uk/Tools/dbfetch/) further expands the usefulness of the framework. New tools and updates such as NCBI BLAST+, InterProScan 5 and PfamScan, new categories such as RNA analysis tools (https://www.ebi.ac.uk/Tools/rna/), new databases such as ENA non-coding, WormBase ParaSite, Pfam and Rfam, and new workflow methods, together with the retirement of deprecated services, ensure that the framework remains relevant to today's biological community. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
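
    For readers who want to try the dbfetch retrieval service mentioned above, here is a minimal Python sketch. The query parameters (db, id, format, style) follow the commonly documented dbfetch URL pattern, but treat the exact parameter values and database names as assumptions and check the current service documentation.

```python
# A minimal sketch of fetching a sequence record through the EBI dbfetch service.
# The parameter names below follow the commonly documented dbfetch URL pattern;
# verify them against the current service documentation before relying on them.
import urllib.parse
import urllib.request

def dbfetch(db, entry_id, fmt="fasta"):
    params = urllib.parse.urlencode(
        {"db": db, "id": entry_id, "format": fmt, "style": "raw"}
    )
    url = "https://www.ebi.ac.uk/Tools/dbfetch/dbfetch?" + params
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    # Example UniProt accession; any valid accession would do.
    print(dbfetch("uniprotkb", "P05067"))
```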

  5. Data Albums: An Event Driven Search, Aggregation and Curation Tool for Earth Science

    NASA Technical Reports Server (NTRS)

    Ramachandran, Rahul; Kulkarni, Ajinkya; Maskey, Manil; Bakare, Rohan; Basyal, Sabin; Li, Xiang; Flynn, Shannon

    2014-01-01

    Approaches used in Earth science research such as case study analysis and climatology studies involve discovering and gathering diverse data sets and information to support the research goals. To gather relevant data and information for case studies and climatology analysis is both tedious and time-consuming. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. In cases where researchers are interested in studying a significant event, they have to manually assemble a variety of datasets relevant to it by searching the different distributed data systems. This paper presents a specialized search, aggregation and curation tool for Earth science to address these challenges. The search tool automatically creates curated 'Data Albums', aggregated collections of information related to a specific event, containing links to relevant data files [granules] from different instruments, tools and services for visualization and analysis, and information about the event contained in news reports, images or videos to supplement research analysis. Curation in the tool is driven via an ontology-based relevancy ranking algorithm to filter out non-relevant information and data.
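
    The ontology-based relevancy ranking is described only at a high level, so the following is a toy sketch of the general idea (score documents by weighted ontology terms and drop those below a threshold), not the algorithm used in the Data Albums tool; the terms, weights, and threshold are invented.

```python
# Toy sketch of ontology-driven relevancy filtering: score each candidate document
# by the weighted event-related terms it mentions and keep those above a threshold.
# Term list, weights, and threshold are invented for illustration only.
EVENT_TERMS = {"hurricane": 3.0, "precipitation": 2.0, "wind speed": 2.0, "landfall": 1.5}

def relevancy(text, terms=EVENT_TERMS):
    text = text.lower()
    return sum(weight for term, weight in terms.items() if term in text)

def curate(documents, threshold=2.0):
    """Keep only documents whose weighted term score meets the threshold."""
    return [doc for doc in documents if relevancy(doc) >= threshold]

if __name__ == "__main__":
    docs = ["Hurricane made landfall with high wind speed.", "Local sports results."]
    print(curate(docs))  # only the first document survives the filter
```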

  6. Building and evaluating an informatics tool to facilitate analysis of a biomedical literature search service in an academic medical center library.

    PubMed

    Hinton, Elizabeth G; Oelschlegel, Sandra; Vaughn, Cynthia J; Lindsay, J Michael; Hurst, Sachiko M; Earl, Martha

    2013-01-01

    This study utilizes an informatics tool to analyze a robust literature search service in an academic medical center library. Structured interviews with librarians were conducted focusing on the benefits of such a tool, expectations for performance, and visual layout preferences. The resulting application utilizes Microsoft SQL Server and .Net Framework 3.5 technologies, allowing for the use of a web interface. Customer tables and MeSH terms are included. The National Library of Medicine MeSH database and entry terms for each heading are incorporated, resulting in functionality similar to searching the MeSH database through PubMed. Data reports will facilitate analysis of the search service.

  7. Evaluation of an open source tool for indexing and searching enterprise radiology and pathology reports

    NASA Astrophysics Data System (ADS)

    Kim, Woojin; Boonn, William

    2010-03-01

    Data mining of existing radiology and pathology reports within an enterprise health system can be used for clinical decision support, research, education, as well as operational analyses. In our health system, the database of radiology and pathology reports exceeds 13 million entries combined. We are building a web-based tool to allow search and data analysis of these combined databases using freely available and open source tools. This presentation will compare performance of an open source full-text indexing tool to MySQL's full-text indexing and searching and describe implementation procedures to incorporate these capabilities into a radiology-pathology search engine.
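
    As a stand-in for the full-text indexing approaches being compared, the sketch below uses SQLite's FTS5 module (not the open source indexer or MySQL full-text search evaluated in the abstract) to index and query a couple of invented report texts.

```python
# Full-text indexing illustration using SQLite FTS5 (requires an SQLite build with
# FTS5 enabled, which most Python distributions include). This stands in for the
# indexing tools compared in the abstract; the report texts are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE reports USING fts5(accession, body)")
conn.executemany(
    "INSERT INTO reports (accession, body) VALUES (?, ?)",
    [
        ("RAD-001", "CT chest shows a small pulmonary nodule in the right upper lobe."),
        ("PATH-002", "Biopsy demonstrates adenocarcinoma with clear margins."),
    ],
)

# FTS5 MATCH queries use implicit AND between terms; rank orders by relevance.
for row in conn.execute(
    "SELECT accession FROM reports WHERE reports MATCH ? ORDER BY rank",
    ("pulmonary nodule",),
):
    print(row[0])
```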

  8. Improving e-book access via a library-developed full-text search tool.

    PubMed

    Foust, Jill E; Bergen, Phillip; Maxeiner, Gretchen L; Pawlowski, Peter N

    2007-01-01

    This paper reports on the development of a tool for searching the contents of licensed full-text electronic book (e-book) collections. The Health Sciences Library System (HSLS) provides services to the University of Pittsburgh's medical programs and large academic health system. The HSLS has developed an innovative tool for federated searching of its e-book collections. Built using the XML-based Vivísimo development environment, the tool enables a user to perform a full-text search of over 2,500 titles from the library's seven most highly used e-book collections. From a single "Google-style" query, results are returned as an integrated set of links pointing directly to relevant sections of the full text. Results are also grouped into categories that enable more precise retrieval without reformulation of the search. A heuristic evaluation demonstrated the usability of the tool and a web server log analysis indicated an acceptable level of usage. Based on its success, there are plans to increase the number of online book collections searched. This library's first foray into federated searching has produced an effective tool for searching across large collections of full-text e-books and has provided a good foundation for the development of other library-based federated searching products.

  9. Improving e-book access via a library-developed full-text search tool*

    PubMed Central

    Foust, Jill E.; Bergen, Phillip; Maxeiner, Gretchen L.; Pawlowski, Peter N.

    2007-01-01

    Purpose: This paper reports on the development of a tool for searching the contents of licensed full-text electronic book (e-book) collections. Setting: The Health Sciences Library System (HSLS) provides services to the University of Pittsburgh's medical programs and large academic health system. Brief Description: The HSLS has developed an innovative tool for federated searching of its e-book collections. Built using the XML-based Vivísimo development environment, the tool enables a user to perform a full-text search of over 2,500 titles from the library's seven most highly used e-book collections. From a single “Google-style” query, results are returned as an integrated set of links pointing directly to relevant sections of the full text. Results are also grouped into categories that enable more precise retrieval without reformulation of the search. Results/Evaluation: A heuristic evaluation demonstrated the usability of the tool and a web server log analysis indicated an acceptable level of usage. Based on its success, there are plans to increase the number of online book collections searched. Conclusion: This library's first foray into federated searching has produced an effective tool for searching across large collections of full-text e-books and has provided a good foundation for the development of other library-based federated searching products. PMID:17252065

  10. In Silico PCR Tools for a Fast Primer, Probe, and Advanced Searching.

    PubMed

    Kalendar, Ruslan; Muterko, Alexandr; Shamekova, Malika; Zhambakin, Kabyl

    2017-01-01

    The polymerase chain reaction (PCR) is fundamental to molecular biology and is the most important practical molecular technique for the research laboratory. The principle of this technique has been further used and applied in plenty of other simple or complex nucleic acid amplification technologies (NAAT). In parallel to laboratory "wet bench" experiments for nucleic acid amplification technologies, in silico or virtual (bioinformatics) approaches have been developed, among which is in silico PCR analysis. In silico NAAT analysis is a useful and efficient complementary method to ensure the specificity of primers or probes for an extensive range of PCR applications, including homology gene discovery, molecular diagnosis, DNA fingerprinting, and repeat searching. Predicting sensitivity and specificity of primers and probes requires a search to determine whether they match a database with an optimal number of mismatches, similarity, and stability. In the development of in silico bioinformatics tools for nucleic acid amplification technologies, the prospects for the development of new NAAT or similar approaches should be taken into account, including forward-looking and comprehensive analysis that is not limited to only one PCR technique variant. The software FastPCR and the online Java web tool are integrated tools for in silico PCR of linear and circular DNA, multiple primer or probe searches in large or small databases and for advanced search. These tools are suitable for processing of batch files that are essential for automation when working with large amounts of data. The FastPCR software is available for download at http://primerdigital.com/fastpcr.html and the online Java version at http://primerdigital.com/tools/pcr.html .
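
    The core operation behind in silico PCR searching, locating primer binding sites while tolerating a limited number of mismatches, can be illustrated with a naive sketch like the one below; real tools such as FastPCR use far more sophisticated similarity and stability models, and the sequences here are invented.

```python
# Naive sketch of in silico primer site searching: slide a primer along a template
# and report positions where it matches with at most `max_mismatches` mismatches.
# Sequences are invented; real tools also model melting temperature and stability.
def primer_sites(template, primer, max_mismatches=1):
    hits = []
    for i in range(len(template) - len(primer) + 1):
        window = template[i : i + len(primer)]
        mismatches = sum(a != b for a, b in zip(primer, window))
        if mismatches <= max_mismatches:
            hits.append((i, mismatches))
    return hits

if __name__ == "__main__":
    print(primer_sites("ACGTTGCATGACGTTACG", "TGCATG"))     # exact hit at offset 4
    print(primer_sites("ACGTTGCATGACGTTACG", "TGCTTG", 1))  # hit with one mismatch
```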

  11. Analysis of semantic search within the domains of uncertainty: using Keyword Effectiveness Indexing as an evaluation tool.

    PubMed

    Lorence, Daniel; Abraham, Joanna

    2006-01-01

    Medical and health-related searches pose a special case of risk when using the web as an information resource. Uninsured consumers, lacking access to a trained provider, will often rely on information from the internet for self-diagnosis and treatment. In areas where treatments are uncertain or controversial, most consumers lack the knowledge to make an informed decision. This exploratory technology assessment examines the use of Keyword Effectiveness Indexing (KEI) analysis as a potential tool for profiling information search and keyword retrieval patterns. Results demonstrate that the KEI methodology can be useful in identifying e-health search patterns, but is limited by semantic or text-based web environments.

  12. Supervised learning of tools for content-based search of image databases

    NASA Astrophysics Data System (ADS)

    Delanoy, Richard L.

    1996-03-01

    A computer environment, called the Toolkit for Image Mining (TIM), is being developed with the goal of enabling users with diverse interests and varied computer skills to create search tools for content-based image retrieval and other pattern matching tasks. Search tools are generated using a simple paradigm of supervised learning that is based on the user pointing at mistakes of classification made by the current search tool. As mistakes are identified, a learning algorithm uses the identified mistakes to build up a model of the user's intentions, construct a new search tool, apply the search tool to a test image, display the match results as feedback to the user, and accept new inputs from the user. Search tools are constructed in the form of functional templates, which are generalized matched filters capable of knowledge-based image processing. The ability of this system to learn the user's intentions from experience contrasts with other existing approaches to content-based image retrieval that base searches on the characteristics of a single input example or on a predefined and semantically-constrained textual query. Currently, TIM is capable of learning spectral and textural patterns, but should be adaptable to the learning of shapes, as well. Possible applications of TIM include not only content-based image retrieval, but also quantitative image analysis, the generation of metadata for annotating images, data prioritization or data reduction in bandwidth-limited situations, and the construction of components for larger, more complex computer vision algorithms.

  13. GWFASTA: server for FASTA search in eukaryotic and microbial genomes.

    PubMed

    Issac, Biju; Raghava, G P S

    2002-09-01

    Similarity searches are a powerful method for solving important biological problems such as database scanning, evolutionary studies, gene prediction, and protein structure prediction. FASTA is a widely used sequence comparison tool for rapid database scanning. Here we describe the GWFASTA server that was developed to assist the FASTA user in similarity searches against partially and/or completely sequenced genomes. GWFASTA consists of more than 60 microbial genomes, eight eukaryote genomes, and proteomes of annotated genomes. In fact, it provides the maximum number of databases for similarity searching from a single platform. GWFASTA allows the submission of more than one sequence as a single query for a FASTA search. It also provides integrated post-processing of FASTA output, including compositional analysis of proteins, multiple sequence alignment, and phylogenetic analysis. Furthermore, it summarizes the search results organism-wise for prokaryotes and chromosome-wise for eukaryotes. Thus, the integration of different tools for sequence analyses makes GWFASTA a powerful tool for biologists.

  14. Learn by Yourself: The Self-Learning Tools for Qualitative Analysis Software Packages

    ERIC Educational Resources Information Center

    Freitas, Fábio; Ribeiro, Jaime; Brandão, Catarina; Reis, Luís Paulo; de Souza, Francislê Neri; Costa, António Pedro

    2017-01-01

    Computer Assisted Qualitative Data Analysis Software (CAQDAS) are tools that help researchers to develop qualitative research projects. These software packages help the users with tasks such as transcription analysis, coding and text interpretation, writing and annotation, content search and analysis, recursive abstraction, grounded theory…

  15. Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments

    DOE PAGES

    Yim, Won Cheol; Cushman, John C.

    2017-07-22

    Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs performs searches very rapidly, it has the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.
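
    The query sequence distribution approach can be illustrated with a simplified sketch: split a multi-sequence FASTA query into chunks so that each chunk can be submitted as its own BLAST job. This is an illustration of the idea, not the DCBLAST implementation; file names are hypothetical.

```python
# Simplified sketch of query distribution for parallel BLAST: split a multi-sequence
# FASTA file into N chunks, one per worker/node. Not the DCBLAST implementation;
# "queries.fasta" and the chunk file names are hypothetical.
def split_fasta(path, n_chunks):
    with open(path, encoding="utf-8") as fh:
        records = [">" + rec for rec in fh.read().split(">") if rec.strip()]
    chunks = [records[i::n_chunks] for i in range(n_chunks)]  # round-robin split
    out_files = []
    for idx, chunk in enumerate(chunks):
        out = f"query_chunk_{idx:03d}.fasta"
        with open(out, "w", encoding="utf-8") as fh:
            fh.writelines(chunk)
        out_files.append(out)
    return out_files

if __name__ == "__main__":
    for chunk_file in split_fasta("queries.fasta", 8):
        # Each chunk would then be handed to a scheduler job running, e.g.,
        # `blastp -query <chunk> -db nr -out <chunk>.tsv -outfmt 6`.
        print(chunk_file)
```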

  16. Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yim, Won Cheol; Cushman, John C.

    Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs performs searches very rapidly, it has the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.

  17. YersiniaBase: a genomic resource and analysis platform for comparative analysis of Yersinia.

    PubMed

    Tan, Shi Yang; Dutta, Avirup; Jakubovics, Nicholas S; Ang, Mia Yang; Siow, Cheuk Chuen; Mutha, Naresh Vr; Heydari, Hamed; Wee, Wei Yee; Wong, Guat Jah; Choo, Siew Woh

    2015-01-16

    Yersinia is a genus of Gram-negative bacteria that includes serious pathogens such as Yersinia pestis, which causes plague, Yersinia pseudotuberculosis, and Yersinia enterocolitica. The remaining species are generally considered non-pathogenic to humans, although there is evidence that at least some of these species can cause occasional infections using distinct mechanisms from the more pathogenic species. With the advances in sequencing technologies, many genomes of Yersinia have been sequenced. However, there is currently no specialized platform to hold the rapidly-growing Yersinia genomic data and to provide analysis tools particularly for comparative analyses, which are required to provide improved insights into their biology, evolution and pathogenicity. To facilitate the ongoing and future research of Yersinia, especially those generally considered non-pathogenic species, a well-defined repository and analysis platform is needed to hold the Yersinia genomic data and analysis tools for the Yersinia research community. Hence, we have developed the YersiniaBase, a robust and user-friendly Yersinia resource and analysis platform for the analysis of Yersinia genomic data. YersiniaBase has a total of twelve species and 232 genome sequences, of which the majority are Yersinia pestis. In order to smooth the process of searching genomic data in a large database, we implemented an Asynchronous JavaScript and XML (AJAX)-based real-time searching system in YersiniaBase. Besides incorporating existing tools, which include JavaScript-based genome browser (JBrowse) and Basic Local Alignment Search Tool (BLAST), YersiniaBase also has in-house developed tools: (1) Pairwise Genome Comparison tool (PGC) for comparing two user-selected genomes; (2) Pathogenomics Profiling Tool (PathoProT) for comparative pathogenomics analysis of Yersinia genomes; (3) YersiniaTree for constructing phylogenetic trees of Yersinia. We ran analyses based on the tools and genomic data in YersiniaBase and the preliminary results showed differences in virulence genes found in Yersinia pestis and Yersinia pseudotuberculosis compared to other Yersinia species, and differences between Yersinia enterocolitica subsp. enterocolitica and Yersinia enterocolitica subsp. palearctica. YersiniaBase offers free access to a wide range of genomic data and analysis tools for the analysis of Yersinia. YersiniaBase can be accessed at http://yersinia.um.edu.my .

  18. Scientific Platform as a Service - Tools and solutions for efficient access to and analysis of oceanographic data

    NASA Astrophysics Data System (ADS)

    Vines, Aleksander; Hansen, Morten W.; Korosov, Anton

    2017-04-01

    Existing international and Norwegian infrastructure projects, e.g., NorDataNet, NMDC and NORMAP, provide open data access through the OPeNDAP protocol following the conventions for CF (Climate and Forecast) metadata, designed to promote the processing and sharing of files created with the NetCDF application programming interface (API). This approach is now also being implemented in the Norwegian Sentinel Data Hub (satellittdata.no) to provide satellite EO data to the user community. Simultaneously with providing simplified and unified data access, these projects also seek to use and establish common standards for use and discovery metadata. This then allows development of standardized tools for data search and (subset) streaming over the internet to perform actual scientific analysis. A combination of software tools, which we call a Scientific Platform as a Service (SPaaS), will take advantage of these opportunities to harmonize and streamline the search, retrieval and analysis of integrated satellite and auxiliary observations of the oceans in a seamless system. The SPaaS is a cloud solution for integration of analysis tools with scientific datasets via an API. The core part of the SPaaS is a distributed metadata catalog to store granular metadata describing the structure, location and content of available satellite, model, and in situ datasets. The analysis tools include software for visualization (also online), interactive in-depth analysis, and server-based processing chains. The API conveys search requests between system nodes (i.e., interactive and server tools) and provides easy access to the metadata catalog, data repositories, and the tools. The SPaaS components are integrated in virtual machines, whose provisioning and deployment are automated using existing state-of-the-art open-source tools (e.g., Vagrant, Ansible, Docker). The open-source code for scientific tools and virtual machine configurations is under version control at https://github.com/nansencenter/, and is coupled to an online continuous integration system (e.g., Travis CI).

  19. Nutrition screening tools: an analysis of the evidence.

    PubMed

    Skipper, Annalynn; Ferguson, Maree; Thompson, Kyle; Castellanos, Victoria H; Porcari, Judy

    2012-05-01

    In response to questions about tools for nutrition screening, an evidence analysis project was developed to identify the most valid and reliable nutrition screening tools for use in acute care and hospital-based ambulatory care settings. An oversight group defined nutrition screening and literature search criteria. A trained analyst conducted structured searches of the literature for studies of nutrition screening tools according to predetermined criteria. Eleven nutrition screening tools designed to detect undernutrition in patients in acute care and hospital-based ambulatory care were identified. Trained analysts evaluated articles for quality using criteria specified by the American Dietetic Association's Evidence Analysis Library. Members of the oversight group assigned quality grades to the tools based on the quality of the supporting evidence, including reliability and validity data. One tool, the NRS-2002, received a grade I, and 4 tools (the Simple Two-Part Tool, the Mini-Nutritional Assessment-Short Form [MNA-SF], the Malnutrition Screening Tool [MST], and the Malnutrition Universal Screening Tool [MUST]) received a grade II. The MST was the only tool shown to be both valid and reliable for identifying undernutrition in the settings studied. Thus, validated nutrition screening tools that are simple and easy to use are available for application in acute care and hospital-based ambulatory care settings.

  20. Finding collaborators: toward interactive discovery tools for research network systems.

    PubMed

    Borromeo, Charles D; Schleyer, Titus K; Becich, Michael J; Hochheiser, Harry

    2014-11-04

    Research networking systems hold great promise for helping biomedical scientists identify collaborators with the expertise needed to build interdisciplinary teams. Although efforts to date have focused primarily on collecting and aggregating information, less attention has been paid to the design of end-user tools for using these collections to identify collaborators. To be effective, collaborator search tools must provide researchers with easy access to information relevant to their collaboration needs. The aim was to study user requirements and preferences for research networking system collaborator search tools and to design and evaluate a functional prototype. Paper prototypes exploring possible interface designs were presented to 18 participants in semistructured interviews aimed at eliciting collaborator search needs. Interview data were coded and analyzed to identify recurrent themes and related software requirements. Analysis results and elements from paper prototypes were used to design a Web-based prototype using the D3 JavaScript library and VIVO data. Preliminary usability studies asked 20 participants to use the tool and to provide feedback through semistructured interviews and completion of the System Usability Scale (SUS). Initial interviews identified consensus regarding several novel requirements for collaborator search tools, including chronological display of publication and research funding information, the need for conjunctive keyword searches, and tools for tracking candidate collaborators. Participant responses were positive (SUS score: mean 76.4%, SD 13.9). Opportunities for improving the interface design were identified. Interactive, timeline-based displays that support comparison of researcher productivity in funding and publication have the potential to effectively support searching for collaborators. Further refinement and longitudinal studies may be needed to better understand the implications of collaborator search tools for researcher workflows.

  1. Finding Collaborators: Toward Interactive Discovery Tools for Research Network Systems

    PubMed Central

    Borromeo, Charles D; Schleyer, Titus K; Becich, Michael J; Hochheiser, Harry

    2014-01-01

    Background Research networking systems hold great promise for helping biomedical scientists identify collaborators with the expertise needed to build interdisciplinary teams. Although efforts to date have focused primarily on collecting and aggregating information, less attention has been paid to the design of end-user tools for using these collections to identify collaborators. To be effective, collaborator search tools must provide researchers with easy access to information relevant to their collaboration needs. Objective The aim was to study user requirements and preferences for research networking system collaborator search tools and to design and evaluate a functional prototype. Methods Paper prototypes exploring possible interface designs were presented to 18 participants in semistructured interviews aimed at eliciting collaborator search needs. Interview data were coded and analyzed to identify recurrent themes and related software requirements. Analysis results and elements from paper prototypes were used to design a Web-based prototype using the D3 JavaScript library and VIVO data. Preliminary usability studies asked 20 participants to use the tool and to provide feedback through semistructured interviews and completion of the System Usability Scale (SUS). Results Initial interviews identified consensus regarding several novel requirements for collaborator search tools, including chronological display of publication and research funding information, the need for conjunctive keyword searches, and tools for tracking candidate collaborators. Participant responses were positive (SUS score: mean 76.4%, SD 13.9). Opportunities for improving the interface design were identified. Conclusions Interactive, timeline-based displays that support comparison of researcher productivity in funding and publication have the potential to effectively support searching for collaborators. Further refinement and longitudinal studies may be needed to better understand the implications of collaborator search tools for researcher workflows. PMID:25370463

  2. BCM Search Launcher--an integrated interface to molecular biology data base search and analysis services available on the World Wide Web.

    PubMed

    Smith, R F; Wiese, B A; Wojzynski, M K; Davison, D B; Worley, K C

    1996-05-01

    The BCM Search Launcher is an integrated set of World Wide Web (WWW) pages that organize molecular biology-related search and analysis services available on the WWW by function, and provide a single point of entry for related searches. The Protein Sequence Search Page, for example, provides a single sequence entry form for submitting sequences to WWW servers that offer remote access to a variety of different protein sequence search tools, including BLAST, FASTA, Smith-Waterman, BEAUTY, PROSITE, and BLOCKS searches. Other Launch pages provide access to (1) nucleic acid sequence searches, (2) multiple and pair-wise sequence alignments, (3) gene feature searches, (4) protein secondary structure prediction, and (5) miscellaneous sequence utilities (e.g., six-frame translation). The BCM Search Launcher also provides a mechanism to extend the utility of other WWW services by adding supplementary hypertext links to results returned by remote servers. For example, links to the NCBI's Entrez data base and to the Sequence Retrieval System (SRS) are added to search results returned by the NCBI's WWW BLAST server. These links provide easy access to auxiliary information, such as Medline abstracts, that can be extremely helpful when analyzing BLAST data base hits. For new or infrequent users of sequence data base search tools, we have preset the default search parameters to provide the most informative first-pass sequence analysis possible. We have also developed a batch client interface for Unix and Macintosh computers that allows multiple input sequences to be searched automatically as a background task, with the results returned as individual HTML documents directly to the user's system. The BCM Search Launcher and batch client are available on the WWW at URL http://gc.bcm.tmc.edu:8088/search-launcher.html.

  3. Search Analytics: Automated Learning, Analysis, and Search with Open Source

    NASA Astrophysics Data System (ADS)

    Hundman, K.; Mattmann, C. A.; Hyon, J.; Ramirez, P.

    2016-12-01

    The sheer volume of unstructured scientific data makes comprehensive human analysis impossible, resulting in missed opportunities to identify relationships, trends, gaps, and outliers. As the open source community continues to grow, tools like Apache Tika, Apache Solr, Stanford's DeepDive, and Data-Driven Documents (D3) can help address this challenge. With a focus on journal publications and conference abstracts often in the form of PDF and Microsoft Office documents, we've initiated an exploratory NASA Advanced Concepts project aiming to use the aforementioned open source text analytics tools to build a data-driven justification for the HyspIRI Decadal Survey mission. We call this capability Search Analytics, and it fuses and augments these open source tools to enable the automatic discovery and extraction of salient information. In the case of HyspIRI, a hyperspectral infrared imager mission, key findings resulted from the extractions and visualizations of relationships from thousands of unstructured scientific documents. The relationships include links between satellites (e.g. Landsat 8), domain-specific measurements (e.g. spectral coverage) and subjects (e.g. invasive species). Using the above open source tools, Search Analytics mined and characterized a corpus of information that would be infeasible for a human to process. More broadly, Search Analytics offers insights into various scientific and commercial applications enabled through missions and instrumentation with specific technical capabilities. For example, the following phrases were extracted in close proximity within a publication: "In this study, hyperspectral images…with high spatial resolution (1 m) were analyzed to detect cutleaf teasel in two areas. …Classification of cutleaf teasel reached a users accuracy of 82 to 84%." Without reading a single paper, we can use Search Analytics to automatically identify that a 1 m spatial resolution provides a cutleaf teasel detection user's accuracy of 82-84%, which could have tangible, direct downstream implications for crop protection. Automatically assimilating this information expedites and supplements human analysis, and, ultimately, Search Analytics and its foundation of open source tools will result in more efficient scientific investment and research.
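
    A toy version of the proximity-based extraction described above is sketched below: regular expressions pull the resolution and accuracy figures out of free text and pair them when they occur close together. The patterns and sample sentence are illustrative and are not taken from the project's actual Tika/Solr/DeepDive pipeline.

```python
# Toy proximity extraction: find a spatial-resolution phrase and an accuracy phrase
# in free text and pair them when they occur near each other. Patterns and the
# sample sentence are illustrative, not the project's actual pipeline.
import re

TEXT = ("hyperspectral images with high spatial resolution (1 m) were analyzed ... "
        "Classification of cutleaf teasel reached a user's accuracy of 82 to 84%.")

RESOLUTION = re.compile(r"spatial resolution \((\d+(?:\.\d+)?) ?m\)")
ACCURACY = re.compile(r"accuracy of (\d+)(?: to (\d+))?%")

res = RESOLUTION.search(TEXT)
acc = ACCURACY.search(TEXT)
if res and acc and abs(res.start() - acc.start()) < 200:   # crude proximity window
    print(f"resolution={res.group(1)} m -> accuracy={acc.group(1)}-{acc.group(2)}%")
```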

  4. Exploring Gendered Notions: Gender, Job Hunting and Web Searches

    NASA Astrophysics Data System (ADS)

    Martey, R. M.

    Based on analysis of a series of interviews, this chapter suggests that in looking for jobs online, women confront gendered notions of the Internet as well as gendered notions of the jobs themselves. It argues that the social and cultural contexts of both the search tools and the search tasks should be considered in exploring how Web-based technologies serve women in a job search. For these women, the opportunities and limitations of online job-search tools were intimately related to their personal and social needs, especially needs for part-time work, maternity benefits, and career advancement. Although job-seeking services such as Monster.com were used frequently by most of these women, search services did not completely fulfill all their informational needs, and became an — often frustrating — initial starting point for a job search rather than an end-point.

  5. Patscanui: an intuitive web interface for searching patterns in DNA and protein data.

    PubMed

    Blin, Kai; Wohlleben, Wolfgang; Weber, Tilmann

    2018-05-02

    Patterns in biological sequences frequently signify interesting features in the underlying molecule. Many tools exist to search for well-known patterns. Less support is available for exploratory analysis, where no well-defined patterns are known yet. PatScanUI (https://patscan.secondarymetabolites.org/) provides a highly interactive web interface to the powerful generic pattern search tool PatScan. The complex PatScan patterns are created in a drag-and-drop aware interface, allowing researchers to do rapid prototyping of the often complicated patterns useful for identifying features of interest.

  6. DoOPSearch: a web-based tool for finding and analysing common conserved motifs in the promoter regions of different chordate and plant genes

    PubMed Central

    Sebestyén, Endre; Nagy, Tibor; Suhai, Sándor; Barta, Endre

    2009-01-01

    Background The comparative genomic analysis of a large number of orthologous promoter regions of the chordate and plant genes from the DoOP databases shows thousands of conserved motifs. Most of these motifs differ from any known transcription factor binding site (TFBS). To identify common conserved motifs, we need a specific tool to be able to search amongst them. Since conserved motifs from the DoOP databases are linked to genes, the result of such a search can give a list of genes that are potentially regulated by the same transcription factor(s). Results We have developed a new tool called DoOPSearch for the analysis of the conserved motifs in the promoter regions of chordate or plant genes. We used the orthologous promoters of the DoOP database to extract thousands of conserved motifs from different taxonomic groups. The advantage of this approach is that different sets of conserved motifs might be found depending on how broad the taxonomic coverage of the underlying orthologous promoter sequence collection is (consider e.g. primates vs. mammals or Brassicaceae vs. Viridiplantae). The DoOPSearch tool allows the users to search these motif collections or the promoter regions of DoOP with user supplied query sequences or any of the conserved motifs from the DoOP database. To find overrepresented gene ontologies, the gene lists obtained can be analysed further using a modified version of the GeneMerge program. Conclusion We present here a comparative genomics based promoter analysis tool. Our system is based on a unique collection of conserved promoter motifs characteristic of different taxonomic groups. We offer both a command line and a web-based tool for searching in these motif collections using user specified queries. These can be either short promoter sequences or consensus sequences of known transcription factor binding sites. The GeneMerge analysis of the search results allows the user to identify statistically overrepresented Gene Ontology terms that might provide a clue on the function of the motifs and genes. PMID:19534755

  7. Epsilon-Q: An Automated Analyzer Interface for Mass Spectral Library Search and Label-Free Protein Quantification.

    PubMed

    Cho, Jin-Young; Lee, Hyoung-Joo; Jeong, Seul-Ki; Paik, Young-Ki

    2017-12-01

    Mass spectrometry (MS) is a widely used proteome analysis tool for biomedical science. In an MS-based bottom-up proteomic approach to protein identification, sequence database (DB) searching has been routinely used because of its simplicity and convenience. However, searching a sequence DB with multiple variable modification options can increase processing time and false-positive errors in large and complicated MS data sets. Spectral library searching is an alternative solution, avoiding the limitations of sequence DB searching and allowing the detection of more peptides with high sensitivity. Unfortunately, this technique has less proteome coverage, resulting in limitations in the detection of novel and whole peptide sequences in biological samples. To solve these problems, we previously developed the "Combo-Spec Search" method, which manually uses multiple reference and simulated spectral library searches to analyze whole proteomes in a biological sample. In this study, we have developed a new analytical interface tool called "Epsilon-Q" to enhance the functions of both the Combo-Spec Search method and label-free protein quantification. Epsilon-Q automatically performs multiple spectral library searches, class-specific false-discovery rate control, and result integration. It has a user-friendly graphical interface and demonstrates good performance in identifying and quantifying proteins by supporting standard MS data formats and spectrum-to-spectrum matching powered by SpectraST. Furthermore, when the Epsilon-Q interface is combined with the Combo-Spec search method, called the Epsilon-Q system, it shows a synergistic function by outperforming other sequence DB search engines for identifying and quantifying low-abundance proteins in biological samples. The Epsilon-Q system can be a versatile tool for comparative proteome analysis based on multiple spectral libraries and label-free quantification.

  8. Efficient bibliographic searches on allergy using ISI databases.

    PubMed

    Sáez Gómez, J M; Annan, J W; Negro Alvarez, J M; Guillen-Grima, F; Bozzola, C M; Ivancevich, J C; Aguinaga Ontoso, E

    2008-01-01

    The aim of this article is to provide an introduction to using databases from the Thomson ISI Web of Knowledge, with special reference to Citation Indexes as an analysis tool for publications, and also to explain the meaning of the well-known Impact Factor. We present the partially modified new Consultation Interface to enhance information search routines of these databases. It introduces distinctive methods for bibliographic searching, including the correct application of analysis tools, paying particular attention to Journal Citation Reports and Impact Factor. We finish this article with comments on the consequences of using the Impact Factor as a quality indicator for the assessment of journals and publications, and how to ensure measures for indexing in the Thomson ISI Databases.

  9. Internet search trends analysis tools can provide real-time data on kidney stone disease in the United States.

    PubMed

    Willard, Scott D; Nguyen, Mike M

    2013-01-01

    To evaluate the utility of using Internet search trends data to estimate kidney stone occurrence and understand the priorities of patients with kidney stones. Internet search trends data represent a unique resource for monitoring population self-reported illness and health information-seeking behavior. The Google Insights for Search analysis tool was used to study searches related to kidney stones, with each search term returning a search volume index (SVI) according to the search frequency relative to the total search volume. SVIs for the term, "kidney stones," were compiled by location and time parameters and compared with the published weather and stone prevalence data. Linear regression analysis was performed to determine the association of the search interest score with known epidemiologic variations in kidney stone disease, including latitude, temperature, season, and state. The frequency of the related search terms was categorized by theme and qualitatively analyzed. The SVI correlated significantly with established kidney stone epidemiologic predictors. The SVI correlated with the state latitude (R-squared=0.25; P<.001), the state mean annual temperature (R-squared=0.24; P<.001), and state combined sex prevalence (R-squared=0.25; P<.001). Female prevalence correlated more strongly than did male prevalence (R-squared=0.37; P<.001, and R-squared=0.17; P=.003, respectively). The national SVI correlated strongly with the average U.S. temperature by month (R-squared=0.54; P=.007). The search term ranking suggested that Internet users are most interested in the diagnosis, followed by etiology, infections, and treatment. Geographic and temporal variability in kidney stone disease appear to be accurately reflected in Internet search trends data. Internet search trends data might have broader applications for epidemiologic and urologic research. Copyright © 2013 Elsevier Inc. All rights reserved.
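
    The reported correlations are straightforward to reproduce in form: regress a search volume index against a predictor and compute R-squared. The sketch below uses invented numbers purely to show the calculation; the study itself used Google Insights for Search data.

```python
# Minimal sketch of the kind of correlation reported above: regress a search volume
# index (SVI) against mean annual temperature and report R-squared. All numbers are
# made up for illustration.
import numpy as np

temperature = np.array([45.0, 50.0, 55.0, 60.0, 65.0, 70.0])   # degrees F, hypothetical
svi = np.array([40.0, 46.0, 50.0, 58.0, 61.0, 70.0])           # hypothetical SVI values

slope, intercept = np.polyfit(temperature, svi, 1)
predicted = slope * temperature + intercept
ss_res = np.sum((svi - predicted) ** 2)
ss_tot = np.sum((svi - svi.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"slope={slope:.2f}, R-squared={r_squared:.2f}")
```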

  10. Analysis Tool Web Services from the EMBL-EBI.

    PubMed

    McWilliam, Hamish; Li, Weizhong; Uludag, Mahmut; Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Cowley, Andrew Peter; Lopez, Rodrigo

    2013-07-01

    Since 2004 the European Bioinformatics Institute (EMBL-EBI) has provided access to a wide range of databases and analysis tools via Web Services interfaces. This comprises services to search across the databases available from the EMBL-EBI and to explore the network of cross-references present in the data (e.g. EB-eye), services to retrieve entry data in various data formats and to access the data in specific fields (e.g. dbfetch), and analysis tool services, for example, sequence similarity search (e.g. FASTA and NCBI BLAST), multiple sequence alignment (e.g. Clustal Omega and MUSCLE), pairwise sequence alignment and protein functional analysis (e.g. InterProScan and Phobius). The REST/SOAP Web Services (http://www.ebi.ac.uk/Tools/webservices/) interfaces to these databases and tools allow their integration into other tools, applications, web sites, pipeline processes and analytical workflows. To get users started using the Web Services, sample clients are provided covering a range of programming languages and popular Web Service tool kits, and a brief guide to Web Services technologies, including a set of tutorials, is available for those wishing to learn more and develop their own clients. Users of the Web Services are informed of improvements and updates via a range of methods.
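
    A bare-bones sketch of calling one of these REST services (NCBI BLAST) is shown below. The run/status/result endpoint pattern and form parameters follow the commonly documented Job Dispatcher REST interface, but they should be treated as assumptions and checked against the current EMBL-EBI Web Services documentation before use.

```python
# Bare-bones REST client sketch for the NCBI BLAST service at EMBL-EBI. The endpoint
# paths and form parameters (run/status/result, email, program, stype, database,
# sequence) are assumptions based on the commonly documented REST pattern; verify
# against the current Web Services documentation.
import time
import urllib.parse
import urllib.request

BASE = "https://www.ebi.ac.uk/Tools/services/rest/ncbiblast"

def submit(sequence, email):
    data = urllib.parse.urlencode({
        "email": email, "program": "blastp", "stype": "protein",
        "database": "uniprotkb_swissprot", "sequence": sequence,
    }).encode()
    with urllib.request.urlopen(BASE + "/run", data=data, timeout=30) as resp:
        return resp.read().decode()            # job identifier

def wait_and_fetch(job_id):
    while True:
        with urllib.request.urlopen(f"{BASE}/status/{job_id}", timeout=30) as resp:
            status = resp.read().decode()
        if status != "RUNNING":                # e.g. FINISHED or an error state
            break
        time.sleep(5)
    with urllib.request.urlopen(f"{BASE}/result/{job_id}/out", timeout=30) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    job = submit("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "user@example.org")
    print(wait_and_fetch(job))
```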

  11. Analysis Tool Web Services from the EMBL-EBI

    PubMed Central

    McWilliam, Hamish; Li, Weizhong; Uludag, Mahmut; Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Cowley, Andrew Peter; Lopez, Rodrigo

    2013-01-01

    Since 2004 the European Bioinformatics Institute (EMBL-EBI) has provided access to a wide range of databases and analysis tools via Web Services interfaces. This comprises services to search across the databases available from the EMBL-EBI and to explore the network of cross-references present in the data (e.g. EB-eye), services to retrieve entry data in various data formats and to access the data in specific fields (e.g. dbfetch), and analysis tool services, for example, sequence similarity search (e.g. FASTA and NCBI BLAST), multiple sequence alignment (e.g. Clustal Omega and MUSCLE), pairwise sequence alignment and protein functional analysis (e.g. InterProScan and Phobius). The REST/SOAP Web Services (http://www.ebi.ac.uk/Tools/webservices/) interfaces to these databases and tools allow their integration into other tools, applications, web sites, pipeline processes and analytical workflows. To get users started using the Web Services, sample clients are provided covering a range of programming languages and popular Web Service tool kits, and a brief guide to Web Services technologies, including a set of tutorials, is available for those wishing to learn more and develop their own clients. Users of the Web Services are informed of improvements and updates via a range of methods. PMID:23671338

  12. Surging Seas Risk Finder: A Simple Search-Based Web Tool for Local Sea Level Rise Projections, Coastal Flood Risk Forecasts, and Inundation Exposure Analysis

    NASA Astrophysics Data System (ADS)

    Strauss, B.; Dodson, D.; Kulp, S. A.; Rizza, D. H.

    2016-12-01

    Surging Seas Risk Finder (riskfinder.org) is an online tool for accessing extensive local projections and analysis of sea level rise; coastal floods; and land, populations, contamination sources, and infrastructure and other assets that may be exposed to inundation. Risk Finder was first published in 2013 for Florida, New York and New Jersey, expanding to all states in the contiguous U.S. by 2016, when a major new version of the tool was released with a completely new interface. The revised tool was informed by hundreds of survey responses from and conversations with planners, local officials and other coastal stakeholders, plus consideration of modern best practices for responsive web design and user interfaces, and social science-based principles for science communication. Overarching design principles include simplicity and ease of navigation, leading to a landing page with Google-like sparsity and focus on search, and to an architecture based on search, so that each coastal zip code, city, county, state or other place type has its own webpage gathering all relevant analysis in modular, scrollable units. Millions of users have visited the Surging Seas suite of tools to date, and downloaded thousands of files, for stated purposes ranging from planning to business to education to personal decisions; and from institutions ranging from local to federal government agencies, to businesses, to NGOs, and to academia.

  13. Development and Validation of a Self-reported Questionnaire for Measuring Internet Search Dependence

    PubMed Central

    Wang, Yifan; Wu, Lingdan; Zhou, Hongli; Xu, Jiaojing; Dong, Guangheng

    2016-01-01

    Internet search has become the most common way that people deal with issues and problems in everyday life. The wide use of Internet search has largely changed the way people search for and store information. There is a growing interest in the impact of Internet search on users’ affect, cognition, and behavior. Thus, it is essential to develop a tool to measure the changes in psychological characteristics as a result of long-term use of Internet search. The aim of this study is to develop a Questionnaire on Internet Search Dependence (QISD) and test its reliability and validity. We first proposed a preliminary structure and items of the QISD based on literature review, supplemental investigations, and interviews. And then, we assessed the psychometric properties and explored the factor structure of the initial version via exploratory factor analysis (EFA). The EFA results indicated that four dimensions of the QISD were very reliable, i.e., habitual use of Internet search, withdrawal reaction, Internet search trust, and external storage under Internet search. Finally, we tested the factor solution obtained from EFA through confirmatory factor analysis (CFA). The results of CFA confirmed that the four dimensions model fits the data well. In all, this study suggests that the 12-item QISD is of high reliability and validity and can serve as a preliminary tool to measure the features of Internet search dependence. PMID:28066753

  14. Development and Validation of a Self-reported Questionnaire for Measuring Internet Search Dependence.

    PubMed

    Wang, Yifan; Wu, Lingdan; Zhou, Hongli; Xu, Jiaojing; Dong, Guangheng

    2016-01-01

    Internet search has become the most common way that people deal with issues and problems in everyday life. The wide use of Internet search has largely changed the way people search for and store information. There is a growing interest in the impact of Internet search on users' affect, cognition, and behavior. Thus, it is essential to develop a tool to measure the changes in psychological characteristics as a result of long-term use of Internet search. The aim of this study is to develop a Questionnaire on Internet Search Dependence (QISD) and test its reliability and validity. We first proposed a preliminary structure and items of the QISD based on literature review, supplemental investigations, and interviews. And then, we assessed the psychometric properties and explored the factor structure of the initial version via exploratory factor analysis (EFA). The EFA results indicated that four dimensions of the QISD were very reliable, i.e., habitual use of Internet search, withdrawal reaction, Internet search trust, and external storage under Internet search. Finally, we tested the factor solution obtained from EFA through confirmatory factor analysis (CFA). The results of CFA confirmed that the four dimensions model fits the data well. In all, this study suggests that the 12-item QISD is of high reliability and validity and can serve as a preliminary tool to measure the features of Internet search dependence.

  15. The Human Transcript Database: A Catalogue of Full Length cDNA Inserts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouck, John; McLeod, Michael; Worley, Kim

    1999-09-10

    The BCM Search Launcher provided improved access to web-based sequence analysis services during the granting period and beyond. The Search Launcher web site grouped analysis procedures by function and provided default parameters that produced reasonable search results for most applications. For instance, most queries were automatically masked for repeat sequences prior to sequence database searches to avoid spurious matches. In addition to the web-based access and arrangements that made the functions easier to use, the BCM Search Launcher provided unique value-added applications like the BEAUTY sequence database search tool that combined information about protein domains and sequence database search results to give an enhanced, more complete picture of the reliability and relative value of the information reported. This enhanced search tool made evaluating search results more straightforward and consistent. Some of the favorite features of the web site are the sequence utilities and the batch client functionality that allows processing of multiple samples from the command line interface. One measure of the success of the BCM Search Launcher is the number of sites that have adopted the models first developed on the site. The graphic display on the BLAST search from the NCBI web site is one such outgrowth, as is the display of protein domain search results within BLAST search results, and the design of the Biology Workbench application. The logs of usage and comments from users confirm the great utility of this resource.

  16. A National Solar Digital Observatory

    NASA Astrophysics Data System (ADS)

    Hill, F.

    2000-05-01

    The continuing development of the Internet as a research tool, combined with an improving funding climate, has sparked new interest in the development of Internet-linked astronomical data bases and analysis tools. Here I outline a concept for a National Solar Digital Observatory (NSDO), a set of data archives and analysis tools distributed in physical location at sites which already host such systems. A central web site would be implemented from which a user could search all of the component archives, select and download data, and perform analyses. Example components include NSO's Digital Library containing its synoptic and GONG data, and the forthcoming SOLIS archive. Several other archives, in various stages of development, also exist. Potential analysis tools include content-based searches, visualized programming tools, and graphics routines. The existence of an NSDO would greatly facilitate solar physics research, as a user would no longer need to have detailed knowledge of all solar archive sites. It would also improve public outreach efforts. The National Solar Observatory is operated by AURA, Inc. under a cooperative agreement with the National Science Foundation.

  17. Googling DNA sequences on the World Wide Web.

    PubMed

    Hajibabaei, Mehrdad; Singer, Gregory A C

    2009-11-10

    New web-based technologies provide an excellent opportunity for sharing and accessing information and for using the web as a platform for interaction and collaboration. Although several specialized tools are available for analyzing DNA sequence information, conventional web-based tools have not been utilized for bioinformatics applications. We have developed a novel algorithm and implemented it for searching species-specific genomic sequences, DNA barcodes, by using popular web-based methods such as Google. We developed an alignment-independent, character-based algorithm based on dividing a sequence library (DNA barcodes) and the query sequence into words. The actual search is conducted by conventional search tools such as the freely available Google Desktop Search. We implemented our algorithm in two exemplar packages. We developed pre- and post-processing software to provide customized input and output services, respectively. Our analysis of all publicly available DNA barcode sequences shows high accuracy as well as rapid results. Our method makes use of conventional web-based technologies for specialized genetic data. It provides a robust and efficient solution for sequence search on the web. The integration of our search method for large-scale sequence libraries such as DNA barcodes provides an excellent web-based tool for accessing this information and linking it to other available categories of information on the web.
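
    The alignment-free, word-based idea can be sketched roughly as follows: both the barcode library and the query are broken into fixed-length words, and candidates are ranked by the fraction of shared words. The word length, the toy sequences, and the scoring below are illustrative assumptions, not the authors' exact implementation.

```python
def to_words(seq: str, w: int = 8) -> set[str]:
    """Split a DNA sequence into overlapping fixed-length words."""
    seq = seq.upper().replace("-", "")
    return {seq[i:i + w] for i in range(len(seq) - w + 1)}

def search(query: str, library: dict[str, str], w: int = 8) -> list[tuple[str, float]]:
    """Rank library entries by the fraction of query words they share."""
    q_words = to_words(query, w)
    hits = []
    for name, seq in library.items():
        shared = len(q_words & to_words(seq, w))
        hits.append((name, shared / max(len(q_words), 1)))
    return sorted(hits, key=lambda h: h[1], reverse=True)

# Toy barcode library; sequences are invented for illustration.
barcodes = {
    "species_A": "ATGCGTACGTTAGCCTAGGCTTACGATCGA",
    "species_B": "ATGCGTTCGTTAGCATAGGCTAACGATCGA",
}
print(search("ATGCGTACGTTAGCCTAGGC", barcodes))
```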

  18. 9th Annual Systems Engineering Conference: Volume 4 Thursday

    DTIC Science & Technology

    2006-10-26

    Connectivity, Speed, Volume
    • Enterprise application integration
    • Workflow integration or multi-media
    • Federated search capability
    • Link analysis and...categorization, federated search & automated discovery of information
    • Collaborative tools to quickly share relevant information
    Built on commercial

  19. GCView: the genomic context viewer for protein homology searches

    PubMed Central

    Grin, Iwan; Linke, Dirk

    2011-01-01

    Genomic neighborhood can provide important insights into evolution and function of a protein or gene. When looking at operons, changes in operon structure and composition can only be revealed by looking at the operon as a whole. To facilitate the analysis of the genomic context of a query in multiple organisms we have developed Genomic Context Viewer (GCView). GCView accepts results from one or multiple protein homology searches such as BLASTp as input. For each hit, the neighboring protein-coding genes are extracted, the regions of homology are labeled for each input and the results are presented as a clear, interactive graphical output. It is also possible to add more searches to iteratively refine the output. GCView groups outputs by the hits for different proteins. This allows for easy comparison of different operon compositions and structures. The tool is embedded in the framework of the Bioinformatics Toolkit of the Max-Planck Institute for Developmental Biology (MPI Toolkit). Job results from the homology search tools inside the MPI Toolkit can be forwarded to GCView and results can be subsequently analyzed by sequence analysis tools. Results are stored online, allowing for later reinspection. GCView is freely available at http://toolkit.tuebingen.mpg.de/gcview. PMID:21609955

  20. Design of the VISITOR Tool: A Versatile ImpulSive Interplanetary Trajectory OptimizeR

    NASA Technical Reports Server (NTRS)

    Corpaccioli, Luca; Linskens, Harry; Komar, David R.

    2014-01-01

    The design of trajectories for interplanetary missions represents one of the most complex and important problems to solve during conceptual space mission design. To facilitate conceptual mission sizing activities, it is essential to obtain sufficiently accurate trajectories in a fast and repeatable manner. To this end, the VISITOR tool was developed. This tool modularly augments a patched-conic MGA-1DSM model with a mass model, launch window analysis, and the ability to simulate more realistic arrival and departure operations. This was implemented in MATLAB, exploiting the built-in optimization tools and vector analysis routines. The chosen optimization strategy uses a grid search followed by a pattern search, an iterative variable-grid method. A genetic algorithm can be selectively used to improve search space pruning, at the cost of losing the repeatability of the results and increased computation time. The tool was validated against seven flown missions: the average total mission ΔV offset from the nominal trajectory was 9.1%, which was reduced to 7.3% when using the genetic algorithm at the cost of an increase in computation time by a factor of 5.7. It was found that VISITOR was well-suited for the conceptual design of interplanetary trajectories, while also facilitating future improvements due to its modular structure.
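
    A minimal sketch of the grid-then-pattern-search idea on a toy two-variable objective (standing in for, say, departure date and time of flight) is shown below; the objective function, bounds, and step sizes are placeholders, not VISITOR's actual trajectory or mass models.

```python
import numpy as np

def delta_v(x):
    """Placeholder objective standing in for a total-mission delta-V model."""
    t0, tof = x
    return (t0 - 3.2) ** 2 + 0.5 * (tof - 7.5) ** 2 + 2.0

def grid_search(bounds, n=21):
    """Coarse scan over a regular grid to find a starting point."""
    grids = [np.linspace(lo, hi, n) for lo, hi in bounds]
    best, best_x = np.inf, None
    for t0 in grids[0]:
        for tof in grids[1]:
            f = delta_v((t0, tof))
            if f < best:
                best, best_x = f, np.array([t0, tof])
    return best_x, best

def pattern_search(x, step=0.5, tol=1e-4):
    """Refine locally; shrink the step whenever no axis move improves the objective."""
    f = delta_v(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                trial = x.copy()
                trial[i] += d
                ft = delta_v(trial)
                if ft < f:
                    x, f, improved = trial, ft, True
        if not improved:
            step *= 0.5
    return x, f

x0, _ = grid_search([(0.0, 10.0), (1.0, 15.0)])
x_opt, f_opt = pattern_search(x0)
print(x_opt, f_opt)
```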

  1. Modeling and Analysis of Space Based Transceivers

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Liebetreu, John; Moore, Michael S.; Price, Jeremy C.; Abbott, Ben

    2005-01-01

    This paper presents the tool chain, methodology, and initial results of a study to provide a thorough, objective, and quantitative analysis of the design alternatives for space Software Defined Radio (SDR) transceivers. The approach taken was to develop a set of models and tools for describing communications requirements, the algorithm resource requirements, the available hardware, and the alternative software architectures, and generate analysis data necessary to compare alternative designs. The Space Transceiver Analysis Tool (STAT) was developed to help users identify and select representative designs, calculate the analysis data, and perform a comparative analysis of the representative designs. The tool allows the design space to be searched quickly while permitting incremental refinement in regions of higher payoff.

  2. Modeling and Analysis of Space Based Transceivers

    NASA Technical Reports Server (NTRS)

    Moore, Michael S.; Price, Jeremy C.; Abbott, Ben; Liebetreu, John; Reinhart, Richard C.; Kacpura, Thomas J.

    2007-01-01

    This paper presents the tool chain, methodology, and initial results of a study to provide a thorough, objective, and quantitative analysis of the design alternatives for space Software Defined Radio (SDR) transceivers. The approach taken was to develop a set of models and tools for describing communications requirements, the algorithm resource requirements, the available hardware, and the alternative software architectures, and generate analysis data necessary to compare alternative designs. The Space Transceiver Analysis Tool (STAT) was developed to help users identify and select representative designs, calculate the analysis data, and perform a comparative analysis of the representative designs. The tool allows the design space to be searched quickly while permitting incremental refinement in regions of higher payoff.

  3. PANTHER version 11: expanded annotation data from Gene Ontology and Reactome pathways, and data analysis tool enhancements.

    PubMed

    Mi, Huaiyu; Huang, Xiaosong; Muruganujan, Anushya; Tang, Haiming; Mills, Caitlin; Kang, Diane; Thomas, Paul D

    2017-01-04

    The PANTHER database (Protein ANalysis THrough Evolutionary Relationships, http://pantherdb.org) contains comprehensive information on the evolution and function of protein-coding genes from 104 completely sequenced genomes. PANTHER software tools allow users to classify new protein sequences, and to analyze gene lists obtained from large-scale genomics experiments. In the past year, major improvements include a large expansion of classification information available in PANTHER, as well as significant enhancements to the analysis tools. Protein subfamily functional classifications have more than doubled due to progress of the Gene Ontology Phylogenetic Annotation Project. For human genes (as well as a few other organisms), PANTHER now also supports enrichment analysis using pathway classifications from the Reactome resource. The gene list enrichment tools include a new 'hierarchical view' of results, enabling users to leverage the structure of the classifications/ontologies; the tools also allow users to upload genetic variant data directly, rather than requiring prior conversion to a gene list. The updated coding single-nucleotide polymorphisms (SNP) scoring tool uses an improved algorithm. The hidden Markov model (HMM) search tools now use HMMER3, dramatically reducing search times and improving accuracy of E-value statistics. Finally, the PANTHER Tree-Attribute Viewer has been implemented in JavaScript, with new views for exploring protein sequence evolution. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
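
    Gene-list enrichment of the kind such tools report is commonly computed with a one-sided hypergeometric test; the sketch below shows that generic calculation with made-up counts and is not PANTHER's own statistics, ontology data, or implementation.

```python
from scipy.stats import hypergeom

# Hypothetical counts: M genes in the reference set, n of them annotated to a
# category, N genes in the uploaded list, k of those annotated to the category.
M, n, N, k = 20000, 150, 400, 12

# Probability of observing k or more annotated genes in the list by chance.
p_value = hypergeom.sf(k - 1, M, n, N)
print(f"enrichment p-value = {p_value:.3e}")
```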

  4. SIRW: A web server for the Simple Indexing and Retrieval System that combines sequence motif searches with keyword searches.

    PubMed

    Ramu, Chenna

    2003-07-01

    SIRW (http://sirw.embl.de/) is a World Wide Web interface to the Simple Indexing and Retrieval System (SIR) that is capable of parsing and indexing various flat file databases. In addition it provides a framework for doing sequence analysis (e.g. motif pattern searches) for selected biological sequences through keyword search. SIRW is an ideal tool for the bioinformatics community for searching as well as analyzing biological sequences of interest.
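
    In the spirit of combining a keyword search with a motif pattern search, the sketch below filters toy sequence records by a keyword and then applies a regular-expression motif; the records and the N-glycosylation-like pattern are illustrative assumptions, not SIRW's data or code.

```python
import re

# Toy records: (description, protein sequence); contents are invented.
records = [
    ("human kinase, putative",   "MKTANSSLTPWLNGSAAK"),
    ("yeast transporter",        "MLLPATNQSVVKNHTEWR"),
    ("human receptor fragment",  "MAGWTPLLSARNDTQK"),
]

keyword = "human"
motif = re.compile(r"N[^P][ST]")   # PROSITE-like N-glycosylation sequon

for desc, seq in records:
    if keyword in desc:                                          # keyword step
        hits = [m.start() + 1 for m in motif.finditer(seq)]      # motif step
        if hits:
            print(f"{desc}: motif at positions {hits}")
```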

  5. LFQuant: a label-free fast quantitative analysis tool for high-resolution LC-MS/MS proteomics data.

    PubMed

    Zhang, Wei; Zhang, Jiyang; Xu, Changming; Li, Ning; Liu, Hui; Ma, Jie; Zhu, Yunping; Xie, Hongwei

    2012-12-01

    Database searching based methods for label-free quantification aim to reconstruct the peptide extracted ion chromatogram based on the identification information, which can limit the search space and thus make the data processing much faster. The random effect of the MS/MS sampling can be remedied by cross-assignment among different runs. Here, we present a new label-free fast quantitative analysis tool, LFQuant, for high-resolution LC-MS/MS proteomics data based on database searching. It is designed to accept raw data in two common formats (mzXML and Thermo RAW), and database search results from mainstream tools (MASCOT, SEQUEST, and X!Tandem), as input data. LFQuant can handle large-scale label-free data with fractionation such as SDS-PAGE and 2D LC. It is easy to use and provides handy user interfaces for data loading, parameter setting, quantitative analysis, and quantitative data visualization. LFQuant was compared with two common quantification software packages, MaxQuant and IDEAL-Q, on the replication data set and the UPS1 standard data set. The results show that LFQuant performs better than them in terms of both precision and accuracy, and consumes significantly less processing time. LFQuant is freely available under the GNU General Public License v3.0 at http://sourceforge.net/projects/lfquant/. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Sirius PSB: a generic system for analysis of biological sequences.

    PubMed

    Koh, Chuan Hock; Lin, Sharene; Jedd, Gregory; Wong, Limsoon

    2009-12-01

    Computational tools are essential components of modern biological research. For example, BLAST searches can be used to identify related proteins based on sequence homology, or when a new genome is sequenced, prediction models can be used to annotate functional sites such as transcription start sites, translation initiation sites and polyadenylation sites and to predict protein localization. Here we present Sirius Prediction Systems Builder (PSB), a new computational tool for sequence analysis, classification and searching. Sirius PSB has four main operations: (1) building a classifier, (2) deploying a classifier, (3) searching for proteins similar to query proteins, and (4) preliminary and post-prediction analysis. Sirius PSB supports all these operations via a simple and interactive graphical user interface. Besides being a convenient tool, Sirius PSB also introduces two novelties in sequence analysis. Firstly, a genetic algorithm is used to identify interesting features in the feature space. Secondly, instead of the conventional method of searching for similar proteins via sequence similarity, we introduced searching via feature similarity. To demonstrate the capabilities of Sirius PSB, we have built two prediction models - one for the recognition of Arabidopsis polyadenylation sites and another for the subcellular localization of proteins. Both systems are competitive with current state-of-the-art models, based on evaluation on public datasets. More notably, the time and effort required to build each model is greatly reduced with the assistance of Sirius PSB. Furthermore, we show that under certain conditions when BLAST is unable to find related proteins, Sirius PSB can identify functionally related proteins based on their biophysical similarities. Sirius PSB and its related supplements are available at: http://compbio.ddns.comp.nus.edu.sg/~sirius.
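
    The "search via feature similarity" idea can be illustrated with a simple cosine similarity over biophysical feature vectors; the features and values below are invented placeholders and do not reflect Sirius PSB's actual feature set or scoring.

```python
import numpy as np

# Hypothetical feature vectors (e.g. length, hydrophobicity, isoelectric point,
# fraction of charged residues), one row per protein; values are made up.
proteins = ["query", "protein_1", "protein_2", "protein_3"]
features = np.array([
    [320, 0.42, 6.8, 0.21],
    [310, 0.40, 6.9, 0.23],
    [512, 0.15, 9.1, 0.35],
    [330, 0.45, 6.5, 0.20],
], dtype=float)

# Standardize each feature, then rank candidates by cosine similarity to the query.
z = (features - features.mean(axis=0)) / features.std(axis=0)
query, candidates = z[0], z[1:]
sims = candidates @ query / (np.linalg.norm(candidates, axis=1) * np.linalg.norm(query))
for name, s in sorted(zip(proteins[1:], sims), key=lambda t: t[1], reverse=True):
    print(f"{name}: cosine similarity = {s:.2f}")
```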

  7. reSpect: Software for Identification of High and Low Abundance Ion Species in Chimeric Tandem Mass Spectra

    PubMed Central

    Shteynberg, David; Mendoza, Luis; Hoopmann, Michael R.; Sun, Zhi; Schmidt, Frank; Deutsch, Eric W.; Moritz, Robert L.

    2016-01-01

    Most shotgun proteomics data analysis workflows are based on the assumption that each fragment ion spectrum is explained by a single species of peptide ion isolated by the mass spectrometer; however, in reality mass spectrometers often isolate more than one peptide ion within the window of isolation that contributes to additional peptide fragment peaks in many spectra. We present a new tool called reSpect, implemented in the Trans-Proteomic Pipeline (TPP), that enables an iterative workflow whereby fragment ion peaks explained by a peptide ion identified in one round of sequence searching or spectral library search are attenuated based on the confidence of the identification, and then the altered spectrum is subjected to further rounds of searching. The reSpect tool is not implemented as a search engine, but rather as a post search engine processing step where only fragment ion intensities are altered. This enables the application of any search engine combination in the following iterations. Thus, reSpect is compatible with all other protein sequence database search engines as well as peptide spectral library search engines that are supported by the TPP. We show that while some datasets are highly amenable to chimeric spectrum identification and lead to additional peptide identification boosts of over 30% with as many as four different peptide ions identified per spectrum, datasets with narrow precursor ion selection only benefit from such processing at the level of a few percent. We demonstrate a technique that facilitates the determination of the degree to which a dataset would benefit from chimeric spectrum analysis. The reSpect tool is free and open source, provided within the TPP and available at the TPP website. PMID:26419769

  8. reSpect: software for identification of high and low abundance ion species in chimeric tandem mass spectra.

    PubMed

    Shteynberg, David; Mendoza, Luis; Hoopmann, Michael R; Sun, Zhi; Schmidt, Frank; Deutsch, Eric W; Moritz, Robert L

    2015-11-01

    Most shotgun proteomics data analysis workflows are based on the assumption that each fragment ion spectrum is explained by a single species of peptide ion isolated by the mass spectrometer; however, in reality mass spectrometers often isolate more than one peptide ion within the window of isolation that contribute to additional peptide fragment peaks in many spectra. We present a new tool called reSpect, implemented in the Trans-Proteomic Pipeline (TPP), which enables an iterative workflow whereby fragment ion peaks explained by a peptide ion identified in one round of sequence searching or spectral library search are attenuated based on the confidence of the identification, and then the altered spectrum is subjected to further rounds of searching. The reSpect tool is not implemented as a search engine, but rather as a post-search engine processing step where only fragment ion intensities are altered. This enables the application of any search engine combination in the iterations that follow. Thus, reSpect is compatible with all other protein sequence database search engines as well as peptide spectral library search engines that are supported by the TPP. We show that while some datasets are highly amenable to chimeric spectrum identification and lead to additional peptide identification boosts of over 30% with as many as four different peptide ions identified per spectrum, datasets with narrow precursor ion selection only benefit from such processing at the level of a few percent. We demonstrate a technique that facilitates the determination of the degree to which a dataset would benefit from chimeric spectrum analysis. The reSpect tool is free and open source, provided within the TPP and available at the TPP website.

  9. reSpect: Software for Identification of High and Low Abundance Ion Species in Chimeric Tandem Mass Spectra

    NASA Astrophysics Data System (ADS)

    Shteynberg, David; Mendoza, Luis; Hoopmann, Michael R.; Sun, Zhi; Schmidt, Frank; Deutsch, Eric W.; Moritz, Robert L.

    2015-11-01

    Most shotgun proteomics data analysis workflows are based on the assumption that each fragment ion spectrum is explained by a single species of peptide ion isolated by the mass spectrometer; however, in reality mass spectrometers often isolate more than one peptide ion within the window of isolation that contribute to additional peptide fragment peaks in many spectra. We present a new tool called reSpect, implemented in the Trans-Proteomic Pipeline (TPP), which enables an iterative workflow whereby fragment ion peaks explained by a peptide ion identified in one round of sequence searching or spectral library search are attenuated based on the confidence of the identification, and then the altered spectrum is subjected to further rounds of searching. The reSpect tool is not implemented as a search engine, but rather as a post-search engine processing step where only fragment ion intensities are altered. This enables the application of any search engine combination in the iterations that follow. Thus, reSpect is compatible with all other protein sequence database search engines as well as peptide spectral library search engines that are supported by the TPP. We show that while some datasets are highly amenable to chimeric spectrum identification and lead to additional peptide identification boosts of over 30% with as many as four different peptide ions identified per spectrum, datasets with narrow precursor ion selection only benefit from such processing at the level of a few percent. We demonstrate a technique that facilitates the determination of the degree to which a dataset would benefit from chimeric spectrum analysis. The reSpect tool is free and open source, provided within the TPP and available at the TPP website.
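
    The attenuation step described in the reSpect records above can be sketched roughly as follows: fragment peaks explained by an already-identified peptide are scaled down in proportion to the identification's confidence before the spectrum is searched again. The tolerance, scaling rule, and data below are illustrative assumptions, not the TPP implementation.

```python
def attenuate(peaks, explained_mz, probability, tol=0.02):
    """Scale down peak intensities that match explained fragment m/z values.

    peaks: list of (mz, intensity); explained_mz: fragment m/z values of the
    identified peptide; probability: confidence of that identification (0-1).
    """
    factor = 1.0 - probability          # high-confidence IDs attenuate more
    out = []
    for mz, inten in peaks:
        if any(abs(mz - e) <= tol for e in explained_mz):
            inten *= factor
        out.append((mz, inten))
    return out

# Toy chimeric spectrum and one identified peptide's fragment m/z values.
spectrum = [(175.12, 900.0), (288.20, 450.0), (401.29, 300.0), (512.31, 120.0)]
first_id_fragments = [175.12, 401.29]
print(attenuate(spectrum, first_id_fragments, probability=0.95))
```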

  10. CoryneBase: Corynebacterium Genomic Resources and Analysis Tools at Your Fingertips

    PubMed Central

    Tan, Mui Fern; Jakubovics, Nick S.; Wee, Wei Yee; Mutha, Naresh V. R.; Wong, Guat Jah; Ang, Mia Yang; Yazdi, Amir Hessam; Choo, Siew Woh

    2014-01-01

    Corynebacteria are used for a wide variety of industrial purposes but some species are associated with human diseases. With an increasing number of corynebacterial genomes having been sequenced, comparative analysis of these strains may provide a better understanding of their biology, phylogeny, virulence and taxonomy that may lead to the discovery of beneficial industrial strains or contribute to better management of diseases. To facilitate the ongoing research of corynebacteria, a specialized central repository and analysis platform for the corynebacterial research community is needed to host the fast-growing amount of genomic data and facilitate the analysis of these data. Here we present CoryneBase, a genomic database for Corynebacterium with diverse functionality for the analysis of genomes, aimed at providing: (1) annotated genome sequences of Corynebacterium where 165,918 coding sequences and 4,180 RNAs can be found in 27 species; (2) access to comprehensive Corynebacterium data through the use of advanced web technologies for interactive web interfaces; and (3) advanced bioinformatic analysis tools consisting of standard BLAST for homology search, VFDB BLAST for sequence homology search against the Virulence Factor Database (VFDB), a Pairwise Genome Comparison (PGC) tool for comparative genomic analysis, and a newly designed Pathogenomics Profiling Tool (PathoProT) for comparative pathogenomic analysis. CoryneBase offers access to a range of Corynebacterium genomic resources as well as analysis tools for comparative genomics and pathogenomics. It is publicly available at http://corynebacterium.um.edu.my/. PMID:24466021

  11. Rhodobase, a meta-analytical tool for reconstructing gene regulatory networks in a model photosynthetic bacterium.

    PubMed

    Moskvin, Oleg V; Bolotin, Dmitry; Wang, Andrew; Ivanov, Pavel S; Gomelsky, Mark

    2011-02-01

    We present Rhodobase, a web-based meta-analytical tool for analysis of transcriptional regulation in a model anoxygenic photosynthetic bacterium, Rhodobacter sphaeroides. The gene association meta-analysis is based on pooled data from 100 R. sphaeroides whole-genome DNA microarrays. Gene-centric regulatory networks were visualized using the StarNet approach (Jupiter, D.C., VanBuren, V., 2008. A visual data mining tool that facilitates reconstruction of transcription regulatory networks. PLoS ONE 3, e1717) with several modifications. We developed a means to identify and visualize operons and superoperons. We designed a framework for the cross-genome search for transcription factor binding sites that takes into account the high GC-content and oligonucleotide usage profile characteristic of the R. sphaeroides genome. To facilitate reconstruction of directional relationships between co-regulated genes, we screened upstream sequences (-400 to +20bp from start codons) of all genes for putative binding sites of bacterial transcription factors using a self-optimizing search method developed here. To test the performance of the meta-analysis tools and transcription factor site predictions, we reconstructed selected nodes of the R. sphaeroides transcription factor-centric regulatory matrix. The test revealed regulatory relationships that correlate well with the experimentally derived data. The database of transcriptional profile correlations, the network visualization engine and the optimized search engine for transcription factor binding site analysis are available at http://rhodobase.org. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
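
    As a crude illustration of the gene-association idea behind pooled-microarray meta-analysis, the sketch below derives co-expression edges from a Pearson correlation matrix over a fabricated expression matrix; the data, gene names, and threshold are invented, and the actual StarNet-based approach is considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expression matrix: rows = genes, columns = microarray experiments.
genes = ["geneA", "geneB", "geneC", "geneD"]
expr = rng.normal(size=(4, 100))
expr[1] = expr[0] * 0.9 + rng.normal(scale=0.3, size=100)   # make geneB track geneA

corr = np.corrcoef(expr)          # gene-by-gene Pearson correlations
threshold = 0.7

edges = [(genes[i], genes[j], round(corr[i, j], 2))
         for i in range(len(genes)) for j in range(i + 1, len(genes))
         if abs(corr[i, j]) >= threshold]
print(edges)
```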

  12. SearchLight: a freely available web-based quantitative spectral analysis tool (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Prabhat, Prashant; Peet, Michael; Erdogan, Turan

    2016-03-01

    In order to design a fluorescence experiment, typically the spectra of a fluorophore and of a filter set are overlaid on a single graph and the spectral overlap is evaluated intuitively. However, in a typical fluorescence imaging system the fluorophores and optical filters are not the only wavelength dependent variables - even the excitation light sources have been changing. For example, LED Light Engines may have a significantly different spectral response compared to the traditional metal-halide lamps. Therefore, for a more accurate assessment of fluorophore-to-filter-set compatibility, all sources of spectral variation should be taken into account simultaneously. Additionally, intuitive or qualitative evaluation of many spectra does not necessarily provide a realistic assessment of the system performance. "SearchLight" is a freely available web-based spectral plotting and analysis tool that can be used to address the need for accurate, quantitative spectral evaluation of fluorescence measurement systems. This tool is available at: http://searchlight.semrock.com/. Based on a detailed mathematical framework [1], SearchLight calculates signal, noise, and signal-to-noise ratio for multiple combinations of fluorophores, filter sets, light sources and detectors. SearchLight allows for qualitative and quantitative evaluation of the compatibility of filter sets with fluorophores, analysis of bleed-through, identification of optimized spectral edge locations for a set of filters under specific experimental conditions, and guidance regarding labeling protocols in multiplexing imaging assays. Entire SearchLight sessions can be shared with colleagues and collaborators and saved for future reference. [1] Anderson, N., Prabhat, P. and Erdogan, T., Spectral Modeling in Fluorescence Microscopy, http://www.semrock.com (2010).
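
    The kind of quantitative evaluation described here can be approximated by integrating the product of the relevant spectra; the sketch below computes a relative signal for one fluorophore/filter/source combination on made-up Gaussian spectra and is not the tool's actual model (see the cited reference for that).

```python
import numpy as np

wl = np.arange(350.0, 750.0, 1.0)          # wavelength grid in nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical spectra, all normalized to a 0-1 scale.
source      = gaussian(470, 15)                        # e.g. an LED line
ex_filter   = (np.abs(wl - 470) < 20).astype(float)    # excitation bandpass
dye_ex      = gaussian(490, 25)                        # fluorophore excitation
dye_em      = gaussian(525, 30)                        # fluorophore emission
em_filter   = (np.abs(wl - 530) < 20).astype(float)    # emission bandpass
detector_qe = np.full_like(wl, 0.8)                    # flat detector response

# Relative signal: excitation efficiency times collected fraction of emission.
excitation = np.trapz(source * ex_filter * dye_ex, wl)
emission   = np.trapz(dye_em * em_filter * detector_qe, wl) / np.trapz(dye_em, wl)
print(f"relative signal ~ {excitation * emission:.3f}")
```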

  13. Global Search Trends of Oral Problems using Google Trends from 2004 to 2016: An Exploratory Analysis.

    PubMed

    Patthi, Basavaraj; Kumar, Jishnu Krishna; Singla, Ashish; Gupta, Ritu; Prasad, Monika; Ali, Irfan; Dhama, Kuldeep; Niraj, Lav Kumar

    2017-09-01

    Oral diseases are a pandemic cause of morbidity with widespread geographic distribution. This technology-based era has brought about easier knowledge transfer than the traditional dependency on information obtained from family doctors. Hence, harvesting this system of trends can aid in oral disease quantification. The aim was to conduct an exploratory analysis of the changes in internet search volumes of oral diseases by using Google Trends© (GT©). GT© was utilized to provide real-world facts based on search terms related to categories, interest by region and interest over time. The time period chosen was from January 2004 to December 2016. Five different search terms were explored and compared based on the highest relative search volumes, along with comma separated value files, to obtain an insight into the highest search traffic. The search volume measured over the time span noted the term "Dental caries" to be the most searched in Japan, "Gingivitis" in Jordan, "Oral Cancer" in Taiwan, "No Teeth" in Australia, "HIV symptoms" in Zimbabwe, "Broken Teeth" in the United Kingdom, "Cleft palate" in the Philippines, and "Toothache" in Indonesia, and the comparison of the top five searched terms showed "Gingivitis" with the highest search volume. The results from the present study offer an insight into a competent tool that can analyse and compare oral diseases over time. The trend research platform can be used on emerging diseases and their drift in geographic populations with great acumen. This tool can be utilized in forecasting, modulating marketing strategies and planning disability limitation techniques.

  14. The Exponential Expansion of Simulation: How Simulation has Grown as a Research Tool

    DTIC Science & Technology

    2012-09-01

    exponential growth of computing power. Although other analytic approaches also benefit from this trend, keyword searches of several scholarly search ... engines reveal that the reliance on simulation is increasing more rapidly. A descriptive analysis paints a compelling picture: simulation is frequently

  15. Collaborative search in electronic health records.

    PubMed

    Zheng, Kai; Mei, Qiaozhu; Hanauer, David A

    2011-05-01

    A full-text search engine can be a useful tool for augmenting the reuse value of unstructured narrative data stored in electronic health records (EHR). A prominent barrier to the effective utilization of such tools originates from users' lack of search expertise and/or medical-domain knowledge. To mitigate the issue, the authors experimented with a 'collaborative search' feature through a homegrown EHR search engine that allows users to preserve their search knowledge and share it with others. This feature was inspired by the success of many social information-foraging techniques used on the web that leverage users' collective wisdom to improve the quality and efficiency of information retrieval. The authors conducted an empirical evaluation study over a 4-year period. The user sample consisted of 451 academic researchers, medical practitioners, and hospital administrators. The data were analyzed using a social-network analysis to delineate the structure of the user collaboration networks that mediated the diffusion of knowledge of search. The users embraced the concept with considerable enthusiasm. About half of the EHR searches processed by the system (0.44 million) were based on stored search knowledge; 0.16 million utilized shared knowledge made available by other users. The social-network analysis results also suggest that the user-collaboration networks engendered by the collaborative search feature played an instrumental role in enabling the transfer of search knowledge across people and domains. Applying collaborative search, a social information-foraging technique popularly used on the web, may provide the potential to improve the quality and efficiency of information retrieval in healthcare.

  16. SATRAT: Staphylococcus aureus transcript regulatory network analysis tool.

    PubMed

    Gopal, Tamilselvi; Nagarajan, Vijayaraj; Elasri, Mohamed O

    2015-01-01

    Staphylococcus aureus is a commensal organism that primarily colonizes the nose of healthy individuals. S. aureus causes a spectrum of infections that range from skin and soft-tissue infections to fatal invasive diseases. S. aureus uses a large number of virulence factors that are regulated in a coordinated fashion. The complex regulatory mechanisms have been investigated in numerous high-throughput experiments. Access to this data is critical to studying this pathogen. Previously, we developed a compilation of microarray experimental data to enable researchers to search, browse, compare, and contrast transcript profiles. We have substantially updated this database and have built a novel exploratory tool, SATRAT, the S. aureus transcript regulatory network analysis tool, based on the updated database. This tool is capable of performing deep searches using a query and generating an interactive regulatory network based on associations among the regulators of any query gene. We believe this integrated regulatory network analysis tool would help researchers explore the missing links and identify novel pathways that regulate virulence in S. aureus. Also, the data model and the network generation code used to build this resource are open source, enabling researchers to build similar resources for other bacterial systems.

  17. Collaborative search in electronic health records

    PubMed Central

    Mei, Qiaozhu; Hanauer, David A

    2011-01-01

    Objective A full-text search engine can be a useful tool for augmenting the reuse value of unstructured narrative data stored in electronic health records (EHR). A prominent barrier to the effective utilization of such tools originates from users' lack of search expertise and/or medical-domain knowledge. To mitigate the issue, the authors experimented with a ‘collaborative search’ feature through a homegrown EHR search engine that allows users to preserve their search knowledge and share it with others. This feature was inspired by the success of many social information-foraging techniques used on the web that leverage users' collective wisdom to improve the quality and efficiency of information retrieval. Design The authors conducted an empirical evaluation study over a 4-year period. The user sample consisted of 451 academic researchers, medical practitioners, and hospital administrators. The data were analyzed using a social-network analysis to delineate the structure of the user collaboration networks that mediated the diffusion of knowledge of search. Results The users embraced the concept with considerable enthusiasm. About half of the EHR searches processed by the system (0.44 million) were based on stored search knowledge; 0.16 million utilized shared knowledge made available by other users. The social-network analysis results also suggest that the user-collaboration networks engendered by the collaborative search feature played an instrumental role in enabling the transfer of search knowledge across people and domains. Conclusion Applying collaborative search, a social information-foraging technique popularly used on the web, may provide the potential to improve the quality and efficiency of information retrieval in healthcare. PMID:21486887

  18. Data Albums: An Event Driven Search, Aggregation and Curation Tool for Earth Science

    NASA Technical Reports Server (NTRS)

    Ramachandran, Rahul; Kulkarni, Ajinkya; Maskey, Manil; Bakare, Rohan; Basyal, Sabin; Li, Xiang; Flynn, Shannon

    2014-01-01

    One of the largest continuing challenges in any Earth science investigation is the discovery and access of useful science content from the increasingly large volumes of Earth science data and related information available. Approaches used in Earth science research such as case study analysis and climatology studies involve discovering and gathering diverse data sets and information to support the research goals. Research based on case studies involves a detailed description of specific weather events using data from different sources, to characterize physical processes in play for a specific event. Climatology-based research tends to focus on the representativeness of a given event, by studying the characteristics and distribution of a large number of events. This allows researchers to generalize characteristics such as spatio-temporal distribution, intensity, annual cycle, duration, etc. To gather relevant data and information for case studies and climatology analysis is both tedious and time consuming. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. Those who know exactly the datasets of interest can obtain the specific files they need using these systems. However, in cases where researchers are interested in studying a significant event, they have to manually assemble a variety of datasets relevant to it by searching the different distributed data systems. In these cases, a search process needs to be organized around the event rather than observing instruments. In addition, the existing data systems assume users have sufficient knowledge regarding the domain vocabulary to be able to effectively utilize their catalogs. These systems do not support new or interdisciplinary researchers who may be unfamiliar with the domain terminology. This paper presents a specialized search, aggregation and curation tool for Earth science to address these existing challenges. The search tool automatically creates curated "Data Albums", aggregated collections of information related to a specific science topic or event, containing links to relevant data files (granules) from different instruments; tools and services for visualization and analysis; and information about the event contained in news reports, images or videos to supplement research analysis. Curation in the tool is driven via an ontology based relevancy ranking algorithm to filter out non-relevant information and data.

  19. Patient-Centered Tools for Medication Information Search

    PubMed Central

    Wilcox, Lauren; Feiner, Steven; Elhadad, Noémie; Vawdrey, David; Tran, Tran H.

    2016-01-01

    Recent research focused on online health information seeking highlights a heavy reliance on general-purpose search engines. However, current general-purpose search interfaces do not necessarily provide adequate support for non-experts in identifying suitable sources of health information. Popular search engines have recently introduced search tools in their user interfaces for a range of topics. In this work, we explore how such tools can support non-expert, patient-centered health information search. Scoping the current work to medication-related search, we report on findings from a formative study focused on the design of patient-centered, medication-information search tools. Our study included qualitative interviews with patients, family members, and domain experts, as well as observations of their use of Remedy, a technology probe embodying a set of search tools. Post-operative cardiothoracic surgery patients and their visiting family members used the tools to find information about their hospital medications and were interviewed before and after their use. Domain experts conducted similar search tasks and provided qualitative feedback on their preferences and recommendations for designing these tools. Findings from our study suggest the importance of four valuation principles underlying our tools: credibility, readability, consumer perspective, and topical relevance. PMID:28163972

  20. Patient-Centered Tools for Medication Information Search.

    PubMed

    Wilcox, Lauren; Feiner, Steven; Elhadad, Noémie; Vawdrey, David; Tran, Tran H

    2014-05-20

    Recent research focused on online health information seeking highlights a heavy reliance on general-purpose search engines. However, current general-purpose search interfaces do not necessarily provide adequate support for non-experts in identifying suitable sources of health information. Popular search engines have recently introduced search tools in their user interfaces for a range of topics. In this work, we explore how such tools can support non-expert, patient-centered health information search. Scoping the current work to medication-related search, we report on findings from a formative study focused on the design of patient-centered, medication-information search tools. Our study included qualitative interviews with patients, family members, and domain experts, as well as observations of their use of Remedy, a technology probe embodying a set of search tools. Post-operative cardiothoracic surgery patients and their visiting family members used the tools to find information about their hospital medications and were interviewed before and after their use. Domain experts conducted similar search tasks and provided qualitative feedback on their preferences and recommendations for designing these tools. Findings from our study suggest the importance of four valuation principles underlying our tools: credibility, readability, consumer perspective, and topical relevance.

  1. Search Engines for Tomorrow's Scholars

    ERIC Educational Resources Information Center

    Fagan, Jody Condit

    2011-01-01

    Today's scholars face an outstanding array of choices when choosing search tools: Google Scholar, discipline-specific abstracts and index databases, library discovery tools, and more recently, Microsoft's re-launch of their academic search tool, now dubbed Microsoft Academic Search. What are these tools' strengths for the emerging needs of…

  2. Global Search Trends of Oral Problems using Google Trends from 2004 to 2016: An Exploratory Analysis

    PubMed Central

    Patthi, Basavaraj; Singla, Ashish; Gupta, Ritu; Prasad, Monika; Ali, Irfan; Dhama, Kuldeep; Niraj, Lav Kumar

    2017-01-01

    Introduction Oral diseases are a pandemic cause of morbidity with widespread geographic distribution. This technology-based era has brought about easier knowledge transfer than the traditional dependency on information obtained from family doctors. Hence, harvesting this system of trends can aid in oral disease quantification. Aim To conduct an exploratory analysis of the changes in internet search volumes of oral diseases by using Google Trends© (GT©). Materials and Methods GT© was utilized to provide real world facts based on search terms related to categories, interest by region and interest over time. The time period chosen was from January 2004 to December 2016. Five different search terms were explored and compared based on the highest relative search volumes along with comma separated value files to obtain an insight into the highest search traffic. Results The search volume measured over the time span noted the term “Dental caries” to be the most searched in Japan, “Gingivitis” in Jordan, “Oral Cancer” in Taiwan, “No Teeth” in Australia, “HIV symptoms” in Zimbabwe, “Broken Teeth” in the United Kingdom, “Cleft palate” in the Philippines, and “Toothache” in Indonesia, and the comparison of the top five searched terms showed “Gingivitis” with the highest search volume. Conclusion The results from the present study offer an insight into a competent tool that can analyse and compare oral diseases over time. The trend research platform can be used on emerging diseases and their drift in geographic populations with great acumen. This tool can be utilized in forecasting, modulating marketing strategies and planning disability limitation techniques. PMID:29207825

  3. In Search of Strategic Perspective: A Tool for Mapping the Market in Postsecondary Education.

    ERIC Educational Resources Information Center

    Change, 1997

    1997-01-01

    New research from the National Center for Postsecondary Improvement provides a tool that colleges and universities can use to describe the higher education market, find their places within it, and identify what they need to do in the future. The market analysis and segmentation tool is outlined, and a worksheet for institutional planning is…

  4. Pathway Analysis and the Search for Causal Mechanisms

    ERIC Educational Resources Information Center

    Weller, Nicholas; Barnes, Jeb

    2016-01-01

    The study of causal mechanisms interests scholars across the social sciences. Case studies can be a valuable tool in developing knowledge and hypotheses about how causal mechanisms function. The usefulness of case studies in the search for causal mechanisms depends on effective case selection, and there are few existing guidelines for selecting…

  5. From Data to Knowledge – Promising Analytical Tools and Techniques for Capture and Reuse of Corporate Knowledge and to Aid in the State Evaluation Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Danielson, Gary R.; Augustenborg, Elsa C.; Beck, Andrew E.

    2010-10-29

    The IAEA is challenged with limited availability of human resources for inspection and data analysis while proliferation threats increase. PNNL has a variety of IT solutions and techniques (at varying levels of maturity and development) that take raw data closer to useful knowledge, thereby assisting with and standardizing the analytical processes. This paper highlights some PNNL tools and techniques which are applicable to the international safeguards community, including: • Intelligent in-situ triage of data prior to reliable transmission to an analysis center, resulting in the transmission of smaller and more relevant data sets • Capture of expert knowledge in re-usable search strings tailored to specific mission outcomes • Image-based searching fused with text-based searching • Use of gaming to discover unexpected proliferation scenarios • Process modeling (e.g. the Physical Model) as the basis for an information integration portal, which links to data storage locations along with analyst annotations, categorizations, geographic data, search strings and visualization outputs.

  6. Modification site localization scoring integrated into a search engine.

    PubMed

    Baker, Peter R; Trinidad, Jonathan C; Chalkley, Robert J

    2011-07-01

    Large proteomic data sets identifying hundreds or thousands of modified peptides are becoming increasingly common in the literature. Several methods for assessing the reliability of peptide identifications both at the individual peptide or data set level have become established. However, tools for measuring the confidence of modification site assignments are sparse and are not often employed. A few tools for estimating phosphorylation site assignment reliabilities have been developed, but these are not integral to a search engine, so require a particular search engine output for a second step of processing. They may also require use of a particular fragmentation method and are mostly only applicable for phosphorylation analysis, rather than post-translational modifications analysis in general. In this study, we present the performance of site assignment scoring that is directly integrated into the search engine Protein Prospector, which allows site assignment reliability to be automatically reported for all modifications present in an identified peptide. It clearly indicates when a site assignment is ambiguous (and if so, between which residues), and reports an assignment score that can be translated into a reliability measure for individual site assignments.

  7. Functional Analysis of OMICs Data and Small Molecule Compounds in an Integrated "Knowledge-Based" Platform.

    PubMed

    Dubovenko, Alexey; Nikolsky, Yuri; Rakhmatulin, Eugene; Nikolskaya, Tatiana

    2017-01-01

    Analysis of NGS and other sequencing data, gene variants, gene expression, proteomics, and other high-throughput (OMICs) data is challenging because of its biological complexity and high level of technical and biological noise. One way to deal with both problems is to perform analysis with a high fidelity annotated knowledgebase of protein interactions, pathways, and functional ontologies. This knowledgebase has to be structured in a computer-readable format and must include software tools for managing experimental data, analysis, and reporting. Here, we present MetaCore™ and Key Pathway Advisor (KPA), an integrated platform for functional data analysis. On the content side, MetaCore and KPA encompass a comprehensive database of molecular interactions of different types, pathways, network models, and ten functional ontologies covering human, mouse, and rat genes. The analytical toolkit includes tools for gene/protein list enrichment analysis, statistical "interactome" tool for the identification of over- and under-connected proteins in the dataset, and a biological network analysis module made up of network generation algorithms and filters. The suite also features Advanced Search, an application for combinatorial search of the database content, as well as a Java-based tool called Pathway Map Creator for drawing and editing custom pathway maps. Applications of MetaCore and KPA include molecular mode of action of disease research, identification of potential biomarkers and drug targets, pathway hypothesis generation, analysis of biological effects for novel small molecule compounds and clinical applications (analysis of large cohorts of patients, and translational and personalized medicine).

  8. 2016 Summer Series - Ophir Frieder - Searching Harsh Environments

    NASA Image and Video Library

    2016-07-12

    Analysis of selective data that fits our investigative tool may lead to erroneous or limited conclusions. The universe consists of multiple states, and our recording of them adds complexity. By finding methods to increase the robustness of our digital data collection and applying likely relationship search methods that can handle all the data, we will increase the resolution of our conclusions. Frieder will present methods to increase our ability to capture and search digital data.

  9. Comet: an open-source MS/MS sequence database search tool.

    PubMed

    Eng, Jimmy K; Jahan, Tahmina A; Hoopmann, Michael R

    2013-01-01

    Proteomics research routinely involves identifying peptides and proteins via MS/MS sequence database search. Thus the database search engine is an integral tool in many proteomics research groups. Here, we introduce the Comet search engine to the existing landscape of commercial and open-source database search tools. Comet is open source, freely available, and based on one of the original sequence database search tools that has been widely used for many years. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. PlateRunner: A Search Engine to Identify EMR Boilerplates.

    PubMed

    Divita, Guy; Workman, T Elizabeth; Carter, Marjorie E; Redd, Andrew; Samore, Matthew H; Gundlapalli, Adi V

    2016-01-01

    Medical text contains boilerplated content, an artifact of pull-down forms from EMRs. Boilerplated content is the source of challenges for concept extraction on clinical text. This paper introduces PlateRunner, a search engine on boilerplates from the US Department of Veterans Affairs (VA) EMR. Boilerplates containing concepts should be identified and reviewed to recognize challenging formats, identify high yield document titles, and fine tune section zoning. This search engine has the capability to filter negated and asserted concepts, save and search query results. This tool can save queries, search results, and documents found for later analysis.

  11. Text mining for search term development in systematic reviewing: A discussion of some methods and challenges.

    PubMed

    Stansfield, Claire; O'Mara-Eves, Alison; Thomas, James

    2017-09-01

    Using text mining to aid the development of database search strings for topics described by diverse terminology has potential benefits for systematic reviews; however, methods and tools for accomplishing this are poorly covered in the research methods literature. We briefly review the literature on applications of text mining for search term development for systematic reviewing. We found that the tools can be used in 5 overarching ways: improving the precision of searches; identifying search terms to improve search sensitivity; aiding the translation of search strategies across databases; searching and screening within an integrated system; and developing objectively derived search strategies. Using a case study and selected examples, we then reflect on the utility of certain technologies (term frequency-inverse document frequency and Termine, term frequency, and clustering) in improving the precision and sensitivity of searches. Challenges in using these tools are discussed. The utility of these tools is influenced by the different capabilities of the tools, the way the tools are used, and the text that is analysed. Increased awareness of how the tools perform facilitates the further development of methods for their use in systematic reviews. Copyright © 2017 John Wiley & Sons, Ltd.
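
    A minimal sketch of using term frequency-inverse document frequency to surface candidate search terms from a set of known relevant abstracts is given below; the documents are invented, and scikit-learn's TfidfVectorizer is used only as a convenient stand-in for the tools discussed in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Abstracts already known to be relevant to the review topic (invented text).
relevant_docs = [
    "home telehealth monitoring reduced readmission in heart failure patients",
    "remote monitoring and telehealth support for chronic heart failure",
    "telemonitoring of blood pressure improved hypertension control",
]

vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
tfidf = vectorizer.fit_transform(relevant_docs)

# Average TF-IDF weight of each term across the relevant set;
# the highest-weighted terms are candidates for the search strategy.
weights = tfidf.toarray().mean(axis=0)
terms = vectorizer.get_feature_names_out()
top = sorted(zip(terms, weights), key=lambda t: t[1], reverse=True)[:8]
for term, w in top:
    print(f"{term:30s} {w:.3f}")
```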

  12. Know your market: use of online query tools to quantify trends in patient information-seeking behavior for varicose vein treatment.

    PubMed

    Harsha, Asheesh K; Schmitt, J Eric; Stavropoulos, S William

    2014-01-01

    To analyze Internet search data to characterize the temporal and geographic interest of Internet users in the United States in varicose vein treatment. From January 1, 2004, to September 1, 2012, the Google Trends tool was used to analyze query data for "varicose vein treatment" to identify individuals seeking treatment information for varicose veins. The term "varicose vein treatment" returned a search volume index (SVI), representing the search frequency relative to the total search volume during a specific time interval and region. Linear regression analysis and Kruskal-Wallis one-way analysis of variance were employed to characterize search results. Search traffic for varicose vein treatment increased by 520% over the 104-month study period. There was an annual mean increase of 28% (range, -18%-100%; standard deviation [SD], 35%), with a statistically significant linear increase in average yearly SVI over time (R(2) = 0.94, P < .0001). All years showed positive growth in mean SVI except for 2008 (18% decrease). There were statistically significant differences in SVI by month (Kruskal-Wallis, P < .0001) with significantly higher mean SVI compared with other months in May (190% increase; range, 26%-670%; SD, 15%) and June (209% increase; range, 35%-700%; SD, 20%). The southern United States showed significantly higher search traffic than all other regions (Tukey-Kramer, P < .00001). There have been significant increases in Internet search traffic related to varicose vein treatment in the past 8 years. Reflected in this trend is an annual peak in search traffic in the late spring months with an overall geographic bias toward southern states. Rigorous analysis of Internet search queries for medical procedures may prove useful to guide the efficient use of limited resources and marketing dollars. © 2013 The Society of Interventional Radiology Published by SIR All rights reserved.
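
    The statistical treatment described here (a linear trend in yearly mean SVI plus a Kruskal-Wallis test for month-to-month differences) can be reproduced in outline as below; the SVI values are fabricated placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import linregress, kruskal

rng = np.random.default_rng(1)

# Hypothetical monthly SVI values for 2004-2012 with a rising trend and a
# late-spring bump; these numbers are invented for illustration only.
years = np.arange(2004, 2013)
months = np.arange(1, 13)
svi = {y: 10 + 5 * (y - 2004)
          + np.where(np.isin(months, [5, 6]), 15, 0)
          + rng.normal(0, 3, 12)
       for y in years}

# Linear trend in mean yearly SVI.
yearly_means = np.array([svi[y].mean() for y in years])
trend = linregress(years, yearly_means)
print(f"R^2 = {trend.rvalue**2:.2f}, p = {trend.pvalue:.1e}")

# Kruskal-Wallis test for differences in SVI between months across years.
by_month = [np.array([svi[y][m - 1] for y in years]) for m in months]
print(kruskal(*by_month))
```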

  13. Automatic analysis of online image data for law enforcement agencies by concept detection and instance search

    NASA Astrophysics Data System (ADS)

    de Boer, Maaike H. T.; Bouma, Henri; Kruithof, Maarten C.; ter Haar, Frank B.; Fischer, Noëlle M.; Hagendoorn, Laurens K.; Joosten, Bart; Raaijmakers, Stephan

    2017-10-01

    The information available on-line and off-line, from open as well as from private sources, is growing at an exponential rate and places an increasing demand on the limited resources of Law Enforcement Agencies (LEAs). The absence of appropriate tools and techniques to collect, process, and analyze the volumes of complex and heterogeneous data has created a severe information overload. If a solution is not found, the impact on law enforcement will be dramatic, e.g. because important evidence is missed or the investigation time is too long. Furthermore, there is an uneven level of capabilities to deal with the large volumes of complex and heterogeneous data that come from multiple open and private sources at national level across the EU, which hinders cooperation and information sharing. Consequently, there is a pertinent need to develop tools, systems and processes which expedite online investigations. In this paper, we describe a suite of analysis tools to identify and localize generic concepts, instances of objects and logos in images, which constitutes a significant portion of everyday law enforcement data. We describe how incremental learning based on only a few examples and large-scale indexing are addressed in both concept detection and instance search. Our search technology allows querying of the database by visual examples and by keywords. Our tools are packaged in a Docker container to guarantee easy deployment on a system and our tools exploit possibilities provided by open source toolboxes, contributing to the technical autonomy of LEAs.

  14. Direct glycan structure determination of intact N-linked glycopeptides by low-energy collision-induced dissociation tandem mass spectrometry and predicted spectral library searching.

    PubMed

    Pai, Pei-Jing; Hu, Yingwei; Lam, Henry

    2016-08-31

    Intact glycopeptide MS analysis to reveal site-specific protein glycosylation is an important frontier of proteomics. However, computational tools for analyzing MS/MS spectra of intact glycopeptides are still limited and not well-integrated into existing workflows. In this work, a new computational tool which combines the spectral library building/searching tool, SpectraST (Lam et al., Nat. Methods 2008, 5, 873-875), and the glycopeptide fragmentation prediction tool, MassAnalyzer (Zhang et al., Anal. Chem. 2010, 82, 10194-10202), for intact glycopeptide analysis has been developed. Specifically, this tool enables the determination of the glycan structure directly from low-energy collision-induced dissociation (CID) spectra of intact glycopeptides. Given a list of possible glycopeptide sequences as input, a sample-specific spectral library of MassAnalyzer-predicted spectra is built using SpectraST. Glycan identification from CID spectra is achieved by spectral library searching against this library, in which both m/z and intensity information of the possible fragmentation ions are taken into consideration for improved accuracy. We validated our method using a standard glycoprotein, human transferrin, and evaluated its potential to be used in site-specific glycosylation profiling of glycoprotein datasets from LC-MS/MS. In addition, we further applied our method to reveal, for the first time, the site-specific N-glycosylation profile of recombinant human acetylcholinesterase expressed in HEK293 cells. For maximum usability, SpectraST is developed as part of the Trans-Proteomic Pipeline (TPP), a freely available and open-source software suite for MS data analysis. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. TokSearch: A search engine for fusion experimental data

    DOE PAGES

    Sammuli, Brian S.; Barr, Jayson L.; Eidietis, Nicholas W.; ...

    2018-04-01

    At a typical fusion research site, experimental data is stored using archive technologies that deal with each discharge as an independent set of data. These technologies (e.g. MDSplus or HDF5) are typically supplemented with a database that aggregates metadata for multiple shots to allow for efficient querying of certain predefined quantities. Often, however, a researcher will need to extract information from the archives, possibly for many shots, that is not available in the metadata store or otherwise indexed for quick retrieval. To address this need, a new search tool called TokSearch has been added to the General Atomics TokSys control design and analysis suite [1]. This tool provides the ability to rapidly perform arbitrary, parallelized queries of archived tokamak shot data (both raw and analyzed) over large numbers of shots. The TokSearch query API borrows concepts from SQL, and users can choose to implement queries in either Matlab or Python.
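
    The record describes arbitrary, parallelized map/reduce-style queries over per-shot archives. The sketch below illustrates that pattern generically with Python's multiprocessing; it is not the TokSearch API, and fetch_signal is a hypothetical stand-in for an archive read.

    ```python
    from multiprocessing import Pool

    def fetch_signal(shot):
        """Placeholder for an archive read (e.g. MDSplus or HDF5); returns a list of samples.
        Hypothetical stand-in -- a real implementation would open the shot's archive."""
        return [shot % 7, shot % 11, shot % 13]

    def query_one_shot(shot):
        """Map step: extract a derived quantity (here, the signal maximum) for one shot."""
        data = fetch_signal(shot)
        return shot, max(data)

    if __name__ == "__main__":
        shots = range(150000, 150100)            # hypothetical shot range
        with Pool(processes=4) as pool:
            results = pool.map(query_one_shot, shots)
        # Reduce step: keep only shots whose derived quantity exceeds a threshold.
        selected = [(shot, value) for shot, value in results if value > 10]
        print(len(selected), "shots matched")
    ```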

  16. TokSearch: A search engine for fusion experimental data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sammuli, Brian S.; Barr, Jayson L.; Eidietis, Nicholas W.

    At a typical fusion research site, experimental data is stored using archive technologies that deal with each discharge as an independent set of data. These technologies (e.g. MDSplus or HDF5) are typically supplemented with a database that aggregates metadata for multiple shots to allow for efficient querying of certain predefined quantities. Often, however, a researcher will need to extract information from the archives, possibly for many shots, that is not available in the metadata store or otherwise indexed for quick retrieval. To address this need, a new search tool called TokSearch has been added to the General Atomics TokSys control design and analysis suite [1]. This tool provides the ability to rapidly perform arbitrary, parallelized queries of archived tokamak shot data (both raw and analyzed) over large numbers of shots. The TokSearch query API borrows concepts from SQL, and users can choose to implement queries in either Matlab or Python.

  17. LoopX: A Graphical User Interface-Based Database for Comprehensive Analysis and Comparative Evaluation of Loops from Protein Structures.

    PubMed

    Kadumuri, Rajashekar Varma; Vadrevu, Ramakrishna

    2017-10-01

    Due to their crucial role in function, folding, and stability, protein loops are being targeted for grafting and design in order to create novel functionality, alter existing functionality, and improve stability and foldability. With a view to facilitating thorough analysis and effective search options for extracting and comparing loops for sequence and structural compatibility, we developed LoopX, a comprehensively compiled library of sequence and conformational features of ∼700,000 loops from protein structures. The database, equipped with a graphical user interface, provides diverse query tools and search algorithms, with various rendering options to visualize the sequence- and structural-level information along with hydrogen-bonding patterns and backbone φ, ψ dihedral angles of both the target and candidate loops. Two new features, (i) conservation of the polar/nonpolar environment and (ii) conservation of the sequence and conformation of specific residues within the loops, have also been incorporated into the search and retrieval of compatible loops for a chosen target loop. Thus, the LoopX server not only serves as a database and visualization tool for sequence and structural analysis of protein loops but also aids in extracting and comparing candidate loops for a given target loop based on user-defined search options.

  18. A Systematic Review of Assessment Tools Measuring Interprofessional Education Outcomes Relevant to Pharmacy Education.

    PubMed

    Shrader, Sarah; Farland, Michelle Z; Danielson, Jennifer; Sicat, Brigitte; Umland, Elena M

    2017-08-01

    Objective. To identify and describe the available quantitative tools that assess interprofessional education (IPE) relevant to pharmacy education. Methods. A systematic approach was used to identify quantitative IPE assessment tools relevant to pharmacy education. The search strategy included the National Center for Interprofessional Practice and Education Resource Exchange (Nexus) website, a systematic search of the literature, and a manual search of journals deemed likely to include relevant tools. Results. The search identified a total of 44 tools from the Nexus website, 158 abstracts from the systematic literature search, and 570 abstracts from the manual search. A total of 36 assessment tools met the criteria to be included in the summary, and their application to IPE relevant to pharmacy education was discussed. Conclusion. Each of the tools has advantages and disadvantages. No single comprehensive tool exists to fulfill assessment needs. However, numerous tools are available that can be mapped to IPE-related accreditation standards for pharmacy education.

  19. Coming Full Circle with Boyd’s OODA Loop Ideas: An Analysis of Innovation Diffusion and Evolution

    DTIC Science & Technology

    2004-03-01

    answer “what” and “how” exploratory questions and was focused on the discovery of aspects of complex ideas, a qualitative methodology was used. A... discovery rather than explanation (Maykut & Morehouse, 1994). This thesis research followed the Miles and Huberman (1994) interactive model of...conducted using the research tools FirstSearch and EBSCO and the on-line search engine Google (www.google.com). Within FirstSearch, academic and business

  20. Federated Search Tools in Fusion Centers: Bridging Databases in the Information Sharing Environment

    DTIC Science & Technology

    2012-09-01

    considerable variation in how fusion centers plan for, gather requirements, select and acquire federated search tools to bridge disparate databases...centers, when considering integrating federated search tools; by evaluating the importance of the planning, requirements gathering, selection and...acquisition processes for integrating federated search tools; by acknowledging the challenges faced by some fusion centers during these integration processes

  1. Ursgal, Universal Python Module Combining Common Bottom-Up Proteomics Tools for Large-Scale Analysis.

    PubMed

    Kremer, Lukas P M; Leufken, Johannes; Oyunchimeg, Purevdulam; Schulze, Stefan; Fufezan, Christian

    2016-03-04

    Proteomics data integration has become a broad field, with a variety of programs offering innovative algorithms to analyze increasing amounts of data. Unfortunately, this software diversity leads to many problems as soon as the data are analyzed using more than one algorithm for the same task. Although it has been shown that combining multiple peptide identification algorithms yields more robust results, unified approaches have only recently begun to emerge; workflows that, for example, aim to optimize search parameters or employ cascaded-style searches can only be made accessible if data analysis becomes not only unified but also, most importantly, scriptable. Here we introduce Ursgal, a Python interface to many commonly used bottom-up proteomics tools and to additional auxiliary programs. Complex workflows can thus be composed in the Python scripting language with a few lines of code. Ursgal is easily extensible, and we have made several database search engines (X!Tandem, OMSSA, MS-GF+, Myrimatch, MS Amanda), statistical postprocessing algorithms (qvality, Percolator), and one algorithm that combines statistically postprocessed outputs from multiple search engines ("combined FDR") accessible as an interface in Python. Furthermore, we have implemented a new algorithm ("combined PEP") that combines multiple search engines, employing elements of "combined FDR", PeptideShaker, and Bayes' theorem.
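
    To illustrate the kind of scriptable multi-engine workflow the abstract argues for, the sketch below fans one input file over several stand-in search engines and keeps identifications on which the engines agree. It is a generic illustration, not Ursgal's actual interface; the engine names, return formats, and agreement rule are assumptions.

    ```python
    # Each "engine" is a stand-in callable mapping an mzML path to a list of
    # (spectrum_id, peptide, score) tuples; names and behaviour are hypothetical.
    def engine_a(mzml): return [("s1", "PEPTIDEK", 0.9), ("s2", "SAMPLER", 0.4)]
    def engine_b(mzml): return [("s1", "PEPTIDEK", 0.8), ("s3", "ANOTHERK", 0.7)]

    ENGINES = {"engine_a": engine_a, "engine_b": engine_b}

    def multi_engine_search(mzml_path, min_agreement=2):
        """Run every engine on one file and keep spectra where the engines agree
        on the same peptide -- a crude stand-in for combined-FDR style merging."""
        votes = {}
        for name, engine in ENGINES.items():
            for spectrum_id, peptide, score in engine(mzml_path):
                votes.setdefault((spectrum_id, peptide), []).append((name, score))
        return {key: hits for key, hits in votes.items() if len(hits) >= min_agreement}

    print(multi_engine_search("sample.mzML"))
    ```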

  2. Open, Cross Platform Chemistry Application Unifying Structure Manipulation, External Tools, Databases and Visualization

    DTIC Science & Technology

    2012-11-27

    with powerful analysis tools and an informatics approach leveraging best-of-breed NoSQL databases, in order to store, search and retrieve relevant...dictionaries, and JavaScript also has good support. The MongoDB project[15] was chosen as a scalable NoSQL data store for the cheminformatics components

  3. Genetic Epidemiology of Glucose-6-Dehydrogenase Deficiency in the Arab World.

    PubMed

    Doss, C George Priya; Alasmar, Dima R; Bux, Reem I; Sneha, P; Bakhsh, Fadheela Dad; Al-Azwani, Iman; Bekay, Rajaa El; Zayed, Hatem

    2016-11-17

    A systematic search was implemented using four literature databases (PubMed, Embase, Science Direct and Web of Science) to capture all causative mutations of glucose-6-phosphate dehydrogenase (G6PD) deficiency (G6PDD) in the 22 Arab countries. Our search yielded 43 studies that captured 33 mutations (23 missense, one silent, two deletions, and seven intronic mutations) in 3,430 Arab patients with G6PDD. The 23 missense mutations were then subjected to phenotypic classification using in silico prediction tools, which were compared against the WHO pathogenicity scale as a reference. These in silico tools were tested for their prediction efficiency using rigorous statistical analyses. Of the 23 missense mutations, p.S188F, p.I48T, p.N126D, and p.V68M were identified as the most common mutations among Arab populations, but they were not unique to the Arab world. Interestingly, our search strategy found four other mutations (p.N135T, p.S179N, p.R246L, and p.Q307P) that are unique to Arabs. These mutations were subjected to structural analysis and molecular dynamics simulation analysis (MDSA), which predicted that these mutant forms potentially affect enzyme function. The combination of MDSA, structural analysis, in silico prediction, and statistical tools used here will provide a platform for future assessment of the pathogenicity of genetic mutations.
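
    The abstract describes benchmarking in silico predictors against the WHO pathogenicity scale with statistical measures. A minimal sketch of that kind of comparison, computing the sensitivity and specificity of one tool's calls against binarised reference labels, is shown below; the mutation labels and calls are hypothetical.

    ```python
    def sensitivity_specificity(predictions, reference):
        """Compare a tool's deleterious/benign calls against reference labels
        (here, a binarised WHO-style classification); both are dicts keyed by mutation."""
        tp = sum(1 for m in reference if reference[m] and predictions.get(m))
        tn = sum(1 for m in reference if not reference[m] and not predictions.get(m))
        fp = sum(1 for m in reference if not reference[m] and predictions.get(m))
        fn = sum(1 for m in reference if reference[m] and not predictions.get(m))
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        return sens, spec

    # Hypothetical labels: True = deleterious.
    reference   = {"p.S188F": True, "p.I48T": True, "p.N126D": False, "p.V68M": True}
    predictions = {"p.S188F": True, "p.I48T": False, "p.N126D": False, "p.V68M": True}
    print(sensitivity_specificity(predictions, reference))
    ```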

  4. In Search of Practitioner-Based Social Capital: A Social Network Analysis Tool for Understanding and Facilitating Teacher Collaboration in a US-Based STEM Professional Development Program

    ERIC Educational Resources Information Center

    Baker-Doyle, Kira J.; Yoon, Susan A.

    2011-01-01

    This paper presents the first in a series of studies on the informal advice networks of a community of teachers in an in-service professional development program. The aim of the research was to use Social Network Analysis as a methodological tool to reveal the social networks developed by the teachers, and to examine whether these networks…

  5. Dataflow Design Tool: User's Manual

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1996-01-01

    The Dataflow Design Tool is a software tool for selecting a multiprocessor scheduling solution for a class of computational problems. The problems of interest are those that can be described with a dataflow graph and are intended to be executed repetitively on a set of identical processors. Typical applications include signal processing and control law problems. The software tool implements graph-search algorithms and analysis techniques based on the dataflow paradigm. Dataflow analyses provided by the software are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool provides performance optimization through the inclusion of artificial precedence constraints among the schedulable tasks. The user interface and tool capabilities are described. Examples are provided to demonstrate the analysis, scheduling, and optimization functions facilitated by the tool.

  6. An end user evaluation of query formulation and results review tools in three medical meta-search engines.

    PubMed

    Leroy, Gondy; Xu, Jennifer; Chung, Wingyan; Eggers, Shauna; Chen, Hsinchun

    2007-01-01

    Retrieving sufficient relevant information online is difficult for many people because they use too few keywords to search and search engines do not provide many support tools. To further complicate the search, users often ignore support tools when available. Our goal is to evaluate in a realistic setting when users use support tools and how they perceive these tools. We compared three medical search engines with support tools that require more or less effort from users to form a query and evaluate results. We carried out an end user study with 23 users who were asked to find information, i.e., subtopics and supporting abstracts, for a given theme. We used a balanced within-subjects design and report on the effectiveness, efficiency and usability of the support tools from the end user perspective. We found significant differences in efficiency but did not find significant differences in effectiveness between the three search engines. Dynamic user support tools requiring less effort led to higher efficiency. Fewer searches were needed and more documents were found per search when both query reformulation and result review tools dynamically adjust to the user query. The query reformulation tool that provided a long list of keywords, dynamically adjusted to the user query, was used most often and led to more subtopics. As hypothesized, the dynamic result review tools were used more often and led to more subtopics than static ones. These results were corroborated by the usability questionnaires, which showed that support tools that dynamically optimize output were preferred.

  7. Development of a PubMed Based Search Tool for Identifying Sex and Gender Specific Health Literature.

    PubMed

    Song, Michael M; Simonsen, Cheryl K; Wilson, Joanna D; Jenkins, Marjorie R

    2016-02-01

    An effective literature search strategy is critical to achieving the aims of Sex and Gender Specific Health (SGSH): to understand sex and gender differences through research and to effectively incorporate the new knowledge into the clinical decision making process to benefit both male and female patients. The goal of this project was to develop and validate an SGSH literature search tool that is readily and freely available to clinical researchers and practitioners. PubMed, a freely available search engine for the Medline database, was selected as the platform to build the SGSH literature search tool. Combinations of Medical Subject Heading terms, text words, and title words were evaluated for optimal specificity and sensitivity. The search tool was then validated against reference bases compiled for two disease states, diabetes and stroke. Key sex and gender terms and limits were bundled to create a search tool to facilitate PubMed SGSH literature searches. During validation, the search tool retrieved 50 of 94 (53.2%) stroke and 62 of 95 (65.3%) diabetes reference articles selected for validation. A general keyword search of stroke or diabetes combined with sex difference retrieved 33 of 94 (35.1%) stroke and 22 of 95 (23.2%) diabetes reference base articles, with lower sensitivity and specificity for SGSH content. The Texas Tech University Health Sciences Center SGSH PubMed Search Tool provides higher sensitivity and specificity to sex and gender specific health literature. The tool will facilitate research, clinical decision-making, and guideline development relevant to SGSH.
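
    A bundled PubMed filter of this kind can be combined with a topic query programmatically, for example through NCBI's E-utilities. The sketch below uses Biopython's Entrez module and a hypothetical sex/gender term bundle (not the published TTUHSC filter), and shows how retrieval sensitivity against a curated reference set could be computed.

    ```python
    from Bio import Entrez   # Biopython; requires network access to NCBI E-utilities

    Entrez.email = "you@example.org"   # NCBI asks for a contact address

    # Hypothetical bundle of sex/gender terms -- not the published TTUHSC filter.
    SGSH_FILTER = '("sex factors"[MeSH] OR "sex characteristics"[MeSH] OR "sex difference"[tiab])'

    def sgsh_search(topic, retmax=200):
        """Combine a disease topic with the sex/gender filter and return PubMed IDs."""
        handle = Entrez.esearch(db="pubmed", term=f"({topic}) AND {SGSH_FILTER}", retmax=retmax)
        record = Entrez.read(handle)
        handle.close()
        return set(record["IdList"])

    def sensitivity(retrieved_ids, reference_ids):
        """Fraction of a hand-curated reference set that the query retrieves."""
        return len(retrieved_ids & reference_ids) / len(reference_ids)

    # Usage sketch: compare retrieval against a (hypothetical) curated stroke reference base.
    # stroke_ids = sgsh_search("stroke")
    # print(sensitivity(stroke_ids, reference_ids={"12345678", "23456789"}))
    ```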

  8. Development of a PubMed Based Search Tool for Identifying Sex and Gender Specific Health Literature

    PubMed Central

    Song, Michael M.; Simonsen, Cheryl K.; Wilson, Joanna D.

    2016-01-01

    Abstract Background: An effective literature search strategy is critical to achieving the aims of Sex and Gender Specific Health (SGSH): to understand sex and gender differences through research and to effectively incorporate the new knowledge into the clinical decision making process to benefit both male and female patients. The goal of this project was to develop and validate an SGSH literature search tool that is readily and freely available to clinical researchers and practitioners. Methods: PubMed, a freely available search engine for the Medline database, was selected as the platform to build the SGSH literature search tool. Combinations of Medical Subject Heading terms, text words, and title words were evaluated for optimal specificity and sensitivity. The search tool was then validated against reference bases compiled for two disease states, diabetes and stroke. Results: Key sex and gender terms and limits were bundled to create a search tool to facilitate PubMed SGSH literature searches. During validation, the search tool retrieved 50 of 94 (53.2%) stroke and 62 of 95 (65.3%) diabetes reference articles selected for validation. A general keyword search of stroke or diabetes combined with sex difference retrieved 33 of 94 (35.1%) stroke and 22 of 95 (23.2%) diabetes reference base articles, with lower sensitivity and specificity for SGSH content. Conclusions: The Texas Tech University Health Sciences Center SGSH PubMed Search Tool provides higher sensitivity and specificity to sex and gender specific health literature. The tool will facilitate research, clinical decision-making, and guideline development relevant to SGSH. PMID:26555409

  9. 'Practical' resources to support patient and family engagement in healthcare decisions: a scoping review.

    PubMed

    Kovacs Burns, Katharina; Bellows, Mandy; Eigenseher, Carol; Gallivan, Jennifer

    2014-04-15

    Extensive literature exists on public involvement or engagement, but what actual tools or guides exist that are practical, tested and easy to use specifically for initiating and implementing patient and family engagement, is uncertain. No comprehensive review and synthesis of general international published or grey literature on this specific topic was found. A systematic scoping review of published and grey literature is, therefore, appropriate for searching through the vast general engagement literature to identify 'patient/family engagement' tools and guides applicable in health organization decision-making, such as within Alberta Health Services in Alberta, Canada. This latter organization requested this search and review to inform the contents of a patient engagement resource kit for patients, providers and leaders. Search terms related to 'patient engagement', tools, guides, education and infrastructure or resources, were applied to published literature databases and grey literature search engines. Grey literature also included United States, Australia and Europe where most known public engagement practices exist, and Canada as the location for this study. Inclusion and exclusion criteria were set, and include: English documents referencing 'patient engagement' with specific criteria, and published between 1995 and 2011. For document analysis and synthesis, document analysis worksheets were used by three reviewers for the selected 224 published and 193 grey literature documents. Inter-rater reliability was ensured for the final reviews and syntheses of 76 published and 193 grey documents. Seven key themes emerged from the literature synthesis analysis, and were identified for patient, provider and/or leader groups. Articles/items within each theme were clustered under main topic areas of 'tools', 'education' and 'infrastructure'. The synthesis and findings in the literature include 15 different terms and definitions for 'patient engagement', 17 different engagement models, numerous barriers and benefits, and 34 toolkits for various patient engagement and evaluation initiatives. Patient engagement is very complex. This scoping review for patient/family engagement tools and guides is a good start for a resource inventory and can guide the content development of a patient engagement resource kit to be used by patients/families, healthcare providers and administrators.

  10. The PathoYeastract database: an information system for the analysis of gene and genomic transcription regulation in pathogenic yeasts.

    PubMed

    Monteiro, Pedro Tiago; Pais, Pedro; Costa, Catarina; Manna, Sauvagya; Sá-Correia, Isabel; Teixeira, Miguel Cacho

    2017-01-04

    We present the PATHOgenic YEAst Search for Transcriptional Regulators And Consensus Tracking (PathoYeastract - http://pathoyeastract.org) database, a tool for the analysis and prediction of transcription regulatory associations at the gene and genomic levels in the pathogenic yeasts Candida albicans and C. glabrata. Upon data retrieval from hundreds of publications, followed by curation, the database currently includes 28 000 unique documented regulatory associations between transcription factors (TF) and target genes and 107 DNA binding sites, considering 134 TFs in both species. Following the structure used for the YEASTRACT database, PathoYeastract makes available bioinformatics tools that enable the user to exploit the existing information to predict the TFs involved in the regulation of a gene or genome-wide transcriptional response, while ranking those TFs in order of their relative importance. Each search can be filtered based on the selection of specific environmental conditions, experimental evidence or positive/negative regulatory effect. Promoter analysis tools and interactive visualization tools for the representation of TF regulatory networks are also provided. The PathoYeastract database further provides simple tools for the prediction of gene and genomic regulation based on orthologous regulatory associations described for other yeast species, a comparative genomics setup for the study of cross-species evolution of regulatory networks. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
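
    The ranking of candidate TFs "in order of their relative importance" can be illustrated by scoring the overlap between each TF's documented targets and a user gene list, for example with a hypergeometric test. The sketch below is a generic enrichment ranking, not PathoYeastract's actual method; the associations and background size are invented.

    ```python
    from scipy.stats import hypergeom

    # Hypothetical documented associations: TF -> set of target genes.
    TF_TARGETS = {
        "Tf1": {"g1", "g2", "g3", "g4"},
        "Tf2": {"g3", "g5"},
        "Tf3": {"g6", "g7", "g8"},
    }
    GENOME_SIZE = 6000   # rough number of ORFs, used as the background

    def rank_tfs(gene_list):
        """Rank TFs by hypergeometric enrichment of their documented targets
        within the user's gene list (smaller p-value = more important)."""
        genes = set(gene_list)
        ranking = []
        for tf, targets in TF_TARGETS.items():
            overlap = len(targets & genes)
            p = hypergeom.sf(overlap - 1, GENOME_SIZE, len(targets), len(genes))
            ranking.append((p, tf, overlap))
        return sorted(ranking)

    for p, tf, k in rank_tfs(["g1", "g2", "g3", "g9"]):
        print(tf, k, f"p={p:.2e}")
    ```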

  11. CRCDA—Comprehensive resources for cancer NGS data analysis

    PubMed Central

    Thangam, Manonanthini; Gopal, Ramesh Kumar

    2015-01-01

    Next-generation sequencing (NGS) innovations have set a compelling landmark in the life sciences and changed the direction of research in clinical oncology through their capacity to support cancer diagnosis and treatment. The aim of our portal, Comprehensive Resources for Cancer NGS Data Analysis (CRCDA), is to provide a collection of different NGS tools and pipelines organised under diverse classes, together with cancer pathways and databases and, furthermore, literature information from PubMed. The literature data were constrained to the 18 most common cancer types, such as breast cancer, colon cancer, and other cancers prevalent in the worldwide population. For convenience, NGS cancer tools have been categorized into cancer genomics, cancer transcriptomics, cancer epigenomics, quality control and visualization. Pipelines for variant detection, quality control and data analysis were listed to provide an out-of-the-box solution for NGS data analysis, which may help researchers overcome challenges in selecting and configuring individual tools for analysing exome, whole-genome and transcriptome data. An extensive search page was developed that can be queried by using (i) type of data [literature, gene data and sequence read archive (SRA) data] and (ii) type of cancer (selected based on global incidence and accessibility of data). For each category of analysis, a variety of tools is available, and the biggest challenge lies in searching for and using the right tool for the right application. The objective of this work is to collect the tools available in each category from various sources and to arrange the tools and other data in a simple and user-friendly manner, so that biologists and oncologists can find information more easily. To the best of our knowledge, we have collected and presented a comprehensive package of most of the resources available for NGS data analysis in cancer. Given these factors, we believe that this website will be a useful resource for the NGS research community working on cancer. Database URL: http://bioinfo.au-kbc.org.in/ngs/ngshome.html. PMID:26450948

  12. Criteria for Comparing Children's Web Search Tools.

    ERIC Educational Resources Information Center

    Kuntz, Jerry

    1999-01-01

    Presents criteria for evaluating and comparing Web search tools designed for children. Highlights include database size; accountability; categorization; search access methods; help files; spell check; URL searching; links to alternative search services; advertising; privacy policy; and layout and design. (LRW)

  13. Human Disease Insight: An integrated knowledge-based platform for disease-gene-drug information.

    PubMed

    Tasleem, Munazzah; Ishrat, Romana; Islam, Asimul; Ahmad, Faizan; Hassan, Md Imtaiyaz

    2016-01-01

    The scope of the Human Disease Insight (HDI) database is not limited to researchers or physicians as it also provides basic information to non-professionals and creates disease awareness, thereby reducing the chances of patient suffering due to ignorance. HDI is a knowledge-based resource providing information on human diseases to both scientists and the general public. Here, our mission is to provide a comprehensive human disease database containing most of the available useful information, with extensive cross-referencing. HDI is a knowledge management system that acts as a central hub to access information about human diseases and associated drugs and genes. In addition, HDI contains well-classified bioinformatics tools with helpful descriptions. These integrated bioinformatics tools enable researchers to annotate disease-specific genes and perform protein analysis, search for biomarkers and identify potential vaccine candidates. Eventually, these tools will facilitate the analysis of disease-associated data. The HDI provides two types of search capabilities and includes provisions for downloading, uploading and searching disease/gene/drug-related information. The logistical design of the HDI allows for regular updating. The database is designed to work best with Mozilla Firefox and Google Chrome and is freely accessible at http://humandiseaseinsight.com. Copyright © 2015 King Saud Bin Abdulaziz University for Health Sciences. Published by Elsevier Ltd. All rights reserved.

  14. Reducing Information Overload in Large Seismic Data Sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HAMPTON,JEFFERY W.; YOUNG,CHRISTOPHER J.; MERCHANT,BION J.

    2000-08-02

    Event catalogs for seismic data can become very large. Furthermore, as researchers collect multiple catalogs and reconcile them into a single catalog that is stored in a relational database, the reconciled set becomes even larger. The sheer number of these events makes searching for relevant events to compare with events of interest problematic. Information overload in this form can lead to the data sets being under-utilized and/or used incorrectly or inconsistently. Thus, efforts have been initiated to research techniques and strategies for helping researchers to make better use of large data sets. In this paper, the authors present their efforts to do so in two ways: (1) the Event Search Engine, which is a waveform correlation tool, and (2) some content analysis tools, which are a combination of custom-built and commercial off-the-shelf tools for accessing, managing, and querying seismic data stored in a relational database. The current Event Search Engine is based on a hierarchical clustering tool known as the dendrogram tool, which is written as a MatSeis graphical user interface. The dendrogram tool allows the user to build dendrogram diagrams for a set of waveforms by controlling phase windowing, down-sampling, filtering, enveloping, and the clustering method (e.g. single linkage, complete linkage, flexible method). It also allows the clustering to be based on two or more stations simultaneously, which is important to bridge gaps in the sparsely recorded event sets anticipated in such a large reconciled event set. Current efforts are focusing on tools to help the researcher winnow the clusters defined using the dendrogram tool down to the minimum optimal identification set. This will become critical as the number of reference events in the reconciled event set continually grows. The dendrogram tool is part of the MatSeis analysis package, which is available on the Nuclear Explosion Monitoring Research and Engineering Program Web Site. As part of the research into how to winnow the reference events in these large reconciled event sets, additional database query approaches have been developed to provide windows into these datasets. These custom-built content analysis tools help identify dataset characteristics that can potentially aid in providing a basis for comparing similar reference events in these large reconciled event sets. Once these characteristics can be identified, algorithms can be developed to create and add to the reduced set of events used by the Event Search Engine. These content analysis tools have already been useful in providing information on station coverage of the referenced events and basic statistical information on events in the research datasets. The tools can also provide researchers with a quick way to find interesting and useful events within the research datasets. The tools could also be used as a means to review reference event datasets as part of a dataset delivery verification process. There has also been an effort to explore the usefulness of commercially available web-based software to help with this problem. The advantages of using off-the-shelf software applications, such as Oracle's WebDB, to manipulate, customize and manage research data are being investigated. These types of applications are being examined to provide access to large integrated data sets for regional seismic research in Asia. All of these software tools would provide the researcher with unprecedented power without having to learn the intricacies and complexities of relational database systems.
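
    The dendrogram-style grouping of events by waveform similarity described above can be sketched with standard hierarchical clustering: build a correlation-based distance matrix between waveforms, apply single linkage, and cut the tree. The example below uses synthetic waveforms and omits the phase windowing, filtering, and multi-station handling mentioned in the record.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def correlation_distance_matrix(waveforms):
        """Distance = 1 - correlation coefficient between event waveforms
        (a simplified stand-in for windowed, filtered cross-correlation)."""
        corr = np.corrcoef(np.vstack(waveforms))
        dist = 1.0 - corr
        np.fill_diagonal(dist, 0.0)
        return dist

    # Toy data: 6 synthetic "events", two underlying source types.
    rng = np.random.default_rng(1)
    base_a = np.sin(np.linspace(0, 20, 200))
    base_b = np.sign(np.sin(np.linspace(0, 20, 200)))
    events = [base_a + 0.1 * rng.normal(size=200) for _ in range(3)] + \
             [base_b + 0.1 * rng.normal(size=200) for _ in range(3)]

    dist = correlation_distance_matrix(events)
    tree = linkage(squareform(dist, checks=False), method="single")   # single linkage
    print(fcluster(tree, t=0.5, criterion="distance"))                # cluster labels
    ```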

  15. [On the seasonality of dermatoses: a retrospective analysis of search engine query data depending on the season].

    PubMed

    Köhler, M J; Springer, S; Kaatz, M

    2014-09-01

    The volume of search engine queries about disease-relevant items reflects public interest and correlates with disease prevalence, as shown by the example of influenza. Other influences include media attention and holidays. The present work investigates whether the seasonality of prevalence or symptom severity of dermatoses correlates with search engine query data. The relative weekly volume of dermatologically relevant search terms was assessed with the online tool Google Trends for the years 2009-2013. For each item, the degree of seasonality was calculated via frequency analysis and a geometric approach. Many dermatoses show a marked seasonality, reflected in search engine query volumes. Unexpected seasonal variations of these queries suggest a previously unknown variability of the respective disease prevalence. Furthermore, using the example of allergic rhinitis, a close correlation of search engine query data with the actual pollen count can be demonstrated. In many cases, search engine query data are appropriate for estimating seasonal variability in the prevalence of common dermatoses. This finding may be useful for real-time analysis and the formation of hypotheses concerning pathogenetic or symptom-aggravating mechanisms and may thus contribute to improved diagnostics and prevention of skin diseases.
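
    One way to quantify "degree of seasonality" from a weekly query-volume series is to measure how much spectral power sits at the annual frequency. The sketch below is one plausible frequency-analysis measure, not necessarily the metric used in the study; the toy series is synthetic.

    ```python
    import numpy as np

    def seasonality_score(weekly_volume):
        """Fraction of (non-DC) spectral power concentrated at the annual frequency.
        Assumes a weekly series covering whole years, e.g. 52 * n_years samples."""
        x = np.asarray(weekly_volume, dtype=float)
        x = x - x.mean()                       # remove the DC component
        power = np.abs(np.fft.rfft(x)) ** 2
        n_years = len(x) // 52
        annual_bin = n_years                   # one cycle per year in rfft indexing
        return power[annual_bin] / power[1:].sum()

    # Toy example: 5 years of weekly queries with a summer peak plus noise.
    weeks = np.arange(5 * 52)
    volume = 100 + 30 * np.sin(2 * np.pi * weeks / 52) \
             + np.random.default_rng(2).normal(0, 5, len(weeks))
    print(round(seasonality_score(volume), 2))   # close to 1 for a strongly seasonal term
    ```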

  16. Influenza Virus Database (IVDB): an integrated information resource and analysis platform for influenza virus research.

    PubMed

    Chang, Suhua; Zhang, Jiajie; Liao, Xiaoyun; Zhu, Xinxing; Wang, Dahai; Zhu, Jiang; Feng, Tao; Zhu, Baoli; Gao, George F; Wang, Jian; Yang, Huanming; Yu, Jun; Wang, Jing

    2007-01-01

    Frequent outbreaks of highly pathogenic avian influenza and the increasing data available for comparative analysis require a central database specialized in influenza viruses (IVs). We have established the Influenza Virus Database (IVDB) to integrate information and create an analysis platform for genetic, genomic, and phylogenetic studies of the virus. IVDB hosts complete genome sequences of influenza A virus generated by Beijing Institute of Genomics (BIG) and curates all other published IV sequences after expert annotation. Our Q-Filter system classifies and ranks all nucleotide sequences into seven categories according to sequence content and integrity. IVDB provides a series of tools and viewers for comparative analysis of the viral genomes, genes, genetic polymorphisms and phylogenetic relationships. A search system has been developed for users to retrieve a combination of different data types by setting search options. To facilitate analysis of global viral transmission and evolution, the IV Sequence Distribution Tool (IVDT) has been developed to display the worldwide geographic distribution of chosen viral genotypes and to couple genomic data with epidemiological data. The BLAST, multiple sequence alignment and phylogenetic analysis tools were integrated for online data analysis. Furthermore, IVDB offers instant access to pre-computed alignments and polymorphisms of IV genes and proteins, and presents the results as SNP distribution plots and minor allele distributions. IVDB is publicly available at http://influenza.genomics.org.cn.

  17. Optimization process planning using hybrid genetic algorithm and intelligent search for job shop machining.

    PubMed

    Salehi, Mojtaba; Bahreininejad, Ardeshir

    2011-08-01

    Optimization of process planning is considered as the key technology for computer-aided process planning which is a rather complex and difficult procedure. A good process plan of a part is built up based on two elements: (1) the optimized sequence of the operations of the part; and (2) the optimized selection of the machine, cutting tool and Tool Access Direction (TAD) for each operation. In the present work, the process planning is divided into preliminary planning, and secondary/detailed planning. In the preliminary stage, based on the analysis of order and clustering constraints as a compulsive constraint aggregation in operation sequencing and using an intelligent searching strategy, the feasible sequences are generated. Then, in the detailed planning stage, using the genetic algorithm which prunes the initial feasible sequences, the optimized operation sequence and the optimized selection of the machine, cutting tool and TAD for each operation based on optimization constraints as an additive constraint aggregation are obtained. The main contribution of this work is the optimization of sequence of the operations of the part, and optimization of machine selection, cutting tool and TAD for each operation using the intelligent search and genetic algorithm simultaneously.
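
    A compact way to picture the two-level scheme, hard precedence constraints filtering feasible sequences and an evolutionary search optimizing among them, is a small permutation-based search such as the one below. It is a deliberately simplified sketch (swap mutation only, machine change-overs as the sole cost), not the authors' hybrid algorithm; the job data are hypothetical.

    ```python
    import random

    # Hypothetical job: each operation has a feasible machine; precedence pairs (a before b).
    MACHINE = {"op1": "M1", "op2": "M2", "op3": "M1", "op4": "M3", "op5": "M1"}
    PRECEDENCE = [("op1", "op3"), ("op2", "op4")]

    def feasible(seq):
        """Check the hard (compulsive) precedence constraints."""
        pos = {op: i for i, op in enumerate(seq)}
        return all(pos[a] < pos[b] for a, b in PRECEDENCE)

    def cost(seq):
        """Soft objective: minimise machine change-overs along the sequence."""
        return sum(1 for i in range(1, len(seq)) if MACHINE[seq[i]] != MACHINE[seq[i - 1]])

    def mutate(seq):
        """Swap two operations; reject the move if it breaks precedence."""
        child = seq[:]
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
        return child if feasible(child) else seq

    def evolve(generations=200, pop_size=20, seed=3):
        random.seed(seed)
        ops = list(MACHINE)
        population = []
        while len(population) < pop_size:        # feasible random starting sequences
            random.shuffle(ops)
            if feasible(ops):
                population.append(ops[:])
        for _ in range(generations):
            population.sort(key=cost)
            survivors = population[: pop_size // 2]
            population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
        return min(population, key=cost)

    best = evolve()
    print(best, cost(best))
    ```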

  18. Optimization process planning using hybrid genetic algorithm and intelligent search for job shop machining

    PubMed Central

    Salehi, Mojtaba

    2010-01-01

    Optimization of process planning is considered as the key technology for computer-aided process planning which is a rather complex and difficult procedure. A good process plan of a part is built up based on two elements: (1) the optimized sequence of the operations of the part; and (2) the optimized selection of the machine, cutting tool and Tool Access Direction (TAD) for each operation. In the present work, the process planning is divided into preliminary planning, and secondary/detailed planning. In the preliminary stage, based on the analysis of order and clustering constraints as a compulsive constraint aggregation in operation sequencing and using an intelligent searching strategy, the feasible sequences are generated. Then, in the detailed planning stage, using the genetic algorithm which prunes the initial feasible sequences, the optimized operation sequence and the optimized selection of the machine, cutting tool and TAD for each operation based on optimization constraints as an additive constraint aggregation are obtained. The main contribution of this work is the optimization of sequence of the operations of the part, and optimization of machine selection, cutting tool and TAD for each operation using the intelligent search and genetic algorithm simultaneously. PMID:21845020

  19. orthoFind Facilitates the Discovery of Homologous and Orthologous Proteins.

    PubMed

    Mier, Pablo; Andrade-Navarro, Miguel A; Pérez-Pulido, Antonio J

    2015-01-01

    Finding homologous and orthologous protein sequences is often the first step in evolutionary studies, annotation projects, and experiments of functional complementation. Despite all currently available computational tools, there is a requirement for easy-to-use tools that provide functional information. Here, a new web application called orthoFind is presented, which allows a quick search for homologous and orthologous proteins given one or more query sequences, allowing a recurrent and exhaustive search against reference proteomes, and being able to include user databases. It addresses the protein multidomain problem, searching for homologs with the same domain architecture, and gives a simple functional analysis of the results to help in the annotation process. orthoFind is easy to use and has been proven to provide accurate results with different datasets. Availability: http://www.bioinfocabd.upo.es/orthofind/.

  20. The reliability, validity and feasibility of tools used to screen for caregiver burden: a systematic review.

    PubMed

    Whalen, Kimberly J; Buchholz, Susan W

    The overall objective of this review is to quantitatively measure the psychometric properties and the feasibility of caregiver burden screening tools. The more specific objectives were to determine the reliability, validity, and feasibility of tools that are used to screen for caregiver burden and strain. This review considered international quantitative research papers that addressed the psychometric properties and feasibility of caregiver burden screening tools. The search strategy aimed to find both published and unpublished studies from 1980-2007 published only in the English language. An initial limited search of MEDLINE and CINAHL was undertaken, followed by analysis of the text words contained in the title and abstract and the index terms used to describe the article. A second search identified keywords and index terms across major databases. Third, the reference list of identified reports and articles was searched for additional studies. Each paper was assessed by two independent reviewers for methodological quality prior to inclusion in the review using an appropriate critical appraisal instrument from the Joanna Briggs Institute's System for the Unified Management, Assessment and Review (SUMARI) package. Because burden is a multidimensional construct defined internationally with a multitude of other terms, only those studies whose title, abstract or keywords contained the search terminology developed for this review were identified for retrieval. The construct of caregiver burden is not standardized, and many terms are used to describe burden. A caregiver is also identified as a carer. Instruments exist in multiple languages and have been tested in multiple populations. A total of 112 papers, experimental and non-experimental in nature, were included in the review. The majority of papers were non-experimental studies that tested or used a caregiver burden screening tool. Because of the nature of these papers, a meta-analysis of the results was not possible. Instead, a table is used to depict the 74 caregiver burden screening tools that meet the psychometric and feasibility standards of this review. The Zarit Burden Interview (ZBI), in particular the 22-item version, has been examined the most throughout the literature. In addition to its sound psychometric properties, the ZBI has been widely used across languages and cultures. The significant amount of research that has already been done on psychometric testing of caregiver burden tools has provided a solid foundation for additional research. Although some tools have been well tested, many tools have only limited published psychometric properties and feasibility data. The clinician needs to be aware of this and may need to team up with a researcher to obtain additional research data on their specific population before using a minimally tested caregiver burden screening tool. Because caregiver burden is multidimensional and many different terms are used to describe burden, both the clinician and the researcher need to be precise in their selection of the appropriate tool for their work.

  1. Accessing Biomedical Literature in the Current Information Landscape

    PubMed Central

    Khare, Ritu; Leaman, Robert; Lu, Zhiyong

    2015-01-01

    Summary: Biomedical and life sciences literature is unique because of its exponentially increasing volume and interdisciplinary nature. Biomedical literature access is essential for several types of users including biomedical researchers, clinicians, database curators, and bibliometricians. In the past few decades, several online search tools and literature archives, generic as well as biomedicine-specific, have been developed. We present this chapter in the light of three consecutive steps of literature access: searching for citations, retrieving full-text, and viewing the article. The first section presents the current state of practice of biomedical literature access, including an analysis of the search tools most frequently used by the users, including PubMed, Google Scholar, Web of Science, Scopus, and Embase, and a study on biomedical literature archives such as PubMed Central. The next section describes current research and the state-of-the-art systems motivated by the challenges a user faces during query formulation and interpretation of search results. The research solutions are classified into five key areas related to text and data mining, text similarity search, semantic search, query support, relevance ranking, and clustering results. Finally, the last section describes some predicted future trends for improving biomedical literature access, such as searching and reading articles on portable devices, and adoption of the open access policy. PMID:24788259

  2. Bioenergy Knowledge Discovery Framework Fact Sheet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    The Bioenergy Knowledge Discovery Framework (KDF) supports the development of a sustainable bioenergy industry by providing access to a variety of data sets, publications, and collaboration and mapping tools that support bioenergy research, analysis, and decision making. In the KDF, users can search for information, contribute data, and use the tools and map interface to synthesize, analyze, and visualize information in a spatially integrated manner.

  3. rVISTA 2.0: Evolutionary Analysis of Transcription Factor Binding Sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loots, G G; Ovcharenko, I

    2004-01-28

    Identifying and characterizing the patterns of DNA cis-regulatory modules represents a challenge that has the potential to reveal the regulatory language the genome uses to dictate transcriptional dynamics. Several studies have demonstrated that regulatory modules are under positive selection and therefore are often conserved between related species. Using this evolutionary principle we have created a comparative tool, rVISTA, for analyzing the regulatory potential of noncoding sequences. The rVISTA tool combines transcription factor binding site (TFBS) predictions, sequence comparisons and cluster analysis to identify noncoding DNA regions that are highly conserved and present in a specific configuration within an alignment. Here we present the newly developed version 2.0 of the rVISTA tool that can process alignments generated by both zPicture and PipMaker alignment programs or use pre-computed pairwise alignments of seven vertebrate genomes available from the ECR Browser. The rVISTA web server is closely interconnected with the TRANSFAC database, allowing users to either search for matrices present in the TRANSFAC library collection or search for user-defined consensus sequences. rVISTA tool is publicly available at http://rvista.dcode.org/.
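
    The essential idea, keeping only predicted binding sites that fall at conserved, aligned positions, can be sketched with a consensus scan over a pairwise alignment. The example below is a toy illustration of conserved-site filtering, not rVISTA's pipeline; the sequences and consensus are made up and only a few IUPAC codes are handled.

    ```python
    import re

    # IUPAC-lite consensus -> regex; only a few codes for brevity.
    IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
             "N": "[ACGT]", "R": "[AG]", "Y": "[CT]", "W": "[AT]"}

    def consensus_to_regex(consensus):
        return "".join(IUPAC[c] for c in consensus.upper())

    def conserved_sites(aligned_a, aligned_b, consensus):
        """Report alignment columns where the consensus matches both aligned sequences,
        i.e. a predicted binding site that is conserved between the two species."""
        pattern = re.compile(consensus_to_regex(consensus))
        hits = []
        width = len(consensus)
        for i in range(len(aligned_a) - width + 1):
            wa = aligned_a[i:i + width].upper()
            wb = aligned_b[i:i + width].upper()
            if "-" in wa or "-" in wb:
                continue
            if pattern.fullmatch(wa) and pattern.fullmatch(wb):
                hits.append(i)
        return hits

    human = "ACGT-TGACGTCATTT"
    mouse = "ACGTATGACGTCATTT"
    print(conserved_sites(human, mouse, "TGACGTCA"))   # hypothetical CRE-like consensus
    ```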

  4. MassSieve: Panning MS/MS peptide data for proteins

    PubMed Central

    Slotta, Douglas J.; McFarland, Melinda A.; Markey, Sanford P.

    2010-01-01

    We present MassSieve, a Java-based platform for visualization and parsimony analysis of single and comparative LC-MS/MS database search engine results. The success of mass spectrometric peptide sequence assignment algorithms has led to the need for a tool to merge and evaluate the increasing data set sizes that result from LC-MS/MS-based shotgun proteomic experiments. MassSieve supports reports from multiple search engines with differing search characteristics, which can increase peptide sequence coverage and/or identify conflicting or ambiguous spectral assignments. PMID:20564260
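
    Parsimony analysis of peptide-to-protein mappings is commonly approached as a minimal set cover. The sketch below shows the standard greedy heuristic on a hypothetical peptide/protein mapping; it illustrates the general technique rather than MassSieve's specific implementation.

    ```python
    def parsimonious_proteins(peptide_to_proteins):
        """Greedy minimal set cover: repeatedly pick the protein explaining the most
        still-unexplained peptides -- the usual heuristic behind parsimony reports."""
        protein_to_peptides = {}
        for peptide, proteins in peptide_to_proteins.items():
            for protein in proteins:
                protein_to_peptides.setdefault(protein, set()).add(peptide)

        uncovered = set(peptide_to_proteins)
        selected = []
        while uncovered:
            best = max(protein_to_peptides,
                       key=lambda p: len(protein_to_peptides[p] & uncovered))
            gained = protein_to_peptides[best] & uncovered
            if not gained:
                break
            selected.append(best)
            uncovered -= gained
        return selected

    # Hypothetical search-engine output: peptide -> candidate proteins.
    hits = {"pepA": {"P1", "P2"}, "pepB": {"P1"}, "pepC": {"P2", "P3"}, "pepD": {"P3"}}
    print(parsimonious_proteins(hits))   # a small protein set covering every peptide
    ```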

  5. Understanding PubMed user search behavior through log analysis.

    PubMed

    Islamaj Dogan, Rezarta; Murray, G Craig; Névéol, Aurélie; Lu, Zhiyong

    2009-01-01

    This article reports on a detailed investigation of PubMed users' needs and behavior as a step toward improving biomedical information retrieval. PubMed provides researchers with free access to more than 19 million citations for biomedical articles from MEDLINE and life science journals, and it is accessed by millions of users each day. Efficient search tools are crucial for biomedical researchers to keep abreast of the biomedical literature relating to their own research. This study provides insight into PubMed users' needs and their behavior. The investigation was conducted through the analysis of one month of log data, consisting of more than 23 million user sessions and more than 58 million user queries. Multiple aspects of users' interactions with PubMed are characterized in detail with evidence from these logs. Despite having many features in common with general Web searches, biomedical information searches have unique characteristics that are made evident in this study. PubMed users are more persistent in seeking information and they reformulate queries often. The three most frequent types of search are search by author name, search by gene/protein, and search by disease. Use of abbreviations in queries is very frequent. Factors such as result set size influence users' decisions. Analysis of characteristics such as these plays a critical role in identifying users' information needs and their search habits. In turn, such an analysis also provides useful insight for improving biomedical information retrieval. Database URL: http://www.ncbi.nlm.nih.gov/PubMed.
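
    Log studies like this one start from classifying and counting raw query strings per session. The sketch below shows a toy version of that step, rough regex-based query typing plus a reformulation count, on invented log rows; the real study's categories and heuristics were certainly richer.

    ```python
    import re
    from collections import Counter

    # Hypothetical log rows: (session_id, query_string)
    LOG = [
        ("s1", "brca1 breast cancer"),
        ("s1", "BRCA1 mutation risk"),
        ("s2", "smith j[au] 2008"),
        ("s3", "copd exacerbation treatment"),
    ]

    def classify(query):
        """Very rough query typing: author field tag, all-caps gene-like token, otherwise topic."""
        if re.search(r"\[au\]", query, flags=re.I):
            return "author"
        if re.search(r"\b[A-Z0-9]{3,6}\b", query):
            return "gene/protein-like"
        return "topic"

    type_counts = Counter(classify(q) for _, q in LOG)
    queries_per_session = Counter(sid for sid, _ in LOG)
    print(type_counts)
    print(sum(1 for n in queries_per_session.values() if n > 1), "sessions reformulated a query")
    ```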

  6. Initial development of a computer-aided diagnosis tool for solitary pulmonary nodules

    NASA Astrophysics Data System (ADS)

    Catarious, David M., Jr.; Baydush, Alan H.; Floyd, Carey E., Jr.

    2001-07-01

    This paper describes the development of a computer-aided diagnosis (CAD) tool for solitary pulmonary nodules. This CAD tool is built upon physically meaningful features that were selected because of their relevance to shape and texture. These features included a modified version of the Hotelling statistic (HS), a channelized HS, three measures of fractal properties, two measures of spicularity, and three manually measured shape features. These features were measured from a difficult database consisting of 237 regions of interest (ROIs) extracted from digitized chest radiographs. The center of each 256x256 pixel ROI contained a suspicious lesion which was sent to follow-up by a radiologist and whose nature was later clinically determined. Linear discriminant analysis (LDA) was used to search the feature space via sequential forward search using percentage correct as the performance metric. An optimized feature subset, selected for the highest accuracy, was then fed into a three layer artificial neural network (ANN). The ANN's performance was assessed by receiver operating characteristic (ROC) analysis. A leave-one-out testing/training methodology was employed for the ROC analysis. The performance of this system is competitive with that of three radiologists on the same database.
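
    The feature-selection-plus-classifier pipeline described above (sequential forward search driven by a linear discriminant, followed by a neural network evaluated with leave-one-out ROC analysis) can be approximated with off-the-shelf scikit-learn components, as sketched below on synthetic data. This is an analogous modern pipeline, not the authors' original code.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.metrics import roc_auc_score

    # Synthetic stand-in for the ROI feature matrix: 60 cases, 10 candidate features.
    rng = np.random.default_rng(4)
    X = rng.normal(size=(60, 10))
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=60) > 0).astype(int)

    # Step 1: sequential forward search with an LDA scorer picks a small feature subset.
    selector = SequentialFeatureSelector(LinearDiscriminantAnalysis(),
                                         n_features_to_select=4, direction="forward", cv=5)
    X_sel = selector.fit_transform(X, y)

    # Step 2: feed the selected features to a small neural network and
    # estimate ROC performance from leave-one-out predictions.
    ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    proba = cross_val_predict(ann, X_sel, y, cv=LeaveOneOut(), method="predict_proba")
    print("LOO ROC AUC:", round(roc_auc_score(y, proba[:, 1]), 2))
    ```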

  7. Evidence-based Medicine Search: a customizable federated search engine.

    PubMed

    Bracke, Paul J; Howse, David K; Keim, Samuel M

    2008-04-01

    This paper reports on the development of a tool by the Arizona Health Sciences Library (AHSL) for searching clinical evidence that can be customized for different user groups. The AHSL provides services to the University of Arizona's (UA's) health sciences programs and to the University Medical Center. Librarians at AHSL collaborated with UA College of Medicine faculty to create an innovative search engine, Evidence-based Medicine (EBM) Search, that provides users with a simple search interface to EBM resources and presents results organized according to an evidence pyramid. EBM Search was developed with a web-based configuration component that allows the tool to be customized for different specialties. Informal and anecdotal feedback from physicians indicates that EBM Search is a useful tool with potential in teaching evidence-based decision making. While formal evaluation is still being planned, a tool such as EBM Search, which can be configured for specific user populations, may help lower barriers to information resources in an academic health sciences center.

  8. Evidence-based Medicine Search: a customizable federated search engine

    PubMed Central

    Bracke, Paul J.; Howse, David K.; Keim, Samuel M.

    2008-01-01

    Purpose: This paper reports on the development of a tool by the Arizona Health Sciences Library (AHSL) for searching clinical evidence that can be customized for different user groups. Brief Description: The AHSL provides services to the University of Arizona's (UA's) health sciences programs and to the University Medical Center. Librarians at AHSL collaborated with UA College of Medicine faculty to create an innovative search engine, Evidence-based Medicine (EBM) Search, that provides users with a simple search interface to EBM resources and presents results organized according to an evidence pyramid. EBM Search was developed with a web-based configuration component that allows the tool to be customized for different specialties. Outcomes/Conclusion: Informal and anecdotal feedback from physicians indicates that EBM Search is a useful tool with potential in teaching evidence-based decision making. While formal evaluation is still being planned, a tool such as EBM Search, which can be configured for specific user populations, may help lower barriers to information resources in an academic health sciences center. PMID:18379665

  9. A Fast, Minimalist Search Tool for Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Lynnes, C. S.; Macharrie, P. G.; Elkins, M.; Joshi, T.; Fenichel, L. H.

    2005-12-01

    We present a tool that emphasizes speed and simplicity in searching remotely sensed Earth Science data. The tool, nicknamed "Mirador" (Spanish for a scenic overlook), provides only four freetext search form fields, for Keywords, Location, Data Start and Data Stop. This contrasts with many current Earth Science search tools that offer highly structured interfaces in order to ensure precise, non-zero results. The disadvantages of the structured approach lie in its complexity and resultant learning curve, as well as the time it takes to formulate and execute the search, thus discouraging iterative discovery. On the other hand, the success of the basic Google search interface shows that many users are willing to forgo high search precision if the search process is fast enough to enable rapid iteration. Therefore, we employ several methods to increase the speed of search formulation and execution. Search formulation is expedited by the minimalist search form, with only one required field. Also, a gazetteer enables the use of geographic terms as shorthand for latitude/longitude coordinates. The search execution is accelerated by initially presenting dataset results (returned from a Google Mini appliance) with an estimated number of "hits" for each dataset based on the user's space-time constraints. The more costly file-level search is executed against a PostGres database only when the user "drills down", and then covering only the fraction of the time period needed to return the next page of results. The simplicity of the search form makes the tool easy to learn and use, and the speed of the searches enables an iterative form of data discovery.
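
    Two of the speed-ups described, the gazetteer that turns place names into coordinates and the cheap per-dataset hit estimate, can be illustrated in a few lines of code. The sketch below uses a tiny invented gazetteer and granule list; Mirador's actual data model is not shown here.

    ```python
    from datetime import date

    # Tiny hypothetical gazetteer mapping place names to lat/lon bounding boxes.
    GAZETTEER = {
        "amazon basin": (-20.0, -80.0, 5.0, -45.0),     # (south, west, north, east)
        "sahara":       (15.0, -15.0, 30.0, 35.0),
    }

    def parse_location(text):
        """Replace a free-text place name with its bounding box, if known."""
        return GAZETTEER.get(text.strip().lower())

    def estimate_hits(granules, bbox, start, stop):
        """Count granules whose bounding box and time range intersect the query."""
        s, w, n, e = bbox
        count = 0
        for g in granules:
            gs, gw, gn, ge = g["bbox"]
            if (gn >= s and gs <= n and ge >= w and gw <= e
                    and g["start"] <= stop and g["stop"] >= start):
                count += 1
        return count

    granules = [{"bbox": (-10, -70, 0, -60),
                 "start": date(2005, 6, 1), "stop": date(2005, 6, 30)}]
    print(estimate_hits(granules, parse_location("Amazon Basin"),
                        date(2005, 1, 1), date(2005, 12, 31)))   # -> 1
    ```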

  10. Analytical and CASE study on Limited Search, ID3, CHAID, C4.5, Improved C4.5 and OVA Decision Tree Algorithms to design Decision Support System

    NASA Astrophysics Data System (ADS)

    Kaur, Parneet; Singh, Sukhwinder; Garg, Sushil; Harmanpreet

    2010-11-01

    In this paper we study classification algorithms for a farm DSS. Results are obtained by applying the classification algorithms Limited Search, ID3, CHAID, C4.5, Improved C4.5 and One-vs-All decision trees to a common crop data set with a specified class. The tool used to derive the results is SPINA. The graphical results obtained from the tool are compared to suggest the best technique for developing a farm Decision Support System. This analysis should help researchers design an effective and fast DSS that farmers can use to make yield-enhancing decisions.
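
    scikit-learn does not implement ID3, CHAID, or C4.5 directly, but the comparison workflow the paper describes, training several tree learners on one labelled data set and comparing their scores, can be sketched with the decision-tree variants that are available, as below; the iris data set stands in for the crop data.

    ```python
    from sklearn.datasets import load_iris           # stand-in for the crop data set
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)

    # Entropy vs. Gini splitting is used here only to illustrate how candidate
    # tree learners can be scored side by side; these are not ID3/CHAID/C4.5 themselves.
    candidates = {
        "entropy tree (ID3/C4.5-like)": DecisionTreeClassifier(criterion="entropy", random_state=0),
        "gini tree (CART)":             DecisionTreeClassifier(criterion="gini", random_state=0),
    }

    for name, model in candidates.items():
        scores = cross_val_score(model, X, y, cv=5)
        print(f"{name:30s} mean accuracy = {scores.mean():.3f}")
    ```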

  11. Biosequence Similarity Search on the Mercury System

    PubMed Central

    Krishnamurthy, Praveen; Buhler, Jeremy; Chamberlain, Roger; Franklin, Mark; Gyang, Kwame; Jacob, Arpith; Lancaster, Joseph

    2007-01-01

    Biosequence similarity search is an important application in modern molecular biology. Search algorithms aim to identify sets of sequences whose extensional similarity suggests a common evolutionary origin or function. The most widely used similarity search tool for biosequences is BLAST, a program designed to compare query sequences to a database. Here, we present the design of BLASTN, the version of BLAST that searches DNA sequences, on the Mercury system, an architecture that supports high-volume, high-throughput data movement off a data store and into reconfigurable hardware. An important component of application deployment on the Mercury system is the functional decomposition of the application onto both the reconfigurable hardware and the traditional processor. Both the Mercury BLASTN application design and its performance analysis are described. PMID:18846267

  12. Using the Saccharomyces Genome Database (SGD) for analysis of genomic information

    PubMed Central

    Skrzypek, Marek S.; Hirschman, Jodi

    2011-01-01

    Analysis of genomic data requires access to software tools that place the sequence-derived information in the context of biology. The Saccharomyces Genome Database (SGD) integrates functional information about budding yeast genes and their products with a set of analysis tools that facilitate exploring their biological details. This unit describes how the various types of functional data available at SGD can be searched, retrieved, and analyzed. Starting with the guided tour of the SGD Home page and Locus Summary page, this unit highlights how to retrieve data using YeastMine, how to visualize genomic information with GBrowse, how to explore gene expression patterns with SPELL, and how to use Gene Ontology tools to characterize large-scale datasets. PMID:21901739

  13. Challenging Google, Microsoft Unveils a Search Tool for Scholarly Articles

    ERIC Educational Resources Information Center

    Carlson, Scott

    2006-01-01

    Microsoft has introduced a new search tool to help people find scholarly articles online. The service, which includes journal articles from prominent academic societies and publishers, puts Microsoft in direct competition with Google Scholar. The new free search tool, which should work on most Web browsers, is called Windows Live Academic Search…

  14. ‘Practical’ resources to support patient and family engagement in healthcare decisions: a scoping review

    PubMed Central

    2014-01-01

    Background Extensive literature exists on public involvement or engagement, but what actual tools or guides exist that are practical, tested and easy to use specifically for initiating and implementing patient and family engagement, is uncertain. No comprehensive review and synthesis of general international published or grey literature on this specific topic was found. A systematic scoping review of published and grey literature is, therefore, appropriate for searching through the vast general engagement literature to identify ‘patient/family engagement’ tools and guides applicable in health organization decision-making, such as within Alberta Health Services in Alberta, Canada. This latter organization requested this search and review to inform the contents of a patient engagement resource kit for patients, providers and leaders. Methods Search terms related to ‘patient engagement’, tools, guides, education and infrastructure or resources, were applied to published literature databases and grey literature search engines. Grey literature also included United States, Australia and Europe where most known public engagement practices exist, and Canada as the location for this study. Inclusion and exclusion criteria were set, and include: English documents referencing ‘patient engagement’ with specific criteria, and published between 1995 and 2011. For document analysis and synthesis, document analysis worksheets were used by three reviewers for the selected 224 published and 193 grey literature documents. Inter-rater reliability was ensured for the final reviews and syntheses of 76 published and 193 grey documents. Results Seven key themes emerged from the literature synthesis analysis, and were identified for patient, provider and/or leader groups. Articles/items within each theme were clustered under main topic areas of ‘tools’, ‘education’ and ‘infrastructure’. The synthesis and findings in the literature include 15 different terms and definitions for ‘patient engagement’, 17 different engagement models, numerous barriers and benefits, and 34 toolkits for various patient engagement and evaluation initiatives. Conclusions Patient engagement is very complex. This scoping review for patient/family engagement tools and guides is a good start for a resource inventory and can guide the content development of a patient engagement resource kit to be used by patients/families, healthcare providers and administrators. PMID:24735787

  15. On contact modelling in isogeometric analysis

    NASA Astrophysics Data System (ADS)

    Cardoso, R. P. R.; Adetoro, O. B.

    2017-11-01

    IsoGeometric Analysis (IGA) has proved to be a reliable numerical tool for the simulation of structural behaviour and fluid mechanics. The main reasons for this popularity are: (i) the possibility of using higher order polynomials for the basis functions; (ii) the high convergence rates that can be achieved; (iii) the possibility of operating directly on CAD geometry without the need to resort to a mesh of elements. The major drawback of IGA is the non-interpolatory characteristic of the basis functions, which complicates the handling of essential boundary conditions and makes contact analysis particularly challenging. In this work, IGA is expanded to include frictionless contact procedures for sheet metal forming analyses. Non-Uniform Rational B-Splines (NURBS) are used for the modelling of the rigid tools as well as of the deformable blank sheet. The contact methods developed are based on a two-step contact search scheme: in the first (global) step, a search algorithm allocates contact knots to potential contact faces; in the second (local) step, point inversion techniques are used to calculate the contact penetration gap. For completeness, elastoplastic procedures are also included for a proper description of the entire IGA of sheet metal forming processes.

  16. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Baddourah, Majdi; Qin, Jiangning

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigensolution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization search analysis and domain decomposition. The source code for many of these algorithms is available.

  17. A comparison of visual search strategies of elite and non-elite tennis players through cluster analysis.

    PubMed

    Murray, Nicholas P; Hunfalvay, Melissa

    2017-02-01

    Considerable research has documented that successful performance in interceptive tasks (such as return of serve in tennis) is based on the performers' capability to capture appropriate anticipatory information prior to the flight path of the approaching object. Athletes of higher skill tend to fixate on different locations in the playing environment prior to initiation of a skill than their lesser-skilled counterparts. The purpose of this study was to examine visual search behaviour strategies of elite (world ranked) tennis players and non-ranked competitive tennis players (n = 43) utilising cluster analysis. The results of hierarchical (Ward's method) and non-hierarchical (k-means) cluster analyses revealed three different clusters. The clustering method distinguished visual behaviour of high-, middle- and low-ranked players. Specifically, high-ranked players demonstrated longer mean fixation duration and lower variation of visual search than middle- and low-ranked players. In conclusion, the results demonstrated that cluster analysis is a useful tool for detecting and analysing the areas of interest for use in experimental analysis of expertise and to distinguish visual search variables among participants.
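    As a rough illustration of the clustering methodology described in this record (not the authors' code), the sketch below groups players by two visual-search features using Ward's hierarchical clustering and k-means; the feature names and values are invented for the example.

```python
# A minimal sketch: standardize two visual-search features, run Ward's-method
# hierarchical clustering to assign cluster membership, then k-means with the
# same k. Features and values are illustrative.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# rows: players; columns: mean fixation duration (ms), fixation-location variability
X = np.array([
    [420.0, 1.2], [395.0, 1.4],   # elite-like profiles
    [250.0, 3.1], [270.0, 2.8],   # mid-ranked-like profiles
    [180.0, 4.5], [200.0, 4.2],   # low-ranked-like profiles
])
Xs = StandardScaler().fit_transform(X)   # avoid one feature dominating distances

Z = linkage(Xs, method="ward")
hier_labels = fcluster(Z, t=3, criterion="maxclust")
kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Xs)
print(hier_labels, kmeans_labels)
```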

  18. OSTI.GOV | OSTI, US Dept of Energy Office of Scientific and Technical Information

    Science.gov Websites

    OSTI.GOV landing page providing search tools, advanced search options, public access policy information, and data services for Department of Energy research results.

  19. Understanding the foundation: the state of generalist search education in library schools as related to the needs of expert searchers in medical libraries.

    PubMed

    Nicholson, Scott

    2005-01-01

    The paper explores the current state of generalist search education in library schools and considers that foundation in respect to the Medical Library Association's statement on expert searching. Syllabi from courses with significant searching components were examined from ten of the top library schools, as determined by the U.S. News & World Report rankings. Mixed methods were used, but primarily quantitative bibliometric methods were used. The educational focus in these searching components was on understanding the generalist searching resources and typical users and on performing a reflective search through application of search strategies, controlled vocabulary, and logic appropriate to the search tool. There is a growing emphasis on Web-based search tools and a movement away from traditional set-based searching and toward free-text search strategies. While a core set of authors is used in these courses, no core set of readings is used. While library schools provide a strong foundation, future medical librarians still need to take courses that introduce them to the resources, settings, and users associated with medical libraries. In addition, as more emphasis is placed on Web-based search tools and free-text searching, instructors of the specialist medical informatics courses will need to focus on teaching traditional search methods appropriate for common tools in the medical domain.

  20. Understanding the foundation: the state of generalist search education in library schools as related to the needs of expert searchers in medical libraries

    PubMed Central

    Nicholson, Scott

    2005-01-01

    Purpose: The paper explores the current state of generalist search education in library schools and considers that foundation in respect to the Medical Library Association's statement on expert searching. Setting/Subjects: Syllabi from courses with significant searching components were examined from ten of the top library schools, as determined by the U.S. News & World Report rankings. Methodology: Mixed methods were used, but primarily quantitative bibliometric methods were used. Results: The educational focus in these searching components was on understanding the generalist searching resources and typical users and on performing a reflective search through application of search strategies, controlled vocabulary, and logic appropriate to the search tool. There is a growing emphasis on Web-based search tools and a movement away from traditional set-based searching and toward free-text search strategies. While a core set of authors is used in these courses, no core set of readings is used. Discussion/Conclusion: While library schools provide a strong foundation, future medical librarians still need to take courses that introduce them to the resources, settings, and users associated with medical libraries. In addition, as more emphasis is placed on Web-based search tools and free-text searching, instructors of the specialist medical informatics courses will need to focus on teaching traditional search methods appropriate for common tools in the medical domain. PMID:15685276

  1. Searching for Tools versus Asking for Answers: A Taxonomy of Counselee Behavioral Styles during Career Counseling.

    ERIC Educational Resources Information Center

    Sagiv, Lilach

    1999-01-01

    A taxonomy of decision behavior styles (independence/dependence, active/passive, insightful/not) tested with 372 career counseling clients was supported by similar structure analysis and confirmatory factor analysis. Counselors were more likely to be satisfied with decisions of clients they perceived to be insightful. (SK)

  2. New generation of the multimedia search engines

    NASA Astrophysics Data System (ADS)

    Mijes Cruz, Mario Humberto; Soto Aldaco, Andrea; Maldonado Cano, Luis Alejandro; López Rodríguez, Mario; Rodríguez Vázqueza, Manuel Antonio; Amaya Reyes, Laura Mariel; Cano Martínez, Elizabeth; Pérez Rosas, Osvaldo Gerardo; Rodríguez Espejo, Luis; Flores Secundino, Jesús Abimelek; Rivera Martínez, José Luis; García Vázquez, Mireya Saraí; Zamudio Fuentes, Luis Miguel; Sánchez Valenzuela, Juan Carlos; Montoya Obeso, Abraham; Ramírez Acosta, Alejandro Álvaro

    2016-09-01

    Current search engines are based upon search methods that involve the combination of words (text-based search), which has been efficient until now. However, the Internet's growing demand shows that it becomes more diverse with each passing day. Text-based searches are becoming limited, as most of the information on the Internet is found in different types of content, denominated multimedia content (images, audio files, video files). Indeed, what needs to be improved in current search engines is search content and precision, as well as an accurate display of the search results expected by the user. Any search can be made more precise by using more text parameters, but this does not improve the content or speed of the search itself. One solution is to improve search engines through the characterization of the content of multimedia files. In this article, an analysis of new-generation multimedia search engines is presented, focusing on the needs arising from new technologies. Multimedia content has become a central part of the flow of information in our daily life. This reflects the necessity of having multimedia search engines, as well as of knowing the real tasks they must fulfil. Through this analysis, it is shown that there are not many search engines that can perform content searches. The research area of new-generation multimedia search engines is a multidisciplinary area in constant growth, generating tools that satisfy the different needs of new-generation systems.

  3. Utilization and perceived problems of online medical resources and search tools among different groups of European physicians.

    PubMed

    Kritz, Marlene; Gschwandtner, Manfred; Stefanov, Veronika; Hanbury, Allan; Samwald, Matthias

    2013-06-26

    There is a large body of research suggesting that medical professionals have unmet information needs during their daily routines. The aim was to investigate which online resources and tools different groups of European physicians use to gather medical information and to identify barriers that prevent the successful retrieval of medical information from the Internet. A detailed Web-based questionnaire was sent out to approximately 15,000 physicians across Europe and disseminated through partner websites. A total of 500 European physicians of different levels of academic qualification and medical specialization were included in the analysis. Self-reported frequency of use of different types of online resources, perceived importance of search tools, and perceived search barriers were measured. Comparisons were made across different levels of qualification (qualified physicians vs physicians in training, medical specialists without professorships vs medical professors) and specialization (general practitioners vs specialists). Most participants were Internet-savvy, came from Austria (43%, 190/440) and Switzerland (31%, 137/440), were above 50 years old (56%, 239/430), stated high levels of medical work experience, had regular patient contact and were employed in nonacademic health care settings (41%, 177/432). All groups reported frequent use of general search engines and cited "restricted accessibility to good quality information" as a dominant barrier to finding medical information on the Internet. Physicians in training reported the most frequent use of Wikipedia (56%, 31/55). Specialists were more likely than general practitioners to use medical research databases (68%, 185/274 vs 27%, 24/88; χ²₂=44.905, P<.001). General practitioners were more likely than specialists to report "lack of time" as a barrier towards finding information on the Internet (59%, 50/85 vs 43%, 111/260; χ²₁=7.231, P=.007) and to restrict their search by language (48%, 43/89 vs 35%, 97/278; χ²₁=5.148, P=.023). They also consulted general health websites (36%, 31/87 vs 19%, 51/269; χ²₂=12.813, P=.002) and online physician network communities (17%, 15/86 vs 6%, 17/270; χ²₂=9.841, P<.001) more frequently. The reported inaccessibility of relevant, trustworthy resources on the Internet and the frequent reliance on general search engines and social media among physicians require further attention. Possible solutions may be increased governmental support for the development and popularization of user-tailored medical search tools and open access to high-quality content for physicians. The potential role of collaborative tools in providing the psychological support and affirmation normally given by medical colleagues needs further consideration. Tools that speed up quality evaluation and aid selection of relevant search results need to be identified. In order to develop an adequate search tool, a differentiated approach considering the differing needs of physician subgroups may be beneficial.
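    The group comparisons in this abstract are chi-square tests on contingency tables built from the reported counts. As a hedged illustration (the study's exact groupings and degrees of freedom may differ), the sketch below recomputes one such comparison, research-database use by specialists versus general practitioners, from the 185/274 and 24/88 figures quoted above.

```python
# A minimal sketch: chi-square test of independence on a 2x2 table assembled
# from the reported counts (database users vs non-users in each physician group).
from scipy.stats import chi2_contingency

table = [
    [185, 274 - 185],  # specialists: users, non-users
    [24, 88 - 24],     # general practitioners: users, non-users
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.2g}")
```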

  4. Health literacy and usability of clinical trial search engines.

    PubMed

    Utami, Dina; Bickmore, Timothy W; Barry, Barbara; Paasche-Orlow, Michael K

    2014-01-01

    Several web-based search engines have been developed to assist individuals to find clinical trials for which they may be interested in volunteering. However, these search engines may be difficult for individuals with low health and computer literacy to navigate. The authors present findings from a usability evaluation of clinical trial search tools with 41 participants across the health and computer literacy spectrum. The study consisted of 3 parts: (a) a usability study of an existing web-based clinical trial search tool; (b) a usability study of a keyword-based clinical trial search tool; and (c) an exploratory study investigating users' information needs when deciding among 2 or more candidate clinical trials. From the first 2 studies, the authors found that users with low health literacy have difficulty forming queries using keywords and have significantly more difficulty using a standard web-based clinical trial search tool compared with users with adequate health literacy. From the third study, the authors identified the search factors most important to individuals searching for clinical trials and how these varied by health literacy level.

  5. Aggregation Tool to Create Curated Data albums to Support Disaster Recovery and Response

    NASA Astrophysics Data System (ADS)

    Ramachandran, R.; Kulkarni, A.; Maskey, M.; Li, X.; Flynn, S.

    2014-12-01

    Economic losses due to natural hazards are estimated to be around 6-10 billion dollars annually for the U.S., and this number keeps increasing every year. This increase has been attributed to population growth and migration to more hazard-prone locations. As this trend continues, in concert with shifts in weather patterns caused by climate change, it is anticipated that losses associated with natural disasters will keep growing substantially. One of the challenges disaster response and recovery analysts face is to quickly find, access and utilize a vast variety of relevant geospatial data collected by different federal agencies. More often, analysts are familiar with limited but specific datasets and are unaware of or unfamiliar with a large quantity of other useful resources. Finding airborne or satellite data useful to a natural disaster event often requires a time-consuming search through web pages and data archives. The search process for the analyst could be made much more efficient and productive if a tool could go beyond a typical search engine and provide not just links to web sites but actual links to specific data relevant to the natural disaster, parse unstructured reports for useful information nuggets, and gather other related reports, summaries, news stories, and images. This presentation will describe a semantic aggregation tool developed to address a similar problem for Earth Science researchers. This tool provides automated curation and creates "Data Albums" to support case studies. The generated "Data Albums" are compiled collections of information related to a specific science topic or event, containing links to relevant data files (granules) from different instruments; tools and services for visualization and analysis; information about the event contained in news reports; and images or videos to supplement research analysis. An ontology-based relevancy-ranking algorithm drives the curation of relevant data sets for a given event. This tool is now being used to generate a catalog of case studies focusing on hurricanes and severe storms.

  6. LandEx - Fast, FOSS-Based Application for Query and Retrieval of Land Cover Patterns

    NASA Astrophysics Data System (ADS)

    Netzel, P.; Stepinski, T.

    2012-12-01

    The amount of satellite-based spatial data is continuously increasing, making the development of efficient data search tools a priority. The bulk of existing research on searching satellite-gathered data concentrates on images and is based on the concept of Content-Based Image Retrieval (CBIR); however, available solutions are not efficient and robust enough to be put to use as deployable web-based search tools. Here we report on the development of a practical, deployable tool that searches classified data rather than raw images. LandEx (Landscape Explorer) is a GeoWeb-based tool for Content-Based Pattern Retrieval (CBPR) of land cover patterns contained within the National Land Cover Dataset 2006 (NLCD2006). The USGS-developed NLCD2006 is derived from Landsat multispectral images; it covers the entire conterminous U.S. at a resolution of 30 meters/pixel and depicts 16 land cover classes. The size of NLCD2006 is about 10 Gpixels (161,000 x 100,000 pixels). LandEx is a multi-tier GeoWeb application based on Open Source Software. The main components are GeoExt/OpenLayers (user interface), GeoServer (OGC WMS, WCS and WPS server), and GRASS (calculation engine). LandEx performs search using a query-by-example approach: the user selects a reference scene (exhibiting a chosen pattern of land cover classes) and the tool produces, in real time, a map indicating the degree of similarity between the reference pattern and all local patterns across the U.S. A scene pattern is encapsulated by a 2D histogram of classes and sizes of single-class clumps. Pattern similarity is based on the notion of mutual information. The resultant similarity map can be viewed and navigated in a web browser, or downloaded as a GeoTiff file for more in-depth analysis. LandEx is available at http://sil.uc.edu
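    The abstract describes each scene as a 2D histogram (land-cover class by clump size) and scores similarity with a mutual-information-based measure. The sketch below is one hedged interpretation, not the LandEx implementation: it treats the shared information between two normalized histograms as the mutual information between the bin variable and a uniform scene indicator, which is equivalent to the Jensen-Shannon divergence; the exact measure used by LandEx may differ.

```python
# A minimal sketch: similarity of a reference pattern histogram and a local one,
# scored as 1 minus their Jensen-Shannon divergence (in bits).
import numpy as np

def entropy_bits(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def pattern_similarity(ref_hist, loc_hist):
    p = ref_hist / ref_hist.sum()
    q = loc_hist / loc_hist.sum()
    m = 0.5 * (p + q)
    jsd = entropy_bits(m) - 0.5 * (entropy_bits(p) + entropy_bits(q))  # in [0, 1]
    return 1.0 - jsd  # 1.0 means identical class/clump-size patterns

# rows: land-cover class; columns: clump-size bin (illustrative counts)
ref = np.array([[12, 3, 0], [5, 8, 2], [0, 1, 9]], dtype=float)
loc = np.array([[10, 4, 1], [6, 7, 2], [0, 2, 8]], dtype=float)
print(round(pattern_similarity(ref.ravel(), loc.ravel()), 3))
```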

  7. New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and ARM

    NASA Astrophysics Data System (ADS)

    Crow, M. C.; Devarakonda, R.; Killeffer, T.; Hook, L.; Boden, T.; Wullschleger, S.

    2017-12-01

    Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This poster describes tools being used in several projects at Oak Ridge National Laboratory (ORNL), with a focus on the U.S. Department of Energy's Next Generation Ecosystem Experiment in the Arctic (NGEE Arctic) and Atmospheric Radiation Measurements (ARM) project, and their usage at different stages of the data lifecycle. The Online Metadata Editor (OME) is used for the documentation and archival stages while a Data Search tool supports indexing, cataloging, and searching. The NGEE Arctic OME Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload while adhering to standard metadata formats. The tool is built upon a Java SPRING framework to parse user input into, and from, XML output. Many aspects of the tool require use of a relational database including encrypted user-login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The Data Search Tool conveniently displays each data record in a thumbnail containing the title, source, and date range, and features a quick view of the metadata associated with that record, as well as a direct link to the data. The search box incorporates autocomplete capabilities for search terms and sorted keyword filters are available on the side of the page, including a map for geo-searching. These tools are supported by the Mercury [2] consortium (funded by DOE, NASA, USGS, and ARM) and developed and managed at Oak Ridge National Laboratory. Mercury is a set of tools for collecting, searching, and retrieving metadata and data. Mercury collects metadata from contributing project servers, then indexes the metadata to make it searchable using Apache Solr, and provides access to retrieve it from the web page. Metadata standards that Mercury supports include: XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115.
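    Mercury indexes harvested metadata with Apache Solr, so a search from the catalog side is ultimately a Solr query. The sketch below shows what such a query could look like over HTTP; the host, core name and field names are assumptions for illustration, not the actual NGEE Arctic or ARM schema.

```python
# A minimal sketch: querying a hypothetical Solr core of harvested metadata
# records through the standard /select handler.
import requests

params = {
    "q": 'keywords:"soil temperature" AND project:"NGEE Arctic"',
    "fl": "title,source,start_date,end_date,data_url",
    "rows": 10,
    "wt": "json",
}
resp = requests.get("http://localhost:8983/solr/metadata/select", params=params)
for doc in resp.json()["response"]["docs"]:
    print(doc.get("title"), doc.get("data_url"))
```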

  8. User Guidelines for the Brassica Database: BRAD.

    PubMed

    Wang, Xiaobo; Cheng, Feng; Wang, Xiaowu

    2016-01-01

    The genome sequence of Brassica rapa was first released in 2011. Since then, further Brassica genomes have been sequenced or are undergoing sequencing. It is therefore necessary to develop tools that help users to mine information from genomic data efficiently. This will greatly aid scientific exploration and breeding application, especially for those with low levels of bioinformatic training. Therefore, the Brassica database (BRAD) was built to collect, integrate, illustrate, and visualize Brassica genomic datasets. BRAD provides useful searching and data mining tools, and facilitates the search of gene annotation datasets, syntenic or non-syntenic orthologs, and flanking regions of functional genomic elements. It also includes genome-analysis tools such as BLAST and GBrowse. One of the important aims of BRAD is to build a bridge between Brassica crop genomes with the genome of the model species Arabidopsis thaliana, thus transferring the bulk of A. thaliana gene study information for use with newly sequenced Brassica crops.

  9. Information Discovery and Retrieval Tools

    DTIC Science & Technology

    2004-12-01

    information. This session will focus on the various Internet search engines, directories, and how to improve the user experience through the use of...such techniques as metadata, meta-search engines, subject-specific search tools, and other developing technologies.

  10. Information Discovery and Retrieval Tools

    DTIC Science & Technology

    2003-04-01

    information. This session will focus on the various Internet search engines, directories, and how to improve the user experience through the use of...such techniques as metadata, meta-search engines, subject-specific search tools, and other developing technologies.

  11. Government Search Tools: Evaluating Fee and Free Search Alternatives.

    ERIC Educational Resources Information Center

    Gordon-Murnane, Laura

    1999-01-01

    Examines four tools that provide access to federal government information: FedWorld, Usgovsearch.com, Google/Unclesam, and GovBot. Compares search features, size of collection, ease of use, and cost or subscription requirements. (LRW)

  12. Open source cardiology electronic health record development for DIGICARDIAC implementation

    NASA Astrophysics Data System (ADS)

    Dugarte, Nelson; Medina, Rubén.; Huiracocha, Lourdes; Rojas, Rubén.

    2015-12-01

    This article presents the development of a Cardiology Electronic Health Record (CEHR) system. The software consists of a structured algorithm designed under Health Level-7 (HL7) international standards. The novelty of the system is the integration of high resolution ECG (HRECG) signal acquisition and processing tools, patient information management tools and telecardiology tools. The acquisition tools manage and control the DIGICARDIAC electrocardiograph functions. The processing tools support HRECG signal analysis, searching for patterns indicative of cardiovascular pathologies. The incorporation of telecardiology tools allows the system to communicate with other health care centers, decreasing access time to patient information. The CEHR system was completely developed using open source software. Preliminary results of process validation showed the system's efficiency.

  13. STEPP--Search Tool for Exploration of Petri net Paths: a new tool for Petri net-based path analysis in biochemical networks.

    PubMed

    Koch, Ina; Schueler, Markus; Heiner, Monika

    2005-01-01

    To understand biochemical processes caused by, e.g., mutations or deletions in the genome, the knowledge of possible alternative paths between two arbitrary chemical compounds is of increasing interest for biotechnology, pharmacology, medicine, and drug design. With the steadily increasing amount of data from high-throughput experiments, new biochemical networks can be constructed and existing ones can be extended, which results in many large metabolic, signal transduction, and gene regulatory networks. The search for alternative paths within these complex and large networks can provide a huge number of solutions, which cannot be handled manually. Moreover, not all of the alternative paths are generally of interest. Therefore, we have developed and implemented a method which allows us to define constraints to reduce the set of all structurally possible paths to the truly interesting path set. The paper describes the search algorithm and the constraints definition language. We give examples for path searches using this dedicated special language for a Petri net model of the sucrose-to-starch breakdown in the potato tuber.

  14. STEPP - Search Tool for Exploration of Petri net Paths: A New Tool for Petri Net-Based Path Analysis in Biochemical Networks.

    PubMed

    Koch, Ina; Schüler, Markus; Heiner, Monika

    2011-01-01

    To understand biochemical processes caused by, e.g., mutations or deletions in the genome, the knowledge of possible alternative paths between two arbitrary chemical compounds is of increasing interest for biotechnology, pharmacology, medicine, and drug design. With the steadily increasing amount of data from high-throughput experiments, new biochemical networks can be constructed and existing ones can be extended, which results in many large metabolic, signal transduction, and gene regulatory networks. The search for alternative paths within these complex and large networks can provide a huge number of solutions, which cannot be handled manually. Moreover, not all of the alternative paths are generally of interest. Therefore, we have developed and implemented a method which allows us to define constraints to reduce the set of all structurally possible paths to the truly interesting path set. The paper describes the search algorithm and the constraints definition language. We give examples for path searches using this dedicated special language for a Petri net model of the sucrose-to-starch breakdown in the potato tuber. http://sanaga.tfh-berlin.de/~stepp/
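    The core idea in both STEPP records above, enumerating alternative paths between two compounds while constraints prune the uninteresting ones, can be illustrated with a plain graph search. The sketch below is not STEPP itself (which works on Petri net models and its own constraint language); the toy network, the maximum path length, and the forbidden-compound constraint are illustrative.

```python
# A minimal sketch: depth-first enumeration of simple paths between two
# compounds, pruned by a length limit and a set of forbidden intermediates.
network = {
    "sucrose": ["glucose", "fructose"],
    "glucose": ["G6P"],
    "fructose": ["F6P"],
    "G6P": ["G1P", "F6P"],
    "F6P": ["G6P"],
    "G1P": ["ADP-glucose"],
    "ADP-glucose": ["starch"],
}

def constrained_paths(src, dst, max_len=6, forbidden=frozenset()):
    stack = [(src, [src])]
    while stack:
        node, path = stack.pop()
        if node == dst:
            yield path
            continue
        if len(path) >= max_len:
            continue
        for nxt in network.get(node, []):
            if nxt in path or nxt in forbidden:
                continue
            stack.append((nxt, path + [nxt]))

for p in constrained_paths("sucrose", "starch", forbidden={"F6P"}):
    print(" -> ".join(p))
```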

  15. Open source tools for management and archiving of digital microscopy data to allow integration with patient pathology and treatment information.

    PubMed

    Khushi, Matloob; Edwards, Georgina; de Marcos, Diego Alonso; Carpenter, Jane E; Graham, J Dinny; Clarke, Christine L

    2013-02-12

    Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. We have developed two Java based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a desired quality JPEG image. The image is linked to the patient's clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type, or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or Matlab. NDPI-Splitter also has the capacity to filter out empty images. Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934.
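    NDPI-Splitter itself is a Java tool for NDPI slides, but the tiling-and-filtering idea it implements is easy to sketch. The Python example below is only an illustration under that assumption: it splits an ordinary image into fixed-size tiles and discards tiles that are almost entirely background, analogous to the empty-image filtering described above.

```python
# A minimal sketch: split an image into tiles and keep only tiles that contain
# a minimum fraction of non-background (non-near-white) pixels.
import numpy as np
from PIL import Image

def split_and_filter(path, tile=2048, min_tissue_fraction=0.02):
    img = Image.open(path)
    w, h = img.size
    for top in range(0, h, tile):
        for left in range(0, w, tile):
            box = (left, top, min(left + tile, w), min(top + tile, h))
            section = img.crop(box)
            gray = np.asarray(section.convert("L"))
            tissue_fraction = np.mean(gray < 220)  # near-white pixels treated as empty
            if tissue_fraction >= min_tissue_fraction:
                section.save(f"tile_{top}_{left}.tiff")

# split_and_filter("slide_snapshot.jpg")
```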

  16. SPIN or LURCH : a Comparative Assessment of Model Checking and Stochastic Search for Temporal Properties in Procedural Code

    NASA Technical Reports Server (NTRS)

    Powell, John D.; Owens, David; Menzies, Tim

    2004-01-01

    The difficulty of testing large systems, such as the one on board a NASA robotic remote explorer (RRE) vehicle, is fundamentally a search issue: the global state space representing all possible behaviours of the system is far too large to examine exhaustively, a problem that has yet to be solved, even after many decades of work. Randomized algorithms have been known to outperform their deterministic counterparts for search problems representing a wide range of applications. In the case study presented here, the LURCH randomized algorithm proved to be adequate to the task of testing a NASA RRE vehicle. LURCH found all the errors found by an earlier analysis using a more complete method (SPIN). Our empirical results show that LURCH can scale to much larger models than standard model checkers like SMV and SPIN. Further, the LURCH analysis was simpler than the SPIN analysis. The simplicity and scalability of LURCH are two compelling reasons for experimenting further with this tool.

  17. Design and Testing of BACRA, a Web-Based Tool for Middle Managers at Health Care Facilities to Lead the Search for Solutions to Patient Safety Incidents

    PubMed Central

    Mira, José Joaquín; Vicente, Maria Asuncion; Fernandez, Cesar; Guilabert, Mercedes; Ferrús, Lena; Zavala, Elena; Silvestre, Carmen; Pérez-Pérez, Pastora

    2016-01-01

    Background Lack of time, lack of familiarity with root cause analysis, or suspicion that the reporting may result in negative consequences hinder involvement in the analysis of safety incidents and the search for preventive actions that can improve patient safety. Objective The aim was to develop a tool that enables hospital and primary care professionals to immediately analyze the causes of incidents and to propose and implement measures intended to prevent their recurrence. Methods The design of the Web-based tool (BACRA) considered research on the barriers to reporting, a review of incident analysis tools, and the experience of eight managers from the field of patient safety. BACRA's design was improved in successive versions (BACRA v1.1 and BACRA v1.2) based on feedback from 86 middle managers. BACRA v1.1 was used by 13 frontline professionals to analyze safety incidents; 59 professionals used BACRA v1.2 and assessed the respective usefulness and ease of use of both versions. Results BACRA contains seven tabs that guide the user through the process of analyzing a safety incident and proposing preventive actions for similar future incidents. BACRA does not identify the person completing each analysis, since the password used to protect an analysis is linked only to the information concerning the incident and not to any personal data. The tool was used by 72 professionals from hospitals and primary care centers. BACRA v1.2 was assessed more favorably than BACRA v1.1, both in terms of its usefulness (z=2.2, P=.03) and its ease of use (z=3.0, P=.003). Conclusions BACRA helps to analyze safety incidents and to propose preventive actions. BACRA guarantees anonymity of the analysis and reduces the reluctance of professionals to carry out this task. BACRA is useful and easy to use. PMID:27678308

  18. Design and Testing of BACRA, a Web-Based Tool for Middle Managers at Health Care Facilities to Lead the Search for Solutions to Patient Safety Incidents.

    PubMed

    Carrillo, Irene; Mira, José Joaquín; Vicente, Maria Asuncion; Fernandez, Cesar; Guilabert, Mercedes; Ferrús, Lena; Zavala, Elena; Silvestre, Carmen; Pérez-Pérez, Pastora

    2016-09-27

    Lack of time, lack of familiarity with root cause analysis, or suspicion that the reporting may result in negative consequences hinder involvement in the analysis of safety incidents and the search for preventive actions that can improve patient safety. The aim was to develop a tool that enables hospital and primary care professionals to immediately analyze the causes of incidents and to propose and implement measures intended to prevent their recurrence. The design of the Web-based tool (BACRA) considered research on the barriers to reporting, a review of incident analysis tools, and the experience of eight managers from the field of patient safety. BACRA's design was improved in successive versions (BACRA v1.1 and BACRA v1.2) based on feedback from 86 middle managers. BACRA v1.1 was used by 13 frontline professionals to analyze safety incidents; 59 professionals used BACRA v1.2 and assessed the respective usefulness and ease of use of both versions. BACRA contains seven tabs that guide the user through the process of analyzing a safety incident and proposing preventive actions for similar future incidents. BACRA does not identify the person completing each analysis, since the password used to protect an analysis is linked only to the information concerning the incident and not to any personal data. The tool was used by 72 professionals from hospitals and primary care centers. BACRA v1.2 was assessed more favorably than BACRA v1.1, both in terms of its usefulness (z=2.2, P=.03) and its ease of use (z=3.0, P=.003). BACRA helps to analyze safety incidents and to propose preventive actions. BACRA guarantees anonymity of the analysis and reduces the reluctance of professionals to carry out this task. BACRA is useful and easy to use.

  19. Visualizing Phylogenetic Treespace Using Cartographic Projections

    NASA Astrophysics Data System (ADS)

    Sundberg, Kenneth; Clement, Mark; Snell, Quinn

    Phylogenetic analysis is becoming an increasingly important tool for biological research. Applications include epidemiological studies, drug development, and evolutionary analysis. Phylogenetic search is a known NP-Hard problem. The size of the data sets which can be analyzed is limited by the exponential growth in the number of trees that must be considered as the problem size increases. A better understanding of the problem space could lead to better methods, which in turn could lead to the feasible analysis of more data sets. We present a definition of phylogenetic tree space and a visualization of this space that shows significant exploitable structure. This structure can be used to develop search methods capable of handling much larger datasets.

  20. Java Radar Analysis Tool

    NASA Technical Reports Server (NTRS)

    Zaczek, Mariusz P.

    2005-01-01

    Java Radar Analysis Tool (JRAT) is a computer program for analyzing two-dimensional (2D) scatter plots derived from radar returns showing pieces of the disintegrating Space Shuttle Columbia. JRAT can also be applied to similar plots representing radar returns showing aviation accidents, and to scatter plots in general. The 2D scatter plots include overhead map views and side altitude views. The superposition of points in these views makes searching difficult. JRAT enables three-dimensional (3D) viewing: by use of a mouse and keyboard, the user can rotate to any desired viewing angle. The 3D view can include overlaid trajectories and search footprints to enhance situational awareness in searching for pieces. JRAT also enables playback: time-tagged radar-return data can be displayed in time order and an animated 3D model can be moved through the scene to show the locations of the Columbia (or other vehicle) at the times of the corresponding radar events. The combination of overlays and playback enables the user to correlate a radar return with a position of the vehicle to determine whether the return is valid. JRAT can optionally filter single radar returns, enabling the user to selectively hide or highlight a desired radar return.

  1. Computational Methods for Tracking, Quantitative Assessment, and Visualization of C. elegans Locomotory Behavior

    PubMed Central

    Moy, Kyle; Li, Weiyu; Tran, Huu Phuoc; Simonis, Valerie; Story, Evan; Brandon, Christopher; Furst, Jacob; Raicu, Daniela; Kim, Hongkyun

    2015-01-01

    The nematode Caenorhabditis elegans provides a unique opportunity to interrogate the neural basis of behavior at single neuron resolution. In C. elegans, neural circuits that control behaviors can be formulated based on its complete neural connection map, and easily assessed by applying advanced genetic tools that allow for modulation in the activity of specific neurons. Importantly, C. elegans exhibits several elaborate behaviors that can be empirically quantified and analyzed, thus providing a means to assess the contribution of specific neural circuits to behavioral output. Particularly, locomotory behavior can be recorded and analyzed with computational and mathematical tools. Here, we describe a robust single worm-tracking system, which is based on the open-source Python programming language, and an analysis system, which implements path-related algorithms. Our tracking system was designed to accommodate worms that explore a large area with frequent turns and reversals at high speeds. As a proof of principle, we used our tracker to record the movements of wild-type animals that were freshly removed from abundant bacterial food, and determined how wild-type animals change locomotory behavior over a long period of time. Consistent with previous findings, we observed that wild-type animals show a transition from area-restricted local search to global search over time. Intriguingly, we found that wild-type animals initially exhibit short, random movements interrupted by infrequent long trajectories. This movement pattern often coincides with local/global search behavior, and visually resembles Lévy flight search, a search behavior conserved across species. Our mathematical analysis showed that while most of the animals exhibited Brownian walks, approximately 20% of the animals exhibited Lévy flights, indicating that C. elegans can use Lévy flights for efficient food search. In summary, our tracker and analysis software will help analyze the neural basis of the alteration and transition of C. elegans locomotory behavior in a food-deprived condition. PMID:26713869
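    The Brownian-versus-Lévy distinction in this record comes down to whether a track's step-length distribution is exponentially bounded or heavy-tailed. The sketch below is one hedged way to make that call (not the authors' pipeline): fit an exponential and a power-law model to the step lengths by maximum likelihood and pick the better-scoring one; the cutoff, the synthetic tracks and the decision rule are illustrative.

```python
# A minimal sketch: classify a track as "brownian" (exponential step lengths) or
# "levy" (power-law step lengths) by comparing maximum log-likelihoods above a cutoff.
import numpy as np

def classify_track(steps, xmin=1.0):
    s = np.asarray(steps, dtype=float)
    s = s[s >= xmin]
    n = len(s)
    lam = 1.0 / np.mean(s - xmin)                  # exponential rate (MLE)
    alpha = 1.0 + n / np.sum(np.log(s / xmin))     # power-law exponent (MLE)
    ll_exp = n * np.log(lam) - lam * np.sum(s - xmin)
    ll_pow = n * np.log((alpha - 1) / xmin) - alpha * np.sum(np.log(s / xmin))
    return "levy" if ll_pow > ll_exp else "brownian"

rng = np.random.default_rng(0)
brownian_steps = 1.0 + rng.exponential(scale=0.5, size=500)
levy_steps = 1.0 * (1.0 - rng.random(500)) ** (-1.0 / 1.2)   # Pareto tail, alpha ~ 2.2
print(classify_track(brownian_steps), classify_track(levy_steps))
```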

  2. EuPathDB: the eukaryotic pathogen genomics database resource

    PubMed Central

    Aurrecoechea, Cristina; Barreto, Ana; Basenko, Evelina Y.; Brestelli, John; Brunk, Brian P.; Cade, Shon; Crouch, Kathryn; Doherty, Ryan; Falke, Dave; Fischer, Steve; Gajria, Bindu; Harb, Omar S.; Heiges, Mark; Hertz-Fowler, Christiane; Hu, Sufen; Iodice, John; Kissinger, Jessica C.; Lawrence, Cris; Li, Wei; Pinney, Deborah F.; Pulman, Jane A.; Roos, David S.; Shanmugasundram, Achchuthan; Silva-Franco, Fatima; Steinbiss, Sascha; Stoeckert, Christian J.; Spruill, Drew; Wang, Haiming; Warrenfeltz, Susanne; Zheng, Jie

    2017-01-01

    The Eukaryotic Pathogen Genomics Database Resource (EuPathDB, http://eupathdb.org) is a collection of databases covering 170+ eukaryotic pathogens (protists & fungi), along with relevant free-living and non-pathogenic species, and select pathogen hosts. To facilitate the discovery of meaningful biological relationships, the databases couple preconfigured searches with visualization and analysis tools for comprehensive data mining via intuitive graphical interfaces and APIs. All data are analyzed with the same workflows, including creation of gene orthology profiles, so data are easily compared across data sets, data types and organisms. EuPathDB is updated with numerous new analysis tools, features, data sets and data types. New tools include GO, metabolic pathway and word enrichment analyses plus an online workspace for analysis of personal, non-public, large-scale data. Expanded data content is mostly genomic and functional genomic data while new data types include protein microarray, metabolic pathways, compounds, quantitative proteomics, copy number variation, and polysomal transcriptomics. New features include consistent categorization of searches, data sets and genome browser tracks; redesigned gene pages; effective integration of alternative transcripts; and a EuPathDB Galaxy instance for private analyses of a user's data. Forthcoming upgrades include user workspaces for private integration of data with existing EuPathDB data and improved integration and presentation of host–pathogen interactions. PMID:27903906

  3. Pain assessment tools: is the content appropriate for use in palliative care?

    PubMed

    Hølen, Jacob Chr; Hjermstad, Marianne Jensen; Loge, Jon Håvard; Fayers, Peter M; Caraceni, Augusto; De Conno, Franco; Forbes, Karen; Fürst, Carl Johan; Radbruch, Lukas; Kaasa, Stein

    2006-12-01

    Inadequate pain assessment prevents optimal treatment in palliative care. The content of pain assessment tools might limit their usefulness for proper pain assessment, but data on the content validity of the tools are scarce. The objective of this study was to examine the content of the existing pain assessment tools, and to evaluate the appropriateness of different dimensions and items for pain assessment in palliative care. A systematic search was performed to find pain assessment tools for patients with advanced cancer who were receiving palliative care. An ad hoc search with broader search criteria supplemented the systematic search. The items of the identified tools were allocated to appropriate dimensions. This was reviewed by an international panel of experts, who also evaluated the relevance of the different dimensions for pain assessment in palliative care. The systematic literature search generated 16 assessment tools while the ad hoc search generated 64. Ten pain dimensions containing 1,011 pain items were identified by the experts. The experts ranked intensity, temporal pattern, treatment and exacerbating/relieving factors, location, and interference with health-related quality of life as the most important dimensions. None of the assessment tools covered these dimensions satisfactorily. Most items were related to interference (231) and intensity (138). Temporal pattern (which includes breakthrough pain), ranked as the second most important dimension, was covered by 29 items only. Many tools include dimensions and items of limited relevance for patients with advanced cancer. This might reduce compliance and threaten the validity of the assessment. New tools should reflect the clinical relevance of different dimensions and be user-friendly.

  4. Global search tool for the Advanced Photon Source Integrated Relational Model of Installed Systems (IRMIS) database.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quock, D. E. R.; Cianciarulo, M. B.; APS Engineering Support Division

    2007-01-01

    The Integrated Relational Model of Installed Systems (IRMIS) is a relational database tool that has been implemented at the Advanced Photon Source to maintain an updated account of approximately 600 control system software applications, 400,000 process variables, and 30,000 control system hardware components. To effectively display this large amount of control system information to operators and engineers, IRMIS was initially built with nine Web-based viewers: Applications Organizing Index, IOC, PLC, Component Type, Installed Components, Network, Controls Spares, Process Variables, and Cables. However, since each viewer is designed to provide details from only one major category of the control system, the necessity for a one-stop global search tool for the entire database became apparent. The user requirements for extremely fast database search time and ease of navigation through search results led to the choice of Asynchronous JavaScript and XML (AJAX) technology in the implementation of the IRMIS global search tool. Unique features of the global search tool include a two-tier level of displayed search results, and a database data integrity validation and reporting mechanism.

  5. Search for CP violation in singly Cabibbo suppressed four-body D decays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martinelli, Maurizio

    2011-02-01

    We search for CP violation in a sample of 4.7 × 10⁴ singly Cabibbo suppressed D⁰ → K⁺K⁻π⁺π⁻ decays and 1.8 (2.6) × 10⁴ D⁺ (D_s⁺) → K_S⁰K⁺π⁺π⁻ decays. CP violation is searched for in the difference between the T-odd asymmetries, obtained using triple product correlations, measured for D and anti-D decays. The measured CP violation parameters are A_T(D⁰) = (1.0 ± 5.1(stat) ± 4.4(syst)) × 10⁻³, A_T(D⁺) = (-11.96 ± 10.04(stat) ± 4.81(syst)) × 10⁻³ and A_T(D_s⁺) = (-13.57 ± 7.67(stat) ± 4.82(syst)) × 10⁻³. This search for CP violation showed that T-odd correlations are a powerful tool to measure the CP-violating observable A_T. The relative simplicity of an analysis based on T-odd correlations and the high quality of the results that can be obtained make this tool fundamental for searches for CP violation in four-body decays. Although CP violation was not found, excluding New Physics effects down to a sensitivity of about 0.5%, it is still worthwhile to search for CP violation in D decays. The high statistics that can be obtained at the LHC or at the proposed high-luminosity B-factories make this topic one that should be given high consideration by experiments such as LHCb, SuperB or SuperBelle. The results outlined in this thesis strongly suggest including a similar analysis in the physics program of these experiments.
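    The abstract does not spell out how the T-odd observable is constructed. One common convention in four-body charm analyses, stated here as an assumption rather than the thesis' exact definition, forms a triple product of three final-state momenta in the D rest frame and builds an asymmetry from its sign:

```latex
% A hedged sketch of a standard triple-product construction (assumed, not quoted from the thesis).
C_T \equiv \vec{p}_{K^+} \cdot \left( \vec{p}_{\pi^+} \times \vec{p}_{\pi^-} \right), \qquad
A_T \equiv \frac{\Gamma(C_T > 0) - \Gamma(C_T < 0)}{\Gamma(C_T > 0) + \Gamma(C_T < 0)} ,
```

    with a corresponding asymmetry measured in the CP-conjugate decays; taking the difference of the two asymmetries cancels final-state-interaction effects common to both and isolates a genuinely CP-violating quantity.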

  6. School environment assessment tools to address behavioural risk factors of non-communicable diseases: A scoping review.

    PubMed

    Saluja, Kiran; Rawal, Tina; Bassi, Shalini; Bhaumik, Soumyadeep; Singh, Ankur; Park, Min Hae; Kinra, Sanjay; Arora, Monika

    2018-06-01

    We aimed to identify, describe and analyse school environment assessment (SEA) tools that address behavioural risk factors (unhealthy diet, physical inactivity, tobacco and alcohol consumption) for non-communicable diseases (NCD). We searched MEDLINE and Web of Science, hand-searched reference lists and contacted experts. Basic characteristics, measures assessed and measurement properties (validity, reliability, usability) of identified tools were extracted. We narratively synthesized the data and used content analysis to develop a list of measures used in the SEA tools. Twenty-four SEA tools were identified, mostly from developed countries. Of these, 15 were questionnaire-based, 8 were checklist- or observation-based tools, and one tool used a combined checklist/observation-based and telephone questionnaire approach. Only 1 SEA tool had components related to all four NCD risk factors, 2 SEA tools assessed three NCD risk factors (diet/nutrition, physical activity, tobacco), 10 SEA tools assessed two NCD risk factors (diet/nutrition and physical activity) and 11 SEA tools assessed only one NCD risk factor. Several measures were used in the tools to assess the four NCD risk factors, but tobacco and alcohol were sparingly included. Measurement properties were reported for 14 tools. The review provides a comprehensive list of measures used in SEA tools, which could be a valuable resource to guide future development of such tools. A valid and reliable SEA tool that could simultaneously evaluate all NCD risk factors, and that has been tested in different settings with varying resource availability, is needed.

  7. Research resource: Update and extension of a glycoprotein hormone receptors web application.

    PubMed

    Kreuchwig, Annika; Kleinau, Gunnar; Kreuchwig, Franziska; Worth, Catherine L; Krause, Gerd

    2011-04-01

    The SSFA-GPHR (Sequence-Structure-Function-Analysis of Glycoprotein Hormone Receptors) database provides a comprehensive set of mutation data for the glycoprotein hormone receptors (covering the lutropin, the FSH, and the TSH receptors). Moreover, it provides a platform for comparison and investigation of these homologous receptors and helps in understanding protein malfunctions associated with several diseases. Besides extending the data set (> 1100 mutations), the database has been completely redesigned and several novel features and analysis tools have been added to the web site. These tools allow the focused extraction of semiquantitative mutant data from the GPHR subtypes and different experimental approaches. Functional and structural data of the GPHRs are now linked interactively at the web interface, and new tools for data visualization (on three-dimensional protein structures) are provided. The interpretation of functional findings is supported by receptor morphings simulating intramolecular changes during the activation process, which thus help to trace the potential function of each amino acid and provide clues to the local structural environment, including potentially relocated spatial counterpart residues. Furthermore, double and triple mutations are newly included to allow the analysis of their functional effects related to their spatial interrelationship in structures or homology models. A new important feature is the search option and data visualization by interactive and user-defined snake-plots. These new tools allow fast and easy searches for specific functional data and thereby give deeper insights in the mechanisms of hormone binding, signal transduction, and signaling regulation. The web application "Sequence-Structure-Function-Analysis of GPHRs" is accessible on the internet at http://www.ssfa-gphr.de/.

  8. Research Trend Visualization by MeSH Terms from PubMed.

    PubMed

    Yang, Heyoung; Lee, Hyuck Jai

    2018-05-30

    Motivation: PubMed is a primary source of biomedical information, comprising a search tool and the biomedical literature from MEDLINE (the US National Library of Medicine's premier bibliographic database), life science journals and online books. Complementary tools to PubMed have been developed to help users search for literature and acquire knowledge. However, these tools are insufficient to overcome the difficulties users face due to the proliferation of biomedical literature. A new method is needed for searching the knowledge in the biomedical field. Methods: A new method is proposed in this study for visualizing recent research trends based on the documents retrieved for a search query given by the user. Medical Subject Headings (MeSH) are used as the primary analytical element. MeSH terms are extracted from the literature and the correlations between them are calculated. A MeSH network, called MeSH Net, is generated as the final result based on the Pathfinder Network algorithm. Results: A case study for verification of the proposed method was carried out on a research area defined by the search query (immunotherapy and cancer and "tumor microenvironment"). The MeSH Net generated by the method is in good agreement with the actual research activities in the research area (immunotherapy). Conclusion: A prototype application generating MeSH Net was developed. The application, which could be used as a "guide map for travelers", allows users to quickly and easily acquire knowledge of research trends. The combination of PubMed and MeSH Net is expected to be an effective complementary system for researchers in the biomedical field who experience difficulties with search and information analysis.
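    The MeSH Net construction, extracting MeSH terms per record, scoring how strongly pairs co-occur and then pruning the network, can be sketched as below. This is not the authors' implementation: the full method applies the Pathfinder Network algorithm for pruning, whereas a simple weight threshold stands in for that step here, and the records and terms are invented.

```python
# A minimal sketch: build a MeSH co-occurrence network from retrieved records
# and keep only the stronger links (a stand-in for Pathfinder Network pruning).
from collections import Counter
from itertools import combinations

records = [
    {"Immunotherapy", "Tumor Microenvironment", "Neoplasms"},
    {"Immunotherapy", "T-Lymphocytes", "Neoplasms"},
    {"Tumor Microenvironment", "T-Lymphocytes", "Macrophages"},
    {"Immunotherapy", "Tumor Microenvironment", "T-Lymphocytes"},
]

cooc = Counter()
for mesh_terms in records:
    for a, b in combinations(sorted(mesh_terms), 2):
        cooc[(a, b)] += 1

edges = [(a, b, w) for (a, b), w in cooc.items() if w >= 2]
for a, b, w in sorted(edges, key=lambda e: -e[2]):
    print(f"{a} -- {b} (weight {w})")
```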

  9. FOCuS: a metaheuristic algorithm for computing knockouts from genome-scale models for strain optimization.

    PubMed

    Mutturi, Sarma

    2017-06-27

    Although a handful of tools are available for constraint-based flux analysis to generate knockout strains, most of these are based either on bilevel MIP or on its modifications. However, metaheuristic approaches, which are known for their flexibility and scalability, have been less studied. Moreover, in the existing tools, sectioning of the search space to find optimal knockouts has not been considered. Herein, a novel computational procedure, termed FOCuS (Flower-pOllination coupled Clonal Selection algorithm), was developed to find the optimal reaction knockouts from a metabolic network that maximize the production of specific metabolites. FOCuS derives its benefits from the nature-inspired flower pollination algorithm and the artificial immune system-inspired clonal selection algorithm to converge to an optimal solution. To evaluate the performance of FOCuS, reported results obtained from both MIP and other metaheuristic-based tools were compared in selected case studies. The results demonstrated the robustness of FOCuS irrespective of the size of the metabolic network and the number of knockouts. Moreover, sectioning of the search space, coupled with pooling of priority reactions based on their contribution to the objective function to generate a smaller search space, significantly reduced the computational time.

  10. The APIS service : a tool for accessing value-added HST planetary auroral observations over 1997-2015

    NASA Astrophysics Data System (ADS)

    Lamy, L.; Henry, F.; Prangé, R.; Le Sidaner, P.

    2015-10-01

    The Auroral Planetary Imaging and Spectroscopy (APIS) service http://obspm.fr/apis/ provides open and interactive access to processed auroral observations of the outer planets and their satellites. Such observations are of interest for a wide community at the interface between planetology, magnetospheric and heliospheric physics. APIS consists of (i) a high-level database, built from planetary auroral observations acquired by the Hubble Space Telescope (HST) since 1997 with its most frequently used far-ultraviolet spectro-imagers, (ii) a dedicated search interface aimed at efficiently browsing this database through relevant conditional search criteria (Figure 1), and (iii) the ability to interactively work with the data online through plotting tools developed by the Virtual Observatory (VO) community, such as Aladin and Specview. This service is VO compliant and can therefore also be queried by external search tools of the VO community. The diversity of available data and the capability to sort them by relevant physical criteria should in particular facilitate statistical studies, on long-term scales and/or through multi-instrumental, multispectral combined analysis [1,2]. We will present the updated capabilities of APIS with several examples. Several tutorials are available online.

  11. A comparative analysis of Patient-Reported Expanded Disability Status Scale tools.

    PubMed

    Collins, Christian DE; Ivry, Ben; Bowen, James D; Cheng, Eric M; Dobson, Ruth; Goodin, Douglas S; Lechner-Scott, Jeannette; Kappos, Ludwig; Galea, Ian

    2016-09-01

    Patient-Reported Expanded Disability Status Scale (PREDSS) tools are an attractive alternative to the Expanded Disability Status Scale (EDSS) during long term or geographically challenging studies, or in pressured clinical service environments. Because the studies reporting these tools have used different metrics to compare the PREDSS and EDSS, we undertook an individual patient data level analysis of all available tools. Spearman's rho and the Bland-Altman method were used to assess correlation and agreement respectively. A systematic search for validated PREDSS tools covering the full EDSS range identified eight such tools. Individual patient data were available for five PREDSS tools. Excellent correlation was observed between EDSS and PREDSS with all tools. A higher level of agreement was observed with increasing levels of disability. In all tools, the 95% limits of agreement were greater than the minimum EDSS difference considered to be clinically significant. However, the intra-class coefficient was greater than that reported for EDSS raters of mixed seniority. The visual functional system was identified as the most significant predictor of the PREDSS-EDSS difference. This analysis will (1) enable researchers and service providers to make an informed choice of PREDSS tool, depending on their individual requirements, and (2) facilitate improvement of current PREDSS tools. © The Author(s), 2015.
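    The two statistics named in this record are straightforward to reproduce. The sketch below computes Spearman's rho and the Bland-Altman bias and 95% limits of agreement for a pair of score vectors; the scores themselves are invented for illustration.

```python
# A minimal sketch: Spearman correlation plus Bland-Altman bias and
# 95% limits of agreement between EDSS and a patient-reported PREDSS score.
import numpy as np
from scipy.stats import spearmanr

edss   = np.array([1.5, 2.0, 3.5, 4.0, 6.0, 6.5, 7.0])
predss = np.array([2.0, 2.0, 3.0, 4.5, 6.0, 6.0, 7.5])

rho, p = spearmanr(edss, predss)

diff = predss - edss
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)

print(f"Spearman rho={rho:.2f} (p={p:.3f}); bias={bias:.2f}; "
      f"95% LoA=({loa[0]:.2f}, {loa[1]:.2f})")
```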

  12. Evaluation of Dengue-Related Health Information on the Internet

    PubMed Central

    Rao, Navya R.; Mohapatra, Manaswini; Mishra, Swayamprabha; Joshi, Ashish

    2012-01-01

    The objective of this study was to examine the quality of dengue-related health information on the Internet. Three raters used the keyword dengue to search the Google, Yahoo!, and Bing search engines during August 2011. The first 20 websites from each search engine were examined for a total of 60 sites. Duplicate, nonfunctional, non-English, and nonoperational websites were excluded from the study, resulting in 36 sites for final analysis. The 16-item DISCERN tool was used to evaluate the quality of dengue-related health information on the Internet. Chi-square analysis and analysis of variance were performed to compare the DISCERN scores. Inter-rater reliability analysis showed significant differences in the level of agreement among the three raters. The 36 unique websites were categorized into .com, .edu, .gov, .org, and other sites. The .com sites had the lowest DISCERN scores. Educating consumers on how to find and recognize valid health information on the Internet may lead to better informed decision making. PMID:22783151

  13. Protein structural similarity search by Ramachandran codes

    PubMed Central

    Lo, Wei-Cheng; Huang, Po-Jung; Chang, Chih-Hung; Lyu, Ping-Chiang

    2007-01-01

    Background Protein structural data has increased exponentially, such that fast and accurate tools are necessary for structural similarity searching. To improve the search speed, several methods have been designed to reduce three-dimensional protein structures to one-dimensional text strings that are then analyzed by traditional sequence alignment methods; however, the accuracy is usually sacrificed and the speed is still unable to match sequence similarity search tools. Here, we aimed to improve the linear encoding methodology and develop efficient search tools that can rapidly retrieve structural homologs from large protein databases. Results We propose a new linear encoding method, SARST (Structural similarity search Aided by Ramachandran Sequential Transformation). SARST transforms protein structures into text strings through a Ramachandran map organized by nearest-neighbor clustering and uses a regenerative approach to produce substitution matrices. Then, classical sequence similarity search methods can be applied to the structural similarity search. Its accuracy is similar to that of Combinatorial Extension (CE), and it works over 243,000 times faster, searching 34,000 proteins in 0.34 sec with a 3.2-GHz CPU. SARST provides statistically meaningful expectation values to assess the retrieved information. It has been implemented into a web service and a stand-alone Java program that is able to run on many different platforms. Conclusion As a database search method, SARST can rapidly distinguish high from low similarities and efficiently retrieve homologous structures. It demonstrates that the easily accessible linear encoding methodology has the potential to serve as a foundation for efficient protein structural similarity search tools. These search tools should be applicable to automated and high-throughput functional annotations or predictions for the ever-increasing number of published protein structures in this post-genomic era. PMID:17716377
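    The linear-encoding idea behind SARST can be illustrated with a much-simplified sketch: assign each residue's (phi, psi) pair to the nearest of a few Ramachandran cluster centres and emit one letter per residue, yielding a string that ordinary sequence-alignment tools can process. The cluster centres and alphabet below are hypothetical stand-ins, not SARST's actual codes.

```python
# Illustrative (assumed, simplified) sketch of Ramachandran-based linear encoding.
import math

# Hypothetical cluster centres in degrees; the real alphabet is derived by
# nearest-neighbour clustering of observed Ramachandran distributions.
CENTRES = {"A": (-60.0, -45.0),   # alpha-helical region
           "B": (-120.0, 130.0),  # beta-sheet region
           "L": (60.0, 45.0)}     # left-handed helix region

def angular_dist(a, b):
    # Shortest distance between two angles on a 360-degree circle.
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def encode(phi_psi_list):
    letters = []
    for phi, psi in phi_psi_list:
        letter = min(CENTRES, key=lambda k: math.hypot(
            angular_dist(phi, CENTRES[k][0]), angular_dist(psi, CENTRES[k][1])))
        letters.append(letter)
    return "".join(letters)

# Toy backbone dihedrals: a short helix followed by a strand.
print(encode([(-57, -47), (-63, -41), (-119, 127), (-135, 140)]))  # -> "AABB"
```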

  14. Lung ultrasound in diagnosing pneumonia in childhood: a systematic review and meta-analysis.

    PubMed

    Orso, Daniele; Ban, Alessio; Guglielmo, Nicola

    2018-06-21

    Pneumonia is the third leading cause of death in children under 5 years of age worldwide. In pediatrics, both the accuracy and safety of diagnostic tools are important. Lung ultrasound (LUS) could be a safe diagnostic tool for this reason. We conducted a systematic review and meta-analysis of diagnostic studies of LUS for predicting pneumonia in pediatric patients. The Medline, CINAHL, Cochrane Library, Embase, SPORTDiscus, ScienceDirect, and Web of Science databases from inception to September 2017 were searched. All studies that evaluated the diagnostic accuracy of LUS in determining the presence of pneumonia in patients under 18 years of age were included. 1042 articles were found by systematic search. 76 articles were assessed for eligibility. Seventeen studies were included in the systematic review. We included 2612 pooled cases. The age of the pooled sample population ranged from 0 to about 21 years old. Summary sensitivity, specificity, and AUC were 0.94 (IQR: 0.89-0.97), 0.93 (IQR: 0.86-0.98), and 0.98 (IQR: 0.94-0.99), respectively. No agreement on reference standard was detected: nine studies used chest X-rays, while four studies considered the clinical diagnosis. Only one study used computed tomography. LUS seems to be a promising tool for diagnosing pneumonia in children. However, the high heterogeneity found across the individual studies, and the absence of a reliable reference standard, make the findings questionable. More methodologically rigorous studies are needed.
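    For orientation, the sketch below shows the naive version of pooling diagnostic 2x2 tables into summary sensitivity and specificity; the counts are invented, and a review such as the one above would use proper random-effects or hierarchical models rather than this simple aggregation.

```python
# Naive pooling of diagnostic 2x2 tables; invented counts, illustration only.
studies = [  # (true positives, false negatives, true negatives, false positives)
    (45, 3, 38, 4),
    (60, 5, 52, 2),
    (30, 2, 25, 3),
]

tp = sum(s[0] for s in studies)
fn = sum(s[1] for s in studies)
tn = sum(s[2] for s in studies)
fp = sum(s[3] for s in studies)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"pooled sensitivity = {sensitivity:.2f}, pooled specificity = {specificity:.2f}")
```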

  15. Project Lefty: More Bang for the Search Query

    ERIC Educational Resources Information Center

    Varnum, Ken

    2010-01-01

    This article describes Project Lefty, a search system that, at a minimum, adds a layer on top of traditional federated search tools to make the wait for results more worthwhile for researchers. At best, Project Lefty improves search queries and relevance rankings for web-scale discovery tools to make the results themselves more relevant…

  16. Reviewing and Viewing.

    ERIC Educational Resources Information Center

    Clements, Douglas H., Ed.; And Others

    1988-01-01

    Presents reviews of three software packages. Includes "Cube Builder: A 3-D Geometry Tool," which allows students to build three-dimensional shapes; "Number Master," a multipurpose practice program for whole number computation; and "Safari Search: Problem Solving and Inference," which focuses on decision making in mathematical analysis. (PK)

  17. Development of a Searchable Metabolite Database and Simulator of Xenobiotic Metabolism

    EPA Science Inventory

    A computational tool (MetaPath) has been developed for storage and analysis of metabolic pathways and associated metadata. The system is capable of sophisticated text and chemical structure/substructure searching as well as rapid comparison of metabolites formed across chemicals,...

  18. Multiple Signal Classification for Gravitational Wave Burst Search

    NASA Astrophysics Data System (ADS)

    Cao, Junwei; He, Zhengqi

    2013-01-01

    This work is mainly focused on the application of the multiple signal classification (MUSIC) algorithm for gravitational wave burst search. This algorithm extracts important gravitational wave characteristics from signals coming from detectors with arbitrary position, orientation and noise covariance. In this paper, the MUSIC algorithm is described in detail along with the necessary adjustments required for gravitational wave burst search. The algorithm's performance is measured using simulated signals and noise. MUSIC is compared with the Q-transform for signal triggering and with Bayesian analysis for direction of arrival (DOA) estimation, using the Ω-pipeline. Experimental results show that MUSIC has a lower resolution but is faster. MUSIC is a promising tool for real-time gravitational wave search for multi-messenger astronomy.
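    A minimal, generic MUSIC example (not the paper's multi-detector implementation) is sketched below: the sample covariance of simulated array snapshots is eigen-decomposed, and steering vectors are scanned against the noise subspace to locate the source direction.

```python
# Generic MUSIC sketch for a uniform linear array; all parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
M, snapshots, d = 6, 200, 0.5            # sensors, samples, spacing (wavelengths)
true_angle = np.deg2rad(20.0)

def steer(theta):
    # Steering vector of a uniform linear array for arrival angle theta.
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

# Simulated snapshots: one plane-wave source plus white noise.
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = 0.1 * (rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots)))
X = np.outer(steer(true_angle), s) + noise

R = X @ X.conj().T / snapshots           # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(R)     # ascending eigenvalues
En = eigvecs[:, :-1]                     # noise subspace (one source assumed)

angles = np.deg2rad(np.linspace(-90, 90, 721))
pseudo = [1.0 / np.linalg.norm(En.conj().T @ steer(a)) ** 2 for a in angles]
print("estimated direction (deg):", np.rad2deg(angles[int(np.argmax(pseudo))]))
```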

  19. A validated search assessment tool: assessing practice-based learning and improvement in a residency program.

    PubMed

    Rana, Gurpreet K; Bradley, Doreen R; Hamstra, Stanley J; Ross, Paula T; Schumacher, Robert E; Frohna, John G; Haftel, Hilary M; Lypson, Monica L

    2011-01-01

    The objective of this study was to validate an assessment instrument for MEDLINE search strategies at an academic medical center. Two approaches were used to investigate if the search assessment tool could capture performance differences in search strategy construction. First, data from an evaluation of MEDLINE searches from a pediatric resident's longitudinal assessment were investigated. Second, a cross-section of search strategies from residents in one incoming class was compared with strategies of residents graduating a year later. MEDLINE search strategies formulated by faculty who had been identified as having search expertise were used as a gold standard comparison. Participants were presented with a clinical scenario and asked to identify the search question and conduct a MEDLINE search. Two librarians rated the blinded search strategies. Search strategy scores were significantly higher for residents who received training than the comparison group with no training. There was no significant difference in search strategy scores between senior residents who received training and faculty experts. The results provide evidence for the validity of the instrument to evaluate MEDLINE search strategies. This assessment tool can measure improvements in information-seeking skills and provide data to fulfill Accreditation Council for Graduate Medical Education competencies.

  20. Spectral comb mitigation to improve continuous-wave search sensitivity in Advanced LIGO

    NASA Astrophysics Data System (ADS)

    Neunzert, Ansel; LIGO Scientific Collaboration; Virgo Collaboration

    2017-01-01

    Searches for continuous gravitational waves, such as those emitted by rapidly spinning non-axisymmetric neutron stars, are degraded by the presence of narrow noise "lines" in detector data. These lines either reduce the spectral band available for analysis (if identified as noise and removed) or cause spurious outliers (if unidentified). Many belong to larger structures known as combs: series of evenly-spaced lines which appear across wide frequency ranges. This talk will focus on the challenges of comb identification and mitigation. I will discuss tools and methods for comb analysis, and case studies of comb mitigation at the LIGO Hanford detector site.

  1. Enabling Searches on Wavelengths in a Hyperspectral Indices Database

    NASA Astrophysics Data System (ADS)

    Piñuela, F.; Cerra, D.; Müller, R.

    2017-10-01

    Spectral indices derived from hyperspectral reflectance measurements are powerful tools to estimate physical parameters in a non-destructive and precise way for several fields of application, among them vegetation health analysis, coastal and deep-water constituents, geology, and atmospheric composition. In recent years, several micro-hyperspectral sensors have appeared, with both full-frame and push-broom acquisition technologies, while in the near future several hyperspectral spaceborne missions are planned to be launched. This is fostering the use of hyperspectral data in basic and applied research, causing a large number of spectral indices to be defined and used in various applications. Ad hoc search engines are therefore needed to retrieve the most appropriate indices for a given application. In traditional systems, query input parameters are limited to alphanumeric strings, while characteristics such as spectral range/bandwidth are not used in any existing search engine. Such information would be relevant, as it enables an inverse type of search: given the spectral capabilities of a given sensor or a specific spectral band, find all indices which can be derived from it. This paper describes a tool which enables such a search by using the central wavelength or spectral range used by a given index as a search parameter. This offers the ability to manage numeric wavelength ranges in order to select indices which work best in a given set of wavelengths or wavelength ranges.
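    The inverse search described above reduces, at its core, to a range query over wavelengths. The sketch below illustrates the idea with invented index definitions and sensor band ranges: an index is returned only if every wavelength it requires falls inside one of the sensor's bands.

```python
# Sketch of a wavelength-coverage query; index definitions and sensor bands
# are illustrative, not the database's actual contents.
indices = {
    "NDVI_like":  [660.0, 860.0],          # required central wavelengths (nm)
    "RedEdge_ix": [705.0, 750.0],
    "SWIR_ix":    [1610.0, 2200.0],
}
sensor_bands = [(620.0, 680.0), (690.0, 730.0), (770.0, 900.0)]  # (min, max) nm

def covered(wavelength, bands):
    # True if the wavelength lies within any of the sensor's bands.
    return any(lo <= wavelength <= hi for lo, hi in bands)

usable = [name for name, wavelengths in indices.items()
          if all(covered(w, sensor_bands) for w in wavelengths)]
print(usable)   # -> ['NDVI_like']
```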

  2. The diagnostic accuracy of magnetic resonance venography in the detection of deep venous thrombosis: a systematic review and meta-analysis.

    PubMed

    Abdalla, G; Fawzi Matuk, R; Venugopal, V; Verde, F; Magnuson, T H; Schweitzer, M A; Steele, K E

    2015-08-01

    To search the literature for further evidence for the use of magnetic resonance venography (MRV) in the detection of suspected deep vein thrombosis (DVT) and to re-evaluate the accuracy of MRV in this setting. PubMed, EMBASE, Scopus, Cochrane, and Web of Science were searched. Study quality and the risk of bias were evaluated using the QUADAS 2. A random effects meta-analysis including subgroup and sensitivity analyses was performed. The search identified 23 observational studies, all from academic centres. Sixteen articles were included in the meta-analysis. The summary estimates for MRV as a diagnostic non-invasive tool revealed a sensitivity of 93% (95% confidence interval [CI]: 89% to 95%) and specificity of 96% (95% CI: 94% to 97%). The heterogeneity of the studies was high. Inconsistency (I2) for sensitivity and specificity was 80.7% and 77.9%, respectively. Further studies investigating the use of MRV in the detection of suspected DVT did not offer further evidence to support the replacement of ultrasound with MRV as the first-line investigation. However, MRV may offer an alternative tool for the detection/diagnosis of DVT in patients for whom ultrasound is inadequate or not feasible (such as obese patients). Copyright © 2015 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  3. The Auroral Planetary Imaging and Spectroscopy (APIS) service

    NASA Astrophysics Data System (ADS)

    Lamy, L.; Prangé, R.; Henry, F.; Le Sidaner, P.

    2015-06-01

    The Auroral Planetary Imaging and Spectroscopy (APIS) service, accessible online, provides open and interactive access to processed auroral observations of the outer planets and their satellites. Such observations are of interest for a wide community at the interface between planetology, magnetospheric and heliospheric physics. APIS consists of (i) a high-level database, built from planetary auroral observations acquired by the Hubble Space Telescope (HST) since 1997 with its most used Far-Ultraviolet spectro-imagers, (ii) a dedicated search interface aimed at browsing this database efficiently through relevant conditional search criteria and (iii) the ability to work interactively with the data online through plotting tools developed by the Virtual Observatory (VO) community, such as Aladin and Specview. This service is VO compliant and can therefore also be queried by external search tools of the VO community. The diversity of available data and the capability to sort them by relevant physical criteria should in particular facilitate statistical studies, on long-term scales and/or with multi-instrumental, multi-spectral combined analysis.

  4. Dfam: a database of repetitive DNA based on profile hidden Markov models.

    PubMed

    Wheeler, Travis J; Clements, Jody; Eddy, Sean R; Hubley, Robert; Jones, Thomas A; Jurka, Jerzy; Smit, Arian F A; Finn, Robert D

    2013-01-01

    We present a database of repetitive DNA elements, called Dfam (http://dfam.janelia.org). Many genomes contain a large fraction of repetitive DNA, much of which is made up of remnants of transposable elements (TEs). Accurate annotation of TEs enables research into their biology and can shed light on the evolutionary processes that shape genomes. Identification and masking of TEs can also greatly simplify many downstream genome annotation and sequence analysis tasks. The commonly used TE annotation tools RepeatMasker and Censor depend on sequence homology search tools such as cross_match and BLAST variants, as well as Repbase, a collection of known TE families each represented by a single consensus sequence. Dfam contains entries corresponding to all Repbase TE entries for which instances have been found in the human genome. Each Dfam entry is represented by a profile hidden Markov model, built from alignments generated using RepeatMasker and Repbase. When used in conjunction with the hidden Markov model search tool nhmmer, Dfam produces a 2.9% increase in coverage over consensus sequence search methods on a large human benchmark, while maintaining low false discovery rates, and coverage of the full human genome is 54.5%. The website provides a collection of tools and data views to support improved TE curation and annotation efforts. Dfam is also available for download in flat file format or in the form of MySQL table dumps.

  5. ProCon - PROteomics CONversion tool.

    PubMed

    Mayer, Gerhard; Stephan, Christian; Meyer, Helmut E; Kohl, Michael; Marcus, Katrin; Eisenacher, Martin

    2015-11-03

    With the growing amount of experimental data produced in proteomics experiments and the requirements/recommendations of journals in the proteomics field to make the data described in papers publicly available, a need arises for long-term storage of proteomics data in public repositories. Such uploads require proteomics data in a standardized format. It is therefore desirable that proprietary vendors' software will in the future integrate an export functionality using the standard formats for proteomics results defined by the HUPO-PSI group. Currently, not all search engines and analysis tools support these standard formats. In the meantime there is a need to provide user-friendly, free-to-use conversion tools that can convert the data into such standard formats in order to support wet-lab scientists in creating proteomics data files ready for upload into the public repositories. ProCon is such a conversion tool, written in Java, for the conversion of proteomics identification data into the standard formats mzIdentML and PRIDE XML. It allows the conversion of Sequest™/Comet .out files, of search results from the popular and often-used ProteomeDiscoverer® software (versions 1.1 to 1.4), and of search results stored in the LIMS systems ProteinScape® 1.3 and 2.1 into mzIdentML and PRIDE XML. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015. Published by Elsevier B.V.

  6. Prospective Molecular Characterization of Burn Wound Colonization: Novel Tools and Analysis

    DTIC Science & Technology

    2014-02-01

    from patients with endocarditis and wound/soft tissue infections, have been sequenced and an initial analysis performed. Finally, enrollment in the...Price LB. Analysis of S. aureus isolates from endocarditis and skin/soft tissue infections A strict blast search was performed on the S. aureus...targets differentiating the cellulitis and endocarditis isolates. This was followed with a basic Pearson’s Chi-squared test with Yates’ continuity

  7. Google vs. the Library (Part II): Student Search Patterns and Behaviors When Using Google and a Federated Search Tool

    ERIC Educational Resources Information Center

    Georgas, Helen

    2014-01-01

    This study examines the information-seeking behavior of undergraduate students within a research context. Student searches were recorded while the participants used Google and a library (federated) search tool to find sources (one book, two articles, and one other source of their choosing) for a selected topic. The undergraduates in this study…

  8. Google vs. the Library: Student Preferences and Perceptions when Doing Research Using Google and a Federated Search Tool

    ERIC Educational Resources Information Center

    Georgas, Helen

    2013-01-01

    Federated searching was once touted as the library world's answer to Google, but ten years since federated searching technology's inception, how does it actually compare? This study focuses on undergraduate student preferences and perceptions when doing research using both Google and a federated search tool. Students were asked about their…

  9. RNA FRABASE 2.0: an advanced web-accessible database with the capacity to search the three-dimensional fragments within RNA structures

    PubMed Central

    2010-01-01

    Background Recent discoveries concerning novel functions of RNA, such as RNA interference, have contributed towards the growing importance of the field. In this respect, a deeper knowledge of complex three-dimensional RNA structures is essential to understand their new biological functions. A number of bioinformatic tools have been proposed to explore two major structural databases (PDB, NDB) in order to analyze various aspects of RNA tertiary structures. One of these tools is RNA FRABASE 1.0, the first web-accessible database with an engine for automatic search of 3D fragments within PDB-derived RNA structures. This search is based upon the user-defined RNA secondary structure pattern. In this paper, we present and discuss RNA FRABASE 2.0. This second version of the system represents a major extension of this tool in terms of providing new data and a wide spectrum of novel functionalities. An intuitively operated web server platform enables very fast user-tailored search of three-dimensional RNA fragments, their multi-parameter conformational analysis and visualization. Description RNA FRABASE 2.0 has stored information on 1565 PDB-deposited RNA structures, including all NMR models. The RNA FRABASE 2.0 search engine algorithms operate on the database of the RNA sequences and the new library of RNA secondary structures, coded in the dot-bracket format extended to hold multi-stranded structures and to cover residues whose coordinates are missing in the PDB files. The library of RNA secondary structures (and their graphics) is made available. A high level of efficiency of the 3D search has been achieved by introducing novel tools to formulate advanced searching patterns and to screen highly populated tertiary structure elements. RNA FRABASE 2.0 also stores data and conformational parameters in order to provide "on the spot" structural filters to explore the three-dimensional RNA structures. An instant visualization of the 3D RNA structures is provided. RNA FRABASE 2.0 is freely available at http://rnafrabase.cs.put.poznan.pl. Conclusions RNA FRABASE 2.0 provides a novel database and powerful search engine which is equipped with new data and functionalities that are unavailable elsewhere. Our intention is that this advanced version of the RNA FRABASE will be of interest to all researchers working in the RNA field. PMID:20459631

  10. RNA FRABASE 2.0: an advanced web-accessible database with the capacity to search the three-dimensional fragments within RNA structures.

    PubMed

    Popenda, Mariusz; Szachniuk, Marta; Blazewicz, Marek; Wasik, Szymon; Burke, Edmund K; Blazewicz, Jacek; Adamiak, Ryszard W

    2010-05-06

    Recent discoveries concerning novel functions of RNA, such as RNA interference, have contributed towards the growing importance of the field. In this respect, a deeper knowledge of complex three-dimensional RNA structures is essential to understand their new biological functions. A number of bioinformatic tools have been proposed to explore two major structural databases (PDB, NDB) in order to analyze various aspects of RNA tertiary structures. One of these tools is RNA FRABASE 1.0, the first web-accessible database with an engine for automatic search of 3D fragments within PDB-derived RNA structures. This search is based upon the user-defined RNA secondary structure pattern. In this paper, we present and discuss RNA FRABASE 2.0. This second version of the system represents a major extension of this tool in terms of providing new data and a wide spectrum of novel functionalities. An intuitively operated web server platform enables very fast user-tailored search of three-dimensional RNA fragments, their multi-parameter conformational analysis and visualization. RNA FRABASE 2.0 has stored information on 1565 PDB-deposited RNA structures, including all NMR models. The RNA FRABASE 2.0 search engine algorithms operate on the database of the RNA sequences and the new library of RNA secondary structures, coded in the dot-bracket format extended to hold multi-stranded structures and to cover residues whose coordinates are missing in the PDB files. The library of RNA secondary structures (and their graphics) is made available. A high level of efficiency of the 3D search has been achieved by introducing novel tools to formulate advanced searching patterns and to screen highly populated tertiary structure elements. RNA FRABASE 2.0 also stores data and conformational parameters in order to provide "on the spot" structural filters to explore the three-dimensional RNA structures. An instant visualization of the 3D RNA structures is provided. RNA FRABASE 2.0 is freely available at http://rnafrabase.cs.put.poznan.pl. RNA FRABASE 2.0 provides a novel database and powerful search engine which is equipped with new data and functionalities that are unavailable elsewhere. Our intention is that this advanced version of the RNA FRABASE will be of interest to all researchers working in the RNA field.
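    The core of a secondary-structure pattern search can be illustrated with a toy matcher over plain dot-bracket strings; the database's actual engine and its extended multi-stranded notation are considerably richer, and the structures and pattern below are invented.

```python
# Toy dot-bracket pattern matcher; '*' matches any single character.
structures = {
    "frag1": "(((....)))..((...))",
    "frag2": "..(((......)))....",
}

def find_pattern(pattern, structure):
    """Return start positions where the dot-bracket pattern occurs."""
    hits = []
    for i in range(len(structure) - len(pattern) + 1):
        window = structure[i:i + len(pattern)]
        if all(p == "*" or p == c for p, c in zip(pattern, window)):
            hits.append(i)
    return hits

for name, s in structures.items():
    print(name, find_pattern("(((****)))", s))   # frag1 matches at position 0
```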

  11. Costing 'healthy' food baskets in Australia - a systematic review of food price and affordability monitoring tools, protocols and methods.

    PubMed

    Lewis, Meron; Lee, Amanda

    2016-11-01

    To undertake a systematic review to determine similarities and differences in metrics and results between recently and/or currently used tools, protocols and methods for monitoring Australian healthy food prices and affordability. Electronic databases of peer-reviewed literature and online grey literature were systematically searched using the PRISMA approach for articles and reports relating to healthy food and diet price assessment tools, protocols, methods and results that utilised retail pricing. National, state, regional and local areas of Australia from 1995 to 2015. Assessment tools, protocols and methods to measure the price of 'healthy' foods and diets. The search identified fifty-nine discrete surveys of 'healthy' food pricing incorporating six major food pricing tools (those used in multiple areas and time periods) and five minor food pricing tools (those used in a single survey area or time period). Analysis demonstrated methodological differences regarding: included foods; reference households; use of availability and/or quality measures; household income sources; store sampling methods; data collection protocols; analysis methods; and results. 'Healthy' food price assessment methods used in Australia lack comparability across all metrics and most do not fully align with a 'healthy' diet as recommended by the current Australian Dietary Guidelines. None have been applied nationally. Assessment of the price, price differential and affordability of healthy (recommended) and current (unhealthy) diets would provide more robust and meaningful data to inform health and fiscal policy in Australia. The INFORMAS 'optimal' approach provides a potential framework for development of these methods.

  12. Joint Improvised Explosive Device Defeat Organization

    DTIC Science & Technology

    2009-01-01

    searches increased exponentially. Palantir. Developed to provide C-IED network analysts with a collaborative link analysis tool, Palantir is used for...share data between teams and between other link analysis applications. Palantir outputs portray linked nodal networks, histogram data, and timeline...views. During FY 2008, the Palantir system was accessed by over 160 people investigating IED networks. Analyses by these people supported over

  13. COMPASS: a suite of pre- and post-search proteomics software tools for OMSSA

    PubMed Central

    Wenger, Craig D.; Phanstiel, Douglas H.; Lee, M. Violet; Bailey, Derek J.; Coon, Joshua J.

    2011-01-01

    Here we present the Coon OMSSA Proteomic Analysis Software Suite (COMPASS): a free and open-source software pipeline for high-throughput analysis of proteomics data, designed around the Open Mass Spectrometry Search Algorithm. We detail a synergistic set of tools for protein database generation, spectral reduction, peptide false discovery rate analysis, peptide quantitation via isobaric labeling, protein parsimony and protein false discovery rate analysis, and protein quantitation. We strive for maximum ease of use, utilizing graphical user interfaces and working with data files in the original instrument vendor format. Results are stored in plain text comma-separated values files, which are easy to view and manipulate with a text editor or spreadsheet program. We illustrate the operation and efficacy of COMPASS through the use of two LC–MS/MS datasets. The first is a dataset of a highly annotated mixture of standard proteins and manually validated contaminants that exhibits the identification workflow. The second is a dataset of yeast peptides, labeled with isobaric stable isotope tags and mixed in known ratios, to demonstrate the quantitative workflow. For these two datasets, COMPASS performs equivalently or better than the current de facto standard, the Trans-Proteomic Pipeline. PMID:21298793
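    The peptide false discovery rate analysis mentioned above is commonly done with a target-decoy strategy; the sketch below shows the generic idea (not necessarily the exact formula COMPASS applies) using invented PSM scores.

```python
# Generic target-decoy FDR sketch; PSM scores and decoy labels are invented.
psms = [  # (score, is_decoy)
    (95, False), (90, False), (88, True), (85, False),
    (80, False), (78, True), (75, False), (70, True),
]

psms.sort(key=lambda p: p[0], reverse=True)
decoys = targets = 0
for score, is_decoy in psms:
    decoys += is_decoy
    targets += not is_decoy
    fdr = decoys / max(targets, 1)       # estimated FDR at this score cutoff
    print(f"cutoff >= {score}: targets={targets}, decoys={decoys}, FDR={fdr:.2f}")
```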

  14. Pseudodiagnosticity Revisited

    ERIC Educational Resources Information Center

    Crupi, Vincenzo; Tentori, Katya; Lombardi, Luigi

    2009-01-01

    In the psychology of reasoning and judgment, the pseudodiagnosticity task has been a major tool for the empirical investigation of people's ability to search for diagnostic information. A novel normative analysis of this experimental paradigm is presented, by which the participants' prevailing responses turn out not to support the generally…

  15. On the use of cartographic projections in visualizing phylogenetic tree space

    PubMed Central

    2010-01-01

    Phylogenetic analysis is becoming an increasingly important tool for biological research. Applications include epidemiological studies, drug development, and evolutionary analysis. Phylogenetic search is a known NP-Hard problem. The size of the data sets which can be analyzed is limited by the exponential growth in the number of trees that must be considered as the problem size increases. A better understanding of the problem space could lead to better methods, which in turn could lead to the feasible analysis of more data sets. We present a definition of phylogenetic tree space and a visualization of this space that shows significant exploitable structure. This structure can be used to develop search methods capable of handling much larger data sets. PMID:20529355

  16. Searching the Internet for information on prostate cancer screening: an assessment of quality.

    PubMed

    Ilic, Dragan; Risbridger, Gail; Green, Sally

    2004-07-01

    To identify how on-line information relating to prostate cancer screening (PCS) is best sourced, whether through general, medical, or meta-search engines, and to assess the quality of that information. Websites providing information about PCS were searched across 15 search engines representing three distinct types: general, medical, and meta-search engines. The quality of on-line information was assessed using the DISCERN quality assessment tool. Quality performance characteristics were analyzed by performing Mann-Whitney U tests. Search engine efficiency was measured by each search query as a percentage of the relevant websites included for analysis from the total returned and analyzed by performing Kruskal-Wallis analysis of variance. Of 6690 websites reviewed, 84 unique websites were identified as providing information relevant to PCS. General and meta-search engines were significantly more efficient at retrieving relevant information on PCS compared with medical search engines. The quality of information was variable, with most of a poor standard. Websites that provided referral links to other resources and a citation of evidence provided a significantly better quality of information. In contrast, websites offering a direct service were more likely to provide a significantly poorer quality of information. The current lack of a clear consensus on guidelines and recommendation in published data is also reflected by the variable quality of information found on-line. Specialized medical search engines were no more likely to retrieve relevant, high-quality information than general or meta-search engines.

  17. New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and UrbIS

    NASA Astrophysics Data System (ADS)

    Crow, M. C.; Devarakonda, R.; Hook, L.; Killeffer, T.; Krassovski, M.; Boden, T.; King, A. W.; Wullschleger, S. D.

    2016-12-01

    Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This discussion describes tools being used in two different projects at Oak Ridge National Laboratory (ORNL), but at different stages of the data lifecycle. The Metadata Entry and Data Search Tool is being used for the documentation, archival, and data discovery stages for the Next Generation Ecosystem Experiment - Arctic (NGEE Arctic) project while the Urban Information Systems (UrbIS) Data Catalog is being used to support indexing, cataloging, and searching. The NGEE Arctic Online Metadata Entry Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload. The tool is built upon a Java SPRING framework to parse user input into, and from, XML output. Many aspects of the tool require use of a relational database including encrypted user-login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The UrbIS Data Catalog is a data discovery tool supported by the Mercury cataloging framework [2] which aims to compile urban environmental data from around the world into one location, and be searchable via a user-friendly interface. Each data record conveniently displays its title, source, and date range, and features: (1) a button for a quick view of the metadata, (2) a direct link to the data and, for some data sets, (3) a button for visualizing the data. The search box incorporates autocomplete capabilities for search terms and sorted keyword filters are available on the side of the page, including a map for searching by area. References: [1] Devarakonda, Ranjeet, et al. "Use of a metadata documentation and search tool for large data volumes: The NGEE arctic example." Big Data (Big Data), 2015 IEEE International Conference on. IEEE, 2015. [2] Devarakonda, R., Palanisamy, G., Wilson, B. E., & Green, J. M. (2010). Mercury: reusable metadata management, data discovery and access system. Earth Science Informatics, 3(1-2), 87-94.

  18. Using Enabling Technologies to Advance Data Intensive Analysis Tools in the JPL Tropical Cyclone Information System

    NASA Astrophysics Data System (ADS)

    Knosp, B.; Gangl, M. E.; Hristova-Veleva, S. M.; Kim, R. M.; Lambrigtsen, B.; Li, P.; Niamsuwan, N.; Shen, T. P. J.; Turk, F. J.; Vu, Q. A.

    2014-12-01

    The JPL Tropical Cyclone Information System (TCIS) brings together satellite, aircraft, and model forecast data from several NASA, NOAA, and other data centers to assist researchers in comparing and analyzing data related to tropical cyclones. The TCIS has been supporting specific science field campaigns, such as the Genesis and Rapid Intensification Processes (GRIP) campaign and the Hurricane and Severe Storm Sentinel (HS3) campaign, by creating near real-time (NRT) data visualization portals. These portals are intended to assist in mission planning, enhance the understanding of current physical processes, and improve model data by comparing it to satellite and aircraft observations. The TCIS NRT portals allow the user to view plots on a Google Earth interface. To complement these visualizations, the team has been working on developing data analysis tools to let the user actively interrogate areas of Level 2 swath and two-dimensional plots they see on their screen. As expected, these observation and model data are quite voluminous and bottlenecks in the system architecture can occur when the databases try to run geospatial searches for data files that need to be read by the tools. To improve the responsiveness of the data analysis tools, the TCIS team has been conducting studies on how to best store Level 2 swath footprints and run sub-second geospatial searches to discover data. The first objective was to improve the sampling accuracy of the footprints being stored in the TCIS database by comparing the Java-based NASA PO.DAAC Level 2 Swath Generator with a TCIS Python swath generator. The second objective was to compare the performance of four database implementations - MySQL, MySQL+Solr, MongoDB, and PostgreSQL - to see which database management system would yield the best geospatial query and storage performance. The final objective was to integrate our chosen technologies with our Joint Probability Density Function (Joint PDF), Wave Number Analysis, and Automated Rotational Center Hurricane Eye Retrieval (ARCHER) tools. In this presentation, we will compare the enabling technologies we tested and discuss which ones we selected for integration into the TCIS' data analysis tool architecture. We will also show how these techniques have been automated to provide access to NRT data through our analysis tools.
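    The geospatial discovery step described above amounts to testing query regions against stored swath footprints. As a database-free illustration, the sketch below runs a simple bounding-box pre-filter in Python; the actual work benchmarked such queries inside MySQL, MongoDB, and PostgreSQL, and all footprints here are invented.

```python
# Bounding-box pre-filter over hypothetical swath footprints; a real system
# would follow this with a finer polygon intersection test in the database.
footprints = {   # swath id -> (lon_min, lat_min, lon_max, lat_max)
    "swath_001": (-80.0, 10.0, -60.0, 25.0),
    "swath_002": (-50.0, 5.0, -30.0, 20.0),
}
query_box = (-70.0, 12.0, -55.0, 22.0)   # region around a storm of interest

def intersects(a, b):
    # True if two axis-aligned bounding boxes overlap.
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

matches = [sid for sid, box in footprints.items() if intersects(box, query_box)]
print(matches)   # -> ['swath_001']
```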

  19. High Frequency Scattering Code in a Distributed Processing Environment

    DTIC Science & Technology

    1991-06-01

    ...use of automated analysis tools is indicated. One tool developed by Pacific-Sierra Research Corporation and marketed by Intel Corporation for... XQ: EXECUTE CODE EN: END CODE. This input deck differs from that in the manual because the "PP" option is disabled in the modified code.

  20. Electronic Collection Management and Electronic Information Services

    DTIC Science & Technology

    2004-12-01

    federated search tools are still being perfected with much debate surrounding their use. Encouragingly, as the federated search tools have evolved...institutional repositories to be included in a federated search process, libraries would have to harvest the metadata from the repositories and then make...providers in Library High Tech News. At this time, federated search engines serve some user groups better than others. Undergraduate students are well

  1. A collection of open source applications for mass spectrometry data mining.

    PubMed

    Gallardo, Óscar; Ovelleiro, David; Gay, Marina; Carrascal, Montserrat; Abian, Joaquin

    2014-10-01

    We present several bioinformatics applications for the identification and quantification of phosphoproteome components by MS. These applications include a front-end graphical user interface that combines several Thermo RAW formats to MASCOT™ Generic Format extractors (EasierMgf), two graphical user interfaces for search engines OMSSA and SEQUEST (OmssaGui and SequestGui), and three applications, one for the management of databases in FASTA format (FastaTools), another for the integration of search results from up to three search engines (Integrator), and another one for the visualization of mass spectra and their corresponding database search results (JsonVisor). These applications were developed to solve some of the common problems found in proteomic and phosphoproteomic data analysis and were integrated in the workflow for data processing and feeding on our LymPHOS database. Applications were designed modularly and can be used standalone. These tools are written in Perl and Python programming languages and are supported on Windows platforms. They are all released under an Open Source Software license and can be freely downloaded from our software repository hosted at GoogleCode. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. PIA: An Intuitive Protein Inference Engine with a Web-Based User Interface.

    PubMed

    Uszkoreit, Julian; Maerkens, Alexandra; Perez-Riverol, Yasset; Meyer, Helmut E; Marcus, Katrin; Stephan, Christian; Kohlbacher, Oliver; Eisenacher, Martin

    2015-07-02

    Protein inference connects the peptide spectrum matches (PSMs) obtained from database search engines back to proteins, which are typically at the heart of most proteomics studies. Different search engines yield different PSMs and thus different protein lists. Analysis of results from one or multiple search engines is often hampered by different data exchange formats and lack of convenient and intuitive user interfaces. We present PIA, a flexible software suite for combining PSMs from different search engine runs and turning these into consistent results. PIA can be integrated into proteomics data analysis workflows in several ways. A user-friendly graphical user interface can be run either locally or (e.g., for larger core facilities) from a central server. For automated data processing, stand-alone tools are available. PIA implements several established protein inference algorithms and can combine results from different search engines seamlessly. On several benchmark data sets, we show that PIA can identify a larger number of proteins at the same protein FDR when compared to that using inference based on a single search engine. PIA supports the majority of established search engines and data in the mzIdentML standard format. It is implemented in Java and freely available at https://github.com/mpc-bioinformatics/pia.

  3. Analysis of queries sent to PubMed at the point of care: Observation of search behaviour in a medical teaching hospital

    PubMed Central

    Hoogendam, Arjen; Stalenhoef, Anton FH; Robbé, Pieter F de Vries; Overbeke, A John PM

    2008-01-01

    Background The use of PubMed to answer daily medical care questions is limited because it is challenging to retrieve a small set of relevant articles and time is restricted. Knowing what aspects of queries are likely to retrieve relevant articles can increase the effectiveness of PubMed searches. The objectives of our study were to identify queries that are likely to retrieve relevant articles by relating PubMed search techniques and tools to the number of articles retrieved and the selection of articles for further reading. Methods This was a prospective observational study of queries regarding patient-related problems sent to PubMed by residents and internists in internal medicine working in an Academic Medical Centre. We analyzed queries, search results, query tools (Mesh, Limits, wildcards, operators), selection of abstract and full-text for further reading, using a portal that mimics PubMed. Results PubMed was used to solve 1121 patient-related problems, resulting in 3205 distinct queries. Abstracts were viewed in 999 (31%) of these queries, and in 126 (39%) of 321 queries using query tools. The average term count per query was 2.5. Abstracts were selected in more than 40% of queries using four or five terms, increasing to 63% if the use of four or five terms yielded 2–161 articles. Conclusion Queries sent to PubMed by physicians at our hospital during daily medical care contain fewer than three terms. Queries using four to five terms, retrieving less than 161 article titles, are most likely to result in abstract viewing. PubMed search tools are used infrequently by our population and are less effective than the use of four or five terms. Methods to facilitate the formulation of precise queries, using more relevant terms, should be the focus of education and research. PMID:18816391

  4. Analysis of norovirus contamination of seafood

    USDA-ARS?s Scientific Manuscript database

    The study of human norovirus (NoVs) replication in vitro would be a highly useful tool to virologists and immunologists. For this reason, we have searched for new approaches to determine viability of noroviruses in food samples (especially sea food). Our research team has multiple years of experie...

  5. Lunar Habitat Optimization Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    SanScoucie, M. P.; Hull, P. V.; Tinker, M. L.; Dozier, G. V.

    2007-01-01

    Long-duration surface missions to the Moon and Mars will require bases to accommodate habitats for the astronauts. Transporting the materials and equipment required to build the necessary habitats is costly and difficult. The materials chosen for the habitat walls play a direct role in protection against hazards such as meteoroid impacts and radiation. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Clearly, an optimization method is warranted for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat wall design tool utilizing genetic algorithms (GAs) has been developed. GAs use a "survival of the fittest" philosophy where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multiobjective formulation of up-mass, heat loss, structural analysis, meteoroid impact protection, and radiation protection. This Technical Publication presents the research and development of this tool as well as a technique for finding the optimal GA search parameters.
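    A minimal weighted-sum genetic algorithm in the spirit of the habitat-wall problem is sketched below. The materials, scores, and weights are hypothetical stand-ins rather than the NASA tool's physics models for up-mass, heat loss, or radiation shielding; the sketch only shows the selection, crossover, and mutation loop.

```python
# Toy GA for a layered wall; all objective values are invented placeholders.
import random

MATERIALS = ["regolith", "aluminum", "polyethylene", "water"]
N_LAYERS, POP, GENS = 4, 30, 200

def score(wall):
    # Weighted-sum fitness: a real tool would call physics models here.
    mass = sum({"regolith": 0, "aluminum": 3, "polyethylene": 2, "water": 4}[m] for m in wall)
    shielding = sum({"regolith": 2, "aluminum": 1, "polyethylene": 3, "water": 3}[m] for m in wall)
    return 0.6 * shielding - 0.4 * mass

def crossover(a, b):
    cut = random.randrange(1, N_LAYERS)
    return a[:cut] + b[cut:]

def mutate(wall):
    i = random.randrange(N_LAYERS)
    return wall[:i] + [random.choice(MATERIALS)] + wall[i + 1:]

pop = [[random.choice(MATERIALS) for _ in range(N_LAYERS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=score, reverse=True)
    parents = pop[:POP // 2]                      # "survival of the fittest"
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

print("best wall layup:", max(pop, key=score))
```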

  6. Effect of the antimicrobial photodynamic therapy on microorganism reduction in deep caries lesions: a systematic review and meta-analysis.

    PubMed

    Ornellas, Pâmela Oliveira; Antunes, Leonardo Dos Santos; Fontes, Karla Bianca Fernandes da Costa; Póvoa, Helvécio Cardoso Corrêa; Küchler, Erika Calvano; Iorio, Natalia Lopes Pontes; Antunes, Lívia Azeredo Alves

    2016-09-01

    This study aimed to perform a systematic review to assess the effectiveness of antimicrobial photodynamic therapy (aPDT) in the reduction of microorganisms in deep carious lesions. An electronic search was conducted in Pubmed, Web of Science, Scopus, Lilacs, and Cochrane Library, followed by a manual search. The MeSH terms, MeSH synonyms, related terms, and free terms were used in the search. As eligibility criteria, only clinical studies were included. Initially, 227 articles were identified in the electronic search, and 152 studies remained after analysis and exclusion of the duplicated studies; 6 remained after application of the eligibility criteria; and 3 additional studies were found in the manual search. After access to the full articles, three were excluded, leaving six for evaluation by the criteria of the Cochrane Collaboration’s tool for assessing risk of bias. Of these, five had some risk of punctuated bias. All results from the selected studies showed a significant reduction of microorganisms in deep carious lesions for both primary and permanent teeth. The meta-analysis demonstrated a significant reduction in microorganism counts in all analyses (p<0.00001). Based on these findings, there is scientific evidence emphasizing the effectiveness of aPDT in reducing microorganisms in deep carious lesions.

  7. Effect of the antimicrobial photodynamic therapy on microorganism reduction in deep caries lesions: a systematic review and meta-analysis

    NASA Astrophysics Data System (ADS)

    Ornellas, Pâmela Oliveira; Antunes, Leonardo Santos; Fontes, Karla Bianca Fernandes da Costa; Póvoa, Helvécio Cardoso Corrêa; Küchler, Erika Calvano; Iorio, Natalia Lopes Pontes; Antunes, Lívia Azeredo Alves

    2016-09-01

    This study aimed to perform a systematic review to assess the effectiveness of antimicrobial photodynamic therapy (aPDT) in the reduction of microorganisms in deep carious lesions. An electronic search was conducted in Pubmed, Web of Science, Scopus, Lilacs, and Cochrane Library, followed by a manual search. The MeSH terms, MeSH synonyms, related terms, and free terms were used in the search. As eligibility criteria, only clinical studies were included. Initially, 227 articles were identified in the electronic search, and 152 studies remained after analysis and exclusion of the duplicated studies; 6 remained after application of the eligibility criteria; and 3 additional studies were found in the manual search. After access to the full articles, three were excluded, leaving six for evaluation by the criteria of the Cochrane Collaboration's tool for assessing risk of bias. Of these, five had some risk of punctuated bias. All results from the selected studies showed a significant reduction of microorganisms in deep carious lesions for both primary and permanent teeth. The meta-analysis demonstrated a significant reduction in microorganism counts in all analyses (p<0.00001). Based on these findings, there is scientific evidence emphasizing the effectiveness of aPDT in reducing microorganisms in deep carious lesions.

  8. Knowledge, skills and attitudes of hospital pharmacists in the use of information technology and electronic tools to support clinical practice: A Brazilian survey

    PubMed Central

    Vasconcelos, Hemerson Bruno da Silva; Woods, David John

    2017-01-01

    This study aimed to identify the knowledge, skills and attitudes of Brazilian hospital pharmacists in the use of information technology and electronic tools to support clinical practice. Methods: A questionnaire was sent by email to clinical pharmacists working public and private hospitals in Brazil. The instrument was validated using the method of Polit and Beck to determine the content validity index. Data (n = 348) were analyzed using descriptive statistics, Pearson's Chi-square test and Gamma correlation tests. Results: Pharmacists had 1–4 electronic devices for personal use, mainly smartphones (84.8%; n = 295) and laptops (81.6%; n = 284). At work, pharmacists had access to a computer (89.4%; n = 311), mostly connected to the internet (83.9%; n = 292). They felt competent (very capable/capable) searching for a web page/web site on a specific subject (100%; n = 348), downloading files (99.7%; n = 347), using spreadsheets (90.2%; n = 314), searching using MeSH terms in PubMed (97.4%; n = 339) and general searching for articles in bibliographic databases (such as Medline/PubMed: 93.4%; n = 325). Pharmacists did not feel competent in using statistical analysis software (somewhat capable/incapable: 78.4%; n = 273). Most pharmacists reported that they had not received formal education to perform most of these actions except searching using MeSH terms. Access to bibliographic databases was available in Brazilian hospitals, however, most pharmacists (78.7%; n = 274) reported daily use of a non-specific search engine such as Google. This result may reflect the lack of formal knowledge and training in the use of bibliographic databases and difficulty with the English language. The need to expand knowledge about information search tools was recognized by most pharmacists in clinical practice in Brazil, especially those with less time dedicated exclusively to clinical activity (Chi-square, p = 0.006). Conclusion: These results will assist in defining minimal competencies for the training of pharmacists in the field of information technology to support clinical practice. Knowledge and skill gaps are evident in the use of bibliographic databases, spreadsheets and statistical tools. PMID:29272292

  9. Knowledge, skills and attitudes of hospital pharmacists in the use of information technology and electronic tools to support clinical practice: A Brazilian survey.

    PubMed

    Néri, Eugenie Desirèe Rabelo; Meira, Assuero Silva; Vasconcelos, Hemerson Bruno da Silva; Woods, David John; Fonteles, Marta Maria de França

    2017-01-01

    This study aimed to identify the knowledge, skills and attitudes of Brazilian hospital pharmacists in the use of information technology and electronic tools to support clinical practice. A questionnaire was sent by email to clinical pharmacists working public and private hospitals in Brazil. The instrument was validated using the method of Polit and Beck to determine the content validity index. Data (n = 348) were analyzed using descriptive statistics, Pearson's Chi-square test and Gamma correlation tests. Pharmacists had 1-4 electronic devices for personal use, mainly smartphones (84.8%; n = 295) and laptops (81.6%; n = 284). At work, pharmacists had access to a computer (89.4%; n = 311), mostly connected to the internet (83.9%; n = 292). They felt competent (very capable/capable) searching for a web page/web site on a specific subject (100%; n = 348), downloading files (99.7%; n = 347), using spreadsheets (90.2%; n = 314), searching using MeSH terms in PubMed (97.4%; n = 339) and general searching for articles in bibliographic databases (such as Medline/PubMed: 93.4%; n = 325). Pharmacists did not feel competent in using statistical analysis software (somewhat capable/incapable: 78.4%; n = 273). Most pharmacists reported that they had not received formal education to perform most of these actions except searching using MeSH terms. Access to bibliographic databases was available in Brazilian hospitals, however, most pharmacists (78.7%; n = 274) reported daily use of a non-specific search engine such as Google. This result may reflect the lack of formal knowledge and training in the use of bibliographic databases and difficulty with the English language. The need to expand knowledge about information search tools was recognized by most pharmacists in clinical practice in Brazil, especially those with less time dedicated exclusively to clinical activity (Chi-square, p = 0.006). These results will assist in defining minimal competencies for the training of pharmacists in the field of information technology to support clinical practice. Knowledge and skill gaps are evident in the use of bibliographic databases, spreadsheets and statistical tools.

  10. Overview of Virtual Observatory Tools

    NASA Astrophysics Data System (ADS)

    Allen, M. G.

    2009-07-01

    I provide a brief introduction and tour of selected Virtual Observatory tools to highlight some of the core functions provided by the VO, and the way that astronomers may use the tools and services for doing science. VO tools provide advanced functions for searching and using images, catalogues and spectra that have been made available in the VO. The tools may work together by providing efficient and innovative browsing and analysis of data, and I also describe how many VO services may be accessed by a scripting or command line environment. Early science usage of the VO provides important feedback on the development of the system, and I show how VO portals try to address early user comments about the navigation and use of the VO.

  11. Comparison of three web-scale discovery services for health sciences research.

    PubMed

    Hanneke, Rosie; O'Brien, Kelly K

    2016-04-01

    The purpose of this study was to investigate the relative effectiveness of three web-scale discovery (WSD) tools in answering health sciences search queries. Simple keyword searches, based on topics from six health sciences disciplines, were run at multiple real-world implementations of EBSCO Discovery Service (EDS), Ex Libris's Primo, and ProQuest's Summon. Each WSD tool was evaluated in its ability to retrieve relevant results and in its coverage of MEDLINE content. All WSD tools returned between 50%-60% relevant results. Primo returned a higher number of duplicate results than the other 2 WSD products. Summon results were more relevant when search terms were automatically mapped to controlled vocabulary. EDS indexed the largest number of MEDLINE citations, followed closely by Summon. Additionally, keyword searches in all 3 WSD tools retrieved relevant material that was not found with precision (Medical Subject Headings) searches in MEDLINE. None of the 3 WSD products studied was overwhelmingly more effective in returning relevant results. While difficult to place the figure of 50%-60% relevance in context, it implies a strong likelihood that the average user would be able to find satisfactory sources on the first page of search results using a rudimentary keyword search. The discovery of additional relevant material beyond that retrieved from MEDLINE indicates WSD tools' value as a supplement to traditional resources for health sciences researchers.

  12. Comparison of three web-scale discovery services for health sciences research*

    PubMed Central

    Hanneke, Rosie; O'Brien, Kelly K.

    2016-01-01

    Objective The purpose of this study was to investigate the relative effectiveness of three web-scale discovery (WSD) tools in answering health sciences search queries. Methods Simple keyword searches, based on topics from six health sciences disciplines, were run at multiple real-world implementations of EBSCO Discovery Service (EDS), Ex Libris's Primo, and ProQuest's Summon. Each WSD tool was evaluated in its ability to retrieve relevant results and in its coverage of MEDLINE content. Results All WSD tools returned between 50%–60% relevant results. Primo returned a higher number of duplicate results than the other 2 WSD products. Summon results were more relevant when search terms were automatically mapped to controlled vocabulary. EDS indexed the largest number of MEDLINE citations, followed closely by Summon. Additionally, keyword searches in all 3 WSD tools retrieved relevant material that was not found with precision (Medical Subject Headings) searches in MEDLINE. Conclusions None of the 3 WSD products studied was overwhelmingly more effective in returning relevant results. While difficult to place the figure of 50%–60% relevance in context, it implies a strong likelihood that the average user would be able to find satisfactory sources on the first page of search results using a rudimentary keyword search. The discovery of additional relevant material beyond that retrieved from MEDLINE indicates WSD tools' value as a supplement to traditional resources for health sciences researchers. PMID:27076797

  13. Environmental System Science Data Infrastructure for a Virtual Ecosystem (ESS-DIVE) - A New U.S. DOE Data Archive

    NASA Astrophysics Data System (ADS)

    Agarwal, D.; Varadharajan, C.; Cholia, S.; Snavely, C.; Hendrix, V.; Gunter, D.; Riley, W. J.; Jones, M.; Budden, A. E.; Vieglais, D.

    2017-12-01

    The ESS-DIVE archive is a new U.S. Department of Energy (DOE) data archive designed to provide long-term stewardship and use of data from observational, experimental, and modeling activities in the earth and environmental sciences. The ESS-DIVE infrastructure is constructed with the long-term vision of enabling broad access to and usage of the DOE sponsored data stored in the archive. It is designed as a scalable framework that incentivizes data providers to contribute well-structured, high-quality data to the archive and that enables the user community to easily build data processing, synthesis, and analysis capabilities using those data. The key innovations in our design include: (1) application of user-experience research methods to understand the needs of users and data contributors; (2) support for early data archiving during project data QA/QC and before public release; (3) focus on implementation of data standards in collaboration with the community; (4) support for community-built tools for data search, interpretation, analysis, and visualization; (5) a data fusion database to support search of the data extracted from submitted packages and of data available in partner data systems such as the Earth System Grid Federation (ESGF) and DataONE; and (6) support for archiving of data packages that are not to be released to the public. ESS-DIVE data contributors will be able to archive and version their data and metadata, obtain data DOIs, search for and access ESS data and metadata via web and programmatic portals, and provide data and metadata in standardized forms. The ESS-DIVE archive and catalog will be federated with other existing catalogs, allowing cross-catalog metadata search and data exchange with existing systems, including DataONE's Metacat search. ESS-DIVE is operated by a multidisciplinary team from Berkeley Lab, the National Center for Ecological Analysis and Synthesis (NCEAS), and DataONE. The primary data copies are hosted at DOE's NERSC supercomputing facility with replicas at DataONE nodes.

  14. Doing Being a Foreign Language Learner in a Classroom: Embodiment of Cognitive States as Social Events

    ERIC Educational Resources Information Center

    Mori, Junko; Hasegawa, Atsushi

    2009-01-01

    Encountering trouble producing a word in the midst of a turn at talk is an everyday experience for foreign language learners. By employing conversation analysis (CA) as a central tool for analysis, the current study explores how students undertake a range of word searches while they organize a pair work session designed for the purpose of language…

  15. Finding Your Voice: Talent Development Centers and the Academic Talent Search

    ERIC Educational Resources Information Center

    Rushneck, Amy S.

    2012-01-01

    Talent Development Centers are just one of many tools every family, teacher, and gifted advocate should have in their tool box. To understand the importance of Talent Development Centers, it is essential to also understand the Academic Talent Search Program. Talent Search participants who obtain scores comparable to college-bound high school…

  16. Personalised Search Tool for Teachers--PoSTech!

    ERIC Educational Resources Information Center

    Seyedarabi, Faezeh; Peterson, Don; Keenoy, Kevin

    2005-01-01

    One of the ways in which teachers tend to "personalise" to the needs of their students is by complementing their teaching materials with online resources. However, the current online resources are designed in such a way that only allows teachers to customise their search and not personalise. Therefore, a Personalised Search Tool for…

  17. Novel Platform Technologies for Analysis of Norovirus Contamination of Sea Food

    USDA-ARS?s Scientific Manuscript database

    The study of human norovirus (NoVs) replication in vitro would be a highly useful tool to virologists and immunologists. For this reason, we have searched for new approaches to determine viability of noroviruses in food samples (especially seafood). Our research team has multiple years of experien...

  18. A procedure of multiple period searching in unequally spaced time-series with the Lomb-Scargle method

    NASA Technical Reports Server (NTRS)

    Van Dongen, H. P.; Olofsen, E.; VanHartevelt, J. H.; Kruyt, E. W.; Dinges, D. F. (Principal Investigator)

    1999-01-01

    Periodogram analysis of unequally spaced time-series, as part of many biological rhythm investigations, is complicated. The mathematical framework is scattered over the literature, and the interpretation of results is often debatable. In this paper, we show that the Lomb-Scargle method is the appropriate tool for periodogram analysis of unequally spaced data. A unique procedure of multiple period searching is derived, facilitating the assessment of the various rhythms that may be present in a time-series. All relevant mathematical and statistical aspects are considered in detail, and much attention is given to the correct interpretation of results. The use of the procedure is illustrated by examples, and problems that may be encountered are discussed. It is argued that, when following the procedure of multiple period searching, we can even benefit from the unequal spacing of a time-series in biological rhythm research.
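
    As a concrete illustration of the kind of periodogram analysis described above, the sketch below applies a Lomb-Scargle periodogram to a synthetic, unequally spaced time-series using SciPy. The sampling times, signal period and trial-period grid are illustrative assumptions, not values from the paper, and the full multiple-period-search procedure is not reproduced here.

```python
# Minimal sketch: Lomb-Scargle periodogram of an unequally spaced time-series.
# The data are synthetic (a 1.3-day rhythm plus noise); adapt the trial periods
# to the rhythms of interest in a real study.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, 200))                  # unequally spaced times (days)
y = np.sin(2 * np.pi * t / 1.3) + 0.3 * rng.normal(size=t.size)
y = y - y.mean()                                          # remove the mean before analysis

periods = np.linspace(0.5, 5.0, 2000)                     # trial periods (days)
ang_freq = 2 * np.pi / periods                            # lombscargle expects angular frequencies
power = lombscargle(t, y, ang_freq)

best_period = periods[np.argmax(power)]
print(f"strongest candidate period: {best_period:.2f} days")
```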

  19. Variability search in M 31 using principal component analysis and the Hubble Source Catalogue

    NASA Astrophysics Data System (ADS)

    Moretti, M. I.; Hatzidimitriou, D.; Karampelas, A.; Sokolovsky, K. V.; Bonanos, A. Z.; Gavras, P.; Yang, M.

    2018-06-01

    Principal component analysis (PCA) is being extensively used in Astronomy but not yet exhaustively exploited for variability search. The aim of this work is to investigate the effectiveness of using the PCA as a method to search for variable stars in large photometric data sets. We apply PCA to variability indices computed for light curves of 18 152 stars in three fields in M 31 extracted from the Hubble Source Catalogue. The projection of the data into the principal components is used as a stellar variability detection and classification tool, capable of distinguishing between RR Lyrae stars, long-period variables (LPVs) and non-variables. This projection recovered more than 90 per cent of the known variables and revealed 38 previously unknown variable stars (about 30 per cent more), all LPVs except for one object of uncertain variability type. We conclude that this methodology can indeed successfully identify candidate variable stars.
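
    To make the projection idea concrete, here is a minimal sketch of applying PCA to a table of per-star variability indices and flagging outliers along the leading component. The index names, sample size and threshold are illustrative assumptions and do not reproduce the authors' feature set or classification step.

```python
# Minimal sketch: PCA over variability indices as a candidate-variable flagger.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# rows = stars, columns = variability indices (e.g. chi^2, Stetson J, MAD, amplitude)
indices = rng.normal(size=(1000, 4))
indices[:20] += 5.0                                # a handful of strongly variable stars

X = StandardScaler().fit_transform(indices)        # indices live on very different scales
pca = PCA(n_components=2).fit(X)
scores = pca.transform(X)

# Stars far from the bulk of the distribution along PC1 are variability candidates.
candidates = np.where(scores[:, 0] > np.percentile(scores[:, 0], 99))[0]
print(f"{len(candidates)} candidate variables flagged")
print("explained variance ratio:", pca.explained_variance_ratio_)
```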

  20. Lynx web services for annotations and systems analysis of multi-gene disorders.

    PubMed

    Sulakhe, Dinanath; Taylor, Andrew; Balasubramanian, Sandhya; Feng, Bo; Xie, Bingqing; Börnigen, Daniela; Dave, Utpal J; Foster, Ian T; Gilliam, T Conrad; Maltsev, Natalia

    2014-07-01

    Lynx is a web-based integrated systems biology platform that supports annotation and analysis of experimental data and generation of weighted hypotheses on molecular mechanisms contributing to human phenotypes and disorders of interest. Lynx has integrated multiple classes of biomedical data (genomic, proteomic, pathways, phenotypic, toxicogenomic, contextual and others) from various public databases as well as manually curated data from our group and collaborators (LynxKB). Lynx provides tools for gene list enrichment analysis using multiple functional annotations and network-based gene prioritization. Lynx provides access to the integrated database and the analytical tools via REST based Web Services (http://lynx.ci.uchicago.edu/webservices.html). This comprises data retrieval services for specific functional annotations, services to search across the complete LynxKB (powered by Lucene), and services to access the analytical tools built within the Lynx platform. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
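
    As an illustration of programmatic access to REST-based services such as those listed above, the sketch below issues a simple HTTP query with the requests library. The route name and query parameters are hypothetical placeholders, not documented Lynx endpoints; the actual routes should be taken from the web services page cited in the abstract.

```python
# Minimal sketch of querying a REST-style annotation service.
# NOTE: "annotations" and the "genes"/"format" parameters are hypothetical
# placeholders, not documented Lynx routes; substitute the real endpoints.
import requests

BASE_URL = "http://lynx.ci.uchicago.edu/webservices"   # service root assumed from the cited URL
params = {"genes": "BRCA1,TP53", "format": "json"}      # hypothetical query parameters

response = requests.get(f"{BASE_URL}/annotations", params=params, timeout=30)
response.raise_for_status()
print(response.json())
```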

  1. GGRNA: an ultrafast, transcript-oriented search engine for genes and transcripts

    PubMed Central

    Naito, Yuki; Bono, Hidemasa

    2012-01-01

    GGRNA (http://GGRNA.dbcls.jp/) is a Google-like, ultrafast search engine for genes and transcripts. The web server accepts arbitrary words and phrases, such as gene names, IDs, gene descriptions, gene annotations and even nucleotide/amino acid sequences through one simple search box, and quickly returns relevant RefSeq transcripts. A typical search takes just a few seconds, which dramatically enhances the usability of routine searching. In particular, GGRNA can search sequences as short as 10 nt or 4 amino acids, which cannot be handled easily by popular sequence analysis tools. Nucleotide sequences can be searched allowing up to three mismatches, or the query sequences may contain degenerate nucleotide codes (e.g. N, R, Y, S). Furthermore, Gene Ontology annotations, Enzyme Commission numbers and probe sequences of catalog microarrays are also incorporated into GGRNA, which may help users to conduct searches by various types of keywords. GGRNA web server will provide a simple and powerful interface for finding genes and transcripts for a wide range of users. All services at GGRNA are provided free of charge to all users. PMID:22641850

  2. GGRNA: an ultrafast, transcript-oriented search engine for genes and transcripts.

    PubMed

    Naito, Yuki; Bono, Hidemasa

    2012-07-01

    GGRNA (http://GGRNA.dbcls.jp/) is a Google-like, ultrafast search engine for genes and transcripts. The web server accepts arbitrary words and phrases, such as gene names, IDs, gene descriptions, gene annotations and even nucleotide/amino acid sequences through one simple search box, and quickly returns relevant RefSeq transcripts. A typical search takes just a few seconds, which dramatically enhances the usability of routine searching. In particular, GGRNA can search sequences as short as 10 nt or 4 amino acids, which cannot be handled easily by popular sequence analysis tools. Nucleotide sequences can be searched allowing up to three mismatches, or the query sequences may contain degenerate nucleotide codes (e.g. N, R, Y, S). Furthermore, Gene Ontology annotations, Enzyme Commission numbers and probe sequences of catalog microarrays are also incorporated into GGRNA, which may help users to conduct searches by various types of keywords. GGRNA web server will provide a simple and powerful interface for finding genes and transcripts for a wide range of users. All services at GGRNA are provided free of charge to all users.
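
    The degenerate-code searches mentioned in both records can be mimicked locally for small data: the sketch below expands IUPAC nucleotide codes into a regular expression and scans a few toy transcript sequences. It is a naive stand-in for illustration, not GGRNA's indexed search engine, and it does not implement mismatch tolerance.

```python
# Minimal sketch: match a degenerate (IUPAC) nucleotide query against sequences.
import re

IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "[AG]", "Y": "[CT]", "S": "[GC]", "W": "[AT]",
    "K": "[GT]", "M": "[AC]", "B": "[CGT]", "D": "[AGT]",
    "H": "[ACT]", "V": "[ACG]", "N": "[ACGT]",
}

def degenerate_to_regex(query: str) -> re.Pattern:
    """Translate an IUPAC degenerate query into a compiled regular expression."""
    return re.compile("".join(IUPAC[base] for base in query.upper()))

# Toy transcript sequences (placeholder accessions, not real RefSeq entries).
transcripts = {
    "NM_000001": "ATGGCGTACGTTAGCCGATCGAATT",
    "NM_000002": "TTGACCGTGCAGGCTAACGT",
}

pattern = degenerate_to_regex("CGTRYAG")   # R = [AG], Y = [CT]
for accession, seq in transcripts.items():
    for m in pattern.finditer(seq):
        print(f"{accession}: match at position {m.start()} -> {m.group()}")
```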

  3. Developing and using a rubric for evaluating evidence-based medicine point-of-care tools.

    PubMed

    Shurtz, Suzanne; Foster, Margaret J

    2011-07-01

    The research sought to establish a rubric for evaluating evidence-based medicine (EBM) point-of-care tools in a health sciences library. The authors searched the literature for EBM tool evaluations and found that most previous reviews were designed to evaluate the ability of an EBM tool to answer a clinical question. The researchers' goal was to develop and complete rubrics for assessing these tools based on criteria for a general evaluation of tools (reviewing content, search options, quality control, and grading) and criteria for an evaluation of clinical summaries (searching tools for treatments of common diagnoses and evaluating summaries for quality control). Differences between EBM tools' options, content coverage, and usability were minimal. However, the products' methods for locating and grading evidence varied widely in transparency and process. As EBM tools are constantly updating and evolving, evaluation of these tools needs to be conducted frequently. Standards for evaluating EBM tools need to be established, with one method being the use of objective rubrics. In addition, EBM tools need to provide more information about authorship, reviewers, methods for evidence collection, and grading system employed.

  4. Special Focus

    PubMed Central

    Nawrocki, Eric P.; Burge, Sarah W.

    2013-01-01

    The development of RNA bioinformatic tools began more than 30 y ago with the description of the Nussinov and Zuker dynamic programming algorithms for single sequence RNA secondary structure prediction. Since then, many tools have been developed for various RNA sequence analysis problems such as homology search, multiple sequence alignment, de novo RNA discovery, read-mapping, and many more. In this issue, we have collected a sampling of reviews and original research that demonstrate some of the many ways bioinformatics is integrated with current RNA biology research. PMID:23948768

  5. metAlignID: a high-throughput software tool set for automated detection of trace level contaminants in comprehensive LECO two-dimensional gas chromatography time-of-flight mass spectrometry data.

    PubMed

    Lommen, Arjen; van der Kamp, Henk J; Kools, Harrie J; van der Lee, Martijn K; van der Weg, Guido; Mol, Hans G J

    2012-11-09

    A new alternative data processing tool set, metAlignID, is developed for automated pre-processing and library-based identification and concentration estimation of target compounds after analysis by comprehensive two-dimensional gas chromatography with mass spectrometric detection. The tool set has been developed for and tested on LECO data. The software is developed to run multi-threaded (one thread per processor core) on a standard PC (personal computer) under different operating systems and is as such capable of processing multiple data sets simultaneously. Raw data files are converted into netCDF (network Common Data Form) format using a fast conversion tool. They are then preprocessed using previously developed algorithms originating from metAlign software. Next, the resulting reduced data files are searched against a user-composed library (derived from user or commercial NIST-compatible libraries) (NIST=National Institute of Standards and Technology) and the identified compounds, including an indicative concentration, are reported in Excel format. Data can be processed batch wise. The overall time needed for conversion together with processing and searching of 30 raw data sets for 560 compounds is routinely within an hour. The screening performance is evaluated for detection of pesticides and contaminants in raw data obtained after analysis of soil and plant samples. Results are compared to the existing data-handling routine based on proprietary software (LECO, ChromaTOF). The developed software tool set, which is freely downloadable at www.metalign.nl, greatly accelerates data-analysis and offers more options for fine-tuning automated identification toward specific application needs. The quality of the results obtained is slightly better than the standard processing and also adds a quantitative estimate. The software tool set in combination with two-dimensional gas chromatography coupled to time-of-flight mass spectrometry shows great potential as a highly-automated and fast multi-residue instrumental screening method. Copyright © 2012 Elsevier B.V. All rights reserved.
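
    Since the pipeline's first step converts raw instrument files to netCDF, the sketch below shows how such a converted file could be inspected from Python. The file name is a placeholder, and the variable names follow common ANDI-MS/netCDF conventions, which may differ from the variables metAlignID actually reads or writes.

```python
# Minimal sketch: inspect a GC-MS run stored in netCDF format.
from netCDF4 import Dataset

with Dataset("run01.cdf") as nc:                  # placeholder file name
    print("variables:", list(nc.variables))
    # Typical ANDI-MS variables: scan_acquisition_time, total_intensity,
    # mass_values, intensity_values (check against the actual file).
    times = nc.variables["scan_acquisition_time"][:]
    tic = nc.variables["total_intensity"][:]
    print(f"{len(times)} scans, TIC maximum = {tic.max():.3g}")
```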

  6. Open source tools for management and archiving of digital microscopy data to allow integration with patient pathology and treatment information

    PubMed Central

    2013-01-01

    Background Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. Results We have developed two Java based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a desired quality JPEG image. The image is linked to the patient’s clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type, or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or Matlab. NDPI-Splitter also has the capacity to filter out empty images. Conclusions Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. Virtual Slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934 PMID:23402499
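
    In the same spirit as NDPI-Splitter's tiling and empty-tile filtering, the sketch below splits an ordinary image into TIFF tiles with Pillow and skips near-blank tiles. The tile size, file names and background threshold are illustrative assumptions; this is not the Java tool itself, which operates on NDPI whole-slide files.

```python
# Minimal sketch: split a large image into TIFF tiles and skip near-blank tiles.
from PIL import Image
import numpy as np

TILE = 2048  # tile edge length in pixels (assumed value)

def split_image(path: str, out_prefix: str) -> None:
    img = Image.open(path)
    width, height = img.size
    for y in range(0, height, TILE):
        for x in range(0, width, TILE):
            tile = img.crop((x, y, min(x + TILE, width), min(y + TILE, height)))
            # Skip "empty" tiles whose grayscale mean is close to white background.
            if np.asarray(tile.convert("L")).mean() > 245:
                continue
            tile.save(f"{out_prefix}_{x}_{y}.tiff", format="TIFF")

split_image("slide_snapshot.jpg", "slide_tile")    # placeholder file names
```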

  7. Teamwork Assessment Tools in Obstetric Emergencies: A Systematic Review.

    PubMed

    Onwochei, Desire N; Halpern, Stephen; Balki, Mrinalini

    2017-06-01

    Team-based training and simulation can improve patient safety, by improving communication, decision making, and performance of team members. Currently, there is no general consensus on whether or not a specific assessment tool is better adapted to evaluate teamwork in obstetric emergencies. The purpose of this qualitative systematic review was to find the tools available to assess team effectiveness in obstetric emergencies. We searched Embase, Medline, PubMed, Web of Science, PsycINFO, CINAHL, and Google Scholar for prospective studies that evaluated nontechnical skills in multidisciplinary teams involving obstetric emergencies. The search included studies from 1944 until January 11, 2016. Data on reliability and validity measures were collected and used for interpretation. A descriptive analysis was performed on the data. Thirteen studies were included in the final qualitative synthesis. All the studies assessed teams in the context of obstetric simulation scenarios, but only six included anesthetists in the simulations. One study evaluated their teamwork tool using just validity measures, five using just reliability measures, and one used both. The most reliable tools identified were the Clinical Teamwork Scale, the Global Assessment of Obstetric Team Performance, and the Global Rating Scale of performance. However, they were still lacking in terms of quality and validity. More work needs to be conducted to establish the validity of teamwork tools for nontechnical skills, and the development of an ideal tool is warranted. Further studies are required to assess how outcomes, such as performance and patient safety, are influenced when using these tools.

  8. Refining prognosis in lung cancer: A report on the quality and relevance of clinical prognostic tools

    PubMed Central

    Mahar, Alyson L.; Compton, Carolyn; McShane, Lisa M.; Halabi, Susan; Asamura, Hisao; Rami-Porta, Ramon; Groome, Patti A.

    2015-01-01

    Introduction Accurate, individualized prognostication for lung cancer patients requires the integration of standard patient and pathologic factors, biologic, genetic, and other molecular characteristics of the tumor. Clinical prognostic tools aim to aggregate information on an individual patient to predict disease outcomes such as overall survival, but little is known about their clinical utility and accuracy in lung cancer. Methods A systematic search of the scientific literature for clinical prognostic tools in lung cancer published Jan 1, 1996-Jan 27, 2015 was performed. In addition, web-based resources were searched. A priori criteria determined by the Molecular Modellers Working Group of the American Joint Committee on Cancer were used to investigate the quality and usefulness of tools. Criteria included clinical presentation, model development approaches, validation strategies, and performance metrics. Results Thirty-two prognostic tools were identified. Patients with metastases were the most frequently considered population in non-small cell lung cancer. All tools for small cell lung cancer covered that entire patient population. Included prognostic factors varied considerably across tools. Internal validity was not formally evaluated for most tools and only eleven were evaluated for external validity. Two key considerations were highlighted for tool development: identification of an explicit purpose related to a relevant clinical population and clear decision-points, and prioritized inclusion of established prognostic factors over emerging factors. Conclusions Prognostic tools will contribute more meaningfully to the practice of personalized medicine if better study design and analysis approaches are used in their development and validation. PMID:26313682

  9. New tools for jet analysis in high energy collisions

    NASA Astrophysics Data System (ADS)

    Duffty, Daniel

    Our understanding of the fundamental interactions of particles has come far in the last century, and is still pushing forward. As we build ever more powerful machines to probe higher and higher energies, we will need to develop new tools to not only understand the new physics objects we are trying to detect, but even to understand the environment that we are searching in. We examine methods of identifying both boosted objects and low energy jets which will be shrouded in a sea of noise from other parts of the detector. We display the power of boosted-b tagging in a simulated W search. We also examine the effect of pileup on low energy jet reconstructions. For this purpose we develop a new priority-based jet algorithm, "p-jets", to cluster the energy that belongs together, but ignore the rest.

  10. GEOGLE: context mining tool for the correlation between gene expression and the phenotypic distinction.

    PubMed

    Yu, Yao; Tu, Kang; Zheng, Siyuan; Li, Yun; Ding, Guohui; Ping, Jie; Hao, Pei; Li, Yixue

    2009-08-25

    In the post-genomic era, the development of high-throughput gene expression detection technology provides huge amounts of experimental data, which challenges the traditional pipelines for data processing and analysis in scientific research. In our work, we integrated gene expression information from Gene Expression Omnibus (GEO), biomedical ontology from Medical Subject Headings (MeSH) and signaling pathway knowledge from sigPathway entries to develop a context mining tool for gene expression analysis - GEOGLE. GEOGLE offers a rapid and convenient way of searching relevant experimental datasets, pathways and biological terms according to multiple types of queries, including biomedical vocabularies, GDS IDs, gene IDs, pathway names and signature lists. Moreover, GEOGLE summarizes the signature genes from a subset of GDSes and estimates the correlation between gene expression and the phenotypic distinction with an integrated p value. This approach, which performs global searching of expression data, may expand the traditional way of collecting heterogeneous gene expression experiment data. GEOGLE is a novel tool that provides researchers a quantitative way to understand the correlation between gene expression and phenotypic distinction through meta-analysis of gene expression datasets from different experiments, as well as the biological meaning behind it. The web site and user guide of GEOGLE are available at: http://omics.biosino.org:14000/kweb/workflow.jsp?id=00020.
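
    The abstract reports an "integrated p value" summarizing evidence across datasets without specifying the combination rule. A standard option for this kind of meta-analysis is Fisher's method, sketched below with SciPy; the per-dataset p values are placeholders, and GEOGLE's actual procedure may differ.

```python
# Minimal sketch: combine per-dataset p values with Fisher's method.
from scipy.stats import combine_pvalues

# Hypothetical per-dataset p values for one gene's association with the phenotype.
p_values = [0.04, 0.10, 0.008, 0.32]

statistic, p_integrated = combine_pvalues(p_values, method="fisher")
print(f"integrated p value: {p_integrated:.4f}")
```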

  11. The MIGenAS integrated bioinformatics toolkit for web-based sequence analysis

    PubMed Central

    Rampp, Markus; Soddemann, Thomas; Lederer, Hermann

    2006-01-01

    We describe a versatile and extensible integrated bioinformatics toolkit for the analysis of biological sequences over the Internet. The web portal offers convenient interactive access to a growing pool of chainable bioinformatics software tools and databases that are centrally installed and maintained by the RZG. Currently, supported tasks comprise sequence similarity searches in public or user-supplied databases, computation and validation of multiple sequence alignments, phylogenetic analysis and protein–structure prediction. Individual tools can be seamlessly chained into pipelines allowing the user to conveniently process complex workflows without the necessity to take care of any format conversions or tedious parsing of intermediate results. The toolkit is part of the Max-Planck Integrated Gene Analysis System (MIGenAS) of the Max Planck Society, available via the MIGenAS web portal (click ‘Start Toolkit’). PMID:16844980

  12. Extending the Virtual Solar Observatory (VSO) to Incorporate Data Analysis Capabilities (III)

    NASA Astrophysics Data System (ADS)

    Csillaghy, A.; Etesi, L.; Dennis, B.; Zarro, D.; Schwartz, R.; Tolbert, K.

    2008-12-01

    We will present a progress report on our activities to extend the data analysis capabilities of the VSO. Our efforts to date have focused on three areas: 1. Extending the data retrieval capabilities by developing a centralized data processing server. The server is built with Java, IDL (Interactive Data Language), and the SSW (Solar SoftWare) package with all SSW-related instrument libraries and required calibration data. When a user requests VSO data that requires preprocessing, the data are transparently sent to the server, processed, and returned to the user's IDL session for viewing and analysis. It is possible to have any Java or IDL client connect to the server. An IDL prototype for preparing and calibrating SOHO/EIT data will be demonstrated. 2. Improving the solar data search in SHOW SYNOP, a graphical user tool connected to VSO in IDL. We introduce the Java-IDL interface that allows a flexible, dynamic, and extendable way of searching the VSO, where all communication with the VSO is managed dynamically by standard Java tools. 3. Improving image overlay capability to support coregistration of solar disk observations obtained from different orbital view angles, position angles, and distances - such as from the twin STEREO spacecraft.

  13. The Reactome pathway Knowledgebase

    PubMed Central

    Fabregat, Antonio; Sidiropoulos, Konstantinos; Garapati, Phani; Gillespie, Marc; Hausmann, Kerstin; Haw, Robin; Jassal, Bijay; Jupe, Steven; Korninger, Florian; McKay, Sheldon; Matthews, Lisa; May, Bruce; Milacic, Marija; Rothfels, Karen; Shamovsky, Veronica; Webber, Marissa; Weiser, Joel; Williams, Mark; Wu, Guanming; Stein, Lincoln; Hermjakob, Henning; D'Eustachio, Peter

    2016-01-01

    The Reactome Knowledgebase (www.reactome.org) provides molecular details of signal transduction, transport, DNA replication, metabolism and other cellular processes as an ordered network of molecular transformations—an extended version of a classic metabolic map, in a single consistent data model. Reactome functions both as an archive of biological processes and as a tool for discovering unexpected functional relationships in data such as gene expression pattern surveys or somatic mutation catalogues from tumour cells. Over the last two years we redeveloped major components of the Reactome web interface to improve usability, responsiveness and data visualization. A new pathway diagram viewer provides a faster, clearer interface and smooth zooming from the entire reaction network to the details of individual reactions. Tool performance for analysis of user datasets has been substantially improved, now generating detailed results for genome-wide expression datasets within seconds. The analysis module can now be accessed through a RESTFul interface, facilitating its inclusion in third party applications. A new overview module allows the visualization of analysis results on a genome-wide Reactome pathway hierarchy using a single screen page. The search interface now provides auto-completion as well as a faceted search to narrow result lists efficiently. PMID:26656494
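
    As an example of how the RESTful analysis interface described above can be used from a script, the sketch below posts a small identifier list and reads back pathway-level statistics. The endpoint path, payload format and response fields are assumptions based on Reactome's public documentation at the time of writing and should be checked against the current API.

```python
# Minimal sketch: submit a gene list to the Reactome analysis service (assumed endpoint).
import requests

genes = "TP53\nBRCA1\nEGFR"   # plain-text list of identifiers, one per line

resp = requests.post(
    "https://reactome.org/AnalysisService/identifiers/",   # assumed endpoint path
    data=genes,
    headers={"Content-Type": "text/plain"},
    timeout=60,
)
resp.raise_for_status()
result = resp.json()

# Assumed response layout: a "pathways" list with per-pathway enrichment statistics.
for pathway in result.get("pathways", [])[:5]:
    print(pathway.get("name"), pathway.get("entities", {}).get("pValue"))
```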

  14. FLASH_SSF_Aqua-FM3-MODIS_Version3C

    Atmospheric Science Data Center

    2018-04-04

    Data access: CERES Order Tool (netCDF); CERES Search and Subset Tool (HDF4 & netCDF); Earthdata Search. Selected parameters: cloud layer area, cloud infrared emissivity, cloud base pressure, surface (radiative) flux, TOA flux, surface types, SW filtered radiance, LW flux.

  15. FLASH_SSF_Terra-FM1-MODIS_Version3C

    Atmospheric Science Data Center

    2018-04-04

    Data access: CERES Order Tool (netCDF); CERES Search and Subset Tool (HDF4 & netCDF); Earthdata Search. Selected parameters: cloud layer area, cloud infrared emissivity, cloud base pressure, surface (radiative) flux, TOA flux, surface types, SW filtered radiance, LW flux.

  16. Unifying Water Data Sources: How the CUAHSI Water Data Center is Enabling and Improving Access to a Growing Catalog of over 100 Data Providers

    NASA Astrophysics Data System (ADS)

    Pollak, J.; Berry, K.; Couch, A.; Arrigo, J.; Hooper, R. P.

    2013-12-01

    Scientific data about water are collected and distributed by numerous sources which can differ tremendously in scale. As competition for water resources increases, increasing access to and understanding of information about water will be critical. The mission of the new CUAHSI Water Data Center (WDC) is to provide those researchers who collect data a medium to publish their datasets and give those wanting to discover data the proper tools to efficiently find the data that they seek. These tools include standards-based data publication, data discovery tools based upon faceted and telescoping search, and a data analysis tool HydroDesktop that downloads and unifies data in standardized formats. The CUAHSI Hydrologic Information System (HIS) is a community developed and open source system for sharing water data. As a federated, web service oriented system it enables data publication for a diverse user population including scientific investigators (Research Coordination Networks, Critical Zone Observatories), government agencies (USGS, NASA, EPA), and citizen scientists (watershed associations). HydroDesktop is an end user application for data consumption in this system that the WDC supports. This application can be used for finding, downloading, and analyzing data from the HIS. It provides a GIS interface that allows users to incorporate spatial data that are not accessible via HIS, simple analysis tools to facilitate graphing and visualization, tools to export data to common file types, and provides an extensible architecture that developers can build upon. HydroDesktop, however, is just one example of a data access client for HIS. The web service oriented architecture enables data access by an unlimited number of clients provided they can consume the web services used in HIS. One such example developed at the WDC is the 'Faceted Search Client', which capitalizes upon exploratory search concepts to improve accuracy and precision during search. We highlight such features of the CUAHSI-HIS which make it particularly appropriate for providing unified access to several sources of water data. A growing community of researchers and educators are employing these tools for education; including sharing best practices around creating modules, supporting researchers and educators in accessing the services, and cataloging and sharing modules. The CUAHSI WDC is a community governed organization. Our agenda is driven by the community's voice through a Board of Directors and committees that decide strategic direction (new products), tactical decisions (product improvement), and evaluation of usability. By providing the aforementioned services within a community driven framework, we believe the WDC is providing critical services that include improving water data discoverability, accessibility and usability within a sustainable governance structure.
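
    To illustrate the faceted, telescoping search idea behind the Faceted Search Client in plain terms, the sketch below narrows a toy catalog of site/variable records by successive facet selections. The record fields and values are invented for illustration and are not the actual CUAHSI catalog schema or web-service calls.

```python
# Minimal sketch: faceted filtering over a small catalog of data records.
records = [
    {"site": "Boulder Creek", "variable": "discharge", "network": "USGS"},
    {"site": "Boulder Creek", "variable": "temperature", "network": "CZO"},
    {"site": "Rio Grande", "variable": "discharge", "network": "USGS"},
]

def facet_filter(recs, **facets):
    """Keep only records matching every selected facet value."""
    return [r for r in recs if all(r.get(k) == v for k, v in facets.items())]

hits = facet_filter(records, variable="discharge", network="USGS")
print(f"{len(hits)} records match the selected facets")
```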

  17. Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses.

    PubMed

    Falagas, Matthew E; Pitsouni, Eleni I; Malietzis, George A; Pappas, Georgios

    2008-02-01

    The evolution of the electronic age has led to the development of numerous medical databases on the World Wide Web, offering search facilities on a particular subject and the ability to perform citation analysis. We compared the content coverage and practical utility of PubMed, Scopus, Web of Science, and Google Scholar. The official Web pages of the databases were used to extract information on the range of journals covered, search facilities and restrictions, and update frequency. We used the example of a keyword search to evaluate the usefulness of these databases in biomedical information retrieval and a specific published article to evaluate their utility in performing citation analysis. All databases were practical in use and offered numerous search facilities. PubMed and Google Scholar are accessed for free. The keyword search with PubMed offers optimal update frequency and includes online early articles; other databases can rate articles by number of citations, as an index of importance. For citation analysis, Scopus offers about 20% more coverage than Web of Science, whereas Google Scholar offers results of inconsistent accuracy. PubMed remains an optimal tool in biomedical electronic research. Scopus covers a wider journal range, of help both in keyword searching and citation analysis, but it is currently limited to recent articles (published after 1995) compared with Web of Science. Google Scholar, as for the Web in general, can help in the retrieval of even the most obscure information but its use is marred by inadequate, less often updated, citation information.

  18. Accelerated Bayesian model-selection and parameter-estimation in continuous gravitational-wave searches with pulsar-timing arrays

    NASA Astrophysics Data System (ADS)

    Taylor, Stephen; Ellis, Justin; Gair, Jonathan

    2014-11-01

    We describe several new techniques which accelerate Bayesian searches for continuous gravitational-wave emission from supermassive black-hole binaries using pulsar-timing arrays. These techniques mitigate the problematic increase of search dimensionality with the size of the pulsar array which arises from having to include an extra parameter per pulsar as the array is expanded. This extra parameter corresponds to searching over the phase of the gravitational wave as it propagates past each pulsar so that we can coherently include the pulsar term in our search strategies. Our techniques make the analysis tractable with powerful evidence-evaluation packages like MultiNest. We find good agreement of our techniques with the parameter-estimation and Bayes factor evaluation performed with full signal templates and conclude that these techniques make excellent first-cut tools for detection and characterization of continuous gravitational-wave signals with pulsar-timing arrays. Crucially, at low to moderate signal-to-noise ratios the factor by which the analysis is sped up can be ≳100 , permitting rigorous programs of systematic injection and recovery of signals to establish robust detection criteria within a Bayesian formalism.
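
    The model-selection step these techniques feed into can be summarized very simply: once an evidence-evaluation package such as MultiNest returns log-evidences for the signal and noise-only models, their difference gives the (log) Bayes factor. The sketch below shows that final arithmetic with placeholder numbers; it does not reproduce the accelerated likelihood techniques described in the abstract.

```python
# Minimal sketch: Bayes factor from two log-evidences (placeholder values).
import numpy as np

log_evidence_signal = -1043.7   # ln Z for the continuous-wave signal model (placeholder)
log_evidence_noise = -1050.2    # ln Z for the noise-only model (placeholder)

ln_bayes_factor = log_evidence_signal - log_evidence_noise
print(f"ln(Bayes factor) = {ln_bayes_factor:.1f}, BF = {np.exp(ln_bayes_factor):.3g}")
```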

  19. Measurement properties of self-report physical activity assessment tools in stroke: a protocol for a systematic review

    PubMed Central

    Martins, Júlia Caetano; Aguiar, Larissa Tavares; Nadeau, Sylvie; Scianni, Aline Alvim; Teixeira-Salmela, Luci Fuscaldi; Faria, Christina Danielli Coelho de Morais

    2017-01-01

    Introduction Self-report physical activity assessment tools are commonly used for the evaluation of physical activity levels in individuals with stroke. A great variety of these tools have been developed and widely used in recent years, which justifies the need to examine their measurement properties and clinical utility. Therefore, the main objectives of this systematic review are to examine the measurement properties and clinical utility of self-report measures of physical activity and discuss the strengths and limitations of the identified tools. Methods and analysis A systematic review of studies that investigated the measurement properties and/or clinical utility of self-report physical activity assessment tools in stroke will be conducted. Electronic searches will be performed in five databases: Medical Literature Analysis and Retrieval System Online (MEDLINE) (PubMed), Excerpta Medica Database (EMBASE), Physiotherapy Evidence Database (PEDro), Literatura Latino-Americana e do Caribe em Ciências da Saúde (LILACS) and Scientific Electronic Library Online (SciELO), followed by hand searches of the reference lists of the included studies. Two independent reviewers will screen all retrieved titles, abstracts, and full texts according to the inclusion criteria and will also extract the data. A third reviewer will be consulted to resolve any disagreement. A descriptive summary of the included studies will contain the design, participants, as well as the characteristics, measurement properties, and clinical utility of the self-report tools. The methodological quality of the studies will be evaluated using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist and the clinical utility of the identified tools will be assessed considering predefined criteria. This systematic review will follow the Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) statement. Discussion This systematic review will provide an extensive review of the measurement properties and clinical utility of self-report physical activity assessment tools used in individuals with stroke, which would benefit clinicians and researchers. Trial registration number PROSPERO CRD42016037146. PMID:28193848

  20. T-scan III system diagnostic tool for digital occlusal analysis in orthodontics - a modern approach.

    PubMed

    Trpevska, Vesna; Kovacevska, Gordana; Benedeti, Alberto; Jordanov, Bozidar

    2014-01-01

    This systematic literature review was performed to establish the mechanism, methodology, characteristics, clinical application and opportunities of the T-Scan III System as a diagnostic tool for digital occlusal analysis in different fields of dentistry, specifically in orthodontics. Searches of electronic databases using MEDLINE and PubMed, hand searching of relevant key journals, and screening of the reference lists of included studies were performed, with no language restriction. Publications providing statistically examined data were included for systematic review. Twenty potentially relevant Randomized Controlled Trials (RCTs) were identified. Only ten met the inclusion criteria. The literature demonstrates that using digital occlusal analysis with the T-Scan III System in orthodontics has a significant advantage with regard to the capability of measuring occlusal parameters in static positions and during dynamic movements of the mandible. Within the scope of this systematic review, there is evidence to support that the T-Scan system is rapid and accurate in identifying the distribution of the tooth contacts and it shows great promise as a clinical diagnostic screening device for occlusion and for improving the occlusion after various dental treatments. Additional clinical studies are required to advance the indication field of this system. The importance of using digital occlusal T-Scan analysis in orthodontics deserves further investigation.

  1. Validity of the Kinect for Gait Assessment: A Focused Review

    PubMed Central

    Springer, Shmuel; Yogev Seligmann, Galit

    2016-01-01

    Gait analysis may enhance clinical practice. However, its use is limited due to the need for expensive equipment which is not always available in clinical settings. Recent evidence suggests that Microsoft Kinect may provide a low cost gait analysis method. The purpose of this report is to critically evaluate the literature describing the concurrent validity of using the Kinect as a gait analysis instrument. An online search of PubMed, CINAHL, and ProQuest databases was performed. Included were studies in which walking was assessed with the Kinect and another gold standard device, and consisted of at least one numerical finding of spatiotemporal or kinematic measures. Our search identified 366 papers, from which 12 relevant studies were retrieved. The results demonstrate that the Kinect is valid only for some spatiotemporal gait parameters. Although the kinematic parameters measured by the Kinect followed the trend of the joint trajectories, they showed poor validity and large errors. In conclusion, the Kinect may have the potential to be used as a tool for measuring spatiotemporal aspects of gait, yet standardized methods should be established, and future examinations with both healthy subjects and clinical participants are required in order to integrate the Kinect as a clinical gait analysis tool. PMID:26861323

  2. In search of tools to aid logical thinking and communicating about medical decision making.

    PubMed

    Hunink, M G

    2001-01-01

    To have real-time impact on medical decision making, decision analysts need a wide variety of tools to aid logical thinking and communication. Decision models provide a formal framework to integrate evidence and values, but they are commonly perceived as complex and difficult to understand by those unfamiliar with the methods, especially in the context of clinical decision making. The theory of constraints, introduced by Eliyahu Goldratt in the business world, provides a set of tools for logical thinking and communication that could potentially be useful in medical decision making. The author used the concept of a conflict resolution diagram to analyze the decision to perform carotid endarterectomy prior to coronary artery bypass grafting in a patient with both symptomatic coronary and asymptomatic carotid artery disease. The method enabled clinicians to visualize and analyze the issues, identify and discuss the underlying assumptions, search for the best available evidence, and use the evidence to make a well-founded decision. The method also facilitated communication among those involved in the care of the patient. Techniques from fields other than decision analysis can potentially expand the repertoire of tools available to support medical decision making and to facilitate communication in decision consults.

  3. Situational Awareness Geospatial Application (iSAGA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sher, Benjamin

    Situational Awareness Geospatial Application (iSAGA) is a geospatial situational awareness software tool that uses an algorithm to extract location data from nearly any internet-based or custom data source and display it geospatially; it allows user-friendly spatial analysis using custom-developed tools, searches complex Geographic Information System (GIS) databases, and accesses high-resolution imagery. iSAGA has application at the federal, state and local levels for emergency response, consequence management, law enforcement, emergency operations and other decision makers as a tool to provide complete, visual situational awareness using data feeds and tools selected by the individual agency or organization. Feeds may be layered and custom tools developed to uniquely suit each subscribing agency or organization. iSAGA may similarly be applied to international agencies and organizations.

  4. Extravehicular Activity (EVA) Microbial Swab Tool

    NASA Technical Reports Server (NTRS)

    Rucker, Michelle

    2015-01-01

    When we send humans to search for life on Mars, we'll need to know what we brought with us versus what may already be there. To ensure our crewed spacecraft meet planetary protection requirements--and to protect our science from human contamination--we'll need to know whether micro-organisms are leaking/venting from our ships and spacesuits. This is easily done by swabbing external vents and surfaces for analysis, but there was no US EVA tool for that job. NASA engineers developed an EVA-compatible swab tool that can be used to collect data on current hardware, which will influence eventual Mars life support and EVA hardware designs.

  5. Design tool for multiprocessor scheduling and evaluation of iterative dataflow algorithms

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1995-01-01

    A graph-theoretic design process and software tool is defined for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described with a dataflow graph and are intended to be executed repetitively on a set of identical processors. Typical applications include signal processing and control law problems. Graph-search algorithms and analysis techniques are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool applies the design process to a given problem and includes performance optimization through the inclusion of additional precedence constraints among the schedulable tasks.
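
    One of the graph-theoretic performance bounds such a tool can compute is the critical-path bound: the longest chain of precedence-constrained task times, which no schedule can beat regardless of processor count. The sketch below computes it for a toy dataflow graph with networkx; the task names and execution times are illustrative, and the article's full scheduling analysis is not reproduced.

```python
# Minimal sketch: critical-path bound of a small acyclic dataflow graph.
import networkx as nx

g = nx.DiGraph()
task_times = {"read": 2, "fft": 5, "filter": 3, "ctrl": 4, "write": 1}   # illustrative times
for name, t in task_times.items():
    g.add_node(name, time=t)
g.add_edges_from([("read", "fft"), ("fft", "filter"), ("read", "ctrl"),
                  ("filter", "write"), ("ctrl", "write")])

sources = [n for n in g if g.in_degree(n) == 0]
sinks = [n for n in g if g.out_degree(n) == 0]
critical_path_bound = max(
    sum(g.nodes[n]["time"] for n in path)
    for src in sources for snk in sinks
    for path in nx.all_simple_paths(g, src, snk)
)
print(f"critical-path bound on schedule length: {critical_path_bound} time units")
```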

  6. Decision-making tools in prostate cancer: from risk grouping to nomograms.

    PubMed

    Fontanella, Paolo; Benecchi, Luigi; Grasso, Angelica; Patel, Vipul; Albala, David; Abbou, Claude; Porpiglia, Francesco; Sandri, Marco; Rocco, Bernardo; Bianchi, Giampaolo

    2017-12-01

    Prostate cancer (PCa) is the most common solid neoplasm and the second leading cause of cancer death in men. After the Partin tables were developed, a number of predictive and prognostic tools became available for risk stratification. These tools have allowed the urologist to better characterize this disease and lead to more confident treatment decisions for patients. The purpose of this study is to critically review the decision-making tools currently available to the urologist, from the moment when PCa is first diagnosed until patients experience metastatic progression and death. A systematic and critical analysis through Medline, EMBASE, Scopus and Web of Science databases was carried out in February 2016 as per the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. The search was conducted using the following key words: "prostate cancer," "prediction tools," "nomograms." Seventy-two studies were identified in the literature search. We summarized the results into six sections: Tools for prediction of life expectancy (before treatment), Tools for prediction of pathological stage (before treatment), Tools for prediction of survival and cancer-specific mortality (before/after treatment), Tools for prediction of biochemical recurrence (before/after treatment), Tools for prediction of metastatic progression (after treatment) and in the last section biomarkers and genomics. The management of PCa patients requires a tailored approach to deliver a truly personalized treatment. The currently available tools are of great help in helping the urologist in the decision-making process. These tests perform very well in high-grade and low-grade disease, while for intermediate-grade disease further research is needed. Newly discovered markers, genomic tests, and advances in imaging acquisition through mpMRI will help in instilling confidence that the appropriate treatments are being offered to patients with prostate cancer.

  7. The availability and effectiveness of tools supporting shared decision making in metastatic breast cancer care: a review.

    PubMed

    Spronk, Inge; Burgers, Jako S; Schellevis, François G; van Vliet, Liesbeth M; Korevaar, Joke C

    2018-05-11

    Shared decision-making (SDM) in the management of metastatic breast cancer care is associated with positive patient outcomes. In daily clinical practice, however, SDM is not fully integrated yet. Initiatives to improve the implementation of SDM would be helpful. The aim of this review was to assess the availability and effectiveness of tools supporting SDM in metastatic breast cancer care. Literature databases were systematically searched for articles published since 2006 focusing on the development or evaluation of tools to improve information-provision and to support decision-making in metastatic breast cancer care. Internet searches and experts identified additional tools. Data from included tools were extracted and the evaluation of tools was appraised using the GRADE grading system. The literature search yielded five instruments. In addition, two tools were identified via internet searches and consultation of experts. Four tools were specifically developed for supporting SDM in metastatic breast cancer, the other three tools focused on metastatic cancer in general. Tools were mainly applicable across the care process, and usable for decisions on supportive care with or without chemotherapy. All tools were designed for patients to be used before a consultation with the physician. Effects on patient outcomes were generally weakly positive although most tools were not studied in well-designed studies. Despite its recognized importance, only two tools were positively evaluated on effectiveness and are available to support patients with metastatic breast cancer in SDM. These tools show promising results in pilot studies and focus on different aspects of care. However, their effectiveness should be confirmed in well-designed studies before implementation in clinical practice. Innovation and development of SDM tools targeting clinicians as well as patients during a clinical encounter is recommended.

  8. A Tool for the Analysis of Motion Picture Film or Video Tape.

    ERIC Educational Resources Information Center

    Ekman, Paul; Friesen, Wallace V.

    1969-01-01

    A visual information display and retrieval system (VID-R) is described for application to visual records. VID-R searches and retrieves events by time address (location) or by previously stored observations or measurements. Fields are labeled by writing discriminable binary addresses on the horizontal lines outside the normal viewing area. The…

  9. Randomization and Data-Analysis Items in Quality Standards for Single-Case Experimental Studies

    ERIC Educational Resources Information Center

    Heyvaert, Mieke; Wendt, Oliver; Van den Noortgate, Wim; Onghena, Patrick

    2015-01-01

    Reporting standards and critical appraisal tools serve as beacons for researchers, reviewers, and research consumers. Parallel to existing guidelines for researchers to report and evaluate group-comparison studies, single-case experimental (SCE) researchers are in need of guidelines for reporting and evaluating SCE studies. A systematic search was…

  10. Computer-Aided Discovery Tools for Volcano Deformation Studies with InSAR and GPS

    NASA Astrophysics Data System (ADS)

    Pankratius, V.; Pilewskie, J.; Rude, C. M.; Li, J. D.; Gowanlock, M.; Bechor, N.; Herring, T.; Wauthier, C.

    2016-12-01

    We present a Computer-Aided Discovery approach that facilitates the cloud-scalable fusion of different data sources, such as GPS time series and Interferometric Synthetic Aperture Radar (InSAR), for the purpose of identifying the expansion centers and deformation styles of volcanoes. The tools currently developed at MIT allow the definition of alternatives for data processing pipelines that use various analysis algorithms. The Computer-Aided Discovery system automatically generates algorithmic and parameter variants to help researchers explore multidimensional data processing search spaces efficiently. We present first application examples of this technique using GPS data on volcanoes on the Aleutian Islands and work in progress on combined GPS and InSAR data in Hawaii. In the model search context, we also illustrate work in progress combining time series Principal Component Analysis with InSAR augmentation to constrain the space of possible model explanations on current empirical data sets and achieve a better identification of deformation patterns. This work is supported by NASA AIST-NNX15AG84G and NSF ACI-1442997 (PI: V. Pankratius).
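
    The automatic generation of algorithmic and parameter variants can be pictured as an exhaustive sweep over pipeline choices, as in the sketch below. The stage options, parameter values and scoring function are invented placeholders and do not represent the MIT toolkit's actual search strategy or the volcano data sets.

```python
# Minimal sketch: enumerate pipeline variants and keep the best-scoring one.
from itertools import product

detrend_options = ["linear", "seasonal+linear"]     # illustrative processing choices
filter_windows = [7, 31, 91]                        # smoothing window in days (assumed)
pca_components = [2, 3, 5]

def run_pipeline(detrend, window, n_components):
    """Placeholder for running one pipeline variant and returning a fit score."""
    return hash((detrend, window, n_components)) % 100 / 100.0   # stand-in score

results = []
for detrend, window, n_components in product(detrend_options, filter_windows, pca_components):
    score = run_pipeline(detrend, window, n_components)
    results.append(((detrend, window, n_components), score))

best_variant, best_score = max(results, key=lambda item: item[1])
print("best pipeline variant:", best_variant, "score:", best_score)
```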

  11. Developing and using a rubric for evaluating evidence-based medicine point-of-care tools

    PubMed Central

    Foster, Margaret J

    2011-01-01

    Objective: The research sought to establish a rubric for evaluating evidence-based medicine (EBM) point-of-care tools in a health sciences library. Methods: The authors searched the literature for EBM tool evaluations and found that most previous reviews were designed to evaluate the ability of an EBM tool to answer a clinical question. The researchers' goal was to develop and complete rubrics for assessing these tools based on criteria for a general evaluation of tools (reviewing content, search options, quality control, and grading) and criteria for an evaluation of clinical summaries (searching tools for treatments of common diagnoses and evaluating summaries for quality control). Results: Differences between EBM tools' options, content coverage, and usability were minimal. However, the products' methods for locating and grading evidence varied widely in transparency and process. Conclusions: As EBM tools are constantly updating and evolving, evaluation of these tools needs to be conducted frequently. Standards for evaluating EBM tools need to be established, with one method being the use of objective rubrics. In addition, EBM tools need to provide more information about authorship, reviewers, methods for evidence collection, and grading system employed. PMID:21753917

  12. Measurement tools for the diagnosis of nasal septal deviation: a systematic review

    PubMed Central

    2014-01-01

    Objective To perform a systematic review of measurement tools utilized for the diagnosis of nasal septal deviation (NSD). Methods Electronic database searches were performed using MEDLINE (from 1966 to second week of August 2013), EMBASE (from 1966 to second week of August 2013), Web of Science (from 1945 to second week of August 2013) and all Evidence Based Medicine Reviews Files (EBMR); Cochrane Database of Systematic Review (CDSR), Cochrane Central Register of Controlled Trials (CCTR), Cochrane Methodology Register (CMR), Database of Abstracts of Reviews of Effects (DARE), American College of Physicians Journal Club (ACP Journal Club), Health Technology Assessments (HTA), NHS Economic Evaluation Database (NHSEED) till the second quarter of 2013. The search terms used in database searches were ‘nasal septum’, ‘deviation’, ‘diagnosis’, ‘nose deformities’ and ‘nose malformation’. The studies were reviewed using the updated Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Results Online searches resulted in 23 abstracts after removal of duplicates that resulted from overlap of studies between the electronic databases. An additional 15 abstracts were excluded due to lack of relevance. A total of 8 studies were systematically reviewed. Conclusions Diagnostic modalities such as acoustic rhinometry, rhinomanometry and nasal spectral sound analysis may be useful in identifying NSD in anterior region of the nasal cavity, but these tests in isolation are of limited utility. Compared to anterior rhinoscopy, nasal endoscopy, and imaging the above mentioned index tests lack sensitivity and specificity in identifying the presence, location, and severity of NSD. PMID:24762010

  13. Web Feet Guide to Search Engines: Finding It on the Net.

    ERIC Educational Resources Information Center

    Web Feet, 2001

    2001-01-01

    This guide to search engines for the World Wide Web discusses selecting the right search engine; interpreting search results; major search engines; online tutorials and guides; search engines for kids; specialized search tools for various subjects; and other specialized engines and gateways. (LRW)

  14. A smarter way to search, share and utilize open-spatial online data for energy R&D - Custom machine learning and GIS tools in U.S. DOE's virtual data library & laboratory, EDX

    NASA Astrophysics Data System (ADS)

    Rose, K.; Bauer, J.; Baker, D.; Barkhurst, A.; Bean, A.; DiGiulio, J.; Jones, K.; Jones, T.; Justman, D.; Miller, R., III; Romeo, L.; Sabbatino, M.; Tong, A.

    2017-12-01

    As spatial datasets are increasingly accessible through open, online systems, the opportunity to use these resources to address a range of Earth system questions grows. Simultaneously, there is a need for better infrastructure and tools to find and utilize these resources. We will present examples of advanced online computing capabilities, hosted in the U.S. DOE's Energy Data eXchange (EDX), that address these needs for earth-energy research and development. In one study, the computing team developed a custom machine learning, big data computing tool designed to parse the web and return priority datasets to appropriate servers to develop an open-source global oil and gas infrastructure database. The results of this spatial smart search approach were validated against expert-driven, manual search results, which required a team of seven spatial scientists three months to produce. The custom machine learning tool parsed online, open systems, including zip files, ftp sites and other web-hosted resources, in a matter of days. The resulting resources were integrated into a geodatabase now hosted for open access via EDX. Beyond identifying and accessing authoritative, open spatial data resources, there is also a need for more efficient tools to ingest, perform, and visualize multi-variate, spatial data analyses. Within the EDX framework, there is a growing suite of processing, analytical and visualization capabilities that allow multi-user teams to work more efficiently in private, virtual workspaces. An example of these capabilities is a set of five custom spatio-temporal models and data tools that form NETL's Offshore Risk Modeling suite, which can be used to quantify oil spill risks and impacts. Coupling the data and advanced functions from EDX with these advanced spatio-temporal models has culminated in an integrated web-based decision-support tool. This platform has capabilities to identify and combine data across scales and disciplines, evaluate potential environmental, social, and economic impacts, highlight knowledge or technology gaps, and reduce uncertainty for a range of 'what if' scenarios relevant to oil spill prevention efforts. These examples illustrate EDX's growing capabilities for advanced spatial data search and analysis to support geo-data science needs.

  15. Detecting internet activity for erectile dysfunction using search engine query data in the Republic of Ireland.

    PubMed

    Davis, Niall F; Smyth, Lisa G; Flood, Hugh D

    2012-12-01

    What's known on the subject? and What does the study add? Despite the increasing prevalence of erectile dysfunction (ED), there is reluctance among symptomatic patients to present to healthcare providers for appropriate advice and treatment. A number of Internet campaigns have been launched by the Irish healthcare media since 2007 aiming to provide easily accessible advice on ED. Novel online technologies appear to provide a useful tool for educating the general public on the symptoms of ED because there has been a significant increase in overall Internet search activity for this term since 2007. • To assess Internet search trends for erectile dysfunction (ED) subsequent to public awareness campaigns being launched within the Republic of Ireland • To assess whether the advent of such campaigns correlates with increased Internet search activity for ED. • Google insights for search was utilized to examine Internet search trends for the term 'erectile dysfunction' across all categories between January 2005 and December 2011. • Search activity was limited to users from the Republic of Ireland within this timeframe. • Additionally, the number of Irish Internet media campaigns and Irish web pages providing information on ED was assessed between January 2005 and December 2011. • Statistical analysis of the data was performed using analysis of variance and Student's t-tests for pairwise comparisons. • There has been a significant increase in mean search activity for ED on an annual basis since 2007 (P < 0.001). • The number of Irish web pages associated with information on ED has also increased significantly on an annual basis since 2007 (P < 0.001). • There have been seven different Irish Internet media campaigns on ED since 2007 compared to two from 2005 to 2007 (P < 0.001). • There was no significant change in mean search activity for ED from 2005 to 2007 • The advent of recent Internet media campaigns and increasing number of Irish web pages is associated with a significant increase in online activity for ED in the Republic of Ireland. • Novel online technologies appear to provide a useful tool for educating the general public on the symptoms and treatment options available for ED. © 2012 BJU INTERNATIONAL.

  16. XTCE GOVSAT Tool Suite 1.0

    NASA Technical Reports Server (NTRS)

    Rice, J. Kevin

    2013-01-01

    The XTCE GOVSAT software suite contains three tools: validation, search, and reporting. The Extensible Markup Language (XML) Telemetric and Command Exchange (XTCE) GOVSAT Tool Suite is written in Java for manipulating XTCE XML files. XTCE is a Consultative Committee for Space Data Systems (CCSDS) and Object Management Group (OMG) specification for describing the format and information in telemetry and command packet streams. These descriptions are files that are used to configure real-time telemetry and command systems for mission operations. XTCE's purpose is to exchange database information between different systems. XTCE GOVSAT consists of rules for narrowing the use of XTCE for missions. The Validation Tool is used to syntax check GOVSAT XML files. The Search Tool is used to search the GOVSAT XML files (e.g., for command and telemetry mnemonics) and view the results. Finally, the Reporting Tool is used to create command and telemetry reports. These reports can be displayed or printed for use by the operations team.

  17. Habitat Design Optimization and Analysis

    NASA Technical Reports Server (NTRS)

    SanSoucie, Michael P.; Hull, Patrick V.; Tinker, Michael L.

    2006-01-01

    Long-duration surface missions to the Moon and Mars will require habitats for the astronauts. The materials chosen for the habitat walls play a direct role in the protection against the harsh environments found on the surface. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Advanced optimization techniques are necessary for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat design optimization tool utilizing genetic algorithms has been developed. Genetic algorithms use a "survival of the fittest" philosophy, where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multi-objective formulation of structural analysis, heat loss, radiation protection, and meteoroid protection. This paper presents the research and development of this tool.
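
    For illustration only, a minimal sketch of the "survival of the fittest" genetic-algorithm idea this record describes. It is not NASA's multi-objective tool: the layer encoding, the toy fitness function (shielding benefit minus a mass penalty) and all parameters are invented assumptions.

    import random

    LAYERS = 4                      # hypothetical number of wall layers
    POP, GENS, MUT = 50, 100, 0.1

    def fitness(wall):
        mass = sum(wall)                          # lighter walls are better
        shielding = sum(t ** 0.5 for t in wall)   # toy diminishing-returns shielding model
        return shielding - 0.3 * mass             # single-objective stand-in for the real multi-objective score

    def tournament(pop):
        a, b = random.sample(pop, 2)              # fitter of two random candidates survives
        return a if fitness(a) > fitness(b) else b

    def evolve():
        pop = [[random.uniform(0.1, 5.0) for _ in range(LAYERS)] for _ in range(POP)]
        for _ in range(GENS):
            nxt = []
            for _ in range(POP):
                p1, p2 = tournament(pop), tournament(pop)
                cut = random.randrange(1, LAYERS)
                child = p1[:cut] + p2[cut:]                      # one-point crossover
                if random.random() < MUT:                        # random mutation of one layer
                    child[random.randrange(LAYERS)] = random.uniform(0.1, 5.0)
                nxt.append(child)
            pop = nxt
        return max(pop, key=fitness)

    print(evolve())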

  18. VESPA: software to facilitate genomic annotation of prokaryotic organisms through integration of proteomic and transcriptomic data.

    PubMed

    Peterson, Elena S; McCue, Lee Ann; Schrimpe-Rutledge, Alexandra C; Jensen, Jeffrey L; Walker, Hyunjoo; Kobold, Markus A; Webb, Samantha R; Payne, Samuel H; Ansong, Charles; Adkins, Joshua N; Cannon, William R; Webb-Robertson, Bobbie-Jo M

    2012-04-05

    The procedural aspects of genome sequencing and assembly have become relatively inexpensive, yet the full, accurate structural annotation of these genomes remains a challenge. Next-generation sequencing transcriptomics (RNA-Seq), global microarrays, and tandem mass spectrometry (MS/MS)-based proteomics have demonstrated immense value to genome curators as individual sources of information; however, integrating these data types to validate and improve structural annotation remains a major challenge. Current visual and statistical analytic tools are focused on a single data type, or existing software tools are retrofitted to analyze new data forms. We present Visual Exploration and Statistics to Promote Annotation (VESPA), a new interactive visual analysis software tool focused on assisting scientists with the annotation of prokaryotic genomes through the integration of proteomics and transcriptomics data with current genome location coordinates. VESPA is a desktop Java™ application that integrates high-throughput proteomics data (peptide-centric) and transcriptomics (probe or RNA-Seq) data into a genomic context, all of which can be visualized at three levels of genomic resolution. Data is interrogated via searches linked to the genome visualizations to find regions with high likelihood of mis-annotation. Search results are linked to exports for further validation outside of VESPA, or potential coding regions can be analyzed concurrently with the software through interaction with BLAST. VESPA is applied to two use cases (Yersinia pestis Pestoides F and Synechococcus sp. PCC 7002) to demonstrate the rapid manner in which mis-annotations can be found and explored in VESPA using either proteomics data alone, or in combination with transcriptomic data. VESPA is an interactive visual analytics tool that integrates high-throughput data into a genomic context to facilitate the discovery of structural mis-annotations in prokaryotic genomes. Data is evaluated via visual analysis across multiple levels of genomic resolution, linked searches and interaction with existing bioinformatics tools. We highlight the novel functionality of VESPA and core programming requirements for visualization of these large heterogeneous datasets for a client-side application. The software is freely available at https://www.biopilot.org/docs/Software/Vespa.php.

  19. VESPA: software to facilitate genomic annotation of prokaryotic organisms through integration of proteomic and transcriptomic data

    PubMed Central

    2012-01-01

    Background The procedural aspects of genome sequencing and assembly have become relatively inexpensive, yet the full, accurate structural annotation of these genomes remains a challenge. Next-generation sequencing transcriptomics (RNA-Seq), global microarrays, and tandem mass spectrometry (MS/MS)-based proteomics have demonstrated immense value to genome curators as individual sources of information; however, integrating these data types to validate and improve structural annotation remains a major challenge. Current visual and statistical analytic tools are focused on a single data type, or existing software tools are retrofitted to analyze new data forms. We present Visual Exploration and Statistics to Promote Annotation (VESPA), a new interactive visual analysis software tool focused on assisting scientists with the annotation of prokaryotic genomes through the integration of proteomics and transcriptomics data with current genome location coordinates. Results VESPA is a desktop Java™ application that integrates high-throughput proteomics data (peptide-centric) and transcriptomics (probe or RNA-Seq) data into a genomic context, all of which can be visualized at three levels of genomic resolution. Data is interrogated via searches linked to the genome visualizations to find regions with high likelihood of mis-annotation. Search results are linked to exports for further validation outside of VESPA, or potential coding regions can be analyzed concurrently with the software through interaction with BLAST. VESPA is applied to two use cases (Yersinia pestis Pestoides F and Synechococcus sp. PCC 7002) to demonstrate the rapid manner in which mis-annotations can be found and explored in VESPA using either proteomics data alone, or in combination with transcriptomic data. Conclusions VESPA is an interactive visual analytics tool that integrates high-throughput data into a genomic context to facilitate the discovery of structural mis-annotations in prokaryotic genomes. Data is evaluated via visual analysis across multiple levels of genomic resolution, linked searches and interaction with existing bioinformatics tools. We highlight the novel functionality of VESPA and core programming requirements for visualization of these large heterogeneous datasets for a client-side application. The software is freely available at https://www.biopilot.org/docs/Software/Vespa.php. PMID:22480257

  20. Use of Semantic Technology to Create Curated Data Albums

    NASA Technical Reports Server (NTRS)

    Ramachandran, Rahul; Kulkarni, Ajinkya; Li, Xiang; Sainju, Roshan; Bakare, Rohan; Basyal, Sabin

    2014-01-01

    One of the continuing challenges in any Earth science investigation is the discovery and access of useful science content from the increasingly large volumes of Earth science data and related information available online. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. Those who know exactly the data sets they need can obtain the specific files using these systems. However, in cases where researchers are interested in studying an event of research interest, they must manually assemble a variety of relevant data sets by searching the different distributed data systems. Consequently, there is a need to design and build specialized search and discovery tools in Earth science that can filter through large volumes of distributed online data and information and only aggregate the relevant resources needed to support climatology and case studies. This paper presents a specialized search and discovery tool that automatically creates curated Data Albums. The tool was designed to enable key elements of the search process such as dynamic interaction and sense-making. The tool supports dynamic interaction via different modes of interactivity and visual presentation of information. The compilation of information and data into a Data Album is analogous to a shoebox within the sense-making framework. This tool automates most of the tedious information/data gathering tasks for researchers. Data curation by the tool is achieved via an ontology-based, relevancy ranking algorithm that filters out nonrelevant information and data. The curation enables better search results as compared to the simple keyword searches provided by existing data systems in Earth science.

  1. Use of Semantic Technology to Create Curated Data Albums

    NASA Technical Reports Server (NTRS)

    Ramachandran, Rahul; Kulkarni, Ajinkya; Li, Xiang; Sainju, Roshan; Bakare, Rohan; Basyal, Sabin; Fox, Peter (Editor); Norack, Tom (Editor)

    2014-01-01

    One of the continuing challenges in any Earth science investigation is the discovery and access of useful science content from the increasingly large volumes of Earth science data and related information available online. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. Those who know exactly the data sets they need can obtain the specific files using these systems. However, in cases where researchers are interested in studying an event of research interest, they must manually assemble a variety of relevant data sets by searching the different distributed data systems. Consequently, there is a need to design and build specialized search and discovery tools in Earth science that can filter through large volumes of distributed online data and information and only aggregate the relevant resources needed to support climatology and case studies. This paper presents a specialized search and discovery tool that automatically creates curated Data Albums. The tool was designed to enable key elements of the search process such as dynamic interaction and sense-making. The tool supports dynamic interaction via different modes of interactivity and visual presentation of information. The compilation of information and data into a Data Album is analogous to a shoebox within the sense-making framework. This tool automates most of the tedious information/data gathering tasks for researchers. Data curation by the tool is achieved via an ontology-based, relevancy ranking algorithm that filters out non-relevant information and data. The curation enables better search results as compared to the simple keyword searches provided by existing data systems in Earth science.

  2. RNA motif search with data-driven element ordering.

    PubMed

    Rampášek, Ladislav; Jimenez, Randi M; Lupták, Andrej; Vinař, Tomáš; Brejová, Broňa

    2016-05-18

    In this paper, we study the problem of RNA motif search in long genomic sequences. This approach uses a combination of sequence and structure constraints to uncover new distant homologs of known functional RNAs. The problem is NP-hard and is traditionally solved by backtracking algorithms. We have designed a new algorithm for RNA motif search and implemented a new motif search tool RNArobo. The tool enhances the RNAbob descriptor language, allowing insertions in helices, which enables better characterization of ribozymes and aptamers. A typical RNA motif consists of multiple elements and the running time of the algorithm is highly dependent on their ordering. By approaching the element ordering problem in a principled way, we demonstrate more than 100-fold speedup of the search for complex motifs compared to previously published tools. We have developed a new method for RNA motif search that allows for a significant speedup of the search of complex motifs that include pseudoknots. Such speed improvements are crucial at a time when the rate of DNA sequencing outpaces growth in computing. RNArobo is available at http://compbio.fmph.uniba.sk/rnarobo .

  3. PIPI: PTM-Invariant Peptide Identification Using Coding Method.

    PubMed

    Yu, Fengchao; Li, Ning; Yu, Weichuan

    2016-12-02

    In computational proteomics, the identification of peptides with an unlimited number of post-translational modification (PTM) types is a challenging task. The computational cost associated with database search increases exponentially with respect to the number of modified amino acids and linearly with respect to the number of potential PTM types at each amino acid. The problem becomes intractable very quickly if we want to enumerate all possible PTM patterns. To address this issue, one group of methods named restricted tools (including Mascot, Comet, and MS-GF+) only allow a small number of PTM types in database search process. Alternatively, the other group of methods named unrestricted tools (including MS-Alignment, ProteinProspector, and MODa) avoids enumerating PTM patterns with an alignment-based approach to localizing and characterizing modified amino acids. However, because of the large search space and PTM localization issue, the sensitivity of these unrestricted tools is low. This paper proposes a novel method named PIPI to achieve PTM-invariant peptide identification. PIPI belongs to the category of unrestricted tools. It first codes peptide sequences into Boolean vectors and codes experimental spectra into real-valued vectors. For each coded spectrum, it then searches the coded sequence database to find the top scored peptide sequences as candidates. After that, PIPI uses dynamic programming to localize and characterize modified amino acids in each candidate. We used simulation experiments and real data experiments to evaluate the performance in comparison with restricted tools (i.e., Mascot, Comet, and MS-GF+) and unrestricted tools (i.e., Mascot with error tolerant search, MS-Alignment, ProteinProspector, and MODa). Comparison with restricted tools shows that PIPI has a close sensitivity and running speed. Comparison with unrestricted tools shows that PIPI has the highest sensitivity except for Mascot with error tolerant search and ProteinProspector. These two tools simplify the task by only considering up to one modified amino acid in each peptide, which results in a higher sensitivity but has difficulty in dealing with multiple modified amino acids. The simulation experiments also show that PIPI has the lowest false discovery proportion, the highest PTM characterization accuracy, and the shortest running time among the unrestricted tools.
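
    A hedged sketch of the coding idea this record describes, not PIPI's actual implementation: a peptide is coded as a Boolean vector over m/z bins holding its theoretical b-ion masses, a spectrum as a real-valued vector of binned peak intensities, and candidates are ranked by dot product. The bin width, the b-ion-only fragment model and the residue-mass subset are simplifying assumptions.

    import numpy as np

    RESIDUE = {'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
               'V': 99.06841, 'L': 113.08406, 'K': 128.09496, 'R': 156.10111}
    BIN, MAX_MZ = 1.0, 2000.0
    NBINS = int(MAX_MZ / BIN)

    def peptide_vector(seq):
        vec = np.zeros(NBINS, dtype=bool)        # Boolean coding of theoretical fragment bins
        mass = 1.00794                           # approximate N-terminal proton for b-ions
        for aa in seq[:-1]:
            mass += RESIDUE[aa]
            vec[int(mass / BIN) % NBINS] = True
        return vec

    def spectrum_vector(peaks):                  # peaks: list of (mz, intensity)
        vec = np.zeros(NBINS)
        for mz, inten in peaks:
            if mz < MAX_MZ:
                vec[int(mz / BIN)] += inten
        return vec

    def rank(peaks, candidates):
        spec = spectrum_vector(peaks)
        return sorted(candidates, key=lambda p: float(spec @ peptide_vector(p)), reverse=True)

    print(rank([(129.1, 50.0), (242.2, 80.0)], ["ALK", "GSK", "VPR"]))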

  4. Medical scientists' information practices in the research work context.

    PubMed

    Roos, Annikki

    2015-03-01

    The aim of the study was to investigate the information practices of medical scientists in the research work context. This is a qualitative study based on semi-structured interviews. The interviews were transcribed and analysed in a web tool for qualitative analysis. Activity theory was used as the theoretical framework. The generating motives for the information related activity come from the core activity, research work. The motives result in actions such as searching and using information. Usability, accessibility and ease of use are the most important conditions that determine information related operations. Medical scientists search and use information most of all in the beginning and at the end of the research work. Information practices appear as an instrument producing activity to the central activity. Information services should be embedded in this core activity and in practice libraries should follow researchers' workflow and embed their tools and services in it. © 2015 Health Libraries Journal.

  5. The Search Engine for Multi-Proteoform Complexes: An Online Tool for the Identification and Stoichiometry Determination of Protein Complexes.

    PubMed

    Skinner, Owen S; Schachner, Luis F; Kelleher, Neil L

    2016-12-08

    Recent advances in top-down mass spectrometry using native electrospray now enable the analysis of intact protein complexes with relatively small sample amounts in an untargeted mode. Here, we describe how to characterize both homo- and heteropolymeric complexes with high molecular specificity using input data produced by tandem mass spectrometry of whole protein assemblies. The tool described is a "search engine for multi-proteoform complexes," (SEMPC) and is available for free online. The output is a list of candidate multi-proteoform complexes and scoring metrics, which are used to define a distinct set of one or more unique protein subunits, their overall stoichiometry in the intact complex, and their pre- and post-translational modifications. Thus, we present an approach for the identification and characterization of intact protein complexes from native mass spectrometry data. © 2016 by John Wiley & Sons, Inc. Copyright © 2016 John Wiley & Sons, Inc.

  6. Update on Genomic Databases and Resources at the National Center for Biotechnology Information.

    PubMed

    Tatusova, Tatiana

    2016-01-01

    The National Center for Biotechnology Information (NCBI), as a primary public repository of genomic sequence data, collects and maintains enormous amounts of heterogeneous data. Data for genomes, genes, gene expression, gene variation, gene families, proteins, and protein domains are integrated with the analytical, search, and retrieval resources through the NCBI website. A text-based search and retrieval system provides a fast and easy way to navigate across diverse biological databases. Comparative genome analysis tools lead to further understanding of evolutionary processes, quickening the pace of discovery. Recent technological innovations have ignited an explosion in genome sequencing that has fundamentally changed our understanding of the biology of living organisms. This huge increase in DNA sequence data presents new challenges for the information management system and the visualization tools. New strategies have been designed to bring order to this genome sequence shockwave and improve the usability of associated data.

  7. Database systems for knowledge-based discovery.

    PubMed

    Jagarlapudi, Sarma A R P; Kishan, K V Radha

    2009-01-01

    Several database systems have been developed to provide valuable information from the bench chemist to biologist, medical practitioner to pharmaceutical scientist in a structured format. The advent of information technology and computational power enhanced the ability to access large volumes of data in the form of a database where one could do compilation, searching, archiving, analysis, and finally knowledge derivation. Although data are of variable types, the tools used for database creation, searching and retrieval are similar. GVK BIO has been developing databases from publicly available scientific literature in specific areas like medicinal chemistry, clinical research, and mechanism-based toxicity so that the structured databases containing vast data could be used in several areas of research. These databases were classified as reference centric or compound centric depending on the way the database systems were designed. Integration of these databases with knowledge derivation tools would enhance the value of these systems toward better drug design and discovery.

  8. Rich Language Analysis for Counterterrorism

    NASA Astrophysics Data System (ADS)

    Guidère, Mathieu; Howard, Newton; Argamon, Shlomo

    Accurate and relevant intelligence is critical for effective counterterrorism. Too much irrelevant information is as bad or worse than not enough information. Modern computational tools promise to provide better search and summarization capabilities to help analysts filter and select relevant and key information. However, to do this task effectively, such tools must have access to levels of meaning beyond the literal. Terrorists operating in context-rich cultures like fundamentalist Islam use messages with multiple levels of interpretation, which are easily misunderstood by non-insiders. This chapter discusses several kinds of such “encryption” used by terrorists and insurgents in the Arabic language, and how knowledge of such methods can be used to enhance computational text analysis techniques for use in counterterrorism.

  9. Characteristics and use of urban health indicator tools by municipal built environment policy and decision-makers: a systematic review protocol.

    PubMed

    Pineo, Helen; Glonti, Ketevan; Rutter, Harry; Zimmermann, Nicole; Wilkinson, Paul; Davies, Michael

    2017-01-13

    There is wide agreement that there is a lack of attention to health in municipal environmental policy-making, such as urban planning and regeneration. Explanations for this include differing professional norms between health and urban environment professionals, system complexity and limited evidence for causality between attributes of the built environment and health outcomes. Data from urban health indicator (UHI) tools are potentially a valuable form of evidence for local government policy and decision-makers. Although many UHI tools have been specifically developed to inform policy, there is poor understanding of how they are used. This study aims to identify the nature and characteristics of UHI tools and their use by municipal built environment policy and decision-makers. Health and social sciences databases (ASSIA, Campbell Library, EMBASE, MEDLINE, Scopus, Social Policy and Practice and Web of Science Core Collection) will be searched for studies using UHI tools alongside hand-searching of key journals and citation searches of included studies. Advanced searches of practitioner websites and Google will also be used to find grey literature. Search results will be screened for UHI tools, and for studies which report on or evaluate the use of such tools. Data about UHI tools will be extracted to compile a census and taxonomy of existing tools based on their specific characteristics and purpose. In addition, qualitative and quantitative studies about the use of these tools will be appraised using quality appraisal tools produced by the UK National Institute for Health and Care Excellence (NICE) and synthesised in order to gain insight into the perceptions, value and use of UHI tools in the municipal built environment policy and decision-making process. This review is not registered with PROSPERO. This systematic review focuses specifically on UHI tools that assess the physical environment's impact on health (such as transport, housing, air quality and greenspace). This study will help indicator producers understand whether this form of evidence is of value to built environment policy and decision-makers and how such tools should be tailored for this audience. N/A.

  10. SS-Wrapper: a package of wrapper applications for similarity searches on Linux clusters.

    PubMed

    Wang, Chunlin; Lefkowitz, Elliot J

    2004-10-28

    Large-scale sequence comparison is a powerful tool for biological inference in modern molecular biology. Comparing new sequences to those in annotated databases is a useful source of functional and structural information about these sequences. Using software such as the basic local alignment search tool (BLAST) or HMMPFAM to identify statistically significant matches between newly sequenced segments of genetic material and those in databases is an important task for most molecular biologists. Searching algorithms are intrinsically slow and data-intensive, especially in light of the rapid growth of biological sequence databases due to the emergence of high throughput DNA sequencing techniques. Thus, traditional bioinformatics tools are impractical on PCs and even on dedicated UNIX servers. To take advantage of larger databases and more reliable methods, high performance computation becomes necessary. We describe the implementation of SS-Wrapper (Similarity Search Wrapper), a package of wrapper applications that can parallelize similarity search applications on a Linux cluster. Our wrapper utilizes a query segmentation-search (QS-search) approach to parallelize sequence database search applications. It takes into consideration load balancing between each node on the cluster to maximize resource usage. QS-search is designed to wrap many different search tools, such as BLAST and HMMPFAM using the same interface. This implementation does not alter the original program, so newly obtained programs and program updates should be accommodated easily. Benchmark experiments using QS-search to optimize BLAST and HMMPFAM showed that QS-search accelerated the performance of these programs almost linearly in proportion to the number of CPUs used. We have also implemented a wrapper that utilizes a database segmentation approach (DS-BLAST) that provides a complementary solution for BLAST searches when the database is too large to fit into the memory of a single node. Used together, QS-search and DS-BLAST provide a flexible solution to adapt sequential similarity searching applications in high performance computing environments. Their ease of use and their ability to wrap a variety of database search programs provide an analytical architecture to assist both the seasoned bioinformaticist and the wet-bench biologist.
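
    A minimal sketch of a query-segmentation (QS-search-style) wrapper, written to illustrate the approach described above rather than to reproduce SS-Wrapper itself. It assumes an NCBI BLAST+ 'blastp' binary on PATH and a formatted database named 'mydb'; the file names and the greedy load-balancing heuristic are illustrative assumptions.

    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    def read_fasta(path):
        seqs, header, buf = [], None, []
        for line in open(path):
            if line.startswith('>'):
                if header:
                    seqs.append((header, ''.join(buf)))
                header, buf = line.strip(), []
            else:
                buf.append(line.strip())
        if header:
            seqs.append((header, ''.join(buf)))
        return seqs

    def split_queries(seqs, n_chunks):
        # Greedy load balancing: assign each sequence to the chunk with the fewest residues so far.
        chunks, loads = [[] for _ in range(n_chunks)], [0] * n_chunks
        for header, seq in sorted(seqs, key=lambda s: -len(s[1])):
            i = loads.index(min(loads))
            chunks[i].append((header, seq))
            loads[i] += len(seq)
        return chunks

    def run_chunk(i, chunk):
        qfile, ofile = f"chunk_{i}.fasta", f"chunk_{i}.out"
        with open(qfile, 'w') as f:
            for header, seq in chunk:
                f.write(f"{header}\n{seq}\n")
        subprocess.run(["blastp", "-query", qfile, "-db", "mydb", "-out", ofile], check=True)
        return ofile

    if __name__ == "__main__":
        chunks = split_queries(read_fasta("queries.fasta"), n_chunks=4)
        with ProcessPoolExecutor(max_workers=4) as pool:
            print(list(pool.map(run_chunk, range(4), chunks)))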

  11. SS-Wrapper: a package of wrapper applications for similarity searches on Linux clusters

    PubMed Central

    Wang, Chunlin; Lefkowitz, Elliot J

    2004-01-01

    Background Large-scale sequence comparison is a powerful tool for biological inference in modern molecular biology. Comparing new sequences to those in annotated databases is a useful source of functional and structural information about these sequences. Using software such as the basic local alignment search tool (BLAST) or HMMPFAM to identify statistically significant matches between newly sequenced segments of genetic material and those in databases is an important task for most molecular biologists. Searching algorithms are intrinsically slow and data-intensive, especially in light of the rapid growth of biological sequence databases due to the emergence of high throughput DNA sequencing techniques. Thus, traditional bioinformatics tools are impractical on PCs and even on dedicated UNIX servers. To take advantage of larger databases and more reliable methods, high performance computation becomes necessary. Results We describe the implementation of SS-Wrapper (Similarity Search Wrapper), a package of wrapper applications that can parallelize similarity search applications on a Linux cluster. Our wrapper utilizes a query segmentation-search (QS-search) approach to parallelize sequence database search applications. It takes into consideration load balancing between each node on the cluster to maximize resource usage. QS-search is designed to wrap many different search tools, such as BLAST and HMMPFAM using the same interface. This implementation does not alter the original program, so newly obtained programs and program updates should be accommodated easily. Benchmark experiments using QS-search to optimize BLAST and HMMPFAM showed that QS-search accelerated the performance of these programs almost linearly in proportion to the number of CPUs used. We have also implemented a wrapper that utilizes a database segmentation approach (DS-BLAST) that provides a complementary solution for BLAST searches when the database is too large to fit into the memory of a single node. Conclusions Used together, QS-search and DS-BLAST provide a flexible solution to adapt sequential similarity searching applications in high performance computing environments. Their ease of use and their ability to wrap a variety of database search programs provide an analytical architecture to assist both the seasoned bioinformaticist and the wet-bench biologist. PMID:15511296

  12. Tools used to assess medical students competence in procedural skills at the end of a primary medical degree: a systematic review.

    PubMed

    Morris, Marie C; Gallagher, Tom K; Ridgway, Paul F

    2012-01-01

    The objective was to systematically review the literature to identify and grade tools used for the end point assessment of procedural skills (e.g., phlebotomy, IV cannulation, suturing) competence in medical students prior to certification. The authors searched eight bibliographic databases electronically: ERIC, Medline, CINAHL, EMBASE, Psychinfo, PsychLIT, EBM Reviews and the Cochrane databases. Two reviewers independently reviewed the literature to identify procedural assessment tools used specifically for assessing medical students within the PRISMA framework, the inclusion/exclusion criteria and search period. Papers on OSATS and DOPS were excluded as they focused on post-registration assessment and clinical rather than simulated competence. Of 659 abstracted articles, 56 identified procedural assessment tools. Only 11 specifically assessed medical students. The final 11 studies consisted of 1 randomised controlled trial, 4 comparative and 6 descriptive studies yielding 12 heterogeneous procedural assessment tools for analysis. Seven tools addressed four discrete pre-certification skills: basic suture (3), airway management (2), nasogastric tube insertion (1) and intravenous cannulation (1). One tool used a generic assessment of procedural skills. Two tools focused on postgraduate laparoscopic skills and one on osteopathic students and thus were not included in this review. The levels of evidence are low with regard to reliability (κ = 0.65-0.71), and only minimum validity (face and content) is achieved. In conclusion, there are no tools designed specifically to assess competence of procedural skills in a final certification examination. There is a need to develop standardised tools with proven reliability and validity for assessment of procedural skills competence at the end of medical training. Medicine graduates must have comparable levels of procedural skills acquisition entering the clinical workforce irrespective of the country of training.

  13. RooStatsCms: A tool for analysis modelling, combination and statistical studies

    NASA Astrophysics Data System (ADS)

    Piparo, D.; Schott, G.; Quast, G.

    2010-04-01

    RooStatsCms is an object oriented statistical framework based on the RooFit technology. Its scope is to allow the modelling, statistical analysis and combination of multiple search channels for new phenomena in High Energy Physics. It provides a variety of methods described in literature implemented as classes, whose design is oriented to the execution of multiple CPU intensive jobs on batch systems or on the Grid.

  14. Study of Adversarial and Defensive Components in an Experimental Machinery Control Systems Laboratory Environment

    DTIC Science & Technology

    2014-09-01

    …prevention system (IPS), capable of performing real-time traffic analysis and packet logging on IP networks [25]. Snort’s features include protocol analysis and content searching/matching. Snort can detect a variety of attacks and network probes, such as buffer overflows, port scans and OS … www.digitalbond.com/tools/the-rack/jtr-s7-password-cracking/ … Kismet (Mike Kershaw): cross-platform, open-source wireless network detector and wireless sniffer.

  15. Functional Connectivity Between Superior Parietal Lobule and Primary Visual Cortex "at Rest" Predicts Visual Search Efficiency.

    PubMed

    Bueichekú, Elisenda; Ventura-Campos, Noelia; Palomar-García, María-Ángeles; Miró-Padilla, Anna; Parcet, María-Antonia; Ávila, César

    2015-10-01

    Spatiotemporal activity that emerges spontaneously "at rest" has been proposed to reflect individual a priori biases in cognitive processing. This research focused on testing neurocognitive models of visual attention by studying the functional connectivity (FC) of the superior parietal lobule (SPL), given its central role in establishing priority maps during visual search tasks. Twenty-three human participants completed a functional magnetic resonance imaging session that featured a resting-state scan, followed by a visual search task based on the alphanumeric category effect. As expected, the behavioral results showed longer reaction times and more errors for the within-category (i.e., searching a target letter among letters) than the between-category search (i.e., searching a target letter among numbers). The within-category condition was related to greater activation of the superior and inferior parietal lobules, occipital cortex, inferior frontal cortex, dorsal anterior cingulate cortex, and the superior colliculus than the between-category search. The resting-state FC analysis of the SPL revealed a broad network that included connections with the inferotemporal cortex, dorsolateral prefrontal cortex, and dorsal frontal areas like the supplementary motor area and frontal eye field. Noteworthy, the regression analysis revealed that the more efficient participants in the visual search showed stronger FC between the SPL and areas of primary visual cortex (V1) related to the search task. We shed some light on how the SPL establishes a priority map of the environment during visual attention tasks and how FC is a valuable tool for assessing individual differences while performing cognitive tasks.

  16. MerCat: a versatile k-mer counter and diversity estimator for database-independent property analysis obtained from metagenomic and/or metatranscriptomic sequencing data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Richard A.; Panyala, Ajay R.; Glass, Kevin A.

    MerCat is a parallel, highly scalable and modular property software package for robust analysis of features in next-generation sequencing data. MerCat inputs include assembled contigs and raw sequence reads from any platform resulting in feature abundance counts tables. MerCat allows for direct analysis of data properties without reference sequence database dependency commonly used by search tools such as BLAST and/or DIAMOND for compositional analysis of whole community shotgun sequencing (e.g. metagenomes and metatranscriptomes).
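
    A toy illustration of database-independent k-mer counting and a simple diversity estimate (Shannon entropy) over reads, in the spirit of the properties MerCat reports; this is not MerCat's implementation and ignores its parallel, scalable design. The reads are invented.

    from collections import Counter
    from math import log2

    def kmer_counts(reads, k=4):
        counts = Counter()
        for read in reads:
            for i in range(len(read) - k + 1):
                counts[read[i:i + k]] += 1       # count every overlapping k-mer
        return counts

    def shannon_diversity(counts):
        total = sum(counts.values())
        return -sum((c / total) * log2(c / total) for c in counts.values())

    reads = ["ACGTACGTGG", "TTACGTACGA"]
    counts = kmer_counts(reads, k=4)
    print(counts.most_common(3), round(shannon_diversity(counts), 3))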

  17. Annotare--a tool for annotating high-throughput biomedical investigations and resulting data.

    PubMed

    Shankar, Ravi; Parkinson, Helen; Burdett, Tony; Hastings, Emma; Liu, Junmin; Miller, Michael; Srinivasa, Rashmi; White, Joseph; Brazma, Alvis; Sherlock, Gavin; Stoeckert, Christian J; Ball, Catherine A

    2010-10-01

    Computational methods in molecular biology will increasingly depend on standards-based annotations that describe biological experiments in an unambiguous manner. Annotare is a software tool that enables biologists to easily annotate their high-throughput experiments, biomaterials and data in a standards-compliant way that facilitates meaningful search and analysis. Annotare is available from http://code.google.com/p/annotare/ under the terms of the open-source MIT License (http://www.opensource.org/licenses/mit-license.php). It has been tested on both Mac and Windows.

  18. Content and Design Features of Academic Health Sciences Libraries' Home Pages.

    PubMed

    McConnaughy, Rozalynd P; Wilson, Steven P

    2018-01-01

    The goal of this content analysis was to identify commonly used content and design features of academic health sciences library home pages. After developing a checklist, data were collected from 135 academic health sciences library home pages. The core components of these library home pages included a contact phone number, a contact email address, an Ask-a-Librarian feature, the physical address listed, a feedback/suggestions link, subject guides, a discovery tool or database-specific search box, multimedia, social media, a site search option, a responsive web design, and a copyright year or update date.

  19. Tools for observational gait analysis in patients with stroke: a systematic review.

    PubMed

    Ferrarello, Francesco; Bianchi, Valeria Anna Maria; Baccini, Marco; Rubbieri, Gaia; Mossello, Enrico; Cavallini, Maria Chiara; Marchionni, Niccolò; Di Bari, Mauro

    2013-12-01

    Stroke severely affects walking ability, and assessment of gait kinematics is important in defining diagnosis, planning treatment, and evaluating interventions in stroke rehabilitation. Although observational gait analysis is the most common approach to evaluate gait kinematics, tools useful for this purpose have received little attention in the scientific literature and have not been thoroughly reviewed. The aims of this systematic review were to identify tools proposed to conduct observational gait analysis in adults with a stroke, to summarize evidence concerning their quality, and to assess their implementation in rehabilitation research and clinical practice. An extensive search was performed of original articles reporting on visual/observational tools developed to investigate gait kinematics in adults with a stroke. Two reviewers independently selected studies, extracted data, assessed quality of the included studies, and scored the metric properties and clinical utility of each tool. Rigor in reporting metric properties and dissemination of the tools also was evaluated. Five tools were identified, not all of which had been tested adequately for their metric properties. Evaluation of content validity was partially satisfactory. Reliability was poorly investigated in all but one tool. Concurrent validity and sensitivity to change were shown for 3 and 2 tools, respectively. Overall, adequate levels of quality were rarely reached. The dissemination of the tools was poor. Based on critical appraisal, the Gait Assessment and Intervention Tool shows a good level of quality, and its use in stroke rehabilitation is recommended. Rigorous studies are needed for the other tools in order to establish their usefulness.

  20. An Assessment, Survey, and Systems Engineering Design of Information Sharing and Discovery Systems in a Network-Centric Environment

    DTIC Science & Technology

    2009-12-01

    …type of information available through DISA search tools: Centralized Search, Federated Search, and Enterprise Search (Defense Information Systems Agency …) … Federated Search, and Enterprise Search services. Likewise, EFD and GCDS support COIs in discovering information by making information …

  1. An Automated Ab Initio Framework for Identifying New Ferroelectrics

    NASA Astrophysics Data System (ADS)

    Smidt, Tess; Reyes-Lillo, Sebastian E.; Jain, Anubhav; Neaton, Jeffrey B.

    Ferroelectric materials have a wide range of technological applications including non-volatile RAM and optoelectronics. In this work, we present an automated first-principles search for ferroelectrics. We integrate density functional theory, crystal structure databases, symmetry tools, workflow software, and a custom analysis toolkit to build a library of known and proposed ferroelectrics. We screen thousands of candidates using symmetry relations between nonpolar and polar structure pairs. We use two search strategies: 1) polar-nonpolar pairs with the same composition, and 2) polar-nonpolar structure type pairs. Results are automatically parsed, stored in a database, and accessible via a web interface showing distortion animations and plots of polarization and total energy as a function of distortion. We benchmark our results against experimental data, present new ferroelectric candidates found through our search, and discuss future work on expanding this search methodology to other material classes such as anti-ferroelectrics and multiferroics.

  2. PROSPECT improves cis-acting regulatory element prediction by integrating expression profile data with consensus pattern searches

    PubMed Central

    Fujibuchi, Wataru; Anderson, John S. J.; Landsman, David

    2001-01-01

    Consensus pattern and matrix-based searches designed to predict cis-acting transcriptional regulatory sequences have historically been subject to large numbers of false positives. We sought to decrease false positives by incorporating expression profile data into a consensus pattern-based search method. We have systematically analyzed the expression phenotypes of over 6000 yeast genes, across 121 expression profile experiments, and correlated them with the distribution of 14 known regulatory elements over sequences upstream of the genes. Our method is based on a metric we term probabilistic element assessment (PEA), which is a ranking of potential sites based on sequence similarity in the upstream regions of genes with similar expression phenotypes. For eight of the 14 known elements that we examined, our method had a much higher selectivity than a naïve consensus pattern search. Based on our analysis, we have developed a web-based tool called PROSPECT, which allows consensus pattern-based searching of gene clusters obtained from microarray data. PMID:11574681
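
    A simplified stand-in for the idea behind this record: restrict a consensus-pattern search to the upstream regions of genes in the same expression cluster and rank genes by match count. The IUPAC-to-regex translation is standard; the scoring is an illustrative assumption, not PROSPECT's published PEA metric, and the gene names and sequences are invented.

    import re

    IUPAC = {'A': 'A', 'C': 'C', 'G': 'G', 'T': 'T', 'R': '[AG]', 'Y': '[CT]',
             'S': '[CG]', 'W': '[AT]', 'K': '[GT]', 'M': '[AC]', 'N': '[ACGT]'}

    def consensus_to_regex(pattern):
        return re.compile(''.join(IUPAC[c] for c in pattern))

    def rank_cluster(upstream, cluster, consensus):
        # upstream: dict gene -> upstream sequence; cluster: genes with similar expression phenotypes
        rx = consensus_to_regex(consensus)
        hits = {g: len(rx.findall(upstream[g])) for g in cluster if g in upstream}
        return sorted(hits.items(), key=lambda kv: kv[1], reverse=True)

    upstream = {"YDR001C": "TGACTCATTTGACTCA", "YDR002W": "ACGTACGTACGT"}
    print(rank_cluster(upstream, ["YDR001C", "YDR002W"], "TGACTC"))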

  3. Visual search of cyclic spatio-temporal events

    NASA Astrophysics Data System (ADS)

    Gautier, Jacques; Davoine, Paule-Annick; Cunty, Claire

    2018-05-01

    The analysis of spatio-temporal events, and especially of relationships between their different dimensions (space-time-thematic attributes), can be done with geovisualization interfaces. But few geovisualization tools integrate the cyclic dimension of spatio-temporal event series (natural events or social events). Time Coil and Time Wave diagrams represent both the linear time and the cyclic time. By introducing a cyclic temporal scale, these diagrams may highlight the cyclic characteristics of spatio-temporal events. However, the settable cyclic temporal scales are limited to usual durations like days or months. Because of that, these diagrams cannot be used to visualize cyclic events that reappear with an unusual period, and they do not allow a visual search for cyclic events. Nor do they make it possible to identify relationships between the cyclic behavior of events and their spatial features, in particular to identify localised cyclic events. The inability to represent cyclic time outside the temporal diagram of multi-view geovisualization interfaces limits the analysis of relationships between the cyclic reappearance of events and their other dimensions. In this paper, we propose a method and a geovisualization tool, based on an extension of Time Coil and Time Wave, to support a visual search for cyclic events by allowing any duration to be set as the diagram's cyclic temporal scale. We also propose a symbology approach to push the representation of cyclic time into the map, in order to improve the analysis of relationships between space and the cyclic behavior of events.
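
    A small sketch of the core operation discussed above: folding event timestamps onto a cyclic temporal scale with an arbitrary, user-defined period, so that events recurring with that period line up at the same phase. The 17-day period and the example events are invented.

    from datetime import datetime, timedelta
    from math import tau  # 2*pi

    def cyclic_phase(events, origin, period):
        """Return (event, phase in [0, 1), angle in radians) for an arbitrary period."""
        out = []
        for t in events:
            frac = ((t - origin) % period) / period   # position within the cycle
            out.append((t, frac, frac * tau))
        return out

    period = timedelta(days=17)                       # an unusual period, not a day/month/year
    origin = datetime(2018, 1, 1)
    events = [datetime(2018, 1, 5), datetime(2018, 1, 22), datetime(2018, 2, 9)]
    for t, frac, angle in cyclic_phase(events, origin, period):
        print(t.date(), round(frac, 3), round(angle, 3))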

  4. An application of a relational database system for high-throughput prediction of elemental compositions from accurate mass values.

    PubMed

    Sakurai, Nozomu; Ara, Takeshi; Kanaya, Shigehiko; Nakamura, Yukiko; Iijima, Yoko; Enomoto, Mitsuo; Motegi, Takeshi; Aoki, Koh; Suzuki, Hideyuki; Shibata, Daisuke

    2013-01-15

    High-accuracy mass values detected by high-resolution mass spectrometry analysis enable prediction of elemental compositions, and thus are used for metabolite annotations in metabolomic studies. Here, we report an application of a relational database to significantly improve the rate of elemental composition predictions. By searching a database of pre-calculated elemental compositions with fixed kinds and numbers of atoms, the approach eliminates redundant evaluations of the same formula that occur in repeated calculations with other tools. When our approach is compared with HR2, which is one of the fastest tools available, our database search times were at least 109 times shorter than those of HR2. When a solid-state drive (SSD) was applied, the search time was 488 times shorter at 5 ppm mass tolerance and 1833 times at 0.1 ppm. Even if the search by HR2 was performed with 8 threads in a high-spec Windows 7 PC, the database search times were at least 26 and 115 times shorter without and with the SSD. These improvements were enhanced in a low spec Windows XP PC. We constructed a web service 'MFSearcher' to query the database in a RESTful manner. Available for free at http://webs2.kazusa.or.jp/mfsearcher. The web service is implemented in Java, MySQL, Apache and Tomcat, with all major browsers supported. sakurai@kazusa.or.jp Supplementary data are available at Bioinformatics online.
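
    A hedged sketch of the pre-calculated composition lookup described above, using SQLite: formulas and their exact masses are stored once, indexed on mass, and each observed mass is resolved with a range query within a ppm tolerance. The schema and the three example rows are assumptions for illustration, not MFSearcher's actual database.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE formulas (formula TEXT, exact_mass REAL)")
    con.execute("CREATE INDEX idx_mass ON formulas(exact_mass)")
    con.executemany("INSERT INTO formulas VALUES (?, ?)",
                    [("C6H12O6", 180.06339), ("C9H11NO2", 165.07898), ("C10H13N5O4", 267.09675)])

    def candidates(observed_mass, ppm=5.0):
        tol = observed_mass * ppm / 1e6                  # convert ppm tolerance to Da
        rows = con.execute(
            "SELECT formula, exact_mass FROM formulas WHERE exact_mass BETWEEN ? AND ?",
            (observed_mass - tol, observed_mass + tol))
        return rows.fetchall()

    print(candidates(180.0634))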

  5. Evaluation of Federated Searching Options for the School Library

    ERIC Educational Resources Information Center

    Abercrombie, Sarah E.

    2008-01-01

    Three hosted federated search tools, Follett One Search, Gale PowerSearch Plus, and WebFeat Express, were configured and implemented in a school library. Databases from five vendors and the OPAC were systematically searched. Federated search results were compared with each other and to the results of the same searches in the database's native…

  6. Searching social networks for subgraph patterns

    NASA Astrophysics Data System (ADS)

    Ogaard, Kirk; Kase, Sue; Roy, Heather; Nagi, Rakesh; Sambhoos, Kedar; Sudit, Moises

    2013-06-01

    Software tools for Social Network Analysis (SNA) are being developed which support various types of analysis of social networks extracted from social media websites (e.g., Twitter). Once extracted and stored in a database such social networks are amenable to analysis by SNA software. This data analysis often involves searching for occurrences of various subgraph patterns (i.e., graphical representations of entities and relationships). The authors have developed the Graph Matching Toolkit (GMT) which provides an intuitive Graphical User Interface (GUI) for a heuristic graph matching algorithm called the Truncated Search Tree (TruST) algorithm. GMT is a visual interface for graph matching algorithms processing large social networks. GMT enables an analyst to draw a subgraph pattern by using a mouse to select categories and labels for nodes and links from drop-down menus. GMT then executes the TruST algorithm to find the top five occurrences of the subgraph pattern within the social network stored in the database. GMT was tested using a simulated counter-insurgency dataset consisting of cellular phone communications within a populated area of operations in Iraq. The results indicated GMT (when executing the TruST graph matching algorithm) is a time-efficient approach to searching large social networks. GMT's visual interface to a graph matching algorithm enables intelligence analysts to quickly analyze and summarize the large amounts of data necessary to produce actionable intelligence.
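
    An illustration of searching a social network for occurrences of a small subgraph pattern with category-labelled nodes, using NetworkX's VF2 matcher. This shows the kind of query described above but is not the TruST heuristic algorithm, and the graph data are invented.

    import networkx as nx
    from networkx.algorithms import isomorphism

    G = nx.Graph()                                        # the social network
    G.add_node("p1", category="person"); G.add_node("p2", category="person")
    G.add_node("loc1", category="location")
    G.add_edges_from([("p1", "p2"), ("p1", "loc1"), ("p2", "loc1")])

    P = nx.Graph()                                        # the drawn pattern: two people sharing a location
    P.add_node("a", category="person"); P.add_node("b", category="person")
    P.add_node("x", category="location")
    P.add_edges_from([("a", "x"), ("b", "x")])

    matcher = isomorphism.GraphMatcher(
        G, P, node_match=isomorphism.categorical_node_match("category", None))
    for mapping in matcher.subgraph_isomorphisms_iter():
        print(mapping)                                    # network node -> pattern node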

  7. Data analysis of gravitational-wave signals from spinning neutron stars. III. Detection statistics and computational requirements

    NASA Astrophysics Data System (ADS)

    Jaranowski, Piotr; Królak, Andrzej

    2000-03-01

    We develop the analytic and numerical tools for data analysis of the continuous gravitational-wave signals from spinning neutron stars for ground-based laser interferometric detectors. The statistical data analysis method that we investigate is maximum likelihood detection which for the case of Gaussian noise reduces to matched filtering. We study in detail the statistical properties of the optimum functional that needs to be calculated in order to detect the gravitational-wave signal and estimate its parameters. We find it particularly useful to divide the parameter space into elementary cells such that the values of the optimal functional are statistically independent in different cells. We derive formulas for false alarm and detection probabilities both for the optimal and the suboptimal filters. We assess the computational requirements needed to do the signal search. We compare a number of criteria to build sufficiently accurate templates for our data analysis scheme. We verify the validity of our concepts and formulas by means of the Monte Carlo simulations. We present algorithms by which one can estimate the parameters of the continuous signals accurately. We find, confirming earlier work of other authors, that given a 100 Gflops computational power an all-sky search for observation time of 7 days and directed search for observation time of 120 days are possible whereas an all-sky search for 120 days of observation time is computationally prohibitive.
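
    A toy matched-filter sketch under a white-Gaussian-noise assumption: the detection statistic is the correlation of the data with a known template over all time offsets, computed via FFT and normalized by the template norm. It illustrates the principle only; the paper's analysis treats colored detector noise, parameter-space cells and false-alarm thresholds, none of which appear here.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4096
    t = np.arange(n)
    template = np.sin(2 * np.pi * 0.05 * t) * np.exp(-0.5 * ((t - n // 2) / 300.0) ** 2)

    data = rng.normal(size=n) + 0.3 * np.roll(template, 500)   # signal hidden at an unknown offset

    # Matched filter: correlate data with the template for every offset using FFTs.
    corr = np.fft.irfft(np.fft.rfft(data) * np.conj(np.fft.rfft(template)), n)
    snr = corr / np.sqrt(np.sum(template ** 2))                # statistic in units of noise standard deviations
    print(int(np.argmax(np.abs(snr))), float(np.max(np.abs(snr))))   # best offset (circular) and its statistic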

  8. SA-Search: a web tool for protein structure mining based on a Structural Alphabet

    PubMed Central

    Guyon, Frédéric; Camproux, Anne-Claude; Hochez, Joëlle; Tufféry, Pierre

    2004-01-01

    SA-Search is a web tool that can be used to mine for protein structures and extract structural similarities. It is based on a hidden Markov model derived Structural Alphabet (SA) that allows the compression of three-dimensional (3D) protein conformations into a one-dimensional (1D) representation using a limited number of prototype conformations. Using such a representation, classical methods developed for amino acid sequences can be employed. Currently, SA-Search permits the performance of fast 3D similarity searches such as the extraction of exact words using a suffix tree approach, and the search for fuzzy words viewed as a simple 1D sequence alignment problem. SA-Search is available at http://bioserv.rpbs.jussieu.fr/cgi-bin/SA-Search. PMID:15215446
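
    A toy illustration of the "exact word" search over Structural Alphabet (SA) strings: every k-letter word from a bank of SA-encoded structures is indexed, and a query word returns all (structure, position) occurrences. SA-Search itself uses a suffix tree for this; a dictionary index is used here only to keep the sketch short, and the SA strings are invented.

    from collections import defaultdict

    def build_index(sa_strings, k):
        index = defaultdict(list)
        for name, s in sa_strings.items():
            for i in range(len(s) - k + 1):
                index[s[i:i + k]].append((name, i))
        return index

    bank = {"1abc_A": "AWDWDDAAWWD", "2xyz_B": "DDAWDWDDAAW"}
    index = build_index(bank, k=4)
    print(index["WDWD"])        # structures and positions sharing this local conformation word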

  9. SA-Search: a web tool for protein structure mining based on a Structural Alphabet.

    PubMed

    Guyon, Frédéric; Camproux, Anne-Claude; Hochez, Joëlle; Tufféry, Pierre

    2004-07-01

    SA-Search is a web tool that can be used to mine for protein structures and extract structural similarities. It is based on a hidden Markov model derived Structural Alphabet (SA) that allows the compression of three-dimensional (3D) protein conformations into a one-dimensional (1D) representation using a limited number of prototype conformations. Using such a representation, classical methods developed for amino acid sequences can be employed. Currently, SA-Search permits the performance of fast 3D similarity searches such as the extraction of exact words using a suffix tree approach, and the search for fuzzy words viewed as a simple 1D sequence alignment problem. SA-Search is available at http://bioserv.rpbs.jussieu.fr/cgi-bin/SA-Search.

  10. CUAHSI Data Services: Tools and Cyberinfrastructure for Water Data Discovery, Research and Collaboration

    NASA Astrophysics Data System (ADS)

    Seul, M.; Brazil, L.; Castronova, A. M.

    2017-12-01

    Enabling research surrounding interdisciplinary topics often requires a combination of finding, managing, and analyzing large data sets and models from multiple sources. This challenge has led the National Science Foundation to make strategic investments in developing community data tools and cyberinfrastructure that focus on water data, as it is a central need for many of these research topics. CUAHSI (The Consortium of Universities for the Advancement of Hydrologic Science, Inc.) is a non-profit organization funded by the National Science Foundation to aid students, researchers, and educators in using and managing data and models to support research and education in the water sciences. This presentation will focus on open-source CUAHSI-supported tools that enable enhanced data discovery online using advanced searching capabilities and computational analysis run in virtual environments pre-designed for educators and scientists so they can focus their efforts on data analysis rather than IT set-up.

  11. The Aging Eye

    MedlinePlus


  12. The Front-End to Google for Teachers' Online Searching

    ERIC Educational Resources Information Center

    Seyedarabi, Faezeh

    2006-01-01

    This paper reports on ongoing work in designing and developing a personalised search tool for teachers' online searching, using the Google search engine (repository) for the implementation and testing of the first research prototype.

  13. Pepitome: evaluating improved spectral library search for identification complementarity and quality assessment

    PubMed Central

    Dasari, Surendra; Chambers, Matthew C.; Martinez, Misti A.; Carpenter, Kristin L.; Ham, Amy-Joan L.; Vega-Montoto, Lorenzo J.; Tabb, David L.

    2012-01-01

    Spectral libraries have emerged as a viable alternative to protein sequence databases for peptide identification. These libraries contain previously detected peptide sequences and their corresponding tandem mass spectra (MS/MS). Search engines can then identify peptides by comparing experimental MS/MS scans to those in the library. Many of these algorithms employ the dot product score for measuring the quality of a spectrum-spectrum match (SSM). This scoring system does not offer a clear statistical interpretation and ignores fragment ion m/z discrepancies in the scoring. We developed a new spectral library search engine, Pepitome, which employs statistical systems for scoring SSMs. Pepitome outperformed the leading library search tool, SpectraST, when analyzing data sets acquired on three different mass spectrometry platforms. We characterized the reliability of spectral library searches by confirming shotgun proteomics identifications through RNA-Seq data. Applying spectral library and database searches on the same sample revealed their complementary nature. Pepitome identifications enabled the automation of quality analysis and quality control (QA/QC) for shotgun proteomics data acquisition pipelines. PMID:22217208
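
    The dot product score for a spectrum-spectrum match mentioned above can be illustrated by binning two MS/MS peak lists and taking the normalized dot product of the resulting vectors. This is a schematic version only; the bin width, peak lists and scoring details below are illustrative and do not reproduce Pepitome's or SpectraST's actual scoring.

      import numpy as np

      def binned_dot_product(spec_a, spec_b, bin_width=1.0, max_mz=2000.0):
          """Normalized dot product between two peak lists [(m/z, intensity), ...],
          a schematic spectrum-spectrum match score."""
          n_bins = int(max_mz / bin_width)

          def to_vector(peaks):
              v = np.zeros(n_bins)
              for mz, intensity in peaks:
                  if mz < max_mz:
                      v[int(mz / bin_width)] += intensity
              norm = np.linalg.norm(v)
              return v / norm if norm > 0 else v

          return float(np.dot(to_vector(spec_a), to_vector(spec_b)))

      query = [(175.1, 30.0), (300.2, 100.0), (512.3, 55.0)]      # experimental scan
      library = [(175.1, 25.0), (300.2, 90.0), (640.4, 10.0)]     # library spectrum
      print(binned_dot_product(query, library))   # 1.0 would mean identical normalized spectra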

  14. A Hybrid Evaluation System Framework (Shell & Web) with Standardized Access to Climate Model Data and Verification Tools for a Clear Climate Science Infrastructure on Big Data High Performance Computers

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Kunst, O.; Cubasch, U.

    2014-12-01

    The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), as part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid system features HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard using the meta information of the self-describing model, reanalysis and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This meta data system, with its advanced but easy-to-handle search tool, supports users, developers and their tools in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via the shell or web system. Therefore, plugged-in tools automatically gain transparency and reproducibility. Furthermore, when configurations match while starting an evaluation tool, the system suggests using results already produced by other users, saving CPU time, I/O and disk space. This study presents the different techniques and advantages of such a hybrid evaluation system making use of a Big Data HPC in climate science. website: www-miklip.dkrz.de visitor-login: guest password: miklip
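
    One concrete behaviour described above is that every analysis configuration is stored and, when a new run's configuration matches an earlier one, the system suggests reusing the stored result. The sketch below imitates that idea by hashing the tool name and parameters and looking the hash up before recomputing; the field names and in-memory store are hypothetical, not the MiKlip/MySQL schema.

      import hashlib
      import json

      _history = {}   # stand-in for the history/configuration database

      def config_key(tool_name, parameters):
          """Stable hash of a tool configuration (hypothetical fields)."""
          payload = json.dumps({"tool": tool_name, "params": parameters}, sort_keys=True)
          return hashlib.sha256(payload.encode()).hexdigest()

      def run_tool(tool_name, parameters, compute):
          """Run an analysis, or reuse a stored result for an identical configuration."""
          key = config_key(tool_name, parameters)
          if key in _history:
              print("configuration seen before - reusing stored result")
              return _history[key]
          result = compute(parameters)
          _history[key] = result
          return result

      params = {"model": "MPI-ESM", "period": "1960-2000"}
      run_tool("bias_map", params, compute=lambda p: "plot for " + p["model"])   # computed
      run_tool("bias_map", params, compute=lambda p: "plot for " + p["model"])   # reused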

  15. A Hybrid Evaluation System Framework (Shell & Web) with Standardized Access to Climate Model Data and Verification Tools for a Clear Climate Science Infrastructure on Big Data High Performance Computers

    NASA Astrophysics Data System (ADS)

    Kadow, Christopher; Illing, Sebastian; Kunst, Oliver; Ulbrich, Uwe; Cubasch, Ulrich

    2015-04-01

    The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), as part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid system features HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard using the meta information of the self-describing model, reanalysis and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This meta data system, with its advanced but easy-to-handle search tool, supports users, developers and their tools in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via the shell or web system. Therefore, plugged-in tools automatically gain transparency and reproducibility. Furthermore, when configurations match while starting an evaluation tool, the system suggests using results already produced by other users, saving CPU time, I/O and disk space. This study presents the different techniques and advantages of such a hybrid evaluation system making use of a Big Data HPC in climate science. website: www-miklip.dkrz.de visitor-login: click on "Guest"

  16. The Biomolecular Crystallization Database Version 4: expanded content and new features.

    PubMed

    Tung, Michael; Gallagher, D Travis

    2009-01-01

    The Biological Macromolecular Crystallization Database (BMCD) has been a publicly available resource since 1988, providing a curated archive of information on crystal growth for proteins and other biological macromolecules. The BMCD content has recently been expanded to include 14 372 crystal entries. The resource continues to be freely available at http://xpdb.nist.gov:8060/BMCD4. In addition, the software has been adapted to support the Java-based Lucene query language, enabling detailed searching over specific parameters, and explicit search of parameter ranges is offered for five numeric variables. Extensive tools have been developed for import and handling of data from the RCSB Protein Data Bank. The updated BMCD is called version 4.02 or BMCD4. BMCD4 entries have been expanded to include macromolecule sequence, enabling more elaborate analysis of relations among protein properties, crystal-growth conditions and the geometric and diffraction properties of the crystals. The BMCD version 4.02 contains greatly expanded content and enhanced search capabilities to facilitate scientific analysis and design of crystal-growth strategies.
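
    The explicit range search over numeric crystallization parameters mentioned above can be pictured as a filter over records, analogous to a Lucene range clause such as pH:[6 TO 8]. The records and field names below are hypothetical, not the actual BMCD4 schema or query syntax.

      entries = [   # hypothetical crystallization records
          {"protein": "lysozyme", "pH": 4.6, "temperature_C": 18.0},
          {"protein": "thaumatin", "pH": 6.8, "temperature_C": 20.0},
          {"protein": "insulin", "pH": 8.1, "temperature_C": 4.0},
      ]

      def range_query(records, field, low, high):
          """Return records whose numeric field lies in [low, high],
          analogous to a Lucene range clause such as pH:[6 TO 8]."""
          return [r for r in records if low <= r[field] <= high]

      print(range_query(entries, "pH", 6.0, 8.0))   # -> the thaumatin entry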

  17. CNV-WebStore: online CNV analysis, storage and interpretation.

    PubMed

    Vandeweyer, Geert; Reyniers, Edwin; Wuyts, Wim; Rooms, Liesbeth; Kooy, R Frank

    2011-01-05

    Microarray technology allows the analysis of genomic aberrations at an ever increasing resolution, making functional interpretation of these vast amounts of data the main bottleneck in routine implementation of high resolution array platforms, and emphasising the need for a centralised and easy-to-use CNV data management and interpretation system. We present CNV-WebStore, an online platform to streamline the processing and downstream interpretation of microarray data in a clinical context, tailored towards but not limited to the Illumina BeadArray platform. Provided analysis tools include CNV analysis, parent-of-origin and uniparental disomy detection. Interpretation tools include data visualisation, gene prioritisation, automated PubMed searching, linking data to several genome browsers and annotation of CNVs based on several public databases. Finally, a module is provided for uniform reporting of results. CNV-WebStore is able to present copy number data in an intuitive way to both lab technicians and clinicians, making it a useful tool in daily clinical practice.

  18. Use of methodological tools for assessing the quality of studies in periodontology and implant dentistry: a systematic review.

    PubMed

    Faggion, Clovis M; Huda, Fahd; Wasiak, Jason

    2014-06-01

    To evaluate the methodological approaches used to assess the quality of studies included in systematic reviews (SRs) in periodontology and implant dentistry. Two electronic databases (PubMed and Cochrane Database of Systematic Reviews) were searched independently to identify SRs examining interventions published through 2 September 2013. The reference lists of included SRs and records of 10 specialty dental journals were searched manually. Methodological approaches were assessed using seven criteria based on the Cochrane Handbook for Systematic Reviews of Interventions. Temporal trends in methodological quality were also explored. Of the 159 SRs with meta-analyses included in the analysis, 44 (28%) reported the use of domain-based tools, 15 (9%) reported the use of checklists and 7 (4%) reported the use of scales. Forty-two (26%) SRs reported use of more than one tool. Criteria were met heterogeneously; authors of 15 (9%) publications incorporated the quality of evidence of primary studies into SRs, whereas 69% of SRs reported methodological approaches in the Materials/Methods section. Reporting of four criteria was significantly better in recent (2010-2013) than in previous publications. The analysis identified several methodological limitations of approaches used to assess evidence in studies included in SRs in periodontology and implant dentistry. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  19. Web-Based Tools for Text-Based Patient-Provider Communication in Chronic Conditions: Scoping Review

    PubMed Central

    Grunfeld, Eva; Makuwaza, Tutsirai; Bender, Jacqueline L

    2017-01-01

    Background Patients with chronic conditions require ongoing care which not only necessitates support from health care providers outside appointments but also self-management. Web-based tools for text-based patient-provider communication, such as secure messaging, allow for sharing of contextual information and personal narrative in a simple accessible medium, empowering patients and enabling their providers to address emerging care needs. Objective The objectives of this study were to (1) conduct a systematic search of the published literature and the Internet for Web-based tools for text-based communication between patients and providers; (2) map tool characteristics, their intended use, contexts in which they were used, and by whom; (3) describe the nature of their evaluation; and (4) understand the terminology used to describe the tools. Methods We conducted a scoping review using the MEDLINE (Medical Literature Analysis and Retrieval System Online) and EMBASE (Excerpta Medica Database) databases. We summarized information on the characteristics of the tools (structure, functions, and communication paradigm), intended use, context and users, evaluation (study design and outcomes), and terminology. We performed a parallel search of the Internet to compare with tools identified in the published literature. Results We identified 54 papers describing 47 unique tools from 13 countries studied in the context of 68 chronic health conditions. The majority of tools (77%, 36/47) had functions in addition to communication (eg, viewable care plan, symptom diary, or tracker). Eight tools (17%, 8/47) were described as allowing patients to communicate with the team or multiple health care providers. Most of the tools were intended to support communication regarding symptom reporting (49%, 23/47), and lifestyle or behavior modification (36%, 17/47). The type of health care providers who used tools to communicate with patients were predominantly allied health professionals of various disciplines (30%, 14/47), nurses (23%, 11/47), and physicians (19%, 9/47), among others. Over half (52%, 25/48) of the tools were evaluated in randomized controlled trials, and 23 tools (48%, 23/48) were evaluated in nonrandomized studies. Terminology of tools varied by intervention type and functionality and did not consistently reflect a theme of communication. The majority of tools found in the Internet search were patient portals from 6 developers; none were found among published articles. Conclusions Web-based tools for text-based patient-provider communication were identified from a wide variety of clinical contexts and with varied functionality. Tools were most prevalent in contexts where intended use was self-management. Few tools for team-based communication were found, but this may become increasingly important as chronic disease care becomes more interdisciplinary. PMID:29079552

  20. Web-Based Tools for Text-Based Patient-Provider Communication in Chronic Conditions: Scoping Review.

    PubMed

    Voruganti, Teja; Grunfeld, Eva; Makuwaza, Tutsirai; Bender, Jacqueline L

    2017-10-27

    Patients with chronic conditions require ongoing care which not only necessitates support from health care providers outside appointments but also self-management. Web-based tools for text-based patient-provider communication, such as secure messaging, allow for sharing of contextual information and personal narrative in a simple accessible medium, empowering patients and enabling their providers to address emerging care needs. The objectives of this study were to (1) conduct a systematic search of the published literature and the Internet for Web-based tools for text-based communication between patients and providers; (2) map tool characteristics, their intended use, contexts in which they were used, and by whom; (3) describe the nature of their evaluation; and (4) understand the terminology used to describe the tools. We conducted a scoping review using the MEDLINE (Medical Literature Analysis and Retrieval System Online) and EMBASE (Excerpta Medica Database) databases. We summarized information on the characteristics of the tools (structure, functions, and communication paradigm), intended use, context and users, evaluation (study design and outcomes), and terminology. We performed a parallel search of the Internet to compare with tools identified in the published literature. We identified 54 papers describing 47 unique tools from 13 countries studied in the context of 68 chronic health conditions. The majority of tools (77%, 36/47) had functions in addition to communication (eg, viewable care plan, symptom diary, or tracker). Eight tools (17%, 8/47) were described as allowing patients to communicate with the team or multiple health care providers. Most of the tools were intended to support communication regarding symptom reporting (49%, 23/47), and lifestyle or behavior modification (36%, 17/47). The type of health care providers who used tools to communicate with patients were predominantly allied health professionals of various disciplines (30%, 14/47), nurses (23%, 11/47), and physicians (19%, 9/47), among others. Over half (52%, 25/48) of the tools were evaluated in randomized controlled trials, and 23 tools (48%, 23/48) were evaluated in nonrandomized studies. Terminology of tools varied by intervention type and functionality and did not consistently reflect a theme of communication. The majority of tools found in the Internet search were patient portals from 6 developers; none were found among published articles. Web-based tools for text-based patient-provider communication were identified from a wide variety of clinical contexts and with varied functionality. Tools were most prevalent in contexts where intended use was self-management. Few tools for team-based communication were found, but this may become increasingly important as chronic disease care becomes more interdisciplinary. ©Teja Voruganti, Eva Grunfeld, Tutsirai Makuwaza, Jacqueline L Bender. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 27.10.2017.

  1. Ganglion Cyst

    MedlinePlus

    ... with aspiration and injection therapy, there are nevertheless cases in which the ganglion cyst returns.

  2. Web Search Studies: Multidisciplinary Perspectives on Web Search Engines

    NASA Astrophysics Data System (ADS)

    Zimmer, Michael

    Perhaps the most significant tool of our internet age is the web search engine, providing a powerful interface for accessing the vast amount of information available on the world wide web and beyond. While still in its infancy compared to the knowledge tools that precede it - such as the dictionary or encyclopedia - the impact of web search engines on society and culture has already received considerable attention from a variety of academic disciplines and perspectives. This article aims to organize a meta-discipline of “web search studies,” centered around a nucleus of major research on web search engines from five key perspectives: technical foundations and evaluations; transaction log analyses; user studies; political, ethical, and cultural critiques; and legal and policy analyses.

  3. Make Mine a Metasearcher, Please!

    ERIC Educational Resources Information Center

    Repman, Judi; Carlson, Randal D.

    2000-01-01

    Describes metasearch tools and explains their value in helping library media centers improve students' Web searches. Discusses Boolean queries and the emphasis on speed at the expense of comprehensiveness; and compares four metasearch tools, including the number of search engines consulted, user control, and databases included. (LRW)

  4. Patent urachus repair - slideshow

    MedlinePlus


  5. U.S. Army Research Laboratory (ARL) XPairIt Simulator for Peptide Docking and Analysis

    DTIC Science & Technology

    2014-07-01

    results from a case study, docking a short peptide to a small protein. For this test we choose the 1RXZ system from the Protein Data Bank, which ... core of XPairIt, which additionally contains many data management and organization options, analysis tools, and custom simulation methodology. Two

  6. BRAD, the genetics and genomics database for Brassica plants.

    PubMed

    Cheng, Feng; Liu, Shengyi; Wu, Jian; Fang, Lu; Sun, Silong; Liu, Bo; Li, Pingxia; Hua, Wei; Wang, Xiaowu

    2011-10-13

    Brassica species include both vegetable and oilseed crops, which are very important in daily life. The Brassica species also represent an excellent system for studying numerous aspects of plant biology, specifically the analysis of genome evolution following polyploidy, so they are also very important for scientific research. Now that the genome of Brassica rapa has been assembled, it is time for deep mining of the genome data. BRAD, the Brassica database, is a web-based resource focusing on genome-scale genetic and genomic data for important Brassica crops. BRAD was built on the first whole-genome sequence and on further data analysis of the Brassica A-genome species, Brassica rapa (Chiifu-401-42). It provides datasets such as the complete genome sequence of B. rapa, which was de novo assembled from Illumina GA II short reads and from BAC clone sequences, predicted genes and associated annotations, non-coding RNAs, transposable elements (TE), B. rapa genes orthologous to those in A. thaliana, as well as genetic markers and linkage maps. BRAD offers useful searching and data mining tools, including search across annotation datasets, search for syntenic or non-syntenic orthologs, and search of the flanking regions of a given target, as well as BLAST and Gbrowse tools. BRAD allows users to enter almost any kind of information, such as a B. rapa or A. thaliana gene ID, physical position or genetic marker. BRAD, a new database focusing on the genetics and genomics of Brassica plants, has been developed; it aims to help scientists and breeders fully and efficiently use the genome data of Brassica plants. BRAD will be continuously updated and can be accessed through http://brassicadb.org.

  7. TargetSearch--a Bioconductor package for the efficient preprocessing of GC-MS metabolite profiling data.

    PubMed

    Cuadros-Inostroza, Alvaro; Caldana, Camila; Redestig, Henning; Kusano, Miyako; Lisec, Jan; Peña-Cortés, Hugo; Willmitzer, Lothar; Hannah, Matthew A

    2009-12-16

    Metabolite profiling, the simultaneous quantification of multiple metabolites in an experiment, is becoming increasingly popular, particularly with the rise of systems-level biology. The workhorse in this field is gas-chromatography hyphenated with mass spectrometry (GC-MS). The high-throughput of this technology coupled with a demand for large experiments has led to data pre-processing, i.e. the quantification of metabolites across samples, becoming a major bottleneck. Existing software has several limitations, including restricted maximum sample size, systematic errors and low flexibility. However, the biggest limitation is that the resulting data usually require extensive hand-curation, which is subjective and can typically take several days to weeks. We introduce the TargetSearch package, an open source tool which is a flexible and accurate method for pre-processing even very large numbers of GC-MS samples within hours. We developed a novel strategy to iteratively correct and update retention time indices for searching and identifying metabolites. The package is written in the R programming language with computationally intensive functions written in C for speed and performance. The package includes a graphical user interface to allow easy use by those unfamiliar with R. TargetSearch allows fast and accurate data pre-processing for GC-MS experiments and overcomes the sample number limitations and manual curation requirements of existing software. We validate our method by carrying out an analysis against both a set of known chemical standard mixtures and of a biological experiment. In addition we demonstrate its capabilities and speed by comparing it with other GC-MS pre-processing tools. We believe this package will greatly ease current bottlenecks and facilitate the analysis of metabolic profiling data.
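
    The central algorithmic idea above is the correction of retention time indices (RIs) against marker compounds before matching peaks to a reference library. The sketch below shows a single, non-iterative version of that step, written in Python for brevity; TargetSearch itself is an R/Bioconductor package with C internals, and the marker values and library entries here are made up.

      import numpy as np

      def retention_index(rt, marker_rts, marker_ris):
          """Convert retention times to retention indices by piecewise-linear
          interpolation between marker compounds."""
          return np.interp(rt, marker_rts, marker_ris)

      def match_library(peak_rts, library, marker_rts, marker_ris, ri_window=5.0):
          """Assign peaks to library metabolites whose reference RI lies within ri_window."""
          peak_ris = retention_index(np.asarray(peak_rts), marker_rts, marker_ris)
          hits = []
          for rt, ri in zip(peak_rts, peak_ris):
              for name, ref_ri in library.items():
                  if abs(ri - ref_ri) <= ri_window:
                      hits.append((rt, name))
          return hits

      marker_rts = [300.0, 600.0, 900.0]      # marker retention times (s)
      marker_ris = [1000.0, 1500.0, 2000.0]   # assigned retention indices
      library = {"alanine": 1105.0, "glucose": 1880.0}   # hypothetical reference RIs
      print(match_library([365.0, 830.0], library, marker_rts, marker_ris))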

  8. TargetSearch - a Bioconductor package for the efficient preprocessing of GC-MS metabolite profiling data

    PubMed Central

    2009-01-01

    Background Metabolite profiling, the simultaneous quantification of multiple metabolites in an experiment, is becoming increasingly popular, particularly with the rise of systems-level biology. The workhorse in this field is gas-chromatography hyphenated with mass spectrometry (GC-MS). The high-throughput of this technology coupled with a demand for large experiments has led to data pre-processing, i.e. the quantification of metabolites across samples, becoming a major bottleneck. Existing software has several limitations, including restricted maximum sample size, systematic errors and low flexibility. However, the biggest limitation is that the resulting data usually require extensive hand-curation, which is subjective and can typically take several days to weeks. Results We introduce the TargetSearch package, an open source tool which is a flexible and accurate method for pre-processing even very large numbers of GC-MS samples within hours. We developed a novel strategy to iteratively correct and update retention time indices for searching and identifying metabolites. The package is written in the R programming language with computationally intensive functions written in C for speed and performance. The package includes a graphical user interface to allow easy use by those unfamiliar with R. Conclusions TargetSearch allows fast and accurate data pre-processing for GC-MS experiments and overcomes the sample number limitations and manual curation requirements of existing software. We validate our method by carrying out an analysis against both a set of known chemical standard mixtures and of a biological experiment. In addition we demonstrate its capabilities and speed by comparing it with other GC-MS pre-processing tools. We believe this package will greatly ease current bottlenecks and facilitate the analysis of metabolic profiling data. PMID:20015393

  9. SearchSmallRNA: a graphical interface tool for the assemblage of viral genomes using small RNA libraries data.

    PubMed

    de Andrade, Roberto R S; Vaslin, Maite F S

    2014-03-07

    Next-generation parallel sequencing (NGS) allows the identification of viral pathogens by sequencing the small RNAs of infected hosts. Thus, viral genomes may be assembled from host immune response products without prior virus enrichment, amplification or purification. However, mapping of the vast information obtained presents a bioinformatics challenge. To bypass the need for command-line use and basic bioinformatics knowledge, we developed mapping software with a graphical interface for the assembly of viral genomes from small RNA datasets obtained by NGS. SearchSmallRNA was developed in the Java language version 7 using the NetBeans IDE 7.1 software. The program also allows the analysis of the viral small interfering RNA (vsRNA) profile, providing an overview of the size distribution and other features of the vsRNAs produced in infected cells. The program performs comparisons between each sequenced read present in a library and a chosen reference genome. Reads with Hamming distances smaller than or equal to an allowed number of mismatches are selected as positives and used for the assembly of a long nucleotide genome sequence. In order to validate the software, distinct analyses using NGS datasets obtained from HIV and two plant viruses were used to reconstruct whole viral genomes. The SearchSmallRNA program was able to reconstruct viral genomes from NGS small RNA datasets with a high degree of reliability, so it will be a valuable tool for virus sequencing and discovery. It is accessible and free to all research communities and has the advantage of an easy-to-use graphical interface. SearchSmallRNA was written in Java and is freely available at http://www.microbiologia.ufrj.br/ssrna/.
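
    The core comparison described above, accepting reference positions whose Hamming distance to a read is within an allowed number of mismatches, can be sketched as follows. This is a simplified illustration (no reverse-complement handling, no indexing for speed), not the SearchSmallRNA implementation, and the sequences are invented.

      def hamming(a, b):
          """Number of mismatching positions between two equal-length sequences."""
          return sum(x != y for x, y in zip(a, b))

      def map_read(read, reference, max_mismatches):
          """Return all reference positions where the read aligns with at most
          max_mismatches mismatches (forward strand only)."""
          hits = []
          for pos in range(len(reference) - len(read) + 1):
              if hamming(read, reference[pos:pos + len(read)]) <= max_mismatches:
                  hits.append(pos)
          return hits

      reference = "ACGTACGTTAGGCCATACGT"   # invented reference fragment
      print(map_read("ACGTTAGG", reference, max_mismatches=1))   # -> [4]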

  10. SearchSmallRNA: a graphical interface tool for the assemblage of viral genomes using small RNA libraries data

    PubMed Central

    2014-01-01

    Background Next-generation parallel sequencing (NGS) allows the identification of viral pathogens by sequencing the small RNAs of infected hosts. Thus, viral genomes may be assembled from host immune response products without prior virus enrichment, amplification or purification. However, mapping of the vast information obtained presents a bioinformatics challenge. Methods To bypass the need for command-line use and basic bioinformatics knowledge, we developed mapping software with a graphical interface for the assembly of viral genomes from small RNA datasets obtained by NGS. SearchSmallRNA was developed in the Java language version 7 using the NetBeans IDE 7.1 software. The program also allows the analysis of the viral small interfering RNA (vsRNA) profile, providing an overview of the size distribution and other features of the vsRNAs produced in infected cells. Results The program performs comparisons between each sequenced read present in a library and a chosen reference genome. Reads with Hamming distances smaller than or equal to an allowed number of mismatches are selected as positives and used for the assembly of a long nucleotide genome sequence. In order to validate the software, distinct analyses using NGS datasets obtained from HIV and two plant viruses were used to reconstruct whole viral genomes. Conclusions The SearchSmallRNA program was able to reconstruct viral genomes from NGS small RNA datasets with a high degree of reliability, so it will be a valuable tool for virus sequencing and discovery. It is accessible and free to all research communities and has the advantage of an easy-to-use graphical interface. Availability and implementation SearchSmallRNA was written in Java and is freely available at http://www.microbiologia.ufrj.br/ssrna/. PMID:24607237

  11. Motif enrichment tool.

    PubMed

    Blatti, Charles; Sinha, Saurabh

    2014-07-01

    The Motif Enrichment Tool (MET) provides an online interface that enables users to find major transcriptional regulators of their gene sets of interest. MET searches the appropriate regulatory region around each gene and identifies which transcription factor DNA-binding specificities (motifs) are statistically overrepresented. Motif enrichment analysis is currently available for many metazoan species including human, mouse, fruit fly, planaria and flowering plants. MET also leverages high-throughput experimental data such as ChIP-seq and DNase-seq from ENCODE and ModENCODE to identify the regulatory targets of a transcription factor with greater precision. The results from MET are produced in real time and are linked to a genome browser for easy follow-up analysis. Use of the web tool is free and open to all, and there is no login requirement. ADDRESS: http://veda.cs.uiuc.edu/MET/. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
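
    Motif overrepresentation in a gene set can be tested, in its simplest form, with a hypergeometric test that compares motif occurrences in the query set against the genomic background. The sketch below shows that generic test; the counts are hypothetical, and MET's actual statistic and regulatory-region handling may differ.

      from scipy.stats import hypergeom

      def motif_enrichment_pvalue(hits_in_set, set_size, hits_in_genome, genome_size):
          """Hypergeometric p-value for observing at least hits_in_set motif-carrying
          genes in a gene set of set_size, given hits_in_genome among genome_size genes."""
          return hypergeom.sf(hits_in_set - 1, genome_size, hits_in_genome, set_size)

      # hypothetical counts: 12 of 50 query genes carry the motif,
      # versus 400 of 20000 genes genome-wide
      print(motif_enrichment_pvalue(12, 50, 400, 20000))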

  12. VSO For Dummies

    NASA Astrophysics Data System (ADS)

    Schwartz, Richard A.; Zarro, D.; Csillaghy, A.; Dennis, B.; Tolbert, A. K.; Etesi, L.

    2009-05-01

    We report on our activities to integrate VSO search and retrieval capabilities into standard data access, display, and analysis tools. In addition to its standard Web-based search form, the VSO provides an Interactive Data Language (IDL) client (vso_search) that is available through the Solar Software (SSW) package. We have incorporated this client into an IDL-widget interface program (show_synop) that allows for more simplified searching and downloading of VSO datasets directly into a user's IDL data analysis environment. In particular, we have provided the capability to read VSO datasets into a general purpose IDL package (plotman) that can display different datatypes (lightcurves, images, and spectra) and perform basic data operations such as zooming, image overlays, solar rotation, etc. Currently, the show_synop tool supports access to ground-based and space-based (SOHO, STEREO, and Hinode) observations, and has the capability to include new datasets as they become available. A user encounters two major hurdles when using the VSO: (1) Instrument-specific software (such as level-0 file readers and data-prepping procedures) may not be available in the user's local SSW distribution. (2) Recent calibration files (such as flat-fields) are not automatically distributed with the analysis software. To address these issues, we have developed a dedicated server (prepserver) that incorporates all the latest instrument-specific software libraries and calibration files. The prepserver uses an IDL-Java bridge to read and implement data processing requests from a client and return a processed data file that can be readily displayed with the show_synop/plotman package. The advantage of the prepserver is that the user is only required to install the general branch (gen) of the SSW tree, and is freed from the more onerous task of installing instrument-specific libraries and calibration files. We will demonstrate how the prepserver can be used to read, process, and overlay SOHO/EIT, TRACE, SECCHI/EUVI, and RHESSI images.

  13. An advanced search engine for patent analytics in medicinal chemistry.

    PubMed

    Pasche, Emilie; Gobeill, Julien; Teodoro, Douglas; Gaudinat, Arnaud; Vishnykova, Dina; Lovis, Christian; Ruch, Patrick

    2012-01-01

    Patent collections contain an important amount of medical-related knowledge, but existing tools have been reported to lack useful functionalities. We present here the development of TWINC, an advanced search engine dedicated to patent retrieval in the domain of health and life sciences. Our tool embeds two search modes: an ad hoc search to retrieve relevant patents given a short query, and a related-patent search to retrieve similar patents given a patent. Both search modes rely on tuning experiments performed during several patent retrieval competitions. Moreover, TWINC is enhanced with interactive modules, such as chemical query expansion, which is of prime importance to cope with the various ways of naming biomedical entities. While the related-patent search showed promising performance, the ad hoc search produced fairly contrasted results. Nonetheless, TWINC performed well during the Chemathlon task of the PatOlympics competition and experts appreciated its usability.

  14. Quality tools and resources to support organisational improvement integral to high-quality primary care: a systematic review of published and grey literature.

    PubMed

    Janamian, Tina; Upham, Susan J; Crossland, Lisa; Jackson, Claire L

    2016-04-18

    To conduct a systematic review of the literature to identify existing online primary care quality improvement tools and resources to support organisational improvement related to the seven elements in the Primary Care Practice Improvement Tool (PC-PIT), with the identified tools and resources to progress to a Delphi study for further assessment of relevance and utility. Systematic review of the international published and grey literature. CINAHL, Embase and PubMed databases were searched in March 2014 for articles published between January 2004 and December 2013. GreyNet International and other relevant websites and repositories were also searched in March-April 2014 for documents dated between 1992 and 2012. All citations were imported into a bibliographic database. Published and unpublished tools and resources were included in the review if they were in English, related to primary care quality improvement and addressed any of the seven PC-PIT elements of a high-performing practice. Tools and resources that met the eligibility criteria were then evaluated for their accessibility, relevance, utility and comprehensiveness using a four-criteria appraisal framework. We used a data extraction template to systematically extract information from eligible tools and resources. A content analysis approach was used to explore the tools and resources and collate relevant information: name of the tool or resource, year and country of development, author, name of the organisation that provided access and its URL, accessibility information or problems, overview of each tool or resource and the quality improvement element(s) it addresses. If available, a copy of the tool or resource was downloaded into the bibliographic database, along with supporting evidence (published or unpublished) on its use in primary care. This systematic review identified 53 tools and resources that can potentially be provided as part of a suite of tools and resources to support primary care practices in improving the quality of their practice, to achieve improved health outcomes.

  15. New User-Friendly Approach to Obtain an Eisenberg Plot and Its Use as a Practical Tool in Protein Sequence Analysis

    PubMed Central

    Keller, Rob C.A.

    2011-01-01

    The Eisenberg plot or hydrophobic moment plot methodology is one of the most frequently used methods of bioinformatics. Bioinformatics is more and more recognized as a helpful tool in Life Sciences in general, and recent developments in approaches recognizing lipid binding regions in proteins are promising in this respect. In this study a bioinformatics approach specialized in identifying lipid binding helical regions in proteins was used to obtain an Eisenberg plot. The validity of the Heliquest generated hydrophobic moment plot was checked and exemplified. This study indicates that the Eisenberg plot methodology can be transferred to another hydrophobicity scale and renders a user-friendly approach which can be utilized in routine checks in protein–lipid interaction and in protein and peptide lipid binding characterization studies. A combined approach seems to be advantageous and results in a powerful tool in the search of helical lipid-binding regions in proteins and peptides. The strength and limitations of the Eisenberg plot approach itself are discussed as well. The presented approach not only leads to a better understanding of the nature of the protein–lipid interactions but also provides a user-friendly tool for the search of lipid-binding regions in proteins and peptides. PMID:22016610
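
    The hydrophobic moment underlying the Eisenberg plot is obtained by summing per-residue hydrophobicities as vectors spaced by the helical twist (about 100 degrees per residue for an ideal alpha-helix). The sketch below computes the mean hydrophobicity and mean hydrophobic moment of a peptide; the scale values and peptide are illustrative and are not the exact Eisenberg or Heliquest scale.

      import math

      # illustrative per-residue hydrophobicities (a hypothetical subset of a
      # consensus scale, not the exact Eisenberg or Heliquest values)
      HYDROPHOBICITY = {"A": 0.62, "L": 1.06, "K": -1.5, "E": -0.74,
                        "F": 1.19, "S": -0.18, "W": 0.81, "I": 1.38}

      def mean_hydrophobicity(sequence):
          return sum(HYDROPHOBICITY[r] for r in sequence) / len(sequence)

      def hydrophobic_moment(sequence, delta_deg=100.0):
          """Mean hydrophobic moment <mu_H>, assuming an ideal alpha-helix with
          successive residues rotated by delta_deg degrees."""
          sin_sum = cos_sum = 0.0
          for n, residue in enumerate(sequence):
              h = HYDROPHOBICITY[residue]
              angle = math.radians(delta_deg * n)
              sin_sum += h * math.sin(angle)
              cos_sum += h * math.cos(angle)
          return math.hypot(sin_sum, cos_sum) / len(sequence)

      peptide = "LKELKEALKE"   # hypothetical amphipathic helix
      print(mean_hydrophobicity(peptide), hydrophobic_moment(peptide))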

  16. New user-friendly approach to obtain an Eisenberg plot and its use as a practical tool in protein sequence analysis.

    PubMed

    Keller, Rob C A

    2011-01-01

    The Eisenberg plot or hydrophobic moment plot methodology is one of the most frequently used methods of bioinformatics. Bioinformatics is more and more recognized as a helpful tool in Life Sciences in general, and recent developments in approaches recognizing lipid binding regions in proteins are promising in this respect. In this study a bioinformatics approach specialized in identifying lipid binding helical regions in proteins was used to obtain an Eisenberg plot. The validity of the Heliquest generated hydrophobic moment plot was checked and exemplified. This study indicates that the Eisenberg plot methodology can be transferred to another hydrophobicity scale and renders a user-friendly approach which can be utilized in routine checks in protein-lipid interaction and in protein and peptide lipid binding characterization studies. A combined approach seems to be advantageous and results in a powerful tool in the search of helical lipid-binding regions in proteins and peptides. The strength and limitations of the Eisenberg plot approach itself are discussed as well. The presented approach not only leads to a better understanding of the nature of the protein-lipid interactions but also provides a user-friendly tool for the search of lipid-binding regions in proteins and peptides.

  17. Complexity: an internet resource for analysis of DNA sequence complexity

    PubMed Central

    Orlov, Y. L.; Potapov, V. N.

    2004-01-01

    The search for DNA regions with low complexity is one of the pivotal tasks of modern structural analysis of complete genomes. Low complexity may result from strong inequality in nucleotide content (biased composition), from tandem or dispersed repeats, or from palindrome-hairpin structures, as well as from a combination of all these factors. Several numerical measures of textual complexity, including combinatorial and linguistic ones, together with complexity estimation using a modified Lempel–Ziv algorithm, have been implemented in a software tool called ‘Complexity’ (http://wwwmgs.bionet.nsc.ru/mgs/programs/low_complexity/). The software enables a user to search for low-complexity regions in long sequences, e.g. complete bacterial genomes or eukaryotic chromosomes. In addition, it estimates the complexity of groups of aligned sequences. PMID:15215465
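
    One of the complexity measures mentioned above, estimation via a Lempel–Ziv style factorization, can be sketched by counting the distinct phrases encountered while scanning a sequence left to right. The function below is a generic variant of that idea, not necessarily the modified Lempel–Ziv algorithm implemented in the Complexity tool.

      def lempel_ziv_complexity(sequence):
          """Number of distinct phrases in a simple left-to-right Lempel-Ziv
          factorization (a generic variant of the measure)."""
          phrases, i = set(), 0
          while i < len(sequence):
              j = i + 1
              # extend the current phrase until it has not been seen before
              while j <= len(sequence) and sequence[i:j] in phrases:
                  j += 1
              phrases.add(sequence[i:j])
              i = j
          return len(phrases)

      print(lempel_ziv_complexity("ATATATATATATATAT"))   # low-complexity repeat
      print(lempel_ziv_complexity("ACGTGCTAGCATTGAC"))   # more complex sequence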

  18. Using phylogenetically-informed annotation (PIA) to search for light-interacting genes in transcriptomes from non-model organisms.

    PubMed

    Speiser, Daniel I; Pankey, M Sabrina; Zaharoff, Alexander K; Battelle, Barbara A; Bracken-Grissom, Heather D; Breinholt, Jesse W; Bybee, Seth M; Cronin, Thomas W; Garm, Anders; Lindgren, Annie R; Patel, Nipam H; Porter, Megan L; Protas, Meredith E; Rivera, Ajna S; Serb, Jeanne M; Zigler, Kirk S; Crandall, Keith A; Oakley, Todd H

    2014-11-19

    Tools for high throughput sequencing and de novo assembly make the analysis of transcriptomes (i.e. the suite of genes expressed in a tissue) feasible for almost any organism. Yet a challenge for biologists is that it can be difficult to assign identities to gene sequences, especially from non-model organisms. Phylogenetic analyses are one useful method for assigning identities to these sequences, but such methods tend to be time-consuming because of the need to re-calculate trees for every gene of interest and each time a new data set is analyzed. In response, we employed existing tools for phylogenetic analysis to produce a computationally efficient, tree-based approach for annotating transcriptomes or new genomes that we term Phylogenetically-Informed Annotation (PIA), which places uncharacterized genes into pre-calculated phylogenies of gene families. We generated maximum likelihood trees for 109 genes from a Light Interaction Toolkit (LIT), a collection of genes that underlie the function or development of light-interacting structures in metazoans. To do so, we searched protein sequences predicted from 29 fully-sequenced genomes and built trees using tools for phylogenetic analysis in the Osiris package of Galaxy (an open-source workflow management system). Next, to rapidly annotate transcriptomes from organisms that lack sequenced genomes, we repurposed a maximum likelihood-based Evolutionary Placement Algorithm (implemented in RAxML) to place sequences of potential LIT genes on to our pre-calculated gene trees. Finally, we implemented PIA in Galaxy and used it to search for LIT genes in 28 newly-sequenced transcriptomes from the light-interacting tissues of a range of cephalopod mollusks, arthropods, and cubozoan cnidarians. Our new trees for LIT genes are available on the Bitbucket public repository ( http://bitbucket.org/osiris_phylogenetics/pia/ ) and we demonstrate PIA on a publicly-accessible web server ( http://galaxy-dev.cnsi.ucsb.edu/pia/ ). Our new trees for LIT genes will be a valuable resource for researchers studying the evolution of eyes or other light-interacting structures. We also introduce PIA, a high throughput method for using phylogenetic relationships to identify LIT genes in transcriptomes from non-model organisms. With simple modifications, our methods may be used to search for different sets of genes or to annotate data sets from taxa outside of Metazoa.

  19. How To Succeed in Promoting Your Web Site: The Impact of Search Engine Registration on Retrieval of a World Wide Web Site.

    ERIC Educational Resources Information Center

    Tunender, Heather; Ervin, Jane

    1998-01-01

    Character strings were planted in a World Wide Web site (Project Whistlestop) to test indexing and retrieval rates of five Web search tools (Lycos, infoseek, AltaVista, Yahoo, Excite). It was found that search tools indexed few of the planted character strings, none indexed the META descriptor tag, and only Excite indexed into the 3rd-4th site…

  20. YouTube as a potential learning tool to help distinguish tonic-clonic seizures from nonepileptic attacks.

    PubMed

    Muhammed, Louwai; Adcock, Jane E; Sen, Arjune

    2014-08-01

    Medical students are increasingly turning to the website YouTube as a learning resource. This study set out to determine whether the videos on YouTube accurately depict the type of seizures that a medical student may search for. Two consultant epileptologists independently assessed the top YouTube videos returned following searches for eight terms relating to different categories of seizures. The videos were rated for their technical quality, concordance of diagnosis with an epileptologist-assigned diagnosis, and efficacy as a learning tool for medical education. Of the 200 videos assessed, 106 (63%) met the inclusion criteria for further analysis. Technical quality was generally good and only interfered with the diagnostic process in 8.5% of the videos. Of the included videos, 40.6-46.2% were judged to depict the purported diagnosis with moderate agreement between raters (75% agreement, κ=0.50). Of the videos returned after searching "tonic-clonic seizure", 28.6-35.7% were judged to show nonepileptic seizures with almost perfect interrater agreement (92.9% agreement, κ=0.84). Of the videos returned following the search "pseudoseizure", 77.8-88.9% of videos were judged to show nonepileptic seizures with substantial agreement (88.9% agreement, κ=0.61). Across all search terms, 19.8-33% of videos were judged as potentially useful as a learning resource, with fair agreement between raters (75.5% agreement, κ=0.38). These findings suggest that the majority of videos on YouTube claiming to show specific seizure subtypes are inaccurate, and YouTube should not be recommended as a learning tool for students. However, a small group of videos provides excellent demonstrations of tonic-clonic and nonepileptic seizures, which could be used by an expert teacher to demonstrate the difference between epileptic and nonepileptic seizures. Copyright © 2014 Elsevier Inc. All rights reserved.
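
    The interrater agreement reported above combines percentage agreement with Cohen's kappa, which corrects observed agreement for the agreement expected by chance. The sketch below computes kappa for two raters' categorical labels; the example ratings are made up and are not the study's data.

      def cohens_kappa(rater_a, rater_b):
          """Cohen's kappa: observed agreement corrected for chance agreement."""
          n = len(rater_a)
          labels = set(rater_a) | set(rater_b)
          observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
          expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
          return (observed - expected) / (1 - expected)

      # hypothetical ratings ("E" = epileptic, "N" = nonepileptic), not the study data
      rater1 = ["E", "E", "N", "E", "N", "N", "E", "N"]
      rater2 = ["E", "N", "N", "E", "N", "E", "E", "N"]
      print(cohens_kappa(rater1, rater2))   # 0.5 for this toy example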

  1. Freva - Freie Univ Evaluation System Framework for Scientific Infrastructures in Earth System Modeling

    NASA Astrophysics Data System (ADS)

    Kadow, Christopher; Illing, Sebastian; Kunst, Oliver; Schartner, Thomas; Kirchner, Ingo; Rust, Henning W.; Cubasch, Ulrich; Ulbrich, Uwe

    2016-04-01

    The Freie Univ Evaluation System Framework (Freva - freva.met.fu-berlin.de) is a software infrastructure for standardized data and tool solutions in Earth system science. Freva runs on high performance computers to handle customizable evaluation systems of research projects, institutes or universities. It combines different software technologies into one common hybrid infrastructure, including all features present in the shell and web environment. The database interface satisfies the international standards provided by the Earth System Grid Federation (ESGF). Freva indexes different data projects into one common search environment by storing the meta data information of the self-describing model, reanalysis and observational data sets in a database. This meta data system, with its advanced but easy-to-handle search tool, supports users, developers and their plugins in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. The integrated web-shell (shellinabox) adds a degree of freedom in the choice of the working environment and can be used as a gate to the research project's HPC. Plugins are able to integrate their results, e.g. post-processed data, into the user's database. This allows, for example, post-processing plugins to feed statistical analysis plugins, which fosters an active exchange between plugin developers of a research project. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a database. Configurations and results of the tools can be shared among scientists via the shell or web system. Therefore, plugged-in tools benefit from transparency and reproducibility. Furthermore, if configurations match while starting an evaluation plugin, the system suggests using results already produced by other users, saving CPU/h, I/O, disk space and time. The efficient interaction between different technologies improves the Earth system modeling science framed by Freva.

  2. Graphical Representations of Electronic Search Patterns.

    ERIC Educational Resources Information Center

    Lin, Xia; And Others

    1991-01-01

    Discussion of search behavior in electronic environments focuses on the development of GRIP (Graphic Representor of Interaction Patterns), a graphing tool based on HyperCard that produces graphic representations of search patterns. Search state spaces are explained, and forms of data available from electronic searches are described. (34…

  3. DMET-analyzer: automatic analysis of Affymetrix DMET data.

    PubMed

    Guzzi, Pietro Hiram; Agapito, Giuseppe; Di Martino, Maria Teresa; Arbitrio, Mariamena; Tassone, Pierfrancesco; Tagliaferri, Pierosandro; Cannataro, Mario

    2012-10-05

    Clinical Bioinformatics is currently growing and is based on the integration of clinical and omics data aiming at the development of personalized medicine. Thus, the introduction of novel technologies able to investigate the relationship between clinical states and biological machinery may help the development of this field. For instance, the Affymetrix DMET platform (drug metabolism enzymes and transporters) is able to study the relationship between variation in patients' genomes and drug metabolism by detecting SNPs (single nucleotide polymorphisms) in genes related to drug metabolism. This may allow, for instance, finding genetic variants in patients who present different drug responses in pharmacogenomic and clinical studies. Despite this, open-source algorithms and tools for the analysis of DMET data are currently lacking. Existing software tools for DMET data generally allow only the preprocessing of binary data (e.g. the DMET-Console provided by Affymetrix) and simple data analysis operations, but do not allow testing the association of the presence of SNPs with the response to drugs. We developed DMET-Analyzer, a tool for automatic association analysis between variation in patient genomes and the clinical conditions of patients, i.e. their different responses to drugs. The proposed system allows: (i) automation of the DMET-SNP data analysis workflow, avoiding the use of multiple tools; (ii) automatic annotation of DMET-SNP data and searches in existing SNP databases (e.g. dbSNP); and (iii) association of SNPs with pathways through searches in PharmGKB, a major knowledge base for pharmacogenomic studies. DMET-Analyzer has a simple graphical user interface that allows users (doctors/biologists) to upload and analyse DMET files produced by the Affymetrix DMET-Console in an interactive way. The effectiveness and ease of use of DMET-Analyzer are demonstrated through different case studies regarding the analysis of clinical datasets produced at the University Hospital of Catanzaro, Italy. DMET-Analyzer is a novel tool able to automatically analyse data produced by the DMET platform in case-control association studies. Using this tool, users avoid the manual execution of multiple statistical tests, reducing possible errors and the time needed for a whole experiment. Moreover, annotations and direct links to external databases may increase the biological knowledge extracted. The system is freely available for academic purposes at: https://sourceforge.net/projects/dmetanalyzer/files/
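
    The kind of case-control association the tool automates can be illustrated with Fisher's exact test on a 2x2 genotype-by-response table. The sketch below is a generic version of such a test; the counts are hypothetical and DMET-Analyzer's actual workflow, annotation and multiple-testing handling are more extensive.

      from scipy.stats import fisher_exact

      def snp_association(carrier_responders, carrier_nonresponders,
                          noncarrier_responders, noncarrier_nonresponders):
          """Fisher's exact test on a 2x2 genotype-by-response contingency table."""
          table = [[carrier_responders, carrier_nonresponders],
                   [noncarrier_responders, noncarrier_nonresponders]]
          odds_ratio, p_value = fisher_exact(table)
          return odds_ratio, p_value

      # hypothetical counts for one SNP in a case-control comparison
      print(snp_association(18, 7, 9, 21))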

  4. Search and dissemination in data processing. [searches performed for Aviation Technology Newsletter

    NASA Technical Reports Server (NTRS)

    Gold, C. H.; Moore, A. M.; Dodd, B.; Dittmar, V.

    1974-01-01

    Manual retrieval methods were used to complete 54 searches of interest for the General Aviation Newsletter. Subjects of search ranged from television transmission to machine tooling, Apollo moon landings, electronic equipment, and aerodynamics studies.

  5. [Advanced online search techniques and dedicated search engines for physicians].

    PubMed

    Nahum, Yoav

    2008-02-01

    In recent years search engines have become an essential tool in the work of physicians. This article will review advanced search techniques from the world of information specialists, as well as some advanced search engine operators that may help physicians improve their online search capabilities, and maximize the yield of their searches. This article also reviews popular dedicated scientific and biomedical literature search engines.

  6. A neotropical Miocene pollen database employing image-based search and semantic modeling.

    PubMed

    Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W; Jaramillo, Carlos; Shyu, Chi-Ren

    2014-08-01

    Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery.

  7. Is Google Trends a reliable tool for digital epidemiology? Insights from different clinical settings.

    PubMed

    Cervellin, Gianfranco; Comelli, Ivan; Lippi, Giuseppe

    2017-09-01

    Internet-derived information has recently been recognized as a valuable tool for epidemiological investigation. Google Trends, a Google Inc. portal, generates data on geographical and temporal patterns according to specified keywords. The aim of this study was to compare the reliability of Google Trends in different clinical settings, both for common diseases with lower media coverage and for less common diseases attracting major media coverage. We carried out a search in Google Trends using the keywords "renal colic", "epistaxis", and "mushroom poisoning", selected on the basis of available and reliable epidemiological data. Besides this search, we carried out a second search for three clinical conditions (i.e., "meningitis", "Legionella Pneumophila pneumonia", and "Ebola fever"), which recently received major focus from the Italian media. In our analysis, no correlation was found between data captured from Google Trends and the epidemiology of renal colic, epistaxis and mushroom poisoning. Only when searching for the term "mushroom" alone did Google Trends generate a seasonal pattern that almost overlaps with the epidemiological profile, but this was probably mostly due to searches related to harvesting and cooking rather than to poisoning. The Google Trends data also failed to reflect the geographical and temporal patterns of disease for meningitis, Legionella Pneumophila pneumonia and Ebola fever. The results of our study confirm that Google Trends has modest reliability for defining the epidemiology of relatively common diseases with minor media coverage, or relatively rare diseases with higher audience. Overall, Google Trends seems to be more influenced by media clamor than by true epidemiological burden. Copyright © 2017 Ministry of Health, Saudi Arabia. Published by Elsevier Ltd. All rights reserved.

  8. CLAST: CUDA implemented large-scale alignment search tool.

    PubMed

    Yano, Masahiro; Mori, Hiroshi; Akiyama, Yutaka; Yamada, Takuji; Kurokawa, Ken

    2014-12-11

    Metagenomics is a powerful methodology to study microbial communities, but it is highly dependent on nucleotide sequence similarity searching against sequence databases. Metagenomic analyses with next-generation sequencing technologies produce enormous numbers of reads from microbial communities, and many reads are derived from microbes whose genomes have not yet been sequenced, limiting the usefulness of existing sequence similarity search tools. Therefore, there is a clear need for a sequence similarity search tool that can rapidly detect weak similarity in large datasets. We developed a tool, which we named CLAST (CUDA implemented large-scale alignment search tool), that enables analyses of millions of reads and thousands of reference genome sequences, and runs on NVIDIA Fermi architecture graphics processing units. CLAST has four main advantages over existing alignment tools. First, CLAST was capable of identifying sequence similarities ~80.8 times faster than BLAST and 9.6 times faster than BLAT. Second, CLAST executes global alignment as the default (local alignment is also an option), enabling CLAST to assign reads to taxonomic and functional groups based on evolutionarily distant nucleotide sequences with high accuracy. Third, CLAST does not need a preprocessed sequence database like Burrows-Wheeler Transform-based tools, and this enables CLAST to incorporate large, frequently updated sequence databases. Fourth, CLAST requires <2 GB of main memory, making it possible to run CLAST on a standard desktop computer or server node. CLAST achieved very high speed (similar to the Burrows-Wheeler Transform-based Bowtie 2 for long reads) and sensitivity (equal to BLAST, BLAT, and FR-HIT) without the need for extensive database preprocessing or a specialized computing platform. Our results demonstrate that CLAST has the potential to be one of the most powerful and realistic approaches to analyze the massive amount of sequence data from next-generation sequencing technologies.
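
    CLAST itself is a GPU tool; the sketch below only illustrates the global-versus-local alignment distinction the abstract highlights, using Biopython's PairwiseAligner rather than CLAST, with placeholder sequences and scoring parameters.

        # Illustration of global vs. local pairwise alignment (the distinction behind
        # CLAST's default mode), using Biopython's PairwiseAligner; not CLAST itself.
        from Bio import Align

        read = "ACGTACGTTAGC"
        reference = "TTACGTACGATAGCAA"

        aligner = Align.PairwiseAligner()
        aligner.match_score = 1
        aligner.mismatch_score = -1
        aligner.open_gap_score = -2
        aligner.extend_gap_score = -0.5

        for mode in ("global", "local"):
            aligner.mode = mode   # global alignment spans both sequences end to end
            best = aligner.align(read, reference)[0]
            print(mode, best.score)
            print(best)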

  9. Clinical Decision Support Tools for Selecting Interventions for Patients with Disabling Musculoskeletal Disorders: A Scoping Review.

    PubMed

    Gross, Douglas P; Armijo-Olivo, Susan; Shaw, William S; Williams-Whitt, Kelly; Shaw, Nicola T; Hartvigsen, Jan; Qin, Ziling; Ha, Christine; Woodhouse, Linda J; Steenstra, Ivan A

    2016-09-01

    Purpose We aimed to identify and inventory clinical decision support (CDS) tools for helping front-line staff select interventions for patients with musculoskeletal (MSK) disorders. Methods We used Arksey and O'Malley's scoping review framework which progresses through five stages: (1) identifying the research question; (2) identifying relevant studies; (3) selecting studies for analysis; (4) charting the data; and (5) collating, summarizing and reporting results. We considered computer-based, and other available tools, such as algorithms, care pathways, rules and models. Since this research crosses multiple disciplines, we searched health care, computing science and business databases. Results Our search resulted in 4605 manuscripts. Titles and abstracts were screened for relevance. The reliability of the screening process was high with an average percentage of agreement of 92.3 %. Of the located articles, 123 were considered relevant. Within this literature, there were 43 CDS tools located. These were classified into 3 main areas: computer-based tools/questionnaires (n = 8, 19 %), treatment algorithms/models (n = 14, 33 %), and clinical prediction rules/classification systems (n = 21, 49 %). Each of these areas and the associated evidence are described. The state of evidentiary support for CDS tools is still preliminary and lacks external validation, head-to-head comparisons, or evidence of generalizability across different populations and settings. Conclusions CDS tools, especially those employing rapidly advancing computer technologies, are under development and of potential interest to health care providers, case management organizations and funders of care. Based on the results of this scoping review, we conclude that these tools, models and systems should be subjected to further validation before they can be recommended for large-scale implementation for managing patients with MSK disorders.

  10. Monitoring the ability to deliver care in low- and middle-income countries: a systematic review of health facility assessment tools

    PubMed Central

    Nickerson, Jason W; Adams, Orvill; Attaran, Amir; Hatcher-Roberts, Janet; Tugwell, Peter

    2015-01-01

    Introduction Health facility assessments are an essential instrument for health system strengthening in low- and middle-income countries. These assessments are used to conduct health facility censuses to assess the capacity of the health system to deliver health care and to identify gaps in the coverage of health services. Despite the valuable role of these assessments, there are currently no minimum standards or frameworks for these tools. Methods We used a structured keyword search of the MEDLINE, EMBASE and HealthStar databases and searched the websites of the World Health Organization, the World Bank and the International Health Facilities Assessment Network to locate all available health facility assessment tools intended for use in low- and middle-income countries. We parsed the various assessment tools to identify similarities between them, which we catalogued into a framework comprising 41 assessment domains. Results We identified 10 health facility assessment tools meeting our inclusion criteria, all of which were included in our analysis. We found substantial variation in the comprehensiveness of the included tools, with the assessments containing indicators in 13 to 33 (median: 25.5) of the 41 assessment domains included in our framework. None of the tools collected data on all 41 of the assessment domains we identified. Conclusions Not only do a large number of health facility assessment tools exist, but the data they collect and methods they employ are very different. This certainly limits the comparability of the data between different countries’ health systems and probably creates blind spots that impede efforts to strengthen those systems. Agreement is needed on the essential elements of health facility assessments to guide the development of specific indicators and for refining existing instruments. PMID:24895350

  11. Vega-Constellation Tools to Analize Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Savorskiy, V.; Loupian, E.; Balashov, I.; Kashnitskii, A.; Konstantinova, A.; Tolpin, V.; Uvarov, I.; Kuznetsov, O.; Maklakov, S.; Panova, O.; Savchenko, E.

    2016-06-01

    Creating high-performance tools to manage massive hyperspectral data (HSD) arrays is a pressing challenge, especially when disparate information resources must be handled. To address this problem, the present work develops tools for working with HSD in a distributed information infrastructure, i.e. primarily for use in remote-access mode. The main feature of the presented approach is the development of remotely accessed services that allow users both to search and retrieve HSD sets and to analyze and process HSD remotely. These services were implemented within the VEGA-Constellation family of information systems, which were extended with tools supporting the study of particular classes of natural objects through their HSD. The developed tools provide capabilities for analyzing objects such as vegetation canopies (forest and agricultural), open soils, forest fires, and areas of thermal anomalies. The software tools were successfully tested on Hyperion data sets.

  12. A literature review of the cardiovascular risk-assessment tools: applicability among Asian population.

    PubMed

    Liau, Siow Yen; Mohamed Izham, M I; Hassali, M A; Shafie, A A

    2010-01-01

    Cardiovascular diseases, the main causes of hospitalisations and death globally, have put an enormous economic burden on the healthcare system. Several risk factors are associated with the occurrence of cardiovascular events. At the heart of efficient prevention of cardiovascular disease is the concept of risk assessment. This paper aims to review the available cardiovascular risk-assessment tools and their applicability in predicting cardiovascular risk among Asian populations. A systematic search was performed using keywords as MeSH terms and Boolean operators. A total of 25 risk-assessment tools were identified. Of these, only two risk-assessment tools (8%) were derived from an Asian population. These risk-assessment tools differ in various ways, including characteristics of the derivation sample, type of study, time frame of follow-up, end points, statistical analysis and risk factors included. Very few cardiovascular risk-assessment tools were developed in Asian populations. In order to accurately predict the cardiovascular risk of our population, there is a need to develop a risk-assessment tool based on local epidemiological data.

  13. Water Pollution Search | ECHO | US EPA

    EPA Pesticide Factsheets

    The Water Pollution Search within the Water Pollutant Loading Tool gives users options to search for pollutant loading information from Discharge Monitoring Report (DMR) and Toxic Release Inventory (TRI) data.

  14. Genetic Testing Registry

    MedlinePlus

  15. Helping Students Choose Tools To Search the Web.

    ERIC Educational Resources Information Center

    Cohen, Laura B.; Jacobson, Trudi E.

    2000-01-01

    Describes areas where faculty members can aid students in making intelligent use of the Web in their research. Differentiates between subject directories and search engines. Describes an engine's three components: spider, index, and search engine. Outlines two misconceptions: that Yahoo! is a search engine and that search engines contain all the…

  16. PolyPhred analysis software for mutation detection from fluorescence-based sequence data.

    PubMed

    Montgomery, Kate T; Iartchouck, Oleg; Li, Li; Loomis, Stephanie; Obourn, Vanessa; Kucherlapati, Raju

    2008-10-01

    The ability to search for genetic variants that may be related to human disease is one of the most exciting consequences of the availability of the sequence of the human genome. Large cohorts of individuals exhibiting certain phenotypes can be studied and candidate genes resequenced. However, the challenge of analyzing sequence data from many individuals with accuracy, speed, and economy is great. This unit describes one set of software tools: Phred, Phrap, PolyPhred, and Consed. Coverage includes the advantages and disadvantages of these analysis tools, details for obtaining and using the software, and the results one may expect. The software is being continually updated to permit further automation of mutation analysis. Currently, however, at least some manual review is required if one wishes to identify 100% of the variants in a sample set.

  17. Subject Specific Databases: A Powerful Research Tool

    ERIC Educational Resources Information Center

    Young, Terrence E., Jr.

    2004-01-01

    Subject specific databases, or vortals (vertical portals), are databases that provide highly detailed research information on a particular topic. They are the smallest, most focused search tools on the Internet and, in recent years, they've been on the rise. Currently, more of the so-called "mainstream" search engines, subject directories, and…

  18. ProtaBank: A repository for protein design and engineering data.

    PubMed

    Wang, Connie Y; Chang, Paul M; Ary, Marie L; Allen, Benjamin D; Chica, Roberto A; Mayo, Stephen L; Olafson, Barry D

    2018-03-25

    We present ProtaBank, a repository for storing, querying, analyzing, and sharing protein design and engineering data in an actively maintained and updated database. ProtaBank provides a format to describe and compare all types of protein mutational data, spanning a wide range of properties and techniques. It features a user-friendly web interface and programming layer that streamlines data deposition and allows for batch input and queries. The database schema design incorporates a standard format for reporting protein sequences and experimental data that facilitates comparison of results across different data sets. A suite of analysis and visualization tools are provided to facilitate discovery, to guide future designs, and to benchmark and train new predictive tools and algorithms. ProtaBank will provide a valuable resource to the protein engineering community by storing and safeguarding newly generated data, allowing for fast searching and identification of relevant data from the existing literature, and exploring correlations between disparate data sets. ProtaBank invites researchers to contribute data to the database to make it accessible for search and analysis. ProtaBank is available at https://protabank.org. © 2018 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.

  19. SpolSimilaritySearch - A web tool to compare and search similarities between spoligotypes of Mycobacterium tuberculosis complex.

    PubMed

    Couvin, David; Zozio, Thierry; Rastogi, Nalin

    2017-07-01

    Spoligotyping is one of the most commonly used polymerase chain reaction (PCR)-based methods for identification and study of genetic diversity of Mycobacterium tuberculosis complex (MTBC). Despite its known limitations if used alone, the methodology is particularly useful when used in combination with other methods such as mycobacterial interspersed repetitive units - variable number of tandem DNA repeats (MIRU-VNTRs). At a worldwide scale, spoligotyping has allowed identification of information on 103,856 MTBC isolates (corresponding to 98,049 clustered strains plus 5,807 unique isolates from 169 countries of patient origin) contained within the SITVIT2 proprietary database of the Institut Pasteur de la Guadeloupe. The SpolSimilaritySearch web-tool described herein (available at: http://www.pasteur-guadeloupe.fr:8081/SpolSimilaritySearch) incorporates a similarity search algorithm allowing users to get a complete overview of similar spoligotype patterns (with information on presence or absence of 43 spacers) in the aforementioned worldwide database. This tool allows one to analyze spread and evolutionary patterns of MTBC by comparing similar spoligotype patterns, to distinguish between widespread, specific and/or confined patterns, as well as to pinpoint patterns with large deleted blocks, which play an intriguing role in the genetic epidemiology of M. tuberculosis. Finally, the SpolSimilaritySearch tool also provides the country distribution patterns for each queried spoligotype. Copyright © 2017 Elsevier Ltd. All rights reserved.
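
    Spoligotype patterns are presence/absence calls over 43 spacers, so ranking database patterns by similarity to a query can be sketched very simply. The scoring below (fraction of matching spacers) and the example patterns are illustrative only; they are not necessarily the algorithm or data used by SpolSimilaritySearch.

        # Sketch: ranking 43-spacer spoligotype patterns by similarity to a query.
        # Hypothetical patterns; simple spacer-agreement score for illustration.
        def similarity(a: str, b: str) -> float:
            """Fraction of the 43 spacer positions on which two patterns agree."""
            assert len(a) == len(b) == 43
            return sum(x == y for x, y in zip(a, b)) / 43

        database = {
            "pattern_A": "1" * 43,             # hypothetical: all 43 spacers present
            "pattern_B": "0" * 34 + "1" * 9,   # hypothetical: only the last 9 spacers present
        }
        query = "1" * 33 + "00" + "1" * 8      # hypothetical query pattern

        for name, pattern in sorted(database.items(),
                                    key=lambda kv: similarity(query, kv[1]),
                                    reverse=True):
            print(f"{name}: {similarity(query, pattern):.2f}")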

  20. Custom Search Engines: Tools & Tips

    ERIC Educational Resources Information Center

    Notess, Greg R.

    2008-01-01

    Few have the resources to build a Google or Yahoo! from scratch. Yet anyone can build a search engine based on a subset of the large search engines' databases. Use Google Custom Search Engine or Yahoo! Search Builder or any of the other similar programs to create a vertical search engine targeting sites of interest to users. The basic steps to…

  1. The Chinchilla Research Resource Database: resource for an otolaryngology disease model

    PubMed Central

    Shimoyama, Mary; Smith, Jennifer R.; De Pons, Jeff; Tutaj, Marek; Khampang, Pawjai; Hong, Wenzhou; Erbe, Christy B.; Ehrlich, Garth D.; Bakaletz, Lauren O.; Kerschner, Joseph E.

    2016-01-01

    The long-tailed chinchilla (Chinchilla lanigera) is an established animal model for diseases of the inner and middle ear, among others. In particular, chinchilla is commonly used to study diseases involving viral and bacterial pathogens and polymicrobial infections of the upper respiratory tract and the ear, such as otitis media. The value of the chinchilla as a model for human diseases prompted the sequencing of its genome in 2012 and the more recent development of the Chinchilla Research Resource Database (http://crrd.mcw.edu) to provide investigators with easy access to relevant datasets and software tools to enhance their research. The Chinchilla Research Resource Database contains a complete catalog of genes for chinchilla and, for comparative purposes, human. Chinchilla genes can be viewed in the context of their genomic scaffold positions using the JBrowse genome browser. In contrast to the corresponding records at NCBI, individual gene reports at CRRD include functional annotations for Disease, Gene Ontology (GO) Biological Process, GO Molecular Function, GO Cellular Component and Pathway assigned to chinchilla genes based on annotations from the corresponding human orthologs. Data can be retrieved via keyword and gene-specific searches. Lists of genes with similar functional attributes can be assembled by leveraging the hierarchical structure of the Disease, GO and Pathway vocabularies through the Ontology Search and Browser tool. Such lists can then be further analyzed for commonalities using the Gene Annotator (GA) Tool. All data in the Chinchilla Research Resource Database is freely accessible and downloadable via the CRRD FTP site or using the download functions available in the search and analysis tools. The Chinchilla Research Resource Database is a rich resource for researchers using, or considering the use of, chinchilla as a model for human disease. Database URL: http://crrd.mcw.edu PMID:27173523

  2. Fungal genome resources at NCBI.

    PubMed

    Robbertse, B; Tatusova, T

    2011-09-01

    The National Center for Biotechnology Information (NCBI) is well known for the nucleotide sequence archive, GenBank, and the sequence analysis tool BLAST. However, NCBI integrates many types of biomolecular data from a variety of sources and makes them available to the scientific community as interactive web resources as well as organized releases of bulk data. These tools are available to explore and compare fungal genomes. Searching all databases with Fungi [organism] at http://www.ncbi.nlm.nih.gov/ is the quickest way to find resources of interest with fungal entries. Some tools, though, are resource specific and can be accessed indirectly from a particular database in the Entrez system. These include graphical viewers and comparative analysis tools such as TaxPlot, TaxMap and UniGene DDD (found via the UniGene Homepage). Gene and BioProject pages also serve as portals to external data such as community annotation websites, BioGrid and UniProt. There are many different ways of accessing genomic data at NCBI. Depending on the focus and goal of research projects or the level of interest, a user would select a particular route for accessing genomic databases and resources. This review article describes methods of accessing fungal genome data and provides examples that illustrate the use of analysis tools.
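
    The Fungi [organism] Entrez search described above can also be issued programmatically through NCBI E-utilities. A minimal sketch using Biopython's Entrez module follows; the target database and e-mail address are placeholders, and the review itself covers the interactive web routes rather than this scripted one.

        # Sketch: the "Fungi[Organism]" Entrez query issued through NCBI E-utilities
        # via Biopython. Database choice and e-mail address are placeholders.
        from Bio import Entrez

        Entrez.email = "you@example.org"   # NCBI asks for a contact address

        handle = Entrez.esearch(db="genome", term="Fungi[Organism]", retmax=20)
        record = Entrez.read(handle)
        handle.close()

        print("Total fungal records:", record["Count"])
        print("First IDs:", record["IdList"])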

  3. Structure-activity relationships of pyrethroid insecticides. Part 2. The use of molecular dynamics for conformation searching and average parameter calculation

    NASA Astrophysics Data System (ADS)

    Hudson, Brian D.; George, Ashley R.; Ford, Martyn G.; Livingstone, David J.

    1992-04-01

    Molecular dynamics simulations have been performed on a number of conformationally flexible pyrethroid insecticides. The results indicate that molecular dynamics is a suitable tool for conformational searching of small molecules given suitable simulation parameters. The structures derived from the simulations are compared with the static conformation used in a previous study. Various physicochemical parameters have been calculated for a set of conformations selected from the simulations using multivariate analysis. The averaged values of the parameters over the selected set (and the factors derived from them) are compared with the single conformation values used in the previous study.

  4. Geena 2, improved automated analysis of MALDI/TOF mass spectra.

    PubMed

    Romano, Paolo; Profumo, Aldo; Rocco, Mattia; Mangerini, Rosa; Ferri, Fabio; Facchiano, Angelo

    2016-03-02

    Mass spectrometry (MS) is producing high volumes of data supporting oncological sciences, especially for translational research. Most of the related elaborations can be carried out by combining existing tools at different levels, but little is currently available for the automation of the fundamental steps. For the analysis of MALDI/TOF spectra, a number of pre-processing steps are required, including joining of isotopic abundances for a given molecular species, normalization of signals against an internal standard, background noise removal, averaging multiple spectra from the same sample, and aligning spectra from different samples. In this paper, we present Geena 2, a public software tool for the automated execution of these pre-processing steps for MALDI/TOF spectra. Geena 2 has been developed in a Linux-Apache-MySQL-PHP web development environment, with scripts in PHP and Perl. Input and output are managed as simple formats that can be consumed by any database system and spreadsheet software. Input data may also be stored in a MySQL database. Processing methods are based on original heuristic algorithms which are introduced in the paper. Three simple and intuitive web interfaces are available: the Standard Search Interface, which allows complete control over all parameters; the Bright Search Interface, which leaves the user the possibility to tune parameters for alignment of spectra; and the Quick Search Interface, which limits the number of parameters to a minimum by using default values for the majority of parameters. Geena 2 has been utilized, in conjunction with a statistical analysis tool, in three published experimental works: a proteomic study on the effects of long-term cryopreservation on the low molecular weight fraction of the serum proteome, and two retrospective serum proteomic studies, one on the risk of developing breast cancer in patients affected by gross cystic disease of the breast (GCDB) and the other on the identification of a predictor of breast cancer mortality following breast cancer surgery, whose results were validated by ELISA, a completely independent method. Geena 2 is a public tool for the automated pre-processing of MS data originated by MALDI/TOF instruments, with a simple and intuitive web interface. It is now under active development for the inclusion of further filtering options and for the adoption of standard formats for MS spectra.
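
    Two of the pre-processing steps listed above (normalization against an internal standard and averaging of replicate spectra) can be sketched generically with numpy. This is not Geena 2's heuristic algorithm, only an illustration under the assumption that spectra have already been resampled onto a common m/z grid; the internal-standard window is hypothetical.

        # Generic sketch of internal-standard normalisation and replicate averaging
        # for MALDI/TOF spectra on a common m/z grid. Not Geena 2's actual method.
        import numpy as np

        mz = np.linspace(1000, 4000, 3000)                      # common m/z axis
        replicates = [np.random.rand(3000) for _ in range(3)]   # stand-in intensity vectors

        standard_window = (mz > 2460) & (mz < 2470)             # hypothetical internal-standard region

        def normalise(intensities: np.ndarray) -> np.ndarray:
            """Scale a spectrum so its internal-standard peak area equals 1."""
            return intensities / intensities[standard_window].sum()

        averaged = np.mean([normalise(r) for r in replicates], axis=0)
        print(averaged.shape)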

  5. An information retrieval system for computerized patient records in the context of a daily hospital practice: the example of the Léon Bérard Cancer Center (France).

    PubMed

    Biron, P; Metzger, M H; Pezet, C; Sebban, C; Barthuet, E; Durand, T

    2014-01-01

    A full-text search tool was introduced into the daily practice of the Léon Bérard Center (France), a health care facility devoted to the treatment of cancer. This tool was integrated into the hospital information system by the IT department, which had been granted full autonomy to improve the system. The objective was to describe the development and various uses of a tool for full-text search of computerized patient records. The technology is based on Solr, an open-source search engine. It is a web-based application that processes HTTP requests and returns HTTP responses. A data processing pipeline that retrieves data from different repositories, normalizes and cleans it, and publishes it to Solr was integrated into the information system of the Léon Bérard Center. The IT department also developed user interfaces to allow users to access the search engine within the computerized medical record of the patient. From January to May 2013, 500 queries were launched per month by an average of 140 different users. Several uses of the tool were described, as follows: medical management of patients, medical research, and improving the traceability of medical care in medical records. The sensitivity of the tool for detecting the medical records of patients diagnosed with both breast cancer and diabetes was 83.0%, and its positive predictive value was 48.7% (gold standard: manual screening by a clinical research assistant). The project demonstrates that the introduction of full-text search tools allowed practitioners to use unstructured medical information for various purposes.
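
    Solr exposes a standard HTTP /select query API, so the kind of request such a patient-record search issues can be sketched briefly. The host, core name and field names below are hypothetical; the Léon Bérard system's actual schema is not described in the abstract.

        # Sketch of a Solr full-text query over indexed patient documents. Host,
        # core name ("patient_records") and field names are hypothetical; only the
        # /select query API itself is standard Solr.
        import requests

        SOLR_URL = "http://localhost:8983/solr/patient_records/select"

        params = {
            "q": 'text:("breast cancer" AND diabetes)',  # full-text clause
            "rows": 20,
            "fl": "patient_id,doc_date,score",
            "wt": "json",
        }

        response = requests.get(SOLR_URL, params=params, timeout=10)
        hits = response.json()["response"]
        print("matching documents:", hits["numFound"])
        for doc in hits["docs"]:
            print(doc)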

  6. Annotare—a tool for annotating high-throughput biomedical investigations and resulting data

    PubMed Central

    Shankar, Ravi; Parkinson, Helen; Burdett, Tony; Hastings, Emma; Liu, Junmin; Miller, Michael; Srinivasa, Rashmi; White, Joseph; Brazma, Alvis; Sherlock, Gavin; Stoeckert, Christian J.; Ball, Catherine A.

    2010-01-01

    Summary: Computational methods in molecular biology will increasingly depend on standards-based annotations that describe biological experiments in an unambiguous manner. Annotare is a software tool that enables biologists to easily annotate their high-throughput experiments, biomaterials and data in a standards-compliant way that facilitates meaningful search and analysis. Availability and Implementation: Annotare is available from http://code.google.com/p/annotare/ under the terms of the open-source MIT License (http://www.opensource.org/licenses/mit-license.php). It has been tested on both Mac and Windows. Contact: rshankar@stanford.edu PMID:20733062

  7. Association of Socioeconomic and Geographic Factors With Google Trends for Tanning and Sunscreen.

    PubMed

    Seth, Divya; Gittleman, Haley; Barnholtz-Sloan, Jill; Bordeaux, Jeremy S

    2018-02-01

    Internet search trends are used to track both infectious diseases and noncommunicable conditions. The authors sought to characterize Google Trends search volume index (SVI) for the terms "sunscreen" and tanning ("tanning salon" and "tanning bed") in the United States from 2010 to 2015 and analyze association with educational attainment, average income, and percent white data by state. SVI is search frequency data relative to total search volume. Analysis of variance, univariate, and multivariate analyses were performed to assess seasonal variations in SVI and the association of state-level SVI with state latitudes and census data. Hawaii had the highest SVI for sunscreen searches, whereas Alaska had the lowest. West Virginia had the highest SVI for tanning searches, whereas Hawaii had the lowest. There were significant differences between seasonal SVI for sunscreen and tanning searches (p < .001). Sunscreen SVI by state was correlated with an increase in educational attainment and average income, and a decrease in latitude (p < .05) in a multivariate model. Tanning SVI was correlated with a decrease in educational attainment and average income, and an increase in latitude (p < .05). Internet search trends for sunscreen and tanning are influenced by socioeconomic factors, and could be a tool for skin-related public health.

  8. Text mining and its potential applications in systems biology.

    PubMed

    Ananiadou, Sophia; Kell, Douglas B; Tsujii, Jun-ichi

    2006-12-01

    With biomedical literature increasing at a rate of several thousand papers per week, it is impossible to keep abreast of all developments; therefore, automated means to manage the information overload are required. Text mining techniques, which involve the processes of information retrieval, information extraction and data mining, provide a means of solving this. By adding meaning to text, these techniques produce a more structured analysis of textual knowledge than simple word searches, and can provide powerful tools for the production and analysis of systems biology models.

  9. Systematic review of methods for quantifying teamwork in the operating theatre

    PubMed Central

    Marshall, D.; Sykes, M.; McCulloch, P.; Shalhoub, J.; Maruthappu, M.

    2018-01-01

    Background Teamwork in the operating theatre is becoming increasingly recognized as a major factor in clinical outcomes. Many tools have been developed to measure teamwork. Most fall into two categories: self‐assessment by theatre staff and assessment by observers. A critical and comparative analysis of the validity and reliability of these tools is lacking. Methods MEDLINE and Embase databases were searched following PRISMA guidelines. Content validity was assessed using measurements of inter‐rater agreement, predictive validity and multisite reliability, and interobserver reliability using statistical measures of inter‐rater agreement and reliability. Quantitative meta‐analysis was deemed unsuitable. Results Forty‐eight articles were selected for final inclusion; self‐assessment tools were used in 18 and observational tools in 28, and there were two qualitative studies. Self‐assessment of teamwork by profession varied with the profession of the assessor. The most robust self‐assessment tool was the Safety Attitudes Questionnaire (SAQ), although this failed to demonstrate multisite reliability. The most robust observational tool was the Non‐Technical Skills (NOTECHS) system, which demonstrated both test–retest reliability (P > 0·09) and interobserver reliability (Rwg = 0·96). Conclusion Self‐assessment of teamwork by the theatre team was influenced by professional differences. Observational tools, when used by trained observers, circumvented this.

  10. National Center for Biotechnology Information

    MedlinePlus

  11. DMT-TAFM: a data mining tool for technical analysis of futures market

    NASA Astrophysics Data System (ADS)

    Stepanov, Vladimir; Sathaye, Archana

    2002-03-01

    Technical analysis of financial markets describes many patterns of market behavior. For practical use, all these descriptions need to be adjusted for each particular trading session. In this paper, we develop a data mining tool for technical analysis of the futures markets (DMT-TAFM), which dynamically generates rules based on the notion of price pattern similarity. The tool consists of three main components. The first component provides visualization of data series on a chart with different ranges, scales, and chart sizes and types. The second component constructs pattern descriptions using sets of polynomials. The third component specifies the training set for mining, defines the similarity notion, and searches for a set of similar patterns. DMT-TAFM is useful for preparing the data and then revealing and systematizing statistical information about similar patterns found in any type of historical price series. We performed experiments with our tool on three decades of trading data for a hundred types of futures. Our results for this data set show that we can confirm or refute many well-known patterns based on real data, reveal new ones, and use the set of relatively consistent patterns found during data mining to develop better futures trading strategies.
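
    The polynomial-pattern idea can be sketched by fitting a low-order polynomial to each normalised price window and comparing windows by the distance between their coefficient vectors. The similarity notion in DMT-TAFM is user-defined, so the distance below and the price windows are illustrative assumptions only.

        # Sketch of the polynomial pattern description discussed above: fit a
        # low-order polynomial to each normalised price window and compare windows
        # by coefficient distance. Illustrative only; not DMT-TAFM's exact notion.
        import numpy as np

        def pattern(prices: np.ndarray, degree: int = 3) -> np.ndarray:
            """Polynomial coefficients describing a normalised price window."""
            x = np.linspace(0.0, 1.0, len(prices))
            y = (prices - prices.mean()) / prices.std()
            return np.polyfit(x, y, degree)

        def distance(a: np.ndarray, b: np.ndarray) -> float:
            """Smaller coefficient distance = more similar price patterns."""
            return float(np.linalg.norm(pattern(a) - pattern(b)))

        window_a = np.array([100, 101, 103, 102, 105, 107, 106, 108.0])
        window_b = np.array([50, 50.6, 51.4, 51.1, 52.3, 53.5, 53.2, 54.0])
        print(f"coefficient distance: {distance(window_a, window_b):.3f}")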

  12. Key elements of high-quality practice organisation in primary health care: a systematic review.

    PubMed

    Crossland, Lisa; Janamian, Tina; Jackson, Claire L

    2014-08-04

    To identify elements that are integral to high-quality practice and determine considerations relating to high-quality practice organisation in primary care. A narrative systematic review of published and grey literature. Electronic databases (PubMed, CINAHL, the Cochrane Library, Embase, Emerald Insight, PsycInfo, the Primary Health Care Research and Information Service website, Google Scholar) were searched in November 2013 and used to identify articles published in English from 2002 to 2013. Reference lists of included articles were searched for relevant unpublished articles and reports. Data were configured at the study level to allow for the inclusion of findings from a broad range of study types. Ten elements were most often included in the existing organisational assessment tools. A further three elements were identified from an inductive thematic analysis of descriptive articles, and were noted as important considerations in effective quality improvement in primary care settings. Although there are some validated tools available to primary care that identify and build quality, most are single-strategy approaches developed outside health care settings. There are currently no validated organisational improvement tools, designed specifically for primary health care, which combine all elements of practice improvement and whose use does not require extensive external facilitation.

  13. The MetabolomeExpress Project: enabling web-based processing, analysis and transparent dissemination of GC/MS metabolomics datasets.

    PubMed

    Carroll, Adam J; Badger, Murray R; Harvey Millar, A

    2010-07-14

    Standardization of analytical approaches and reporting methods via community-wide collaboration can work synergistically with web-tool development to result in rapid community-driven expansion of online data repositories suitable for data mining and meta-analysis. In metabolomics, the inter-laboratory reproducibility of gas-chromatography/mass-spectrometry (GC/MS) makes it an obvious target for such development. While a number of web-tools offer access to datasets and/or tools for raw data processing and statistical analysis, none of these systems are currently set up to act as a public repository by easily accepting, processing and presenting publicly submitted GC/MS metabolomics datasets for public re-analysis. Here, we present MetabolomeExpress, a new File Transfer Protocol (FTP) server and web-tool for the online storage, processing, visualisation and statistical re-analysis of publicly submitted GC/MS metabolomics datasets. Users may search a quality-controlled database of metabolite response statistics from publicly submitted datasets by a number of parameters (e.g. metabolite, species, organ/biofluid, etc.). Users may also perform meta-analysis comparisons of multiple independent experiments or re-analyse public primary datasets via user-friendly tools for t-test, principal components analysis, hierarchical cluster analysis and correlation analysis. They may interact with chromatograms, mass spectra and peak detection results via an integrated raw data viewer. Researchers who register for a free account may upload (via FTP) their own data to the server for online processing via a novel raw data processing pipeline. MetabolomeExpress https://www.metabolome-express.org provides a new opportunity for the general metabolomics community to transparently present online the raw and processed GC/MS data underlying their metabolomics publications. Transparent sharing of these data will allow researchers to assess data quality and draw their own insights from published metabolomics datasets.
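
    Two of the re-analyses named above, principal components analysis and hierarchical clustering, can be sketched on a samples-by-metabolites response matrix. The matrix below is random stand-in data and the library choices are assumptions; this is not MetabolomeExpress output or code.

        # Sketch of PCA and hierarchical clustering on a samples x metabolites
        # response matrix. Random stand-in data; illustrative only.
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        responses = rng.normal(size=(12, 40))        # 12 samples x 40 metabolites

        scores = PCA(n_components=2).fit_transform(responses)   # 2-D PCA projection
        tree = linkage(responses, method="ward")                 # hierarchical clustering
        clusters = fcluster(tree, t=3, criterion="maxclust")

        print("PC scores shape:", scores.shape)
        print("cluster labels:", clusters)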

  14. Alternative Fuels Data Center: Vehicle Search

    Science.gov Websites

  15. Scripting for Collaborative Search Computer-Supported Classroom Activities

    ERIC Educational Resources Information Center

    Verdugo, Renato; Barros, Leonardo; Albornoz, Daniela; Nussbaum, Miguel; McFarlane, Angela

    2014-01-01

    Searching online is one of the most powerful resources today's students have for accessing information. Searching in groups is a daily practice across multiple contexts; however, the tools we use for searching online do not enable collaborative practices and traditional search models consider a single user navigating online in solitary. This paper…

  16. Trajectory Browser: An Online Tool for Interplanetary Trajectory Analysis and Visualization

    NASA Technical Reports Server (NTRS)

    Foster, Cyrus James

    2013-01-01

    The trajectory browser is a web-based tool developed at the NASA Ames Research Center for finding preliminary trajectories to planetary bodies and for providing relevant launch date, time-of-flight and ΔV requirements. The site hosts a database of transfer trajectories from Earth to planets and small-bodies for various types of missions such as rendezvous, sample return or flybys. A search engine allows the user to find trajectories meeting desired constraints on the launch window, mission duration and ΔV capability, while a trajectory viewer tool allows the visualization of the heliocentric trajectory and the detailed mission itinerary. The anticipated user base of this tool consists primarily of scientists and engineers designing interplanetary missions in the context of pre-phase A studies, particularly for performing accessibility surveys to large populations of small-bodies.

  17. Mining Hidden Gems Beneath the Surface: A Look At the Invisible Web.

    ERIC Educational Resources Information Center

    Carlson, Randal D.; Repman, Judi

    2002-01-01

    Describes resources for researchers called the Invisible Web that are hidden from the usual search engines and other tools and contrasts them with those resources available on the surface Web. Identifies specialized search tools, databases, and strategies that can be used to locate credible in-depth information. (Author/LRW)

  18. Considerations in the Choice of an Internet Search Tool.

    ERIC Educational Resources Information Center

    Vaughan, Jason

    1999-01-01

    Describes a survey conducted among library school graduate students and librarians at the University of North Carolina at Chapel Hill that investigated factors that play a role in information professionals' choice of Internet search tools. Utility functions and ease of use are discussed and the original online survey is appended. (Author/LRW)

  19. Basic Reference Tools for Nursing Research. A Workbook with Explanations and Examples.

    ERIC Educational Resources Information Center

    Smalley, Topsy N.

    This workbook is designed to introduce nursing students to basic concepts and skills needed for searching the literatures of medicine, nursing, and allied health areas for materials relevant to specific information needs. The workbook introduces the following research tools: (1) the National Library of Medicine's MEDLINE searches, including a…

  20. Tools to Ease Your Internet Adventures: Part I.

    ERIC Educational Resources Information Center

    Descy, Don E.

    1993-01-01

    This first of a two-part series highlights three tools that improve accessibility to Internet resources: (1) Alex, a database that accesses files in FTP (file transfer protocol) sites; (2) Archie, software that searches for file names with a user's search term; and (3) Gopher, a menu-driven program to access Internet sites. (LRW)

  1. Conformational analysis of oligosaccharides and polysaccharides using molecular dynamics simulations.

    PubMed

    Frank, Martin

    2015-01-01

    Complex carbohydrates usually have a large number of rotatable bonds and consequently a large number of theoretically possible conformations can be generated (combinatorial explosion). The application of systematic search methods for conformational analysis of carbohydrates is therefore limited to disaccharides and trisaccharides in a routine analysis. An alternative approach is to use Monte-Carlo methods or (high-temperature) molecular dynamics (MD) simulations to explore the conformational space of complex carbohydrates. This chapter describes how to use MD simulation data to perform a conformational analysis (conformational maps, hydrogen bonds) of oligosaccharides and how to build realistic 3D structures of large polysaccharides using Conformational Analysis Tools (CAT).

  2. Federal Data Repository Research: Recent Developments in Mercury Search System Architecture

    NASA Astrophysics Data System (ADS)

    Devarakonda, R.

    2015-12-01

    New data-intensive project initiatives need a new generation of data system architecture. This presentation will discuss recent developments in the Mercury System [1], including adoption, challenges, and future efforts to handle such data-intensive projects. Mercury is a combination of three main tools: (i) Data/Metadata Registration Tool (Online Metadata Editor): the new Online Metadata Editor (OME) is a web-based tool to help document scientific data in well-structured, popular scientific metadata formats. (ii) Search and Visualization Tool: provides a single portal to information contained in disparate data management systems and facilitates distributed metadata management, data discovery, and various visualization capabilities. (iii) Data Citation Tool: in collaboration with the Department of Energy's Oak Ridge National Laboratory (ORNL) Mercury Consortium (funded by NASA, USGS and DOE), a Digital Object Identifier (DOI) service was established. Mercury is an open-source system, developed and managed at Oak Ridge National Laboratory, and is currently funded by three federal agencies: NASA, USGS and DOE. It provides access to millions of bio-geo-chemical and ecological data records; 30,000 scientists use it each month. Some recent data-intensive projects that use the Mercury tool: USGS Science Data Catalog (http://data.usgs.gov/), Next-Generation Ecosystem Experiments (http://ngee-arctic.ornl.gov/), Carbon Dioxide Information Analysis Center (http://cdiac.ornl.gov/), Oak Ridge National Laboratory - Distributed Active Archive Center (http://daac.ornl.gov), SoilSCAPE (http://mercury.ornl.gov/soilscape). References: [1] Devarakonda, Ranjeet, et al. "Mercury: reusable metadata management, data discovery and access system." Earth Science Informatics 3.1-2 (2010): 87-94.

  3. Methodological quality and reporting of systematic reviews in hand and wrist pathology.

    PubMed

    Wasiak, J; Shen, A Y; Ware, R; O'Donohoe, T J; Faggion, C M

    2017-10-01

    The objective of this study was to assess methodological and reporting quality of systematic reviews in hand and wrist pathology. MEDLINE, EMBASE and Cochrane Library were searched from inception to November 2016 for relevant studies. Reporting quality was evaluated using Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and methodological quality using a measurement tool to assess systematic reviews, the Assessment of Multiple Systematic Reviews (AMSTAR). Descriptive statistics and linear regression were used to identify features associated with improved methodological quality. A total of 91 studies were included in the analysis. Most reviews inadequately reported PRISMA items regarding study protocol, search strategy and bias and AMSTAR items regarding protocol, publication bias and funding. Systematic reviews published in a plastics journal, or which included more authors, were associated with higher AMSTAR scores. A large proportion of systematic reviews within hand and wrist pathology literature score poorly with validated methodological assessment tools, which may affect the reliability of their conclusions. I.

  4. Reflection as a Learning Tool in Graduate Medical Education: A Systematic Review.

    PubMed

    Winkel, Abigail Ford; Yingling, Sandra; Jones, Aubrie-Ann; Nicholson, Joey

    2017-08-01

    Graduate medical education programs employ reflection to advance a range of outcomes for physicians in training. However, the most effective applications of this tool have not been fully explored. A systematic review of the literature examined interventions reporting the use of reflection in graduate medical education. The authors searched Medline/PubMed, Embase, Cochrane CENTRAL, and ERIC for studies of reflection as a teaching tool to develop medical trainees' capacities. Key words and subject headings included reflection, narrative, residents/GME, and education/teaching/learning. No language or date limits were applied. The search yielded 1308 citations between inception for each database and June 15, 2015. A total of 16 studies, encompassing 477 residents and fellows, met eligibility criteria. Study quality was assessed using the Critical Appraisal Skills Programme Qualitative Checklist. The authors conducted a thematic analysis of the 16 articles. Outcomes studied encompassed the impact of reflection on empathy, comfort with learning in complex situations, and engagement in the learning process. Reflection increased learning of complex subjects and deepened professional values. It appears to be an effective tool for improving attitudes and comfort when exploring difficult material. Limitations include that most studies had small samples, used volunteers, and did not measure behavioral outcomes. Critical reflection is a tool that can amplify learning in residents and fellows. Added research is needed to understand how reflection can influence growth in professional capacities and patient-level outcomes in ways that can be measured.

  5. PubMed and beyond: a survey of web tools for searching biomedical literature

    PubMed Central

    Lu, Zhiyong

    2011-01-01

    The past decade has witnessed the modern advances of high-throughput technology and rapid growth of research capacity in producing large-scale biological data, both of which were concomitant with an exponential growth of biomedical literature. This wealth of scholarly knowledge is of significant importance for researchers in making scientific discoveries and healthcare professionals in managing health-related matters. However, the acquisition of such information is becoming increasingly difficult due to its large volume and rapid growth. In response, the National Center for Biotechnology Information (NCBI) is continuously making changes to its PubMed Web service for improvement. Meanwhile, different entities have devoted themselves to developing Web tools for helping users quickly and efficiently search and retrieve relevant publications. These practices, together with maturity in the field of text mining, have led to an increase in the number and quality of various Web tools that provide comparable literature search service to PubMed. In this study, we review 28 such tools, highlight their respective innovations, compare them to the PubMed system and one another, and discuss directions for future development. Furthermore, we have built a website dedicated to tracking existing systems and future advances in the field of biomedical literature search. Taken together, our work serves information seekers in choosing tools for their needs and service providers and developers in keeping current in the field. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/search PMID:21245076

  6. A method for the design and development of medical or health care information websites to optimize search engine results page rankings on Google.

    PubMed

    Dunne, Suzanne; Cummins, Niamh Maria; Hannigan, Ailish; Shannon, Bill; Dunne, Colum; Cullen, Walter

    2013-08-27

    The Internet is a widely used source of information for patients searching for medical/health care information. While many studies have assessed existing medical/health care information on the Internet, relatively few have examined methods for design and delivery of such websites, particularly those aimed at the general public. This study describes a method of evaluating material for new medical/health care websites, or for assessing those already in existence, which is correlated with higher rankings on Google's Search Engine Results Pages (SERPs). A website quality assessment (WQA) tool was developed using criteria related to the quality of the information to be contained in the website in addition to an assessment of the readability of the text. This was retrospectively applied to assess existing websites that provide information about generic medicines. The reproducibility of the WQA tool and its predictive validity were assessed in this study. The WQA tool demonstrated very high reproducibility (intraclass correlation coefficient=0.95) between 2 independent users. A moderate to strong correlation was found between WQA scores and rankings on Google SERPs. Analogous correlations were seen between rankings and readability of websites as determined by Flesch Reading Ease and Flesch-Kincaid Grade Level scores. The use of the WQA tool developed in this study is recommended as part of the design phase of a medical or health care information provision website, along with assessment of readability of the material to be used. This may ensure that the website performs better on Google searches. The tool can also be used retrospectively to make improvements to existing websites, thus, potentially enabling better Google search result positions without incurring the costs associated with Search Engine Optimization (SEO) professionals or paid promotion.
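
    The two readability components reported alongside the WQA score can be reproduced with standard formulas. A minimal sketch using the textstat package follows; the package choice and the sample text are assumptions for illustration, and the WQA criteria themselves are defined in the paper rather than reproduced here.

        # Sketch: computing Flesch Reading Ease and Flesch-Kincaid Grade Level for a
        # page's text with the "textstat" package (illustrative choice of library).
        import textstat

        page_text = (
            "Generic medicines contain the same active ingredient as the original "
            "branded medicine and must meet the same quality standards."
        )

        print("Flesch Reading Ease:", textstat.flesch_reading_ease(page_text))
        print("Flesch-Kincaid Grade Level:", textstat.flesch_kincaid_grade(page_text))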

  7. Short-term Internet search using makes people rely on search engines when facing unknown issues.

    PubMed

    Wang, Yifan; Wu, Lingdan; Luo, Liang; Zhang, Yifen; Dong, Guangheng

    2017-01-01

    Internet search engines, with their powerful search/sort functions and ease of use, have become an indispensable tool for many individuals. The current study tested whether short-term Internet search training can make people more dependent on them. Thirty-one of forty subjects completed the search training study, which included a pre-test, six days of Internet search training, and a post-test. During the pre- and post-tests, subjects were asked to search online for the answers to 40 unusual questions, remember the answers and recall them in the scanner. Un-learned questions were randomly presented at the recall stage in order to elicit search impulses. Compared to the pre-test, subjects in the post-test reported a higher impulse to use search engines to answer un-learned questions. Consistently, subjects showed higher brain activations in the dorsolateral prefrontal cortex and anterior cingulate cortex in the post-test than in the pre-test. In addition, there were significant positive correlations between self-reported search impulse and brain responses in the frontal areas. The results suggest that a simple six-day Internet search training can make people dependent on search tools when facing unknown issues. People easily become dependent on Internet search engines.

  8. Short-term Internet search using makes people rely on search engines when facing unknown issues

    PubMed Central

    Wang, Yifan; Wu, Lingdan; Luo, Liang; Zhang, Yifen

    2017-01-01

    Internet search engines, with their powerful search/sort functions and ease of use, have become an indispensable tool for many individuals. The current study tested whether short-term Internet search training can make people more dependent on them. Thirty-one of forty subjects completed the search training study, which included a pre-test, six days of Internet search training, and a post-test. During the pre- and post-tests, subjects were asked to search online for the answers to 40 unusual questions, remember the answers and recall them in the scanner. Un-learned questions were randomly presented at the recall stage in order to elicit search impulses. Compared to the pre-test, subjects in the post-test reported a higher impulse to use search engines to answer un-learned questions. Consistently, subjects showed higher brain activations in the dorsolateral prefrontal cortex and anterior cingulate cortex in the post-test than in the pre-test. In addition, there were significant positive correlations between self-reported search impulse and brain responses in the frontal areas. The results suggest that a simple six-day Internet search training can make people dependent on search tools when facing unknown issues. People easily become dependent on Internet search engines. PMID:28441408

  9. The Complex Dynamics of Sponsored Search Markets

    NASA Astrophysics Data System (ADS)

    Robu, Valentin; La Poutré, Han; Bohte, Sander

    This paper provides a comprehensive study of the structure and dynamics of online advertising markets, mostly based on techniques from the emergent discipline of complex systems analysis. First, we look at how the display rank of a URL link influences its click frequency, for both sponsored search and organic search. Second, we study the market structure that emerges from these queries, especially the market share distribution of different advertisers. We show that the sponsored search market is highly concentrated, with less than 5% of all advertisers receiving over 2/3 of the clicks in the market. Furthermore, we show that both the number of ad impressions and the number of clicks follow power law distributions of approximately the same coefficient. However, we find this result does not hold when studying the same distribution of clicks per rank position, which shows considerable variance, most likely due to the way advertisers divide their budget on different keywords. Finally, we turn our attention to how such sponsored search data could be used to provide decision support tools for bidding for combinations of keywords. We provide a method to visualize keywords of interest in graphical form, as well as a method to partition these graphs to obtain desirable subsets of search terms.
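
    The claim that impressions and clicks follow power laws of roughly the same coefficient can be checked with a simple log-log rank-size regression. The sketch below uses heavy-tailed stand-in data, not the paper's sponsored-search logs, and a maximum-likelihood estimator would be more rigorous than the least-squares fit shown.

        # Sketch: estimating a power-law exponent for a clicks-per-advertiser
        # distribution via log-log least squares on the rank-size plot.
        import numpy as np

        rng = np.random.default_rng(1)
        clicks = np.sort(rng.pareto(a=1.5, size=5000) + 1)[::-1]   # heavy-tailed stand-in data

        rank = np.arange(1, len(clicks) + 1)
        slope, intercept = np.polyfit(np.log(rank), np.log(clicks), 1)

        # For a Pareto-style tail with exponent a, the rank-size slope is roughly -1/a.
        print(f"log-log slope: {slope:.2f}")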

  10. Offering A Price Transparency Tool Did Not Reduce Overall Spending Among California Public Employees And Retirees.

    PubMed

    Desai, Sunita; Hatfield, Laura A; Hicks, Andrew L; Sinaiko, Anna D; Chernew, Michael E; Cowling, David; Gautam, Santosh; Wu, Sze-Jung; Mehrotra, Ateev

    2017-08-01

    Insurers, employers, and states increasingly encourage price transparency so that patients can compare health care prices across providers. However, the evidence on whether price transparency tools encourage patients to receive lower-cost care and reduce overall spending remains limited and mixed. We examined the experience of a large insured population that was offered a price transparency tool, focusing on a set of "shoppable" services (lab tests, office visits, and advanced imaging services). Overall, offering the tool was not associated with lower shoppable services spending. Only 12 percent of employees who were offered the tool used it in the first fifteen months after it was introduced, and use of the tool was not associated with lower prices for lab tests or office visits. The average price paid for imaging services preceded by a price search was 14 percent lower than that paid for imaging services not preceded by a price search. However, only 1 percent of those who received advanced imaging conducted a price search. Simply offering a price transparency tool is not sufficient to meaningfully decrease health care prices or spending. Project HOPE—The People-to-People Health Foundation, Inc.

  11. Finding similar nucleotide sequences using network BLAST searches.

    PubMed

    Ladunga, Istvan

    2009-06-01

    The Basic Local Alignment Search Tool (BLAST) is a keystone of bioinformatics due to its performance and user-friendliness. Beginner and intermediate users will learn how to design and submit blastn and Megablast searches on the Web pages at the National Center for Biotechnology Information. We map nucleic acid sequences to genomes, find identical or similar mRNA, expressed sequence tag, and noncoding RNA sequences, and run Megablast searches, which are much faster than blastn. Understanding results is assisted by taxonomy reports, genomic views, and multiple alignments. We interpret expected frequency thresholds, biological significance, and statistical significance. Weak hits provide no evidence, but hints for further analyses. We find genes that may code for homologous proteins by translated BLAST. We reduce false positives by filtering out low-complexity regions. Parsed BLAST results can be integrated into analysis pipelines. Links in the output connect to Entrez, PUBMED, structural, sequence, interaction, and expression databases. This facilitates integration with a wide spectrum of biological knowledge.
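
    The same blastn/Megablast searches can be submitted programmatically through NCBI's web service. A minimal Biopython sketch follows; the query sequence is a placeholder, and NCBI throttles automated submissions, so this route suits occasional queries rather than pipelines.

        # Sketch: submitting a Megablast search against the nt database through
        # NCBI's web service with Biopython. Placeholder query sequence.
        from Bio.Blast import NCBIWWW, NCBIXML

        query = "AGCTTTTCATTCTGACTGCAACGGGCAATATGTCTCTGTGTGGATTAAAAAAAGAGTGTCTGATAGCAGC"

        result_handle = NCBIWWW.qblast("blastn", "nt", query, megablast=True, hitlist_size=10)
        record = NCBIXML.read(result_handle)

        for alignment in record.alignments:
            best_hsp = alignment.hsps[0]
            print(alignment.title[:60], "E =", best_hsp.expect)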

  12. SNPassoc: an R package to perform whole genome association studies.

    PubMed

    González, Juan R; Armengol, Lluís; Solé, Xavier; Guinó, Elisabet; Mercader, Josep M; Estivill, Xavier; Moreno, Víctor

    2007-03-01

    The popularization of large-scale genotyping projects has led to the widespread adoption of genetic association studies as the tool of choice in the search for single nucleotide polymorphisms (SNPs) underlying susceptibility to complex diseases. Although the analysis of individual SNPs is a relatively trivial task, when the number of SNPs is large and multiple genetic models need to be explored, a tool to automate the analyses becomes necessary. In order to address this issue, we developed SNPassoc, an R package to carry out the most common analyses in whole genome association studies. These analyses include descriptive statistics and exploratory analysis of missing values, calculation of Hardy-Weinberg equilibrium, analysis of association based on generalized linear models (either for quantitative or binary traits), and analysis of multiple SNPs (haplotype and epistasis analysis). Package SNPassoc is available at CRAN from http://cran.r-project.org. A tutorial is available on Bioinformatics online and at http://davinci.crg.es/estivill_lab/snpassoc.

  13. The COMPASS Project

    NASA Astrophysics Data System (ADS)

    Duley, A. R.; Sullivan, D.; Fladeland, M. M.; Myers, J.; Craig, M.; Enomoto, F.; Van Gilst, D. P.; Johan, S.

    2011-12-01

    The Common Operations and Management Portal for Airborne Science Systems (COMPASS) project is a multi-center collaborative effort to advance and extend the research capabilities of the National Aeronautics and Space Administration's (NASA) Airborne Science Program (ASP). At its most basic, COMPASS provides tools for visualizing the position of aircraft and instrument observations during the course of a mission, and facilitates dissemination, discussion, and analysis of multiple disparate data sources in order to more efficiently plan and execute airborne science missions. COMPASS targets a number of key objectives. First, deliver a common operating picture for improved shared situational awareness to all participants in NASA's Airborne Science missions. These participants include scientists, engineers, managers, and the general public. Second, encourage more responsive and collaborative measurements between instruments on multiple aircraft, satellites, and on the surface in order to increase the scientific value of these measurements. Third, provide flexible entry points for data providers to supply model and advanced analysis products to mission team members. Fourth, provide data consumers with a mechanism to ingest, search and display data products. Finally, embrace an open and transparent platform where common data products, services, and end user components can be shared with the broader scientific community. In pursuit of these objectives, and in concert with requirements solicited by the airborne science research community, the COMPASS project team has delivered a suite of core tools intended to represent the next generation toolset for airborne research. This toolset includes a collection of loosely coupled RESTful web-services, a system to curate, register, and search commonly used data sources, end-user tools which leverage web socket and other next generation HTML5 technologies to aid real time aircraft position and data visualization, and an extensible framework to rapidly accommodate mission specific requirements and mission tools.

  14. The Diverse Data, User Driven Services and the Power of Giovanni at NASA GES DISC

    NASA Technical Reports Server (NTRS)

    Shen, Suhung

    2017-01-01

    This presentation provides an overview of remote sensing and model data at GES (Goddard Earth Sciences) DISC (Data and Information Services Center); Overview of data services at GES DISC (Registration with NASA data system; Searching and downloading data); Giovanni (Geospatial Interactive Online Visualization ANd aNalysis Infrastructure): online data exploration tool; and NASA Earth Data and Information System.

  15. Contrast Analysis for Side-Looking Sonar

    DTIC Science & Technology

    2013-09-30

    bound for shadow depth that can be used to validate modeling tools such as SWAT (Shallow Water Acoustics Toolkit). • Adaptive Postprocessing: Tune image...

  16. Ocean Drilling Program: Science Operator Search Engine

    Science.gov Websites

    Search engine for the ODP/TAMU web site and ODP's main USIO site, plus IODP, ODP, and DSDP Publications, together or separately; links cover drilling services and tools and the online Janus database.

  17. Integrated Bio-Search: challenges and trends for the integration, search and comprehensive processing of biological information

    PubMed Central

    2014-01-01

    Many efforts exist to design and implement approaches and tools for data capture, integration and analysis in the life sciences. Challenges are not only the heterogeneity, size and distribution of information sources, but also the danger of producing too many solutions for the same problem. Methodological, technological, infrastructural and social aspects appear to be essential for the development of a new generation of best practices and tools. In this paper, we analyse and discuss these aspects from different perspectives, by extending some of the ideas that arose during the NETTAB 2012 Workshop, making reference especially to the European context. First, relevance of using data and software models for the management and analysis of biological data is stressed. Second, some of the most relevant community achievements of the recent years, which should be taken as a starting point for future efforts in this research domain, are presented. Third, some of the main outstanding issues, challenges and trends are analysed. The challenges related to the tendency to fund and create large scale international research infrastructures and public-private partnerships in order to address the complex challenges of data intensive science are especially discussed. The needs and opportunities of Genomic Computing (the integration, search and display of genomic information at a very specific level, e.g. at the level of a single DNA region) are then considered. In the current data and network-driven era, social aspects can become crucial bottlenecks. How these may best be tackled to unleash the technical abilities for effective data integration and validation efforts is then discussed. Especially the apparent lack of incentives for already overwhelmed researchers appears to be a limitation for sharing information and knowledge with other scientists. We point out as well how the bioinformatics market is growing at an unprecedented speed due to the impact that new powerful in silico analysis promises to have on better diagnosis, prognosis, drug discovery and treatment, towards personalized medicine. An open business model for bioinformatics, which appears to be able to reduce undue duplication of efforts and support the increased reuse of valuable data sets, tools and platforms, is finally discussed. PMID:24564249

  18. A Cross-Platform Infrastructure for Scalable Runtime Application Performance Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jack Dongarra; Shirley Moore; Bart Miller; Jeffrey Hollingsworth

    2005-03-15

    The purpose of this project was to build an extensible cross-platform infrastructure to facilitate the development of accurate and portable performance analysis tools for current and future high performance computing (HPC) architectures. Major accomplishments include tools and techniques for multidimensional performance analysis, as well as improved support for dynamic performance monitoring of multithreaded and multiprocess applications. Previous performance tool development has been limited by the burden of having to re-write a platform-dependent low-level substrate for each architecture/operating system pair in order to obtain the necessary performance data from the system. Manual interpretation of performance data is not scalable for large-scale, long-running applications. The infrastructure developed by this project provides a foundation for building portable and scalable performance analysis tools, with the end goal being to provide application developers with the information they need to analyze, understand, and tune the performance of terascale applications on HPC architectures. The backend portion of the infrastructure provides runtime instrumentation capability and access to hardware performance counters, with thread-safety for shared memory environments and a communication substrate to support instrumentation of multiprocess and distributed programs. Front-end interfaces provide tool developers with a well-defined, platform-independent set of calls for requesting performance data. End-user tools have been developed that demonstrate runtime data collection, on-line and off-line analysis of performance data, and multidimensional performance analysis. The infrastructure is based on two underlying performance instrumentation technologies. These technologies are the PAPI cross-platform library interface to hardware performance counters and the cross-platform Dyninst library interface for runtime modification of executable images. The Paradyn and KOJAK projects have made use of this infrastructure to build performance measurement and analysis tools that scale to long-running programs on large parallel and distributed systems and that automate much of the search for performance bottlenecks.

  19. Rare disease diagnosis: A review of web search, social media and large-scale data-mining approaches.

    PubMed

    Svenstrup, Dan; Jørgensen, Henrik L; Winther, Ole

    2015-01-01

    Physicians and the general public are increasingly using web-based tools to find answers to medical questions. The field of rare diseases is especially challenging and important as shown by the long delay and many mistakes associated with diagnoses. In this paper we review recent initiatives on the use of web search, social media and data mining in data repositories for medical diagnosis. We compare the retrieval accuracy on 56 rare disease cases with known diagnosis for the web search tools google.com, pubmed.gov, omim.org and our own search tool findzebra.com. We give a detailed description of IBM's Watson system and make a rough comparison between findzebra.com and Watson on subsets of the Doctor's Dilemma dataset. The recall@10 and recall@20 (fraction of cases where the correct result appears in the top 10 and top 20) for the 56 cases are found to be 29%, 16%, 27% and 59% and 32%, 18%, 34% and 64%, respectively. Thus, FindZebra has a significantly (p < 0.01) higher recall than the other three search engines. When tested under the same conditions, Watson and FindZebra showed similar recall@10 accuracy. However, the tests were performed on different subsets of Doctor's Dilemma questions. Advances in technology and access to high quality data have opened new possibilities for aiding the diagnostic process. Specialized search engines, data mining tools and social media are some of the areas that hold promise.

  20. Rare disease diagnosis: A review of web search, social media and large-scale data-mining approaches

    PubMed Central

    Svenstrup, Dan; Jørgensen, Henrik L; Winther, Ole

    2015-01-01

    Physicians and the general public are increasingly using web-based tools to find answers to medical questions. The field of rare diseases is especially challenging and important as shown by the long delay and many mistakes associated with diagnoses. In this paper we review recent initiatives on the use of web search, social media and data mining in data repositories for medical diagnosis. We compare the retrieval accuracy on 56 rare disease cases with known diagnosis for the web search tools google.com, pubmed.gov, omim.org and our own search tool findzebra.com. We give a detailed description of IBM's Watson system and make a rough comparison between findzebra.com and Watson on subsets of the Doctor's Dilemma dataset. The recall@10 and recall@20 (fraction of cases where the correct result appears in the top 10 and top 20) for the 56 cases are found to be 29%, 16%, 27% and 59% and 32%, 18%, 34% and 64%, respectively. Thus, FindZebra has a significantly (p < 0.01) higher recall than the other three search engines. When tested under the same conditions, Watson and FindZebra showed similar recall@10 accuracy. However, the tests were performed on different subsets of Doctor's Dilemma questions. Advances in technology and access to high quality data have opened new possibilities for aiding the diagnostic process. Specialized search engines, data mining tools and social media are some of the areas that hold promise. PMID:26442199
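
    The recall@10 and recall@20 figures quoted above reduce to a simple fraction: the share of cases whose known diagnosis appears within the top k returned results. The sketch below shows that arithmetic on invented ranked lists; the diseases and rankings are illustrative only and are not from the FindZebra evaluation.

      def recall_at_k(ranked_results, correct, k):
          # Fraction of cases whose correct answer appears in the top k results
          hits = sum(1 for res, ans in zip(ranked_results, correct) if ans in res[:k])
          return hits / len(correct)

      cases = ["Fabry disease", "Wilson disease", "Pompe disease"]
      results = [
          ["Fabry disease", "Gaucher disease"],      # correct at rank 1
          ["hemochromatosis", "Wilson disease"],     # correct at rank 2
          ["Niemann-Pick disease"],                  # correct answer missing
      ]
      print(recall_at_k(results, cases, k=10))       # 2/3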

  1. Racism as a determinant of health: a protocol for conducting a systematic review and meta-analysis.

    PubMed

    Paradies, Yin; Priest, Naomi; Ben, Jehonathan; Truong, Mandy; Gupta, Arpana; Pieterse, Alex; Kelaher, Margaret; Gee, Gilbert

    2013-09-23

    Racism is increasingly recognized as a key determinant of health. A growing body of epidemiological evidence shows strong associations between self-reported racism and poor health outcomes across diverse minority groups in developed countries. While the relationship between racism and health has received increasing attention over the last two decades, a comprehensive meta-analysis focused on the health effects of racism has yet to be conducted. The aim of this review protocol is to provide a structure from which to conduct a systematic review and meta-analysis of studies that assess the relationship between racism and health. This research will consist of a systematic review and meta-analysis. Studies will be considered for review if they are empirical studies reporting quantitative data on the association between racism and health for adults and/or children of all ages from any racial/ethnic/cultural groups. Outcome measures will include general health and well-being, physical health, mental health, healthcare use and health behaviors. Scientific databases (for example, Medline) will be searched using a comprehensive search strategy and reference lists will be manually searched for relevant studies. In addition, use of online search engines (for example, Google Scholar), key websites, and personal contact with experts will also be undertaken. Screening of search results and extraction of data from included studies will be independently conducted by at least two authors, including assessment of inter-rater reliability. Studies included in the review will be appraised for quality using tools tailored to each study design. Summary statistics of study characteristics and findings will be compiled and findings synthesized in a narrative summary as well as a meta-analysis. This review aims to examine associations between reported racism and health outcomes. This comprehensive and systematic review and meta-analysis of empirical research will provide a rigorous and reliable evidence base for future research, policy and practice, including information on the extent of available evidence for a range of racial/ethnic minority groups.

  2. Automated Patent Searching in the EPO: From Online Searching to Document Delivery.

    ERIC Educational Resources Information Center

    Nuyts, Annemie; Jonckheere, Charles

    The European Patent Office (EPO) has recently implemented the last part of its ambitious automation project aimed at creating an automated search environment for approximately 1200 EPO patent search examiners. The examiners now have at their disposal an integrated set of tools offering a full range of functionalities from online searching, via…

  3. Improve homology search sensitivity of PacBio data by correcting frameshifts.

    PubMed

    Du, Nan; Sun, Yanni

    2016-09-01

    Single-molecule, real-time sequencing (SMRT) developed by Pacific BioSciences produces longer reads than second-generation sequencing technologies such as Illumina. The long read length enables PacBio sequencing to close gaps in genome assembly, reveal structural variations, and identify gene isoforms with higher accuracy in transcriptomic sequencing. However, PacBio data has a high sequencing error rate, and most of the errors are insertions or deletions. During alignment-based homology search, insertion or deletion errors in genes will cause frameshifts and may only lead to marginal alignment scores and short alignments. As a result, it is hard to distinguish true alignments from random alignments and the ambiguity will incur errors in structural and functional annotation. Existing frameshift correction tools are designed for data with much lower error rates and are not optimized for PacBio data. As an increasing number of groups are using SMRT, there is an urgent need for dedicated homology search tools for PacBio data. In this work, we introduce Frame-Pro, a profile homology search tool for PacBio reads. Our tool corrects sequencing errors and also outputs the profile alignments of the corrected sequences against characterized protein families. We applied our tool to both simulated and real PacBio data. The results showed that our method enables more sensitive homology search, especially for PacBio data sets of low sequencing coverage. In addition, we can correct more errors compared with a popular error correction tool that does not rely on hybrid sequencing. The source code is freely available at https://sourceforge.net/projects/frame-pro/. Contact: yannisun@msu.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
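
    The frameshift problem that Frame-Pro addresses is easy to see in a toy example: a single deleted base shifts the reading frame, so the translated read diverges from the reference protein downstream of the error. The sketch below is not Frame-Pro itself; it only illustrates the effect, using Biopython and invented sequences.

      from Bio.Seq import Seq

      gene = Seq("ATGGCTGCTAAAGGTGAAGCTTTAGGTTAA")      # toy reference coding sequence
      read = Seq("ATGGCTGCTAAGGTGAAGCTTTAGGTTAA")       # same sequence with one base deleted

      trimmed = read[: len(read) - len(read) % 3]       # trim to a whole number of codons
      print("reference protein:", gene.translate())     # MAAKGEALG*
      print("read translation: ", trimmed.translate())  # diverges after MAAK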

  4. Epidemiology of Major Depressive Disorder in Iran: a Systematic Review and Meta-Analysis

    PubMed Central

    Sadeghirad, Behnam; Haghdoost, Ali-Akbar; Amin-Esmaeili, Masoumeh; Ananloo, Esmaeil Shahsavand; Ghaeli, Padideh; Rahimi-Movaghar, Afarin; Talebian, Elham; Pourkhandani, Ali; Noorbala, Ahmad Ali; Barooti, Esmat

    2010-01-01

    Objectives: There are a large number of primary studies on the prevalence of major depressive disorder (MDD) in Iran; however, their findings vary considerably. A systematic review was performed in order to summarize the findings. Methods: Electronic and manual searches in international and Iranian journals were conducted to find relevant studies reporting MDD prevalence. To maximize the sensitivity of the search, the references of relevant papers were also explored. We explored potential sources of heterogeneity such as diagnostic tools, gender and other characteristics using a meta-regression model. The combined mean prevalence rates were calculated for each gender, for studies using each type of instrument, and for each province using meta-analysis methods. Results: From 44 articles included in the systematic review, 24 reported current prevalence and 20 reported lifetime prevalence of MDD. The overall estimation of current prevalence of MDD was 4.1% (95% CI: 3.1-5.1). Women were 1.95 (95% CI: 1.55-2.45) times more likely to have MDD. The current prevalence of MDD in urban inhabitants was not significantly different from rural inhabitants. The analysis identified the variations in diagnostic tools as an important source of heterogeneity. Conclusions: Although there is not adequate information on MDD prevalence in some areas of Iran, the overall current prevalence of MDD in the country is high and females are at greater risk of the disease. PMID:21566767
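
    The pooled prevalence estimates reported above rest on weighting each study's proportion by the inverse of its variance. The sketch below shows the basic fixed-effect version of that calculation on invented study counts; the review itself also used meta-regression on moderators such as diagnostic tool and gender, which is not reproduced here.

      import numpy as np

      cases = np.array([40, 55, 120])       # invented MDD cases per study
      n = np.array([1000, 1200, 2500])      # invented sample sizes
      p = cases / n
      var = p * (1 - p) / n                 # binomial variance of each proportion
      w = 1.0 / var                         # inverse-variance weights
      pooled = np.sum(w * p) / np.sum(w)
      se = np.sqrt(1.0 / np.sum(w))
      print(f"pooled prevalence {pooled:.3f}, 95% CI {pooled - 1.96*se:.3f}-{pooled + 1.96*se:.3f}")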

  5. Patient- and Caregiver-Reported Assessment Tools for Palliative Care: Summary of the 2017 Agency for Healthcare Research and Quality Technical Brief.

    PubMed

    Aslakson, Rebecca A; Dy, Sydney M; Wilson, Renee F; Waldfogel, Julie; Zhang, Allen; Isenberg, Sarina R; Blair, Alex; Sixon, Joshua; Lorenz, Karl A; Robinson, Karen A

    2017-12-01

    Assessment tools are data collection instruments that are completed by or with patients or caregivers and which collect data at the individual patient or caregiver level. The objectives of this study are to 1) summarize palliative care assessment tools completed by or with patients or caregivers and 2) identify needs for future tool development and evaluation. We completed 1) a systematic review of systematic reviews; 2) a supplemental search of previous reviews and Web sites, and/or 3) a targeted search for primary articles when no tools existed in a domain. Paired investigators screened search results, assessed risk of bias, and abstracted data. We organized tools by domains from the National Consensus Project Clinical Practice Guidelines for Palliative Care and selected the most relevant, recent, and highest quality systematic review for each domain. We included 10 systematic reviews and identified 152 tools (97 from systematic reviews and 55 from supplemental sources). Key gaps included no systematic review for pain and few tools assessing structural, cultural, spiritual, or ethical/legal domains, or patient-reported experience with end-of-life care. Psychometric information was available for many tools, but few studies evaluated responsiveness (sensitivity to change) and no studies compared tools. Few to no tools address the spiritual, ethical, or cultural domains or patient-reported experience with end-of-life care. While some data exist on psychometric properties of tools, the responsiveness of different tools to change and/or comparisons between tools have not been evaluated. Future research should focus on developing or testing tools that address domains for which few tools exist, evaluating responsiveness, and comparing tools. Copyright © 2017 American Academy of Hospice and Palliative Medicine. All rights reserved.

  6. Stop the Bleeding: the Development of a Tool to Streamline NASA Earth Science Metadata Curation Efforts

    NASA Astrophysics Data System (ADS)

    le Roux, J.; Baker, A.; Caltagirone, S.; Bugbee, K.

    2017-12-01

    The Common Metadata Repository (CMR) is a high-performance, high-quality repository for Earth science metadata records, and serves as the primary way to search NASA's growing 17.5 petabytes of Earth science data holdings. Released in 2015, CMR has the capability to support several different metadata standards already being utilized by NASA's combined network of Earth science data providers, or Distributed Active Archive Centers (DAACs). The Analysis and Review of CMR (ARC) Team located at Marshall Space Flight Center is working to improve the quality of records already in CMR with the goal of making records optimal for search and discovery. This effort entails a combination of automated and manual review, where each NASA record in CMR is checked for completeness, accuracy, and consistency. This effort is highly collaborative in nature, requiring communication and transparency of findings amongst NASA personnel, DAACs, the CMR team and other metadata curation teams. Through the evolution of this project it has become apparent that there is a need to document and report findings, as well as track metadata improvements in a more efficient manner. The ARC team has collaborated with Element 84 in order to develop a metadata curation tool to meet these needs. In this presentation, we will provide an overview of this metadata curation tool and its current capabilities. Challenges and future plans for the tool will also be discussed.

  7. Chemoinformatics Profiling of the Chromone Nucleus as a MAO-B/A2AAR Dual Binding Scaffold

    PubMed Central

    Cruz-Monteagudo, Maykel; Borges, Fernanda; Cordeiro, M. Natália D. S.; Helguera, Aliuska Morales; Tejera, Eduardo; Paz-y-Miño, Cesar; Sánchez-Rodríguez, Aminael; Perera-Sardiña, Yunier; Perez-Castillo, Yunierkis

    2017-01-01

    Background: In the context of current drug discovery efforts to find disease-modifying therapies for Parkinson's disease (PD), the current single-target strategy has proved inefficient. Consequently, the search for multi-potent agents is attracting more and more attention due to the multiple pathogenetic factors implicated in PD. Multiple lines of evidence point to the dual inhibition of monoamine oxidase B (MAO-B), as well as adenosine A2A receptor (A2AAR) blockade, as a promising approach to prevent the neurodegeneration involved in PD. Currently, only two chemical scaffolds have been proposed as potential dual MAO-B inhibitors/A2AAR antagonists (caffeine derivatives and benzothiazinones). Methods: In this study, we conduct a series of chemoinformatics analyses in order to evaluate and advance the potential of the chromone nucleus as a MAO-B/A2AAR dual binding scaffold. Results: The information provided by SAR data mining analysis based on network similarity graphs and molecular docking studies supports the suitability of the chromone nucleus as a potential MAO-B/A2AAR dual binding scaffold. Additionally, a virtual screening tool based on a group fusion similarity search approach was developed for the prioritization of potential MAO-B/A2AAR dual binder candidates. Among several data fusion schemes evaluated, the MEAN-SIM and MIN-RANK GFSS approaches proved to be efficient virtual screening tools. Then, a combinatorial library potentially enriched with MAO-B/A2AAR dual binding chromone derivatives was assembled and sorted by using the MIN-RANK and then the MEAN-SIM GFSS VS approaches. Conclusion: The information and tools provided in this work represent valuable decision-making elements in the search for novel chromone derivatives with a favorable dual binding profile as MAO-B inhibitors and A2AAR antagonists with the potential to act as a disease-modifying therapeutic for Parkinson's disease. PMID:28093976
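
    Group fusion similarity search, as used above, scores each screening candidate against a group of reference actives and then fuses the per-reference scores; MEAN-SIM averages the similarities, while MIN-RANK keeps the best rank achieved across references. The sketch below shows the MEAN-SIM variant on toy fingerprint bit sets, which stand in for real molecular fingerprints and are not chromone data.

      def tanimoto(a, b):
          # Tanimoto similarity between two fingerprint bit sets
          return len(a & b) / len(a | b) if (a | b) else 0.0

      references = {"ref1": {1, 4, 7, 9}, "ref2": {1, 2, 7, 8}}                 # toy reference actives
      candidates = {"cmpd_A": {1, 4, 7}, "cmpd_B": {2, 3, 8}, "cmpd_C": {5, 6}}  # toy candidates

      mean_sim = {
          name: sum(tanimoto(fp, ref) for ref in references.values()) / len(references)
          for name, fp in candidates.items()
      }
      for name, score in sorted(mean_sim.items(), key=lambda kv: kv[1], reverse=True):
          print(name, round(score, 3))        # candidates ranked by fused similarity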

  8. Large-scale feature searches of collections of medical imagery

    NASA Astrophysics Data System (ADS)

    Hedgcock, Marcus W.; Karshat, Walter B.; Levitt, Tod S.; Vosky, D. N.

    1993-09-01

    Large scale feature searches of accumulated collections of medical imagery are required for multiple purposes, including clinical studies, administrative planning, epidemiology, teaching, quality improvement, and research. To perform a feature search of large collections of medical imagery, one can either search text descriptors of the imagery in the collection (usually the interpretation), or (if the imagery is in digital format) the imagery itself. At our institution, text interpretations of medical imagery are all available in our VA Hospital Information System. These are downloaded daily into an off-line computer. The text descriptors of most medical imagery are usually formatted as free text, and so require a user friendly database search tool to make searches quick and easy for any user to design and execute. We are tailoring such a database search tool (Liveview), developed by one of the authors (Karshat). To further facilitate search construction, we are constructing (from our accumulated interpretation data) a dictionary of medical and radiological terms and synonyms. If the imagery database is digital, the imagery which the search discovers is easily retrieved from the computer archive. We describe our database search user interface, with examples, and compare the efficacy of computer assisted imagery searches from a clinical text database with manual searches. Our initial work on direct feature searches of digital medical imagery is outlined.
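
    The dictionary of terms and synonyms described above supports the kind of query expansion sketched below, where a search term is expanded to its radiological synonyms before matching report text. The synonym entries and reports here are invented; they are not drawn from the authors' Liveview tool or their VA data.

      synonyms = {"pneumothorax": ["pneumothorax", "ptx", "collapsed lung"]}

      reports = {
          101: "Small left apical PTX, unchanged from prior study.",
          102: "No acute cardiopulmonary abnormality.",
      }

      def search(term):
          # Expand the query through the synonym dictionary, then match report text
          terms = [t.lower() for t in synonyms.get(term, [term])]
          return [rid for rid, text in reports.items()
                  if any(t in text.lower() for t in terms)]

      print(search("pneumothorax"))   # [101]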

  9. YPED: An Integrated Bioinformatics Suite and Database for Mass Spectrometry-based Proteomics Research

    PubMed Central

    Colangelo, Christopher M.; Shifman, Mark; Cheung, Kei-Hoi; Stone, Kathryn L.; Carriero, Nicholas J.; Gulcicek, Erol E.; Lam, TuKiet T.; Wu, Terence; Bjornson, Robert D.; Bruce, Can; Nairn, Angus C.; Rinehart, Jesse; Miller, Perry L.; Williams, Kenneth R.

    2015-01-01

    We report a significantly-enhanced bioinformatics suite and database for proteomics research called Yale Protein Expression Database (YPED) that is used by investigators at more than 300 institutions worldwide. YPED meets the data management, archival, and analysis needs of a high-throughput mass spectrometry-based proteomics research ranging from a single laboratory, group of laboratories within and beyond an institution, to the entire proteomics community. The current version is a significant improvement over the first version in that it contains new modules for liquid chromatography–tandem mass spectrometry (LC–MS/MS) database search results, label and label-free quantitative proteomic analysis, and several scoring outputs for phosphopeptide site localization. In addition, we have added both peptide and protein comparative analysis tools to enable pairwise analysis of distinct peptides/proteins in each sample and of overlapping peptides/proteins between all samples in multiple datasets. We have also implemented a targeted proteomics module for automated multiple reaction monitoring (MRM)/selective reaction monitoring (SRM) assay development. We have linked YPED’s database search results and both label-based and label-free fold-change analysis to the Skyline Panorama repository for online spectra visualization. In addition, we have built enhanced functionality to curate peptide identifications into an MS/MS peptide spectral library for all of our protein database search identification results. PMID:25712262

  10. YPED: an integrated bioinformatics suite and database for mass spectrometry-based proteomics research.

    PubMed

    Colangelo, Christopher M; Shifman, Mark; Cheung, Kei-Hoi; Stone, Kathryn L; Carriero, Nicholas J; Gulcicek, Erol E; Lam, TuKiet T; Wu, Terence; Bjornson, Robert D; Bruce, Can; Nairn, Angus C; Rinehart, Jesse; Miller, Perry L; Williams, Kenneth R

    2015-02-01

    We report a significantly-enhanced bioinformatics suite and database for proteomics research called Yale Protein Expression Database (YPED) that is used by investigators at more than 300 institutions worldwide. YPED meets the data management, archival, and analysis needs of a high-throughput mass spectrometry-based proteomics research ranging from a single laboratory, group of laboratories within and beyond an institution, to the entire proteomics community. The current version is a significant improvement over the first version in that it contains new modules for liquid chromatography-tandem mass spectrometry (LC-MS/MS) database search results, label and label-free quantitative proteomic analysis, and several scoring outputs for phosphopeptide site localization. In addition, we have added both peptide and protein comparative analysis tools to enable pairwise analysis of distinct peptides/proteins in each sample and of overlapping peptides/proteins between all samples in multiple datasets. We have also implemented a targeted proteomics module for automated multiple reaction monitoring (MRM)/selective reaction monitoring (SRM) assay development. We have linked YPED's database search results and both label-based and label-free fold-change analysis to the Skyline Panorama repository for online spectra visualization. In addition, we have built enhanced functionality to curate peptide identifications into an MS/MS peptide spectral library for all of our protein database search identification results. Copyright © 2015 The Authors. Production and hosting by Elsevier Ltd.. All rights reserved.

  11. Customized Resources | OSTI, US Dept of Energy Office of Scientific and Technical Information

    Science.gov Websites

  12. DOE Collections | OSTI, US Dept of Energy Office of Scientific and Technical Information

    Science.gov Websites

  13. Contact Us | OSTI, US Dept of Energy Office of Scientific and Technical Information

    Science.gov Websites

  14. OPUS: A Comprehensive Search Tool for Remote Sensing Observations of the Outer Planets. Now with Enhanced Geometric Metadata for Cassini and New Horizons Optical Remote Sensing Instruments.

    NASA Astrophysics Data System (ADS)

    Gordon, M. K.; Showalter, M. R.; Ballard, L.; Tiscareno, M.; French, R. S.; Olson, D.

    2017-06-01

    The PDS RMS Node hosts OPUS - an accurate, comprehensive search tool for spacecraft remote sensing observations. OPUS supports Cassini: CIRS, ISS, UVIS, VIMS; New Horizons: LORRI, MVIC; Galileo SSI; Voyager ISS; and Hubble: ACS, STIS, WFC3, WFPC2.

  15. Liverpool's Discovery: A University Library Applies a New Search Tool to Improve the User Experience

    ERIC Educational Resources Information Center

    Kenney, Brian

    2011-01-01

    This article features the University of Liverpool's arts and humanities library, which applies a new search tool to improve the user experience. In nearly every way imaginable, the Sydney Jones Library and the Harold Cohen Library--the university's two libraries that serve science, engineering, and medical students--support the lives of their…

  16. E-Portfolio, a Valuable Job Search Tool for College Students

    ERIC Educational Resources Information Center

    Yu, Ti

    2012-01-01

    Purpose: The purpose of this paper is to find answers to the following questions: How do employers think about e-portfolios? Do employers really see e-portfolios as a suitable hiring tool? Which factors in students' e-portfolios attract potential employers? Can e-portfolios be successfully used by students in their search for a job?…

  17. A knowledge based search tool for performance measures in health care systems.

    PubMed

    Beyan, Oya D; Baykal, Nazife

    2012-02-01

    Performance measurement is vital for improving health care systems. However, we are still far from having accepted performance measurement models. Researchers and developers are seeking comparable performance indicators. We developed an intelligent search tool to identify appropriate measures for specific requirements by matching diverse care settings. We reviewed the literature and analyzed 229 performance measurement studies published after 2000. These studies were evaluated with an original theoretical framework and stored in a database. A semantic network was designed for representing domain knowledge and supporting reasoning. We applied knowledge-based decision support techniques to cope with uncertainty problems. As a result, we designed a tool which simplifies the performance indicator search process and provides the most relevant indicators by employing knowledge-based systems.

  18. VizieR Online Data Catalog: James Clerk Maxwell Telescope Science Archive (CADC, 2003)

    NASA Astrophysics Data System (ADS)

    Canadian Astronomy Data Centre

    2018-01-01

    The JCMT Science Archive (JSA), a collaboration between the CADC and EOA, is the official distribution site for observational data obtained with the James Clerk Maxwell Telescope (JCMT) on Mauna Kea, Hawaii. The JSA search interface is provided by the CADC Search tool, which provides generic access to the complete set of telescopic data archived at the CADC. Help on the use of this tool is provided via tooltips. For additional information on instrument capabilities and data reduction, please consult the SCUBA-2 and ACSIS instrument pages provided on the JAC maintained JCMT pages. JCMT-specific help related to the use of the CADC AdvancedSearch tool is available from the JAC. (1 data file).

  19. PAPST, a User Friendly and Powerful Java Platform for ChIP-Seq Peak Co-Localization Analysis and Beyond.

    PubMed

    Bible, Paul W; Kanno, Yuka; Wei, Lai; Brooks, Stephen R; O'Shea, John J; Morasso, Maria I; Loganantharaj, Rasiah; Sun, Hong-Wei

    2015-01-01

    Comparative co-localization analysis of transcription factors (TFs) and epigenetic marks (EMs) in specific biological contexts is one of the most critical areas of ChIP-Seq data analysis beyond peak calling. Yet there is a significant lack of user-friendly and powerful tools geared towards co-localization analysis based exploratory research. Most tools currently used for co-localization analysis are command line only and require extensive installation procedures and Linux expertise. Online tools partially address the usability issues of command line tools, but slow response times and few customization features make them unsuitable for rapid data-driven interactive exploratory research. We have developed PAPST: Peak Assignment and Profile Search Tool, a user-friendly yet powerful platform with a unique design, which integrates both gene-centric and peak-centric co-localization analysis into a single package. Most of PAPST's functions can be completed in less than five seconds, allowing quick cycles of data-driven hypothesis generation and testing. With PAPST, a researcher with or without computational expertise can perform sophisticated co-localization pattern analysis of multiple TFs and EMs, either against all known genes or a set of genomic regions obtained from public repositories or prior analysis. PAPST is a versatile, efficient, and customizable tool for genome-wide data-driven exploratory research. Creatively used, PAPST can be quickly applied to any genomic data analysis that involves a comparison of two or more sets of genomic coordinate intervals, making it a powerful tool for a wide range of exploratory genomic research. We first present PAPST's general purpose features then apply it to several public ChIP-Seq data sets to demonstrate its rapid execution and potential for cutting-edge research with a case study in enhancer analysis. To our knowledge, PAPST is the first software of its kind to provide efficient and sophisticated post peak-calling ChIP-Seq data analysis as an easy-to-use interactive application. PAPST is available at https://github.com/paulbible/papst and is a public domain work.

  20. PAPST, a User Friendly and Powerful Java Platform for ChIP-Seq Peak Co-Localization Analysis and Beyond

    PubMed Central

    Bible, Paul W.; Kanno, Yuka; Wei, Lai; Brooks, Stephen R.; O’Shea, John J.; Morasso, Maria I.; Loganantharaj, Rasiah; Sun, Hong-Wei

    2015-01-01

    Comparative co-localization analysis of transcription factors (TFs) and epigenetic marks (EMs) in specific biological contexts is one of the most critical areas of ChIP-Seq data analysis beyond peak calling. Yet there is a significant lack of user-friendly and powerful tools geared towards co-localization analysis based exploratory research. Most tools currently used for co-localization analysis are command line only and require extensive installation procedures and Linux expertise. Online tools partially address the usability issues of command line tools, but slow response times and few customization features make them unsuitable for rapid data-driven interactive exploratory research. We have developed PAPST: Peak Assignment and Profile Search Tool, a user-friendly yet powerful platform with a unique design, which integrates both gene-centric and peak-centric co-localization analysis into a single package. Most of PAPST’s functions can be completed in less than five seconds, allowing quick cycles of data-driven hypothesis generation and testing. With PAPST, a researcher with or without computational expertise can perform sophisticated co-localization pattern analysis of multiple TFs and EMs, either against all known genes or a set of genomic regions obtained from public repositories or prior analysis. PAPST is a versatile, efficient, and customizable tool for genome-wide data-driven exploratory research. Creatively used, PAPST can be quickly applied to any genomic data analysis that involves a comparison of two or more sets of genomic coordinate intervals, making it a powerful tool for a wide range of exploratory genomic research. We first present PAPST’s general purpose features then apply it to several public ChIP-Seq data sets to demonstrate its rapid execution and potential for cutting-edge research with a case study in enhancer analysis. To our knowledge, PAPST is the first software of its kind to provide efficient and sophisticated post peak-calling ChIP-Seq data analysis as an easy-to-use interactive application. PAPST is available at https://github.com/paulbible/papst and is a public domain work. PMID:25970601
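
    At its core, peak co-localization of the kind PAPST performs is a comparison of genomic coordinate intervals: a transcription factor peak co-localizes with an epigenetic mark if their intervals overlap on the same chromosome. The sketch below shows that basic operation on a handful of invented peaks; PAPST itself layers gene-centric assignment, profile search and interactive analysis on top of it.

      from collections import defaultdict

      def overlaps(a, b):
          # Two (chrom, start, end) intervals overlap if they share a chromosome and intersect
          return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

      def count_colocalized(tf_peaks, mark_peaks):
          by_chrom = defaultdict(list)
          for p in mark_peaks:
              by_chrom[p[0]].append(p)
          return sum(any(overlaps(p, q) for q in by_chrom[p[0]]) for p in tf_peaks)

      tf_peaks = [("chr1", 100, 200), ("chr1", 500, 600), ("chr2", 50, 80)]    # invented TF peaks
      mark_peaks = [("chr1", 150, 250), ("chr2", 300, 400)]                    # invented mark peaks
      print(count_colocalized(tf_peaks, mark_peaks))   # 1 of the 3 TF peaks overlaps a mark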

  1. World Wide Web Search Engines: AltaVista and Yahoo.

    ERIC Educational Resources Information Center

    Machovec, George S., Ed.

    1996-01-01

    Examines the history, structure, and search capabilities of Internet search tools AltaVista and Yahoo. AltaVista provides relevance-ranked feedback on full-text searches. Yahoo indexes Web "citations" only but does organize information hierarchically into predefined categories. Yahoo has recently become a publicly held company and…

  2. Utilization of a radiology-centric search engine.

    PubMed

    Sharpe, Richard E; Sharpe, Megan; Siegel, Eliot; Siddiqui, Khan

    2010-04-01

    Internet-based search engines have become a significant component of medical practice. Physicians increasingly rely on information available from search engines as a means to improve patient care, provide better education, and enhance research. Specialized search engines have emerged to more efficiently meet the needs of physicians. Details about the ways in which radiologists utilize search engines have not been documented. The authors categorized every 25th search query in a radiology-centric vertical search engine by radiologic subspecialty, imaging modality, geographic location of access, time of day, use of abbreviations, misspellings, and search language. Musculoskeletal and neurologic imaging were the most frequently searched subspecialties. The least frequently searched were breast imaging, pediatric imaging, and nuclear medicine. Magnetic resonance imaging and computed tomography were the most frequently searched modalities. A majority of searches were initiated in North America, but all continents were represented. Searches occurred 24 h/day in converted local times, with a majority occurring during the normal business day. Misspellings and abbreviations were common. Almost all searches were performed in English. Search engine utilization trends are likely to mirror trends in diagnostic imaging in the region from which searches originate. Internet searching appears to function as a real-time clinical decision-making tool, a research tool, and an educational resource. A more thorough understanding of search utilization patterns can be obtained by analyzing phrases as actually entered as well as the geographic location and time of origination. This knowledge may contribute to the development of more efficient and personalized search engines.

  3. A Systematic Review of Physician Leadership and Emotional Intelligence

    PubMed Central

    Mintz, Laura Janine; Stoller, James K.

    2014-01-01

    Objective This review evaluates the current understanding of emotional intelligence (EI) and physician leadership, exploring key themes and areas for future research. Literature Search We searched the literature using PubMed, Google Scholar, and Business Source Complete for articles published between 1990 and 2012. Search terms included physician and leadership, emotional intelligence, organizational behavior, and organizational development. All abstracts were reviewed. Full articles were evaluated if they addressed the connection between EI and physician leadership. Articles were included if they focused on physicians or physicians-in-training and discussed interventions or recommendations. Appraisal and Synthesis We assessed articles for conceptual rigor, study design, and measurement quality. A thematic analysis categorized the main themes and findings of the articles. Results The search produced 3713 abstracts, of which 437 full articles were read and 144 were included in this review. Three themes were identified: (1) EI is broadly endorsed as a leadership development strategy across providers and settings; (2) models of EI and leadership development practices vary widely; and (3) EI is considered relevant throughout medical education and practice. Limitations of the literature were that most reports were expert opinion or observational and studies used several different tools for measuring EI. Conclusions EI is widely endorsed as a component of curricula for developing physician leaders. Research comparing practice models and measurement tools will critically advance understanding about how to develop and nurture EI to enhance leadership skills in physicians throughout their careers. PMID:24701306

  4. Assessment tools for the measurement of the self-efficacy of drug users: protocol for a systematic review

    PubMed Central

    Vasconcelos, Selene Cordeiro; Frazão, Iracema da Silva; Sougey, Everton Botelho; de Souza, Sandra Lopes; da Silva, Tatiana de Paula Santana; Lima, Murilo Duarte da Costa

    2018-01-01

    Introduction The abuse of alcohol and other drugs is a worldwide problem, the treatment of which poses a challenge to healthcare workers. Objective This study presents a proposal for a systematic review to analyse the psychometric properties of assessment tools developed to measure the self-efficacy of drug users with regard to resisting the urge to take drugs in high-risk situations. Methods and Analysis The guiding question was based on PICOS (Population Intervention Comparator Outcome Setting), and the report of the methods of review protocol was written in accordance with the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P). Searches will be performed in the PsycINFO, Cochrane, Pubmed, Web of Science, SCOPUS and CINAHL databases, followed by the use of the ‘snowball’ strategy. The inclusion criteria for the articles will be (1) assessment tool validation studies; (2) assessment tools developed to measure self-efficacy; (3) quantitative measures; (4) measures designed for use on adults; (5) data from self-reports of the participants; (6) studies involving a description of psychometric properties of the measures; and (7) studies that explain how the level of self-efficacy is scored. The search, selection and analysis will be performed by two independent reviewers. In cases of a divergence of opinion, a third reviewer will be consulted. The COSMIN checklist will be used for the appraisal of the methodological quality of the assessment tools and the certainty of the evidence in the articles (risk of bias) will be analysed using the GRADE (Grading of Recommendations Assessment, Development and Evaluation) approach. Ethics and dissemination This protocol does not require ethical approval. However, this protocol is part of the thesis entitled Drug-Taking Confidence Questionnaire for use in Brazil, presented for obtaining a doctorate in neuropsychiatry and behavioural sciences from the Federal University of Pernambuco, and has received approval from the human research ethics committee of the Federal University of Pernambuco (reference number: 1.179.162). The results will be disseminated to clinicians and researchers through peer-reviewed publications and conferences. PROSPERO registration number CRD42017068555. PMID:29540409

  5. Inconsistency in the items included in tools used in general health research and physical therapy to evaluate the methodological quality of randomized controlled trials: a descriptive analysis

    PubMed Central

    2013-01-01

    Background Assessing the risk of bias of randomized controlled trials (RCTs) is crucial to understand how biases affect treatment effect estimates. A number of tools have been developed to evaluate risk of bias of RCTs; however, it is unknown how these tools compare to each other in the items included. The main objective of this study was to describe which individual items are included in RCT quality tools used in general health and physical therapy (PT) research, and how these items compare to those of the Cochrane Risk of Bias (RoB) tool. Methods We used comprehensive literature searches and a systematic approach to identify tools that evaluated the methodological quality or risk of bias of RCTs in general health and PT research. We extracted individual items from all quality tools. We calculated the frequency of quality items used across tools and compared them to those in the RoB tool. Comparisons were made between general health and PT quality tools using Chi-squared tests. Results In addition to the RoB tool, 26 quality tools were identified, with 19 being used in general health and seven in PT research. The total number of quality items included in general health research tools was 130, compared with 48 items across PT tools and seven items in the RoB tool. The most frequently included items in general health research tools (14/19, 74%) were inclusion and exclusion criteria, and appropriate statistical analysis. In contrast, the most frequent items included in PT tools (86%, 6/7) were: baseline comparability, blinding of investigator/assessor, and use of intention-to-treat analysis. Key items of the RoB tool (sequence generation and allocation concealment) were included in 71% (5/7) of PT tools, and 63% (12/19) and 37% (7/19) of general health research tools, respectively. Conclusions There is extensive item variation across tools that evaluate the risk of bias of RCTs in health research. Results call for an in-depth analysis of items that should be used to assess risk of bias of RCTs. Further empirical evidence on the use of individual items and the psychometric properties of risk of bias tools is needed. PMID:24044807

  6. Inconsistency in the items included in tools used in general health research and physical therapy to evaluate the methodological quality of randomized controlled trials: a descriptive analysis.

    PubMed

    Armijo-Olivo, Susan; Fuentes, Jorge; Ospina, Maria; Saltaji, Humam; Hartling, Lisa

    2013-09-17

    Assessing the risk of bias of randomized controlled trials (RCTs) is crucial to understand how biases affect treatment effect estimates. A number of tools have been developed to evaluate risk of bias of RCTs; however, it is unknown how these tools compare to each other in the items included. The main objective of this study was to describe which individual items are included in RCT quality tools used in general health and physical therapy (PT) research, and how these items compare to those of the Cochrane Risk of Bias (RoB) tool. We used comprehensive literature searches and a systematic approach to identify tools that evaluated the methodological quality or risk of bias of RCTs in general health and PT research. We extracted individual items from all quality tools. We calculated the frequency of quality items used across tools and compared them to those in the RoB tool. Comparisons were made between general health and PT quality tools using Chi-squared tests. In addition to the RoB tool, 26 quality tools were identified, with 19 being used in general health and seven in PT research. The total number of quality items included in general health research tools was 130, compared with 48 items across PT tools and seven items in the RoB tool. The most frequently included items in general health research tools (14/19, 74%) were inclusion and exclusion criteria, and appropriate statistical analysis. In contrast, the most frequent items included in PT tools (86%, 6/7) were: baseline comparability, blinding of investigator/assessor, and use of intention-to-treat analysis. Key items of the RoB tool (sequence generation and allocation concealment) were included in 71% (5/7) of PT tools, and 63% (12/19) and 37% (7/19) of general health research tools, respectively. There is extensive item variation across tools that evaluate the risk of bias of RCTs in health research. Results call for an in-depth analysis of items that should be used to assess risk of bias of RCTs. Further empirical evidence on the use of individual items and the psychometric properties of risk of bias tools is needed.
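
    The Chi-squared comparisons described above test whether a given quality item is included at different rates in the two groups of tools. The sketch below applies scipy's contingency-table test to the allocation concealment counts quoted in the abstract (5 of 7 PT tools versus 7 of 19 general health tools); scipy is assumed to be available, and with counts this small the approximation is rough.

      from scipy.stats import chi2_contingency

      #            includes item   omits item
      table = [[7, 12],     # general health tools: 7 of 19 include allocation concealment
               [5, 2]]      # physical therapy tools: 5 of 7 include it
      chi2, p, dof, expected = chi2_contingency(table)
      print(f"chi2 = {chi2:.2f}, p = {p:.3f}")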

  7. The Transcriptome Analysis and Comparison Explorer--T-ACE: a platform-independent, graphical tool to process large RNAseq datasets of non-model organisms.

    PubMed

    Philipp, E E R; Kraemer, L; Mountfort, D; Schilhabel, M; Schreiber, S; Rosenstiel, P

    2012-03-15

    Next generation sequencing (NGS) technologies allow a rapid and cost-effective compilation of large RNA sequence datasets in model and non-model organisms. However, the storage and analysis of transcriptome information from different NGS platforms is still a significant bottleneck, leading to a delay in data dissemination and subsequent biological understanding. In particular, database interfaces with transcriptome analysis modules that go beyond mere read counts are missing. Here, we present the Transcriptome Analysis and Comparison Explorer (T-ACE), a tool designed for the organization and analysis of large sequence datasets, and especially suited for transcriptome projects of non-model organisms with little or no a priori sequence information. T-ACE offers a TCL-based interface, which accesses a PostgreSQL database via a PHP script. Within T-ACE, information belonging to single sequences or contigs, such as annotation or read coverage, is linked to the respective sequence and immediately accessible. Sequences and assigned information can be searched via keyword or BLAST search. Additionally, T-ACE provides within- and between-transcriptome analysis modules at the level of expression, GO terms, KEGG pathways and protein domains. Results are visualized and can be easily exported for external analysis. We developed T-ACE for laboratory environments that have only a limited amount of bioinformatics support, and for collaborative projects in which different partners work on the same dataset from different locations or platforms (Windows/Linux/MacOS). For laboratories with some experience in bioinformatics and programming, the low complexity of the database structure and open-source code provides a framework that can be customized according to the different needs of the user and transcriptome project.

  8. The PARIGA server for real time filtering and analysis of reciprocal BLAST results.

    PubMed

    Orsini, Massimiliano; Carcangiu, Simone; Cuccuru, Gianmauro; Uva, Paolo; Tramontano, Anna

    2013-01-01

    BLAST-based similarity searches are commonly used in several applications involving both nucleotide and protein sequences. These applications span from simple tasks such as mapping sequences against a database to more complex procedures such as clustering or annotation. When the amount of analysed data increases, manual inspection of BLAST results becomes a tedious procedure. Tools for parsing or filtering BLAST results for different purposes are then required. We describe here PARIGA (http://resources.bioinformatica.crs4.it/pariga/), a server that enables users to perform all-against-all BLAST searches on two sets of sequences selected by the user. Moreover, since it stores the two BLAST outputs in a database of Python-serialized objects, results can be filtered according to several parameters in real time, without re-running the process and without additional programming effort. Results can be interrogated by the user using logical operations, for example to retrieve cases where two queries match the same targets, where sequences from the two datasets are reciprocal best hits, or where a query matches a target in multiple regions. The PARIGA web server is designed to be a helpful tool for managing the results of sequence similarity searches. The design and implementation of the server render all operations fast and easy to use.
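
    One of the PARIGA filters mentioned above, reciprocal best hits, can be expressed in a few lines: a pair is kept when each sequence is the other's highest-scoring hit in the opposite BLAST run. The sketch below works on (query, target, bitscore) tuples standing in for parsed BLAST output; the identifiers and scores are invented.

      def best_hits(hits):
          # Keep the highest-scoring target for each query
          best = {}
          for query, target, score in hits:
              if query not in best or score > best[query][1]:
                  best[query] = (target, score)
          return {q: t for q, (t, _) in best.items()}

      a_vs_b = [("a1", "b1", 200.0), ("a1", "b2", 90.0), ("a2", "b2", 150.0)]   # invented hits A vs B
      b_vs_a = [("b1", "a1", 195.0), ("b2", "a3", 80.0)]                        # invented hits B vs A

      best_ab, best_ba = best_hits(a_vs_b), best_hits(b_vs_a)
      rbh = [(a, b) for a, b in best_ab.items() if best_ba.get(b) == a]
      print(rbh)   # [('a1', 'b1')]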

  9. Assessment Tools for Evaluation of Oral Feeding in Infants Less than Six Months Old

    PubMed Central

    Pados, Britt F.; Park, Jinhee; Estrem, Hayley; Awotwi, Araba

    2015-01-01

    Background Feeding difficulty is common in infants less than six months old. Identification of infants in need of specialized treatment is critical to ensure appropriate nutrition and feeding skill development. Valid and reliable assessment tools help clinicians objectively evaluate feeding. Purpose To identify and evaluate assessment tools available for clinical assessment of bottle- and breast-feeding in infants less than six months old. Methods/Search Strategy CINAHL, HaPI, PubMed, and Web of Science were searched for “infant feeding” and “assessment tool.” The literature (n=237) was reviewed for relevant assessment tools. A secondary search was conducted in CINAHL and PubMed for additional literature on identified tools. Findings/Results Eighteen assessment tools met inclusion criteria. Of these, seven were excluded because of limited available literature or because they were intended for use with a specific diagnosis or in research only. There are 11 assessment tools available for clinical practice. Only two of these were intended for bottle-feeding. All 11 indicated they were appropriate for use with breast-feeding. None of the available tools have adequate psychometric development and testing. Implications for Practice All of the tools should be used with caution. The Early Feeding Skills Assessment and Bristol Breastfeeding Assessment Tool had the most supportive psychometric development and testing. Implications for Research Feeding assessment tools need to be developed and tested to guide optimal clinical care of infants from birth through six months. A tool that assesses both bottle- and breast-feeding would allow for consistent assessment across feeding methods. PMID:26945280

  10. WHAM!: a web-based visualization suite for user-defined analysis of metagenomic shotgun sequencing data.

    PubMed

    Devlin, Joseph C; Battaglia, Thomas; Blaser, Martin J; Ruggles, Kelly V

    2018-06-25

    Exploration of large data sets, such as shotgun metagenomic sequence or expression data, by biomedical experts and medical professionals remains a major bottleneck in the scientific discovery process. Although tools for this purpose exist for 16S ribosomal RNA sequencing analysis, there is a growing but still insufficient number of user-friendly interactive visualization workflows for easy data exploration and figure generation. The development of such platforms for this purpose is necessary to accelerate and streamline microbiome laboratory research. We developed the Workflow Hub for Automated Metagenomic Exploration (WHAM!) as a web-based interactive tool capable of user-directed data visualization and statistical analysis of annotated shotgun metagenomic and metatranscriptomic data sets. WHAM! includes exploratory and hypothesis-based gene and taxa search modules for visualizing differences in microbial taxa and gene family expression across experimental groups, and for creating publication quality figures without the need for command line interface or in-house bioinformatics. WHAM! is an interactive and customizable tool for downstream metagenomic and metatranscriptomic analysis providing a user-friendly interface allowing for easy data exploration by microbiome and ecological experts to facilitate discovery in multi-dimensional and large-scale data sets.

  11. The Gender Analysis Tools Applied in Natural Disasters Management: A Systematic Literature Review

    PubMed Central

    Sohrabizadeh, Sanaz; Tourani, Sogand; Khankeh, Hamid Reza

    2014-01-01

    Background: Although natural disasters have caused considerable damage around the world, and gender analysis can improve community disaster preparedness or mitigation, there is little research about the gendered analytical tools and methods in communities exposed to natural disasters and hazards. These tools evaluate gender vulnerability and capacity in pre-disaster and post-disaster phases of the disaster management cycle. Objectives: To identify the analytical gender tools and their strengths and limitations, as well as to determine gender analysis studies that have emphasized the importance of using gender analysis in disasters. Methods: The literature search was conducted in June 2013 using PubMed, Web of Sciences, ProQuest Research Library, World Health Organization Library, and the Gender and Disaster Network (GDN) archive. All articles, guidelines, fact sheets and other materials that provided an analytical framework for a gender analysis approach in disasters were included, while non-English documents as well as gender studies outside the disaster area were excluded. Analysis of the included studies was done separately by descriptive and thematic analyses. Results: A total of 207 documents were retrieved, of which only nine references were included. Of these, 45% were in the form of checklists, 33% were case study reports, and the remaining 22% were articles. All selected papers were published within the period 1994-2012. Conclusions: A focus on women’s vulnerability in the related research and the lack of valid and reliable gender analysis tools were considerable issues identified by the literature review. Although non-English documents with English abstracts were considered, the possible exclusion of non-English documents was found to be a limitation of this study. PMID:24678441

  12. Search optimization of named entities from twitter streams

    NASA Astrophysics Data System (ADS)

    Fazeel, K. Mohammed; Hassan Mottur, Simama; Norman, Jasmine; Mangayarkarasi, R.

    2017-11-01

    With the enormous number of tweets posted, people often find it difficult to get exact information about them. One approach to obtaining such information is to search via Google, but no accurate tool has been developed for search optimization or for retrieving information about tweets. The system described here therefore provides search optimization together with functionality for retrieving information about tweets. A further problem is that tweets often contain grammatical errors, misspellings, non-standard abbreviations, and meaningless capitalization; these problems can be addressed by the use of this tool. Considerable time can be saved and, through efficient search optimization, information about particular tweets can be obtained.

  13. 75 FR 23306 - Southern Nuclear Operating Company, et al.: Supplementary Notice of Hearing and Opportunity To...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-03

    ...'' field when using either the Web-based search (advanced search) engine or the ADAMS FIND tool in Citrix... should enter ``05200011'' in the ``Docket Number'' field in the web-based search (advanced search) engine... ML100740441. To search for documents in ADAMS using Vogtle Units 3 and 4 COL application docket numbers, 52...

  14. Optimal design of groundwater remediation system using a probabilistic multi-objective fast harmony search algorithm under uncertainty

    NASA Astrophysics Data System (ADS)

    Luo, Qiankun; Wu, Jianfeng; Yang, Yun; Qian, Jiazhong; Wu, Jichun

    2014-11-01

    This study develops a new probabilistic multi-objective fast harmony search algorithm (PMOFHS) for optimal design of groundwater remediation systems under uncertainty associated with the hydraulic conductivity (K) of aquifers. The PMOFHS integrates the previously developed deterministic multi-objective optimization method, namely multi-objective fast harmony search algorithm (MOFHS) with a probabilistic sorting technique to search for Pareto-optimal solutions to multi-objective optimization problems in a noisy hydrogeological environment arising from insufficient K data. The PMOFHS is then coupled with the commonly used flow and transport codes, MODFLOW and MT3DMS, to identify the optimal design of groundwater remediation systems for a two-dimensional hypothetical test problem and a three-dimensional Indiana field application involving two objectives: (i) minimization of the total remediation cost through the engineering planning horizon, and (ii) minimization of the mass remaining in the aquifer at the end of the operational period, whereby the pump-and-treat (PAT) technology is used to clean up contaminated groundwater. Also, Monte Carlo (MC) analysis is employed to evaluate the effectiveness of the proposed methodology. Comprehensive analysis indicates that the proposed PMOFHS can find Pareto-optimal solutions with low variability and high reliability and is a potentially effective tool for optimizing multi-objective groundwater remediation problems under uncertainty.
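
    The abstract does not spell out the PMOFHS internals, so the sketch below only illustrates the basic harmony-search mechanics (a harmony memory, memory consideration, pitch adjustment and random selection) that probabilistic multi-objective variants such as PMOFHS build on; the single toy objective, the bounds and all parameter values are illustrative assumptions, not values from the study.

    # Minimal single-objective harmony search; only the core mechanics are shown.
    import random

    def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
        memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
        scores = [objective(h) for h in memory]
        for _ in range(iters):
            new = []
            for d, (lo, hi) in enumerate(bounds):
                if random.random() < hmcr:                 # memory consideration
                    value = random.choice(memory)[d]
                    if random.random() < par:              # pitch adjustment
                        value += random.uniform(-bw, bw) * (hi - lo)
                else:                                      # random selection
                    value = random.uniform(lo, hi)
                new.append(min(max(value, lo), hi))
            new_score = objective(new)
            worst = max(range(hms), key=lambda i: scores[i])
            if new_score < scores[worst]:                  # replace the worst harmony
                memory[worst], scores[worst] = new, new_score
        best = min(range(hms), key=lambda i: scores[i])
        return memory[best], scores[best]

    # Toy usage: minimise the 2-D sphere function.
    print(harmony_search(lambda x: sum(v * v for v in x), [(-5, 5), (-5, 5)]))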

  15. The search and selection for primary studies in systematic reviews published in dental journals indexed in MEDLINE was not fully reproducible.

    PubMed

    Faggion, Clovis Mariano; Huivin, Raquel; Aranda, Luisiana; Pandis, Nikolaos; Alarcon, Marco

    2018-06-01

    To evaluate whether the reporting of search strategies and the primary study selection process in dental systematic reviews is reproducible. A survey of systematic reviews published in MEDLINE-indexed dental journals from June 2015 to June 2016 was conducted. Study selection was performed independently by two authors, and the reproducibility of the selection process was assessed using a tool consisting of 12 criteria. Regression analyses were implemented to evaluate any associations between degrees of reporting (measured by the number of items positively answered) and journal impact factor (IF), presence of meta-analysis, and number of citations of the systematic review in Google Scholar. Five hundred and thirty systematic reviews were identified. Following our 12 criteria, none of the systematic reviews had complete reporting of the search strategies and selection process. Eight (1.5%) systematic reviews reported the list of excluded articles (with reasons for exclusion) after title and abstract assessment. Systematic reviews with more positive answers to the criteria were significantly associated with higher journal IF, number of citations, and inclusion of meta-analysis. Search strategies and primary study selection process in systematic reviews published in MEDLINE-indexed dental journals may not be fully reproducible. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. GenderMedDB: an interactive database of sex and gender-specific medical literature.

    PubMed

    Oertelt-Prigione, Sabine; Gohlke, Björn-Oliver; Dunkel, Mathias; Preissner, Robert; Regitz-Zagrosek, Vera

    2014-01-01

    Searches for sex and gender-specific publications are complicated by the absence of a specific algorithm within search engines and by the lack of adequate archives to collect the retrieved results. We previously addressed this issue by initiating the first systematic archive of medical literature containing sex and/or gender-specific analyses. This initial collection has now been greatly enlarged and re-organized as a free user-friendly database with multiple functions: GenderMedDB (http://gendermeddb.charite.de). GenderMedDB retrieves the included publications from the PubMed database. Manuscripts containing sex and/or gender-specific analysis are continuously screened and the relevant findings organized systematically into disciplines and diseases. Publications are furthermore classified by research type, subject and participant numbers. More than 11,000 abstracts are currently included in the database, after screening more than 40,000 publications. The main functions of the database include searches by publication data or content analysis based on pre-defined classifications. In addition, registrants are enabled to upload relevant publications, access descriptive publication statistics and interact in an open user forum. Overall, GenderMedDB offers the advantages of a discipline-specific search engine as well as the functions of a participative tool for the gender medicine community.

  17. A neotropical Miocene pollen database employing image-based search and semantic modeling

    PubMed Central

    Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W.; Jaramillo, Carlos; Shyu, Chi-Ren

    2014-01-01

    • Premise of the study: Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Methods: Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Results: Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Discussion: Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery. PMID:25202648

  18. Protein 3D Structure and Electron Microscopy Map Retrieval Using 3D-SURFER2.0 and EM-SURFER.

    PubMed

    Han, Xusi; Wei, Qing; Kihara, Daisuke

    2017-12-08

    With the rapid growth in the number of solved protein structures stored in the Protein Data Bank (PDB) and the Electron Microscopy Data Bank (EMDB), it is essential to develop tools to perform real-time structure similarity searches against the entire structure database. Since conventional structure alignment methods need to sample different orientations of proteins in the three-dimensional space, they are time consuming and unsuitable for rapid, real-time database searches. To this end, we have developed 3D-SURFER and EM-SURFER, which utilize 3D Zernike descriptors (3DZD) to conduct high-throughput protein structure comparison, visualization, and analysis. Taking an atomic structure or an electron microscopy map of a protein or a protein complex as input, the 3DZD of a query protein is computed and compared with the 3DZD of all other proteins in PDB or EMDB. In addition, local geometrical characteristics of a query protein can be analyzed using VisGrid and LIGSITE CSC in 3D-SURFER. This article describes how to use 3D-SURFER and EM-SURFER to carry out protein surface shape similarity searches, local geometric feature analysis, and interpretation of the search results. © 2017 by John Wiley & Sons, Inc. Copyright © 2017 John Wiley & Sons, Inc.
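
    The real-time searches described above are possible because each structure or map is reduced to a fixed-length 3D Zernike descriptor, so ranking a database against a query becomes a vector-distance computation. The sketch below shows only that ranking step, assuming precomputed descriptor vectors; the descriptor computation itself is not shown, and the vector length and entry names are illustrative.

    # Ranking step of a descriptor-based structure search: entries are ordered by
    # Euclidean distance between fixed-length descriptor vectors. Vectors are
    # random placeholders standing in for real 3D Zernike descriptors.
    import numpy as np

    def rank_by_descriptor(query_desc, database, top_n=5):
        """database: {structure_id: 1-D numpy descriptor array} -> top_n closest."""
        distances = {
            entry_id: float(np.linalg.norm(query_desc - desc))
            for entry_id, desc in database.items()
        }
        return sorted(distances.items(), key=lambda item: item[1])[:top_n]

    rng = np.random.default_rng(0)
    database = {f"entry_{i}": rng.random(121) for i in range(100)}
    print(rank_by_descriptor(rng.random(121), database))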

  19. RAG-3D: A search tool for RNA 3D substructures

    DOE PAGES

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; ...

    2015-08-24

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.

  20. RAG-3D: a search tool for RNA 3D substructures

    PubMed Central

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-01-01

    To address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding. PMID:26304547

  1. RAG-3D: A search tool for RNA 3D substructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef

    In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.
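
    As a toy illustration of the graph-based idea behind RAG-3D (secondary structure elements as vertices, their connectivity as edges), the sketch below compares structures through a deliberately crude graph fingerprint, the sorted degree sequence. This is a stand-in for the concept only, not RAG-3D's subgraph catalog or matching algorithm, and the example element graphs are hypothetical.

    # Compare RNA architectures as graphs of secondary structure elements using a
    # crude fingerprint (sorted degree sequence). Example graphs are hypothetical.
    import networkx as nx

    def degree_fingerprint(edges):
        graph = nx.Graph()
        graph.add_edges_from(edges)
        return sorted(dict(graph.degree()).values())

    query     = [("stem1", "loop1"), ("loop1", "stem2"), ("stem2", "loop2")]
    candidate = [("h1", "j1"), ("j1", "h2"), ("h2", "l1")]
    other     = [("h1", "j1"), ("j1", "h2"), ("j1", "h3")]

    print(degree_fingerprint(query) == degree_fingerprint(candidate))  # True: same linear topology
    print(degree_fingerprint(query) == degree_fingerprint(other))      # False: branched topology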

  2. Google Scholar as replacement for systematic literature searches: good relative recall and precision are not enough.

    PubMed

    Boeker, Martin; Vach, Werner; Motschall, Edith

    2013-10-26

    Recent research indicates a high recall in Google Scholar searches for systematic reviews. These reports raised high expectations of Google Scholar as a unified and easy-to-use search interface. However, studies on the coverage of Google Scholar rarely used the search interface in a realistic approach but instead merely checked for the existence of gold standard references. In addition, the severe limitations of the Google Scholar search interface must be taken into consideration when comparing with professional literature retrieval tools. The objectives of this work are to measure the relative recall and precision of searches with Google Scholar under conditions which are derived from structured search procedures conventional in scientific literature retrieval; and to provide an overview of current advantages and disadvantages of the Google Scholar search interface in scientific literature retrieval. General and MEDLINE-specific search strategies were retrieved from 14 Cochrane systematic reviews. Cochrane systematic review search strategies were translated into Google Scholar search expressions as faithfully as possible, taking the original search semantics into consideration. The references of the included studies from the Cochrane reviews were checked for their inclusion in the result sets of the Google Scholar searches. Relative recall and precision were calculated. We investigated Cochrane reviews with a number of included references between 11 and 70, with a total of 396 references. The Google Scholar searches resulted in result sets of between 4,320 and 67,800 hits, with a total of 291,190 hits. The relative recall of the Google Scholar searches had a minimum of 76.2% and a maximum of 100% (7 searches). The precision of the Google Scholar searches had a minimum of 0.05% and a maximum of 0.92%. The overall relative recall for all searches was 92.9%, and the overall precision was 0.13%. The reported relative recall must be interpreted with care. It is a quality indicator of Google Scholar confined to an experimental setting which is unavailable in systematic retrieval due to the severe limitations of the Google Scholar search interface. Currently, Google Scholar does not provide necessary elements for systematic scientific literature retrieval such as tools for incremental query optimization, export of a large number of references, a visual search builder or a history function. Google Scholar is not ready as a professional searching tool for tasks where structured retrieval methodology is necessary.
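
    As a hedged arithmetic illustration of the overall figures quoted above (the count of 368 retrieved references is inferred from the rounded percentages rather than stated in the abstract):

    \text{relative recall} = \frac{\text{gold-standard references retrieved}}{\text{gold-standard references sought}} \approx \frac{368}{396} \approx 92.9\%
    \text{precision} = \frac{\text{gold-standard references retrieved}}{\text{total hits}} \approx \frac{368}{291\,190} \approx 0.13\%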

  3. A Powerful, Cost Effective, Web Based Engineering Solution Supporting Conjunction Detection and Visual Analysis

    NASA Astrophysics Data System (ADS)

    Novak, Daniel M.; Biamonti, Davide; Gross, Jeremy; Milnes, Martin

    2013-08-01

    An innovative and visually appealing tool is presented for efficient all-vs-all conjunction analysis on a large catalogue of objects. The conjunction detection uses a nearest neighbour search algorithm, based on spatial binning and identification of pairs of objects in adjacent bins. This results in the fastest all-vs-all filtering the authors are aware of. The tool is constructed on a server-client architecture, where the server broadcasts to the client the conjunction data and ephemerides, while the client supports the user interface through a modern browser, without a plug-in. In order to make the tool flexible and maintainable, Java software technologies were used on the server side, including Spring, Camel, ActiveMQ and CometD. The user interface and visualisation are based on the latest web technologies: HTML5, WebGL, THREE.js. Importance has been given to the ergonomics and visual appeal of the software. In fact, certain design concepts have been borrowed from the gaming industry.
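
    The nearest-neighbour filtering described above can be reduced to a toy spatial-binning pass: positions are hashed into a coarse 3-D grid and only objects in the same or adjacent bins are distance-tested, avoiding the quadratic all-vs-all comparison. The sketch below shows the general technique under stated assumptions (bin edge equal to the distance threshold, illustrative sample points); it is not the tool's server code.

    # Toy all-vs-all conjunction filter via spatial binning: only objects sharing a
    # bin or sitting in adjacent bins are distance-tested. Data are illustrative.
    from collections import defaultdict
    from itertools import product
    import math

    def close_pairs(positions, threshold):
        """positions: {object_id: (x, y, z)}; return pairs closer than threshold."""
        cell = threshold                       # bin edge >= threshold, so no pair is missed
        bins = defaultdict(list)
        for obj, (x, y, z) in positions.items():
            bins[(int(x // cell), int(y // cell), int(z // cell))].append(obj)
        pairs = set()
        for (i, j, k), members in bins.items():
            neighbours = []
            for di, dj, dk in product((-1, 0, 1), repeat=3):
                neighbours.extend(bins.get((i + di, j + dj, k + dk), []))
            for a in members:
                for b in neighbours:
                    if a < b and math.dist(positions[a], positions[b]) < threshold:
                        pairs.add((a, b))
        return pairs

    print(close_pairs({"A": (0, 0, 0), "B": (0.5, 0.2, 0), "C": (10, 10, 10)}, threshold=1.0))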

  4. Posterior Tibial Tendon Dysfunction (PTTD)

    MedlinePlus

    ... treatment, surgery may be required. For some advanced cases, surgery may be the only option. Your foot and ankle surgeon will determine the best approach for you.

  5. Systematic review of fall risk screening tools for older patients in acute hospitals.

    PubMed

    Matarese, Maria; Ivziku, Dhurata; Bartolozzi, Francesco; Piredda, Michela; De Marinis, Maria Grazia

    2015-06-01

    To determine the most accurate fall risk screening tools for predicting falls among patients aged 65 years or older admitted to acute care hospitals. Falls represent a serious problem in older inpatients due to the potential physical, social, psychological and economic consequences. Older inpatients present with risk factors associated with age-related physiological and psychological changes as well as multiple morbidities. Thus, fall risk screening tools for older adults should include these specific risk factors. There are no published recommendations addressing what tools are appropriate for older hospitalized adults. Systematic review. MEDLINE, CINAHL and Cochrane electronic databases were searched between January 1981-April 2013. Only prospective validation studies reporting sensitivity and specificity values were included. Recommendations of the Cochrane Handbook of Diagnostic Test Accuracy Reviews have been followed. Three fall risk assessment tools were evaluated in seven articles. Due to the limited number of studies, meta-analysis was carried out only for the STRATIFY and Hendrich Fall Risk Model II. In the combined analysis, the Hendrich Fall Risk Model II demonstrated higher sensitivity than STRATIFY, while the STRATIFY showed higher specificity. In both tools, the Youden index showed low prognostic accuracy. The identified tools do not demonstrate predictive values as high as needed for identifying older inpatients at risk for falls. For this reason, no tool can be recommended for fall detection. More research is needed to evaluate fall risk screening tools for older inpatients. © 2014 John Wiley & Sons Ltd.
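
    The prognostic-accuracy statement above rests on the Youden index, which collapses sensitivity and specificity into a single value (standard definition, shown with hypothetical numbers rather than values from the review):

    J = \text{sensitivity} + \text{specificity} - 1

    so a hypothetical screening tool with sensitivity 0.85 and specificity 0.45 scores J = 0.30, i.e. low prognostic accuracy despite the apparently high sensitivity.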

  6. EGenBio: A Data Management System for Evolutionary Genomics and Biodiversity

    PubMed Central

    Nahum, Laila A; Reynolds, Matthew T; Wang, Zhengyuan O; Faith, Jeremiah J; Jonna, Rahul; Jiang, Zhi J; Meyer, Thomas J; Pollock, David D

    2006-01-01

    Background Evolutionary genomics requires management and filtering of large numbers of diverse genomic sequences for accurate analysis and inference on evolutionary processes of genomic and functional change. We developed Evolutionary Genomics and Biodiversity (EGenBio; ) to begin to address this. Description EGenBio is a system for manipulation and filtering of large numbers of sequences, integrating curated sequence alignments and phylogenetic trees, managing evolutionary analyses, and visualizing their output. EGenBio is organized into three conceptual divisions, Evolution, Genomics, and Biodiversity. The Genomics division includes tools for selecting pre-aligned sequences from different genes and species, and for modifying and filtering these alignments for further analysis. Species searches are handled through queries that can be modified based on a tree-based navigation system and saved. The Biodiversity division contains tools for analyzing individual sequences or sequence alignments, whereas the Evolution division contains tools involving phylogenetic trees. Alignments are annotated with analytical results and modification history using our PRAED format. A miscellaneous Tools section and Help framework are also available. EGenBio was developed around our comparative genomic research and a prototype database of mtDNA genomes. It utilizes MySQL-relational databases and dynamic page generation, and calls numerous custom programs. Conclusion EGenBio was designed to serve as a platform for tools and resources to ease combined analysis in evolution, genomics, and biodiversity. PMID:17118150

  7. The VIMS Data Explorer: A tool for locating and visualizing hyperspectral data

    NASA Astrophysics Data System (ADS)

    Pasek, V. D.; Lytle, D. M.; Brown, R. H.

    2016-12-01

    Since successfully entering Saturn's orbit during Summer 2004 there have been over 300,000 hyperspectral data cubes returned from the visible and infrared mapping spectrometer (VIMS) instrument onboard the Cassini spacecraft. The VIMS Science Investigation is a multidisciplinary effort that uses these hyperspectral data to study a variety of scientific problems, including surface characterizations of the icy satellites and atmospheric analyses of Titan and Saturn. Such investigations may need to identify thousands of exemplary data cubes for analysis and can span many years in scope. Here we describe the VIMS data explorer (VDE) application, currently employed by the VIMS Investigation to search for and visualize data. The VDE application facilitates real-time inspection of the entire VIMS hyperspectral dataset, the construction of in situ maps, and markers to save and recall work. The application relies on two databases to provide comprehensive search capabilities. The first database contains metadata for every cube. These metadata searches are used to identify records based on parameters such as target, observation name, or date taken; they fall short in utility for some investigations. The cube metadata contains no target geometry information. Through the introduction of a post-calibration pixel database, the VDE tool enables users to greatly expand their searching capabilities. Users can select favorable cubes for further processing into 2-D and 3-D interactive maps, aiding in the data interpretation and selection process. The VDE application enables efficient search, visualization, and access to VIMS hyperspectral data. It is simple to use, requiring nothing more than a browser for access. Hyperspectral bands can be individually selected or combined to create real-time color images, a technique commonly employed by hyperspectral researchers to highlight compositional differences.

  8. PhAST: pharmacophore alignment search tool.

    PubMed

    Hähnke, Volker; Hofmann, Bettina; Grgat, Tomislav; Proschak, Ewgenij; Steinhilber, Dieter; Schneider, Gisbert

    2009-04-15

    We present a ligand-based virtual screening technique (PhAST) for rapid hit and lead structure searching in large compound databases. Molecules are represented as strings encoding the distribution of pharmacophoric features on the molecular graph. In contrast to other text-based methods using SMILES strings, we introduce a new form of text representation that describes the pharmacophore of molecules. This string representation opens the opportunity for revealing functional similarity between molecules by sequence alignment techniques in analogy to homology searching in protein or nucleic acid sequence databases. We favorably compared PhAST with other current ligand-based virtual screening methods in a retrospective analysis using the BEDROC metric. In a prospective application, PhAST identified two novel inhibitors of 5-lipoxygenase product formation with minimal experimental effort. This outcome demonstrates the applicability of PhAST to drug discovery projects and provides an innovative concept of sequence-based compound screening with substantial scaffold hopping potential. 2008 Wiley Periodicals, Inc.
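
    The core idea above is that molecules become strings over a pharmacophoric feature alphabet and are compared by sequence alignment. The sketch below scores two such strings with a minimal global (Needleman-Wunsch) alignment; the feature alphabet, the scoring scheme and the example strings are illustrative assumptions, not PhAST's actual encoding or parameters.

    # Minimal global alignment (Needleman-Wunsch) score over pharmacophore feature
    # strings. Alphabet, scores and example strings are illustrative only.
    def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
        rows, cols = len(a) + 1, len(b) + 1
        score = [[0] * cols for _ in range(rows)]
        for i in range(rows):
            score[i][0] = i * gap
        for j in range(cols):
            score[0][j] = j * gap
        for i in range(1, rows):
            for j in range(1, cols):
                diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
        return score[-1][-1]

    # Toy comparison: similar feature strings align with a higher score.
    print(needleman_wunsch("ADRRL", "ADRL"))   # higher score
    print(needleman_wunsch("ADRRL", "LLLLD"))  # lower score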

  9. MPA Portable: A Stand-Alone Software Package for Analyzing Metaproteome Samples on the Go.

    PubMed

    Muth, Thilo; Kohrs, Fabian; Heyer, Robert; Benndorf, Dirk; Rapp, Erdmann; Reichl, Udo; Martens, Lennart; Renard, Bernhard Y

    2018-01-02

    Metaproteomics, the mass spectrometry-based analysis of proteins from multispecies samples, faces severe challenges concerning data analysis and results interpretation. To overcome these shortcomings, we here introduce the MetaProteomeAnalyzer (MPA) Portable software. In contrast to the original server-based MPA application, this newly developed tool no longer requires computational expertise for installation and is now independent of any relational database system. In addition, MPA Portable now supports state-of-the-art database search engines and a convenient command line interface for high-performance data processing tasks. While search engine results can easily be combined to increase the protein identification yield, an additional two-step workflow is implemented to provide sufficient analysis resolution for further postprocessing steps, such as protein grouping as well as taxonomic and functional annotation. Our new application has been developed with a focus on intuitive usability, adherence to data standards, and adaptation to Web-based workflow platforms. The open source software package can be found at https://github.com/compomics/meta-proteome-analyzer .

  10. Exploring Google to Enhance Reference Services

    ERIC Educational Resources Information Center

    Jia, Peijun

    2011-01-01

    Google is currently recognized as the world's most powerful search engine. Google is so powerful and intuitive that one does not need to possess many skills to use it. However, Google is more than just simple search. For those who have special search skills and know Google's superior search features, it becomes an extraordinary tool. To understand…

  11. The library as a reference tool: online catalogs

    USGS Publications Warehouse

    Stark, M.

    1991-01-01

    Online catalogs are computerized listings of materials in a particular library or group of libraries. General characteristics of online catalogs include ability for searching interactively and for locating descriptions of books, maps, and reports on regional or topical geology. Suggestions for searching, evaluating results, modifying searches, and limitations of searching are presented. -Author

  12. Taming the Information Jungle with WWW Search Engines.

    ERIC Educational Resources Information Center

    Repman, Judi; And Others

    1997-01-01

    Because searching the Web with different engines often produces different results, the best strategy is to learn how each engine works. Discusses comparing search engines; qualities to consider (ease of use, relevance of hits, and speed); and six of the most popular search tools (Yahoo, Magellan, InfoSeek, Alta Vista, Lycos, and Excite). Lists…

  13. Search Engines for Tomorrow's Scholars, Part Two

    ERIC Educational Resources Information Center

    Fagan, Jody Condit

    2012-01-01

    This two-part article considers how well some of today's search tools support scholars' work. The first part of the article reviewed Google Scholar and Microsoft Academic Search using a modified version of Carole L. Palmer, Lauren C. Teffeau, and Carrie M. Pirmann's framework (2009). Microsoft Academic Search is a strong contender when…

  14. Data Discovery of Big and Diverse Climate Change Datasets - Options, Practices and Challenges

    NASA Astrophysics Data System (ADS)

    Palanisamy, G.; Boden, T.; McCord, R. A.; Frame, M. T.

    2013-12-01

    Developing data search tools is a very common, but often confusing, task for most of the data intensive scientific projects. These search interfaces need to be continually improved to handle the ever-increasing diversity and volume of data collections. There are many aspects which determine the type of search tool a project needs to provide to their user community. These include: number of datasets, amount and consistency of discovery metadata, ancillary information such as availability of quality information and provenance, and availability of similar datasets from other distributed sources. The Environmental Data Science and Systems (EDSS) group within the Environmental Science Division at the Oak Ridge National Laboratory has a long history of successfully managing diverse and big observational datasets for various scientific programs via various data centers such as DOE's Atmospheric Radiation Measurement Program (ARM), DOE's Carbon Dioxide Information and Analysis Center (CDIAC), USGS's Core Science Analytics and Synthesis (CSAS) metadata Clearinghouse and NASA's Distributed Active Archive Center (ORNL DAAC). This talk will showcase some of the recent developments for improving data discovery within these centers. The DOE ARM program recently developed a data discovery tool which allows users to search and discover over 4000 observational datasets. These datasets are key to the research efforts related to global climate change. The ARM discovery tool features many new functions such as filtered and faceted search logic, multi-pass data selection, filtering data based on data quality, graphical views of data quality and availability, direct access to data quality reports, and data plots. The ARM Archive also provides discovery metadata to other broader metadata clearinghouses such as ESGF, IASOA, and GOS. In addition to the new interface, ARM is also currently working on providing DOI metadata records to publishers such as Thomson Reuters and Elsevier. The ARM program also provides a standards-based online metadata editor (OME) for PIs to submit their data to the ARM Data Archive. The USGS CSAS metadata Clearinghouse aggregates metadata records from several USGS projects and other partner organizations. The Clearinghouse allows users to search and discover over 100,000 biological and ecological datasets from a single web portal. The Clearinghouse also enables some new data discovery functions such as enhanced geo-spatial searches based on land and ocean classifications, metadata completeness rankings, data linkage via digital object identifiers (DOIs), and semantically enhanced keyword searches. The Clearinghouse is also currently working on a dashboard which allows data providers to view statistics such as the number of their records accessed via the Clearinghouse and the most popular keywords, as well as a metadata quality report and a DOI creation service. The author will also present how these capabilities are currently reused by recent and upcoming data centers such as DOE's NGEE-Arctic project. References: [1] Devarakonda, R., Palanisamy, G., Wilson, B. E., & Green, J. M. (2010). Mercury: reusable metadata management, data discovery and access system. Earth Science Informatics, 3(1-2), 87-94. [2] Devarakonda, R., Shrestha, B., Palanisamy, G., Hook, L., Killeffer, T., Krassovski, M., ... & Frame, M. (2014, October). OME: Tool for generating and managing metadata to handle BigData. In BigData Conference (pp. 8-10).

  15. IDL Object Oriented Software for Hinode/XRT Image Analysis

    NASA Astrophysics Data System (ADS)

    Higgins, P. A.; Gallagher, P. T.

    2008-09-01

    We have developed a set of object oriented IDL routines that enable users to search, download and analyse images from the X-Ray Telescope (XRT) on-board Hinode. In this paper, we give specific examples of how the object can be used and how multi-instrument data analysis can be performed. The XRT object is a highly versatile and powerful IDL object, which will prove to be a useful tool for solar researchers. This software utilizes the generic Framework object available within the GEN branch of SolarSoft.

  16. VESPA: Software to Facilitate Genomic Annotation of Prokaryotic Organisms Through Integration of Proteomic and Transcriptomic Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, Elena S.; McCue, Lee Ann; Rutledge, Alexandra C.

    2012-04-25

    Visual Exploration and Statistics to Promote Annotation (VESPA) is an interactive visual analysis software tool that facilitates the discovery of structural mis-annotations in prokaryotic genomes. VESPA integrates high-throughput peptide-centric proteomics data and oligo-centric or RNA-Seq transcriptomics data into a genomic context. The data may be interrogated via visual analysis across multiple levels of genomic resolution, linked searches, exports and interaction with BLAST to rapidly identify location of interest within the genome and evaluate potential mis-annotations.

  17. Project management tool

    NASA Technical Reports Server (NTRS)

    Maluf, David A. (Inventor); Bell, David G. (Inventor); Gurram, Mohana M. (Inventor); Gawdiak, Yuri O. (Inventor)

    2009-01-01

    A system for managing a project that includes multiple tasks and a plurality of workers. Input information includes characterizations based upon a human model, a team model and a product model. Periodic reports, such as a monthly report, a task plan report, a budget report and a risk management report, are generated and made available for display or further analysis. An extensible database allows searching for information based upon context and upon content.

  18. Shared biomarkers between female diastolic heart failure and pre‐eclampsia: a systematic review and meta‐analysis

    PubMed Central

    Bokslag, Anouk; Maas, Angela H.E.M.; Franx, Arie; Paulus, Walter J.; de Groot, Christianne J.M.

    2017-01-01

    Abstract Evidence accumulates for associations between hypertensive pregnancy disorders and increased cardiovascular risk later. The main goal of this study was to explore shared biomarkers representing common pathogenic pathways between heart failure with preserved ejection fraction (HFpEF) and pre‐eclampsia, where these biomarkers might be potentially eligible for cardiovascular risk stratification in women after hypertensive pregnancy disorders. We searched for blood markers in women with diastolic dysfunction in a first literature search and, through a second search, investigated whether these same biochemical markers were present in pre‐eclampsia. This systematic review and meta‐analysis presents two subsequent systematic searches in PubMed and EMBASE. Search I yielded 3014 studies on biomarkers discriminating women with HFpEF from female controls, of which 13 studies on 11 biochemical markers were included. Cases had HFpEF, and controls had no heart failure. The second search was for studies discriminating women with pre‐eclampsia from women with non‐hypertensive pregnancies with at least one of the biomarkers found in Search I. Search II yielded 1869 studies, of which 51 studies on seven biomarkers were included in meta‐analyses and 79 studies on 12 biomarkers in the systematic review. Eleven biological markers differentiated women with diastolic dysfunction from controls, of which the following 10 markers differentiated women with pre‐eclampsia from controls as well: C‐reactive protein, HDL, insulin, fatty acid‐binding protein 4, brain natriuretic peptide, N terminal pro brain natriuretic peptide, adrenomedullin, mid‐region pro adrenomedullin, cardiac troponin I, and cancer antigen 125. Our study supports the hypothesis that HFpEF in women shares a common pathogenic background with pre‐eclampsia. The biomarkers representing inflammatory state, disturbances in myocardial function/structure, and unfavourable lipid metabolism may possibly be eligible for future prognostic tools. PMID:28451444

  19. Reactive-Diffusive-Advective Traveling Waves in a Family of Degenerate Nonlinear Equations.

    PubMed

    Sánchez-Garduño, Faustino; Pérez-Velázquez, Judith

    This paper deals with the analysis of existence of traveling wave solutions (TWS) for a diffusion-degenerate (at D(0) = 0) and advection-degenerate (at h′(0) = 0) reaction-diffusion-advection (RDA) equation. Diffusion is a strictly increasing function and the reaction term generalizes the kinetic part of the Fisher-KPP equation. We consider different forms of the convection term h(u): (1) h′(u) is constant k, (2) h′(u) = ku with k > 0, and (3) it is a quite general form which guarantees the degeneracy in the advective term. In Case 1, we prove that the task can be reduced to that for the corresponding equation, where k = 0, and then previous results reported from the authors can be extended. For the other two cases, we use both analytical and numerical tools. The analysis we carried out is based on the restatement of searching TWS for the full RDA equation into a two-dimensional dynamical problem. This consists of searching for the conditions on the parameter values for which there exist heteroclinic trajectories of the ordinary differential equations (ODE) system in the traveling wave coordinates. Throughout the paper we obtain the dynamics by using tools coming from qualitative theory of ODE.

  20. The LANL hemorrhagic fever virus database, a new platform for analyzing biothreat viruses.

    PubMed

    Kuiken, Carla; Thurmond, Jim; Dimitrijevic, Mira; Yoon, Hyejin

    2012-01-01

    Hemorrhagic fever viruses (HFVs) are a diverse set of over 80 viral species, found in 10 different genera comprising five different families: arena-, bunya-, flavi-, filo- and togaviridae. All these viruses are highly variable and evolve rapidly, making them elusive targets for the immune system and for vaccine and drug design. About 55,000 HFV sequences exist in the public domain today. A central website that provides annotated sequences and analysis tools will be helpful to HFV researchers worldwide. The HFV sequence database collects and stores sequence data and provides a user-friendly search interface and a large number of sequence analysis tools, following the model of the highly regarded and widely used Los Alamos HIV database [Kuiken, C., B. Korber, and R.W. Shafer, HIV sequence databases. AIDS Rev, 2003. 5: p. 52-61]. The database uses an algorithm that aligns each sequence to a species-wide reference sequence. The NCBI RefSeq database [Sayers et al. (2011) Database resources of the National Center for Biotechnology Information. Nucleic Acids Res., 39, D38-D51.] is used for this; if a reference sequence is not available, a Blast search finds the best candidate. Using this method, sequences in each genus can be retrieved pre-aligned. The HFV website can be accessed via http://hfv.lanl.gov.

  1. How far are we from full implementation of health promoting workplace concepts? A review of implementation tools and frameworks in workplace interventions.

    PubMed

    Motalebi G, Masoud; Keshavarz Mohammadi, Nastaran; Kuhn, Karl; Ramezankhani, Ali; Azari, Mansour R

    2018-06-01

    Health promoting workplace frameworks provide a holistic view of the determinants of workplace health and the link between individuals, work and environment; however, the operationalization of these frameworks has not been very clear. This study provides a typology of the different understandings and frameworks/tools used in workplace health promotion practice or research worldwide. It discusses the degree of their conformity with the Ottawa Charter's spirit and the key actions expected to be implemented in health promoting settings such as workplaces. A comprehensive online search was conducted utilizing relevant keywords. The search also included official websites of related international, regional, and national organizations. After exclusion, 27 texts were analysed utilizing conventional content analysis. The results of the analysis were categorized into dimensions (level or main structure) of a healthy or health promoting workplace and subcategorized into characteristics/criteria of a healthy/health promoting workplace. Our analysis shows diversity and ambiguity in the workplace health literature regarding the domains and characteristics of a healthy/health promoting workplace. This may have roots in the lack of a common understanding of the concepts or in different social and work environment contexts. Development of global or national health promoting workplace standards in a participatory process might be considered as a potential solution.

  2. Reactive-Diffusive-Advective Traveling Waves in a Family of Degenerate Nonlinear Equations

    PubMed Central

    Sánchez-Garduño, Faustino

    2016-01-01

    This paper deals with the analysis of existence of traveling wave solutions (TWS) for a diffusion-degenerate (at D(0) = 0) and advection-degenerate (at h′(0) = 0) reaction-diffusion-advection (RDA) equation. Diffusion is a strictly increasing function and the reaction term generalizes the kinetic part of the Fisher-KPP equation. We consider different forms of the convection term h(u): (1)  h′(u) is constant k, (2)  h′(u) = ku with k > 0, and (3) it is a quite general form which guarantees the degeneracy in the advective term. In Case 1, we prove that the task can be reduced to that for the corresponding equation, where k = 0, and then previous results reported from the authors can be extended. For the other two cases, we use both analytical and numerical tools. The analysis we carried out is based on the restatement of searching TWS for the full RDA equation into a two-dimensional dynamical problem. This consists of searching for the conditions on the parameter values for which there exist heteroclinic trajectories of the ordinary differential equations (ODE) system in the traveling wave coordinates. Throughout the paper we obtain the dynamics by using tools coming from qualitative theory of ODE. PMID:27689131

  3. Usability Evaluation of an Unstructured Clinical Document Query Tool for Researchers.

    PubMed

    Hultman, Gretchen; McEwan, Reed; Pakhomov, Serguei; Lindemann, Elizabeth; Skube, Steven; Melton, Genevieve B

    2018-01-01

    Natural Language Processing - Patient Information Extraction for Researchers (NLP-PIER) was developed for clinical researchers for self-service Natural Language Processing (NLP) queries with clinical notes. The purpose of this study was to conduct a user-centered analysis with clinical researchers to gain insight into NLP-PIER's usability and an understanding of the needs of clinical researchers when using an application for searching clinical notes. Clinical researcher participants (n=11) completed tasks using the system's two existing search interfaces and completed a set of surveys and an exit interview. Quantitative data including time on task, task completion rate, and survey responses were collected. Interviews were analyzed qualitatively. Survey scores, time on task, and task completion proportions varied widely. Qualitative analysis indicated that participants found the system to be useful and usable for specific projects. This study identified several usability challenges, and our findings will guide the improvement of NLP-PIER's interfaces.

  4. Exploring FlyBase Data Using QuickSearch.

    PubMed

    Marygold, Steven J; Antonazzo, Giulia; Attrill, Helen; Costa, Marta; Crosby, Madeline A; Dos Santos, Gilberto; Goodman, Joshua L; Gramates, L Sian; Matthews, Beverley B; Rey, Alix J; Thurmond, Jim

    2016-12-08

    FlyBase (flybase.org) is the primary online database of genetic, genomic, and functional information about Drosophila species, with a major focus on the model organism Drosophila melanogaster. The long and rich history of Drosophila research, combined with recent surges in genomic-scale and high-throughput technologies, means that FlyBase now houses a huge quantity of data. Researchers need to be able to rapidly and intuitively query these data, and the QuickSearch tool has been designed to meet these needs. This tool is conveniently located on the FlyBase homepage and is organized into a series of simple tabbed interfaces that cover the major data and annotation classes within the database. This unit describes the functionality of all aspects of the QuickSearch tool. With this knowledge, FlyBase users will be equipped to take full advantage of all QuickSearch features and thereby gain improved access to data relevant to their research. © 2016 by John Wiley & Sons, Inc. Copyright © 2016 John Wiley & Sons, Inc.

  5. What can Google and Wikipedia tell us about a disease? Big Data trends analysis in Systemic Lupus Erythematosus.

    PubMed

    Sciascia, Savino; Radin, Massimo

    2017-11-01

    To investigate trends of Internet search volumes linked to Systemic Lupus Erythematosus (SLE), on-going clinical trials and research developments associated with the disease, using Big Data monitoring and data mining. We performed a longitudinal analysis based on the large amount of data generated by Google Trends and scientific search tools (SCOPUS, Medline/PubMed, ClinicalTrials.gov), considering 'SLE' and 'lupus' in a 5-year web-based search. Wikipedia page views were also analysed using WikiTrends and the results were compared with the search volumes generated by Google Trends. We observed an overall higher distribution of search volumes from Google Trends in the United States, South America, Canada, South Africa, Australia and Europe (mainly Italy, the United Kingdom, Spain, France, Germany), showing a geographical heterogeneity in insight into the health-related behaviour of the different populations towards SLE. By comparing the search volumes with the Wikipedia page views of both SLE and belimumab, we found a closely matching peak trend, reflecting the knowledge translation after the approval of belimumab for the treatment of SLE. When focusing on the search volumes from Google Trends, we noticed that the highest peaks were related to news headlines that involved celebrities affected by SLE, even when compared with the peak generated by the approval of belimumab. This new approach, able to investigate health information seeking, might give an estimate of the health-related demand and even of the health-related behaviour in SLE, bringing new light to unanswered questions. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Meta-analysis diagnostic accuracy of SNP-based pathogenicity detection tools: a case of UGT1A1 gene mutations.

    PubMed

    Galehdari, Hamid; Saki, Najmaldin; Mohammadi-Asl, Javad; Rahim, Fakher

    2013-01-01

    Crigler-Najjar syndrome (CNS) type I and type II are usually inherited as autosomal recessive conditions that result from mutations in the UGT1A1 gene. The main objective of the present review is to summarize the results of all available evidence on the accuracy of SNP-based pathogenicity detection tools, compared with published clinical results, for the prediction of nsSNPs that lead to disease, using a prediction performance method. A comprehensive search was performed to find all mutations related to CNS. Database searches included dbSNP, SNPdbe, HGMD, Swissvar, Ensembl, and OMIM. All mutations related to CNS were extracted. The pathogenicity prediction was done using SNP-based pathogenicity detection tools, including SIFT, PHD-SNP, PolyPhen2, fathmm, Provean, and Mutpred. Overall, 59 different SNPs related to missense mutations in the UGT1A1 gene were reviewed. Comparing diagnostic ORs, PolyPhen2 and Mutpred had the highest value, 4.983 (95% CI: 1.24-20.02) for both, followed by SIFT (diagnostic OR: 3.25, 95% CI: 1.07-9.83). The highest MCC among the SNP-based pathogenicity detection tools belonged to SIFT (34.19%), followed by Provean, PolyPhen2, and Mutpred (29.99%, 29.89%, and 29.89%, respectively). The highest ACC belonged to SIFT (62.71%), followed by PolyPhen2 and Mutpred (61.02% for both). Our results suggest that some of the well-established SNP-based pathogenicity detection tools can appropriately reflect the role of a disease-associated SNP in both local and global structures.
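
    For reference, the performance measures compared above are standard confusion-matrix statistics (standard definitions, not notation taken from the paper):

    \mathrm{DOR} = \frac{TP \cdot TN}{FP \cdot FN}, \qquad
    \mathrm{ACC} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad
    \mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)\,(TP+FN)\,(TN+FP)\,(TN+FN)}}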

  7. The BioCyc collection of microbial genomes and metabolic pathways.

    PubMed

    Karp, Peter D; Billington, Richard; Caspi, Ron; Fulcher, Carol A; Latendresse, Mario; Kothari, Anamika; Keseler, Ingrid M; Krummenacker, Markus; Midford, Peter E; Ong, Quang; Ong, Wai Kit; Paley, Suzanne M; Subhraveti, Pallavi

    2017-08-17

    BioCyc.org is a microbial genome Web portal that combines thousands of genomes with additional information inferred by computer programs, imported from other databases and curated from the biomedical literature by biologist curators. BioCyc also provides an extensive range of query tools, visualization services and analysis software. Recent advances in BioCyc include an expansion in the content of BioCyc in terms of both the number of genomes and the types of information available for each genome; an expansion in the amount of curated content within BioCyc; and new developments in the BioCyc software tools including redesigned gene/protein pages and metabolite pages; new search tools; a new sequence-alignment tool; a new tool for visualizing groups of related metabolic pathways; and a facility called SmartTables, which enables biologists to perform analyses that previously would have required a programmer's assistance. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. Computer applications making rapid advances in high throughput microbial proteomics (HTMP).

    PubMed

    Anandkumar, Balakrishna; Haga, Steve W; Wu, Hui-Fen

    2014-02-01

    The last few decades have seen the rise of widely-available proteomics tools. From new data acquisition devices, such as MALDI-MS and 2DE, to new database searching software, these new products have paved the way for high throughput microbial proteomics (HTMP). These tools are enabling researchers to gain new insights into microbial metabolism, and are opening up new areas of study, such as protein-protein interaction (interactomics) discovery. Computer software is a key part of these emerging fields. The current review considers: 1) software tools for identifying the proteome, such as MASCOT or PDQuest, 2) online databases of proteomes, such as SWISS-PROT, Proteome Web, or the Proteomics Facility of the Pathogen Functional Genomics Resource Center, and 3) software tools for applying proteomic data, such as PSI-BLAST or VESPA. These tools allow for research in network biology, protein identification, functional annotation, target identification/validation, protein expression, protein structural analysis, metabolic pathway engineering and drug discovery.

  9. Seasonal variation in Internet searches for vitamin D.

    PubMed

    Moon, Rebecca J; Curtis, Elizabeth M; Davies, Justin H; Cooper, Cyrus; Harvey, Nicholas C

    2017-12-01

    Internet search rates for "vitamin D" were explored using Google Trends. Search rates increased from 2004 until 2010 and thereafter displayed a seasonal pattern peaking in late winter. This knowledge could help guide the timing of public health interventions aimed at managing vitamin D deficiency. The Internet is an important source of health information. Analysis of Internet search activity rates can provide information on disease epidemiology, health related behaviors and public interest. We explored Internet search rates for vitamin D to determine whether this reflects the increasing scientific interest in this topic. Google Trends is a publicly available tool that provides data on Internet searches using Google. Search activity for the term "vitamin D" from 1st January 2004 until 31st October 2016 was obtained. Comparison was made to other bone and nutrition related terms. Worldwide, searches for "vitamin D" increased from 2004 until 2010 and thereafter a statistically significant (p < 0.001) seasonal pattern with a peak in February and nadir in August was observed. This seasonal pattern was evident for searches originating from both the USA (peak in February) and Australia (peak in August); p < 0.001 for both. Searches for the terms "osteoporosis", "rickets", "back pain" or "folic acid" did not display the increase observed for vitamin D or evidence of seasonal variation. Public interest in vitamin D, as assessed by Internet search activity, did increase from 2004 to 2010, likely reflecting the growing scientific interest, but now displays a seasonal pattern with peak interest during late winter. This information could be used to guide public health approaches to managing vitamin D deficiency.
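
    A seasonal signal like the February peak reported above can be surfaced by averaging a Google Trends export by calendar month; this is a minimal pandas sketch, and the file name, header layout and column label are assumptions about the export format.

        import pandas as pd

        # Assumed weekly export from Google Trends for "vitamin D" (values normalised 0-100);
        # the number of header rows to skip depends on the export format.
        trends = pd.read_csv("vitamin_d_trends.csv", skiprows=3,
                             names=["week", "vitamin_d"], parse_dates=["week"])

        # Average the normalised search volume by calendar month to expose seasonality.
        monthly = trends.groupby(trends["week"].dt.month)["vitamin_d"].mean()
        print("peak month:", monthly.idxmax(), "nadir month:", monthly.idxmin())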

  10. Speeding up the screening of steroids in urine: development of a user-friendly library.

    PubMed

    Galesio, M; López-Fdez, H; Reboiro-Jato, M; Gómez-Meire, Silvana; Glez-Peña, D; Fdez-Riverola, F; Lodeiro, Carlos; Diniz, M E; Capelo, J L

    2013-12-11

    This work presents a novel database search engine - MLibrary - designed to assist the user in the detection and identification of androgenic anabolic steroids (AAS) and its metabolites by matrix assisted laser desorption/ionization (MALDI) and mass spectrometry-based strategies. The detection of the AAS in the samples was accomplished by searching (i) the mass spectrometric (MS) spectra against the library developed to identify possible positives and (ii) by comparison of the tandem mass spectrometric (MS/MS) spectra produced after fragmentation of the possible positives with a complete set of spectra that have previously been assigned to the software. The urinary screening for anabolic agents plays a major role in anti-doping laboratories as they represent the most abused drug class in sports. With the help of the MLibrary software application, the use of MALDI techniques for doping control is simplified and the time for evaluation and interpretation of the results is reduced. To do so, the search engine takes as input several MALDI-TOF-MS and MALDI-TOF-MS/MS spectra. It aids the researcher in an automatic mode by identifying possible positives in a single MS analysis and then confirming their presence in tandem MS analysis by comparing the experimental tandem mass spectrometric data with the database. Furthermore, the search engine can, potentially, be further expanded to other compounds in addition to AASs. The applicability of the MLibrary tool is shown through the analysis of spiked urine samples. Copyright © 2013 Elsevier Inc. All rights reserved.
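
    The first screening stage described above (flagging possible positives by matching observed peaks against a compound library) can be illustrated with a simple m/z tolerance match; the library entries and the 0.1 Da tolerance are assumptions, and this is not MLibrary's actual matching algorithm.

        # Hypothetical reference library of AAS metabolites: compound name -> expected m/z.
        LIBRARY_MZ = {"metandienone_metabolite": 301.2162, "stanozolol_metabolite": 345.2537}

        def possible_positives(peaks, tolerance=0.1):
            """Flag library compounds whose expected m/z lies within `tolerance`
            of any observed MALDI-TOF-MS peak (illustrative only)."""
            return [name for name, mz in LIBRARY_MZ.items()
                    if any(abs(peak - mz) <= tolerance for peak in peaks)]

        print(possible_positives([301.22, 412.30, 150.08]))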

  11. iProphet: Multi-level Integrative Analysis of Shotgun Proteomic Data Improves Peptide and Protein Identification Rates and Error Estimates*

    PubMed Central

    Shteynberg, David; Deutsch, Eric W.; Lam, Henry; Eng, Jimmy K.; Sun, Zhi; Tasman, Natalie; Mendoza, Luis; Moritz, Robert L.; Aebersold, Ruedi; Nesvizhskii, Alexey I.

    2011-01-01

    The combination of tandem mass spectrometry and sequence database searching is the method of choice for the identification of peptides and the mapping of proteomes. Over the last several years, the volume of data generated in proteomic studies has increased dramatically, which challenges the computational approaches previously developed for these data. Furthermore, a multitude of search engines have been developed that identify different, overlapping subsets of the sample peptides from a particular set of tandem mass spectrometry spectra. We present iProphet, the new addition to the widely used open-source suite of proteomic data analysis tools Trans-Proteomics Pipeline. Applied in tandem with PeptideProphet, it provides more accurate representation of the multilevel nature of shotgun proteomic data. iProphet combines the evidence from multiple identifications of the same peptide sequences across different spectra, experiments, precursor ion charge states, and modified states. It also allows accurate and effective integration of the results from multiple database search engines applied to the same data. The use of iProphet in the Trans-Proteomics Pipeline increases the number of correctly identified peptides at a constant false discovery rate as compared with both PeptideProphet and another state-of-the-art tool Percolator. As the main outcome, iProphet permits the calculation of accurate posterior probabilities and false discovery rate estimates at the level of sequence identical peptide identifications, which in turn leads to more accurate probability estimates at the protein level. Fully integrated with the Trans-Proteomics Pipeline, it supports all commonly used MS instruments, search engines, and computer platforms. The performance of iProphet is demonstrated on two publicly available data sets: data from a human whole cell lysate proteome profiling experiment representative of typical proteomic data sets, and from a set of Streptococcus pyogenes experiments more representative of organism-specific composite data sets. PMID:21876204

  12. Using the electronic health record to build a culture of practice safety: evaluating the implementation of trigger tools in one general practice.

    PubMed

    Margham, Tom; Symes, Natalie; Hull, Sally A

    2018-04-01

    Identifying patients at risk of harm in general practice is challenging for busy clinicians. In UK primary care, trigger tools and case note reviews are mainly used to identify rates of harm in sample populations. This study explores how adaptations to existing trigger tool methodology can identify patient safety events and engage clinicians in ongoing reflective work around safety. Mixed-method quantitative and narrative evaluation using thematic analysis in a single East London training practice. The project team developed and tested five trigger searches, supported by Excel worksheets to guide the case review process. Project evaluation included summary statistics of completed worksheets and a qualitative review focused on ease of use, barriers to implementation, and perception of value to clinicians. Trigger searches identified 204 patients for GP review. Overall, 117 (57%) of cases were reviewed and 62 (53%) of these cases had patient safety events identified. These were usually incidents of omission, including failure to monitor or review. Key themes from interviews with practice members included the fact that GPs' work is generally reactive and GPs welcomed an approach that identified patients who were 'under the radar' of safety. All GPs expressed concern that the tool might identify too many patients at risk of harm, placing further demands on their time. Electronic trigger tools can identify patients for review in domains of clinical risk for primary care. The high yield of safety events engaged clinicians and provided validation of the need for routine safety checks. © British Journal of General Practice 2018.

  13. Tools for Large-Scale Mobile Malware Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bierma, Michael

    Analyzing mobile applications for malicious behavior is an important area of research, and is made difficult, in part, by the increasingly large number of applications available for the major operating systems. There are currently over 1.2 million apps available in both the Google Play and Apple App stores (the respective official marketplaces for the Android and iOS operating systems)[1, 2]. Our research provides two large-scale analysis tools to aid in the detection and analysis of mobile malware. The first tool we present, Andlantis, is a scalable dynamic analysis system capable of processing over 3000 Android applications per hour. Traditionally, Android dynamic analysis techniques have been relatively limited in scale due to the computational resources required to emulate the full Android system to achieve accurate execution. Andlantis is the most scalable Android dynamic analysis framework to date, and is able to collect valuable forensic data, which helps reverse-engineers and malware researchers identify and understand anomalous application behavior. We discuss the results of running 1261 malware samples through the system, and provide examples of malware analysis performed with the resulting data. While techniques exist to perform static analysis on a large number of applications, large-scale analysis of iOS applications has been relatively small scale due to the closed nature of the iOS ecosystem, and the difficulty of acquiring applications for analysis. The second tool we present, iClone, addresses the challenges associated with iOS research in order to detect application clones within a dataset of over 20,000 iOS applications.

  14. New Tools in Orthology Analysis: A Brief Review of Promising Perspectives

    PubMed Central

    Nichio, Bruno T. L.; Marchaukoski, Jeroniza Nunes; Raittz, Roberto Tadeu

    2017-01-01

    Nowadays, defining homology relationships among sequences is essential for biological research. Within homology, the analysis of orthologous sequences is of great importance for computational biology, annotation of genomes and for phylogenetic inference. Since 2007, with the increase in the number of new sequences being deposited in large biological databases, researchers have begun to analyse computerized methodologies and tools aimed at selecting the most promising ones for the prediction of orthologous groups. Literature in this field of research describes the problems that the majority of available tools show, such as those encountered in accuracy, time required for analysis (especially in light of the increasing volume of data being submitted, which requires faster techniques) and the automation of the process without requiring manual intervention. Conducting our search through BMC, Google Scholar, NCBI PubMed, and Expasy, we examined more than 600 articles pursuing the most recent techniques and tools developed to solve most of the problems still existing in orthology detection. We listed the main computational tools created and developed between 2011 and 2017, taking into consideration the differences in the type of orthology analysis, outlining the main features of each tool and pointing to the problems that each one tries to address. We also observed that several tools still use as their main algorithm the BLAST “all-against-all” methodology, which entails some limitations, such as a limited number of queries, computational cost, and high processing time to complete the analysis. However, new promising tools are being developed, like OrthoVenn (which uses the Venn diagram to show the relationship of ortholog groups generated by its algorithm); or proteinOrtho (which improves the accuracy of ortholog groups); or ReMark (tackling the integration of the pipeline to turn the entry process automatic); or OrthAgogue (using algorithms developed to minimize processing time); and proteinOrtho (developed for dealing with large amounts of biological data). We compared the main features of four tools and tested them using four prokaryotic genomes. We hope that our review can be useful for researchers and will help them in selecting the most appropriate tool for their work in the field of orthology. PMID:29163633

  15. New Tools in Orthology Analysis: A Brief Review of Promising Perspectives.

    PubMed

    Nichio, Bruno T L; Marchaukoski, Jeroniza Nunes; Raittz, Roberto Tadeu

    2017-01-01

    Nowadays, defining homology relationships among sequences is essential for biological research. Within homology, the analysis of orthologous sequences is of great importance for computational biology, annotation of genomes and for phylogenetic inference. Since 2007, with the increase in the number of new sequences being deposited in large biological databases, researchers have begun to analyse computerized methodologies and tools aimed at selecting the most promising ones for the prediction of orthologous groups. Literature in this field of research describes the problems that the majority of available tools show, such as those encountered in accuracy, time required for analysis (especially in light of the increasing volume of data being submitted, which requires faster techniques) and the automation of the process without requiring manual intervention. Conducting our search through BMC, Google Scholar, NCBI PubMed, and Expasy, we examined more than 600 articles pursuing the most recent techniques and tools developed to solve most of the problems still existing in orthology detection. We listed the main computational tools created and developed between 2011 and 2017, taking into consideration the differences in the type of orthology analysis, outlining the main features of each tool and pointing to the problems that each one tries to address. We also observed that several tools still use as their main algorithm the BLAST "all-against-all" methodology, which entails some limitations, such as a limited number of queries, computational cost, and high processing time to complete the analysis. However, new promising tools are being developed, like OrthoVenn (which uses the Venn diagram to show the relationship of ortholog groups generated by its algorithm); or proteinOrtho (which improves the accuracy of ortholog groups); or ReMark (tackling the integration of the pipeline to turn the entry process automatic); or OrthAgogue (using algorithms developed to minimize processing time); and proteinOrtho (developed for dealing with large amounts of biological data). We compared the main features of four tools and tested them using four prokaryotic genomes. We hope that our review can be useful for researchers and will help them in selecting the most appropriate tool for their work in the field of orthology.

  16. Testing search strategies for systematic reviews in the Medline literature database through PubMed.

    PubMed

    Volpato, Enilze S N; Betini, Marluci; El Dib, Regina

    2014-04-01

    A high-quality electronic search is essential in ensuring accuracy and completeness of the records retrieved for a systematic review. We analysed the available sample of search strategies to identify the best method for searching in Medline through PubMed, considering the use or not of parentheses, double quotation marks, truncation, and the use of a simple search or search history. In our cross-sectional study of search strategies, we selected and analysed the available searches performed during evidence-based medicine classes and in systematic reviews conducted in the Botucatu Medical School, UNESP, Brazil. We analysed 120 search strategies. With regard to phrase searching with parentheses, there was no difference between the results with and without parentheses, or between simple searches and search history tools, in 100% of the sample analysed (P = 1.0). The number of results retrieved by the searches analysed was smaller when using double quotation marks and when using truncation, compared with the standard strategy (P = 0.04 and P = 0.08, respectively). There is no need to use parentheses for phrase searching to retrieve studies; however, we recommend the use of double quotation marks when an investigator attempts to retrieve articles in which a term appears exactly as proposed in the search form. Furthermore, we do not recommend the use of truncation in search strategies in Medline via PubMed. Although the results of simple searches and search history tools were the same, we recommend using the latter.
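
    The effect of double quotation marks noted above can be checked programmatically through Biopython's Entrez interface; the e-mail address and query terms are placeholders, and counts will vary with the live Medline index.

        from Bio import Entrez

        Entrez.email = "you@example.org"   # placeholder; NCBI requires a contact address

        def pubmed_count(term):
            """Return the number of Medline/PubMed records matching `term`."""
            handle = Entrez.esearch(db="pubmed", term=term, retmax=0)
            record = Entrez.read(handle)
            handle.close()
            return int(record["Count"])

        # A quoted phrase search is usually narrower than the unquoted equivalent.
        print(pubmed_count('"low back pain"'), pubmed_count("low back pain"))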

  17. Caught on the Web

    ERIC Educational Resources Information Center

    Isakson, Carol

    2004-01-01

    Search engines rapidly add new services and experimental tools in trying to outmaneuver each other for customers. In this article, the author describes the latest additional services of some search engines and provides its sources. The author also suggests tips for using these new search upgrades.

  18. 75 FR 34777 - Florida Power & Light Company, Combined License Application for the Turkey Point Units 6 & 7...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-18

    ... search (advanced search) engine or the ADAMS ``Find'' tool in Citrix. The Westinghouse AP1000 DCD, which... local residents at the South Dade Regional Library and the Homestead Branch Library. To search for...

  19. 75 FR 39285 - Tennessee Valley Authority; Notice of Receipt of Updated Antitrust Information and Opportunity...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-08

    ... either the Web-based search (advanced search) engine or the ADAMS find tool in Citrix. Within 30 days.... To search for other related documents in ADAMS using the Watts Bar Nuclear Plant Unit 2 OL...

  20. Trade-Space Analysis Tool for Constellations (TAT-C)

    NASA Technical Reports Server (NTRS)

    Le Moigne, Jacqueline; Dabney, Philip; de Weck, Olivier; Foreman, Veronica; Grogan, Paul; Holland, Matthew; Hughes, Steven; Nag, Sreeja

    2016-01-01

    Traditionally, space missions have relied on relatively large and monolithic satellites, but in the past few years, under a changing technological and economic environment, including instrument and spacecraft miniaturization, scalable launchers, secondary launches as well as hosted payloads, there is growing interest in implementing future NASA missions as Distributed Spacecraft Missions (DSM). The objective of our project is to provide a framework that facilitates DSM Pre-Phase A investigations and optimizes DSM designs with respect to a-priori Science goals. In this first version of our Trade-space Analysis Tool for Constellations (TAT-C), we are investigating questions such as: How many spacecraft should be included in the constellation? Which design has the best cost-risk value? The main goals of TAT-C are to: handle multiple spacecraft sharing a mission objective, from SmallSats up through flagships; explore the variables trade space for pre-defined science, cost and risk goals, and pre-defined metrics; and optimize cost and performance across multiple instruments and platforms rather than one at a time. This paper describes the overall architecture of TAT-C including: a User Interface (UI) interacting with multiple users - scientists, mission designers or program managers; and an Executive Driver gathering requirements from the UI, then formulating Trade-space Search Requests for the Trade-space Search Iterator, first with inputs from the Knowledge Base, then, in collaboration with the Orbit Coverage, Reduction Metrics, and Cost Risk modules, generating multiple potential architectures and their associated characteristics. TAT-C leverages the Goddard Mission Analysis Tool (GMAT) to compute coverage and ancillary data, streamlining the computations by modeling orbits in a way that balances accuracy and performance. TAT-C's current version includes uniform Walker constellations as well as Ad-Hoc constellations, and its cost model is an aggregate model consisting of Cost Estimating Relationships (CERs) from widely accepted models. The Knowledge Base supports both analysis and exploration, and the current GUI prototype automatically generates graphics representing metrics such as average revisit time or coverage as a function of cost.

  1. To press or not to press, and if so, with what? A single question-focused meta-analysis of vasopressor choice during regional anesthesia in obstetrics.

    PubMed

    Biddle, Chuck

    2013-08-01

    Given the underlying assumption that reasonable maternal hemodynamics can be achieved with either ephedrine or phenylephrine, this focused meta-analysis addresses the impact of vasopressor choice on resultant neonatal Apgar scores during regional anesthesia. The literature was systematically searched for randomized trials of obstetric vasopressor use employing standard search tools. Only the highest quality trials were included. Of 142 studies retrieved, 9 met the defined inclusion criteria. Apgar scores at 1 and 5 minutes in the ephedrine group (served as control) vs the phenylephrine group did not differ at either time epoch; no abnormal values prevailed in either group (relative risk, 0.88; CI, 0.79-1.16). This meta-analysis focused on the most clinically relevant, immediately available information pertinent in the obstetric suite, the Apgar score, and found that ephedrine and phenylephrine did not differ in their effect on this metric. The current meta-analysis provides an updated, evidence-based validation of vasopressor use from the American Society of Anesthesiologists' 2007 "Practice Guidelines for Obstetric Anesthesia".

  2. Dr Google

    PubMed Central

    Pías-Peleteiro, Leticia; Cortés-Bordoy, Javier; Martinón-Torres, Federico

    2013-01-01

    Objectives: To assess and analyze the information and recommendations provided by Google Web Search™ (Google) in relation to web searches on the HPV vaccine, indications for females and males and possible adverse effects. Materials and Methods: Descriptive cross-sectional study of the results of 14 web searches. Comprehensive analysis of results based on general recommendation given (favorable/dissuasive), as well as compliance with pre-established criteria, namely design, content and credibility. Sub-analysis of results according to site category: general information, blog / forum and press. Results: In the comprehensive analysis of results, 72.2% of websites offer information favorable to HPV vaccination, with varying degrees of content detail, vs. 27.8% with highly dissuasive content in relation to HPV vaccination. The most frequent type of site is the blog or forum. The information found is frequently incomplete, poorly structured, and often lacking in updates, bibliography and adequate citations, as well as sound credibility criteria (scientific association accreditation and/or trust mark system). Conclusions: Google, as a tool which users employ to locate medical information and advice, is not specialized in providing information that is necessarily rigorous or valid from a scientific perspective. Search results and ranking based on Google's generalized algorithms can lead users to poorly grounded opinions and statements, which may impact HPV vaccination perception and subsequent decision making. PMID:23744505

  3. Making Space for Specialized Astronomy Resources

    NASA Astrophysics Data System (ADS)

    MacMillan, D.

    2007-10-01

    With the growth of both free and subscription-based resources, articles on astronomy have never been easier to find. Locating the best and most current materials for any given search, however, now requires multiple tools and strategies dependent on the query. An analysis of the tools currently available shows that while astronomy is well-served by Google Scholar, Scopus and Inspec, its literature is best accessed through specialized resources such as ADS (Astrophysics Data System). While no surprise to astronomers, this has major implications for those of us who teach information literacy skills to astronomy students and work in academic settings where astronomy is just one of many subjects for which our non-specialist colleagues at the reference desk provide assistance. This paper will examine some of the implications of this analysis for library instruction, reference assistance and training, and library webpage development.

  4. PARPs database: A LIMS system for protein-protein interaction data mining or laboratory information management system

    PubMed Central

    Droit, Arnaud; Hunter, Joanna M; Rouleau, Michèle; Ethier, Chantal; Picard-Cloutier, Aude; Bourgais, David; Poirier, Guy G

    2007-01-01

    Background In the "post-genome" era, mass spectrometry (MS) has become an important method for the analysis of proteins and the rapid advancement of this technique, in combination with other proteomics methods, results in an increasing amount of proteome data. This data must be archived and analysed using specialized bioinformatics tools. Description We herein describe "PARPs database," a data analysis and management pipeline for liquid chromatography tandem mass spectrometry (LC-MS/MS) proteomics. PARPs database is a web-based tool whose features include experiment annotation, protein database searching, protein sequence management, as well as data-mining of the peptides and proteins identified. Conclusion Using this pipeline, we have successfully identified several interactions of biological significance between PARP-1 and other proteins, namely RFC-1, 2, 3, 4 and 5. PMID:18093328

  5. A tool for assessment of heart failure prescribing quality: A systematic review and meta-analysis.

    PubMed

    El Hadidi, Seif; Darweesh, Ebtissam; Byrne, Stephen; Bermingham, Margaret

    2018-04-16

    Heart failure (HF) guidelines aim to standardise patient care. Internationally, prescribing practice in HF may deviate from guidelines and so a standardised tool is required to assess prescribing quality. A systematic review and meta-analysis were performed to identify a quantitative tool for measuring adherence to HF guidelines and its clinical implications. Eleven electronic databases were searched to include studies reporting a comprehensive tool for measuring adherence to prescribing guidelines in HF patients aged ≥18 years. Qualitative studies or studies measuring prescription rates alone were excluded. Study quality was assessed using the Good ReseArch for Comparative Effectiveness Checklist. In total, 2455 studies were identified. Sixteen eligible full-text articles were included (n = 14 354 patients, mean age 69 ± 8 y). The Guideline Adherence Index (GAI), and its modified versions, was the most frequently cited tool (n = 13). Other tools identified were the Individualised Reconciled Evidence Recommendations, the Composite Heart Failure Performance, and the Heart Failure Scale. The meta-analysis included the GAI studies of good to high quality. The average GAI-3 was 62%. Compared to low GAI, high GAI patients had lower mortality rate (7.6% vs 33.9%) and lower rehospitalisation rates (23.5% vs 24.5%); both P ≤ .05. High GAI was associated with reduced risk of mortality (hazard ratio = 0.29, 95% confidence interval, 0.06-0.51) and rehospitalisation (hazard ratio = 0.64, 95% confidence interval, 0.41-1.00). No tool was used to improve prescribing quality. The GAI is the most frequently used tool to assess guideline adherence in HF. High GAI is associated with improved HF outcomes. Copyright © 2018 John Wiley & Sons, Ltd.

  6. Introducing Products to DoD Using Specifications and Standards

    DTIC Science & Technology

    2011-08-18

    Briefing slides on the DoD Product Introduction Process tool; the extracted text is screenshot and browser-toolbar residue, and the only recoverable instruction is to identify the category/subcategory that most closely covers the product being introduced.

  7. "Google Reigns Triumphant"?: Stemming the Tide of Googlitis via Collaborative, Situated Information Literacy Instruction

    ERIC Educational Resources Information Center

    Leibiger, Carol A.

    2011-01-01

    Googlitis, the overreliance on search engines for research and the resulting development of poor searching skills, is a recognized problem among today's students. Google is not an effective research tool because, in addition to encouraging keyword searching at the expense of more powerful subject searching, it only accesses the Surface Web and is…

  8. Promising Practices in Instruction of Discovery Tools

    ERIC Educational Resources Information Center

    Buck, Stefanie; Steffy, Christina

    2013-01-01

    Libraries are continually changing to meet the needs of users; this includes implementing discovery tools, also referred to as web-scale discovery tools, to make searching library resources easier. Because these tools are so new, it is difficult to establish definitive best practices for teaching these tools; however, promising practices are…

  9. A Pathway to Freedom: An Evaluation of Screening Tools for the Identification of Trafficking Victims.

    PubMed

    Bespalova, Nadejda; Morgan, Juliet; Coverdale, John

    2016-02-01

    Because training residents and faculty to identify human trafficking victims is a major public health priority, the authors review existing assessment tools. PubMed and Google were searched using combinations of search terms including human, trafficking, sex, labor, screening, identification, and tool. Nine screening tools that met the inclusion criteria were found. They varied greatly in length, format, target demographic, supporting resources, and other parameters. Only two tools were designed specifically for healthcare providers. Only one tool was formally assessed to be valid and reliable in a pilot project in trafficking victim service organizations, although it has not been validated in the healthcare setting. This toolbox should facilitate the education of resident physicians and faculty in screening for trafficking victims, assist educators in assessing screening skills, and promote future research on the identification of trafficking victims.

  10. Blast2GO goes grid: developing a grid-enabled prototype for functional genomics analysis.

    PubMed

    Aparicio, G; Götz, S; Conesa, A; Segrelles, D; Blanquer, I; García, J M; Hernandez, V; Robles, M; Talon, M

    2006-01-01

    The vast amount and complexity of data generated in genomic research imply that new, dedicated and powerful computational tools need to be developed to meet the analysis requirements. Blast2GO (B2G) is a bioinformatics tool for Gene Ontology-based DNA or protein sequence annotation and function-based data mining. The application has been developed with the aim of offering an easy-to-use tool for functional genomics research. Typical B2G users are middle-size genomics labs carrying out sequencing, EST and microarray projects, handling datasets of up to several thousand sequences. In the current version of B2G, the power and analytical potential of both annotation and function data-mining are somewhat restricted by the computational power behind each particular installation. In order to be able to offer an enhanced computational capacity within this bioinformatics application, a Grid component is being developed. A prototype has been conceived for the particular problem of speeding up Blast searches to obtain fast results for large datasets. Many efforts have been made in the literature concerning the speeding up of Blast searches, but few of them deal with the use of large, heterogeneous production Grid infrastructures. These are the infrastructures that could reach the largest number of resources and the best load balancing for data access. The Grid service under development will analyse requests based on the number of sequences, splitting them according to the available resources. Lower-level computation will be performed through MPIBLAST. The software architecture is based on the WSRF standard.
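
    The request-splitting step described above (dividing a job by number of sequences before dispatching it to Grid resources) can be sketched as follows; the file names and chunk count are assumptions, and the actual MPIBLAST submission is omitted.

        def split_fasta(path, n_chunks):
            """Split a FASTA file into roughly equal groups of records, one group per
            Grid job (illustrative; the real service sizes chunks by available resources)."""
            records, current = [], []
            with open(path) as fh:
                for line in fh:
                    if line.startswith(">") and current:
                        records.append("".join(current))
                        current = []
                    current.append(line)
            if current:
                records.append("".join(current))
            return [records[i::n_chunks] for i in range(n_chunks)]

        for i, chunk in enumerate(split_fasta("queries.fasta", n_chunks=4)):
            with open(f"chunk_{i}.fasta", "w") as out:   # each chunk would become one Blast job
                out.writelines(chunk)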

  11. SpinachDB: A Well-Characterized Genomic Database for Gene Family Classification and SNP Information of Spinach.

    PubMed

    Yang, Xue-Dong; Tan, Hua-Wei; Zhu, Wei-Min

    2016-01-01

    Spinach (Spinacia oleracea L.), which originated in central and western Asia, belongs to the family Amaranthaceae. Spinach is one of the most important leafy vegetables, with a high nutritional value, as well as being a perfect research material for plant sex chromosome models. With the completion of the genome assembly and gene prediction of spinach, we developed SpinachDB (http://222.73.98.124/spinachdb) to store, annotate, mine and analyze genomics and genetics datasets efficiently. In this study, all 21702 spinach genes were annotated. A total of 15741 spinach genes were catalogued into 4351 families, including identification of a substantial number of transcription factors. To construct a high-density genetic map, a total of 131592 SSRs and 1125743 potential SNPs located in 548801 loci of the spinach genome were identified in 11 cultivated and wild spinach cultivars. Expression profiles were also generated from RNA-seq data using the FPKM method, which can be used to compare genes. Paralogs in spinach and the orthologous genes in Arabidopsis, grape, sugar beet and rice were identified for comparative genome analysis. Finally, the SpinachDB website contains seven main sections, including the homepage; the GBrowse map that integrates genome, gene, SSR and SNP marker information; the Blast alignment service; the gene family classification search tool; the orthologous and paralogous gene pairs search tool; and the download and useful contact information. SpinachDB will be continually expanded to include newly generated robust genomics and genetics data sets along with the associated data mining and analysis tools.
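
    The FPKM normalisation mentioned above has a standard definition; the sketch below uses placeholder counts and is not SpinachDB's actual pipeline.

        def fpkm(fragments, gene_length_bp, total_mapped_fragments):
            """Fragments Per Kilobase of transcript per Million mapped fragments."""
            return fragments * 1e9 / (gene_length_bp * total_mapped_fragments)

        # Placeholder values: 500 fragments on a 2 kb gene in a library of 20 million mapped fragments.
        print(fpkm(fragments=500, gene_length_bp=2000, total_mapped_fragments=20_000_000))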

  12. Finding Protein and Nucleotide Similarities with FASTA

    PubMed Central

    Pearson, William R.

    2016-01-01

    The FASTA programs provide a comprehensive set of rapid similarity searching tools ( fasta36, fastx36, tfastx36, fasty36, tfasty36), similar to those provided by the BLAST package, as well as programs for slower, optimal, local and global similarity searches ( ssearch36, ggsearch36) and for searching with short peptides and oligonucleotides ( fasts36, fastm36). The FASTA programs use an empirical strategy for estimating statistical significance that accommodates a range of similarity scoring matrices and gap penalties, improving alignment boundary accuracy and search sensitivity (Unit 3.5). The FASTA programs can produce “BLAST-like” alignment and tabular output, for ease of integration into existing analysis pipelines, and can search small, representative databases, and then report results for a larger set of sequences, using links from the smaller dataset. The FASTA programs work with a wide variety of database formats, including mySQL and postgreSQL databases (Unit 9.4). The programs also provide a strategy for integrating domain and active site annotations into alignments and highlighting the mutational state of functionally critical residues. These protocols describe how to use the FASTA programs to characterize protein and DNA sequences, using protein:protein, protein:DNA, and DNA:DNA comparisons. PMID:27010337
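
    A typical protein:protein comparison with the programs named above can be driven from Python; the query and library file names are assumptions, fasta36 is assumed to be on the PATH, and the '-m 8' tabular-output option (mirroring the "BLAST-like" output mentioned in the abstract) should be checked against the installed version's documentation.

        import subprocess

        query, library = "query.faa", "swissprot.faa"   # assumed input files

        # Run a protein:protein search; "-m 8" requests BLAST-like tabular output.
        result = subprocess.run(["fasta36", "-m", "8", query, library],
                                capture_output=True, text=True, check=True)

        for line in result.stdout.splitlines()[:5]:   # first few hits: query, subject, %identity, ...
            print(line)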

  13. Tuning into Scorpius X-1: adapting a continuous gravitational-wave search for a known binary system

    NASA Astrophysics Data System (ADS)

    Meadors, Grant David; Goetz, Evan; Riles, Keith

    2016-05-01

    We describe how the TwoSpect data analysis method for continuous gravitational waves (GWs) has been tuned for directed sources such as the low-mass X-ray binary (LMXB), Scorpius X-1 (Sco X-1). A comparison of five search algorithms generated simulations of the orbital and GW parameters of Sco X-1. Whereas that comparison focused on relative performance, here the simulations help quantify the sensitivity enhancement and parameter estimation abilities of this directed method, derived from an all-sky search for unknown sources, using doubly Fourier-transformed data. Sensitivity is shown to be enhanced when the source sky location and period are known, because we can run a fully templated search, bypassing the all-sky hierarchical stage using an incoherent harmonic sum. The GW strain and frequency, as well as the projected semi-major axis of the binary system, are recovered and uncertainty estimated, for simulated signals that are detected. Upper limits for GW strain are set for undetected signals. Applications to future GW observatory data are discussed. Robust against spin-wandering and computationally tractable despite an unknown frequency, this directed search is an important new tool for finding gravitational signals from LMXBs.
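
    The "doubly Fourier-transformed data" underlying this method can be illustrated with a toy numpy example: a first Fourier transform yields short-time spectra, and a second transform over time in each frequency bin exposes periodic modulation such as an orbital period. This is a conceptual sketch only, not the TwoSpect pipeline.

        import numpy as np

        # Toy time series: a 3 Hz tone frequency-modulated with a 60 s period, plus noise.
        fs = 16.0
        t = np.arange(0, 4096) / fs
        x = np.sin(2 * np.pi * 3.0 * t + 2.0 * np.sin(2 * np.pi * t / 60.0)) + 0.5 * np.random.randn(t.size)

        # First Fourier transform: power spectra of consecutive short segments.
        seg = 256
        spectra = np.abs(np.fft.rfft(x.reshape(-1, seg), axis=1)) ** 2

        # Second Fourier transform, across segments for each frequency bin: periodic
        # spectral modulation appears as excess power at the modulation frequency.
        second = np.abs(np.fft.rfft(spectra, axis=0)) ** 2
        print(spectra.shape, second.shape)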

  14. Finding Protein and Nucleotide Similarities with FASTA.

    PubMed

    Pearson, William R

    2016-03-24

    The FASTA programs provide a comprehensive set of rapid similarity searching tools (fasta36, fastx36, tfastx36, fasty36, tfasty36), similar to those provided by the BLAST package, as well as programs for slower, optimal, local, and global similarity searches (ssearch36, ggsearch36), and for searching with short peptides and oligonucleotides (fasts36, fastm36). The FASTA programs use an empirical strategy for estimating statistical significance that accommodates a range of similarity scoring matrices and gap penalties, improving alignment boundary accuracy and search sensitivity. The FASTA programs can produce "BLAST-like" alignment and tabular output, for ease of integration into existing analysis pipelines, and can search small, representative databases, and then report results for a larger set of sequences, using links from the smaller dataset. The FASTA programs work with a wide variety of database formats, including mySQL and postgreSQL databases. The programs also provide a strategy for integrating domain and active site annotations into alignments and highlighting the mutational state of functionally critical residues. These protocols describe how to use the FASTA programs to characterize protein and DNA sequences, using protein:protein, protein:DNA, and DNA:DNA comparisons. Copyright © 2016 John Wiley & Sons, Inc.

  15. A Method for the Design and Development of Medical or Health Care Information Websites to Optimize Search Engine Results Page Rankings on Google

    PubMed Central

    Cummins, Niamh Maria; Hannigan, Ailish; Shannon, Bill; Dunne, Colum; Cullen, Walter

    2013-01-01

    Background The Internet is a widely used source of information for patients searching for medical/health care information. While many studies have assessed existing medical/health care information on the Internet, relatively few have examined methods for design and delivery of such websites, particularly those aimed at the general public. Objective This study describes a method of evaluating material for new medical/health care websites, or for assessing those already in existence, which is correlated with higher rankings on Google's Search Engine Results Pages (SERPs). Methods A website quality assessment (WQA) tool was developed using criteria related to the quality of the information to be contained in the website in addition to an assessment of the readability of the text. This was retrospectively applied to assess existing websites that provide information about generic medicines. The reproducibility of the WQA tool and its predictive validity were assessed in this study. Results The WQA tool demonstrated very high reproducibility (intraclass correlation coefficient=0.95) between 2 independent users. A moderate to strong correlation was found between WQA scores and rankings on Google SERPs. Analogous correlations were seen between rankings and readability of websites as determined by Flesch Reading Ease and Flesch-Kincaid Grade Level scores. Conclusions The use of the WQA tool developed in this study is recommended as part of the design phase of a medical or health care information provision website, along with assessment of readability of the material to be used. This may ensure that the website performs better on Google searches. The tool can also be used retrospectively to make improvements to existing websites, thus, potentially enabling better Google search result positions without incurring the costs associated with Search Engine Optimization (SEO) professionals or paid promotion. PMID:23981848
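
    The readability measures applied alongside the WQA tool are standard formulas; the sketch below uses a crude vowel-group syllable heuristic, so its scores will differ slightly from dedicated readability software.

        import re

        def count_syllables(word):
            # Crude heuristic: count groups of vowels; adequate for illustrating the formulas.
            return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

        def readability(text):
            sentences = max(1, len(re.findall(r"[.!?]+", text)))
            words = re.findall(r"[A-Za-z']+", text)
            syllables = sum(count_syllables(w) for w in words)
            wps, spw = len(words) / sentences, syllables / len(words)
            fre = 206.835 - 1.015 * wps - 84.6 * spw        # Flesch Reading Ease
            fkgl = 0.39 * wps + 11.8 * spw - 15.59          # Flesch-Kincaid Grade Level
            return fre, fkgl

        print(readability("Generic medicines contain the same active ingredient as the branded product."))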

  16. Natural Language Search Interfaces: Health Data Needs Single-Field Variable Search.

    PubMed

    Jay, Caroline; Harper, Simon; Dunlop, Ian; Smith, Sam; Sufi, Shoaib; Goble, Carole; Buchan, Iain

    2016-01-14

    Data discovery, particularly the discovery of key variables and their inter-relationships, is key to secondary data analysis, and in-turn, the evolving field of data science. Interface designers have presumed that their users are domain experts, and so they have provided complex interfaces to support these "experts." Such interfaces hark back to a time when searches needed to be accurate first time as there was a high computational cost associated with each search. Our work is part of a governmental research initiative between the medical and social research funding bodies to improve the use of social data in medical research. The cross-disciplinary nature of data science can make no assumptions regarding the domain expertise of a particular scientist, whose interests may intersect multiple domains. Here we consider the common requirement for scientists to seek archived data for secondary analysis. This has more in common with search needs of the "Google generation" than with their single-domain, single-tool forebears. Our study compares a Google-like interface with traditional ways of searching for noncomplex health data in a data archive. Two user interfaces are evaluated for the same set of tasks in extracting data from surveys stored in the UK Data Archive (UKDA). One interface, Web search, is "Google-like," enabling users to browse, search for, and view metadata about study variables, whereas the other, traditional search, has standard multioption user interface. Using a comprehensive set of tasks with 20 volunteers, we found that the Web search interface met data discovery needs and expectations better than the traditional search. A task × interface repeated measures analysis showed a main effect indicating that answers found through the Web search interface were more likely to be correct (F1,19=37.3, P<.001), with a main effect of task (F3,57=6.3, P<.001). Further, participants completed the task significantly faster using the Web search interface (F1,19=18.0, P<.001). There was also a main effect of task (F2,38=4.1, P=.025, Greenhouse-Geisser correction applied). Overall, participants were asked to rate learnability, ease of use, and satisfaction. Paired mean comparisons showed that the Web search interface received significantly higher ratings than the traditional search interface for learnability (P=.002, 95% CI [0.6-2.4]), ease of use (P<.001, 95% CI [1.2-3.2]), and satisfaction (P<.001, 95% CI [1.8-3.5]). The results show superior cross-domain usability of Web search, which is consistent with its general familiarity and with enabling queries to be refined as the search proceeds, which treats serendipity as part of the refinement. The results provide clear evidence that data science should adopt single-field natural language search interfaces for variable search supporting in particular: query reformulation; data browsing; faceted search; surrogates; relevance feedback; summarization, analytics, and visual presentation.

  17. Natural Language Search Interfaces: Health Data Needs Single-Field Variable Search

    PubMed Central

    Smith, Sam; Sufi, Shoaib; Goble, Carole; Buchan, Iain

    2016-01-01

    Background Data discovery, particularly the discovery of key variables and their inter-relationships, is key to secondary data analysis, and in-turn, the evolving field of data science. Interface designers have presumed that their users are domain experts, and so they have provided complex interfaces to support these “experts.” Such interfaces hark back to a time when searches needed to be accurate first time as there was a high computational cost associated with each search. Our work is part of a governmental research initiative between the medical and social research funding bodies to improve the use of social data in medical research. Objective The cross-disciplinary nature of data science can make no assumptions regarding the domain expertise of a particular scientist, whose interests may intersect multiple domains. Here we consider the common requirement for scientists to seek archived data for secondary analysis. This has more in common with search needs of the “Google generation” than with their single-domain, single-tool forebears. Our study compares a Google-like interface with traditional ways of searching for noncomplex health data in a data archive. Methods Two user interfaces are evaluated for the same set of tasks in extracting data from surveys stored in the UK Data Archive (UKDA). One interface, Web search, is “Google-like,” enabling users to browse, search for, and view metadata about study variables, whereas the other, traditional search, has standard multioption user interface. Results Using a comprehensive set of tasks with 20 volunteers, we found that the Web search interface met data discovery needs and expectations better than the traditional search. A task × interface repeated measures analysis showed a main effect indicating that answers found through the Web search interface were more likely to be correct (F 1,19=37.3, P<.001), with a main effect of task (F 3,57=6.3, P<.001). Further, participants completed the task significantly faster using the Web search interface (F 1,19=18.0, P<.001). There was also a main effect of task (F 2,38=4.1, P=.025, Greenhouse-Geisser correction applied). Overall, participants were asked to rate learnability, ease of use, and satisfaction. Paired mean comparisons showed that the Web search interface received significantly higher ratings than the traditional search interface for learnability (P=.002, 95% CI [0.6-2.4]), ease of use (P<.001, 95% CI [1.2-3.2]), and satisfaction (P<.001, 95% CI [1.8-3.5]). The results show superior cross-domain usability of Web search, which is consistent with its general familiarity and with enabling queries to be refined as the search proceeds, which treats serendipity as part of the refinement. Conclusions The results provide clear evidence that data science should adopt single-field natural language search interfaces for variable search supporting in particular: query reformulation; data browsing; faceted search; surrogates; relevance feedback; summarization, analytics, and visual presentation. PMID:26769334

  18. Formative evaluation of a patient-specific clinical knowledge summarization tool

    PubMed Central

    Del Fiol, Guilherme; Mostafa, Javed; Pu, Dongqiuye; Medlin, Richard; Slager, Stacey; Jonnalagadda, Siddhartha R.; Weir, Charlene R.

    2015-01-01

    Objective To iteratively design a prototype of a computerized clinical knowledge summarization (CKS) tool aimed at helping clinicians finding answers to their clinical questions; and to conduct a formative assessment of the usability, usefulness, efficiency, and impact of the CKS prototype on physicians’ perceived decision quality compared with standard search of UpToDate and PubMed. Materials and methods Mixed-methods observations of the interactions of 10 physicians with the CKS prototype vs. standard search in an effort to solve clinical problems posed as case vignettes. Results The CKS tool automatically summarizes patient-specific and actionable clinical recommendations from PubMed (high quality randomized controlled trials and systematic reviews) and UpToDate. Two thirds of the study participants completed 15 out of 17 usability tasks. The median time to task completion was less than 10 s for 12 of the 17 tasks. The difference in search time between the CKS and standard search was not significant (median = 4.9 vs. 4.5 min). Physician’s perceived decision quality was significantly higher with the CKS than with manual search (mean = 16.6 vs. 14.4; p = 0.036). Conclusions The CKS prototype was well-accepted by physicians both in terms of usability and usefulness. Physicians perceived better decision quality with the CKS prototype compared to standard search of PubMed and UpToDate within a similar search time. Due to the formative nature of this study and a small sample size, conclusions regarding efficiency and efficacy are exploratory. PMID:26612774

  19. Formative evaluation of a patient-specific clinical knowledge summarization tool.

    PubMed

    Del Fiol, Guilherme; Mostafa, Javed; Pu, Dongqiuye; Medlin, Richard; Slager, Stacey; Jonnalagadda, Siddhartha R; Weir, Charlene R

    2016-02-01

    To iteratively design a prototype of a computerized clinical knowledge summarization (CKS) tool aimed at helping clinicians finding answers to their clinical questions; and to conduct a formative assessment of the usability, usefulness, efficiency, and impact of the CKS prototype on physicians' perceived decision quality compared with standard search of UpToDate and PubMed. Mixed-methods observations of the interactions of 10 physicians with the CKS prototype vs. standard search in an effort to solve clinical problems posed as case vignettes. The CKS tool automatically summarizes patient-specific and actionable clinical recommendations from PubMed (high quality randomized controlled trials and systematic reviews) and UpToDate. Two thirds of the study participants completed 15 out of 17 usability tasks. The median time to task completion was less than 10 s for 12 of the 17 tasks. The difference in search time between the CKS and standard search was not significant (median = 4.9 vs. 4.5 min). Physician's perceived decision quality was significantly higher with the CKS than with manual search (mean = 16.6 vs. 14.4; p = 0.036). The CKS prototype was well-accepted by physicians both in terms of usability and usefulness. Physicians perceived better decision quality with the CKS prototype compared to standard search of PubMed and UpToDate within a similar search time. Due to the formative nature of this study and a small sample size, conclusions regarding efficiency and efficacy are exploratory. Published by Elsevier Ireland Ltd.

  20. OpenFresco | OpenFresco

    Science.gov Websites

    Project website for OpenFresco and OpenFrescoExpress, providing examples and tools for staff and research students learning about hybrid simulation and starting to use this experimental technique, with references to the Pacific Earthquake Engineering Research Center (PEER) and others; the remaining extracted text is site navigation.

  1. Engaging Patients as Partners in Developing Patient-Reported Outcome Measures in Cancer-A Review of the Literature.

    PubMed

    Camuso, Natasha; Bajaj, Prerna; Dudgeon, Deborah; Mitera, Gunita

    2016-08-01

    Tools to collect patient-reported outcome measures (PROMs) are frequently used in the healthcare setting to collect information that is most meaningful to patients. Due to discordance among how patients and healthcare providers rank symptoms that are considered most meaningful to the patient, engagement of patients in the development of PROMs is extremely important. This review aimed to identify studies that described how patients are involved in the item generation stage of cancer-specific PROM tools developed for cancer patients. A literature search was conducted using keywords relevant to PROMs, cancer, and patient engagement. A manual search of relevant reference lists was also conducted. Inclusion criteria stipulated that publications must describe patient engagement in the item generation stage of development of cancer-specific PROM tools. Results were excluded if they were duplicate findings or non-English. The initial search yielded 230 publications. After removal of duplicates and review of publications, 6 were deemed relevant. Fourteen additional publications were retrieved through a manual search of references from relevant publications. A total of 13 unique PROM tools that included patient input in item generation were identified. The most common method of patient engagement was through qualitative interviews or focus groups. Despite recommendations from international groups and the emphasized importance of incorporating patient feedback in all stages of development of PROMs, few unique tools have incorporated patient input in item generation of cancer-specific tools. Moving forward, a framework of best practices on how to best engage patients in developing PROMs is warranted to support high-quality patient-centered care.

  2. FOAMSearch.net: A custom search engine for emergency medicine and critical care.

    PubMed

    Raine, Todd; Thoma, Brent; Chan, Teresa M; Lin, Michelle

    2015-08-01

    The number of online resources read by and pertinent to clinicians has increased dramatically. However, most healthcare professionals still use mainstream search engines as their primary port of entry to the resources on the Internet. These search engines use algorithms that do not make it easy to find clinician-oriented resources. FOAMSearch, a custom search engine (CSE), was developed to find relevant, high-quality online resources for emergency medicine and critical care (EMCC) clinicians. Using Google™ algorithms, it searches a vetted list of >300 blogs, podcasts, wikis, knowledge translation tools, clinical decision support tools and medical journals. Utilisation has increased progressively to >3000 users/month since its launch in 2011. Further study of the role of CSEs to find medical resources is needed, and it might be possible to develop similar CSEs for other areas of medicine. © 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.

  3. PathVisio-Faceted Search: an exploration tool for multi-dimensional navigation of large pathways

    PubMed Central

    Fried, Jake Y.; Luna, Augustin

    2013-01-01

    Purpose: The PathVisio-Faceted Search plugin helps users explore and understand complex pathways by overlaying experimental data and data from webservices, such as Ensembl BioMart, onto diagrams drawn using formalized notations in PathVisio. The plugin then provides a filtering mechanism, known as a faceted search, to find and highlight diagram nodes (e.g. genes and proteins) of interest based on imported data. The tool additionally provides a flexible scripting mechanism to handle complex queries. Availability: The PathVisio-Faceted Search plugin is compatible with PathVisio 3.0 and above. PathVisio is compatible with Windows, Mac OS X and Linux. The plugin, documentation, example diagrams and Groovy scripts are available at http://PathVisio.org/wiki/PathVisioFacetedSearchHelp. The plugin is free, open-source and licensed by the Apache 2.0 License. Contact: augustin@mail.nih.gov or jakeyfried@gmail.com PMID:23547033

  4. A user-friendly tool for medical-related patent retrieval.

    PubMed

    Pasche, Emilie; Gobeill, Julien; Teodoro, Douglas; Gaudinat, Arnaud; Vishnyakova, Dina; Lovis, Christian; Ruch, Patrick

    2012-01-01

    Health-related information retrieval is complicated by the variety of nomenclatures available to name entities, since different communities of users will use different names for the same entity. We present in this report the development and evaluation of a user-friendly interactive Web application aimed at facilitating health-related patent search. Our tool, called TWINC, relies on a search engine tuned during several patent retrieval competitions, enhanced with intelligent interaction modules, such as chemical query, normalization and expansion. While the related-article search functionality showed promising performance, the ad hoc search produced more mixed results. Nonetheless, TWINC performed well during the PatOlympics competition and was appreciated by intellectual property experts. This result should be weighed against the limited evaluation sample. We can also assume that it can be customized for corporate search environments to process domain- and company-specific vocabularies, including non-English literature and patent reports.

  5. Need a Special Tool? Make It Yourself!

    ERIC Educational Resources Information Center

    Mordini, Robert D.

    2007-01-01

    People seem to have created a tool for every purpose. If a person searches diligently, he can usually find the tool he needs. However, several things may affect this process such as time, cost of the tool, and limited tool sources. The solution to all these is to make the tool yourself. People have made tools for many thousands of years, and with…

  6. Datasets2Tools, repository and search engine for bioinformatics datasets, tools and canned analyses

    PubMed Central

    Torre, Denis; Krawczuk, Patrycja; Jagodnik, Kathleen M.; Lachmann, Alexander; Wang, Zichen; Wang, Lily; Kuleshov, Maxim V.; Ma’ayan, Avi

    2018-01-01

    Biomedical data repositories such as the Gene Expression Omnibus (GEO) enable the search and discovery of relevant biomedical digital data objects. Similarly, resources such as OMICtools index bioinformatics tools that can extract knowledge from these digital data objects. However, systematic access to pre-generated ‘canned’ analyses applied by bioinformatics tools to biomedical digital data objects is currently not available. Datasets2Tools is a repository indexing 31,473 canned bioinformatics analyses applied to 6,431 datasets. The Datasets2Tools repository also indexes 4,901 published bioinformatics software tools and all the analyzed datasets. Datasets2Tools enables users to rapidly find datasets, tools, and canned analyses through an intuitive web interface, a Google Chrome extension, and an API. Furthermore, Datasets2Tools provides a platform for contributing canned analyses, datasets, and tools, as well as evaluating these digital objects according to their compliance with the findable, accessible, interoperable, and reusable (FAIR) principles. By incorporating community engagement, Datasets2Tools promotes sharing of digital resources to stimulate the extraction of knowledge from biomedical research data. Datasets2Tools is freely available from: http://amp.pharm.mssm.edu/datasets2tools. PMID:29485625

  7. Datasets2Tools, repository and search engine for bioinformatics datasets, tools and canned analyses.

    PubMed

    Torre, Denis; Krawczuk, Patrycja; Jagodnik, Kathleen M; Lachmann, Alexander; Wang, Zichen; Wang, Lily; Kuleshov, Maxim V; Ma'ayan, Avi

    2018-02-27

    Biomedical data repositories such as the Gene Expression Omnibus (GEO) enable the search and discovery of relevant biomedical digital data objects. Similarly, resources such as OMICtools index bioinformatics tools that can extract knowledge from these digital data objects. However, systematic access to pre-generated 'canned' analyses applied by bioinformatics tools to biomedical digital data objects is currently not available. Datasets2Tools is a repository indexing 31,473 canned bioinformatics analyses applied to 6,431 datasets. The Datasets2Tools repository also indexes 4,901 published bioinformatics software tools and all the analyzed datasets. Datasets2Tools enables users to rapidly find datasets, tools, and canned analyses through an intuitive web interface, a Google Chrome extension, and an API. Furthermore, Datasets2Tools provides a platform for contributing canned analyses, datasets, and tools, as well as evaluating these digital objects according to their compliance with the findable, accessible, interoperable, and reusable (FAIR) principles. By incorporating community engagement, Datasets2Tools promotes sharing of digital resources to stimulate the extraction of knowledge from biomedical research data. Datasets2Tools is freely available from: http://amp.pharm.mssm.edu/datasets2tools.

  8. Google Scholar as replacement for systematic literature searches: good relative recall and precision are not enough

    PubMed Central

    2013-01-01

    Background Recent research indicates a high recall in Google Scholar searches for systematic reviews. These reports raised high expectations of Google Scholar as a unified and easy-to-use search interface. However, studies on the coverage of Google Scholar rarely used the search interface in a realistic approach but instead merely checked for the existence of gold standard references. In addition, the severe limitations of the Google Scholar search interface must be taken into consideration when comparing it with professional literature retrieval tools. The objectives of this work are to measure the relative recall and precision of searches with Google Scholar under conditions which are derived from structured search procedures conventional in scientific literature retrieval; and to provide an overview of current advantages and disadvantages of the Google Scholar search interface in scientific literature retrieval. Methods General and MEDLINE-specific search strategies were retrieved from 14 Cochrane systematic reviews. Cochrane systematic review search strategies were translated into Google Scholar search expressions as faithfully as possible while preserving the original search semantics. The references of the included studies from the Cochrane reviews were checked for their inclusion in the result sets of the Google Scholar searches. Relative recall and precision were calculated. Results We investigated Cochrane reviews with between 11 and 70 included references each, 396 references in total. The Google Scholar searches produced result sets of between 4,320 and 67,800 hits, 291,190 hits in total. The relative recall of the Google Scholar searches had a minimum of 76.2% and a maximum of 100% (7 searches). The precision of the Google Scholar searches had a minimum of 0.05% and a maximum of 0.92%. The overall relative recall for all searches was 92.9%, and the overall precision was 0.13%. Conclusion The reported relative recall must be interpreted with care. It is a quality indicator of Google Scholar confined to an experimental setting which is unavailable in systematic retrieval due to the severe limitations of the Google Scholar search interface. Currently, Google Scholar does not provide necessary elements for systematic scientific literature retrieval such as tools for incremental query optimization, export of a large number of references, a visual search builder or a history function. Google Scholar is not ready as a professional searching tool for tasks where structured retrieval methodology is necessary. PMID:24160679
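
    Relative recall and precision follow directly from the overlap between the gold-standard (Cochrane-included) references and the retrieved hits; a minimal Python sketch with hypothetical counts (not the review's data):

      def relative_recall(gold_refs, retrieved):
          """Fraction of gold-standard references found in the retrieved set."""
          return len(gold_refs & retrieved) / len(gold_refs)

      def precision(gold_refs, retrieved):
          """Fraction of retrieved hits that are gold-standard references."""
          return len(gold_refs & retrieved) / len(retrieved)

      # Hypothetical example: 40 included references, 37 of them among 30,000 hits.
      gold = {f"ref{i}" for i in range(40)}
      hits = {f"ref{i}" for i in range(3, 40)} | {f"hit{i}" for i in range(29963)}
      print(f"relative recall = {relative_recall(gold, hits):.1%}")  # 92.5%
      print(f"precision       = {precision(gold, hits):.3%}")        # ~0.123%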

  9. FlavonoidSearch: A system for comprehensive flavonoid annotation by mass spectrometry.

    PubMed

    Akimoto, Nayumi; Ara, Takeshi; Nakajima, Daisuke; Suda, Kunihiro; Ikeda, Chiaki; Takahashi, Shingo; Muneto, Reiko; Yamada, Manabu; Suzuki, Hideyuki; Shibata, Daisuke; Sakurai, Nozomu

    2017-04-28

    Currently, in mass spectrometry-based metabolomics, limited reference mass spectra are available for flavonoid identification. In the present study, a database of probable mass fragments for 6,867 known flavonoids (FsDatabase) was manually constructed based on new structure- and fragmentation-related rules using new heuristics to overcome flavonoid complexity. We developed the FlavonoidSearch system for flavonoid annotation, which consists of the FsDatabase and a computational tool (FsTool) to automatically search the FsDatabase using the mass spectra of metabolite peaks as queries. This system showed the highest identification accuracy for the flavonoid aglycone when compared to existing tools and revealed accurate discrimination between the flavonoid aglycone and other compounds. Sixteen new flavonoids were found from parsley, and the diversity of the flavonoid aglycone among different fruits and vegetables was investigated.

  10. A Registry for Planetary Data Tools and Services

    NASA Astrophysics Data System (ADS)

    Hardman, S.; Cayanan, M.; Hughes, J. S.; Joyner, R.; Crichton, D.; Law, E.

    2018-04-01

    The PDS Engineering Node has upgraded a prototype Tool Registry developed by the International Planetary Data Alliance to increase the visibility and enhance functionality along with incorporating the registered tools into PDS data search results.

  11. Enterprise Reference Library

    NASA Technical Reports Server (NTRS)

    Bickham, Grandin; Saile, Lynn; Havelka, Jacque; Fitts, Mary

    2011-01-01

    Introduction: Johnson Space Center (JSC) offers two extensive libraries that contain journals, research literature and electronic resources. Searching capabilities are available to those individuals residing onsite or through a librarian's search. Many individuals have rich collections of references, but no mechanisms to share reference libraries across researchers, projects, or directorates exist. Likewise, information regarding which references are provided to which individuals is not available, resulting in duplicate requests, redundant labor costs and associated copying fees. In addition, this tends to limit collaboration between colleagues and promotes the establishment of individual, unshared silos of information. The Integrated Medical Model (IMM) team has utilized a centralized reference management tool during the development, test, and operational phases of this project. The Enterprise Reference Library project expands the capabilities developed for IMM to address the above issues and enhance collaboration across JSC. Method: After significant market analysis for a multi-user reference management tool, no available commercial tool was found to meet this need, so a software program was built around a commercial tool, Reference Manager 12 by The Thomson Corporation. A use case approach guided the requirements development phase. The premise of the design is that individuals use their own reference management software and export to SharePoint when their library is incorporated into the Enterprise Reference Library. This results in a searchable user-specific library application. An accompanying share folder will warehouse the electronic full-text articles, which allows the global user community to access full-text articles. Discussion: An enterprise reference library solution can provide a multidisciplinary collection of full-text articles. This approach improves efficiency in obtaining and storing reference material while greatly reducing labor, purchasing and duplication costs. Most importantly, increasing collaboration across research groups provides unprecedented access to information relevant to NASA's mission. Conclusion: This project is an expansion and cost-effective leveraging of the existing JSC centralized library. Adding keyword and author search capabilities and an alert function for notifications about new articles, based on users' profiles, represent examples of future enhancements.

  12. Beyond Keyword Search: Representations and Models for Personalization

    DTIC Science & Technology

    2013-01-29

    model of information flow in the blogosphere. Blogscope is intended to be an analysis and visualization tool for the blogosphere. Unlike us, they are...

  13. Genetic Simulation Resources: a website for the registration and discovery of genetic data simulators

    PubMed Central

    Peng, Bo; Chen, Huann-Sheng; Mechanic, Leah E.; Racine, Ben; Clarke, John; Clarke, Lauren; Gillanders, Elizabeth; Feuer, Eric J.

    2013-01-01

    Summary: Many simulation methods and programs have been developed to simulate genetic data of the human genome. These data have been widely used, for example, to predict properties of populations retrospectively or prospectively according to mathematically intractable genetic models, and to assist the validation, statistical inference and power analysis of a variety of statistical models. However, owing to the differences in type of genetic data of interest, simulation methods, evolutionary features, input and output formats, terminologies and assumptions for different applications, choosing the right tool for a particular study can be a resource-intensive process that usually involves searching, downloading and testing many different simulation programs. Genetic Simulation Resources (GSR) is a website provided by the National Cancer Institute (NCI) that aims to help researchers compare and choose the appropriate simulation tools for their studies. This website allows authors of simulation software to register their applications and describe them with well-defined attributes, thus allowing site users to search and compare simulators according to specified features. Availability: http://popmodels.cancercontrol.cancer.gov/gsr. Contact: gsr@mail.nih.gov PMID:23435068

  14. Aggregation Tool to Create Curated Data albums to Support Disaster Recovery and Response

    NASA Technical Reports Server (NTRS)

    Ramachandran, Rahul; Kulkarni, Ajinkya; Maskey, Manil; Bakare, Rohan; Basyal, Sabin; Li, Xiang; Flynn, Shannon

    2014-01-01

    Despite advances in science and technology of prediction and simulation of natural hazards, losses incurred due to natural disasters keep growing every year. Natural disasters cause more economic losses as compared to anthropogenic disasters. Economic losses due to natural hazards are estimated to be around $6-$10 billion annually for the U.S., and this number keeps increasing every year. This increase has been attributed to population growth and migration to more hazard-prone locations such as coasts. As this trend continues, in concert with shifts in weather patterns caused by climate change, it is anticipated that losses associated with natural disasters will keep growing substantially. One of the challenges disaster response and recovery analysts face is to quickly find, access and utilize a vast variety of relevant geospatial data collected by different federal agencies such as DoD, NASA, NOAA, EPA, USGS, etc. Some examples of these data sets include high spatio-temporal resolution multi/hyperspectral satellite imagery, model prediction outputs from weather models, latest radar scans, and measurements from an array of sensor networks such as the Integrated Ocean Observing System. More often, analysts may be familiar with limited but specific datasets and are often unaware of or unfamiliar with a large quantity of other useful resources. Finding airborne or satellite data useful to a natural disaster event often requires a time-consuming search through web pages and data archives. Additional information related to damages, deaths, and injuries requires extensive online searches for news reports and official report summaries. An analyst must also sift through vast amounts of potentially useful digital information captured by the general public such as geo-tagged photos, videos and real-time damage updates within Twitter feeds. Collecting and aggregating these information fragments can provide useful information in assessing damage in real time and help direct recovery efforts. The search process for the analyst could be made much more efficient and productive if a tool could go beyond a typical search engine and provide not just links to web sites but actual links to specific data relevant to the natural disaster, parse unstructured reports for useful information nuggets, as well as gather other related reports, summaries, news stories, and images. This presentation will describe a semantic aggregation tool developed to address a similar problem for Earth Science researchers. This tool provides automated curation, and creates "Data Albums" to support case studies. The generated "Data Albums" are compiled collections of information related to a specific science topic or event, containing links to relevant data files (granules) from different instruments; tools and services for visualization and analysis; information about the event contained in news reports, and images or videos to supplement research analysis. An ontology-based relevancy-ranking algorithm drives the curation of relevant data sets for a given event. This tool is now being used to generate a catalog of Hurricane Case Studies at the Global Hydrology Resource Center (GHRC), one of NASA's Distributed Active Archive Centers. Another instance of the Data Albums tool is currently being created in collaboration with NASA/MSFC's SPoRT Center, which conducts research on unique NASA products and capabilities that can be transitioned to the operational community to solve forecast problems. This new instance focuses on severe weather to support SPoRT researchers in their model evaluation studies.

  15. Predicting periodontitis progression?

    PubMed

    Ferraiolo, Debra M

    2016-03-01

    Cochrane Library, Ovid, Medline, Embase and LILACS were searched using no language restrictions and included information up to July 2014. Bibliographic references of included articles and related review articles were hand searched. On-line hand searching of recent issues of key periodontal journals was performed (Journal of Clinical Periodontology, Journal of Dental Research, Journal of Periodontal Research, Journal of Periodontology, Oral Health and Preventive Dentistry). Prospective and retrospective cohort studies were used for answering the question of prediction since there were no randomised controlled trials on this topic. Risk of bias was assessed using the validated Newcastle-Ottawa quality assessment scale for non-randomised studies. Cross-sectional studies were included in the summary of currently reported risk assessment tools but not for risk of progression of disease, due to the inability to properly assess bias in these types of studies. Titles and abstracts were scanned by two reviewers independently. Full reports were obtained for those articles meeting inclusion criteria or those with insufficient information in the title to make a decision. Any published risk assessment tool was considered. The tool was defined to include any composite measure of patient-level risk directed towards determining the probability for further disease progression in adults with periodontitis. Periodontitis was defined to include both chronic and aggressive forms in the adult population. Outcomes included changes in attachment levels and/or deepening of periodontal pockets in millimeters in study populations undergoing supportive periodontal therapy. Data extraction was performed independently and in collaboration by two reviewers; completed evidence tables were reviewed by three reviewers. Studies were each given a descriptive summary to assess the quantity of data as well as further assessment of study variations within study characteristics. This also allowed for determining the suitability of data for further quantitative analysis (meta-analysis). Unfortunately, the heterogeneity of the data did not allow this. After screening, 19 studies fitted the inclusion criteria, identifying five different patient-based periodontal risk assessment tools. DenPlan Excel/Previsor Patient Assessment (DEP-PA) and its modifications were used in five studies. The HIDEP model, the dentition risk system (DRS) and the risk assessment-based individualised treatment (RABIT) were each used in one study. Lastly, the periodontal risk assessment (PRA) and its modifications were found in 12 publications. PRA uses the following factors to assess risk of recurrence of disease: percentage of bleeding on probing, loss of teeth from a total of 28 teeth, loss of periodontal support in relation to the patient's age, prevalence of residual pockets greater than 4 mm (3-5 mm), systemic and genetic conditions and environmental factors, such as cigarette smoking. Ten included studies had cohort designs (N = 2130) spanning three to 12 years with different follow-up times. Generally, these studies reflected that different assessment tools were able to separate subjects with differing probability of disease progression and tooth loss. The observed effect was dose dependent (the higher the estimation of risk, the higher the level of observed disease or tooth loss). Six cross-sectional studies (N = 1078) reported the comparison of different assessment tools, adjusted or unadjusted associations with periodontal disease, and subjective risk assessments provided by the tools. There were three articles noted in the flow diagram as articles proposing the tool. Qualitative analysis reflects that parameters are similar across the studies but differences are present in how these parameters were assessed. In treated populations, results of patient-based risk assessments predicted periodontitis progression and tooth loss in various populations. Additional research on the utility of risk assessment and its results in improving patient management is needed.

  16. Moon Search Algorithms for NASA's Dawn Mission to Asteroid Vesta

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mcfadden, Lucy A.; Skillman, David R.; McLean, Brian; Mutchler, Max; Carsenty, Uri; Palmer, Eric E.

    2012-01-01

    A moon or natural satellite is a celestial body that orbits a planetary body such as a planet, dwarf planet, or an asteroid. Scientists seek to understand the origin and evolution of our solar system by studying the moons of these bodies. Additionally, searches for satellites of planetary bodies can be important to protect the safety of a spacecraft as it approaches or orbits a planetary body. If a satellite of a celestial body is found, the mass of that body can also be calculated once its orbit is determined. Ensuring the Dawn spacecraft's safety on its mission to the asteroid Vesta primarily motivated the work of Dawn's Satellite Working Group (SWG) in the summer of 2011. Dawn mission scientists and engineers utilized various computational tools and techniques for Vesta's satellite search. The objectives of this paper are to 1) introduce the natural satellite search problem, 2) present the computational challenges, approaches, and tools used when addressing this problem, and 3) describe applications of various image processing and computational algorithms for performing satellite searches to the electronic imaging and computer science community. Furthermore, we hope that this communication will enable Dawn mission scientists to improve their satellite search algorithms and tools and be better prepared for performing the same investigation in 2015, when the spacecraft is scheduled to approach and orbit the dwarf planet Ceres.

  17. Development of Response Surface Models for Rapid Analysis & Multidisciplinary Optimization of Launch Vehicle Design Concepts

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1999-01-01

    Multidisciplinary design optimization (MDO) is an important step in the design and evaluation of launch vehicles, since it has a significant impact on performance and lifecycle cost. The objective in MDO is to search the design space to determine the values of design parameters that optimize the performance characteristics subject to system constraints. The Vehicle Analysis Branch (VAB) at NASA Langley Research Center has computerized analysis tools in many of the disciplines required for the design and analysis of launch vehicles. Vehicle performance characteristics can be determined by the use of these computerized analysis tools. The next step is to optimize the system performance characteristics subject to multidisciplinary constraints. However, most of the complex sizing and performance evaluation codes used for launch vehicle design are stand-alone tools, operated by disciplinary experts. They are, in general, difficult to integrate and use directly for MDO. An alternative has been to utilize response surface methodology (RSM) to obtain polynomial models that approximate the functional relationships between performance characteristics and design variables. These approximation models, called response surface models, are then used to integrate the disciplines using mathematical programming methods for efficient system-level design analysis, MDO and fast sensitivity simulations. A second-order response surface model of the form given below has been commonly used in RSM, since in many cases it can provide an adequate approximation, especially if the region of interest is sufficiently limited.
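
    The abstract refers to a second-order model "of the form given" without reproducing it; the standard second-order polynomial used in response surface methodology (reconstructed here, not copied from the report) is, in LaTeX:

      \hat{y} = \beta_0
              + \sum_{i=1}^{k} \beta_i x_i
              + \sum_{i=1}^{k} \beta_{ii} x_i^2
              + \sum_{i<j} \beta_{ij} x_i x_j

    where the x_i are the k design variables, \hat{y} is the approximated performance characteristic, and the \beta coefficients are estimated by least squares from a designed set of analysis runs.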

  18. Searching U.S. Patents: Core Collection and Suggestions for Service.

    ERIC Educational Resources Information Center

    Harwell, Kevin R.

    1993-01-01

    Provides fundamental information about patents, describes effective and affordable reference resources, and discusses specific issues in providing patent information services to inventors and other patrons. Basic resources, including CD-ROM products, patent classification and searching resources, and other search tools are described in an…

  19. Application of the Biosonar Measurement Tool (BMT) and Instrumented Mine Simulators (IMS) to Exploration of Dolphin Echolocation During Free-Swimming, Bottom-Object Searches

    DTIC Science & Technology

    2003-09-01

    Application of the Biosonar Measurement Tool (BMT) and Instrumented...dolphin biosonar (echolocation). Research work conducted by the Navy has addressed the characteristics of echolocation clicks, mechanisms of...information on dolphin echolocation that can be data mined for biosonar search strategies under real-world conditions. Results can be applied to the

  20. Use of computers in dysmorphology.

    PubMed Central

    Diliberti, J H

    1988-01-01

    As a consequence of the increasing power and decreasing cost of digital computers, dysmorphologists have begun to explore a wide variety of computerised applications in clinical genetics. Of considerable interest are developments in the areas of syndrome databases, expert systems, literature searches, image processing, and pattern recognition. Each of these areas is reviewed from the perspective of the underlying computer principles, existing applications, and the potential for future developments. Particular emphasis is placed on the analysis of the tasks performed by the dysmorphologist and the design of appropriate tools to facilitate these tasks. In this context the computer and associated software are considered paradigmatically as tools for the dysmorphologist and should be designed accordingly. Continuing improvements in the ability of computers to manipulate vast amounts of data rapidly make the development of increasingly powerful tools for the dysmorphologist highly probable. PMID:3050092

  1. Effect of birth ball on labor pain relief: A systematic review and meta-analysis.

    PubMed

    Makvandi, Somayeh; Latifnejad Roudsari, Robab; Sadeghi, Ramin; Karimi, Leila

    2015-11-01

    To critically evaluate the available evidence related to the impact of using a birth ball on labor pain relief. The Cochrane library, Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE/PubMed and Scopus were searched from their inception to January 2015 using keywords: (Birth* OR Swiss OR Swedish OR balance OR fitness OR gym* OR Pezzi OR sport* OR stability) AND (ball*) AND (labor OR labour OR Obstetric). All available randomized controlled trials involving women using a birth ball for pain relief during labor were considered. The search resulted in 341 titles and abstracts, which were narrowed down to eight potentially relevant articles. Of these, four studies met the inclusion criteria. Pain intensity on a 10 cm visual analogue scale was used as the main outcome measure. Risk of bias was assessed using the Cochrane Risk of Bias tool. Comprehensive Meta-Analysis Version 2 was used for statistical analysis. Four RCTs involving 220 women were included in the systematic review. One study was excluded from the meta-analysis because of heterogeneous interventions and a lack of mean and standard deviation results of labor pain score. The meta-analysis showed that birth ball exercises provided statistically significant improvements to labor pain (pooled mean difference -0.921; 95% confidence interval -1.28, -0.56; P = 0.0000005; I² = 33.7%). The clinical implementation of a birth ball exercise could be an effective tool for parturient women to reduce labor pain. However, rigorous RCTs are needed to evaluate the effect of the birth ball on labor pain relief. © 2015 Japan Society of Obstetrics and Gynecology.
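
    A minimal sketch of the fixed-effect, inverse-variance pooling behind summary figures such as the pooled mean difference and I² reported above (the per-study numbers are hypothetical, not the review's data):

      import math

      # Each tuple: (mean difference, standard error) for one hypothetical trial.
      studies = [(-1.1, 0.35), (-0.7, 0.30), (-1.0, 0.40)]

      weights = [1 / se ** 2 for _, se in studies]            # inverse-variance weights
      pooled = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)
      se_pooled = math.sqrt(1 / sum(weights))
      ci_low, ci_high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

      # Heterogeneity: Cochran's Q and the derived I^2 statistic.
      q = sum(w * (md - pooled) ** 2 for (md, _), w in zip(studies, weights))
      df = len(studies) - 1
      i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

      print(f"pooled MD = {pooled:.3f} "
            f"(95% CI {ci_low:.2f}, {ci_high:.2f}); I^2 = {i_squared:.1f}%")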

  2. A gene network bioinformatics analysis for pemphigoid autoimmune blistering diseases.

    PubMed

    Barone, Antonio; Toti, Paolo; Giuca, Maria Rita; Derchi, Giacomo; Covani, Ugo

    2015-07-01

    In this theoretical study, a text mining search and clustering analysis of data related to genes potentially involved in human pemphigoid autoimmune blistering diseases (PAIBD) was performed using web tools to create a gene/protein interaction network. The Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) database was employed to identify a final set of PAIBD-involved genes and to calculate the overall significant interactions among genes: for each gene, the weighted number of links, or WNL, was registered and a clustering procedure was performed using the WNL analysis. Genes were ranked in class (leader, B, C, D and so on, up to orphans). An ontological analysis was performed for the set of 'leader' genes. Using the above-mentioned data network, 115 genes represented the final set; leader genes numbered 7 (intercellular adhesion molecule 1 (ICAM-1), interferon gamma (IFNG), interleukin (IL)-2, IL-4, IL-6, IL-8 and tumour necrosis factor (TNF)), class B genes were 13, whereas the orphans were 24. The ontological analysis attested that the molecular action was focused on extracellular space and cell surface, whereas the activation and regulation of the immunity system was widely involved. Despite the limited knowledge of the present pathologic phenomenon, attested by the presence of 24 genes revealing no protein-protein direct or indirect interactions, the network showed significant pathways gathered in several subgroups: cellular components, molecular functions, biological processes and the pathologic phenomenon obtained from the Kyoto Encyclopaedia of Genes and Genomes (KEGG) database. The molecular basis for PAIBD was summarised and expanded, which will perhaps give researchers promising directions for the identification of new therapeutic targets.
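
    A toy illustration of the "weighted number of links" (WNL) ranking described above, using networkx on a made-up interaction network (the gene set, confidence scores and any class cut-offs are illustrative, not the study's values):

      import networkx as nx

      # Toy protein-protein interaction network; edge weights stand in for
      # STRING confidence scores (illustrative values only).
      g = nx.Graph()
      g.add_weighted_edges_from([
          ("TNF", "IL6", 0.9), ("TNF", "IL2", 0.8), ("IL6", "IL4", 0.7),
          ("ICAM1", "TNF", 0.85), ("IFNG", "IL2", 0.75),
      ])
      g.add_node("ORPHAN1")  # a gene with no known interactions

      # Weighted number of links: sum of the confidence scores on each gene's edges.
      wnl = {n: sum(d["weight"] for _, _, d in g.edges(n, data=True)) for n in g.nodes}

      # Rank genes by WNL; the top tier corresponds to "leader" genes and the
      # zero-link genes to "orphans" in the paper's classification scheme.
      for gene, score in sorted(wnl.items(), key=lambda kv: kv[1], reverse=True):
          print(f"{gene}\t{score:.2f}")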

  3. [The Efficacy of Near-Infrared Devices in Facilitating Peripheral Intravenous Access in Children: A Systematic Review and Subgroup Meta-Analysis].

    PubMed

    Kuo, Chia-Chi; Feng, I-Jung; Lee, Wei-Jing

    2017-10-01

    Peripheral intravenous access is a common and invasive procedure that is performed in pediatric clinical settings. Children often have difficult intravenous-access problems that may not only increase staff stress but also affect the timeliness of immediate treatments. To determine the efficacy of near-infrared devices in facilitating peripheral intravenous access in children, using a systematic review and meta-analysis. Six databases, namely the Index to Taiwan Periodical Literature System, Airiti Library, CINAHL, Cochrane Library, PubMed/MEDLINE, and ProQuest were searched for related articles that were published between the earliest year available and February 2017. The search was limited to studies on populations of children that used either a randomized controlled trial or controlled clinical trial approach and used the key words "near-infrared devices" AND "peripheral intravenous access." The 12 articles that met these criteria were included in the analysis. The Cochrane Collaboration bias assessment tool was used to assess the methodological quality. In addition, RevMan 5.3.5 software was used to conduct the meta-analysis. The near-infrared devices did not significantly improve the first-attempt success rate, number of attempts, or the procedural time of peripheral intravenous access in children. However, the subgroup analysis of difficult intravenous-access factors revealed a significant improvement in the first-attempt success rate of children with difficult intravenous access scores (OR = 1.83, p = .03). Near-infrared devices may improve the first-attempt success rate in children with difficult intravenous access by allowing healthcare professionals to visualize the peripheral veins. Therefore, we suggest that the difficult intravenous-access score be used as a screening tool to suggest when to apply near-infrared devices to children with difficult peripheral intravenous access in order to maximize efficacy of treatment.

  4. Simple tools for assembling and searching high-density picolitre pyrophosphate sequence data.

    PubMed

    Parker, Nicolas J; Parker, Andrew G

    2008-04-18

    The advent of pyrophosphate sequencing makes large volumes of sequencing data available at a lower cost than previously possible. However, the short read lengths are difficult to assemble and the large dataset is difficult to handle. During the sequencing of a virus from the tsetse fly, Glossina pallidipes, we needed tools to quickly search a set of reads for near-exact text matches. A set of tools is provided to search a large data set of pyrophosphate sequence reads under a "live" CD version of Linux on a standard PC that can be used by anyone without prior knowledge of Linux and without having to install a Linux setup on the computer. The tools permit short lengths of de novo assembly, checking of existing assembled sequences, selection and display of reads from the data set and gathering counts of sequences in the reads. Demonstrations are given of the use of the tools to help with checking an assembly against the fragment data set; investigating homopolymer lengths, repeat regions and polymorphisms; and resolving inserted bases caused by incomplete chain extension. The additional information contained in a pyrophosphate sequencing data set beyond a basic assembly is difficult to access due to a lack of tools. The set of simple tools presented here would allow anyone with basic computer skills and a standard PC to access this information.
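
    A minimal sketch of the kind of near-exact text search over a read set that the abstract describes (pure Python, tolerating one substitution; the reads and query are made up):

      def matches_with_mismatches(read, query, max_mismatches=1):
          """True if `query` occurs in `read` with at most `max_mismatches` substitutions."""
          q = len(query)
          for start in range(len(read) - q + 1):
              window = read[start:start + q]
              mismatches = sum(1 for a, b in zip(window, query) if a != b)
              if mismatches <= max_mismatches:
                  return True
          return False

      reads = ["ACGTTGCATGCA", "TTGACCGTAAGC", "GGGTTGCTTGCA"]  # hypothetical reads
      query = "TTGCATG"
      print([r for r in reads if matches_with_mismatches(r, query)])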

  5. Informed-Proteomics: open-source software package for top-down proteomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Jungkap; Piehowski, Paul D.; Wilkins, Christopher

    Top-down proteomics involves the analysis of intact proteins. This approach is very attractive as it allows for analyzing proteins in their endogenous form without proteolysis, preserving valuable information about post-translational modifications, isoforms, proteolytic processing or their combinations, collectively called proteoforms. Moreover, the quality of top-down LC-MS/MS datasets is rapidly increasing due to advances in liquid chromatography and mass spectrometry instrumentation and sample processing protocols. However, top-down mass spectra are substantially more complex compared to the more conventional bottom-up data. To take full advantage of the increasing quality of top-down LC-MS/MS datasets there is an urgent need to develop algorithms and software tools for confident proteoform identification and quantification. In this study we present a new open-source software suite for top-down proteomics analysis consisting of an LC-MS feature finding algorithm, a database search algorithm, and an interactive results viewer. The presented tool, along with several other popular tools, was evaluated using human-in-mouse xenograft luminal and basal breast tumor samples that are known to have significant differences in protein abundance based on bottom-up analysis.

  6. New Features in ADS Labs

    NASA Astrophysics Data System (ADS)

    Accomazzi, Alberto; Kurtz, M. J.; Henneken, E. A.; Grant, C. S.; Thompson, D.; Di Milia, G.; Luker, J.; Murray, S. S.

    2013-01-01

    The NASA Astrophysics Data System (ADS) has been working hard on updating its services and interfaces to better support our community's research needs. ADS Labs is a new interface built on the old tried-and-true ADS Abstract Databases, so all of ADS's content is available through it. In this presentation we highlight the new features that have been developed in ADS Labs over the last year: new recommendations, metrics, a citation tool and enhanced full-text search. ADS Labs has long been providing article-level recommendations based on keyword similarity, co-readership and co-citation analysis of its corpus. We have now introduced personal recommendations, which provide a list of articles to be considered based on an individual user's readership history. A new metrics interface provides a summary of the basic impact indicators for a list of records. These include the total and normalized number of papers, citations, reads, and downloads. Also included are some of the popular indices such as the h, g and i10 index. The citation helper tool allows one to submit a set of records and obtain a list of the top 10 papers which cite and/or are cited by papers in the original list (but which are not in it). The process closely resembles the network approach of establishing "friends of friends" via an analysis of the citation network. The full-text search service now covers more than 2.5 million documents, including all the major astronomy journals, as well as physics journals published by Springer, Elsevier, the American Physical Society, the American Geophysical Union, and all of the arXiv eprints. The full-text search interface allows users and librarians to dig deep and find words or phrases in the body of the indexed articles. ADS Labs is available at http://adslabs.org
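
    The impact indices mentioned above (h, g and i10) are simple functions of a list of citation counts; a small Python sketch with made-up counts:

      def h_index(citations):
          """Largest h such that h papers have at least h citations each."""
          ranked = sorted(citations, reverse=True)
          return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

      def g_index(citations):
          """Largest g such that the top g papers have at least g*g citations in total."""
          ranked = sorted(citations, reverse=True)
          total, g = 0, 0
          for rank, c in enumerate(ranked, start=1):
              total += c
              if total >= rank * rank:
                  g = rank
          return g

      def i10_index(citations):
          """Number of papers with at least 10 citations."""
          return sum(1 for c in citations if c >= 10)

      counts = [45, 30, 22, 14, 9, 7, 3, 1]  # hypothetical citation counts
      print(h_index(counts), g_index(counts), i10_index(counts))  # 6 8 4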

  7. Design, Analysis, and Reporting of Crossover Trials for Inclusion in a Meta-Analysis.

    PubMed

    Li, Tianjing; Yu, Tsung; Hawkins, Barbara S; Dickersin, Kay

    2015-01-01

    To evaluate the characteristics of the design, analysis, and reporting of crossover trials for inclusion in a meta-analysis of treatment for primary open-angle glaucoma and to provide empirical evidence to inform the development of tools to assess the validity of the results from crossover trials and reporting guidelines. We searched MEDLINE, EMBASE, and Cochrane's CENTRAL register for randomized crossover trials for a systematic review and network meta-analysis we are conducting. Two individuals independently screened the search results for eligibility and abstracted data from each included report. We identified 83 crossover trials eligible for inclusion. Issues affecting the risk of bias in crossover trials, such as carryover, period effects and missing data, were often ignored. Some trials failed to accommodate the within-individual differences in the analysis. For a large proportion of the trials, the authors tabulated the results as if they arose from a parallel design. Precision estimates properly accounting for the paired nature of the design were often unavailable from the study reports; consequently, to include trial findings in a meta-analysis would require further manipulation and assumptions. The high proportion of poorly reported analyses and results has the potential to affect whether crossover data should or can be included in a meta-analysis. There is pressing need for reporting guidelines for crossover trials.

  8. RNAPattMatch: a web server for RNA sequence/structure motif detection based on pattern matching with flexible gaps

    PubMed Central

    Drory Retwitzer, Matan; Polishchuk, Maya; Churkin, Elena; Kifer, Ilona; Yakhini, Zohar; Barash, Danny

    2015-01-01

    Searching for RNA sequence-structure patterns is becoming an essential tool for RNA practitioners. Novel discoveries of regulatory non-coding RNAs in targeted organisms and the motivation to find them across a wide range of organisms have prompted the use of computational RNA pattern matching as an enhancement to sequence similarity. State-of-the-art programs differ by the flexibility of patterns allowed as queries and by their simplicity of use. In particular, no existing method is available as a user-friendly web server. A general program that searches for RNA sequence-structure patterns is RNA Structator. However, it is not available as a web server and does not provide the option of a flexible gap pattern representation with an upper bound on the gap length specified at any position in the sequence. Here, we introduce RNAPattMatch, a web-based application that is user friendly and makes sequence/structure RNA queries accessible to practitioners of various backgrounds and proficiency levels. It also extends RNA Structator and allows a more flexible variable-gap representation, in addition to analysis of results using energy minimization methods. The RNAPattMatch service is available at http://www.cs.bgu.ac.il/rnapattmatch. A standalone version of the search tool is also available to download at the site. PMID:25940619

  9. Measurement tools of resource use and quality of life in clinical trials for dementia or cognitive impairment interventions: protocol for a scoping review.

    PubMed

    Yang, Fan; Dawes, Piers; Leroi, Iracema; Gannon, Brenda

    2017-01-26

    Dementia and cognitive impairment can severely impact patients' lives and place a heavy burden on patients, caregivers and societies. Some interventions are suggested for older patients with these conditions to help them live well, but economic evaluation is needed to assess the cost-effectiveness of these interventions. Trial-based economic evaluation is an ideal method; however, little is known about the tools used to collect data on resource use and quality of life alongside such trials. Therefore, the aim of this review is to identify and describe the resource use and quality of life instruments in clinical trials of interventions for older patients with dementia or cognitive impairment. We will perform a search in the main electronic databases (Ovid MEDLINE, PsycINFO, EMBASE, CINAHL, Cochrane Databases of Systematic Reviews, Web of Science and Scopus) using the key terms or their synonyms: older, dementia, cognitive impairment, cost, quality of life, intervention and tools. After removing duplicates, two independent reviewers will screen each entry for eligibility, initially by title and abstract, then by full-text. A hand search of the references of included articles and a general search (e.g. Google Scholar) will also be conducted to identify potentially relevant studies. All disagreements will be resolved by discussion or consultation with a third reviewer if necessary. Data analysis will be completed and reported in a narrative review. This review will identify the instruments used in clinical trials to collect resource use and quality of life data for dementia or cognitive impairment interventions. This will help to guide the study design of future trial-based economic evaluation of these interventions. PROSPERO CRD42016038495.

  10. The effects of applying information technology on job empowerment dimensions.

    PubMed

    Ajami, Sima; Arab-Chadegani, Raziyeh

    2014-01-01

    Information Technology (IT) is known as a valuable tool for information dissemination. Today, information communication technology can be used as a powerful tool to improve employees' quality and efficiency. The increasing development of technology-based tools and the speed of their adaptation to human requirements have led to a new form of the learning environment and creative, active and inclusive interaction. These days, information is one of the most important power resources in every organization and, accordingly, acquiring information, especially central or strategic information, can help organizations to build a power base and influence others. The aim of this study was to identify the most important criteria in job empowerment using IT and also the advantages of assessing empowerment. This study was a narrative review. The literature was searched in databases and journals (Springer, ProQuest, PubMed, ScienceDirect and the Scientific Information Database) with keywords including IT, empowerment and employees in the search fields of titles, keywords, abstracts and full texts. The preliminary search resulted in 85 articles, books and conference proceedings published between 1983 and 2013; the search was conducted during July 2013. After a careful analysis of the content of each paper, a total of 40 papers and books were selected based on their relevancy. According to the Ardalan model, IT plays a significant role in fast data collection, global and fast access to a broad range of health information, a quick evaluation of information, better communication among health experts and more awareness through access to various information sources. IT leads to better performance accompanied by higher efficiency in service provision, all of which increases satisfaction with fast, high-quality services.

  11. The effects of applying information technology on job empowerment dimensions

    PubMed Central

    Ajami, Sima; Arab-Chadegani, Raziyeh

    2014-01-01

    Information Technology (IT) is known as a valuable tool for information dissemination. Today, information communication technology can be used as a powerful tool to improve employees’ quality and efficiency. The increasing development of technology-based tools and the speed of their adaptation to human requirements have led to a new form of the learning environment and creative, active and inclusive interaction. These days, information is one of the most important power resources in every organization and, accordingly, acquiring information, especially central or strategic information, can help organizations to build a power base and influence others. The aim of this study was to identify the most important criteria in job empowerment using IT and also the advantages of assessing empowerment. This study was a narrative review. The literature was searched in databases and journals (Springer, ProQuest, PubMed, ScienceDirect and the Scientific Information Database) with keywords including IT, empowerment and employees in the search fields of titles, keywords, abstracts and full texts. The preliminary search resulted in 85 articles, books and conference proceedings published between 1983 and 2013; the search was conducted during July 2013. After a careful analysis of the content of each paper, a total of 40 papers and books were selected based on their relevancy. According to the Ardalan model, IT plays a significant role in fast data collection, global and fast access to a broad range of health information, a quick evaluation of information, better communication among health experts and more awareness through access to various information sources. IT leads to better performance accompanied by higher efficiency in service provision, all of which increases satisfaction with fast, high-quality services. PMID:25250350

  12. Breadth of Coverage, Ease of Use, and Quality of Mobile Point-of-Care Tool Information Summaries: An Evaluation

    PubMed Central

    Ren, Jinma

    2016-01-01

    Background With advances in mobile technology, accessibility of clinical resources at the point of care has increased. Objective The objective of this research was to identify if six selected mobile point-of-care tools meet the needs of clinicians in internal medicine. Point-of-care tools were evaluated for breadth of coverage, ease of use, and quality. Methods Six point-of-care tools were evaluated utilizing four different devices (two smartphones and two tablets). Breadth of coverage was measured using select International Classification of Diseases, Ninth Revision, codes if information on summary, etiology, pathophysiology, clinical manifestations, diagnosis, treatment, and prognosis was provided. Quality measures included treatment and diagnostic inline references and individual and application time stamping. Ease of use covered search within topic, table of contents, scrolling, affordance, connectivity, and personal accounts. Analysis of variance based on the rank of score was used. Results Breadth of coverage was similar among Medscape (mean 6.88), Uptodate (mean 6.51), DynaMedPlus (mean 6.46), and EvidencePlus (mean 6.41) (P>.05) with DynaMed (mean 5.53) and Epocrates (mean 6.12) scoring significantly lower (P<.05). Ease of use had DynaMedPlus with the highest score, and EvidencePlus was lowest (6.0 vs 4.0, respectively, P<.05). For quality, reviewers rated the same score (4.00) for all tools except for Medscape, which was rated lower (P<.05). Conclusions For breadth of coverage, most point-of-care tools were similar with the exception of DynaMed. For ease of use, only UpToDate and DynaMedPlus allow for search within a topic. All point-of-care tools have remote access with the exception of UpToDate and Essential Evidence Plus. All tools except Medscape covered criteria for quality evaluation. Overall, there was no significant difference between the point-of-care tools with regard to coverage on common topics used by internal medicine clinicians. Selection of point-of-care tools is highly dependent on individual preference based on ease of use and cost of the application. PMID:27733328
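
    The "analysis of variance based on the rank of score" mentioned above can be implemented by rank-transforming all scores jointly and then running a one-way ANOVA on the ranks; a minimal sketch with made-up scores (an illustration of the general technique, not the authors' exact analysis):

      from scipy.stats import f_oneway, rankdata

      # Hypothetical breadth-of-coverage scores for three point-of-care tools.
      tool_a = [6.9, 6.8, 6.7, 7.0]
      tool_b = [6.5, 6.4, 6.6, 6.5]
      tool_c = [5.5, 5.6, 5.4, 5.7]

      # Rank all scores together, then run a one-way ANOVA on the ranks.
      ranks = rankdata(tool_a + tool_b + tool_c)
      f_stat, p_value = f_oneway(ranks[:4], ranks[4:8], ranks[8:])
      print(f"F = {f_stat:.2f}, p = {p_value:.4f}")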

  13. Two-Dimensional Neutronic and Fuel Cycle Analysis of the Transatomic Power Molten Salt Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Betzler, Benjamin R.; Powers, Jeffrey J.; Worrall, Andrew

    2017-01-15

    This status report presents the results from the first phase of the collaboration between Transatomic Power Corporation (TAP) and Oak Ridge National Laboratory (ORNL) to provide neutronic and fuel cycle analysis of the TAP core design through the Department of Energy Gateway for Accelerated Innovation in Nuclear, Nuclear Energy Voucher program. The TAP design is a molten salt reactor using movable moderator rods to shift the neutron spectrum in the core from mostly epithermal at beginning of life to thermal at end of life. Additional developments in the ChemTriton modeling and simulation tool provide the critical moderator-to-fuel ratio searches and time-dependent parameters necessary to simulate the continuously changing physics in this complex system. Results from simulations with these tools show agreement with TAP-calculated performance metrics for core lifetime, discharge burnup, and salt volume fraction, verifying the viability of reducing actinide waste production with this design. Additional analyses of time step sizes, mass feed rates and enrichments, and isotopic removals provide additional information to make informed design decisions. This work further demonstrates capabilities of ORNL modeling and simulation tools for analysis of molten salt reactor designs and strongly positions this effort for the upcoming three-dimensional core analysis.

  14. Efficient RNA structure comparison algorithms.

    PubMed

    Arslan, Abdullah N; Anandan, Jithendar; Fry, Eric; Monschke, Keith; Ganneboina, Nitin; Bowerman, Jason

    2017-12-01

    The recently proposed relative addressing-based ([Formula: see text]) RNA secondary structure representation has important features by which an RNA structure database can be stored in a suffix array. A fast substructure search algorithm has been proposed based on binary search on this suffix array. Using this substructure search algorithm, we present a fast algorithm that finds the largest common substructure of given multiple RNA structures in [Formula: see text] format. The multiple RNA structure comparison problem is NP-hard in its general formulation. We introduced a new problem for comparing multiple RNA structures. This problem has a stricter similarity definition and objective, and we propose an algorithm that solves this problem efficiently. We also develop another comparison algorithm that iteratively calls this algorithm to locate nonoverlapping large common substructures in compared RNAs. With the resulting new tools, we improved the RNASSAC website (linked from http://faculty.tamuc.edu/aarslan). This website now also includes two drawing tools: one specialized for preparing RNA substructures that can be used as input by the search tool, and another one for automatically drawing the entire RNA structure from a given structure sequence.
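
    A minimal sketch of exact substructure lookup via binary search on a suffix array, the mechanism the fast search above builds on (the toy dot-bracket string stands in for the paper's relative-addressing format, which is not reproduced here):

      from bisect import bisect_left, bisect_right

      def build_suffix_array(text):
          """Indices of all suffixes of `text`, sorted lexicographically."""
          return sorted(range(len(text)), key=lambda i: text[i:])

      def find_occurrences(text, sa, pattern):
          """All start positions of `pattern` in `text`, via binary search on the suffix array."""
          # Truncating each suffix to the pattern length preserves the sorted order.
          keys = [text[i:i + len(pattern)] for i in sa]
          lo, hi = bisect_left(keys, pattern), bisect_right(keys, pattern)
          return sorted(sa[lo:hi])

      structure = "((..))((...))"  # toy dot-bracket string
      sa = build_suffix_array(structure)
      print(find_occurrences(structure, sa, "(("))  # [0, 6]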

  15. Internet Databases of the Properties, Enzymatic Reactions, and Metabolism of Small Molecules—Search Options and Applications in Food Science

    PubMed Central

    Minkiewicz, Piotr; Darewicz, Małgorzata; Iwaniak, Anna; Bucholska, Justyna; Starowicz, Piotr; Czyrko, Emilia

    2016-01-01

    Internet databases of small molecules, their enzymatic reactions, and metabolism have emerged as useful tools in food science. Database searching is also introduced as part of chemistry or enzymology courses for food technology students. Such resources support the search for information about single compounds and facilitate the introduction of secondary analyses of large datasets. Information can be retrieved from databases by searching for the compound name or structure, annotating with the help of chemical codes or drawn using molecule editing software. Data mining options may be enhanced by navigating through a network of links and cross-links between databases. Exemplary databases reviewed in this article belong to two classes: tools concerning small molecules (including general and specialized databases annotating food components) and tools annotating enzymes and metabolism. Some problems associated with database application are also discussed. Data summarized in computer databases may be used for calculation of daily intake of bioactive compounds, prediction of metabolism of food components, and their biological activity as well as for prediction of interactions between food component and drugs. PMID:27929431

  16. Internet Databases of the Properties, Enzymatic Reactions, and Metabolism of Small Molecules-Search Options and Applications in Food Science.

    PubMed

    Minkiewicz, Piotr; Darewicz, Małgorzata; Iwaniak, Anna; Bucholska, Justyna; Starowicz, Piotr; Czyrko, Emilia

    2016-12-06

    Internet databases of small molecules, their enzymatic reactions, and metabolism have emerged as useful tools in food science. Database searching is also introduced as part of chemistry or enzymology courses for food technology students. Such resources support the search for information about single compounds and facilitate the introduction of secondary analyses of large datasets. Information can be retrieved from databases by searching for the compound name or structure, annotating with the help of chemical codes or drawn using molecule editing software. Data mining options may be enhanced by navigating through a network of links and cross-links between databases. Exemplary databases reviewed in this article belong to two classes: tools concerning small molecules (including general and specialized databases annotating food components) and tools annotating enzymes and metabolism. Some problems associated with database application are also discussed. Data summarized in computer databases may be used for calculation of daily intake of bioactive compounds, prediction of metabolism of food components, and their biological activity as well as for prediction of interactions between food component and drugs.

  17. E-MSD: an integrated data resource for bioinformatics.

    PubMed

    Golovin, A; Oldfield, T J; Tate, J G; Velankar, S; Barton, G J; Boutselakis, H; Dimitropoulos, D; Fillon, J; Hussain, A; Ionides, J M C; John, M; Keller, P A; Krissinel, E; McNeil, P; Naim, A; Newman, R; Pajon, A; Pineda, J; Rachedi, A; Copeland, J; Sitnov, A; Sobhany, S; Suarez-Uruena, A; Swaminathan, G J; Tagari, M; Tromm, S; Vranken, W; Henrick, K

    2004-01-01

    The Macromolecular Structure Database (MSD) group (http://www.ebi.ac.uk/msd/) continues to enhance the quality and consistency of macromolecular structure data in the Protein Data Bank (PDB) and to work towards the integration of various bioinformatics data resources. We have implemented a simple form-based interface that allows users to query the MSD directly. The MSD 'atlas pages' show all of the information in the MSD for a particular PDB entry. The group has designed new search interfaces aimed at specific areas of interest, such as the environment of ligands and the secondary structures of proteins. We have also implemented a novel search interface that begins to integrate separate MSD search services in a single graphical tool. We have worked closely with collaborators to build a new visualization tool that can present both structure and sequence data in a unified interface, and this data viewer is now used throughout the MSD services for the visualization and presentation of search results. Examples showcasing the functionality and power of these tools are available from tutorial webpages (http://www.ebi.ac.uk/msd-srv/docs/roadshow_tutorial/).
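
    The MSD services described here (the group later became PDBe) can also be queried programmatically. The sketch below is a hedged illustration using the PDBe REST "entry summary" endpoint, which returns the kind of per-entry data shown on the atlas pages; the endpoint path and response layout are assumptions to verify against current PDBe documentation, and the example is not taken from the article itself.

    ```python
    # Hedged sketch: per-entry summary data of the kind shown on MSD 'atlas pages',
    # fetched from the PDBe REST API that succeeded the MSD services. Endpoint path
    # and field names are assumptions to check against current PDBe documentation.
    import json
    import urllib.request

    PDBE_SUMMARY = "https://www.ebi.ac.uk/pdbe/api/pdb/entry/summary/"

    def entry_summary(pdb_id: str) -> dict:
        with urllib.request.urlopen(PDBE_SUMMARY + pdb_id.lower(), timeout=30) as resp:
            data = json.load(resp)
        # The response is keyed by the lower-case PDB identifier.
        return data[pdb_id.lower()][0]

    summary = entry_summary("1cbs")
    print(summary.get("title"), summary.get("experimental_method"))
    ```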

  18. E-MSD: an integrated data resource for bioinformatics

    PubMed Central

    Golovin, A.; Oldfield, T. J.; Tate, J. G.; Velankar, S.; Barton, G. J.; Boutselakis, H.; Dimitropoulos, D.; Fillon, J.; Hussain, A.; Ionides, J. M. C.; John, M.; Keller, P. A.; Krissinel, E.; McNeil, P.; Naim, A.; Newman, R.; Pajon, A.; Pineda, J.; Rachedi, A.; Copeland, J.; Sitnov, A.; Sobhany, S.; Suarez-Uruena, A.; Swaminathan, G. J.; Tagari, M.; Tromm, S.; Vranken, W.; Henrick, K.

    2004-01-01

    The Macromolecular Structure Database (MSD) group (http://www.ebi.ac.uk/msd/) continues to enhance the quality and consistency of macromolecular structure data in the Protein Data Bank (PDB) and to work towards the integration of various bioinformatics data resources. We have implemented a simple form-based interface that allows users to query the MSD directly. The MSD ‘atlas pages’ show all of the information in the MSD for a particular PDB entry. The group has designed new search interfaces aimed at specific areas of interest, such as the environment of ligands and the secondary structures of proteins. We have also implemented a novel search interface that begins to integrate separate MSD search services in a single graphical tool. We have worked closely with collaborators to build a new visualization tool that can present both structure and sequence data in a unified interface, and this data viewer is now used throughout the MSD services for the visualization and presentation of search results. Examples showcasing the functionality and power of these tools are available from tutorial webpages (http://www.ebi.ac.uk/msd-srv/docs/roadshow_tutorial/). PMID:14681397

  19. RAPSearch: a fast protein similarity search tool for short reads

    PubMed Central

    2011-01-01

    Background: Next Generation Sequencing (NGS) is producing enormous corpuses of short DNA reads, affecting emerging fields like metagenomics. Protein similarity search, a key step to achieve annotation of protein-coding genes in these short reads and identification of their biological functions, faces daunting challenges because of the very sizes of the short read datasets. Results: We developed a fast protein similarity search tool, RAPSearch, that utilizes a reduced amino acid alphabet and suffix array to detect seeds of flexible length. For the short reads (translated in 6 frames) we tested, RAPSearch achieved a ~20-90 times speedup compared to BLASTX. RAPSearch missed only a small fraction (~1.3-3.2%) of BLASTX similarity hits, but it also discovered additional homologous proteins (~0.3-2.1%) that BLASTX missed. By contrast, BLAT, a tool that is even slightly faster than RAPSearch, had a significant loss of sensitivity compared to RAPSearch and BLAST. Conclusions: RAPSearch is implemented as open-source software and is accessible at http://omics.informatics.indiana.edu/mg/RAPSearch. It enables faster protein similarity search. The application of RAPSearch in metagenomics has also been demonstrated. PMID:21575167
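
    The core idea behind RAPSearch's speedup, collapsing amino acids into a reduced alphabet so that short, inexact seed matches become exact matches, can be sketched as follows. The grouping used here is an illustrative physico-chemical clustering, not the alphabet actually used by RAPSearch, and the seed matching is a toy k-mer comparison rather than the tool's suffix-array machinery.

    ```python
    # Hedged sketch of reduced-alphabet seeding: amino acids are mapped onto a small
    # set of group codes so that near-matches between a translated read and a subject
    # protein become exact k-mer matches. Grouping below is illustrative only.
    GROUPS = {
        "AGST": "a", "C": "c", "DENQ": "d", "FWY": "f",
        "HKR": "h", "ILMV": "i", "P": "p",
    }
    REDUCE = {aa: code for group, code in GROUPS.items() for aa in group}

    def reduce_seq(protein: str) -> str:
        """Map a protein sequence onto the reduced alphabet ('x' for unknown residues)."""
        return "".join(REDUCE.get(aa, "x") for aa in protein.upper())

    def shared_seeds(read_aa: str, subject: str, k: int = 4) -> set[str]:
        """Return k-mer seeds shared by read and subject in the reduced alphabet."""
        r, s = reduce_seq(read_aa), reduce_seq(subject)
        kmers = {r[i:i + k] for i in range(len(r) - k + 1)}
        return {m for i in range(len(s) - k + 1) if (m := s[i:i + k]) in kmers}

    print(shared_seeds("MKVLIT", "MRVLLTA"))  # reduced k-mers shared despite mismatches
    ```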

  20. Oregon State University

    Science.gov Websites

