Sample records for easily accessible database

  1. Development and applications of the EntomopathogenID MLSA database for use in agricultural systems

    USDA-ARS's Scientific Manuscript database

    The current study reports the development and application of a publicly accessible, curated database of Hypocrealean entomopathogenic fungi sequence data. The goal was to provide a platform for users to easily access sequence data from reference strains. The database can be used to accurately identi...

  2. Entomopathogen ID: a curated sequence resource for entomopathogenic fungi

    USDA-ARS's Scientific Manuscript database

    We report the development of a publicly accessible, curated database of Hypocrealean entomopathogenic fungi sequence data. The goal is to provide a platform for users to easily access sequence data from reference strains. The database can be used to accurately identify unknown entomopathogenic fungi...

  3. Customizable tool for ecological data entry, assessment, monitoring, and interpretation

    USDA-ARS's Scientific Manuscript database

    The Database for Inventory, Monitoring and Assessment (DIMA) is a highly customizable tool for data entry, assessment, monitoring, and interpretation. DIMA is a Microsoft Access database that can easily be used without Access knowledge and is available at no cost. Data can be entered for common, nat...

  4. An Open-source Toolbox for Analysing and Processing PhysioNet Databases in MATLAB and Octave.

    PubMed

    Silva, Ikaro; Moody, George B

    The WaveForm DataBase (WFDB) Toolbox for MATLAB/Octave enables integrated access to PhysioNet's software and databases. Using the WFDB Toolbox for MATLAB/Octave, users have access to over 50 physiological databases in PhysioNet. The toolbox provides access to over 4 TB of biomedical signals including ECG, EEG, EMG, and PLETH. Additionally, most signals are accompanied by metadata such as medical annotations of clinical events: arrhythmias, sleep stages, seizures, hypotensive episodes, etc. Users of this toolbox should easily be able to reproduce, validate, and compare results published based on PhysioNet's software and databases.
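
    As a minimal sketch of this kind of programmatic access, the snippet below uses the wfdb Python package as an analogous interface to the MATLAB/Octave toolbox described above; the package choice and the record/database names are illustrative assumptions, not part of this record.

        # Read the first 10 s of MIT-BIH Arrhythmia record 100 straight from
        # PhysioNet, plus its beat annotations (requires: pip install wfdb).
        import wfdb

        record = wfdb.rdrecord('100', pn_dir='mitdb', sampto=3600)
        annotation = wfdb.rdann('100', 'atr', pn_dir='mitdb', sampto=3600)

        print(record.sig_name)         # signal names, e.g. ['MLII', 'V5']
        print(record.fs)               # sampling frequency (360 Hz for mitdb)
        print(len(annotation.sample))  # beat annotations in the first 10 s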

  5. A Web-Based Database for Nurse Led Outreach Teams (NLOT) in Toronto.

    PubMed

    Li, Shirley; Kuo, Mu-Hsing; Ryan, David

    2016-01-01

    A web-based system can provide access to real-time data and information. Healthcare is moving towards digitizing patients' medical information and securely exchanging it through web-based systems. In one of Ontario's health regions, Nurse Led Outreach Teams (NLOT) provide emergency mobile nursing services to help reduce unnecessary transfers from long-term care homes to emergency departments. Currently the NLOT team uses a Microsoft Access database to keep track of the health information on the residents that they serve. The Access database lacks scalability, portability, and interoperability. The objective of this study is the development of a web-based database using Oracle Application Express that is easily accessible from mobile devices. The web-based database will allow NLOT nurses to enter and access resident information anytime and from anywhere.

  6. ExpoCastDB: A Publicly Accessible Database for Observational Exposure Data

    EPA Science Inventory

    The application of environmental informatics tools for human health risk assessment will require the development of advanced exposure information technology resources. Exposure data for chemicals is often not readily accessible. There is a pressing need for easily accessible, che...

  7. Design and Implementation of an Environmental Mercury Database for Northeastern North America

    NASA Astrophysics Data System (ADS)

    Clair, T. A.; Evers, D.; Smith, T.; Goodale, W.; Bernier, M.

    2002-12-01

    An important issue faced when attempting to interpret geochemical variability studies across large regions is the accumulation, access, and consistent display of data from a large number of sources. We were given the opportunity to provide a regional assessment of mercury distribution in surface waters, sediments, invertebrates, fish, and birds in a region extending from New York State to the Island of Newfoundland. We received over 20 individual databases from State, Provincial, and Federal governments, as well as from university researchers in both Canada and the United States. These databases came in a variety of formats and sizes. Our challenge was to find a way of accumulating and presenting the large amounts of acquired data in a consistent, easily accessible fashion that could then be more easily interpreted. Moreover, the database had to be portable and easily distributable to the large number of study participants. We developed a static database structure using a web-based approach, which we then mounted on a server accessible to all project participants. The site also contained all the necessary documentation related to the data and its acquisition, as well as the methods used in its analysis and interpretation. We then copied the complete web site onto CD-ROMs, which we distributed to all project participants, funding agencies, and other interested parties. The CD-ROM formed a permanent record of the project and was issued ISSN and ISBN numbers so that the information remains accessible to researchers in perpetuity. Here we present an overview of the CD-ROM and data structures, of the information accumulated over the first year of the study, and initial interpretation of the results.

  8. GrainGenes: Changing Times, Changing Databases, Digital Evolution.

    USDA-ARS's Scientific Manuscript database

    The GrainGenes database is one of few agricultural databases that had an early start on the Internet and that has changed with the times. Initial goals were to collect a wide range of data relating to the developing maps and attributes of small grains crops, and to make them easily accessible. The ...

  9. Cpf1-Database: web-based genome-wide guide RNA library design for gene knockout screens using CRISPR-Cpf1.

    PubMed

    Park, Jeongbin; Bae, Sangsu

    2018-03-15

    Following the type II CRISPR-Cas9 system, type V CRISPR-Cpf1 endonucleases have been found to be applicable for genome editing in various organisms in vivo. However, there are as yet no web-based tools capable of optimally selecting guide RNAs (gRNAs) among all possible genome-wide target sites. Here, we present Cpf1-Database, a genome-wide gRNA library design tool for LbCpf1 and AsCpf1, which have DNA recognition sequences of 5'-TTTN-3' at the 5' ends of target sites. Cpf1-Database provides a sophisticated but simple way to design gRNAs for AsCpf1 nucleases on the genome scale. One can easily access the data using a straightforward web interface, and using the powerful collections feature one can easily design gRNAs for thousands of genes in a short time. Freely accessible at http://www.rgenome.net/cpf1-database/. Contact: sangsubae@hanyang.ac.kr.
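
    A minimal sketch of the underlying target-site search the record describes: scan a sequence for the 5'-TTTN-3' recognition sequence and take the downstream protospacer. The 23-nt guide length and the example sequence are assumptions for illustration only.

        import re

        def find_cpf1_sites(seq, guide_len=23):
            """Return (position, protospacer) pairs for 5'-TTTN-3' sites on the + strand."""
            seq = seq.upper()
            sites = []
            # zero-width lookahead so overlapping TTTN sites are all reported
            for m in re.finditer(r'(?=TTT[ACGT])', seq):
                start = m.start() + 4  # protospacer begins after the 4-nt recognition sequence
                if start + guide_len <= len(seq):
                    sites.append((m.start(), seq[start:start + guide_len]))
            return sites

        print(find_cpf1_sites("ACGTTTTACCGGATCGATCGTAGCTAGCTAGGCTA"))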

  10. Astronaut Demographic Database: Everything You Want to Know About Astronauts and More

    NASA Technical Reports Server (NTRS)

    Keeton, Kathryn; Patterson, Holly

    2011-01-01

    A wealth of information regarding the astronaut population is available that could be especially useful to researchers. However, until now, it has been difficult to obtain that information in a systematic way. Therefore, this "astronaut database" began as a way for researchers within the Behavioral Health and Performance Group to keep track of the ever-growing astronaut corps population. Before our effort, compilations of such data could be found, but not in a way that was easily acquired or accessible. One would have to use internet search engines, read through lengthy and potentially inaccurate informational sites, or read through astronaut biographies compiled by NASA. Astronauts are a unique class of individuals and, by examining such information, which we dubbed "Demographics," we hoped to find some commonalities that may be useful for other research areas and future research topics. By organizing the information pertaining to astronauts in a formal, unified catalog, we believe we have made the information more easily accessible, readily useable, and user friendly. Our end goal is to provide this database to others as a highly functional resource within the research community. Perhaps the database can eventually be an official, published document for researchers to gain full access.

  11. ENVIRONMENTAL INFORMATION MANAGEMENT SYSTEM (EIMS)

    EPA Science Inventory

    The Environmental Information Management System (EIMS) organizes descriptive information (metadata) for data sets, databases, documents, models, projects, and spatial data. The EIMS design provides a repository for scientific documentation that can be easily accessed with standar...

  12. ZeBase: an open-source relational database for zebrafish laboratories.

    PubMed

    Hensley, Monica R; Hassenplug, Eric; McPhail, Rodney; Leung, Yuk Fai

    2012-03-01

    ZeBase is an open-source relational database for zebrafish inventory. It is designed for the recording of genetic, breeding, and survival information of fish lines maintained in a single- or multi-laboratory environment. Users can easily access ZeBase through standard web-browsers anywhere on a network. Convenient search and reporting functions are available to facilitate routine inventory work; such functions can also be automated by simple scripting. Optional barcode generation and scanning are also built-in for easy access to the information related to any fish. Further information of the database and an example implementation can be found at http://zebase.bio.purdue.edu.

  13. Use of the Geographic Information System (GIS) in nurseries

    Treesearch

    Brent Olson; Chad Loreth

    2002-01-01

    The use of GIS in nursery operations provides a variety of opportunities. All planning activities can be incorporated into an accessible database. GIS can be used to create ways for employees to access and analyze data. The program can be used for historical record keeping. Use of GIS in planning can improve the efficiency of nursery operations. GIS can easily be used...

  14. Brain Tumor Database, a free relational database for collection and analysis of brain tumor patient information.

    PubMed

    Bergamino, Maurizio; Hamilton, David J; Castelletti, Lara; Barletta, Laura; Castellan, Lucio

    2015-03-01

    In this study, we describe the development and utilization of a relational database designed to manage the clinical and radiological data of patients with brain tumors. The Brain Tumor Database was implemented using MySQL v.5.0, while the graphical user interface was created using PHP and HTML, thus making it easily accessible through a web browser. This web-based approach allows for multiple institutions to potentially access the database. The BT Database can record brain tumor patient information (e.g. clinical features, anatomical attributes, and radiological characteristics) and be used for clinical and research purposes. Analytic tools to automatically generate statistics and different plots are provided. The BT Database is a free and powerful user-friendly tool with a wide range of possible clinical and research applications in neurology and neurosurgery. The BT Database graphical user interface source code and manual are freely available at http://tumorsdatabase.altervista.org. © The Author(s) 2013.

  15. The NIDDK Information Network: A Community Portal for Finding Data, Materials, and Tools for Researchers Studying Diabetes, Digestive, and Kidney Diseases

    PubMed Central

    Whetzel, Patricia L.; Grethe, Jeffrey S.; Banks, Davis E.; Martone, Maryann E.

    2015-01-01

    The NIDDK Information Network (dkNET; http://dknet.org) was launched to serve the needs of basic and clinical investigators in metabolic, digestive and kidney disease by facilitating access to research resources that advance the mission of the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK). By research resources, we mean the multitude of data, software tools, materials, services, projects and organizations available to researchers in the public domain. Most of these are accessed via web-accessible databases or web portals, each developed, designed and maintained by numerous different projects, organizations and individuals. While many of the large government funded databases, maintained by agencies such as European Bioinformatics Institute and the National Center for Biotechnology Information, are well known to researchers, many more that have been developed by and for the biomedical research community are unknown or underutilized. At least part of the problem is the nature of dynamic databases, which are considered part of the “hidden” web, that is, content that is not easily accessed by search engines. dkNET was created specifically to address the challenge of connecting researchers to research resources via these types of community databases and web portals. dkNET functions as a “search engine for data”, searching across millions of database records contained in hundreds of biomedical databases developed and maintained by independent projects around the world. A primary focus of dkNET is centers and projects specifically created to provide high quality data and resources to NIDDK researchers. Through the novel data ingest process used in dkNET, additional data sources can easily be incorporated, allowing it to scale with the growth of digital data and the needs of the dkNET community. Here, we provide an overview of the dkNET portal and its functions. We show how dkNET can be used to address a variety of use cases that involve searching for research resources. PMID:26393351

  16. DoSSiER: Database of scientific simulation and experimental results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenzel, Hans; Yarba, Julia; Genser, Krzystof

    The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER can be easily accessed via a web application. In addition, a web service allows for programmatic access to the repository to extract records in JSON or XML exchange formats. In this paper, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.

  17. DoSSiER: Database of scientific simulation and experimental results

    DOE PAGES

    Wenzel, Hans; Yarba, Julia; Genser, Krzystof; ...

    2016-08-01

    The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER can be easily accessed via a web application. In addition, a web service allows for programmatic access to the repository to extract records in JSON or XML exchange formats. In this paper, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.
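
    A sketch of the kind of programmatic web-service access the two DoSSiER records above describe. The base URL, path, and parameter names here are hypothetical placeholders, not the documented DoSSiER API.

        # Hypothetical REST call returning validation records as JSON; the
        # endpoint and query parameters are illustrative placeholders only.
        import json
        import urllib.request

        url = "https://example.org/dossier/api/records?experiment=geant4&format=json"
        with urllib.request.urlopen(url) as response:
            records = json.load(response)
        for rec in records:
            print(rec)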

  18. An SQL query generator for CLIPS

    NASA Technical Reports Server (NTRS)

    Snyder, James; Chirica, Laurian

    1990-01-01

    As expert systems become more widely used, their access to large amounts of external information becomes increasingly important. This information exists in several forms such as statistical, tabular data, knowledge gained by experts, and large databases of information maintained by companies. Because many expert systems, including CLIPS, do not provide access to this external information, much of the usefulness of expert systems is left untapped. The scope of this paper is to describe a database extension for the CLIPS expert system shell. The current industry standard database language is SQL. Due to SQL standardization, large amounts of information stored on various computers, potentially at different locations, will be more easily accessible. Expert systems should be able to directly access these existing databases rather than requiring information to be re-entered into the expert system environment. The ORACLE relational database management system (RDBMS) was used to provide a database connection within the CLIPS environment. To facilitate relational database access, a query generation system was developed as a CLIPS user function. The queries are entered in a CLIPS-like syntax and are passed to the query generator, which constructs an SQL query and submits it to the ORACLE RDBMS for execution. The query results are asserted as CLIPS facts. The query generator was developed primarily for use within the ICADS project (Intelligent Computer Aided Design System) currently being developed by the CAD Research Unit at California Polytechnic State University (Cal Poly). In ICADS, there are several parallel or distributed expert systems accessing a common knowledge base of facts. Each expert system has a narrow domain of interest and therefore needs only certain portions of the information. The query generator provides a common method of accessing this information and allows the expert system to specify what data is needed without specifying how to retrieve it.
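
    A toy sketch of the query-generation idea described above: structured CLIPS-style query parts are translated into an SQL string, and each result row would then be asserted back as a fact. The query shape, table, and column names are invented for illustration; the actual ICADS generator is not preserved in this abstract.

        def clips_to_sql(fields, table, column, op, value):
            """Build an SQL string from CLIPS-style query parts, e.g.
            (query (fields name cost) (table materials) (where (= grade "A")))."""
            return 'SELECT {} FROM {} WHERE {} {} {!r}'.format(
                ', '.join(fields), table, column, op, value)

        sql = clips_to_sql(['name', 'cost'], 'materials', 'grade', '=', 'A')
        print(sql)  # SELECT name, cost FROM materials WHERE grade = 'A'
        # In the described system, each result row becomes a CLIPS fact,
        # e.g. (material (name "steel") (cost 12.5)).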

  19. Cloud-Based Distributed Control of Unmanned Systems

    DTIC Science & Technology

    2015-04-01

    during mission execution. At best, the data is saved onto hard-drives and is accessible only by the local team. Data history in a form available and...following open source technologies: GeoServer, OpenLayers, PostgreSQL , and PostGIS are chosen to implement the back-end database and server. A brief...geospatial map data. 3. PostgreSQL : An SQL-compliant object-relational database that easily scales to accommodate large amounts of data - upwards to

  20. Internet Tools Access Administrative Data at the University of Delaware.

    ERIC Educational Resources Information Center

    Jacobson, Carl

    1995-01-01

    At the University of Delaware, World Wide Web tools are used to produce multiplatform administrative applications, including hyperreporting, mixed media, electronic forms, and kiosk services. Web applications are quickly and easily crafted to interact with administrative databases. They are particularly suited to customer outreach efforts,…

  1. Nuclear Science References Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pritychenko, B., E-mail: pritychenko@bnl.gov; Běták, E.; Singh, B.

    2014-06-15

    The Nuclear Science References (NSR) database, together with its associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 210,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center (http://www.nndc.bnl.gov/nsr) and the International Atomic Energy Agency (http://www-nds.iaea.org/nsr).

  2. An XML-based Generic Tool for Information Retrieval in Solar Databases

    NASA Astrophysics Data System (ADS)

    Scholl, Isabelle F.; Legay, Eric; Linsolas, Romain

    This paper presents the current architecture of the 'Solar Web Project' now in its development phase. This tool will provide scientists interested in solar data with a single web-based interface for browsing distributed and heterogeneous catalogs of solar observations. The main goal is to have a generic application that can be easily extended to new sets of data or to new missions with a low level of maintenance. It is developed with Java and XML is used as a powerful configuration language. The server, independent of any database scheme, can communicate with a client (the user interface) and several local or remote archive access systems (such as existing web pages, ftp sites or SQL databases). Archive access systems are externally described in XML files. The user interface is also dynamically generated from an XML file containing the window building rules and a simplified database description. This project is developed at MEDOC (Multi-Experiment Data and Operations Centre), located at the Institut d'Astrophysique Spatiale (Orsay, France). Successful tests have been conducted with other solar archive access systems.
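
    A minimal sketch of the configuration idea in this record: an archive access system described externally in XML and parsed at startup to drive the generic server. The element and attribute names below are invented for illustration, not the project's actual schema.

        import xml.etree.ElementTree as ET

        # Invented XML description of one archive access system.
        config = """
        <archive name="medoc-sql" type="sql">
          <connection url="jdbc:oracle:thin:@medoc:1521/solar"/>
          <catalog table="observations" timeColumn="obs_date"/>
        </archive>
        """

        root = ET.fromstring(config)
        print(root.get('name'), root.get('type'))        # medoc-sql sql
        print(root.find('catalog').get('table'))          # observations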

  3. A survey of commercial object-oriented database management systems

    NASA Technical Reports Server (NTRS)

    Atkins, John

    1992-01-01

    The object-oriented data model is the culmination of over thirty years of database research. Initially, database research focused on the need to provide information in a consistent and efficient manner to the business community. Early data models such as the hierarchical model and the network model met the goal of consistent and efficient access to data and were substantial improvements over simple file mechanisms for storing and accessing data. However, these models required highly skilled programmers to provide access to the data. Consequently, in the early 1970s E.F. Codd, an IBM research computer scientist, proposed a new data model based on the simple mathematical notion of the relation. This model is known as the Relational Model. In the relational model, data is represented in flat tables (or relations) which have no physical or internal links between them. The simplicity of this model fostered the development of powerful but relatively simple query languages that made data directly accessible to the general database user. Except for large, multi-user database systems, a database professional was in general no longer necessary. Database professionals found that traditional data in the form of character data, dates, and numeric data were easily represented and managed via the relational model. Commercial relational database management systems proliferated and performance of relational databases improved dramatically. However, there was a growing community of potential database users whose needs were not met by the relational model. These users needed to store data with data types not available in the relational model and required a far richer modelling environment than that provided by the relational model. Indeed, the complexity of the objects to be represented in the model mandated a new approach to database technology. The Object-Oriented Model was the result.

  4. Integrating Free Computer Software in Chemistry and Biochemistry Instruction: An International Collaboration

    ERIC Educational Resources Information Center

    Cedeno, David L.; Jones, Marjorie A.; Friesen, Jon A.; Wirtz, Mark W.; Rios, Luz Amalia; Ocampo, Gonzalo Taborda

    2010-01-01

    At the Universidad de Caldas, Manizales, Colombia, we used their new computer facilities to introduce chemistry graduate students to biochemical database mining and quantum chemistry calculations using freeware. These hands-on workshops gave the students a strong introduction to easily accessible software and how to use this software to begin…

  5. libChEBI: an API for accessing the ChEBI database.

    PubMed

    Swainston, Neil; Hastings, Janna; Dekker, Adriano; Muthukrishnan, Venkatesh; May, John; Steinbeck, Christoph; Mendes, Pedro

    2016-01-01

    ChEBI is a database and ontology of chemical entities of biological interest. It is widely used as a source of identifiers to facilitate unambiguous reference to chemical entities within biological models, databases, ontologies and literature. ChEBI contains a wealth of chemical data, covering over 46,500 distinct chemical entities, and related data such as chemical formula, charge, molecular mass, structure, synonyms and links to external databases. Furthermore, ChEBI is an ontology, and thus provides meaningful links between chemical entities. Unlike many other resources, ChEBI is fully human-curated, providing a reliable, non-redundant collection of chemical entities and related data. While ChEBI is supported by a web service for programmatic access and a number of download files, it does not have an API library to facilitate the use of ChEBI and its data in cheminformatics software. To provide this missing functionality, libChEBI, a comprehensive API library for accessing ChEBI data, is introduced. libChEBI is available in Java, Python and MATLAB versions from http://github.com/libChEBI, and provides full programmatic access to all data held within the ChEBI database through a simple and documented API. libChEBI is reliant upon the (automated) download and regular update of flat files that are held locally. As such, libChEBI can be embedded in both on- and off-line software applications. libChEBI allows better support of ChEBI and its data in the development of new cheminformatics software. Covering three key programming languages, it allows for the entirety of the ChEBI database to be accessed easily and quickly through a simple API. All code is open access and freely available.
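
    A short sketch using the Python flavor of the library described above (libchebipy), looking up water by its ChEBI ID. The accessor names reflect the library's documented API as best recalled; treat them as assumptions and verify against your installed version.

        # pip install libchebipy  (flat files are downloaded locally on first use)
        from libchebipy import ChebiEntity

        water = ChebiEntity('CHEBI:15377')
        print(water.get_name())       # e.g. 'water'
        print(water.get_formulae())   # formula record(s), e.g. H2O
        print(water.get_charge())     # 0
        print(water.get_mass())       # molecular mass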

  6. Development of an expert analysis tool based on an interactive subsidence hazard map for urban land use in the city of Celaya, Mexico

    NASA Astrophysics Data System (ADS)

    Alloy, A.; Gonzalez Dominguez, F.; Nila Fonseca, A. L.; Ruangsirikulchai, A.; Gentle, J. N., Jr.; Cabral, E.; Pierce, S. A.

    2016-12-01

    Land subsidence caused by groundwater extraction in central Mexico's larger urban centers began in the 1980s as a result of population and economic growth. The city of Celaya has undergone subsidence for a few decades, and one consequence is the development of an active normal fault system that affects its urban infrastructure and residential areas. To facilitate analysis and land use decision-making, we created an online interactive map enabling users to easily obtain information associated with land subsidence. Geological and socioeconomic data for the city were collected, including fault locations and population data; other important infrastructure and structural data were obtained from fieldwork as part of a study-abroad undergraduate exchange course. The subsidence and associated faulting hazard map was created using an InSAR-derived subsidence velocity map and population data from INEGI to identify hazard zones, using a spatial analysis approach based on a subsidence gradient and population risk matrix. This interactive map provides a simple perspective of different vulnerable urban elements. As an accessible visualization tool, it will enhance communication between scientific and socio-economic disciplines. Our project also lays the groundwork for a future expert analysis system with an open source and easily accessible Python-coded, SQLite database driven website which archives fault and subsidence data along with visual documentation of damage to civil structures. This database takes field notes and provides an entry form for uniform datasets, which are used to generate JSON. Such a database is useful because it allows geoscientists to have a centralized repository and access to their observations over time. Because of the widespread presence of the subsidence phenomenon throughout cities in central Mexico, the spatial analysis has been automated using the open source software R. The raster, rgeos, shapefiles, and rgdal libraries have been used to develop the script, which produces raster maps of horizontal gradient and population density. An advantage is that this analysis can be automated for periodic updates or repurposed for similar analyses in other cities, providing an easily accessible tool for land subsidence hazard assessments.
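
    A minimal sketch of the field-notes pipeline this record describes: uniform observations stored in SQLite, then exported as JSON for the web map. The schema and field names are assumptions modeled on the description.

        import json
        import sqlite3

        con = sqlite3.connect('celaya_faults.db')
        con.execute("""CREATE TABLE IF NOT EXISTS observations (
                           id INTEGER PRIMARY KEY,
                           lat REAL, lon REAL,
                           feature TEXT, damage_note TEXT)""")
        con.execute("INSERT INTO observations (lat, lon, feature, damage_note) "
                    "VALUES (?, ?, ?, ?)",
                    (20.523, -100.815, 'normal fault scarp', 'cracked pavement'))
        con.commit()

        rows = con.execute("SELECT lat, lon, feature, damage_note "
                           "FROM observations").fetchall()
        # JSON payload consumed by the interactive map
        print(json.dumps([dict(zip(('lat', 'lon', 'feature', 'note'), r))
                          for r in rows], indent=2))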

  7. Improved Infrastructure for CDMS and JPL Molecular Spectroscopy Catalogues

    NASA Astrophysics Data System (ADS)

    Endres, Christian; Schlemmer, Stephan; Drouin, Brian; Pearson, John; Müller, Holger S. P.; Schilke, P.; Stutzki, Jürgen

    2014-06-01

    Over the past years a new infrastructure for atomic and molecular databases has been developed within the framework of the Virtual Atomic and Molecular Data Centre (VAMDC). Standards for the representation of atomic and molecular data, as well as a set of protocols, have been established which now allow data to be retrieved from various databases through one portal and easily combined. Apart from spectroscopic databases such as the Cologne Database for Molecular Spectroscopy (CDMS), the Jet Propulsion Laboratory microwave, millimeter and submillimeter spectral line catalogue (JPL) and the HITRAN database, various databases on molecular collisions (BASECOL, KIDA) and reactions (UMIST) are connected. Together with other groups within the VAMDC consortium we are working on common user tools to simplify access for new customers and to tailor data requests for users with specific needs. This comprises in particular tools to support the analysis of complex observational data obtained with the ALMA telescope. In this presentation, requests to CDMS and JPL will be used to explain the basic concepts and the tools which are provided by VAMDC. In addition, a new portal to CDMS will be presented which has a number of new features, in particular meaningful quantum numbers, references linked to data points, access to state energies and improved documentation. Fit files are accessible for download and queries to other databases are possible.

  8. SWS: accessing SRS sites contents through Web Services.

    PubMed

    Romano, Paolo; Marra, Domenico

    2008-03-26

    Web Services and Workflow Management Systems can support creation and deployment of network systems, able to automate data analysis and retrieval processes in biomedical research. Web Services have been implemented at bioinformatics centres and workflow systems have been proposed for biological data analysis. New databanks are often developed by taking into account these technologies, but many existing databases do not allow a programmatic access. Only a fraction of available databanks can thus be queried through programmatic interfaces. SRS is a well-known indexing and search engine for biomedical databanks, offering public access to many databanks and analysis tools. Unfortunately, these data are not easily and efficiently accessible through Web Services. We have developed 'SRS by WS' (SWS), a tool that makes information available in SRS sites accessible through Web Services. Information on known sites is maintained in a database, srsdb. SWS consists of a suite of Web Services that can query both srsdb, for information on sites and databases, and SRS sites. SWS returns results in a text-only format and can be accessed through a WSDL compliant client. SWS enables interoperability between workflow systems and SRS implementations, by also managing access to alternative sites, in order to cope with network and maintenance problems, and selecting the most up-to-date among available systems. Development and implementation of Web Services allowing programmatic access to an exhaustive set of biomedical databases can significantly improve automation of in-silico analysis. SWS supports this activity by making biological databanks that are managed in public SRS sites available through a programmatic interface.

  9. TheCellMap.org: A Web-Accessible Database for Visualizing and Mining the Global Yeast Genetic Interaction Network

    PubMed Central

    Usaj, Matej; Tan, Yizhao; Wang, Wen; VanderSluis, Benjamin; Zou, Albert; Myers, Chad L.; Costanzo, Michael; Andrews, Brenda; Boone, Charles

    2017-01-01

    Providing access to quantitative genomic data is key to ensure large-scale data validation and promote new discoveries. TheCellMap.org serves as a central repository for storing and analyzing quantitative genetic interaction data produced by genome-scale Synthetic Genetic Array (SGA) experiments with the budding yeast Saccharomyces cerevisiae. In particular, TheCellMap.org allows users to easily access, visualize, explore, and functionally annotate genetic interactions, or to extract and reorganize subnetworks, using data-driven network layouts in an intuitive and interactive manner. PMID:28325812

  10. TheCellMap.org: A Web-Accessible Database for Visualizing and Mining the Global Yeast Genetic Interaction Network.

    PubMed

    Usaj, Matej; Tan, Yizhao; Wang, Wen; VanderSluis, Benjamin; Zou, Albert; Myers, Chad L; Costanzo, Michael; Andrews, Brenda; Boone, Charles

    2017-05-05

    Providing access to quantitative genomic data is key to ensure large-scale data validation and promote new discoveries. TheCellMap.org serves as a central repository for storing and analyzing quantitative genetic interaction data produced by genome-scale Synthetic Genetic Array (SGA) experiments with the budding yeast Saccharomyces cerevisiae. In particular, TheCellMap.org allows users to easily access, visualize, explore, and functionally annotate genetic interactions, or to extract and reorganize subnetworks, using data-driven network layouts in an intuitive and interactive manner. Copyright © 2017 Usaj et al.

  11. Open Window: When Easily Identifiable Genomes and Traits Are in the Public Domain

    PubMed Central

    Angrist, Misha

    2014-01-01

    “One can't be of an enquiring and experimental nature, and still be very sensible.” - Charles Fort [1] As the costs of personal genetic testing and “self-quantification” fall, publicly accessible databases housing people's genotypic and phenotypic information are gradually increasing in number and scope. The latest entrant is openSNP, which allows participants to upload their personal genetic/genomic and self-reported phenotypic data. I believe the emergence of such open repositories of human biological data is a natural reflection of inquisitive and digitally literate people's desires to make genomic and phenotypic information more easily available to a community beyond the research establishment. Such unfettered databases hold the promise of contributing mightily to science, science education and medicine. That said, in an age of increasingly widespread governmental and corporate surveillance, we would do well to be mindful that genomic DNA is uniquely identifying. Participants in open biological databases are engaged in a real-time experiment whose outcome is unknown. PMID:24647311

  12. ThermoBuild: Online Method Made Available for Accessing NASA Glenn Thermodynamic Data

    NASA Technical Reports Server (NTRS)

    McBride, Bonnie; Zehe, Michael J.

    2004-01-01

    The new Web site program "ThermoBuild" allows users to easily access and use the NASA Glenn Thermodynamic Database of over 2000 solid, liquid, and gaseous species. A convenient periodic table allows users to "build" the molecules of interest and designate the temperature range over which thermodynamic functions are to be displayed. ThermoBuild also allows users to build custom databases for use with NASA's Chemical Equilibrium with Applications (CEA) program or other programs that require the NASA format for thermodynamic properties. The NASA Glenn Research Center has long been a leader in the compilation and dissemination of up-to-date thermodynamic data, primarily for use with the NASA CEA program, but increasingly for use with other computer programs.
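
    The "NASA format for thermodynamic properties" mentioned above stores Cp0/R as a seven-coefficient polynomial in temperature (the full NASA Glenn nine-constant records add two integration constants for enthalpy and entropy). A sketch of evaluating Cp from that standard form; the coefficients below are placeholders, not database values.

        # NASA Glenn form for the heat-capacity coefficients a1..a7:
        #   Cp0/R = a1*T^-2 + a2*T^-1 + a3 + a4*T + a5*T^2 + a6*T^3 + a7*T^4
        R = 8.31446  # J/(mol*K)

        def cp_nasa(a, T):
            """Cp in J/(mol*K) from the seven Cp coefficients a[0..6] at temperature T (K)."""
            return R * (a[0] / T**2 + a[1] / T + a[2]
                        + a[3] * T + a[4] * T**2 + a[5] * T**3 + a[6] * T**4)

        # Placeholder coefficients giving a constant Cp/R = 3.5 (ideal diatomic-like)
        a_placeholder = [0.0, 0.0, 3.5, 0.0, 0.0, 0.0, 0.0]
        print(cp_nasa(a_placeholder, 298.15))  # ~29.1 J/(mol*K)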

  13. Integrating Radar Image Data with Google Maps

    NASA Technical Reports Server (NTRS)

    Chapman, Bruce D.; Gibas, Sarah

    2010-01-01

    A public Web site has been developed as a method for displaying the multitude of radar imagery collected by NASA's Airborne Synthetic Aperture Radar (AIRSAR) instrument during its 16-year mission. Utilizing NASA's internal AIRSAR site, the new Web site features more sophisticated visualization tools that enable the general public to have access to these images. The site was originally maintained at NASA on six computers: one that held the Oracle database, two that took care of the software for the interactive map, and three that were for the Web site itself. Several tasks were involved in moving this complicated setup to just one computer. First, the AIRSAR database was migrated from Oracle to MySQL. Then the back-end of the AIRSAR Web site was updated in order to access the MySQL database. To do this, a few of the scripts needed to be modified; specifically three Perl scripts that query that database. The database connections were then updated from Oracle to MySQL, numerous syntax errors were corrected, and a query was implemented that replaced one of the stored Oracle procedures. Lastly, the interactive map was designed, implemented, and tested so that users could easily browse and access the radar imagery through the Google Maps interface.

  14. Designing a data portal for synthesis modeling

    NASA Astrophysics Data System (ADS)

    Holmes, M. A.

    2006-12-01

    Processing of field and model data in multi-disciplinary integrated science studies is a vital part of synthesis modeling. Collection and storage techniques for field data vary greatly between the participating scientific disciplines due to the nature of the data being collected, whether it be in situ, remotely sensed, or recorded by automated data logging equipment. Spreadsheets, personal databases, text files and binary files are used in the initial storage and processing of the raw data. In order to be useful to scientists, engineers and modelers the data need to be stored in a format that is easily identifiable, accessible and transparent to a variety of computing environments. The Model Operations and Synthesis (MOAS) database and associated web portal were created to provide such capabilities. The industry standard relational database comprises spatial and temporal data tables, shape files and supporting metadata accessible over the network, through a menu-driven web-based portal or spatially accessible through ArcSDE connections from the user's local GIS desktop software. A separate server provides public access to spatial data and model output in the form of attributed shape files through an ArcIMS web-based graphical user interface.

  15. Nuclear data made easily accessible through the Notre Dame Nuclear Database

    NASA Astrophysics Data System (ADS)

    Khouw, Timothy; Lee, Kevin; Fasano, Patrick; Mumpower, Matthew; Aprahamian, Ani

    2014-09-01

    In 1994, the NNDC revolutionized nuclear research by providing a colorful, clickable, searchable database over the internet. Over the last twenty years, web technology has evolved dramatically. Our project, the Notre Dame Nuclear Database, aims to provide a more comprehensive and broadly searchable interactive body of data. The database can be searched by an array of filters which includes metadata such as the facility where a measurement is made, the author(s), or date of publication for the datum of interest. The user interface takes full advantage of HTML, a web markup language; CSS (cascading style sheets), which defines the aesthetics of the website; and JavaScript, a language that can process complex data. A command-line interface is supported that interacts with the database directly on a user's local machine and provides single-command access to data. This is possible through the use of a standardized API (application programming interface) that relies upon well-defined filtering variables to produce customized search results. We offer an innovative chart of nuclides utilizing scalable vector graphics (SVG) to deliver users an unsurpassed level of interactivity supported on all computers and mobile devices. We will present a functional demo of our database at the conference.
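
    A sketch of the kind of single-command, filter-based API access described above. The URL and parameter names are hypothetical placeholders; the actual endpoints are not preserved in this abstract.

        # Hypothetical query: measurements for one nuclide filtered by facility.
        import json
        import urllib.parse
        import urllib.request

        params = urllib.parse.urlencode({'nuclide': '26Al',         # placeholder filter
                                         'facility': 'Notre Dame',  # placeholder filter
                                         'format': 'json'})
        url = 'https://example.org/ndndb/api/search?' + params       # placeholder base URL
        with urllib.request.urlopen(url) as resp:
            print(json.load(resp))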

  16. "TPSX: Thermal Protection System Expert and Material Property Database"

    NASA Technical Reports Server (NTRS)

    Squire, Thomas H.; Milos, Frank S.; Rasky, Daniel J. (Technical Monitor)

    1997-01-01

    The Thermal Protection Branch at NASA Ames Research Center has developed a computer program for storing, organizing, and accessing information about thermal protection materials. The program, called Thermal Protection Systems Expert and Material Property Database, or TPSX, is available for the Microsoft Windows operating system. An "on-line" version is also accessible on the World Wide Web. TPSX is designed to be a high-quality source for TPS material properties presented in a convenient, easily accessible form for use by engineers and researchers in the field of high-speed vehicle design. Data can be displayed and printed in several formats. An information window displays a brief description of the material with properties at standard pressure and temperature. A spread sheet window displays complete, detailed property information. Properties which are a function of temperature and/or pressure can be displayed as graphs. In any display the data can be converted from English to SI units with the click of a button. Two material databases included with TPSX are: 1) materials used and/or developed by the Thermal Protection Branch at NASA Ames Research Center, and 2) a database compiled by NASA Johnson Space Center (JSC). The Ames database contains over 60 advanced TPS materials including flexible blankets, rigid ceramic tiles, and ultra-high temperature ceramics. The JSC database contains over 130 insulative and structural materials. The Ames database is periodically updated and expanded as required to include newly developed materials and material property refinements.

  17. A Framework for Cloudy Model Optimization and Database Storage

    NASA Astrophysics Data System (ADS)

    Calvén, Emilia; Helton, Andrew; Sankrit, Ravi

    2018-01-01

    We present a framework for producing Cloudy photoionization models of the nebular emission from novae ejecta and storing a subset of the results in SQL database format for later usage. The database can be searched for models best fitting observed spectral line ratios. Additionally, the framework includes an optimization feature that can be used in tandem with the database to search for and improve on models by creating new Cloudy models while varying the parameters. The database search and optimization can be used to explore the structures of nebulae by deriving their properties from the best-fit models. The goal is to provide the community with a large database of Cloudy photoionization models, generated from parameters reflecting conditions within novae ejecta, that can be easily fitted to observed spectral lines; either by directly accessing the database using the framework code or through a website specifically made for this purpose.
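
    A minimal sketch of the database-search step this record describes: pick the stored model whose predicted line ratios best fit the observed ones in a chi-square sense. The table layout and ratio column names are assumptions for illustration.

        import sqlite3

        def best_fit_model(db_path, observed):
            """observed: dict mapping line-ratio column name -> (value, sigma)."""
            con = sqlite3.connect(db_path)
            cols = ', '.join(observed)  # assumes one column per line ratio
            best = None
            for row in con.execute(f"SELECT model_id, {cols} FROM models"):
                chi2 = sum(((row[i + 1] - v) / s) ** 2
                           for i, (v, s) in enumerate(observed.values()))
                if best is None or chi2 < best[1]:
                    best = (row[0], chi2)
            return best  # (model_id, chi-square)

        # e.g. best_fit_model('cloudy.db', {'ha_hb': (3.1, 0.2), 'oiii_hb': (4.7, 0.5)})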

  18. Casimage project: a digital teaching files authoring environment.

    PubMed

    Rosset, Antoine; Muller, Henning; Martins, Martina; Dfouni, Natalia; Vallée, Jean-Paul; Ratib, Osman

    2004-04-01

    The goal of the Casimage project is to offer an authoring and editing environment integrated with the Picture Archiving and Communication Systems (PACS) for creating image-based electronic teaching files. This software is based on a client/server architecture allowing remote access of users to a central database. This authoring environment allows radiologists to create reference databases and collections of digital images for teaching and research directly from clinical cases being reviewed on PACS diagnostic workstations. The environment includes all tools to create teaching files, including textual description, annotations, and image manipulation. The software also allows users to generate stand-alone CD-ROMs and web-based teaching files to easily share their collections. The system includes a web server compatible with the Medical Imaging Resource Center standard (MIRC, http://mirc.rsna.org) to easily integrate collections in the RSNA web network dedicated to teaching files. This software could be installed on any PACS workstation to allow users to add new cases at any time and anywhere during clinical operations. Several image collections were created with this tool, including thoracic imaging that was subsequently made available on a CD-ROM and on our web site and through the MIRC network for public access.

  19. Pathogen metadata platform: software for accessing and analyzing pathogen strain information.

    PubMed

    Chang, Wenling E; Peterson, Matthew W; Garay, Christopher D; Korves, Tonia

    2016-09-15

    Pathogen metadata includes information about where and when a pathogen was collected and the type of environment it came from. Along with genomic nucleotide sequence data, this metadata is growing rapidly and becoming a valuable resource not only for research but for biosurveillance and public health. However, current freely available tools for analyzing this data are geared towards bioinformaticians and/or do not provide summaries and visualizations needed to readily interpret results. We designed a platform to easily access and summarize data about pathogen samples. The software includes a PostgreSQL database that captures metadata useful for disease outbreak investigations, and scripts for downloading and parsing data from NCBI BioSample and BioProject into the database. The software provides a user interface to query metadata and obtain standardized results in an exportable, tab-delimited format. To visually summarize results, the user interface provides a 2D histogram for user-selected metadata types and mapping of geolocated entries. The software is built on the LabKey data platform, an open-source data management platform, which enables developers to add functionalities. We demonstrate the use of the software in querying for a pathogen serovar and for genome sequence identifiers. This software enables users to create a local database for pathogen metadata, populate it with data from NCBI, easily query the data, and obtain visual summaries. Some of the components, such as the database, are modular and can be incorporated into other data platforms. The source code is freely available for download at https://github.com/wchangmitre/bioattribution.
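
    The download step described above pulls records from NCBI BioSample; a sketch of that retrieval using Biopython's Entrez module. The search term and email address are placeholders, and this stands in for, rather than reproduces, the platform's own download scripts.

        # pip install biopython; NCBI asks that callers identify themselves by email.
        from Bio import Entrez

        Entrez.email = "you@example.org"  # placeholder contact address
        handle = Entrez.esearch(db="biosample",
                                term="Salmonella enterica[Organism]",
                                retmax=5)
        ids = Entrez.read(handle)["IdList"]
        print(ids)  # BioSample UIDs to fetch and parse into the local database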

  20. Lessons Learned From Developing Reactor Pressure Vessel Steel Embrittlement Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jy-An John

    Materials behavior caused by neutron irradiation in fission and/or fusion environments can be little understood without practical examination. An easily accessible material information system with a large material database running on capable computers is necessary for the design of nuclear materials and for analyses or simulations of these phenomena. The Embrittlement Data Base (EDB) developed at ORNL is such a comprehensive collection of data. The EDB contains power reactor pressure vessel surveillance data, material test reactor data, foreign reactor data (through bilateral agreements authorized by NRC), and fracture toughness data. The lessons learned from building the EDB program and the associated database management activity regarding Material Database Design Methodology, Architecture and the Embedded QA Protocol are described in this report. The development of the IAEA International Database on Reactor Pressure Vessel Materials (IDRPVM) and a comparison of the EDB and IAEA IDRPVM databases are provided in the report. The recommended database QA protocol and database infrastructure are also stated in the report.

  1. BNDB - the Biochemical Network Database.

    PubMed

    Küntzer, Jan; Backes, Christina; Blum, Torsten; Gerasch, Andreas; Kaufmann, Michael; Kohlbacher, Oliver; Lenhof, Hans-Peter

    2007-10-02

    Technological advances in high-throughput techniques and efficient data acquisition methods have resulted in a massive amount of life science data. The data is stored in numerous databases that have been established over the last decades and are essential resources for scientists nowadays. However, the diversity of the databases and the underlying data models make it difficult to combine this information for solving complex problems in systems biology. Currently, researchers typically have to browse several, often highly focused, databases to obtain the required information. Hence, there is a pressing need for more efficient systems for integrating, analyzing, and interpreting these data. The standardization and virtual consolidation of the databases is a major challenge resulting in a unified access to a variety of data sources. We present the Biochemical Network Database (BNDB), a powerful relational database platform, allowing a complete semantic integration of an extensive collection of external databases. BNDB is built upon a comprehensive and extensible object model called BioCore, which is powerful enough to model most known biochemical processes and at the same time easily extensible to be adapted to new biological concepts. Besides a web interface for the search and curation of the data, a Java-based viewer (BiNA) provides a powerful platform-independent visualization and navigation of the data. BiNA uses sophisticated graph layout algorithms for an interactive visualization and navigation of BNDB. BNDB allows a simple, unified access to a variety of external data sources. Its tight integration with the biochemical network library BN++ offers the possibility for import, integration, analysis, and visualization of the data. BNDB is freely accessible at http://www.bndb.org.

  2. Automated extraction of knowledge for model-based diagnostics

    NASA Technical Reports Server (NTRS)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

    The concept of accessing computer aided design (CAD) design databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.

  3. Online database for documenting clinical pathology resident education.

    PubMed

    Hoofnagle, Andrew N; Chou, David; Astion, Michael L

    2007-01-01

    Training of clinical pathologists is evolving and must now address the 6 core competencies described by the Accreditation Council for Graduate Medical Education (ACGME), which include patient care. A substantial portion of the patient care performed by the clinical pathology resident takes place while the resident is on call for the laboratory, a practice that provides the resident with clinical experience and assists the laboratory in providing quality service to clinicians in the hospital and surrounding community. Documenting the educational value of these on-call experiences and providing evidence of competence is difficult for residency directors. An online database of these calls, entered by residents and reviewed by faculty, would provide a mechanism for documenting and improving the education of clinical pathology residents. With Microsoft Access we developed an online database that uses active server pages and secure sockets layer encryption to document calls to the clinical pathology resident. Using the data collected, we evaluated the efficacy of 3 interventions aimed at improving resident education. The database facilitated the documentation of more than 4,700 calls in the first 21 months it was online, provided archived resident-generated data to assist in serving clients, and demonstrated that 2 interventions aimed at improving resident education were successful. We have developed a secure online database, accessible from any computer with Internet access, that can be used to easily document clinical pathology resident education and competency.
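
    A minimal sketch of the data structure such a call log needs: one row per call, entered by the resident and flagged for faculty review. The schema is an assumption modeled on the description, using SQLite rather than the Access/ASP stack named above.

        import sqlite3

        con = sqlite3.connect('cp_calls.db')
        con.execute("""CREATE TABLE IF NOT EXISTS calls (
                           id INTEGER PRIMARY KEY,
                           call_time TEXT NOT NULL,
                           resident TEXT NOT NULL,
                           service TEXT,            -- e.g. chemistry, blood bank
                           question TEXT,
                           resolution TEXT,
                           faculty_reviewed INTEGER DEFAULT 0)""")
        con.execute("INSERT INTO calls (call_time, resident, service, question) "
                    "VALUES (datetime('now'), 'resident_a', 'chemistry', "
                    "'add-on troponin on a 12-hour-old sample?')")
        con.commit()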

  4. Turning Access into a web-enabled secure information system for clinical trials.

    PubMed

    Chen, Dongquan; Chen, Wei-Bang; Soong, Mayhue; Soong, Seng-Jaw; Orthner, Helmuth F

    2009-08-01

    Organizations that have limited resources need to conduct clinical studies in a cost-effective, but secure way. Clinical data residing in various individual databases need to be easily accessed and secured. Although widely available, digital certification, encryption, and secure web servers have not been implemented as widely, partly due to a lack of understanding of needs and concerns over issues such as cost and difficulty in implementation. The objective of this study was to test the possibility of centralizing various databases and to demonstrate ways of offering an alternative to a large-scale, comprehensive and costly commercial product, especially for simple phase I and II trials, with reasonable convenience and security. We report a working procedure to transform and develop a standalone Access database into a secure Web-based information system. For data collection and reporting purposes, we centralized several individual databases and developed and tested a web-based secure server using self-issued digital certificates. The system lacks audit trails. The cost of development and maintenance may hinder its wide application. The clinical trial databases scattered in various departments of an institution could be centralized into a web-enabled secure information system. Limitations such as the lack of a calendar and audit trail can be partially addressed with additional programming. The centralized Web system may provide an alternative to a comprehensive clinical trial management system.

  5. SPSmart: adapting population based SNP genotype databases for fast and comprehensive web access.

    PubMed

    Amigo, Jorge; Salas, Antonio; Phillips, Christopher; Carracedo, Angel

    2008-10-10

    In the last five years, large online resources of human variability have appeared, notably HapMap, Perlegen and the CEPH foundation. These databases of genotypes with population information act as catalogues of human diversity, and are widely used as reference sources for population genetics studies. Although many useful conclusions may be extracted by querying databases individually, the lack of flexibility for combining data from within and between each database does not allow the calculation of key population variability statistics. We have developed a novel tool for accessing and combining large-scale genomic databases of single nucleotide polymorphisms (SNPs) in widespread use in human population genetics: SPSmart (SNPs for Population Studies). A fast pipeline creates and maintains a data mart from the most commonly accessed databases of genotypes containing population information: data is mined, summarized into the standard statistical reference indices, and stored into a relational database that currently handles as many as 4 × 10^9 genotypes and that can be easily extended to new database initiatives. We have also built a web interface to the data mart that allows the browsing of underlying data indexed by population and the combining of populations, allowing intuitive and straightforward comparison of population groups. All the information served is optimized for web display, and most of the computations are already pre-processed in the data mart to speed up the data browsing and any computational treatment requested. In practice, SPSmart allows populations to be combined into user-defined groups, while multiple databases can be accessed and compared in a few simple steps from a single query. It performs the queries rapidly and gives straightforward graphical summaries of SNP population variability through visual inspection of allele frequencies outlined in standard pie-chart format. In addition, full numerical description of the data is output in statistical results panels that include common population genetics metrics such as heterozygosity, Fst and In.
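
    A minimal sketch of two of the summary statistics named above, computed from allele frequencies: expected heterozygosity and Wright's Fst for two populations. The frequencies are made-up values for illustration.

        def heterozygosity(p):
            """Expected heterozygosity: 1 minus the sum of squared allele frequencies."""
            return 1.0 - sum(f * f for f in p)

        def fst(p1, p2):
            """Wright's Fst for two equal-sized populations: (Ht - Hs) / Ht."""
            h_s = (heterozygosity(p1) + heterozygosity(p2)) / 2.0   # mean within-pop
            p_bar = [(a + b) / 2.0 for a, b in zip(p1, p2)]          # pooled frequencies
            h_t = heterozygosity(p_bar)                              # total heterozygosity
            return (h_t - h_s) / h_t

        # Made-up biallelic SNP frequencies in two populations:
        print(heterozygosity([0.7, 0.3]))   # 0.42
        print(fst([0.7, 0.3], [0.2, 0.8]))  # ~0.25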

  6. A Database of Interplanetary and Interstellar Dust Detected by the Wind Spacecraft

    NASA Technical Reports Server (NTRS)

    Malaspina, David M.; Wilson, Lynn B., III

    2016-01-01

    It was recently discovered that the WAVES instrument on the Wind spacecraft has been detecting, in situ, interplanetary and interstellar dust of approximately 1 micron radius for the past 22 years. These data have the potential to enable advances in the study of cosmic dust and dust-plasma coupling within the heliosphere due to several unique properties: the Wind dust database spans two full solar cycles; it contains over 107,000 dust detections; it contains information about dust grain direction of motion; it contains data exclusively from the space environment within 350 Earth radii of Earth; and it overlaps by 12 years with the Ulysses dust database. Further, changes to the WAVES antenna response and the plasma environment traversed by Wind over the lifetime of the Wind mission create an opportunity for these data to inform investigations of the physics governing the coupling of dust impacts on spacecraft surfaces to electric field antennas. A Wind dust database has been created to make the Wind dust data easily accessible to the heliophysics community and other researchers. This work describes the motivation, methodology, contents, and accessibility of the Wind dust database.

  7. A web-based system architecture for ontology-based data integration in the domain of IT benchmarking

    NASA Astrophysics Data System (ADS)

    Pfaff, Matthias; Krcmar, Helmut

    2018-03-01

    In the domain of IT benchmarking (ITBM), a variety of data and information are collected. Although these data serve as the basis for business analyses, no unified semantic representation of such data yet exists. Consequently, data analysis across different distributed data sets and different benchmarks is almost impossible. This paper presents a system architecture and prototypical implementation for integrated data management of distributed databases based on a domain-specific ontology. To preserve the semantic meaning of the data, the ITBM ontology is linked to data sources and functions as the central concept for database access. Thus, additional databases can be integrated by linking them to this domain-specific ontology and are directly available for further business analyses. Moreover, the web-based system supports the process of mapping ontology concepts to external databases by introducing a semi-automatic mapping recommender and by visualizing possible mapping candidates. The system also provides a natural language interface to easily query linked databases. The expected result of this ontology-based approach to knowledge representation and data access is an increase in knowledge and data sharing in this domain, which will enhance existing business analysis methods.
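
    As a toy illustration of ontology-mediated database access, the Python sketch below stores a mapping from an ontology concept to a database column and issues a query through that mapping. The namespace, mapping predicate, and schema are invented for illustration and are not the actual ITBM ontology.

        # Sketch: resolve an ontology concept to its mapped database column,
        # then query the data through that mapping. All names are hypothetical.
        import sqlite3
        import rdflib

        g = rdflib.Graph()
        ITBM = rdflib.Namespace("http://example.org/itbm#")
        # Hypothetical mapping triple: concept 'ServerCost' maps to column 'server_cost'
        g.add((ITBM.ServerCost, ITBM.mappedToColumn, rdflib.Literal("server_cost")))

        def column_for(concept):
            """Look up the database column linked to an ontology concept."""
            return str(g.value(concept, ITBM.mappedToColumn))

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE benchmark (company TEXT, server_cost REAL)")
        conn.execute("INSERT INTO benchmark VALUES ('A', 120.0), ('B', 95.5)")

        col = column_for(ITBM.ServerCost)
        for row in conn.execute(f"SELECT company, {col} FROM benchmark"):
            print(row)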

  8. TOXBASE: Poisons information on the internet

    PubMed Central

    Bateman, D; Good, A; Laing, W; Kelly, C

    2002-01-01

    Objectives: To assess the uptake, usage and acceptability of TOXBASE, the National Poisons Information Service internet toxicology information service. Methods: An observational study of database usage and a questionnaire of users were undertaken involving users of TOXBASE within the UK between August 1999, when the internet site was launched, and May 2000. The main outcome measures were the number of registered users, usage patterns on the database, and responses to a user satisfaction questionnaire. Results: The number of registered users increased from 567 to 1500. There was a 68% increase in accident and emergency departments registered, a 159% increase in general practitioners, but a 324% increase in other hospital departments. Between January 2000 and the end of May there had been 60,281 accesses to the product database, the most frequent to the paracetamol entry (7291 accesses). Ecstasy was the seventh most frequent entry accessed. Altogether 165 of 330 questionnaires were returned. The majority came from accident and emergency departments, the major users of the system. Users were generally well (>95%) satisfied with ease and speed of access. A number of suggestions for improvements were put forward. Conclusions: TOXBASE has been extensively accessed since being placed on the internet (http://www.spib.axl.co.uk). The pattern of enquiries mirrors clinical presentation with poisoning. The system seems to be easily used. It is a model for future delivery of treatment guidelines at the point of patient care. PMID:11777868

  9. Montreal Archive of Sleep Studies: an open-access resource for instrument benchmarking and exploratory research.

    PubMed

    O'Reilly, Christian; Gosselin, Nadia; Carrier, Julie; Nielsen, Tore

    2014-12-01

    Manual processing of sleep recordings is extremely time-consuming. Efforts to automate this process have shown promising results, but automatic systems are generally evaluated on private databases, not allowing accurate cross-validation with other systems. Lacking a common benchmark, the relative performances of different systems are not easily compared and advances are compromised. To address this fundamental methodological impediment to sleep study, we propose an open-access database of polysomnographic biosignals. To build this database, whole-night recordings from 200 participants [97 males (aged 42.9 ± 19.8 years) and 103 females (aged 38.3 ± 18.9 years); age range: 18-76 years] were pooled from eight different research protocols performed in three different hospital-based sleep laboratories. All recordings feature a sampling frequency of 256 Hz and an electroencephalography (EEG) montage of 4-20 channels plus standard electro-oculography (EOG), electromyography (EMG), electrocardiography (ECG) and respiratory signals. Access to the database can be obtained through the Montreal Archive of Sleep Studies (MASS) website (http://www.ceams-carsm.ca/en/MASS), and requires only affiliation with a research institution and prior approval by the applicant's local ethical review board. Providing the research community with access to this free and open sleep database is expected to facilitate the development and cross-validation of sleep analysis automation systems. It is also expected that such a shared resource will be a catalyst for cross-centre collaborations on difficult topics such as improving inter-rater agreement on sleep stage scoring. © 2014 European Sleep Research Society.

  10. WebCN: A web-based computation tool for in situ-produced cosmogenic nuclides

    NASA Astrophysics Data System (ADS)

    Ma, Xiuzeng; Li, Yingkui; Bourgeois, Mike; Caffee, Marc; Elmore, David; Granger, Darryl; Muzikar, Paul; Smith, Preston

    2007-06-01

    Cosmogenic nuclide techniques are increasingly being utilized in geoscience research. For this it is critical to establish an effective, easily accessible and well-defined tool for cosmogenic nuclide computations. We have been developing a web-based tool (WebCN) to calculate surface exposure ages and erosion rates based on nuclide concentrations measured by accelerator mass spectrometry. WebCN for 10Be and 26Al has been completed and is available at http://www.physics.purdue.edu/primelab/for_users/rockage.html. WebCN for 36Cl is under construction. WebCN is designed as a three-tier client/server model and uses the open source PostgreSQL for database management and PHP for the interface design and calculations. On the client side, an internet browser and Microsoft Access are used as application interfaces to access the system. Open Database Connectivity is used to link PostgreSQL and Microsoft Access. WebCN accounts for both spatial and temporal distributions of the cosmic ray flux to calculate the production rates of in situ-produced cosmogenic nuclides at the Earth's surface.
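
    For reference, the simplest closed-form case of such a computation is an apparent exposure age from a measured nuclide concentration, assuming a constant production rate, no erosion, and no inheritance. The Python sketch below evaluates it; the numbers are illustrative and are not WebCN's calibrated production rates or scaling.

        # Exposure age for the no-erosion, constant-production case:
        #   N(t) = (P / lam) * (1 - exp(-lam * t))  =>  t = -ln(1 - N*lam/P) / lam
        # Values below are illustrative placeholders, not calibrated rates.
        import math

        lam = math.log(2) / 1.387e6   # 10Be decay constant (half-life ~1.387 Myr), 1/yr
        P = 4.0                       # local production rate, atoms/g/yr (illustrative)
        N = 1.2e5                     # measured concentration, atoms/g (illustrative)

        t = -math.log(1 - N * lam / P) / lam
        print(f"Apparent exposure age: {t:,.0f} years")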

  11. Information technologies in public health management: a database on biocides to improve quality of life.

    PubMed

    Roman, C; Scripcariu, L; Diaconescu, Rm; Grigoriu, A

    2012-01-01

    Biocides for prolonging the shelf life of a large variety of materials have been extensively used over the last decades. Worldwide biocide consumption was estimated at about 12.4 billion dollars in 2011, and was expected to increase in 2012. As biocides are substances we come into contact with in our everyday lives, access to this type of information is of paramount importance in order to ensure an appropriate living environment. Consequently, a database where information may be quickly processed, sorted, and easily accessed, according to different search criteria, is the most desirable solution. The main aim of this work was to design and implement a relational database with complete information about biocides used in public health management to improve the quality of life. The database was designed and implemented as a relational database using the software phpMyAdmin. The result is a database which allows for the efficient collection, storage, and management of information, including the chemical properties and applications of a large number of biocides, as well as its adequate dissemination into the public health environment. The information contained in the database promotes the adequate use of biocides by means of information technologies, which in consequence may help achieve important improvements in our quality of life.
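
    A minimal sketch of the kind of criterion-based search such a relational database supports is shown below, using Python's built-in sqlite3 in place of the MySQL backend behind phpMyAdmin; the schema and the two example rows are invented.

        # Sketch of a biocide table searchable by different criteria
        # (schema and rows are hypothetical examples).
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE biocide (
            name TEXT, cas_number TEXT, product_type TEXT, application TEXT)""")
        conn.executemany("INSERT INTO biocide VALUES (?, ?, ?, ?)", [
            ("Benzalkonium chloride", "63449-41-7", "disinfectant", "surface treatment"),
            ("Permethrin", "52645-53-1", "insecticide", "material preservation"),
        ])

        # Search by one criterion (product type), as the database is meant to support
        for row in conn.execute(
                "SELECT name, cas_number FROM biocide WHERE product_type = ?",
                ("disinfectant",)):
            print(row)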

  12. Assembly: a resource for assembled genomes at NCBI

    PubMed Central

    Kitts, Paul A.; Church, Deanna M.; Thibaud-Nissen, Françoise; Choi, Jinna; Hem, Vichet; Sapojnikov, Victor; Smith, Robert G.; Tatusova, Tatiana; Xiang, Charlie; Zherikov, Andrey; DiCuccio, Michael; Murphy, Terence D.; Pruitt, Kim D.; Kimchi, Avi

    2016-01-01

    The NCBI Assembly database (www.ncbi.nlm.nih.gov/assembly/) provides stable accessioning and data tracking for genome assembly data. The model underlying the database can accommodate a range of assembly structures, including sets of unordered contig or scaffold sequences, bacterial genomes consisting of a single complete chromosome, or complex structures such as a human genome with modeled allelic variation. The database provides an assembly accession and version to unambiguously identify the set of sequences that make up a particular version of an assembly, and tracks changes to updated genome assemblies. The Assembly database reports metadata such as assembly names, simple statistical reports of the assembly (number of contigs and scaffolds, contiguity metrics such as contig N50, total sequence length and total gap length) as well as the assembly update history. The Assembly database also tracks the relationship between an assembly submitted to the International Nucleotide Sequence Database Consortium (INSDC) and the assembly represented in the NCBI RefSeq project. Users can find assemblies of interest by querying the Assembly Resource directly or by browsing available assemblies for a particular organism. Links in the Assembly Resource allow users to easily download sequence and annotations for current versions of genome assemblies from the NCBI genomes FTP site. PMID:26578580
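
    One common programmatic route into NCBI databases, including Assembly, is the Entrez E-utilities. The Python sketch below issues an esearch query against the assembly database and prints matching record identifiers; the search term is only an example.

        # Minimal E-utilities query against the NCBI Assembly database:
        # esearch returns UIDs of assemblies matching a term (example term below).
        import json
        import urllib.parse
        import urllib.request

        params = urllib.parse.urlencode({
            "db": "assembly",
            "term": "Escherichia coli[Organism] AND latest[filter]",
            "retmode": "json",
        })
        url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?{params}"
        with urllib.request.urlopen(url) as resp:
            result = json.load(resp)

        print("Matching assembly UIDs:", result["esearchresult"]["idlist"][:5])

    A subsequent esummary call on those identifiers would return assembly-level metadata such as names and accessions.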

  13. MMDB: Entrez’s 3D-structure database

    PubMed Central

    Wang, Yanli; Anderson, John B.; Chen, Jie; Geer, Lewis Y.; He, Siqian; Hurwitz, David I.; Liebert, Cynthia A.; Madej, Thomas; Marchler, Gabriele H.; Marchler-Bauer, Aron; Panchenko, Anna R.; Shoemaker, Benjamin A.; Song, James S.; Thiessen, Paul A.; Yamashita, Roxanne A.; Bryant, Stephen H.

    2002-01-01

    Three-dimensional structures are now known within many protein families and it is quite likely, in searching a sequence database, that one will encounter a homolog with known structure. The goal of Entrez’s 3D-structure database is to make this information, and the functional annotation it can provide, easily accessible to molecular biologists. To this end Entrez’s search engine provides three powerful features. (i) Sequence and structure neighbors; one may select all sequences similar to one of interest, for example, and link to any known 3D structures. (ii) Links between databases; one may search by term matching in MEDLINE, for example, and link to 3D structures reported in these articles. (iii) Sequence and structure visualization; identifying a homolog with known structure, one may view molecular-graphic and alignment displays, to infer approximate 3D structure. In this article we focus on two features of Entrez’s Molecular Modeling Database (MMDB) not described previously: links from individual biopolymer chains within 3D structures to a systematic taxonomy of organisms represented in molecular databases, and links from individual chains (and compact 3D domains within them) to structure neighbors, other chains (and 3D domains) with similar 3D structure. MMDB may be accessed at http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=Structure. PMID:11752307

  14. "Utstein style" spreadsheet and database programs based on Microsoft Excel and Microsoft Access software for CPR data management of in-hospital resuscitation.

    PubMed

    Adams, Bruce D; Whitlock, Warren L

    2004-04-01

    In 1997, the American Heart Association, in association with representatives of the International Liaison Committee on Resuscitation (ILCOR), published recommended guidelines for reviewing, reporting and conducting in-hospital cardiopulmonary resuscitation (CPR) outcomes using the "Utstein style". Using these guidelines, we developed two Microsoft Office based database management programs that may be useful to the resuscitation community. We developed a user-friendly spreadsheet based on MS Office Excel. The user enters patient variables such as name, age, and diagnosis. Then, event resuscitation variables such as time of collapse and CPR team arrival are entered from a "code flow sheet". Finally, outcome variables such as patient condition at different time points are recorded. The program then makes automatic calculations of average response times, survival rates and other important outcome measurements. Also using the Utstein style, we developed a database program based on MS Office Access. To promote free public access to these programs, we established a website. These programs will help hospitals track, analyze, and present their CPR outcomes data. Clinical CPR researchers might also find the programs useful because they are easily modified and have statistical functions.
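
    A miniature Python analogue of the spreadsheet's automatic calculations might look like the following; the field names and the two example events are invented, and only two of the many Utstein outcome measures are shown.

        # Sketch of the automatic calculations such a spreadsheet performs:
        # mean collapse-to-team-arrival response time and survival rate
        # (field names and records are invented for illustration).
        from datetime import datetime

        events = [
            {"collapse": "10:02", "team_arrival": "10:05", "survived_to_discharge": True},
            {"collapse": "14:30", "team_arrival": "14:36", "survived_to_discharge": False},
        ]

        def minutes_between(t0, t1):
            fmt = "%H:%M"
            return (datetime.strptime(t1, fmt) - datetime.strptime(t0, fmt)).seconds / 60

        responses = [minutes_between(e["collapse"], e["team_arrival"]) for e in events]
        survival_rate = sum(e["survived_to_discharge"] for e in events) / len(events)

        print(f"Mean response time: {sum(responses)/len(responses):.1f} min")
        print(f"Survival to discharge: {survival_rate:.0%}")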

  15. External access to ALICE controls conditions data

    NASA Astrophysics Data System (ADS)

    Jadlovský, J.; Jadlovská, A.; Sarnovský, J.; Jajčišin, Š.; Čopík, M.; Jadlovská, S.; Papcun, P.; Bielek, R.; Čerkala, J.; Kopčík, M.; Chochula, P.; Augustinus, A.

    2014-06-01

    ALICE Controls data produced by the commercial SCADA system WINCCOA is stored in an ORACLE database on the private experiment network. The SCADA system allows for basic access and processing of the historical data. More advanced analysis requires tools like ROOT and therefore needs a separate access method to the archives. The present scenario expects that detector experts create simple WINCCOA scripts, which retrieve and store data in a form usable for further studies. This relatively simple procedure generates a lot of administrative overhead: users have to request the data, experts are needed to run the scripts, and the results have to be exported outside of the experiment network. The new mechanism profits from a database replica running on the CERN campus network. Access to this database is not restricted and there is no risk of generating a heavy load affecting the operation of the experiment. The tools presented in this paper allow for access to this data. Users can use web-based tools to generate requests, consisting of the data identifiers and the period of time of interest. The administrators maintain full control over the data: an authorization and authentication mechanism helps to assign privileges to selected users and restrict access to certain groups of data. An advanced caching mechanism allows the user to profit from the presence of already processed data sets. This feature significantly reduces the time required for debugging, as the retrieval of raw data can last tens of minutes. A highly configurable client allows for information retrieval bypassing the interactive interface. This method is, for example, used by ALICE Offline to extract operational conditions after a run is completed. Last but not least, the software can easily be adapted to any underlying database structure and is therefore not limited to WINCCOA.
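
    A non-interactive client of the kind described might follow the pattern sketched below: request archived values by data-point identifier and time window and receive a structured reply. The endpoint, parameter names, and reply format are entirely hypothetical.

        # Hypothetical client for a conditions-data service of the kind described:
        # request archived values by data-point identifier and time window.
        # The endpoint and parameter names are invented for illustration.
        import json
        import urllib.parse
        import urllib.request

        def fetch_conditions(identifier, start, end,
                             base="https://example.org/alice-conditions/api"):
            query = urllib.parse.urlencode(
                {"id": identifier, "from": start, "to": end})
            with urllib.request.urlopen(f"{base}?{query}") as resp:
                return json.load(resp)

        # Example call (endpoint is fictitious, so left commented out):
        # data = fetch_conditions("detector/voltage", "2014-06-01", "2014-06-02")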

  16. The Geant4 physics validation repository

    NASA Astrophysics Data System (ADS)

    Wenzel, H.; Yarba, J.; Dotti, A.

    2015-12-01

    The Geant4 collaboration regularly performs validation and regression tests. The results are stored in a central repository and can be easily accessed via a web application. In this article we describe the Geant4 physics validation repository which consists of a relational database storing experimental data and Geant4 test results, a Java API and a web application. The functionality of these components and the technology choices we made are also described.

  17. The CDC Hemophilia B mutation project mutation list: a new online resource.

    PubMed

    Li, Tengguo; Miller, Connie H; Payne, Amanda B; Craig Hooper, W

    2013-11-01

    Hemophilia B (HB) is caused by mutations in the human gene F9. The mutation type plays a pivotal role in genetic counseling and prediction of inhibitor development. To help the HB community understand the molecular etiology of HB, we have developed a listing of all F9 mutations that are reported to cause HB, based on the literature and existing databases. The Centers for Disease Control and Prevention (CDC) Hemophilia B Mutation Project (CHBMP) mutation list is compiled in an easily accessible Microsoft Excel format and contains 1083 unique mutations that are reported to cause HB. Each mutation is identified using Human Genome Variation Society (HGVS) nomenclature standards. The mutation types and the predicted changes in amino acids, if applicable, are also provided. Related information, including the location of the mutation, the severity of HB, the presence of inhibitors, and the original publication reference, is listed as well. Our mutation list therefore provides an easily accessible resource for genetic counselors and HB researchers to predict inhibitor development. The CHBMP mutation list is freely accessible at http://www.cdc.gov/hemophiliamutations.
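
    Because the list is distributed as an Excel workbook, it lends itself to programmatic filtering. The sketch below uses pandas; the file name and column names are assumptions made for illustration and should be checked against the actual workbook.

        # Load the mutation list workbook and filter it; the file name and
        # column names are assumptions, to be checked against the real file.
        import pandas as pd

        df = pd.read_excel("CHBMP_mutation_list.xlsx")

        # e.g., keep severe-hemophilia mutations with a reported inhibitor
        severe_with_inhibitor = df[(df["Severity"] == "Severe")
                                   & (df["Inhibitor"] == "Yes")]
        print(severe_with_inhibitor[["HGVS_cDNA", "Mutation_type"]].head())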

  18. Using CORBA to integrate manufacturing cells to a virtual enterprise

    NASA Astrophysics Data System (ADS)

    Pancerella, Carmen M.; Whiteside, Robert A.

    1997-01-01

    It is critical in today's enterprises that manufacturing facilities are not isolated from design, planning, and other business activities and that information flows easily and bidirectionally between these activities. It is also important and cost-effective that COTS software, databases, and corporate legacy codes are well integrated in the information architecture. Further, much of the information generated during manufacturing must be dynamically accessible to engineering and business operations, both in a restricted corporate intranet and on the internet. The software integration strategy in the Sandia Agile Manufacturing Testbed supports these enterprise requirements. We are developing a CORBA-based distributed object software system for manufacturing. Each physical machining device is a CORBA object and exports a common IDL interface to allow for rapid and dynamic insertion, deletion, and upgrading within the manufacturing cell. Cell management CORBA components access manufacturing devices without knowledge of any device-specific implementation. To support information flow from design to planning, data is accessible to machinists on the shop floor. CORBA allows manufacturing components to be easily accessible to the enterprise. Dynamic clients can be created using web browsers and portable Java GUIs. A CORBA-OLE adapter allows integration with PC desktop applications. Other commercial software can access CORBA network objects in the information architecture through vendor APIs.

  19. Upgrades to the TPSX Material Properties Database

    NASA Technical Reports Server (NTRS)

    Squire, T. H.; Milos, F. S.; Partridge, Harry (Technical Monitor)

    2001-01-01

    The TPSX Material Properties Database is a web-based tool that serves as a database for properties of advanced thermal protection materials. TPSX provides an easy user interface for retrieving material property information in a variety of forms, both graphical and text. The primary purpose and advantage of TPSX is to maintain a high quality source of often used thermal protection material properties in a convenient, easily accessible form, for distribution to government and aerospace industry communities. Last year a major upgrade to the TPSX web site was completed. This year, through the efforts of researchers at several NASA centers, the Office of the Chief Engineer awarded funds to update and expand the databases in TPSX. The FY01 effort focuses on updating and correcting the Ames and Johnson thermal protection materials databases. In this session we will summarize the improvements made to the web site last year, report on the status of the on-going database updates, describe the planned upgrades for FY02 and FY03, and provide a demonstration of TPSX.

  20. The Geant4 physics validation repository

    DOE PAGES

    Wenzel, H.; Yarba, J.; Dotti, A.

    2015-12-23

    The Geant4 collaboration regularly performs validation and regression tests. The results are stored in a central repository and can be easily accessed via a web application. In this article we describe the Geant4 physics validation repository which consists of a relational database storing experimental data and Geant4 test results, a Java API and a web application. Lastly, the functionality of these components and the technology choices we made are described.

  1. Design of Knowledge Bases for Plant Gene Regulatory Networks.

    PubMed

    Mukundi, Eric; Gomez-Cano, Fabio; Ouma, Wilberforce Zachary; Grotewold, Erich

    2017-01-01

    Developing a knowledge base that contains all the information necessary for the researcher studying gene regulation in a particular organism can be accomplished in four stages. This begins with defining the data scope. We describe here the necessary information and resources, and outline the methods for obtaining data. The second stage consists of designing the schema, which involves defining the entire arrangement of the database in a systematic plan. The third stage is the implementation, in which the database is actualized using software according to the predefined schema. The final stage is deployment, where the database is made available to users in a web-accessible system. The result is a knowledge base that integrates all the information pertaining to gene regulation, and which is easily expandable and transferable.
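
    As a miniature example of the schema-design stage, the sketch below creates a pair of tables for genes and regulatory interactions and runs one query over them. The tables, columns, and entries are hypothetical and are not the schema the authors propose.

        # Miniature example of a schema for gene-regulation data (stage two);
        # the tables, columns, and rows are hypothetical.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE gene (gene_id TEXT PRIMARY KEY, symbol TEXT);
        CREATE TABLE regulation (
            tf_id TEXT REFERENCES gene(gene_id),      -- transcription factor
            target_id TEXT REFERENCES gene(gene_id),  -- regulated gene
            evidence TEXT);
        """)
        conn.execute("INSERT INTO gene VALUES ('G1', 'MYB75'), ('G2', 'CHS')")
        conn.execute("INSERT INTO regulation VALUES ('G1', 'G2', 'ChIP-seq')")

        # All targets of a given transcription factor
        for row in conn.execute("""SELECT g.symbol FROM regulation r
                                   JOIN gene g ON g.gene_id = r.target_id
                                   WHERE r.tf_id = 'G1'"""):
            print(row)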

  2. Wolf Testing: Open Source Testing Software

    NASA Astrophysics Data System (ADS)

    Braasch, P.; Gay, P. L.

    2004-12-01

    Wolf Testing is software for easily creating and editing exams. Wolf Testing allows the user to create an exam from a database of questions, view it on screen, and easily print it along with the corresponding answer guide. The questions can be multiple choice, short answer, long answer, or true and false varieties. This software can be accessed securely from any location, allowing the user to easily create exams from home. New questions, which can include associated pictures, can be added through a web-interface. After adding in questions, they can be edited, deleted, or duplicated into multiple versions. Long-term test creation is simplified, as you are able to quickly see what questions you have asked in the past and insert them, with or without editing, into future tests. All tests are archived in the database. Written in PHP and MySQL, this software can be installed on any UNIX / Linux platform, including Macintosh OS X. The secure interface keeps students out, and allows you to decide who can create tests and who can edit information already in the database. Tests can be output as either html with pictures or rich text without pictures, and there are plans to add PDF and MS Word formats as well. We would like to thank Dr. Wolfgang Rueckner and the Harvard University Science Center for providing incentive to start this project, computers and resources to complete this project, and inspiration for the project's name. We would also like to thank Dr. Ronald Newburgh for his assistance in beta testing.

  3. Software aspects of the Geant4 validation repository

    NASA Astrophysics Data System (ADS)

    Dotti, Andrea; Wenzel, Hans; Elvira, Daniel; Genser, Krzysztof; Yarba, Julia; Carminati, Federico; Folger, Gunter; Konstantinov, Dmitri; Pokorski, Witold; Ribon, Alberto

    2017-10-01

    The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER is easily accessible via a web application. In addition, a web service allows for programmatic access to the repository to extract records in JSON or XML exchange formats. In this article, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.

  4. Software Aspects of the Geant4 Validation Repository

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dotti, Andrea; Wenzel, Hans; Elvira, Daniel

    2016-01-01

    The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER is easily accessible via a web application. In addition, a web service allows for programmatic access to the repository to extract records in JSON or XML exchange formats. In this article, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.

  5. Database of mineral deposits in the Islamic Republic of Mauritania (phase V, deliverables 90 and 91): Chapter S in Second projet de renforcement institutionnel du secteur minier de la République Islamique de Mauritanie (PRISM-II)

    USGS Publications Warehouse

    Marsh, Erin; Anderson, Eric D.

    2015-01-01

    Three ore deposit databases from previous studies were evaluated and combined with newly documented mineral occurrences into one database, which can now be used to manage information about the known mineral occurrences of Mauritania. The Microsoft Access 2010 database opens with the list of tables and forms held within the database and a Switchboard control panel from which to easily navigate through the existing mineral deposit data and to enter data for new deposit locations. The database is a helpful tool for organizing the basic information about the mineral occurrences of Mauritania. It is suggested that the database be administered by a single operator in order to avoid the data overlap and overwriting that can result from shared real-time data entry. It is proposed that the mineral occurrence database be used in concert with the geologic maps and the geophysics and geochemistry datasets, as a publicly advertised interface for the abundant geospatial information that the Mauritanian government can provide to interested parties.

  6. Design and implementation of relational databases relevant to the diverse needs of a tuberculosis case contact study in the Gambia.

    PubMed

    Jeffries, D J; Donkor, S; Brookes, R H; Fox, A; Hill, P C

    2004-09-01

    The data requirements of a large multidisciplinary tuberculosis case contact study are complex. We describe an ACCESS-based relational database system that meets our rigorous requirements for data entry and validation, while being user-friendly, flexible, exportable, and easy to install on a network or stand alone system. This includes the development of a double data entry package for epidemiology and laboratory data, semi-automated entry of ELISPOT data directly from the plate reader, and a suite of new programmes for the manipulation and integration of flow cytometry data. The double entered epidemiology and immunology databases are combined into a separate database, providing a near-real-time analysis of immuno-epidemiological data, allowing important trends to be identified early and major decisions about the study to be made and acted on. This dynamic data management model is portable and can easily be applied to other studies.
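
    The core of a double data entry package is a field-by-field comparison of two independently keyed records, with mismatches flagged for manual resolution. A minimal Python sketch of that check follows; the field names and values are invented.

        # Sketch of double data entry validation: two independently entered
        # records are compared field by field and mismatches are flagged
        # for review. Field names and values are invented.
        def compare_entries(first, second):
            """Return (field, value1, value2) tuples where the entries differ."""
            return [(k, first[k], second.get(k))
                    for k in first if first[k] != second.get(k)]

        entry_a = {"subject_id": "TB-0153", "bcg_scar": "yes", "elispot_ifn": 42}
        entry_b = {"subject_id": "TB-0153", "bcg_scar": "no",  "elispot_ifn": 42}

        for field, v1, v2 in compare_entries(entry_a, entry_b):
            print(f"Mismatch in {field!r}: first={v1!r}, second={v2!r}")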

  7. BioMart: a data federation framework for large collaborative projects.

    PubMed

    Zhang, Junjun; Haider, Syed; Baran, Joachim; Cros, Anthony; Guberman, Jonathan M; Hsu, Jack; Liang, Yong; Yao, Long; Kasprzyk, Arek

    2011-01-01

    BioMart is a freely available, open source, federated database system that provides a unified access to disparate, geographically distributed data sources. It is designed to be data agnostic and platform independent, such that existing databases can easily be incorporated into the BioMart framework. BioMart allows databases hosted on different servers to be presented seamlessly to users, facilitating collaborative projects between different research groups. BioMart contains several levels of query optimization to efficiently manage large data sets and offers a diverse selection of graphical user interfaces and application programming interfaces to ensure that queries can be performed in whatever manner is most convenient for the user. The software has now been adopted by a large number of different biological databases spanning a wide range of data types and providing a rich source of annotation available to bioinformaticians and biologists alike.

  8. P43-S Computational Biology Applications Suite for High-Performance Computing (BioHPC.net)

    PubMed Central

    Pillardy, J.

    2007-01-01

    One of the challenges of high-performance computing (HPC) is user accessibility. At the Cornell University Computational Biology Service Unit, which is also a Microsoft HPC institute, we have developed a computational biology application suite that allows researchers from biological laboratories to submit their jobs to the parallel cluster through an easy-to-use Web interface. Through this system, we are providing users with popular bioinformatics tools including BLAST, HMMER, InterproScan, and MrBayes. The system is flexible and can be easily customized to include other software. It is also scalable; the installation on our servers currently processes approximately 8500 job submissions per year, many of them requiring massively parallel computations. It also has a built-in user management system, which can limit software and/or database access to specified users. TAIR, the major database of the plant model organism Arabidopsis, and SGN, the international tomato genome database, are both using our system for storage and data analysis. The system consists of a Web server running the interface (ASP.NET C#), Microsoft SQL server (ADO.NET), compute cluster running Microsoft Windows, ftp server, and file server. Users can interact with their jobs and data via a Web browser, ftp, or e-mail. The interface is accessible at http://cbsuapps.tc.cornell.edu/.

  9. An event database for rotational seismology

    NASA Astrophysics Data System (ADS)

    Salvermoser, Johannes; Hadziioannou, Celine; Hable, Sarah; Chow, Bryant; Krischer, Lion; Wassermann, Joachim; Igel, Heiner

    2016-04-01

    The ring laser sensor (G-ring) located at Wettzell, Germany, has routinely observed earthquake-induced rotational ground motions around a vertical axis since its installation in 2003. Here we present results from a recently installed event database, which is the first to provide ring laser event data in an open-access format. Based on the GCMT event catalogue and some search criteria, seismograms from the ring laser and the collocated broadband seismometer are extracted and processed. The ObsPy-based processing scheme generates plots showing waveform fits between rotation rate and transverse acceleration and extracts characteristic wavefield parameters such as peak ground motions, noise levels, Love wave phase velocities and waveform coherence. For each event, these parameters are stored in a text file (a JSON dictionary) which is easily readable and accessible on the website. The database contains >10,000 events starting in 2007 (Mw>4.5). It is updated daily and therefore provides recent events with a time lag of at most 24 hours. The user interface allows events to be filtered by epoch, magnitude, and source area, whereupon the events are displayed on a zoomable world map. We investigate how well the rotational motions are compatible with the expectations from the surface wave magnitude scale. In addition, the website offers some Python source code examples for downloading and processing the openly accessible waveforms.
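
    Since each event is stored as a JSON dictionary, downstream filtering is straightforward. The sketch below reads a directory of such files and selects events by magnitude; the directory layout and key names are assumptions rather than the database's documented format.

        # Sketch: read per-event JSON dictionaries of the kind described and
        # filter events by magnitude (directory and keys are assumptions).
        import json
        from pathlib import Path

        events = [json.loads(p.read_text()) for p in Path("events").glob("*.json")]

        strong = [e for e in events if e.get("Mw", 0) >= 6.0]
        for e in strong:
            print(e.get("event_id"), e.get("Mw"), e.get("peak_rotation_rate"))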

  10. DamaGIS: a multisource geodatabase for collection of flood-related damage data

    NASA Astrophysics Data System (ADS)

    Saint-Martin, Clotilde; Javelle, Pierre; Vinet, Freddy

    2018-06-01

    Every year in France, recurring flood events result in several million euros of damage, and reducing the heavy consequences of floods has become a high priority. However, actions to reduce the impact of floods are often hindered by the lack of damage data on past flood events. The present paper introduces a new database for collection and assessment of flood-related damage. The DamaGIS database offers an innovative bottom-up approach to gather and identify damage data from multiple sources, including new media. The study area has been defined as the south of France considering the high frequency of floods over the past years. This paper presents the structure and contents of the database. It also presents operating instructions in order to keep collecting damage data within the database. This paper also describes an easily reproducible method to assess the severity of flood damage regardless of the location or date of occurrence. A first analysis of the damage contents is also provided in order to assess data quality and the relevance of the database. According to this analysis, despite its lack of comprehensiveness, the DamaGIS database presents many advantages. Indeed, DamaGIS provides a high accuracy of data as well as simplicity of use. It also has the additional benefit of being accessible in multiple formats and is open access. The DamaGIS database is available at https://doi.org/10.5281/zenodo.1241089.

  11. An interactive program for computer-aided map design, display, and query: EMAPKGS2

    USGS Publications Warehouse

    Pouch, G.W.

    1997-01-01

    EMAPKGS2 is a user-friendly, PC-based electronic mapping tool for use in hydrogeologic exploration and appraisal. EMAPKGS2 allows the analyst to construct maps interactively from data stored in a relational database, perform point-oriented spatial queries such as locating all wells within a specified radius, perform geographic overlays, and export the data to other programs for further analysis. EMAPKGS2 runs under Microsoft Windows 3.1 and compatible operating systems. EMAPKGS2 is a public domain program available from the Kansas Geological Survey. EMAPKGS2 is the centerpiece of WHEAT, the Windows-based Hydrogeologic Exploration and Appraisal Toolkit, a suite of user-friendly Microsoft Windows programs for natural resource exploration and management. The principal goals in development of WHEAT have been ease of use, hardware independence, low cost, and end-user extensibility. WHEAT's native data format is a Microsoft Access database. WHEAT stores a feature's geographic coordinates as attributes so they can be accessed easily by the user. The WHEAT programs are designed to be used in conjunction with other Microsoft Windows software to allow the natural resource scientist to perform work easily and effectively. WHEAT and EMAPKGS have been used at several of Kansas' Groundwater Management Districts and the Kansas Geological Survey on groundwater management operations, groundwater modeling projects, and geologic exploration projects. © 1997 Elsevier Science Ltd.
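
    A point-oriented spatial query such as "all wells within a specified radius" reduces to a great-circle distance test against stored coordinates. The Python sketch below shows the idea; the well names and coordinates are invented.

        # All wells within a given radius of a point (great-circle distance).
        # Coordinates are invented; this mirrors the kind of radius query
        # EMAPKGS2 performs against wells stored in its relational database.
        import math

        def distance_km(lat1, lon1, lat2, lon2, r_earth=6371.0):
            """Haversine great-circle distance in kilometres."""
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp, dl = p2 - p1, math.radians(lon2 - lon1)
            a = (math.sin(dp / 2) ** 2
                 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
            return 2 * r_earth * math.asin(math.sqrt(a))

        wells = [("well_A", 38.97, -95.26), ("well_B", 38.50, -98.10)]
        center, radius_km = (38.96, -95.25), 25.0

        hits = [w for w in wells
                if distance_km(center[0], center[1], w[1], w[2]) <= radius_km]
        print(hits)  # -> [('well_A', 38.97, -95.26)]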

  12. Cyclebase 3.0: a multi-organism database on cell-cycle regulation and phenotypes.

    PubMed

    Santos, Alberto; Wernersson, Rasmus; Jensen, Lars Juhl

    2015-01-01

    The eukaryotic cell division cycle is a highly regulated process that consists of a complex series of events and involves thousands of proteins. Researchers have studied the regulation of the cell cycle in several organisms, employing a wide range of high-throughput technologies, such as microarray-based mRNA expression profiling and quantitative proteomics. Due to its complexity, the cell cycle can also fail or otherwise change in many different ways if important genes are knocked out, which has been studied in several microscopy-based knockdown screens. The data from these many large-scale efforts are not easily accessed, analyzed and combined due to their inherent heterogeneity. To address this, we have created Cyclebase--available at http://www.cyclebase.org--an online database that allows users to easily visualize and download results from genome-wide cell-cycle-related experiments. In Cyclebase version 3.0, we have updated the content of the database to reflect changes to genome annotation, added new mRNA and protein expression data, and integrated cell-cycle phenotype information from high-content screens and model-organism databases. The new version of Cyclebase also features a new web interface, designed around an overview figure that summarizes all the cell-cycle-related data for a gene. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  13. Inconsistencies in the red blood cell membrane proteome analysis: generation of a database for research and diagnostic applications

    PubMed Central

    Hegedűs, Tamás; Chaubey, Pururawa Mayank; Várady, György; Szabó, Edit; Sarankó, Hajnalka; Hofstetter, Lia; Roschitzki, Bernd; Sarkadi, Balázs

    2015-01-01

    Based on recent results, the determination of the easily accessible red blood cell (RBC) membrane proteins may provide new diagnostic possibilities for assessing mutations, polymorphisms or regulatory alterations in diseases. However, the analysis of the current mass spectrometry-based proteomics datasets and other major databases indicates inconsistencies—the results show large scattering and only a limited overlap for the identified RBC membrane proteins. Here, we applied membrane-specific proteomics studies in human RBC, compared these results with the data in the literature, and generated a comprehensive and expandable database using all available data sources. The integrated web database now refers to proteomic, genetic and medical databases as well, and contains an unexpected large number of validated membrane proteins previously thought to be specific for other tissues and/or related to major human diseases. Since the determination of protein expression in RBC provides a method to indicate pathological alterations, our database should facilitate the development of RBC membrane biomarker platforms and provide a unique resource to aid related further research and diagnostics. Database URL: http://rbcc.hegelab.org PMID:26078478

  14. Supplier Management System

    NASA Technical Reports Server (NTRS)

    Ramirez, Eric; Gutheinz, Sandy; Brison, James; Ho, Anita; Allen, James; Ceritelli, Olga; Tobar, Claudia; Nguyen, Thuykien; Crenshaw, Harrel; Santos, Roxann

    2008-01-01

    Supplier Management System (SMS) allows for a consistent, agency-wide performance rating system for suppliers used by NASA. This version (2.0) combines separate databases into one central database that allows for the sharing of supplier data. Information extracted from the NBS/Oracle database can be used to generate ratings. Also, supplier ratings can now be generated in the areas of cost, product quality, delivery, and audit data. Supplier data can be charted based on real-time user input. Based on these individual ratings, an overall rating can be generated. Data that normally would be stored in multiple databases, each requiring its own log-in, is now readily available and easily accessible with only one log-in required. Additionally, the database can accommodate the storage and display of quality-related data that can be analyzed and used in the supplier procurement decision-making process. Moreover, the software allows for a Closed-Loop System (supplier feedback), as well as the capability to communicate with other federal agencies.

  15. Footprint Database and web services for the Herschel space observatory

    NASA Astrophysics Data System (ADS)

    Verebélyi, Erika; Dobos, László; Kiss, Csaba

    2015-08-01

    Using all telemetry and observational meta-data, we created a searchable database of Herschel observation footprints. Data from the Herschel space observatory is freely available for everyone but no uniformly processed catalog of all observations has been published yet. As a first step, we unified the data model for all three Herschel instruments in all observation modes and compiled a database of sky coverage information. As opposed to methods using a pixellation of the sphere, in our database, sky coverage is stored in exact geometric form allowing for precise area calculations. Indexing of the footprints allows for very fast search among observations based on pointing, time, sky coverage overlap and meta-data. This enables us, for example, to find moving objects easily in Herschel fields. The database is accessible via a web site and also as a set of REST web service functions which makes it usable from program clients like Python or IDL scripts. Data is available in various formats including Virtual Observatory standards.

  16. Nursing leadership succession planning in Veterans Health Administration: creating a useful database.

    PubMed

    Weiss, Lizabeth M; Drake, Audrey

    2007-01-01

    An electronic database was developed for succession planning and placement of nursing leaders interested in and ready, willing, and able to accept an assignment in a nursing leadership position. The tool is a 1-page form used to identify candidates for nursing leadership assignments. This tool has been deployed nationally, with access to the database restricted to nurse executives at every Veterans Health Administration facility for the purpose of entering the names of developed nurse leaders ready for a leadership assignment. The tool is easily accessed through the Veterans Health Administration Office of Nursing Service, and limiting access to the nurse executive group ensures that identified candidates are qualified. Information included on the survey tool covers the candidate's demographics and other certifications/credentials. This completed information form is entered into a database from which a report can be generated, resulting in a listing of potential candidates to contact to supplement a local or Veterans Integrated Service Network wide position announcement. The data forms can be sorted by positions, areas of clinical or functional experience, training programs completed, and geographic preference. The forms can be edited, updated, added, or deleted in the system as the need is identified. This tool allows facilities with limited internal candidates to have a resource of Department of Veterans Affairs prepared staff in which to seek additional candidates. It also provides a way for interested candidates to be considered for positions outside of their local geographic area.

  17. An Information System for European culture collections: the way forward.

    PubMed

    Casaregola, Serge; Vasilenko, Alexander; Romano, Paolo; Robert, Vincent; Ozerskaya, Svetlana; Kopf, Anna; Glöckner, Frank O; Smith, David

    2016-01-01

    Culture collections contain indispensable information about the microorganisms preserved in their repositories, such as taxonomical descriptions, origins, physiological and biochemical characteristics, bibliographic references, etc. However, information currently accessible in databases rarely adheres to common standard protocols. The resultant heterogeneity between culture collections, in terms of both content and format, notably hampers microorganism-based research and development (R&D). The optimized exploitation of these resources thus requires standardized, and simplified, access to the associated information. To this end, and in the interest of supporting R&D in the fields of agriculture, health and biotechnology, a pan-European distributed research infrastructure, MIRRI, including over 40 public culture collections and research institutes from 19 European countries, was established. A prime objective of MIRRI is to unite and provide universal access to the fragmented, and untapped, resources, information and expertise available in European public collections of microorganisms; a key component of which is to develop a dynamic Information System. For the first time, both culture collection curators as well as their users have been consulted and their feedback, concerning the needs and requirements for collection databases and data accessibility, utilised. Users primarily noted that databases were not interoperable, thus rendering a global search of multiple databases impossible. Unreliable or out-of-date and, in particular, non-homogenous, taxonomic information was also considered to be a major obstacle to searching microbial data efficiently. Moreover, complex searches are rarely possible in online databases thus limiting the extent of search queries. Curators also consider that overall harmonization (including Standard Operating Procedures, data structure, and software tools) is necessary to facilitate their work and to make high-quality data easily accessible to their users. Clearly, the needs of culture collection curators coincide with those of users on the crucial point of database interoperability. In this regard, and in order to design an appropriate Information System, important aspects on which the culture collection community should focus include: the interoperability of data sets with the ontologies to be used; setting best practice in data management, and the definition of an appropriate data standard.

  18. Customized laboratory information management system for a clinical and research leukemia cytogenetics laboratory.

    PubMed

    Bakshi, Sonal R; Shukla, Shilin N; Shah, Pankaj M

    2009-01-01

    We developed a Microsoft Access-based laboratory management system to facilitate database management of leukemia patients referred for cytogenetic tests with regard to karyotyping and fluorescence in situ hybridization (FISH). The database is custom-made for entry of patient data, clinical details, sample details, and cytogenetic test results, and for data mining in various ongoing research areas. A number of clinical research laboratory-related tasks are carried out faster using specific "queries." The tasks include tracking the clinical progression of a particular patient over multiple visits, treatment response, morphological and cytogenetic response, survival time, automatic grouping of patients by inclusion criteria in a research project, tracking various sample processing steps, turn-around time, and revenue generated. Since 2005 we have collected over 5,000 samples. The database is easily updated and is being adapted for various data maintenance and mining needs.

  19. [The Development and Application of the Orthopaedics Implants Failure Database Software Based on WEB].

    PubMed

    Huang, Jiahua; Zhou, Hai; Zhang, Binbin; Ding, Biao

    2015-09-01

    This article describes the development of a new Web-based failure database software for orthopaedic implants. The software is based on the B/S mode; ASP dynamic web technology is used as its main development language to achieve data interactivity, and Microsoft Access is used to create the database. These mature technologies make the software easy to extend and upgrade. In this article, the design and development idea of the software, the software working process and functions, as well as relevant technical features, are presented. With this software, many different types of fault events of orthopaedic implants can be stored, and the failure data can be statistically analyzed. At the macroscopic level, it can be used to evaluate the reliability of orthopaedic implants and operations, and it can ultimately guide doctors in improving the level of clinical treatment.

  20. Viral genome analysis and knowledge management.

    PubMed

    Kuiken, Carla; Yoon, Hyejin; Abfalterer, Werner; Gaschen, Brian; Lo, Chienchi; Korber, Bette

    2013-01-01

    One of the challenges of genetic data analysis is to combine information from sources that are distributed around the world and accessible through a wide array of different methods and interfaces. The HIV database and, in its footsteps, the hepatitis C virus (HCV) and hemorrhagic fever virus (HFV) databases have made it their mission to make different data types easily available to their users. This involves a large amount of behind-the-scenes processing, including quality control and analysis of the sequences and their annotation. Gene and protein sequences are distilled from the sequences that are stored in GenBank; to this end, both submitter annotation and script-generated sequences are used. Alignments of both nucleotide and amino acid sequences are generated, manually curated, distilled into an alignment model, and regenerated in an iterative cycle that results in ever better new alignments. Annotation of epidemiological and clinical information is parsed, checked, and added to the database. User interfaces are updated, and new interfaces are added based upon user requests. Vital for its success, the database staff are heavy users of the system, which enables them to fix bugs and find opportunities for improvement. In this chapter we describe some of the infrastructure that keeps these heavily used analysis platforms alive and vital after nearly 25 years of use. The database/analysis platforms described in this chapter can be accessed at http://hiv.lanl.gov, http://hcv.lanl.gov and http://hfv.lanl.gov.

  1. Development of an IHE MRRT-compliant open-source web-based reporting platform.

    PubMed

    Pinto Dos Santos, Daniel; Klos, G; Kloeckner, R; Oberle, R; Dueber, C; Mildenberger, P

    2017-01-01

    To develop a platform that uses structured reporting templates according to the IHE Management of Radiology Report Templates (MRRT) profile, and to implement this platform into clinical routine. The reporting platform uses standard web technologies (HTML / JavaScript and PHP / MySQL) only. Several freely available external libraries were used to simplify the programming. The platform runs on a standard web server, connects with the radiology information system (RIS) and PACS, and is easily accessible via a standard web browser. A prototype platform that allows structured reporting to be easily incorporated into the clinical routine was developed and successfully tested. To date, 797 reports were generated using IHE MRRT-compliant templates (many of them downloaded from the RSNA's radreport.org website). Reports are stored in a MySQL database and are easily accessible for further analyses. Development of an IHE MRRT-compliant platform for structured reporting is feasible using only standard web technologies. All source code will be made available upon request under a free license, and the participation of other institutions in further development is welcome. • A platform for structured reporting using IHE MRRT-compliant templates is presented. • Incorporating structured reporting into clinical routine is feasible. • Full source code will be provided upon request under a free license.

  2. Catalog Descriptions Using VOTable Files

    NASA Astrophysics Data System (ADS)

    Thompson, R.; Levay, K.; Kimball, T.; White, R.

    2008-08-01

    Additional information is frequently required to describe database table contents and make it understandable to users. For this reason, the Multimission Archive at Space Telescope (MAST) creates "description files" for each table/catalog. After trying various XML and CSV formats, we finally chose VOTable. These files are easy to update via an HTML form, easily read using an XML parser such as (in our case) the PHP5 SimpleXML extension, and have found multiple uses in our data access/retrieval process.
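
    Because VOTable is plain XML, such description files can be read with any XML parser. The sketch below uses Python's standard library in the role SimpleXML plays for PHP; the embedded VOTable fragment is heavily simplified, since real files carry namespaces and more metadata.

        # Parse a minimal VOTable-style description file with the standard
        # library XML parser (structure simplified for illustration).
        import xml.etree.ElementTree as ET

        votable = """<VOTABLE><RESOURCE><TABLE name="example">
          <FIELD name="ra"  datatype="double" unit="deg"/>
          <FIELD name="dec" datatype="double" unit="deg"/>
        </TABLE></RESOURCE></VOTABLE>"""

        root = ET.fromstring(votable)
        for field in root.iter("FIELD"):
            print(field.get("name"), field.get("datatype"), field.get("unit"))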

  3. Security Behavior Observatory: Infrastructure for Long-term Monitoring of Client Machines

    DTIC Science & Technology

    2014-07-14

    desired data. In Windows, this is most often a .NET language (e.g., C#, PowerShell), a command-line batch script, or Java. 3) Least privilege: To ensure...modules are written in Java, and thus should be easily portable to any OS. B. Deployment There are several high-level requirements the SBO must meet...practically feasible with such solutions. Instead, one researcher with access to all the clients' keys (stored in an isolated and secured MySQL database

  4. An Expert System for Automating Nuclear Strike Aircraft Replacement, Aircraft Beddown, and Logistics Movement for the Theater Warfare Exercise.

    DTIC Science & Technology

    1989-12-01

    that can be easily understood. (9) Parallelism. Several system components may need to execute in parallel. For example, the processing of sensor data...knowledge base are not accessible for processing by the database. Also, in the likely case that the expert system poses a series of related queries, the...knowledge base for the automation of logistics movement. The directory containing the strike aircraft replacement knowledge base

  5. McMaster Optimal Aging Portal: an evidence-based database for geriatrics-focused health professionals.

    PubMed

    Barbara, Angela M; Dobbins, Maureen; Brian Haynes, R; Iorio, Alfonso; Lavis, John N; Raina, Parminder; Levinson, Anthony J

    2017-07-11

    The objective of this work was to provide easy access to reliable health information, based on good quality research, that will help health care professionals learn what works best for seniors to stay as healthy as possible, manage health conditions and build supportive health systems. This will help meet the demands of our aging population: that clinicians provide high quality care for older adults, that public health professionals deliver disease prevention and health promotion strategies across the life span, and that policymakers address the economic and social need to create a robust health system and a healthy society for all ages. The McMaster Optimal Aging Portal's (Portal) professional bibliographic database contains high quality scientific evidence about optimal aging specifically targeted to clinicians, public health professionals and policymakers. The database content comes from three information services: McMaster Premium LiteratUre Service (MacPLUS™), Health Evidence™ and Health Systems Evidence. The Portal is continually updated, freely accessible online, easily searchable, and provides email-based alerts when new records are added. The database is continually assessed for value, usability and use. A number of improvements are planned, including French language translation of content, increased linkages between related records within the Portal database, and inclusion of additional types of content. While this article focuses on the professional database, the Portal also houses resources for patients, caregivers and the general public, which may also be of interest to geriatric practitioners and researchers.

  6. Importance of Data Management in a Long-term Biological Monitoring Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, Sigurd W; Brandt, Craig C; McCracken, Kitty

    2011-01-01

    The long-term Biological Monitoring and Abatement Program (BMAP) has always needed to collect and retain high-quality data on which to base its assessments of ecological status of streams and their recovery after remediation. Its formal quality assurance, data processing, and data management components all contribute to this need. The Quality Assurance Program comprehensively addresses requirements from various institutions, funders, and regulators, and includes a data management component. Centralized data management began a few years into the program. An existing relational database was adapted and extended to handle biological data. Data modeling enabled the program's database to process, store, and retrieve its data. The database's main data tables and several key reference tables are described. One of the most important related activities supporting long-term analyses was the establishing of standards for sampling site names, taxonomic identification, flagging, and other components. There are limitations. Some types of program data were not easily accommodated in the central systems, and many possible data-sharing and integration options are not easily accessible to investigators. The implemented relational database supports the transmittal of data to the Oak Ridge Environmental Information System (OREIS) as the permanent repository. From our experience we offer data management advice to other biologically oriented long-term environmental sampling and analysis programs.

  7. Importance of Data Management in a Long-Term Biological Monitoring Program

    NASA Astrophysics Data System (ADS)

    Christensen, Sigurd W.; Brandt, Craig C.; McCracken, Mary K.

    2011-06-01

    The long-term Biological Monitoring and Abatement Program (BMAP) has always needed to collect and retain high-quality data on which to base its assessments of ecological status of streams and their recovery after remediation. Its formal quality assurance, data processing, and data management components all contribute to meeting this need. The Quality Assurance Program comprehensively addresses requirements from various institutions, funders, and regulators, and includes a data management component. Centralized data management began a few years into the program when an existing relational database was adapted and extended to handle biological data. The database's main data tables and several key reference tables are described. One of the most important related activities supporting long-term analyses was the establishment of standards for sampling site names, taxonomic identification, flagging, and other components. The implemented relational database supports the transmittal of data to the Oak Ridge Environmental Information System (OREIS) as the permanent repository. We also discuss some limitations to our implementation. Some types of program data were not easily accommodated in the central systems, and many possible data-sharing and integration options are not easily accessible to investigators. From our experience we offer data management advice to other biologically oriented long-term environmental sampling and analysis programs.

  8. The Design of Lexical Database for Indonesian Language

    NASA Astrophysics Data System (ADS)

    Gunawan, D.; Amalia, A.

    2017-03-01

    Kamus Besar Bahasa Indonesia (KBBI), the official dictionary of the Indonesian language, provides lists of words with their meanings. The online version can be accessed via the Internet. Another online dictionary is Kateglo. KBBI online and Kateglo only provide an interface for humans; a machine cannot easily retrieve data from these dictionaries without advanced techniques. However, lexical information about words is required in research and application development related to natural language processing, text mining, information retrieval and sentiment analysis. To address this requirement, we need to build a lexical database that provides well-defined, structured information about words. A well-known lexical database is WordNet, which provides the relations among words in English. This paper proposes the design of a lexical database for the Indonesian language based on a combination of the KBBI 4th edition, Kateglo and the WordNet structure. Knowledge representation using semantic networks depicts the relations among words and provides a new structure for the Indonesian lexical database. The result of this design can be used as the foundation for building the lexical database for the Indonesian language.
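
    To make the semantic-network idea concrete, here is a minimal sketch of a WordNet-style lexical entry in Python. The sample entry, field names and relations below are invented for illustration and are not the schema proposed in the paper.

        # Sketch: a WordNet-style semantic-network node for an Indonesian
        # lexical entry. Entry, fields and relations are illustrative
        # assumptions, not the paper's actual design.
        lexicon = {
            "makan": {                      # 'to eat'
                "pos": "verb",
                "definition": "memasukkan makanan ke dalam mulut serta menelannya",
                "synonyms": ["santap"],
                "hypernym": "mengonsumsi",  # 'to consume' (assumed relation)
            },
        }

        def related(word, relation):
            """Follow one relation from a lexical entry, if present."""
            return lexicon.get(word, {}).get(relation)

        print(related("makan", "hypernym"))  # -> mengonsumsi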

  9. FAIR principles and the IEDB: short-term improvements and a long-term vision of OBO-foundry mediated machine-actionable interoperability

    PubMed Central

    Vita, Randi; Overton, James A; Mungall, Christopher J; Sette, Alessandro

    2018-01-01

    The Immune Epitope Database (IEDB), at www.iedb.org, has the mission to make published experimental data relating to the recognition of immune epitopes easily available to the scientific public. By presenting curated data in a searchable database, we have liberated it from the tables and figures of journal articles, making it more accessible and usable by immunologists. Recently, the principles of Findability, Accessibility, Interoperability and Reusability have been formulated as goals that data repositories should meet to enhance the usefulness of their data holdings. We here examine how the IEDB complies with these principles and identify broad areas of success, but also areas for improvement. We describe short-term improvements to the IEDB that are being implemented now, as well as a long-term vision of true ‘machine-actionable interoperability’, which we believe will require community agreement on standardization of knowledge representation that can be built on top of the shared use of ontologies. PMID:29688354

  10. iMeteo: a web-based weather visualization tool

    NASA Astrophysics Data System (ADS)

    Tuni San-Martín, Max; San-Martín, Daniel; Cofiño, Antonio S.

    2010-05-01

    iMeteo is a web-based weather visualization tool. Designed with an extensible J2EE architecture, it is capable of displaying information from heterogeneous data sources such as gridded data from numerical models (in NetCDF format) or databases of local predictions. All this information is presented in a user-friendly way: users can choose the specific tool to display data (maps, graphs, information tables) and customize it to desired locations. *Modular Display System* Visualization of the data is achieved through a set of mini tools called widgets. A user can add them at will and arrange them around the screen easily with a drag-and-drop movement. They can be of various types and each can be configured separately, forming a powerful and configurable system. The "Map" is the most complex widget, since it can show several variables simultaneously (either gridded or point-based) through a layered display. Other useful widgets are the "Histogram", which generates a graph with the frequency characteristics of a variable, and the "Timeline", which shows the time evolution of a variable at a given location in an interactive way. *Customization and security* Following the trends in web development, the user can easily customize the way data is displayed. Because the client side is programmed with technologies like AJAX, interaction with the application feels like a desktop application, with rapid response times. Registered users can also save their settings in the database, allowing access to their particular setup from any system with an Internet connection. There is particular emphasis on application security: the administrator can define a set of user profiles, which may have associated restrictions on access to certain data sources, geographic areas or time intervals.

  11. Making a web based ulcer record work by aligning architecture, legislation and users - a formative evaluation study.

    PubMed

    Ekeland, Anne G; Skipenes, Eva; Nyheim, Beate; Christiansen, Ellen K

    2011-01-01

    The University Hospital of North Norway selected a web-based ulcer record used in Denmark, available from mobile phones. Data were stored in a common database and were easily accessible. According to Norwegian legislation, only employees of the organization that owns an IT system can access the system, and use of mobile units requires strong security solutions. The system had to be changed. The paper addresses the interactions required to make the system legal, and assesses the regulations that followed. By addressing conflicting scripts and the contingent nature of knowledge, we conducted a formative evaluation aiming at improving the object being studied. Participatory observation during a one-year process, minutes from meetings, and information from participants constitute the data material. In the technological domain, one database was replaced by four. In the health care delivery domain, easy access was replaced by a more complicated log-on procedure, and in the domain of law and security, a clarification of risk levels was obtained, thereby allowing for access by mobile phones with today's authentication mechanisms. Flexibility concerning predefined scripts was important in all domains. Changes were made that improved the platform for further development of legitimate communication of patient data via mobile units. The study also shows the value of formative evaluations in innovations.

  12. SNPchiMp v.3: integrating and standardizing single nucleotide polymorphism data for livestock species.

    PubMed

    Nicolazzi, Ezequiel L; Caprera, Andrea; Nazzicari, Nelson; Cozzi, Paolo; Strozzi, Francesco; Lawley, Cindy; Pirani, Ali; Soans, Chandrasen; Brew, Fiona; Jorjani, Hossein; Evans, Gary; Simpson, Barry; Tosser-Klopp, Gwenola; Brauning, Rudiger; Williams, John L; Stella, Alessandra

    2015-04-10

    In recent years, the use of genomic information in livestock species for genetic improvement, association studies and many other fields has become routine. In order to accommodate different market requirements in terms of genotyping cost, manufacturers of single nucleotide polymorphism (SNP) arrays, private companies and international consortia have developed a large number of arrays with different content and different SNP density. The number of currently available SNP arrays differs among species, ranging from one for goats to more than ten for cattle, and the number of arrays available is increasing rapidly. However, there is limited or no effort to standardize and integrate array-specific (e.g. SNP IDs, allele coding) and species-specific (i.e. past and current assemblies) SNP information. Here we present SNPchiMp v.3, a solution to these issues for the six major livestock species (cow, pig, horse, sheep, goat and chicken). Original data was collected directly from SNP array producers and specific international genome consortia, and stored in a MySQL database. The database was then linked to an open-access web tool and to public databases. SNPchiMp v.3 ensures fast access to the database (retrieving within/across SNP array data) and the possibility of annotating SNP array data in a user-friendly fashion. This platform allows easy integration and standardization, and it is aimed at both industry and research. It also enables users to easily link the information available from the array producer with data in public databases, without the need for additional bioinformatics tools or pipelines. In recognition of the open-access use of Ensembl resources, SNPchiMp v.3 was officially credited as an Ensembl E!mpowered tool. Available at http://bioinformatics.tecnoparco.org/SNPchimp.

  13. DISTRIBUTED STRUCTURE-SEARCHABLE TOXICITY ...

    EPA Pesticide Factsheets

    The ability to assess the potential genotoxicity, carcinogenicity, or other toxicity of pharmaceutical or industrial chemicals based on chemical structure information is a highly coveted and shared goal of varied academic, commercial, and government regulatory groups. These diverse interests often employ different approaches and have different criteria and use for toxicity assessments, but they share a need for unrestricted access to existing public toxicity data linked with chemical structure information. Currently, there exists no central repository of toxicity information, commercial or public, that adequately meets the data requirements for flexible analogue searching, SAR model development, or building of chemical relational databases (CRD). The Distributed Structure-Searchable Toxicity (DSSTox) Public Database Network is being proposed as a community-supported, web-based effort to address these shared needs of the SAR and toxicology communities. The DSSTox project has the following major elements: 1) to adopt and encourage the use of a common standard file format (SDF) for public toxicity databases that includes chemical structure, text and property information, and that can easily be imported into available CRD applications; 2) to implement a distributed source approach, managed by a DSSTox Central Website, that will enable decentralized, free public access to structure-toxicity data files, and that will effectively link knowledgeable toxicity data s

  14. Type-Based Access Control in Data-Centric Systems

    NASA Astrophysics Data System (ADS)

    Caires, Luís; Pérez, Jorge A.; Seco, João Costa; Vieira, Hugo Torres; Ferrão, Lúcio

    Data-centric multi-user systems, such as web applications, require flexible yet fine-grained data security mechanisms. Such mechanisms are usually enforced by a specially crafted security layer, which adds extra complexity and often leads to error prone coding, easily causing severe security breaches. In this paper, we introduce a programming language approach for enforcing access control policies to data in data-centric programs by static typing. Our development is based on the general concept of refinement type, but extended so as to address realistic and challenging scenarios of permission-based data security, in which policies dynamically depend on the database state, and flexible combinations of column- and row-level protection of data are necessary. We state and prove soundness and safety of our type system, stating that well-typed programs never break the declared data access control policies.
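
    The paper's enforcement is static, via refinement types; purely to illustrate the shape of the policies involved (column-level rules, plus row-level rules that depend on database state), here is a hedged run-time analogue in Python with invented data and roles. It illustrates the policy idea only and is not the authors' type system.

        # Sketch: a dynamic analogue of column- and row-level access policies.
        # The paper checks such policies statically with refinement types;
        # here they are checked at run time purely for illustration.
        RECORDS = [
            {"id": 1, "owner": "alice", "salary": 70000, "dept": "R&D"},
            {"id": 2, "owner": "bob",   "salary": 65000, "dept": "Sales"},
        ]

        # Column-level rule: which roles may read each column.
        COLUMN_POLICY = {"salary": {"hr"}, "dept": {"hr", "staff"}}

        def read(user, role, column):
            if role not in COLUMN_POLICY.get(column, set()):
                raise PermissionError(f"{role} may not read {column}")
            # Row-level rule: non-HR users see only their own rows.
            rows = RECORDS if role == "hr" else [r for r in RECORDS if r["owner"] == user]
            return [r[column] for r in rows]

        print(read("alice", "hr", "salary"))   # [70000, 65000]
        print(read("alice", "staff", "dept"))  # ['R&D']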

  15. GenomeHubs: simple containerized setup of a custom Ensembl database and web server for any species

    PubMed Central

    Kumar, Sujai; Stevens, Lewis; Blaxter, Mark

    2017-01-01

    As the generation and use of genomic datasets is becoming increasingly common in all areas of biology, the need for resources to collate, analyse and present data from one or more genome projects is becoming more pressing. The Ensembl platform is a powerful tool to make genome data and cross-species analyses easily accessible through a web interface and a comprehensive application programming interface. Here we introduce GenomeHubs, which provide a containerized environment to facilitate the setup and hosting of custom Ensembl genome browsers. This simplifies mirroring of existing content and import of new genomic data into the Ensembl database schema. GenomeHubs also provide a set of analysis containers to decorate imported genomes with results of standard analyses and functional annotations and support export to flat files, including EMBL format for submission of assemblies and annotations to International Nucleotide Sequence Database Collaboration. Database URL: http://GenomeHubs.org PMID:28605774

  16. Integrating stations from the North America Gravity Database into a local GPS-based land gravity survey

    USGS Publications Warehouse

    Shoberg, Thomas G.; Stoddard, Paul R.

    2013-01-01

    The ability to augment local gravity surveys with additional gravity stations from easily accessible national databases can greatly increase the areal coverage and spatial resolution of a survey. It is, however, necessary to integrate such data seamlessly with the local survey. One challenge to overcome in integrating data from national databases is that these data are typically of unknown quality. This study presents a procedure for the evaluation and seamless integration of gravity data of unknown quality from a national database with data from a local Global Positioning System (GPS)-based survey. The starting components include the latitude, longitude, elevation and observed gravity at each station location. Interpolated surfaces of the complete Bouguer anomaly are used as a means of quality control and comparison. The result is an integrated dataset of varying quality with many stations having GPS accuracy and other reliable stations of unknown origin, yielding a wider coverage and greater spatial resolution than either survey alone.
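
    The quality-control idea, comparing each station of unknown quality against a surface interpolated from the trusted GPS-based survey, can be sketched as follows. The coordinates, anomaly values and the 2 mGal rejection threshold are invented assumptions for illustration, not the authors' published procedure.

        # Sketch: screen national-database gravity stations against a surface
        # interpolated from a trusted local GPS survey. All values and the
        # rejection threshold are illustrative assumptions.
        import numpy as np
        from scipy.interpolate import griddata

        # Trusted local survey: (lon, lat) and complete Bouguer anomaly (mGal)
        local_xy = np.array([[-89.1, 41.2], [-89.0, 41.3], [-88.9, 41.1], [-89.2, 41.4]])
        local_cba = np.array([-35.2, -34.8, -36.1, -35.5])

        # Candidate stations of unknown quality from the national database
        cand_xy = np.array([[-89.05, 41.25], [-88.95, 41.15]])
        cand_cba = np.array([-35.0, -41.0])

        # Predict the anomaly at each candidate location from the trusted surface
        predicted = griddata(local_xy, local_cba, cand_xy, method="linear")

        # Keep stations whose misfit is within an assumed tolerance
        keep = np.abs(cand_cba - predicted) < 2.0  # mGal, illustrative threshold
        print(cand_xy[keep])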

  17. Encryption Characteristics of Two USB-based Personal Health Record Devices

    PubMed Central

    Wright, Adam; Sittig, Dean F.

    2007-01-01

    Personal health records (PHRs) hold great promise for empowering patients and increasing the accuracy and completeness of health information. We reviewed two small USB-based PHR devices that allow a patient to easily store and transport their personal health information. Both devices offer password protection and encryption features. Analysis of the devices shows that they store their data in a Microsoft Access database. Due to a flaw in the encryption of this database, recovering the user’s password can be accomplished with minimal effort. Our analysis also showed that, rather than encrypting health information with the password chosen by the user, the devices stored the user’s password as a string in the database and then encrypted that database with a common password set by the manufacturer. This is another serious vulnerability. This article describes the weaknesses we discovered, outlines three critical flaws with the security model used by the devices, and recommends four guidelines for improving the security of similar devices. PMID:17460132
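
    The improvement implied by this analysis is to derive the encryption key from the user's own password rather than storing the password and encrypting with a manufacturer-wide key. A minimal sketch using Python's standard library follows; the salt size and iteration count are illustrative choices, and the article itself does not prescribe a specific algorithm.

        # Sketch: derive an encryption key from the user's password instead of
        # storing the password in the database and encrypting with a fixed
        # manufacturer key. Salt size and iteration count are illustrative.
        import hashlib
        import os

        def derive_key(password, salt=None):
            """Return (salt, key); store the salt, never the password or key."""
            if salt is None:
                salt = os.urandom(16)  # per-user random salt
            key = hashlib.pbkdf2_hmac(
                "sha256", password.encode("utf-8"), salt, 600_000
            )
            return salt, key

        salt, key = derive_key("correct horse battery staple")
        # 'key' would encrypt the health record; only 'salt' is stored.
        print(len(key))  # 32-byte key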

  18. A virtual observatory for photoionized nebulae: the Mexican Million Models database (3MdB).

    NASA Astrophysics Data System (ADS)

    Morisset, C.; Delgado-Inglada, G.; Flores-Fajardo, N.

    2015-04-01

    Photoionization models obtained with numerical codes are widely used to study the physics of the interstellar medium (planetary nebulae, HII regions, etc.). Grids of models are computed to understand the effects of the different parameters used to describe the regions on the observables (mainly emission-line intensities). Most of the time, only a small part of the computed results of such grids is published, and the results are sometimes hard to obtain in a user-friendly format. We present here the Mexican Million Models dataBase (3MdB), an effort to resolve both of these issues in the form of a database of photoionization models, easily accessible through the MySQL protocol, and containing many useful outputs from the models, such as the intensities of 178 emission lines, the ionic fractions of all the ions, etc. Some examples of the use of the 3MdB are also presented.
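
    Because 3MdB is exposed over the standard MySQL protocol, any MySQL client can in principle query it. The sketch below assumes the mysql-connector-python package; the host, credentials, table name and column name are placeholders, not the database's documented schema.

        # Sketch: query a photoionization-model grid over the MySQL protocol.
        # Host, credentials, table ('models') and column ('oiii_5007') are
        # hypothetical placeholders, not 3MdB's documented schema.
        import mysql.connector

        conn = mysql.connector.connect(
            host="3mdb.example.org",  # placeholder host
            user="guest",             # placeholder read-only account
            password="guest",
            database="3MdB",
        )
        cur = conn.cursor()
        # Models whose [O III] 5007 intensity (relative to Hbeta) exceeds 2
        cur.execute(
            "SELECT id, oiii_5007 FROM models WHERE oiii_5007 > %s LIMIT 10",
            (2.0,),
        )
        for model_id, oiii in cur.fetchall():
            print(model_id, oiii)
        conn.close()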

  19. RAIN: RNA–protein Association and Interaction Networks

    PubMed Central

    Junge, Alexander; Refsgaard, Jan C.; Garde, Christian; Pan, Xiaoyong; Santos, Alberto; Alkan, Ferhat; Anthon, Christian; von Mering, Christian; Workman, Christopher T.; Jensen, Lars Juhl; Gorodkin, Jan

    2017-01-01

    Protein association networks can be inferred from a range of resources including experimental data, literature mining and computational predictions. These types of evidence are emerging for non-coding RNAs (ncRNAs) as well. However, integration of ncRNAs into protein association networks is challenging due to data heterogeneity. Here, we present a database of ncRNA–RNA and ncRNA–protein interactions and its integration with the STRING database of protein–protein interactions. These ncRNA associations cover four organisms and have been established from curated examples, experimental data, interaction predictions and automatic literature mining. RAIN uses an integrative scoring scheme to assign a confidence score to each interaction. We demonstrate that RAIN outperforms the underlying microRNA-target predictions in inferring ncRNA interactions. RAIN can be operated through an easily accessible web interface and all interaction data can be downloaded. Database URL: http://rth.dk/resources/rain PMID:28077569
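
    The abstract does not give RAIN's exact combination rule; as a hedged illustration of what an integrative scoring scheme can look like, the sketch below uses the noisy-OR style combination familiar from STRING-like resources, with an assumed prior. The rule and prior are assumptions, not RAIN's published formula.

        # Sketch: combine per-channel confidences (curation, experiments,
        # predictions, text mining) into one interaction score with a
        # noisy-OR style rule. Assumed scheme, not RAIN's published formula.
        def combined_score(channel_scores, prior=0.05):
            """Noisy-OR combination of independent evidence channels."""
            p_no_interaction = 1.0
            for s in channel_scores:
                # Remove the shared prior from each channel before combining
                s_adj = max(0.0, (s - prior) / (1.0 - prior))
                p_no_interaction *= 1.0 - s_adj
            total = 1.0 - p_no_interaction
            # Add the prior back once
            return prior + (1.0 - prior) * total

        print(round(combined_score([0.4, 0.6, 0.2]), 3))  # e.g. 0.787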

  20. The BIRN Project: Imaging the Nervous System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellisman, Mark

    The grand goal in neuroscience research is to understand how the interplay of structural, chemical and electrical signals in nervous tissue gives rise to behavior. Experimental advances of the past decades have given the individual neuroscientist an increasingly powerful arsenal for obtaining data, from the level of molecules to nervous systems. Scientists have begun the arduous and challenging process of adapting and assembling neuroscience data at all scales of resolution and across disciplines into computerized databases and other easily accessed sources. These databases will complement the vast structural and sequence databases created to catalogue, organize and analyze gene sequences and protein products. The general premise of the neuroscience goal is simple; namely, with "complete" knowledge of the genome and protein structures accruing rapidly, we next need to assemble an infrastructure that will facilitate an understanding of how functional complexes operate in their cell and tissue contexts.

  2. Data on publications, structural analyses, and queries used to build and utilize the AlloRep database.

    PubMed

    Sousa, Filipa L; Parente, Daniel J; Hessman, Jacob A; Chazelle, Allen; Teichmann, Sarah A; Swint-Kruse, Liskin

    2016-09-01

    The AlloRep database (www.AlloRep.org) (Sousa et al., 2016) [1] compiles extensive sequence, mutagenesis, and structural information for the LacI/GalR family of transcription regulators. Sequence alignments are presented for >3000 proteins in 45 paralog subfamilies and as a subsampled alignment of the whole family. Phenotypic and biochemical data on almost 6000 mutants have been compiled from an exhaustive search of the literature; citations for these data are included herein. These data include information about oligomerization state, stability, DNA binding and allosteric regulation. Protein structural data for 65 proteins are presented as easily accessible residue-contact networks. Finally, this article includes example queries to enable the use of the AlloRep database. See the related article, "AlloRep: a repository of sequence, structural and mutagenesis data for the LacI/GalR transcription regulators" (Sousa et al., 2016) [1].

  3. Automation of Digital Collection Access Using Mobile and Wireless Data Terminals

    NASA Astrophysics Data System (ADS)

    Leontiev, I. V.

    Information technologies have become vital due to information-processing needs, database access, data analysis and decision support. Currently, many scientific projects are oriented toward database integration of heterogeneous systems. The problem of rapid, on-line access to large integrated systems of digital collections is also very important. Users usually move between different locations, either at work or at home, and in most cases need efficient remote access to information stored in integrated data collections. Desktop computers cannot fulfill these needs, so mobile and wireless devices become helpful. Handhelds and data terminals are necessary in medical assistance (they store detailed information about each patient and are helpful for nurses), and immediate access to data collections is used in highway patrol services (databanks of cars, owners, driver licences). Using mobile access, warehouse operations can be validated. Library and museum item cyclecounting speeds up with online barcode scanning and central database access. That is why mobile devices such as cell phones, PDAs, and handheld computers and terminals with wireless access running WindowsCE or PalmOS have become popular. Generally, mobile devices have a relatively slow processor and limited display capabilities, but they are effective for storing and displaying textual data, recognize user handwriting with a stylus, and support a GUI. Users can perform operations on a handheld terminal and exchange data with the main system for update (using immediate radio access, or offline access during a synchronization process). In our report, we give an approach for mobile access to data collections that raises the efficiency of data processing in a book library, helps to control available books and books in stock, validates service charges, eliminates staff mistakes, and generates requests for book delivery. Our system uses Symbol RF mobile devices (with radio-channel access) and Symbol Palm Terminal data terminals for batch processing and synchronization with remote library databases. We discuss the use of PalmOS-compatible devices and WindowsCE terminals. Our software system is based on a modular, scalable three-tier architecture; additional functionality can easily be customized. Scalability is also supplied by Internet/Intranet technologies and radio-access points. The base module of the system supports generic warehouse operations: cyclecounting with handheld barcode scanners, efficient item delivery and issue, item movement, reserving, and report generation on finished and in-process operations. Movements are optimized using the worker's current location; operations are sorted in priority order and transmitted to workers' mobile and wireless terminals. Mobile terminals improve task-processing control, eliminate staff mistakes, display actual information about main processes, provide data for online reports, and significantly raise the efficiency of data exchange.

  4. Accredited Orthopaedic Sports Medicine Fellowship Websites: An Updated Assessment of Accessibility and Content.

    PubMed

    Yayac, Michael; Javandal, Mitra; Mulcahey, Mary K

    2017-01-01

    A substantial number of orthopaedic surgeons apply for sports medicine fellowships after residency completion. The Internet is one of the most important resources applicants use to obtain information about fellowship programs, with the program website serving as one of the most influential sources. The American Orthopaedic Society for Sports Medicine (AOSSM), San Francisco Match (SFM), and Arthroscopy Association of North America (AANA) maintain databases of orthopaedic sports medicine fellowship programs. A 2013 study evaluated the content and accessibility of the websites for accredited orthopaedic sports medicine fellowships. To reassess these websites based on the same parameters and compare the results with those of the study published in 2013 to determine whether any improvement has been made in fellowship website content or accessibility. Cross-sectional study. We reviewed all existing websites for the 95 accredited orthopaedic sports medicine fellowships included in the AOSSM, SFM, and AANA databases. Accessibility of the websites was determined by performing a Google search for each program. A total of 89 sports fellowship websites were evaluated for overall content. Websites for the remaining 6 programs could not be identified, so they were not included in content assessment. Of the 95 accredited sports medicine fellowships, 49 (52%) provided links in the AOSSM database, 89 (94%) in the SFM database, and 24 (25%) in the AANA database. Of the 89 websites, 89 (100%) provided a description of the program, 62 (70%) provided selection process information, and 40 (45%) provided a link to the SFM website. Two searches through Google were able to identify links to 88% and 92% of all accredited programs. The majority of accredited orthopaedic sports medicine fellowship programs fail to utilize the Internet to its full potential as a resource to provide applicants with detailed information about the program, which could help residents in the selection and ranking process. Orthopaedic sports medicine fellowship websites that are easily accessible through the AOSSM, SFM, AANA, or Google and that provide all relevant information for applicants would simplify the process of deciding where to apply, interview, and ultimately how to rank orthopaedic sports medicine fellowship programs for the Orthopaedic Sports Medicine Fellowship Match.

  5. Evaluation of the content and accessibility of web sites for accredited orthopaedic sports medicine fellowships.

    PubMed

    Mulcahey, Mary K; Gosselin, Michelle M; Fadale, Paul D

    2013-06-19

    The Internet is a common source of information for orthopaedic residents applying for sports medicine fellowships, with the web sites of the American Orthopaedic Society for Sports Medicine (AOSSM) and the San Francisco Match serving as central databases. We sought to evaluate the web sites for accredited orthopaedic sports medicine fellowships with regard to content and accessibility. We reviewed the existing web sites of the ninety-five accredited orthopaedic sports medicine fellowships included in the AOSSM and San Francisco Match databases from February to March 2012. A Google search was performed to determine the overall accessibility of program web sites and to supplement information obtained from the AOSSM and San Francisco Match web sites. The study sample consisted of the eighty-seven programs whose web sites connected to information about the fellowship. Each web site was evaluated for its informational value. Of the ninety-five programs, fifty-one (54%) had links listed in the AOSSM database. Three (3%) of all accredited programs had web sites that were linked directly to information about the fellowship. Eighty-eight (93%) had links listed in the San Francisco Match database; however, only five (5%) had links that connected directly to information about the fellowship. Of the eighty-seven programs analyzed in our study, all eighty-seven web sites (100%) provided a description of the program and seventy-six web sites (87%) included information about the application process. Twenty-one web sites (24%) included a list of current fellows. Fifty-six web sites (64%) described the didactic instruction, seventy (80%) described team coverage responsibilities, forty-seven (54%) included a description of cases routinely performed by fellows, forty-one (47%) described the role of the fellow in seeing patients in the office, eleven (13%) included call responsibilities, and seventeen (20%) described a rotation schedule. Two Google searches identified direct links for 67% to 71% of all accredited programs. Most accredited orthopaedic sports medicine fellowships lack easily accessible or complete web sites in the AOSSM or San Francisco Match databases. Improvement in the accessibility and quality of information on orthopaedic sports medicine fellowship web sites would facilitate the ability of applicants to obtain useful information.

  6. Oceans of Data: In what ways can learning research inform the development of electronic interfaces and tools for use by students accessing large scientific databases?

    NASA Astrophysics Data System (ADS)

    Krumhansl, R. A.; Foster, J.; Peach, C. L.; Busey, A.; Baker, I.

    2012-12-01

    The practice of science and engineering is being revolutionized by the development of cyberinfrastructure for accessing near real-time and archived observatory data. Large cyberinfrastructure projects have the potential to transform the way science is taught in high school classrooms, making enormous quantities of scientific data available, giving students opportunities to analyze and draw conclusions from many kinds of complex data, and providing students with experiences using state-of-the-art resources and techniques for scientific investigations. However, online interfaces to scientific data are built by scientists for scientists, and their design can significantly impede broad use by novices. Knowledge relevant to the design of student interfaces to complex scientific databases is broadly dispersed among disciplines ranging from cognitive science to computer science and cartography and is not easily accessible to designers of educational interfaces. To inform efforts at bridging scientific cyberinfrastructure to the high school classroom, Education Development Center, Inc. and the Scripps Institution of Oceanography conducted an NSF-funded 2-year interdisciplinary review of literature and expert opinion pertinent to making interfaces to large scientific databases accessible to and usable by precollege learners and their teachers. Project findings are grounded in the fundamentals of Cognitive Load Theory, Visual Perception, Schemata formation and Universal Design for Learning. The Knowledge Status Report (KSR) presents cross-cutting and visualization-specific guidelines that highlight how interface design features can address or ameliorate challenges novice high school students face as they navigate complex databases to find data, and construct and look for patterns in maps, graphs, animations and other data visualizations. The guidelines present ways to make scientific databases more broadly accessible by: 1) adjusting the cognitive load imposed by the user interface and visualizations so that it doesn't exceed the amount of information the learner can actively process; 2) drawing attention to important features and patterns; and 3) enabling customization of visualizations and tools to meet the needs of diverse learners.

  7. Converting analog interpretive data to digital formats for use in database and GIS applications

    USGS Publications Warehouse

    Flocks, James G.

    2004-01-01

    There is a growing need by researchers and managers for comprehensive and unified nationwide datasets of scientific data. These datasets must be in a digital format that is easily accessible using database and GIS applications, providing the user with access to a wide variety of current and historical information. Although most data currently being collected by scientists are already in a digital format, there is still a large repository of information in the literature and paper archive. Converting this information into a format accessible by computer applications is typically very difficult and can result in loss of data. However, since scientific data are commonly collected in a repetitious, concise manner (i.e., forms, tables, graphs, etc.), these data can be recovered digitally by using a conversion process that relates the position of an attribute in two-dimensional space to the information that the attribute signifies. For example, if a table contains a certain piece of information in a specific row and column, then the space that the row and column occupies becomes an index of that information. An index key is used to identify the relation between the physical location of the attribute and the information the attribute contains. The conversion process can be achieved rapidly, easily and inexpensively using widely available digitizing and spreadsheet software, and simple programming code. In the geological sciences, sedimentary character is commonly interpreted from geophysical profiles and descriptions of sediment cores. In the field and laboratory, these interpretations were typically transcribed to paper. The information from these paper archives is still relevant and increasingly important for scientists, engineers and managers to understand geologic processes affecting our environment. Direct scanning of this information produces a raster facsimile of the data, which allows it to be linked to the electronic world. But true integration of the content with database and GIS software as point, vector or text information is commonly lost. Sediment core descriptions and interpretations of geophysical profiles are usually portrayed as lines, curves, symbols and text information. They have vertical and horizontal dimensions associated with depth, category, time, or geographic position. These dimensions are displayed in consistent positions, which can be digitized and converted to a digital format, such as a spreadsheet. Once the data are in a digital, tabulated form they can easily be made available to a wide variety of imaging and data manipulation software for compilation and world-wide dissemination.
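
    The index-key idea, mapping a digitized position in the two-dimensional layout back to the information it signifies, can be sketched briefly. All positions, attribute names and values below are invented for illustration.

        # Sketch: rebuild tabular records from digitized point positions.
        # Each digitized attribute carries only its (row, column) position;
        # an index key maps columns back to their meaning. Values invented.
        digitized = {  # (row, column) -> digitized content
            (1, 1): 0.5, (1, 2): "sand",
            (2, 1): 1.0, (2, 2): "silt",
            (3, 1): 1.5, (3, 2): "clay",
        }
        index_key = {1: "depth_m", 2: "lithology"}  # column -> attribute

        rows = sorted({r for r, _ in digitized})
        records = [
            {index_key[c]: digitized[(r, c)] for c in index_key}
            for r in rows
        ]
        print(records)  # [{'depth_m': 0.5, 'lithology': 'sand'}, ...]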

  8. A user friendly database for use in ALARA job dose assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zodiates, A.M.; Willcock, A.

    1995-03-01

    The pressurized water reactor (PWR) design chosen for adoption by Nuclear Electric plc was based on the Westinghouse Standard Nuclear Unit Power Plant (SNUPPS). This design was developed to meet the United Kingdom requirements and these improvements are embodied in the Sizewell B plant which will start commercial operation in 1994. A user-friendly database was developed to assist the station in the dose and ALARP assessments of the work expected to be carried out during station operation and outage. The database stores the information in an easily accessible form and enables updating, editing, retrieval, and searches of the information. The database contains job-related information such as job locations, number of workers required, job times, and the expected plant doserates. It also contains the means to flag job requirements such as requirements for temporary shielding, flushing, scaffolding, etc. Typical uses of the database are envisaged to be in the prediction of occupational doses, the identification of high collective and individual dose jobs, use in ALARP assessments, setting of dose targets, monitoring of dose control performance, and others.

  9. Materials properties numerical database system established and operational at CINDAS/Purdue University

    NASA Technical Reports Server (NTRS)

    Ho, C. Y.; Li, H. H.

    1989-01-01

    A computerized comprehensive numerical database system on the mechanical, thermophysical, electronic, electrical, magnetic, optical, and other properties of various types of technologically important materials such as metals, alloys, composites, dielectrics, polymers, and ceramics has been established and is operational at the Center for Information and Numerical Data Analysis and Synthesis (CINDAS) of Purdue University. This is an on-line, interactive, menu-driven, user-friendly database system. Users can easily search, retrieve, and manipulate the data from the database system without learning a special query language, special commands, or standardized names of materials, properties, and variables. It enables both the direct mode of search/retrieval of data for specified materials, properties, independent variables, etc., and the inverted mode of search/retrieval of candidate materials that meet a set of specified requirements (which is computer-aided materials selection). It also enables tabular and graphical displays and on-line data manipulations such as units conversion, variables transformation, and statistical analysis of the retrieved data. The development, content, and accessibility of the database system are presented and discussed.
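
    The inverted search mode (computer-aided materials selection) amounts to filtering materials against property bounds. A hedged sketch with an invented three-row materials table; the property values are illustrative, not CINDAS data.

        # Sketch of the "inverted" search mode: given property requirements,
        # return candidate materials satisfying all of them. The tiny table
        # and property values are invented for illustration.
        materials = [
            {"name": "Al 6061",   "density": 2.70, "k_thermal": 167.0},  # g/cm^3, W/m-K
            {"name": "Ti-6Al-4V", "density": 4.43, "k_thermal": 6.7},
            {"name": "Cu OFHC",   "density": 8.94, "k_thermal": 391.0},
        ]

        def select(materials, **bounds):
            """bounds: property -> (min, max); None means unbounded."""
            def ok(m):
                return all(
                    (lo is None or m[p] >= lo) and (hi is None or m[p] <= hi)
                    for p, (lo, hi) in bounds.items()
                )
            return [m["name"] for m in materials if ok(m)]

        # Light materials with decent thermal conductivity
        print(select(materials, density=(None, 5.0), k_thermal=(50.0, None)))
        # ['Al 6061']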

  10. MOSAIC: an online database dedicated to the comparative genomics of bacterial strains at the intra-species level.

    PubMed

    Chiapello, Hélène; Gendrault, Annie; Caron, Christophe; Blum, Jérome; Petit, Marie-Agnès; El Karoui, Meriem

    2008-11-27

    The recent availability of complete sequences for numerous closely related bacterial genomes opens up new challenges in comparative genomics. Several methods have been developed to align complete genomes at the nucleotide level but their use and the biological interpretation of results are not straightforward. It is therefore necessary to develop new resources to access, analyze, and visualize genome comparisons. Here we present recent developments on MOSAIC, a generalist comparative bacterial genome database. This database provides the bacteriologist community with easy access to comparisons of complete bacterial genomes at the intra-species level. The strategy we developed for comparison allows us to define two types of regions in bacterial genomes: backbone segments (i.e., regions conserved in all compared strains) and variable segments (i.e., regions that are either specific to or variable in one of the aligned genomes). Definition of these segments at the nucleotide level allows precise comparative and evolutionary analyses of both coding and non-coding regions of bacterial genomes. Such work is easily performed using the MOSAIC Web interface, which allows browsing and graphical visualization of genome comparisons. The MOSAIC database now includes 493 pairwise comparisons and 35 multiple maximal comparisons representing 78 bacterial species. Genome conserved regions (backbones) and variable segments are presented in various formats for further analysis. A graphical interface allows visualization of aligned genomes and functional annotations. The MOSAIC database is available online at http://genome.jouy.inra.fr/mosaic.

  11. Extraction, integration and analysis of alternative splicing and protein structure distributed information

    PubMed Central

    D'Antonio, Matteo; Masseroli, Marco

    2009-01-01

    Background Alternative splicing has been demonstrated to affect most of human genes; different isoforms from the same gene encode for proteins which differ for a limited number of residues, thus yielding similar structures. This suggests possible correlations between alternative splicing and protein structure. In order to support the investigation of such relationships, we have developed the Alternative Splicing and Protein Structure Scrutinizer (PASS), a Web application to automatically extract, integrate and analyze human alternative splicing and protein structure data sparsely available in the Alternative Splicing Database, Ensembl databank and Protein Data Bank. Primary data from these databases have been integrated and analyzed using the Protein Identifier Cross-Reference, BLAST, CLUSTALW and FeatureMap3D software tools. Results A database has been developed to store the considered primary data and the results from their analysis; a system of Perl scripts has been implemented to automatically create and update the database and analyze the integrated data; a Web interface has been implemented to make the analyses easily accessible; a database has been created to manage user accesses to the PASS Web application and store user's data and searches. Conclusion PASS automatically integrates data from the Alternative Splicing Database with protein structure data from the Protein Data Bank. Additionally, it comprehensively analyzes the integrated data with publicly available well-known bioinformatics tools in order to generate structural information of isoform pairs. Further analysis of such valuable information might reveal interesting relationships between alternative splicing and protein structure differences, which may be significantly associated with different functions. PMID:19828075

  12. Techniques for searching the CINAHL database using the EBSCO interface.

    PubMed

    Lawrence, Janna C

    2007-04-01

    The Cumulative Index to Nursing and Allied Health Literature (CINAHL) is a useful research tool for accessing articles of interest to nurses and health care professionals. More than 2,800 journals are indexed by CINAHL and can be searched easily using assigned subject headings. Detailed instructions for conducting, combining, and saving searches in CINAHL are provided in this article. Establishing an account at EBSCO further allows a nurse to save references and searches and to receive e-mail alerts when new articles on a topic of interest are published.

  13. Partnerships - Working Together to Build The National Map

    USGS Publications Warehouse

    ,

    2004-01-01

    Through The National Map, the U.S. Geological Survey (USGS) is working with partners to ensure that current, accurate, and complete base geographic information is available for the Nation. Designed as a network of online digital databases, it provides a consistent geographic data framework for the country and serves as a foundation for integrating, sharing, and using data easily and reliably. It provides public access to high quality geospatial data and information from multiple partners to help inform decisionmaking by resource managers and the public, and to support intergovernmental homeland security and emergency management requirements.

  14. No programming required. Mobile PCs can help physicians work more efficiently, especially when the application is designed to fit the practice.

    PubMed

    Campbell, J

    2000-09-01

    The San Antonio-based Jacobson Medical Group (JMG) needed a way to effectively and efficiently coordinate referral information between its hospitalist physicians and specialists. JMG decided to replace paper-based binders with something more convenient and easily updated. The organization chose to implement a mobile solution that would provide its physicians with convenient access to a database of information via a hand-held computer. The hand-held solution provides physicians with full demographic profiles of primary caregivers for each area where the group operates. The database includes multiple profiles based on different healthcare plans, along with details about preferred and authorized specialists. JMG adopted a user-friendly solution that the hospitalists and specialists would embrace and actually use.

  15. National Hospital Management Portal (NHMP): a framework for e-health implementation.

    PubMed

    Adetiba, E; Eleanya, M; Fatumo, S A; Matthews, V O

    2009-01-01

    Health information represents the main basis for the health decision-making process, and there have been some efforts to increase access to health information in developing countries. However, most of these efforts are based on the Internet, which has minimal penetration, especially in the rural and sub-urban parts of developing countries. In this work, a platform for medical record acquisition via the ubiquitous 2.5G/3G wireless communications technologies is presented. The National Hospital Management Portal (NHMP) platform has a central database at each specific country's national hospital, which can be updated and accessed from hosts at health centres, clinics, medical laboratories, teaching hospitals, private hospitals and specialist hospitals across the country. With this, doctors can access patients' medical records more easily, get immediate access to test results from laboratories, and deliver prescriptions directly to pharmacists. If a particular treatment can be provided to a patient more effectively in another country, NHMP makes it simpler to organise and carry out such treatment abroad.

  16. A new database sub-system for grain-size analysis

    NASA Astrophysics Data System (ADS)

    Suckow, Axel

    2013-04-01

    Detailed grain-size analyses of large depth profiles for palaeoclimate studies create large amounts of data. For instance, Novothny et al. (2011) presented a depth profile of grain-size analyses with 2 cm resolution and a total depth of more than 15 m, where each sample was measured with 5 repetitions on a Beckman Coulter LS13320 with 116 channels. This adds up to a total of more than four million numbers. Such amounts of data are not easily post-processed by spreadsheets or standard software; MS Access databases would also face serious performance problems. The poster describes a database sub-system dedicated to grain-size analyses. It expands the LabData database and laboratory management system published by Suckow and Dumke (2001). Compatibility with this very flexible database system makes it easy to import the grain-size data and provides the overall infrastructure for storing geographic context and organizing content, such as grouping several samples into one set or project. It also allows easy export and direct plot generation of final data in MS Excel. The sub-system allows automated import of raw data from the Beckman Coulter LS13320 Laser Diffraction Particle Size Analyzer. During post-processing MS Excel is used as a data display, but no number crunching is implemented in Excel. Raw grain-size spectra can be exported and checked as number, surface and volume fractions, while single spectra can be locked for further post-processing. From the spectra the usual statistical values (e.g. mean, median) can be computed, as well as fractions larger than a grain size, fractions smaller than a grain size, fractions between any two grain sizes, or any ratio of such values. These deduced values can be easily exported into Excel for one or more depth profiles. Such reprocessing of large amounts of data also allows new display possibilities: normally, depth profiles of grain-size data are displayed only with summarized parameters such as clay content or sand content, which show only part of the available information at each depth; alternatively, full spectra are displayed at single depths. The new software allows the whole grain-size spectrum at each depth to be displayed in a three-dimensional plot. LabData and the grain-size sub-system are based on MS Access as front-end and MS SQL Server as back-end database systems. The SQL code for the data model, the SQL Server procedures and triggers, and the MS Access Basic code for the front end are public domain code, published under the GNU GPL license agreement, and are available free of charge. References: Novothny, Á., Frechen, M., Horváth, E., Wacha, L., Rolf, C., 2011. Investigating the penultimate and last glacial cycles of the Süttő loess section (Hungary) using luminescence dating, high-resolution grain size, and magnetic susceptibility data. Quaternary International 234, 75-85. Suckow, A., Dumke, I., 2001. A database system for geochemical, isotope hydrological and geochronological laboratories. Radiocarbon 43, 325-337.
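
    The deduced statistics mentioned above (fractions finer than a given size, the median) follow directly from a volume-fraction spectrum. A minimal numpy sketch with invented channel data; a real LS13320 export has 116 channels.

        # Sketch: derive summary statistics from one grain-size spectrum.
        # Channel sizes (micrometres) and volume fractions are invented.
        import numpy as np

        size_um = np.array([1.0, 4.0, 16.0, 63.0, 250.0, 1000.0])
        vol_frac = np.array([0.10, 0.15, 0.20, 0.30, 0.20, 0.05])  # sums to 1

        cum = np.cumsum(vol_frac)

        def fraction_finer_than(limit_um):
            """Total volume fraction in channels at or below limit_um."""
            return vol_frac[size_um <= limit_um].sum()

        fines = fraction_finer_than(63.0)   # conventional sand boundary
        d50 = np.interp(0.5, cum, size_um)  # interpolated median grain size
        print(f"finer than 63 um: {fines:.2f}, D50 ~ {d50:.1f} um")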

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riemer, R.L.

    The Panel on Basic Nuclear Data Compilations believes that it is important to provide the user with an evaluated nuclear database of the highest quality, dependability, and currency. It is also important that the evaluated nuclear data are easily accessible to the user. In the past the panel concentrated its concern on the cycle time for the publication of A-chain evaluations. However, the panel now recognizes that publication cycle time is no longer the appropriate goal. Sometime in the future, publication of the evaluated A-chains will evolve from the present hard-copy Nuclear Data Sheets on library shelves to purely electronic publication, with the advent of universal access to terminals and the nuclear databases. Therefore, the literature cut-off date in the Evaluated Nuclear Structure Data File (ENSDF) is rapidly becoming the only important measure of the currency of an evaluated A-chain. Also, it has become exceedingly important to ensure that access to the databases is as user-friendly as possible and to enable electronic publication of the evaluated data files. Considerable progress has been made in these areas: use of the on-line systems has almost doubled in the past year, and there has been initial development of tools for electronic evaluation, publication, and dissemination. Currently, the nuclear data effort is in transition between the traditional and future methods of dissemination of the evaluated data. Also, many of the factors that adversely affect the publication cycle time simultaneously affect the currency of the evaluated nuclear database. Therefore, the panel continues to examine factors that can influence cycle time: the number of evaluators, the frequency with which an evaluation can be updated, the review of the evaluation, and the production of the evaluation, which currently exists as a hard-copy issue of Nuclear Data Sheets.

  18. cisPath: an R/Bioconductor package for cloud users for visualization and management of functional protein interaction networks.

    PubMed

    Wang, Likun; Yang, Luhe; Peng, Zuohan; Lu, Dan; Jin, Yan; McNutt, Michael; Yin, Yuxin

    2015-01-01

    With the burgeoning development of cloud technology and services, there are an increasing number of users who prefer the cloud to run their applications. All software and associated data are hosted on the cloud, allowing users to access them via a web browser from any computer, anywhere. This paper presents cisPath, an R/Bioconductor package deployed on cloud servers for client users to visualize, manage, and share functional protein interaction networks. With this R package, users can easily integrate downloaded protein-protein interaction information from different online databases with private data to construct new and personalized interaction networks. Additional functions allow users to generate specific networks based on private databases. Since the results produced with this package are in the form of web pages, cloud users can easily view and edit the network graphs via the browser, using a mouse or touch screen, without the need to download them to a local computer. This package can also be installed and run on a local desktop computer. Depending on user preference, results can be publicized or shared by uploading to a web server or cloud drive, allowing other users to directly access results via a web browser. This package can be installed and run on a variety of platforms. Since all network views are shown in web pages, such a package is particularly useful for cloud users. The easy installation and operation are attractive qualities for R beginners and users with no previous experience with cloud services.

  19. cisPath: an R/Bioconductor package for cloud users for visualization and management of functional protein interaction networks

    PubMed Central

    2015-01-01

    Background With the burgeoning development of cloud technology and services, there are an increasing number of users who prefer the cloud to run their applications. All software and associated data are hosted on the cloud, allowing users to access them via a web browser from any computer, anywhere. This paper presents cisPath, an R/Bioconductor package deployed on cloud servers for client users to visualize, manage, and share functional protein interaction networks. Results With this R package, users can easily integrate downloaded protein-protein interaction information from different online databases with private data to construct new and personalized interaction networks. Additional functions allow users to generate specific networks based on private databases. Since the results produced with this package are in the form of web pages, cloud users can easily view and edit the network graphs via the browser, using a mouse or touch screen, without the need to download them to a local computer. This package can also be installed and run on a local desktop computer. Depending on user preference, results can be publicized or shared by uploading to a web server or cloud drive, allowing other users to directly access results via a web browser. Conclusions This package can be installed and run on a variety of platforms. Since all network views are shown in web pages, such a package is particularly useful for cloud users. The easy installation and operation are attractive qualities for R beginners and users with no previous experience with cloud services. PMID:25708840

  20. Structuring osteosarcoma knowledge: an osteosarcoma-gene association database based on literature mining and manual annotation.

    PubMed

    Poos, Kathrin; Smida, Jan; Nathrath, Michaela; Maugg, Doris; Baumhoer, Daniel; Neumann, Anna; Korsching, Eberhard

    2014-01-01

    Osteosarcoma (OS) is the most common primary bone cancer exhibiting high genomic instability. This genomic instability affects multiple genes and microRNAs to a varying extent depending on patient and tumor subtype. Massive research is ongoing to identify genes, including their gene products, and microRNAs that correlate with disease progression and might be used as biomarkers for OS. However, the genomic complexity hampers the identification of reliable biomarkers. Up to now, clinico-pathological factors are the key determinants to guide prognosis and therapeutic treatments. Each day, new studies about OS are published, complicating the acquisition of information to support biomarker discovery and therapeutic improvements. Thus, it is necessary to provide a structured and annotated view on the current OS knowledge that is quickly and easily accessible to researchers of the field. Therefore, we developed a publicly available database and Web interface that serves as a resource for OS-associated genes and microRNAs. Genes and microRNAs were collected using an automated dictionary-based gene recognition procedure followed by manual review and annotation by experts of the field. In total, 911 genes and 81 microRNAs related to 1331 PubMed abstracts were collected (last update: 29 October 2013). Users can evaluate genes and microRNAs according to their potential prognostic and therapeutic impact, the experimental procedures, the sample types, the biological contexts and microRNA target gene interactions. Additionally, a pathway enrichment analysis of the collected genes highlights different aspects of OS progression. OS requires pathways commonly deregulated in cancer but also features OS-specific alterations like deregulated osteoclast differentiation. To our knowledge, this is the first OS database containing manually reviewed, annotated and up-to-date OS knowledge. It might be a useful resource especially for the bone tumor research community, as specific information about genes or microRNAs is quickly and easily accessible. Hence, this platform can support ongoing OS research and biomarker discovery. Database URL: http://osteosarcoma-db.uni-muenster.de. © The Author(s) 2014. Published by Oxford University Press.

  1. Structuring osteosarcoma knowledge: an osteosarcoma-gene association database based on literature mining and manual annotation

    PubMed Central

    Poos, Kathrin; Smida, Jan; Nathrath, Michaela; Maugg, Doris; Baumhoer, Daniel; Neumann, Anna; Korsching, Eberhard

    2014-01-01

    Osteosarcoma (OS) is the most common primary bone cancer and exhibits high genomic instability. This genomic instability affects multiple genes and microRNAs to a varying extent depending on patient and tumor subtype. Massive research is ongoing to identify genes, including their gene products, and microRNAs that correlate with disease progression and might be used as biomarkers for OS. However, the genomic complexity hampers the identification of reliable biomarkers. Up to now, clinico-pathological factors have been the key determinants guiding prognosis and therapeutic treatment. New studies about OS are published every day, complicating the acquisition of information to support biomarker discovery and therapeutic improvements. Thus, it is necessary to provide a structured and annotated view of current OS knowledge that is quickly and easily accessible to researchers in the field. We therefore developed a publicly available database and web interface that serves as a resource for OS-associated genes and microRNAs. Genes and microRNAs were collected using an automated dictionary-based gene recognition procedure followed by manual review and annotation by experts in the field. In total, 911 genes and 81 microRNAs related to 1331 PubMed abstracts were collected (last update: 29 October 2013). Users can evaluate genes and microRNAs according to their potential prognostic and therapeutic impact, the experimental procedures, the sample types, the biological contexts and microRNA-target gene interactions. Additionally, a pathway enrichment analysis of the collected genes highlights different aspects of OS progression. OS involves pathways commonly deregulated in cancer but also features OS-specific alterations such as deregulated osteoclast differentiation. To our knowledge, this is the first OS database containing manually reviewed and annotated up-to-date OS knowledge. It should be a useful resource especially for the bone tumor research community, as specific information about genes or microRNAs is quickly and easily accessible. Hence, this platform can support ongoing OS research and biomarker discovery. Database URL: http://osteosarcoma-db.uni-muenster.de PMID:24865352

  2. An innovative approach to capability-based emergency operations planning

    PubMed Central

    Keim, Mark E

    2013-01-01

    This paper describes the innovative use of information technology to assist disaster planners with an easily accessible method for writing and improving evidence-based emergency operations plans. The process is used to identify all key objectives of the emergency response according to the capabilities of the institution, community or society. The approach then uses a standardized, objective-based format, along with a consensus-based method, for drafting capability-based operational-level plans. This information is then integrated within a relational database to allow ease of access and enhanced functionality to search, sort and filter an emergency operations plan according to user need and technological capacity. This integrated approach is offered as an effective option for integrating best practices of planning with the efficiency, scalability and flexibility of modern information and communication technology. PMID:28228987

  3. An innovative approach to capability-based emergency operations planning.

    PubMed

    Keim, Mark E

    2013-01-01

    This paper describes the innovative use of information technology to assist disaster planners with an easily accessible method for writing and improving evidence-based emergency operations plans. The process is used to identify all key objectives of the emergency response according to the capabilities of the institution, community or society. The approach then uses a standardized, objective-based format, along with a consensus-based method, for drafting capability-based operational-level plans. This information is then integrated within a relational database to allow ease of access and enhanced functionality to search, sort and filter an emergency operations plan according to user need and technological capacity. This integrated approach is offered as an effective option for integrating best practices of planning with the efficiency, scalability and flexibility of modern information and communication technology.

  4. Improving usability and accessibility of cheminformatics tools for chemists through cyberinfrastructure and education.

    PubMed

    Guha, Rajarshi; Wiggins, Gary D; Wild, David J; Baik, Mu-Hyun; Pierce And, Marlon E; Fox, Geoffrey C

    Some of the latest trends in cheminformatics, computation, and the world wide web are reviewed with predictions of how these are likely to impact the field of cheminformatics in the next five years. The vision and some of the work of the Chemical Informatics and Cyberinfrastructure Collaboratory at Indiana University are described, which we base around the core concepts of e-Science and cyberinfrastructure that have proven successful in other fields. Our chemical informatics cyberinfrastructure is realized by building a flexible, generic infrastructure for cheminformatics tools and databases, exporting "best of breed" methods as easily-accessible web APIs for cheminformaticians, scientists, and researchers in other disciplines, and hosting a unique chemical informatics education program aimed at scientists and cheminformatics practitioners in academia and industry.

  5. TransAtlasDB: an integrated database connecting expression data, metadata and variants

    PubMed Central

    Adetunji, Modupeore O; Lamont, Susan J; Schmidt, Carl J

    2018-01-01

    High-throughput transcriptome sequencing (RNAseq) is the universally applied method for target-free transcript identification and gene expression quantification, generating huge amounts of data. The difficulty of accessing such data and interpreting the results can be a major impediment to postulating suitable hypotheses, so an innovative storage solution that addresses limitations such as hard disk storage requirements, efficiency and reproducibility is paramount. By offering a uniform data storage and retrieval mechanism, diverse data can be compared and easily investigated. We present a sophisticated system, TransAtlasDB, which incorporates a hybrid architecture of relational and NoSQL databases for fast and efficient storage, processing and querying of large datasets from transcript expression analysis, with corresponding metadata as well as gene-associated variants (such as SNPs) and their predicted gene effects. TransAtlasDB provides a data model for accurate storage of the large amounts of data derived from RNAseq analysis, together with methods of interacting with the database either through command-line data management workflows, written in Perl, with functionality that simplifies the storage and manipulation of the massive amounts of data generated by RNAseq analysis, or through the web interface. The database application is currently modeled to handle analysis data from agricultural species and will be expanded to include more species groups. Overall, TransAtlasDB aims to serve as an accessible repository for the large, complex result files derived from RNAseq gene expression profiling and variant analysis. Database URL: https://modupeore.github.io/TransAtlasDB/ PMID:29688361
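
    The hybrid relational/NoSQL layout described above can be sketched minimally: a relational table for structured sample metadata, with bulky per-sample results kept as documents. The sketch below uses SQLite and JSON files as stand-ins; the table, field, and file names are invented for illustration and are not TransAtlasDB's actual schema.

      # Minimal sketch of a hybrid storage layout in the spirit of TransAtlasDB:
      # relational tables for structured metadata, document storage for bulky
      # per-sample expression results. All names here are illustrative.
      import json
      import sqlite3

      conn = sqlite3.connect("transatlas_demo.db")
      conn.execute("""CREATE TABLE IF NOT EXISTS samples (
                        sample_id TEXT PRIMARY KEY,
                        species   TEXT,
                        tissue    TEXT)""")
      conn.execute("INSERT OR REPLACE INTO samples VALUES (?, ?, ?)",
                   ("S001", "Gallus gallus", "liver"))
      conn.commit()

      # Expression values stored as a JSON document keyed by sample_id,
      # standing in for the NoSQL side of the hybrid design.
      expression_doc = {"sample_id": "S001",
                        "fpkm": {"GeneA": 12.4, "GeneB": 0.7}}
      with open("S001_expression.json", "w") as fh:
          json.dump(expression_doc, fh)

      # Query: relational lookup first, then fetch the matching document.
      row = conn.execute("SELECT sample_id, tissue FROM samples "
                         "WHERE species = ?", ("Gallus gallus",)).fetchone()
      with open(f"{row[0]}_expression.json") as fh:
          print(row[1], json.load(fh)["fpkm"])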

  6. Introducing a New Interface for the Online MagIC Database by Integrating Data Uploading, Searching, and Visualization

    NASA Astrophysics Data System (ADS)

    Jarboe, N.; Minnett, R.; Constable, C.; Koppers, A. A.; Tauxe, L.

    2013-12-01

    The Magnetics Information Consortium (MagIC) is dedicated to supporting the paleomagnetic, geomagnetic, and rock magnetic communities through the development and maintenance of an online database (http://earthref.org/MAGIC/), data upload and quality control, searches, data downloads, and visualization tools. While MagIC has completed importing some of the IAGA paleomagnetic databases (TRANS, PINT, PSVRL, GPMDB) and continues to import others (ARCHEO, MAGST and SECVR), individual data uploads from the community contribute a wealth of easily accessible, rich datasets. Previously, uploading data to the MagIC database required an Excel spreadsheet on either a Mac or PC. The new method of uploading data uses an HTML5 web interface whose only requirement is a modern browser. This web interface highlights all errors discovered in the dataset at once, instead of the iterative error-checking process of the previous Excel spreadsheet data checker. As a web service, the community will always have easy access to the most up-to-date and bug-free version of the data upload software. The filtering search mechanism of the MagIC database has been changed to a more intuitive system in which the data from each contribution are displayed in tables similar to how the data are uploaded (http://earthref.org/MAGIC/search/). Searches themselves can be saved as a permanent URL, if desired; the saved search URL could then be used as a citation in a publication. When appropriate, plots (equal area, Zijderveld, ARAI, demagnetization, etc.) are associated with the data to give the user a quicker understanding of the underlying dataset. The MagIC database will continue to evolve to meet the needs of the paleomagnetic, geomagnetic, and rock magnetic communities.
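
    The "all errors at once" behaviour of the new upload interface can be sketched as a validator that accumulates problems instead of stopping at the first one. The required columns and range check below are hypothetical, not MagIC's actual validation rules.

      # Sketch of report-all-errors validation, as described for the MagIC
      # upload interface; the required columns here are hypothetical.
      def validate_rows(rows, required=("site", "latitude", "longitude")):
          """Collect every problem in the dataset instead of stopping at the first."""
          errors = []
          for i, row in enumerate(rows, start=1):
              for col in required:
                  if row.get(col) in ("", None):
                      errors.append(f"row {i}: missing value for '{col}'")
              lat = row.get("latitude")
              if lat not in ("", None) and not (-90.0 <= float(lat) <= 90.0):
                  errors.append(f"row {i}: latitude {lat} out of range")
          return errors

      rows = [{"site": "A1", "latitude": "34.2", "longitude": "-111.0"},
              {"site": "", "latitude": "123.0", "longitude": "-111.1"}]
      for problem in validate_rows(rows):
          print(problem)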

  7. JRC GMO-Amplicons: a collection of nucleic acid sequences related to genetically modified organisms

    PubMed Central

    Petrillo, Mauro; Angers-Loustau, Alexandre; Henriksson, Peter; Bonfini, Laura; Patak, Alex; Kreysa, Joachim

    2015-01-01

    The DNA target sequence is the key element in designing detection methods for genetically modified organisms (GMOs). Unfortunately, this information is frequently lacking, especially for unauthorized GMOs. In addition, patent sequences are generally poorly annotated, buried in complex and extensive documentation and hard to link to the corresponding GM event. Here, we present the JRC GMO-Amplicons, a database of amplicons collected by screening public nucleotide sequence databanks with in silico determination of PCR amplification using reference methods for GMO analysis. The European Union Reference Laboratory for Genetically Modified Food and Feed (EU-RL GMFF) provides these methods in the GMOMETHODS database to support enforcement of EU legislation and GM food/feed control. The JRC GMO-Amplicons database is composed of more than 240 000 amplicons, which can be easily accessed and screened through a web interface. To our knowledge, this is the first attempt at pooling and collecting publicly available sequences related to GMOs in food and feed. The JRC GMO-Amplicons supports control laboratories in the design and assessment of GMO methods, providing, inter alia, in silico prediction of primer specificity and GM target coverage. The new tool can assist laboratories in the analysis of complex issues, such as the detection and identification of unauthorized GMOs. Notably, the JRC GMO-Amplicons database allows the retrieval and characterization of GMO-related sequences included in patent documentation. Finally, it can help annotate poorly described GM sequences and identify new relevant GMO-related sequences in public databases. The JRC GMO-Amplicons is freely accessible through a web-based portal hosted on the EU-RL GMFF website. Database URL: http://gmo-crl.jrc.ec.europa.eu/jrcgmoamplicons/ PMID:26424080

  8. JRC GMO-Amplicons: a collection of nucleic acid sequences related to genetically modified organisms.

    PubMed

    Petrillo, Mauro; Angers-Loustau, Alexandre; Henriksson, Peter; Bonfini, Laura; Patak, Alex; Kreysa, Joachim

    2015-01-01

    The DNA target sequence is the key element in designing detection methods for genetically modified organisms (GMOs). Unfortunately, this information is frequently lacking, especially for unauthorized GMOs. In addition, patent sequences are generally poorly annotated, buried in complex and extensive documentation and hard to link to the corresponding GM event. Here, we present the JRC GMO-Amplicons, a database of amplicons collected by screening public nucleotide sequence databanks with in silico determination of PCR amplification using reference methods for GMO analysis. The European Union Reference Laboratory for Genetically Modified Food and Feed (EU-RL GMFF) provides these methods in the GMOMETHODS database to support enforcement of EU legislation and GM food/feed control. The JRC GMO-Amplicons database is composed of more than 240 000 amplicons, which can be easily accessed and screened through a web interface. To our knowledge, this is the first attempt at pooling and collecting publicly available sequences related to GMOs in food and feed. The JRC GMO-Amplicons supports control laboratories in the design and assessment of GMO methods, providing, inter alia, in silico prediction of primer specificity and GM target coverage. The new tool can assist laboratories in the analysis of complex issues, such as the detection and identification of unauthorized GMOs. Notably, the JRC GMO-Amplicons database allows the retrieval and characterization of GMO-related sequences included in patent documentation. Finally, it can help annotate poorly described GM sequences and identify new relevant GMO-related sequences in public databases. The JRC GMO-Amplicons is freely accessible through a web-based portal hosted on the EU-RL GMFF website. Database URL: http://gmo-crl.jrc.ec.europa.eu/jrcgmoamplicons/. © The Author(s) 2015. Published by Oxford University Press.
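
    The in silico PCR screening described above can be illustrated with a toy amplification check. Real screening tools tolerate primer mismatches and degenerate bases; this exact-match sketch, with an invented template and primers, omits those refinements.

      # Toy in silico PCR in the spirit of the amplicon screening described
      # above: exact primer matching only (real tools allow mismatches).
      def revcomp(seq):
          return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

      def amplify(template, fwd, rev, max_len=3000):
          """Return the amplicon if both primers bind the template, else None."""
          start = template.find(fwd)
          end = template.find(revcomp(rev), start + len(fwd))
          if start == -1 or end == -1:
              return None
          amplicon = template[start:end + len(rev)]
          return amplicon if len(amplicon) <= max_len else None

      template = "TTACGGATTACACGTGGGCCCTTTAAGGCCATGCAATT"
      # rev is given 5'->3' on the minus strand, as primers usually are.
      print(amplify(template, fwd="ACGGATTA", rev="GCATGGCC"))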

  9. Medical Images Remote Consultation

    NASA Astrophysics Data System (ADS)

    Ferraris, Maurizio; Frixione, Paolo; Squarcia, Sandro

    Teleconsultation on digital images among different medical centers is now a reality. The problem to be solved is how to interconnect all the clinical diagnostic devices in a hospital so that physicians and health physicists working in different places can discuss interesting clinical cases while visualizing the same diagnostic images at the same time. By applying World Wide Web technologies, the proposed system can easily be used by people with no specific computer knowledge, with verbose help guiding the user through the right steps of execution. Diagnostic images are retrieved from a relational database or from a standard DICOM-PACS through the DICOM-WWW gateway, which allows the usual web browsers to connect to DICOM applications via the HTTP protocol. The system, which is proposed for radiotherapy, where radiographs play a fundamental role, can easily be adapted to other fields of medical application where remote access to secure data is required.

  10. HPIDB 2.0: a curated database for host–pathogen interactions

    PubMed Central

    Ammari, Mais G.; Gresham, Cathy R.; McCarthy, Fiona M.; Nanduri, Bindu

    2016-01-01

    Identification and analysis of host–pathogen interactions (HPI) is essential to the study of infectious diseases. However, HPI data are sparse in existing molecular interaction databases, especially for agricultural host–pathogen systems. Therefore, resources that annotate, predict and display the HPI that underpin infectious diseases are critical for developing novel intervention strategies. HPIDB 2.0 (http://www.agbase.msstate.edu/hpi/main.html) is a resource for HPI data, and contains 45,238 manually curated entries in the current release. Since the first description of the database in 2010, multiple enhancements were made to HPIDB data and interface services, which are described here. Notably, HPIDB 2.0 now provides targeted biocuration of molecular interaction data. As a member of the International Molecular Exchange consortium, annotations provided by HPIDB 2.0 curators meet community standards to provide detailed contextual experimental information and facilitate data sharing. Moreover, HPIDB 2.0 provides access to rapidly available community annotations that capture minimum molecular interaction information to address immediate researcher needs for HPI network analysis. In addition to curation, HPIDB 2.0 integrates HPI from existing external sources and contains tools to infer additional HPI where annotated data are scarce. Compared to other interaction databases, our data collection approach ensures that HPIDB 2.0 users can access the most comprehensive HPI data from a wide range of pathogens and their hosts (594 pathogen and 70 host species, as of February 2016). Improvements also include enhanced search capacity, addition of Gene Ontology functional information, and implementation of network visualization. The changes made to HPIDB 2.0 content and interface ensure that users, especially agricultural researchers, are able to easily access and analyse high-quality, comprehensive HPI data. All HPIDB 2.0 data are updated regularly, are publicly available for direct download, and are disseminated to other molecular interaction resources. Database URL: http://www.agbase.msstate.edu/hpi/main.html PMID:27374121
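
    The kind of host/pathogen filtering such a resource enables can be sketched in a few lines; the record layout and entries below are invented for illustration and do not reflect HPIDB's actual schema.

      # Illustrative filter over host-pathogen interaction records of the
      # kind HPIDB curates; layout and entries are invented.
      interactions = [
          {"host": "Homo sapiens", "pathogen": "Influenza A virus",
           "host_protein": "P05067", "pathogen_protein": "P03466"},
          {"host": "Gallus gallus", "pathogen": "Influenza A virus",
           "host_protein": "Q90617", "pathogen_protein": "P03485"},
      ]

      def by_host(records, host):
          """Keep only interactions observed in the given host species."""
          return [r for r in records if r["host"] == host]

      for r in by_host(interactions, "Gallus gallus"):
          print(r["host_protein"], "-", r["pathogen_protein"])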

  11. Sharing mutants and experimental information prepublication using FgMutantDb (https://scabusa.org/FgMutantDb).

    PubMed

    Baldwin, Thomas T; Basenko, Evelina; Harb, Omar; Brown, Neil A; Urban, Martin; Hammond-Kosack, Kim E; Bregitzer, Phil P

    2018-06-01

    There is no comprehensive repository for generated mutants of Fusarium graminearum or for the data associated with these mutants. Instead, researchers have relied on several independent and non-integrated databases. FgMutantDb was designed as a simple spreadsheet, accessible globally on the web, that functions as a centralized source of information on F. graminearum mutants. FgMutantDb aids in the maintenance and sharing of mutants within a research community. It will also serve as a platform for disseminating prepublication results as well as negative results that often go unreported. Additionally, the highly curated information on mutants in FgMutantDb will be shared with other databases (FungiDB, Ensembl, PhytoPath, and PHI-base) through updating reports. Here we describe the creation and potential usefulness of FgMutantDb to the F. graminearum research community, and provide a tutorial on its use. This type of database could be easily emulated for other fungal species. Published by Elsevier Inc.

  12. Unified Planetary Coordinates System: A Searchable Database of Geodetic Information

    NASA Technical Reports Server (NTRS)

    Becker, K. J.; Gaddis, L. R.; Soderblom, L. A.; Kirk, R. L.; Archinal, B. A.; Johnson, J. R.; Anderson, J. A.; Bowman-Cisneros, E.; LaVoie, S.; McAuley, M.

    2005-01-01

    Over the past 40 years, an enormous quantity of orbital remote sensing data has been collected for Mars from many missions and instruments. Unfortunately, these datasets currently exist in a wide range of disparate coordinate systems, making it extremely difficult for the scientific community to easily correlate, combine, and compare data from different Mars missions and instruments. As part of our work for the PDS Imaging Node and on behalf of the USGS Astrogeology Team, we are working to solve this problem and to provide the NASA scientific research community with easy access to Mars orbital data in a unified, consistent coordinate system, along with a wide variety of other key geometric variables. The Unified Planetary Coordinates (UPC) system comprises two main elements: (1) a database containing Mars orbital remote sensing data computed using a uniform coordinate system, and (2) a process by which continual maintenance and updates to the contents of the database are performed.

  13. An integrated data-analysis and database system for AMS 14C

    NASA Astrophysics Data System (ADS)

    Kjeldsen, Henrik; Olsen, Jesper; Heinemeier, Jan

    2010-04-01

    AMSdata is the name of a combined database and data-analysis system for AMS 14C and stable-isotope work that has been developed at Aarhus University. The system (1) contains routines for data analysis of AMS and MS data, (2) allows a flexible and accurate description of sample extraction and pretreatment, also when samples are split into several fractions, and (3) keeps track of all measured, calculated and attributed data. The structure of the database is flexible and allows an unlimited number of measurement and pretreatment procedures. The AMS 14C data analysis routine is fairly advanced and flexible, and it can be easily optimized for different kinds of measuring processes. Technically, the system is based on a Microsoft SQL server and includes stored SQL procedures for the data analysis. Microsoft Office Access is used for the (graphical) user interface, and in addition Excel, Word and Origin are exploited for input and output of data, e.g. for plotting data during data analysis.

  14. A database of aerothermal measurements in hypersonic flow for CFD validation

    NASA Technical Reports Server (NTRS)

    Holden, M. S.; Moselle, J. R.

    1992-01-01

    This paper presents an experimental database selected and compiled from aerothermal measurements obtained on basic model configurations on which fundamental flow phenomena could be most easily examined. The experimental studies were conducted in hypersonic flows in 48-inch, 96-inch, and 6-foot shock tunnels. A special computer program was constructed to provide easy access to the measurements in the database as well as the means to plot the measurements and compare them with imported data. The database contains tabulations of model configurations, freestream conditions, and measurements of heat transfer, pressure, and skin friction for each of the studies selected for inclusion. The first segment contains measurements in laminar flow emphasizing shock-wave boundary-layer interaction. In the second segment, measurements in transitional flows over flat plates and cones are given. The third segment comprises measurements in regions of shock-wave/turbulent-boundary-layer interactions. Studies of the effects of surface roughness of nosetips and conical afterbodies are presented in the fourth segment of the database. Detailed measurements in regions of shock/shock boundary layer interaction are contained in the fifth segment. Measurements in regions of wall jet and transpiration cooling are presented in the final two segments.

  15. Open resource metagenomics: a model for sharing metagenomic libraries.

    PubMed

    Neufeld, J D; Engel, K; Cheng, J; Moreno-Hagelsieb, G; Rose, D R; Charles, T C

    2011-11-30

    Both sequence-based and activity-based exploitation of environmental DNA have provided unprecedented access to the genomic content of cultivated and uncultivated microorganisms. Although researchers deposit microbial strains in culture collections and DNA sequences in databases, activity-based metagenomic studies typically only publish sequences from the hits retrieved from specific screens. Physical metagenomic libraries, conceptually similar to entire sequence datasets, are usually not straightforward to obtain by interested parties subsequent to publication. In order to facilitate unrestricted distribution of metagenomic libraries, we propose the adoption of open resource metagenomics, in line with the trend towards open access publishing, and similar to culture- and mutant-strain collections that have been the backbone of traditional microbiology and microbial genetics. The concept of open resource metagenomics includes preparation of physical DNA libraries, preferably in versatile vectors that facilitate screening in a diversity of host organisms, and pooling of clones so that single aliquots containing complete libraries can be easily distributed upon request. Database deposition of associated metadata and sequence data for each library provides researchers with information to select the most appropriate libraries for further research projects. As a starting point, we have established the Canadian MetaMicroBiome Library (CM2BL [1]). The CM2BL is a publicly accessible collection of cosmid libraries containing environmental DNA from soils collected from across Canada, spanning multiple biomes. The libraries were constructed such that the cloned DNA can be easily transferred to Gateway® compliant vectors, facilitating functional screening in virtually any surrogate microbial host for which there are available plasmid vectors. The libraries, which we are placing in the public domain, will be distributed upon request without restriction to members of both the academic research community and industry. This article invites the scientific community to adopt this philosophy of open resource metagenomics to extend the utility of functional metagenomics beyond initial publication, circumventing the need to start from scratch with each new research project.

  16. Open resource metagenomics: a model for sharing metagenomic libraries

    PubMed Central

    Neufeld, J.D.; Engel, K.; Cheng, J.; Moreno-Hagelsieb, G.; Rose, D.R.; Charles, T.C.

    2011-01-01

    Both sequence-based and activity-based exploitation of environmental DNA have provided unprecedented access to the genomic content of cultivated and uncultivated microorganisms. Although researchers deposit microbial strains in culture collections and DNA sequences in databases, activity-based metagenomic studies typically only publish sequences from the hits retrieved from specific screens. Physical metagenomic libraries, conceptually similar to entire sequence datasets, are usually not straightforward to obtain by interested parties subsequent to publication. In order to facilitate unrestricted distribution of metagenomic libraries, we propose the adoption of open resource metagenomics, in line with the trend towards open access publishing, and similar to culture- and mutant-strain collections that have been the backbone of traditional microbiology and microbial genetics. The concept of open resource metagenomics includes preparation of physical DNA libraries, preferably in versatile vectors that facilitate screening in a diversity of host organisms, and pooling of clones so that single aliquots containing complete libraries can be easily distributed upon request. Database deposition of associated metadata and sequence data for each library provides researchers with information to select the most appropriate libraries for further research projects. As a starting point, we have established the Canadian MetaMicroBiome Library (CM2BL [1]). The CM2BL is a publicly accessible collection of cosmid libraries containing environmental DNA from soils collected from across Canada, spanning multiple biomes. The libraries were constructed such that the cloned DNA can be easily transferred to Gateway® compliant vectors, facilitating functional screening in virtually any surrogate microbial host for which there are available plasmid vectors. The libraries, which we are placing in the public domain, will be distributed upon request without restriction to members of both the academic research community and industry. This article invites the scientific community to adopt this philosophy of open resource metagenomics to extend the utility of functional metagenomics beyond initial publication, circumventing the need to start from scratch with each new research project. PMID:22180823

  17. A billion stars, a few million galaxies

    NASA Astrophysics Data System (ADS)

    Humphreys, Roberta M.; Thurmes, Peter M.

    1994-05-01

    The creation of an all-sky computerized astronomical catalog is discussed. The data source for the catalog was the first National Geographic Society-Palomar Observatory Sky Survey (POSS 1). Most of the plates produced in POSS 1 with the Oschin 48-inch Schmidt telescope were recently scanned by a team of astronomers using an automated plate scanner (APS), a high-speed laser scanner designed specifically to digitize information on astronomical photographs. To access the cataloged information easily, a specialized database program called StarBase was written. The expected size of the complete database (the catalog of objects plus the pixel data for the detected images) is 400 gigabytes. Scanning of 644 pairs of blue and red plates, covering the entire sky except for the crowded region within 20 deg of the galactic plane, has been completed.

  18. pyGeno: A Python package for precision medicine and proteogenomics.

    PubMed

    Daouda, Tariq; Perreault, Claude; Lemieux, Sébastien

    2016-01-01

    pyGeno is a Python package mainly intended for precision medicine applications that revolve around genomics and proteomics. It integrates reference sequences and annotations from Ensembl, genomic polymorphisms from the dbSNP database and data from next-gen sequencing into an easy-to-use, memory-efficient and fast framework, therefore allowing the user to easily explore subject-specific genomes and proteomes. Compared to a standalone program, pyGeno gives the user access to the complete expressivity of Python, a general programming language. Its range of application therefore encompasses both short scripts and large-scale genome-wide studies.

  19. pyGeno: A Python package for precision medicine and proteogenomics

    PubMed Central

    Daouda, Tariq; Perreault, Claude; Lemieux, Sébastien

    2016-01-01

    pyGeno is a Python package mainly intended for precision medicine applications that revolve around genomics and proteomics. It integrates reference sequences and annotations from Ensembl, genomic polymorphisms from the dbSNP database and data from next-gen sequencing into an easy-to-use, memory-efficient and fast framework, therefore allowing the user to easily explore subject-specific genomes and proteomes. Compared to a standalone program, pyGeno gives the user access to the complete expressivity of Python, a general programming language. Its range of application therefore encompasses both short scripts and large-scale genome-wide studies. PMID:27785359
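
    The idea of a subject-specific genome can be made concrete with a small sketch: applying a subject's polymorphisms to a reference sequence. This is a conceptual illustration only, not pyGeno's actual API; the sequence and variant positions are invented.

      # Conceptual sketch of a subject-specific sequence (not the pyGeno
      # API): apply a subject's SNPs to a reference sequence.
      def personalize(reference, snps):
          """snps maps 0-based positions to alternate bases."""
          seq = list(reference)
          for pos, alt in snps.items():
              seq[pos] = alt
          return "".join(seq)

      reference = "ATGGCGTACCTGA"
      subject_snps = {3: "A", 9: "T"}   # hypothetical dbSNP-derived variants
      print(personalize(reference, subject_snps))  # ATGACGTACTTGA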

  20. Workflow based framework for life science informatics.

    PubMed

    Tiwari, Abhishek; Sekhar, Arvind K T

    2007-10-01

    Workflow technology is a generic mechanism to integrate diverse types of available resources (databases, servers, software applications and different services) which facilitate knowledge exchange within traditionally divergent fields such as molecular biology, clinical research, computational science, physics, chemistry and statistics. Researchers can easily incorporate and access diverse, distributed tools and data to develop their own research protocols for scientific analysis. Application of workflow technology has been reported in areas like drug discovery, genomics, large-scale gene expression analysis, proteomics, and system biology. In this article, we have discussed the existing workflow systems and the trends in applications of workflow based systems.

  1. Antidepressants for depressive disorder in children and adolescents: a database of randomised controlled trials.

    PubMed

    Zhang, Yuqing; Zhou, Xinyu; Pu, Juncai; Zhang, Hanping; Yang, Lining; Liu, Lanxiang; Zhou, Chanjuan; Yuan, Shuai; Jiang, Xiaofeng; Xie, Peng

    2018-05-31

    In recent years, whether, when and how to use antidepressants to treat depressive disorder in children and adolescents has been hotly debated, and relevant evidence on this topic has increased rapidly. In this paper, we present the construction and content of a database of randomised controlled trials of antidepressants to treat depressive disorder in children and adolescents. This database can be freely accessed via our website and will be regularly updated. Major bibliographic databases (PubMed, the Cochrane Library, Web of Science, Embase, CINAHL, PsycINFO and LiLACS), international trial registers and regulatory agencies' websites were systematically searched for published and unpublished studies up to April 30, 2017. We included randomised controlled trials in which the efficacy or tolerability of any oral antidepressant was compared with that of a control group or any other treatment. In total, 7377 citations from bibliographic databases and 3289 from international trial registers and regulatory agencies' websites were identified. Of these, 53 trials were eligible for inclusion in the final database. Selected data were extracted from each study, including characteristics of the participants (the study population, setting, diagnostic criteria, type of depression, age, sex, and comorbidity), characteristics of the treatment conditions (the treatment conditions, general information, and details of pharmacotherapy and psychotherapy) and study characteristics (the sponsor, country, number of sites, blinding method, sample size, treatment duration, depression scales, other scales, and primary outcome measure used, and side-effect monitoring method). Moreover, the risk of bias for each trial was assessed. This database provides information on nearly all randomised controlled trials of antidepressants in children and adolescents. By using this database, researchers can improve research efficiency, avoid inadvertent errors and easily focus on the targeted subgroups in which they are interested. Authors of subsequent reviews can use this database to help ensure that their reviews are comprehensive, rather than relying solely on the data it contains. We expect this database to help promote research on evidence-based practice in the treatment of depressive disorder in children and adolescents. The database can be freely accessed at our website: http://xiepengteam.cn/research/evidence-based-medicine .

  2. Generation of signature databases with fast codes

    NASA Astrophysics Data System (ADS)

    Bradford, Robert A.; Woodling, Arthur E.; Brazzell, James S.

    1990-09-01

    Using the FASTSIG signature code to generate optical signature databases for the Ground-based Surveillance and Tracking System (GSTS) Program has improved the efficiency of the database generation process. The goal of the current GSTS database is to provide standardized, threat-representative target signatures that can easily be used for acquisition and track studies, discrimination algorithm development, and system simulations. Large databases, with as many as eight interpolation parameters, are required to maintain the fidelity demands of discrimination and to generalize their application to other strategic systems. As the need increases for quick availability of long wave infrared (LWIR) target signatures for an evolving design-to-threat, FASTSIG has become a database generation alternative to the industry-standard Optical Signatures Code (OSC). FASTSIG, developed in 1985 to meet the unique strategic-systems demands imposed by the discrimination function, has the significant advantage of being a faster-running signature code than the OSC, typically requiring two percent of the cpu time. It uses analytical approximations to model axisymmetric targets, with the fidelity required for discrimination analysis. Access to the signature database is accomplished through use of the waveband integration and interpolation software, INTEG and SIGNAT. This paper gives details of this procedure as well as sample interpolated signatures, and also covers sample verification by comparison to the OSC, in order to establish the fidelity of the FASTSIG-generated database.
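
    The interpolation step such a database relies on can be sketched with a standard grid interpolator: signatures precomputed on a parameter grid are queried at off-grid points. The two parameters and values below are invented, and this is not the actual INTEG/SIGNAT software, which handles up to eight interpolation parameters.

      # Sketch of querying a gridded signature database at off-grid points.
      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      aspect_angles = np.array([0.0, 45.0, 90.0])   # degrees (invented axis)
      temperatures = np.array([250.0, 300.0])       # kelvin (invented axis)
      intensity = np.array([[1.0, 1.4],             # signature value at each
                            [0.8, 1.1],             # (angle, temperature) node
                            [0.5, 0.9]])

      interp = RegularGridInterpolator((aspect_angles, temperatures), intensity)
      print(interp([[30.0, 280.0]]))   # signature at an off-grid point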

  3. SNPversity: a web-based tool for visualizing diversity

    PubMed Central

    Schott, David A; Vinnakota, Abhinav G; Portwood, John L; Andorf, Carson M

    2018-01-01

    Many stand-alone desktop software suites exist to visualize single nucleotide polymorphism (SNP) diversity, but web-based software that can be easily implemented and used for biological databases is absent. SNPversity was created to answer this need by building an open-source visualization tool that can be implemented on a Unix-like machine and served through a web browser accessible worldwide. SNPversity consists of an HDF5 database back-end for SNPs, a data exchange layer powered by TASSEL libraries that represents data in JSON format, and an interface layer using PHP to visualize SNP information. SNPversity displays data in real time through a web browser in grids that are color-coded according to a given SNP’s allelic status and mutational state. SNPversity is currently available at MaizeGDB, the maize community’s database, and will soon be available at GrainGenes, the clade-oriented database for Triticeae and Avena species, including wheat, barley, rye, and oat. The code and documentation are uploaded to GitHub, and they are freely available to the public. We expect that the tool will be highly useful for other biological databases with a similar need to display SNP diversity through their web interfaces. Database URL: https://www.maizegdb.org/snpversity PMID:29688387
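
    The HDF5-to-JSON path implied by the layered design can be sketched as follows; the dataset names and genotype encoding are assumptions for illustration, not SNPversity's actual storage format.

      # Sketch of an HDF5 genotype store serialized to JSON for a web front
      # end, in the spirit of SNPversity's back-end/exchange layers.
      import json
      import h5py
      import numpy as np

      # Write a tiny genotype matrix: rows = samples, columns = SNP sites,
      # values encode alleles (0 = ref, 1 = alt, -1 = missing).
      with h5py.File("snps_demo.h5", "w") as f:
          f.create_dataset("genotypes", data=np.array([[0, 1, -1],
                                                       [1, 1, 0]], dtype=np.int8))
          f.create_dataset("samples", data=np.array([b"B73", b"Mo17"]))

      # Read back and serialize one sample's calls as JSON.
      with h5py.File("snps_demo.h5", "r") as f:
          payload = {"sample": f["samples"][0].decode(),
                     "calls": f["genotypes"][0].tolist()}
      print(json.dumps(payload))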

  4. Development of a database system for mapping insertional mutations onto the mouse genome with large-scale experimental data

    PubMed Central

    2009-01-01

    Background Insertional mutagenesis is an effective method for functional genomic studies in various organisms. It can rapidly generate easily tractable mutations. A large-scale insertional mutagenesis with the piggyBac (PB) transposon is currently being performed in mice at the Institute of Developmental Biology and Molecular Medicine (IDM), Fudan University in Shanghai, China. This project is carried out via collaborations among multiple groups overseeing interconnected experimental steps and generates a large volume of experimental data continuously. Therefore, the project calls for an efficient database system for recording, management, statistical analysis, and information exchange. Results This paper presents a database application called MP-PBmice (insertional mutation mapping system of PB Mutagenesis Information Center), which is developed to serve the on-going large-scale PB insertional mutagenesis project. The lightweight enterprise-level development framework Struts-Spring-Hibernate is used to ensure constructive and flexible support to the application. The MP-PBmice database system has three major features: strict access control, efficient workflow control, and good expandability. It supports the collaboration among different groups that enter data and exchange information on a daily basis, and is capable of providing real-time progress reports for the whole project. MP-PBmice can be easily adapted for other large-scale insertional mutation mapping projects, and the source code of this software is freely available at http://www.idmshanghai.cn/PBmice. Conclusion MP-PBmice is a web-based application for large-scale insertional mutation mapping onto the mouse genome, implemented with the widely used framework Struts-Spring-Hibernate. This system is already in use by the on-going genome-wide PB insertional mutation mapping project at IDM, Fudan University. PMID:19958505

  5. Database for High Throughput Screening Hits (dHITS): a simple tool to retrieve gene specific phenotypes from systematic screens done in yeast.

    PubMed

    Chuartzman, Silvia G; Schuldiner, Maya

    2018-03-25

    In the last decade, several collections of Saccharomyces cerevisiae yeast strains have been created. In these collections, every gene is modified in a similar manner, such as by a deletion or the addition of a protein tag. Such libraries have enabled a diversity of systematic screens, giving rise to large amounts of information regarding gene functions. However, papers describing such screens often focus on a single gene or a small set of genes, and all other loci affecting the phenotype of choice ('hits') are only mentioned in tables that are provided as supplementary material and are often hard to retrieve or search. To help unify and make such data accessible, we have created a Database of High Throughput Screening Hits (dHITS). The dHITS database enables information to be obtained about the screens in which genes of interest were found, as well as the other genes that came up in each screen - all in a readily accessible and downloadable format. The ability to query large lists of genes at the same time provides a platform to easily analyse hits obtained from transcriptional analyses or other screens. We hope that this platform will serve as a tool to facilitate the investigation of protein functions for the yeast community. © 2018 The Authors Yeast Published by John Wiley & Sons Ltd.
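
    The gene-to-screens lookup at the heart of such a resource can be sketched with a reverse index; the screen names and hit lists below are invented for illustration.

      # Minimal sketch of a dHITS-style lookup: given screens and their hit
      # lists, query many genes at once.
      from collections import defaultdict

      screens = {
          "ER stress deletion screen": {"HAC1", "IRE1", "GET3"},
          "GFP mislocalization screen": {"GET3", "SEC22"},
      }

      # Invert screen -> hits into gene -> screens for fast batch queries.
      gene_to_screens = defaultdict(list)
      for screen, hits in screens.items():
          for gene in hits:
              gene_to_screens[gene].append(screen)

      query = ["GET3", "HAC1", "YFG1"]
      for gene in query:
          print(gene, "->", gene_to_screens.get(gene, ["no recorded hits"]))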

  6. The virtual microscopy database-sharing digital microscope images for research and education.

    PubMed

    Lee, Lisa M J; Goldman, Haviva M; Hortsch, Michael

    2018-02-14

    Over the last 20 years, virtual microscopy has become the predominant modus of teaching the structural organization of cells, tissues, and organs, replacing the use of optical microscopes and glass slides in a traditional histology or pathology laboratory setting. Although virtual microscopy image files can easily be duplicated, creating them requires not only quality histological glass slides but also an expensive whole slide microscopic scanner and massive data storage devices. These resources are not available to all educators and researchers, especially at new institutions in developing countries. This leaves many schools without access to virtual microscopy resources. The Virtual Microscopy Database (VMD) is a new resource established to address this problem. It is a virtual image file-sharing website that allows researchers and educators easy access to a large repository of virtual histology and pathology image files. With the support of the American Association of Anatomists (Bethesda, MD) and MBF Bioscience Inc. (Williston, VT), registration and use of the VMD are currently free of charge. However, the VMD site is restricted to faculty and staff of research and educational institutions. Virtual Microscopy Database users can upload their own collection of virtual slide files, as well as view and download image files deposited by other VMD clients for their own non-profit educational and research purposes. © 2018 American Association of Anatomists.

  7. ArrayBridge: Interweaving declarative array processing with high-performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xing, Haoyuan; Floratos, Sofoklis; Blanas, Spyros

    Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats, that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation in NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits statistically indistinguishable performance and I/O scalability compared to the native SciDB storage engine.
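
    The version-deduplication idea can be sketched with chunk-level hashing: blocks that hash identically across versions need not be stored twice. The chunking scheme below is illustrative only, not ArrayBridge's actual mechanism.

      # Sketch of chunk-level deduplication between array versions.
      import hashlib
      import numpy as np

      def chunk_digests(arr, chunk=2):
          """Hash fixed-size blocks so unchanged blocks can be shared."""
          return [hashlib.sha256(arr[i:i + chunk].tobytes()).hexdigest()
                  for i in range(0, len(arr), chunk)]

      v1 = np.arange(8, dtype=np.float64)
      v2 = v1.copy()
      v2[5] = 99.0  # one edit between versions

      shared = sum(a == b for a, b in zip(chunk_digests(v1), chunk_digests(v2)))
      print(f"{shared} of {len(chunk_digests(v1))} chunks identical; "
            "only differing chunks need new storage")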

  8. International Space Station Mechanisms and Maintenance Flight Control Documentation and Training Development

    NASA Technical Reports Server (NTRS)

    Daugherty, Colin C.

    2010-01-01

    International Space Station (ISS) crew and flight controller training documentation is used to aid in training operations. The Generic Simulations References SharePoint (Gen Sim) site is a database used as an aid during flight simulations. The Gen Sim site is used to make individual mission segment timelines, data, and flight information easily accessible to instructors. The Waste and Hygiene Compartment (WHC) training schematic includes simple and complex fluid schematics, as well as overall hardware locations. It is used as a teaching aid during WHC lessons for both ISS crew and flight controllers. ISS flight control documentation is used to support all aspects of ISS mission operations. The Quick Look Database and Consolidated Tool Page are imagery-based references used in real-time to help the Operations Support Officer (OSO) find data faster and improve discussions with the Flight Director and Capsule Communicator (CAPCOM). A Quick Look page was created for the Permanent Multipurpose Module (PMM) by locating photos of the module interior, labeling specific hardware, and organizing them in schematic form to match the layout of the PMM interior. A Tool Page was created for the Maintenance Work Area (MWA) by gathering images, detailed drawings, safety information, procedures, certifications, demonstration videos, and general facts of each MWA component and displaying them in an easily accessible and consistent format. Participation in ISS mechanisms and maintenance lessons, mission simulation On-the-Job Training (OJT), and real-time flight OJT was used as an opportunity to train for day-to-day operations as an OSO, as well as learn how to effectively respond to failures and emergencies during mission simulations and real-time flight operations.

  9. Enhancing UCSF Chimera through web services

    PubMed Central

    Huang, Conrad C.; Meng, Elaine C.; Morris, John H.; Pettersen, Eric F.; Ferrin, Thomas E.

    2014-01-01

    Integrating access to web services with desktop applications allows for an expanded set of application features, including performing computationally intensive tasks and convenient searches of databases. We describe how we have enhanced UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), a program for the interactive visualization and analysis of molecular structures and related data, through the addition of several web services (http://www.rbvi.ucsf.edu/chimera/docs/webservices.html). By streamlining access to web services, including the entire job submission, monitoring and retrieval process, Chimera makes it simpler for users to focus on their science projects rather than data manipulation. Chimera uses Opal, a toolkit for wrapping scientific applications as web services, to provide scalable and transparent access to several popular software packages. We illustrate Chimera's use of web services with an example workflow that interleaves use of these services with interactive manipulation of molecular sequences and structures, and we provide an example Python program to demonstrate how easily Opal-based web services can be accessed from within an application. Web server availability: http://webservices.rbvi.ucsf.edu/opal2/dashboard?command=serviceList. PMID:24861624
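
    The submit/monitor/retrieve pattern that Chimera streamlines can be sketched generically. The endpoint URLs and JSON fields below are hypothetical placeholders and do not correspond to the actual Opal SOAP interface.

      # Generic job-submission pattern of the kind web-service wrappers
      # provide (submit, poll, fetch); the service endpoint is hypothetical.
      import time
      import requests

      BASE = "https://example.org/opal-like"   # placeholder service endpoint

      job = requests.post(f"{BASE}/jobs", json={"tool": "blast",
                                                "sequence": "MKTAYIAKQR"}).json()
      job_id = job["id"]

      while True:
          status = requests.get(f"{BASE}/jobs/{job_id}").json()["state"]
          if status in ("FINISHED", "FAILED"):
              break
          time.sleep(5)  # poll politely rather than hammering the server

      if status == "FINISHED":
          result = requests.get(f"{BASE}/jobs/{job_id}/output").text
          print(result[:200])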

  10. Vascular Access Tracking System: a Web-Based Clinical Tracking Tool for Identifying Catheter Related Blood Stream Infections in Interventional Radiology Placed Central Venous Catheters.

    PubMed

    Morrison, James; Kaufman, John

    2016-12-01

    Vascular access is invaluable in the treatment of hospitalized patients. Central venous catheters provide a durable and long-term solution while saving patients from repeated needle sticks for peripheral IVs and blood draws. The initial catheter placement procedure and long-term catheter usage place patients at risk for infection. The goal of this project was to develop a system to track and evaluate central line-associated blood stream infections related to interventional radiology placement of central venous catheters. A customized web-based clinical database was developed via open-source tools to provide a dashboard for data mining and analysis of the catheter placement and infection information. Preliminary results were gathered over a 4-month period confirming the utility of the system. The tools and methodology employed to develop the vascular access tracking system could be easily tailored to other clinical scenarios to assist in quality control and improvement programs.

  11. Usability evaluation of user interface of thesis title review system

    NASA Astrophysics Data System (ADS)

    Tri, Y.; Erna, A.; Gellysa, U.

    2018-03-01

    Presenting programs through a user interface that can be accessed online via a website clearly provides benefits: users can easily access the programs they need. There are usability values that serve as a benchmark for the success of a user-accessible program, i.e., efficiency, effectiveness, and convenience. These usability values also guide the development of the program toward better use. Therefore, a usability evaluation was performed on the thesis title review program to be implemented at STT Dumai. It aims to identify which aspects are not yet satisfactory and need to be improved to increase the performance and utilization of the program. The usability evaluation was measured using smartPLS software. The data used were respondents' questionnaire answers, which included questions about their experience using the program. The review of the thesis title program implemented at STT Dumai yielded an efficiency value of 22.615, an effectiveness value of 20.612, and a satisfaction value of 33.177.

  12. MedXViewer: an extensible web-enabled software package for medical imaging

    NASA Astrophysics Data System (ADS)

    Looney, P. T.; Young, K. C.; Mackenzie, Alistair; Halling-Brown, Mark D.

    2014-03-01

    MedXViewer (Medical eXtensible Viewer) is an application designed to allow workstation-independent, PACS-less viewing and interaction with anonymised medical images (e.g. observer studies). The application was initially implemented for use in digital mammography and tomosynthesis, but the flexible software design allows it to be easily extended to other imaging modalities. Regions of interest can be identified by a user, and any associated information about a mark, an image or a study can be added. The questions and settings can be easily configured depending on the needs of the research, allowing both ROC and FROC studies to be performed. The extensible nature of the design allows other functionality and hanging protocols to be made available for each study. Panning, windowing, zooming and moving through slices are all available, while modality-specific features can be easily enabled, e.g. quadrant zooming in mammographic studies. MedXViewer can integrate with a web-based image database, allowing results and images to be stored centrally. The software and images can be downloaded remotely from this centralised data store. Alternatively, the software can run without a network connection, in which case the images and results can be encrypted and stored locally on a machine or external drive. Due to the advanced workstation-style functionality, the simple deployment on heterogeneous systems over the internet without a requirement for administrative access, and the ability to utilise a centralised database, MedXViewer has been used for running remote paperless observer studies and is capable of providing a training infrastructure and co-ordinating remote collaborative viewing sessions (e.g. cancer reviews, interesting cases).

  13. MEGADOCK-Web: an integrated database of high-throughput structure-based protein-protein interaction predictions.

    PubMed

    Hayashi, Takanori; Matsuzaki, Yuri; Yanagisawa, Keisuke; Ohue, Masahito; Akiyama, Yutaka

    2018-05-08

    Protein-protein interactions (PPIs) play several roles in living cells, and computational PPI prediction is a major focus of many researchers. The three-dimensional (3D) structure and binding surface are important for the design of PPI inhibitors. Therefore, rigid-body protein-protein docking calculations for two protein structures are expected to allow elucidation of PPIs different from known complexes in terms of 3D structures, because known PPI information is not explicitly required. We have developed rapid PPI prediction software based on protein-protein docking, called MEGADOCK. In order to fully utilize the benefits of computational PPI predictions, it is necessary to construct a comprehensive database to gather prediction results and their predicted 3D complex structures and to make them easily accessible. Although several databases exist that provide predicted PPIs, the previous databases do not contain a sufficient number of entries for the purpose of discovering novel PPIs. In this study, we constructed an integrated database of MEGADOCK PPI predictions, named MEGADOCK-Web. MEGADOCK-Web provides more than 10 times as many PPI predictions as previous databases and enables users to conduct PPI predictions that cannot be found in conventional PPI prediction databases. In MEGADOCK-Web, there are 7528 protein chains and 28,331,628 predicted PPIs from all possible combinations of those proteins. Each protein structure is annotated with PDB ID, chain ID, UniProt AC, related KEGG pathway IDs, and known PPI pairs. Additionally, MEGADOCK-Web provides four powerful functions: 1) searching precalculated PPI predictions, 2) providing annotations for each predicted protein pair with an experimentally known PPI, 3) visualizing candidates that may interact with the query protein on biochemical pathways, and 4) visualizing predicted complex structures through a 3D molecular viewer. MEGADOCK-Web provides a huge amount of comprehensive PPI predictions based on docking calculations with biochemical pathways and enables users to easily and quickly assess PPI feasibilities by archiving PPI predictions. MEGADOCK-Web also promotes the discovery of new PPIs and protein functions and is freely available for use at http://www.bi.cs.titech.ac.jp/megadock-web/.

  14. A Scientific Collaboration Tool Built on the Facebook Platform

    PubMed Central

    Bedrick, Steven D.; Sittig, Dean F.

    2008-01-01

    We describe an application (“Medline Publications”) written for the Facebook platform that allows users to maintain and publish a list of their own Medline-indexed publications, as well as easily access their contacts’ lists. The system is semi-automatic in that it interfaces directly with the National Library of Medicine’s PubMed database to find and retrieve citation data. Furthermore, the system can present the user with sets of other users who have similar publication profiles. As of July 2008, Medline Publications has attracted approximately 759 users, 624 of whom have listed a total of 5,193 unique publications. PMID:18999247
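
    The citation retrieval the application performs against PubMed can be reproduced with NCBI's public E-utilities; the author query below is merely an example, and the script queries NCBI's live servers.

      # Fetch recent PubMed citations for an author via NCBI E-utilities.
      import requests

      EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

      # esearch returns matching PubMed IDs for a query.
      ids = requests.get(f"{EUTILS}/esearch.fcgi",
                         params={"db": "pubmed", "term": "Sittig DF[Author]",
                                 "retmode": "json", "retmax": 5}
                         ).json()["esearchresult"]["idlist"]

      # esummary returns citation metadata for those IDs.
      summaries = requests.get(f"{EUTILS}/esummary.fcgi",
                               params={"db": "pubmed", "id": ",".join(ids),
                                       "retmode": "json"}).json()

      for pmid in ids:
          print(pmid, summaries["result"][pmid]["title"])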

  15. Introduction to an Open Source Internet-Based Testing Program for Medical Student Examinations

    PubMed Central

    2009-01-01

    The author developed a freely available open-source internet-based testing program for medical examinations. PHP and JavaScript were used as the programming languages and PostgreSQL as the database management system, on an Apache web server and Linux operating system. The system approach was that a super user inputs the items, each school administrator inputs the examinees' information, and examinees access the system. The examinee's score is displayed immediately after the examination, with item analysis. The set-up of the system, beginning with installation, is described. This may help medical professors to easily adopt an internet-based testing system for medical education. PMID:20046457

  16. Introduction to an open source internet-based testing program for medical student examinations.

    PubMed

    Lee, Yoon-Hwan

    2009-12-20

    The author developed a freely available open-source internet-based testing program for medical examinations. PHP and JavaScript were used as the programming languages and PostgreSQL as the database management system, on an Apache web server and Linux operating system. The system approach was that a super user inputs the items, each school administrator inputs the examinees' information, and examinees access the system. The examinee's score is displayed immediately after the examination, with item analysis. The set-up of the system, beginning with installation, is described. This may help medical professors to easily adopt an internet-based testing system for medical education.

  17. Design and implementation of the NPOI database and website

    NASA Astrophysics Data System (ADS)

    Newman, K.; Jorgensen, A. M.; Landavazo, M.; Sun, B.; Hutter, D. J.; Armstrong, J. T.; Mozurkewich, David; Elias, N.; van Belle, G. T.; Schmitt, H. R.; Baines, E. K.

    2014-07-01

    The Navy Precision Optical Interferometer (NPOI) has been recording astronomical observations for nearly two decades, with hundreds of thousands of individual observations recorded to date for a total data volume of many terabytes. To make maximum use of the NPOI data it is necessary to organize them in an easily searchable manner and to be able to extract essential diagnostic information from the data so that users can quickly gauge data quality and suitability for a specific science investigation. This motivates the creation of a comprehensive database of observation metadata as well as, at least, reduced data products. The NPOI database is implemented in MySQL using standard database tools and interfaces. The use of standard database tools allows us to focus on top-level database and interface implementation and to take advantage of standard features such as backup, remote access, mirroring, and complex queries, which would otherwise be time-consuming to implement. A website was created to give scientists a user-friendly interface for searching the database. It allows users to select the metadata to search on and to decide how and which results are displayed. This streamlines searches, making it easier and quicker for scientists to find the information they are looking for. The website supports multiple browsers and devices. In this paper we present the design of the NPOI database and website, and give examples of their use.

  18. SHERPA: Towards better accessibility of earthquake rupture archives

    NASA Astrophysics Data System (ADS)

    Théo, Yann; Sémo, Emmanuel; Mazet Roux, Gilles; Bossu, Rémy; Kamb, Linus; Frobert, Laurent

    2010-05-01

    Large crustal earthquakes are the subject of extensive field surveys aimed at better understanding the rupture process and its tectonic consequences. Shortly after an earthquake, pictures of the rupture can be viewed quite easily on the web. However, once the event gets old, the pictures disappear and can no longer be viewed, a heavy loss for researchers looking for information. Even when they remain available, they are tied to a given survey, and comparing the same phenomenon across different earthquakes cannot easily be done. SHERPA (Sharing of Earthquake Rupture Pictures Archive), a web application developed at EMSC, aims to fill this void. It makes pictures of past earthquakes available and shares resources while strictly protecting the authors' copyright and keeping the authors in charge of diffusion, to avoid unfair or inappropriate use of the photos. Our application is targeted at scientists and scientists only. Pictures uploaded to SHERPA are marked with a watermark reading "NOT FOR PUBLICATION" spread across the image, together with the author's name. Authors, and authors only, can remove this mark should they want their work to enter the public domain. If users see a picture they would like to use, they can put it in their cart. After validation of the cart, a request (stating the name and purpose of the requester) is sent to the author(s) asking them to share the picture(s). If an author accepts the request, the requester is given authorization to access a protected folder and download the unmarked picture. Without the author's explicit consent, no picture will ever be accessible to anyone. We want to state this point very clearly, because ownership and copyright protection are essential to the SHERPA project. Uploading pictures is quick and easy: once registered, you can simply upload pictures, which can then be geolocated using a Google map embedded in the web site. If the camera is equipped with a GPS, the software automatically retrieves the location from the photo's EXIF data. Pictures can be linked to an earthquake and described through a system of tags, which makes them searchable in the database. Once uploaded, pictures become available for browsing by any visitor. Using the tags, visitors can search the database for pictures of the same phenomenon across several events, or extract those from a given region or a certain type of faulting. The selected pictures can be viewed on a map and in a carousel. By providing such a service, we hope to contribute to better accessibility of the pictures taken during field surveys and thus to improve earthquake documentation, which remains a key element of our field of research. http://sherpa.emsc-csem.org/
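
    The automatic geolocation step reads the camera's GPS tags from the photo's EXIF metadata. Below is a minimal sketch of that step in Python using the Pillow imaging library (assuming a reasonably recent Pillow, where EXIF rational values convert cleanly with float()); it illustrates the technique and is not SHERPA's own code.

        from PIL import Image
        from PIL.ExifTags import TAGS, GPSTAGS

        def gps_from_exif(path):
            """Return (lat, lon) in decimal degrees from a photo's EXIF GPS tags,
            or None if the camera recorded no GPSInfo block."""
            exif = Image.open(path)._getexif() or {}
            named = {TAGS.get(tag, tag): value for tag, value in exif.items()}
            gps_raw = named.get("GPSInfo")
            if not gps_raw:
                return None
            gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_raw.items()}

            def to_degrees(dms, ref):
                d, m, s = (float(x) for x in dms)   # degrees, minutes, seconds
                deg = d + m / 60 + s / 3600
                return -deg if ref in ("S", "W") else deg

            return (to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
                    to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

        # Hypothetical file name, for illustration only.
        print(gps_from_exif("rupture_photo.jpg"))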

  19. PS1-41: Just Add Data: Implementing an Event-Based Data Model for Clinical Trial Tracking

    PubMed Central

    Fuller, Sharon; Carrell, David; Pardee, Roy

    2012-01-01

    Background/Aims Clinical research trials often have similar fundamental tracking needs, despite being quite variable in their specific logic and activities. A model tracking database that can be quickly adapted by a variety of studies has the potential to achieve significant efficiencies in database development and maintenance. Methods Over the course of several different clinical trials, we have developed a database model that is highly adaptable to a variety of projects. Rather than hard-coding each specific event that might occur in a trial, along with its logical consequences, this model considers each event and its parameters to be a data record in its own right. Each event may have related variables (metadata) describing its prerequisites, subsequent events due, associated mailings, or events that it overrides. The metadata for each event are stored in the same record as the event name. When changes are made to the study protocol, no structural changes to the database are needed; one has only to add or edit events and their metadata. Changes in the event metadata automatically determine any related logic changes. In addition to streamlining application code, this model simplifies communication between the programmer and other team members. Database requirements can be phrased as changes to the underlying data, rather than to the application code. The project team can review a single report of events and metadata and easily see where changes might be needed. In addition to benefiting from streamlined code, the front-end database application can also implement useful standard features such as automated mail merges and to-do lists. Results The event-based data model has proven to be robust, adaptable, and user-friendly in a variety of study contexts. We have chosen to implement it as a SQL Server back end with a distributed Access front end. Interested readers may request a copy of the Access front end and scripts for creating the back-end database. Discussion An event-based database with a consistent, robust set of features has the potential to significantly reduce development time and maintenance expense for clinical trial tracking databases.
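
    To make the event-as-data idea concrete, here is a minimal sketch in Python: each event is a record whose metadata (prerequisites, follow-up due) drives the tracking logic, so protocol changes are data edits rather than code changes. All event names and intervals are hypothetical, and this is not the authors' SQL Server/Access implementation.

        from datetime import date, timedelta

        # Each event row carries its own logic as metadata: required predecessor
        # events and the follow-up it triggers (and in how many days). Editing
        # this table changes the protocol; no application code changes.
        EVENTS = {
            "consent_signed":  {"requires": [], "next": ("baseline_survey", 7)},
            "baseline_survey": {"requires": ["consent_signed"], "next": ("followup_call", 30)},
            "withdrawal":      {"requires": ["consent_signed"], "next": None},
        }

        def record_event(history, name, on):
            """Append an event if its prerequisites are met; return the to-do item it generates."""
            meta = EVENTS[name]
            done = {event for event, _ in history}
            missing = [p for p in meta["requires"] if p not in done]
            if missing:
                raise ValueError(f"{name} blocked; missing prerequisites: {missing}")
            history.append((name, on))
            if meta["next"]:
                due_event, days = meta["next"]
                return (due_event, on + timedelta(days=days))
            return None

        history = []
        print(record_event(history, "consent_signed", date(2012, 1, 2)))
        # -> ('baseline_survey', datetime.date(2012, 1, 9))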

  20. A SPDS Node to Support the Systematic Interpretation of Cosmic Ray Data

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The purpose of this project was to establish and maintain a Space Physics Data System (SPDS) node that supports the analysis and interpretation of current and future galactic cosmic ray (GCR) measurements by (1) providing on-line databases relevant to GCR propagation studies; (2) providing other on-line services, such as anonymous FTP access, mail list service and pointers to e-mail address books, to support the cosmic ray community; (3) providing a mechanism for those in the community who might wish to submit similar contributions for public access; (4) maintaining the node to assure that the databases remain current; and (5) investigating other possibilities, such as CD-ROM, for public dissemination of the data products. Shortly after the original grant to support these activities was established at Louisiana State University, a detailed study of alternate choices for the node hardware was initiated. The chosen hardware was an Apple Workgroup Server 9150/120 consisting of a 120 MHz PowerPC 601 processor, 32 MB of memory, two 1 GB disks and one 2 GB disk. This hardware was ordered and installed and has been operating reliably ever since. A preliminary version of the database server was available during the first-year effort and was used as part of the very successful SPDS demonstration during the International Cosmic Ray Conference in Rome, Italy. For this server version we were able to set up the web (HTML) and anonymous FTP server software, develop a Web page structure that can be easily modified to include new items, provide an on-line database of charge-changing total cross sections, include the cross-section prediction software of Silberberg & Tsao as well as Webber, Kish and Schrier for download access, and provide an on-line bibliography of the cross-section measurement references by the Transport Collaboration. The preliminary version of this SPDS Cosmic Ray node was examined by members of the C&H SPDS committee, and the returned comments were used to refine the implementation.

  1. A geodata warehouse: Using denormalisation techniques as a tool for delivering spatially enabled integrated geological information to geologists

    NASA Astrophysics Data System (ADS)

    Kingdon, Andrew; Nayembil, Martin L.; Richardson, Anne E.; Smith, A. Graham

    2016-11-01

    New requirements to understand geological properties in three dimensions have led to the development of PropBase, a data structure and a set of delivery tools built to meet them. At the BGS, relational database management systems (RDBMS) have facilitated effective data management using normalised, subject-based database designs with business rules in a centralised, vocabulary-controlled architecture. These have delivered effective data storage in a secure environment. However, the isolated subject-oriented designs prevented efficient cross-domain querying of datasets, and the tools provided often did not enable effective data discovery, as they struggled to resolve the complex underlying normalised structures and delivered poor data access speeds. Users developed bespoke access tools to structures they did not fully understand, sometimes obtaining incorrect results. BGS has therefore developed PropBase, a generic denormalised data structure within an RDBMS for storing property data, to facilitate rapid and standardised data discovery and access. It incorporates 2D and 3D physical and chemical property data with associated metadata, and includes scripts to populate and synchronise the layer with its data sources through structured input and transcription standards. A core component of the architecture is an optimised query object that delivers geoscience information from a structure equivalent to a data warehouse, enabling optimised query performance and delivery of data in multiple standardised formats through a web discovery tool. Semantic interoperability is enforced through vocabularies combined from all data sources, facilitating searches of related terms. PropBase holds 28.1 million spatially enabled property data points from 10 source databases, covering over 50 property data types, with a vocabulary set that includes 557 property terms. By enabling property data searches across multiple databases, PropBase has facilitated new scientific research previously considered impractical. PropBase is easily extended to incorporate 4D (time-series) data and is providing a baseline for new "big data" monitoring projects.
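
    A minimal sketch of the denormalisation idea follows: a single generic table of spatially referenced property points drawn from many source databases, queried uniformly across domains. It uses Python's sqlite3 as a stand-in RDBMS, and all column names and values are hypothetical rather than PropBase's actual schema.

        import sqlite3

        # One generic, denormalised table replaces many subject-specific schemas:
        # every observation becomes a (location, property, value, source) point.
        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE propbase (
            x REAL, y REAL, depth_m REAL,
            property TEXT, value REAL, unit TEXT, source_db TEXT)""")
        db.executemany("INSERT INTO propbase VALUES (?,?,?,?,?,?,?)", [
            (451200.0, 278900.0, 12.5, "porosity",     18.2, "%",    "boreholes"),
            (451230.0, 278910.0, 12.5, "permeability",  3.1, "mD",   "boreholes"),
            (451250.0, 278905.0,  0.0, "arsenic",       7.9, "ug/l", "geochemistry"),
        ])

        # A single cross-domain query: every property point inside a bounding box,
        # regardless of which source database it originally came from.
        for row in db.execute("""SELECT property, value, unit, source_db FROM propbase
                                 WHERE x BETWEEN 451000 AND 452000
                                   AND y BETWEEN 278000 AND 279000"""):
            print(row)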

  2. Featured Article: Genotation: Actionable knowledge for the scientific reader

    PubMed Central

    Willis, Ethan; Sakauye, Mark; Jose, Rony; Chen, Hao; Davis, Robert L

    2016-01-01

    We present an article viewer application that allows a scientific reader to easily discover and share knowledge by linking genomics-related concepts to knowledge in disparate biomedical databases. High-throughput data streams generated by technical advancements have contributed to scientific knowledge discovery at an unprecedented rate. Biomedical informaticists have created a diverse set of databases to store and retrieve the discovered knowledge. The diversity and abundance of these resources present biomedical researchers with a challenge in knowledge discovery, highlighting the need for a better informatics solution. We use a text-mining algorithm, Genomine, to identify gene symbols in the text of a journal article. The identified symbols are supplemented with information from the GenoDB knowledgebase. The self-updating GenoDB contains information from the NCBI Gene, ClinVar, MedGen, dbSNP, KEGG, PharmGKB, UniProt, and HUGO Gene databases. The journal viewer is a web application accessible via a web browser. The features described herein are accessible at www.genotation.org. The Genomine algorithm identifies gene symbols with an F-score of 0.65. GenoDB currently contains information on 59,905 gene symbols, 5633 drug–gene relationships, 5981 gene–disease relationships, and 713 pathways. This application provides scientific readers with actionable knowledge related to the concepts of a manuscript. Readers can save and share supplements to be visualized in a graphical manner. This provides convenient access to the details of complex biological phenomena, enabling biomedical researchers to generate novel hypotheses to further our knowledge of human health. This manuscript presents a novel application that integrates genomic, proteomic, and pharmacogenomic information to supplement the content of a biomedical manuscript and enable readers to automatically discover actionable knowledge. PMID:26900164

  3. The Virtual Observatory Service TheoSSA: Establishing a Database of Synthetic Stellar Flux Standards II. NLTE Spectral Analysis of the OB-Type Subdwarf Feige 110

    NASA Technical Reports Server (NTRS)

    Rauch, T.; Rudkowski, A.; Kampka, D.; Werner, K.; Kruk, J. W.; Moehler, S.

    2014-01-01

    Context. In the framework of the Virtual Observatory (VO), the German Astrophysical VO (GAVO) developed the registered service TheoSSA (Theoretical Stellar Spectra Access). It provides easy access to stellar spectral energy distributions (SEDs) and is intended to ingest SEDs calculated by any model-atmosphere code, generally for all effective temperatures, surface gravities, and elemental compositions. We will establish a database of SEDs of flux standards that are easily accessible via TheoSSA's web interface. Aims. The OB-type subdwarf Feige 110 is a standard star for flux calibration. State-of-the-art non-local thermodynamic equilibrium stellar-atmosphere models that consider opacities of species up to trans-iron elements will be used to provide a reliable synthetic spectrum to compare with observations. Methods. For Feige 110, we demonstrate that the model reproduces not only its overall continuum shape from the far-ultraviolet (FUV) to the optical wavelength range but also the numerous metal lines exhibited in its FUV spectrum. Results. We present a state-of-the-art spectral analysis of Feige 110. We determined Teff = 47 250 ± 2000 K and log g = 6.00 ± 0.20, and the abundances of He, N, P, S, Ti, V, Cr, Mn, Fe, Co, Ni, Zn, and Ge. Ti, V, Mn, Co, Zn, and Ge were identified for the first time in this star. Upper abundance limits were derived for C, O, Si, Ca, and Sc. Conclusions. The TheoSSA database of theoretical SEDs of stellar flux standards guarantees that the flux calibration of astronomical data and cross-calibration between different instruments can be based on models and SEDs calculated with state-of-the-art model-atmosphere codes.

  4. Featured Article: Genotation: Actionable knowledge for the scientific reader.

    PubMed

    Nagahawatte, Panduka; Willis, Ethan; Sakauye, Mark; Jose, Rony; Chen, Hao; Davis, Robert L

    2016-06-01

    We present an article viewer application that allows a scientific reader to easily discover and share knowledge by linking genomics-related concepts to knowledge in disparate biomedical databases. High-throughput data streams generated by technical advancements have contributed to scientific knowledge discovery at an unprecedented rate. Biomedical informaticists have created a diverse set of databases to store and retrieve the discovered knowledge. The diversity and abundance of these resources present biomedical researchers with a challenge in knowledge discovery, highlighting the need for a better informatics solution. We use a text-mining algorithm, Genomine, to identify gene symbols in the text of a journal article. The identified symbols are supplemented with information from the GenoDB knowledgebase. The self-updating GenoDB contains information from the NCBI Gene, ClinVar, MedGen, dbSNP, KEGG, PharmGKB, UniProt, and HUGO Gene databases. The journal viewer is a web application accessible via a web browser. The features described herein are accessible at www.genotation.org. The Genomine algorithm identifies gene symbols with an F-score of 0.65. GenoDB currently contains information on 59,905 gene symbols, 5633 drug-gene relationships, 5981 gene-disease relationships, and 713 pathways. This application provides scientific readers with actionable knowledge related to the concepts of a manuscript. Readers can save and share supplements to be visualized in a graphical manner. This provides convenient access to the details of complex biological phenomena, enabling biomedical researchers to generate novel hypotheses to further our knowledge of human health. This manuscript presents a novel application that integrates genomic, proteomic, and pharmacogenomic information to supplement the content of a biomedical manuscript and enable readers to automatically discover actionable knowledge. © 2016 by the Society for Experimental Biology and Medicine.

  5. The Eclipsing Binary On-Line Atlas (EBOLA)

    NASA Astrophysics Data System (ADS)

    Bradstreet, D. H.; Steelman, D. P.; Sanders, S. J.; Hargis, J. R.

    2004-05-01

    In conjunction with the upcoming release of Binary Maker 3.0, an extensive on-line database of eclipsing binaries is being made available. The purposes of the atlas are to: (1) allow quick and easy access to information on published eclipsing binaries; (2) amass a consistent database of light and radial velocity curve solutions to aid in solving new systems; (3) provide invaluable querying capabilities on all of the parameters of the systems so that informative research can be quickly accomplished on a multitude of published results; (4) aid observers in establishing new observing programs based upon stars needing new light and/or radial velocity curves; (5) encourage workers to submit their published results so that others may have easy access to their work; and (6) provide a vast but easily accessible storehouse of information on eclipsing binaries to accelerate the process of understanding analysis techniques and current work in the field. The database will eventually consist of all published eclipsing binaries with light curve solutions. The following information and data will be supplied whenever available for each binary: original light curves in all bandpasses, original radial velocity observations, light curve parameters, RA and Dec, V magnitudes, spectral types, color indices, periods, binary type, a 3D representation of the system near quadrature, plots of the original light curves and synthetic models, plots of the radial velocity observations with theoretical models, and Binary Maker 3.0 data files (parameter, light curve, radial velocity). The pertinent references for each star are also given, with hyperlinks directly to the papers via the NASA Abstract website for downloading, if available. In addition, the Atlas has extensive searching options so that workers can search specifically for binaries with particular characteristics. The website already has more than 150 systems uploaded. The URL for the site is http://ebola.eastern.edu/.

  6. ABrowse--a customizable next-generation genome browser framework.

    PubMed

    Kong, Lei; Wang, Jun; Zhao, Shuqi; Gu, Xiaocheng; Luo, Jingchu; Gao, Ge

    2012-01-05

    With the rapid growth of genome sequencing projects, genome browsers are becoming indispensable, not only as visualization systems but also as interactive platforms to support open data access and collaborative work. A customizable genome browser framework with rich functions and flexible configuration is therefore needed to facilitate various genome research projects. Based on next-generation web technologies, we have developed a general-purpose genome browser framework, ABrowse, which provides an interactive browsing experience, open data access, and collaborative work support. By supporting Google-Maps-like smooth navigation, ABrowse offers end users a highly interactive browsing experience. To facilitate further data analysis, multiple data access approaches are supported so that external platforms can retrieve data from ABrowse. To promote collaborative work, an online user-space is provided for end users to create, store, and share comments, annotations, and landmarks. For data providers, ABrowse is highly customizable and configurable. The framework provides a set of utilities to import annotation data conveniently. To build ABrowse on existing annotation databases, data providers can specify SQL statements according to their database schema, and customized pages for displaying detailed information about annotation entries can easily be plugged in. For developers, new drawing strategies can be integrated into ABrowse for new types of annotation data. In addition, a standard web service is provided for remote data retrieval, providing an underlying machine-oriented programming interface for open data access. The ABrowse framework is valuable for end users, data providers, and developers, providing rich user functions and flexible customization approaches. The source code is published under the GNU Lesser General Public License v3.0 and is accessible at http://www.abrowse.org/. To demonstrate all the features of ABrowse, a live demo for the Arabidopsis thaliana genome has been built at http://arabidopsis.cbi.edu.cn/.

  7. Intraosseous vascular access in disasters and mass casualty events: A review of the literature.

    PubMed

    Burgert, James M

    2016-01-01

    The intraosseous (IO) route of vascular access has been increasingly used to administer resuscitative fluids and drugs to patients in whom reliable intravenous (IV) access cannot be rapidly or easily obtained. It is unknown to what extent the IO route has been used to gain vascular access during disasters and mass casualty events. The purpose of this review was to examine the existing literature to answer the research question, "What is the utility of the IO route compared to other routes for establishing vascular access in patients injured in disasters and mass casualty events?" A keyword-based online database search of PubMed, CINAHL, and the Cochrane Database of Systematic Reviews was conducted in a university-based academic research cell. Included evidence comprised randomized and nonrandomized trials, systematic reviews with and without meta-analysis, case series, and case reports; narrative reviews and expert opinion were excluded. Of 297 evidence sources located, 22 met the inclusion criteria. The located evidence was organized into four categories: chemical agent poisoning, IO placement while wearing chemical protective clothing (PPE), military trauma, and infectious disease outbreaks. The evidence indicates that the IO route of infusion is pharmacokinetically equal to the IV route and superior to the intramuscular (IM) and endotracheal routes for the administration of antidotal drugs in animal models of chemical agent poisoning while wearing full chemical PPE. The IO route is superior to the IM route for antidote administration during hypovolemic shock. Civilian casualties of explosive attacks and mass shootings would likely benefit from expanded use of the IO route and military resuscitation strategies. The IO route is also useful for fluid resuscitation in the management of diarrheal and hemorrhagic infectious disease outbreaks.

  8. mzDB: A File Format Using Multiple Indexing Strategies for the Efficient Analysis of Large LC-MS/MS and SWATH-MS Data Sets*

    PubMed Central

    Bouyssié, David; Dubois, Marc; Nasso, Sara; Gonzalez de Peredo, Anne; Burlet-Schiltz, Odile; Aebersold, Ruedi; Monsarrat, Bernard

    2015-01-01

    The analysis and management of MS data, especially those generated by data-independent MS acquisition, exemplified by SWATH-MS, pose significant challenges for proteomics bioinformatics. The large size and vast amount of information inherent to these data sets need to be properly structured to enable efficient and straightforward extraction of the signals used to identify specific target peptides. Standard XML-based formats are not well suited to large MS data files, for example, those generated by SWATH-MS, and compromise high-throughput data processing and storage. We developed mzDB, an efficient file format for large MS data sets. It relies on the SQLite software library and consists of a standardized and portable server-less single-file database. An optimized 3D indexing approach is adopted, where the LC-MS coordinates (retention time and m/z), along with the precursor m/z for SWATH-MS data, are used to query the database for data extraction. In comparison with XML formats, mzDB saves ∼25% of storage space and improves access times by factors ranging from two up to 2000, depending on the access pattern. mzDB also shows slightly to significantly lower access times in comparison with other formats such as mz5. Both C++ and Java implementations, converting raw or XML formats to mzDB and providing access methods, will be released under a permissive license. mzDB can easily be accessed through the SQLite C library and its drivers for all major languages, and browsed with existing dedicated GUIs. The mzDB format described here can boost existing mass spectrometry data analysis pipelines, offering unprecedented performance in terms of efficiency, portability, compactness, and flexibility. PMID:25505153
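
    Because an mzDB file is an ordinary server-less SQLite database, signal extraction amounts to a bounding-box query on the indexed LC-MS coordinates. The sketch below illustrates that style of access in Python; the table and column names are hypothetical, not mzDB's actual schema.

        import sqlite3

        # Stand-in for an mzDB-style single-file database: peaks indexed by
        # retention time and m/z, plus the precursor m/z for SWATH-MS data.
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE peak (rt REAL, mz REAL, intensity REAL, precursor_mz REAL)")
        db.execute("CREATE INDEX idx_rt_mz ON peak (rt, mz)")
        db.executemany("INSERT INTO peak VALUES (?,?,?,?)", [
            (35.2, 523.28, 1.2e6, 523.3),
            (35.3, 523.29, 1.9e6, 523.3),
            (72.8, 881.47, 4.0e5, 881.5),
        ])

        # Extract the signal for one target peptide: a window in rt and m/z,
        # restricted to the matching precursor isolation window (SWATH-style).
        rows = db.execute("""SELECT rt, mz, intensity FROM peak
                             WHERE rt BETWEEN 35.0 AND 36.0
                               AND mz BETWEEN 523.25 AND 523.35
                               AND precursor_mz BETWEEN 510 AND 535""").fetchall()
        print(rows)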

  9. Database for propagation models

    NASA Astrophysics Data System (ADS)

    Kantak, Anil V.

    1991-07-01

    A propagation researcher or a systems engineer who intends to use the results of a propagation experiment is generally faced with various database tasks, such as selecting the computer software and hardware and writing the programs to pass the data through the models of interest. This task is repeated every time a new experiment is conducted or the same experiment is carried out at a different location, generating different data. Thus, the users of these data have to spend a considerable portion of their time learning how to implement the computer hardware and software towards the desired end. This situation could be eased considerably if an easily accessible propagation database were created containing all the accepted (standardized) propagation phenomena models approved by the propagation research community; the handling of data would also become easier for the user. Such a database can only stimulate the growth of propagation research if it is available to all researchers, so that the results of an experiment conducted by one researcher can be examined independently by another without different hardware and software being used. The database may be made flexible so that researchers need not be confined to its contents. The database would also help researchers in that they would not have to document the software and hardware tools used in their research, since the propagation research community would already know the database. The following sections present a possible database construction, as well as the properties of a database for propagation research.

  10. Development of computer-assisted instruction application for statistical data analysis android platform as learning resource

    NASA Astrophysics Data System (ADS)

    Hendikawati, P.; Arifudin, R.; Zahid, M. Z.

    2018-03-01

    This study aims to design an Android statistical data analysis application that can be accessed through mobile devices, making it easier for users to access. The application covers various topics in basic statistics along with parametric statistical data analysis. The output of the system is parametric statistical analysis that can be used by students, lecturers, and other users who need the results of statistical calculations quickly and in an easily understood form. The Android application is developed in the Java programming language; the server side uses PHP with the CodeIgniter framework, and the database used is MySQL. The system development methodology is the Waterfall methodology, with stages of analysis, design, coding, testing, implementation, and system maintenance. This statistical data analysis application is expected to support statistics lectures and to help students more easily understand statistical analysis on mobile devices.

  11. Dermatopathology education in the era of modern technology.

    PubMed

    Shahriari, Neda; Grant-Kels, Jane; Murphy, Michael J

    2017-09-01

    Continuing technological advances are inevitably impacting the study and practice of dermatopathology (DP). We are seeing the transition from glass slide microscopy to virtual microscopy, which serves both as an accessible educational medium for medical students, residents, and fellows, in the form of online databases and atlases, and as a research tool to better inform us about the development of visual diagnostic expertise. Expansion in mobile technology is simplifying slide image acquisition and providing greater opportunities for phone- and tablet-based microscopy, including teledermatopathology instruction and consultation in resource-poor areas that lack specialists. Easily accessible mobile and computer-based applications ("apps"), including myDermPath and Clearpath, are providing an interactive medium for DP instruction. The Internet and social networking sites are enabling rapid global communication of DP information and image-sharing, promoting collaborative diagnostic research and scholastic endeavors. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  12. Genealogical approaches to ethical implications of informational assimilative integrated discovery systems (AIDS) in business

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pharhizgar, K.D.; Lunce, S.E.

    1994-12-31

    Development of knowledge-based technological acquisition techniques and customers' information profiles are known as assimilative integrated discovery systems (AIDS) in modern organizations. These systems have access, through processing, to both deep and broad domains of information in modern societies. Through these systems, organizations and individuals can predict future trend probabilities and events concerning their customers. AIDSs are new techniques which produce new information that informants can use without the help of the knowledge sources, because of the existence of highly sophisticated computerized networks. This paper analyzes the danger and side effects of the misuse of information through illegal, unethical, and immoral access to the database in an integrated and assimilative information system as described above. Cognitivistic mapping, pragmatistic informational design gathering, and holistic classifiable and distributive techniques are potentially abusive systems whose outputs can easily be misused by businesses when researching the firm's customers.

  13. TCIApathfinder: an R client for The Cancer Imaging Archive REST API.

    PubMed

    Russell, Pamela; Fountain, Kelly; Wolverton, Dulcy; Ghosh, Debashis

    2018-06-05

    The Cancer Imaging Archive (TCIA) hosts publicly available de-identified medical images of cancer from over 25 body sites and over 30,000 patients. Over 400 published studies have utilized freely available TCIA images. Images and metadata are available for download through a web interface or a REST API. Here we present TCIApathfinder, an R client for the TCIA REST API. TCIApathfinder wraps API access in user-friendly R functions that can be called interactively within an R session or easily incorporated into scripts. Functions are provided to explore the contents of the large database and to download image files. TCIApathfinder provides easy access to TCIA resources in the highly popular R programming environment. TCIApathfinder is freely available under the MIT license as a package on CRAN (https://cran.r-project.org/web/packages/TCIApathfinder/index.html) and at https://github.com/pamelarussell/TCIApathfinder. Copyright ©2018, American Association for Cancer Research.
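
    For comparison with the R client, the underlying REST API can be called directly; a sketch in Python follows. The v4 endpoint path is an assumption based on TCIA's public documentation around the time of the paper and may have changed, so treat it as illustrative.

        import requests

        # Query the TCIA REST API for the list of image collections. The base URL
        # and resource name below are assumptions, not guaranteed current.
        BASE = "https://services.cancerimagingarchive.net/services/v4/TCIA/query"

        resp = requests.get(f"{BASE}/getCollectionValues",
                            params={"format": "json"}, timeout=30)
        resp.raise_for_status()
        collections = [c["Collection"] for c in resp.json()]
        print(f"{len(collections)} collections, e.g.: {collections[:5]}")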

  14. SInCRe—structural interactome computational resource for Mycobacterium tuberculosis

    PubMed Central

    Metri, Rahul; Hariharaputran, Sridhar; Ramakrishnan, Gayatri; Anand, Praveen; Raghavender, Upadhyayula S.; Ochoa-Montaño, Bernardo; Higueruelo, Alicia P.; Sowdhamini, Ramanathan; Chandra, Nagasuma R.; Blundell, Tom L.; Srinivasan, Narayanaswamy

    2015-01-01

    We have developed an integrated database for Mycobacterium tuberculosis H37Rv (Mtb) that collates information on protein sequences, domain assignments, functional annotation, and 3D structural information, along with protein–protein and protein–small molecule interactions. SInCRe (Structural Interactome Computational Resource) was developed out of the CamBan (Cambridge and Bangalore) collaboration. The motivation for developing this database is to provide an integrated platform that allows easy access to, and interpretation of, the data and results obtained by all the groups in CamBan in the field of Mtb informatics. In-house algorithms and databases developed independently by various academic groups in CamBan are used to generate Mtb-specific datasets, which are integrated into this database to provide a structural dimension to studies on tuberculosis. The SInCRe database readily provides information on the identification of functional domains, genome-scale modelling of the structures of Mtb proteins, and characterization of the small-molecule binding sites within Mtb. The resource also provides structure-based function annotation and information on small-molecule binders, including FDA (Food and Drug Administration)-approved drugs, protein–protein interactions (PPIs), and natural compounds that potentially bind to pathogen proteins and weaken or eliminate host–pathogen protein–protein interactions. Together these provide the prerequisites for identifying off-target binding. Database URL: http://proline.biochem.iisc.ernet.in/sincre PMID:26130660

  15. A review of accessibility of administrative healthcare databases in the Asia-Pacific region.

    PubMed

    Milea, Dominique; Azmi, Soraya; Reginald, Praveen; Verpillat, Patrice; Francois, Clement

    2015-01-01

    We describe and compare the availability and accessibility of administrative healthcare databases (AHDB) in several Asia-Pacific countries: Australia, Japan, South Korea, Taiwan, Singapore, China, Thailand, and Malaysia. The study included hospital records, reimbursement databases, prescription databases, and data linkages. Databases were first identified through PubMed, Google Scholar, and the ISPOR database register. Database custodians were contacted. Six criteria were used to assess the databases and provided the basis for a tool to categorise databases into seven levels ranging from least accessible (Level 1) to most accessible (Level 7). We also categorised overall data accessibility for each country as high, medium, or low based on accessibility of databases as well as the number of academic articles published using the databases. Fifty-four administrative databases were identified. Only a limited number of databases allowed access to raw data and were at Level 7 [Medical Data Vision EBM Provider, Japan Medical Data Centre (JMDC) Claims database and Nihon-Chouzai Pharmacy Claims database in Japan, and Medicare, Pharmaceutical Benefits Scheme (PBS), Centre for Health Record Linkage (CHeReL), HealthLinQ, Victorian Data Linkages (VDL), SA-NT DataLink in Australia]. At Levels 3-6 were several databases from Japan [Hamamatsu Medical University Database, Medi-Trend, Nihon University School of Medicine Clinical Data Warehouse (NUSM)], Australia [Western Australia Data Linkage (WADL)], Taiwan [National Health Insurance Research Database (NHIRD)], South Korea [Health Insurance Review and Assessment Service (HIRA)], and Malaysia [United Nations University (UNU)-Casemix]. Countries were categorised as having a high level of data accessibility (Australia, Taiwan, and Japan), medium level of accessibility (South Korea), or a low level of accessibility (Thailand, China, Malaysia, and Singapore). In some countries, data may be available but accessibility was restricted based on requirements by data custodians. Compared with previous research, this study describes the landscape of databases in the selected countries with more granularity using an assessment tool developed for this purpose. A high number of databases were identified but most had restricted access, preventing their potential use to support research. We hope that this study helps to improve the understanding of the AHDB landscape, increase data sharing and database research in Asia-Pacific countries.

  16. Reflective Database Access Control

    ERIC Educational Resources Information Center

    Olson, Lars E.

    2009-01-01

    "Reflective Database Access Control" (RDBAC) is a model in which a database privilege is expressed as a database query itself, rather than as a static privilege contained in an access control list. RDBAC aids the management of database access controls by improving the expressiveness of policies. However, such policies introduce new interactions…

  17. WOVOdat Design Document: The Schema, Table Descriptions, and Create Table Statements for the Database of Worldwide Volcanic Unrest (WOVOdat Version 1.0)

    USGS Publications Warehouse

    Venezky, Dina Y.; Newhall, Christopher G.

    2007-01-01

    WOVOdat Overview During periods of volcanic unrest, the ability to forecast near-future activity is a primary concern for human populations living near volcanoes. Our ability to forecast future activity and mitigate hazards is based on knowledge of previous activity at the volcano exhibiting unrest and at similar volcanoes. A small set of experts with past experience are often involved in forecasting. We need both to preserve the knowledge these experts use and to continue investigating volcanic data to make better forecasts. Advances in instrumentation, networking, and data storage technologies have greatly increased our ability to collect volcanic data and share observations with colleagues. This wealth of data creates numerous opportunities for gaining a better understanding of magmatic conditions and processes, if the data can be easily accessed for comparison. To allow comparison of volcanic unrest data, we are creating a central database called WOVOdat. WOVOdat will contain a subset of time-series and geo-referenced data from each WOVO observatory in common and easily accessible formats. WOVOdat is being created for volcano experts in charge of forecasting volcanic activity, scientists investigating volcanic processes, and the public. The types of queries each of these groups might ask range from 'What volcanoes were active in November of 2002?' and 'What are the relationships between tectonic earthquakes and volcanic processes?' to complex analyses of volcanic unrest to determine what future activity might occur. A new structure for storing and accessing our data was needed to examine processes across a wide range of volcanologic conditions. WOVOdat provides this new structure, using relationships to connect the data parameters so that searches can be created for analogs of unrest. The subset of data that will fill WOVOdat will continue to be collected by the observatories, which will remain the primary archives of raw and detailed data on individual episodes of unrest. MySQL, an open-source database, was chosen for WOVOdat because of its integration with common web languages. The questions of where the data will be stored and how the disparate data sets will be integrated are not discussed in detail here. The focus of this document is to explain the data types, formats, and table organization chosen for WOVOdat 1.0. It was written for database administrators, data loaders, query writers, and anyone who monitors volcanoes. We begin with an overview of several challenges faced and solutions used in creating the WOVOdat schema. Specifics are then given for the parameters and table organization. After each table-organization section, basic CREATE TABLE statements are included for viewing the database field formats. In the next stage of the project, scripts will be needed for data conversion, entry, and cleansing. Views will also need to be created once the data have been loaded and the basic queries are better known. Many questions and opportunities remain. We look forward to the growth and continual improvement in efficiency of the system. We hope WOVOdat will improve our understanding of magmatic systems and help mitigate future volcanic hazards.

  18. Database Access Systems.

    ERIC Educational Resources Information Center

    Dalrymple, Prudence W.; Roderer, Nancy K.

    1994-01-01

    Highlights the changes that have occurred from 1987-93 in database access systems. Topics addressed include types of databases, including CD-ROMs; enduser interface; database selection; database access management, including library instruction and use of primary literature; economic issues; database users; the search process; and improving…

  19. VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, N.; Sellis, Timos

    1992-01-01

    One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed data sets and directories. VIEWCACHE allows database browsing and searching that performs inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers pointing to the desired data is cached. VIEWCACHE includes spatial access methods for accessing image data sets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate distributed database search.
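
    A minimal sketch of the pointer-caching idea follows: a cross-database search collects (site, key) pointers without moving any data, and records are fetched only when a cached pointer is dereferenced. The sites, keys, and search window are hypothetical.

        # Two "remote" databases, here simply in-memory dictionaries.
        SITES = {
            "archive_a": {"obj1": {"ra": 10.7, "dec": 41.3}},
            "archive_b": {"obj9": {"ra": 10.8, "dec": 41.2}},
        }

        def search(window):
            """Collect pointers to objects inside an (ra_min, ra_max, dec_min, dec_max)
            window, without copying the records themselves."""
            ra0, ra1, dec0, dec1 = window
            return [(site, key)
                    for site, objects in SITES.items()
                    for key, rec in objects.items()
                    if ra0 <= rec["ra"] <= ra1 and dec0 <= rec["dec"] <= dec1]

        pointer_cache = search((10.0, 11.0, 41.0, 42.0))  # no data moved yet
        print(pointer_cache)

        # Dereference lazily, only for the pointers the user actually needs.
        site, key = pointer_cache[0]
        print(SITES[site][key])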

  20. RAACFDb: Rheumatoid arthritis ayurvedic classical formulations database.

    PubMed

    Mohamed Thoufic Ali, A M; Agrawal, Aakash; Sajitha Lulu, S; Mohana Priya, A; Vino, S

    2017-02-02

    In past years, the treatment of rheumatoid arthritis (RA) has undergone remarkable changes across all therapeutic modes. A current focus of clinical research is to identify new directions for better treatment options for RA. Recent ethnopharmacological investigations revealed that traditional herbal remedies are the most preferred modality of complementary and alternative medicine (CAM). However, several ayurvedic modes of treatment and formulations for RA have not been well studied or documented from the Indian traditional system of medicine. This led us to develop an integrated database, RAACFDb (acronym: Rheumatoid Arthritis Ayurvedic Classical Formulations Database), by consolidating data from the repository of the Vedic Samhita - The Ayurveda, so that the available formulation information can be retrieved easily. Literature data were gathered using several search engines and from ayurvedic practitioners to load information into the database. To represent the collected information about classical ayurvedic formulations, an integrated database was constructed and implemented on a MySQL and PHP back-end. The database describes all ayurvedic classical formulations for the treatment of rheumatoid arthritis, including their composition, usage, plant parts used, and the active ingredients present in the composition and their structures. The prime objective is to locate ayurvedic formulations that have proven successful and highly effective among patients, with reduced side effects. The database (freely available at www.beta.vit.ac.in/raacfdb/index.html) hopefully enables easy access for clinical researchers and students and the discovery of novel leads with reduced side effects. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  1. CLEARPOND: Cross-Linguistic Easy-Access Resource for Phonological and Orthographic Neighborhood Densities

    PubMed Central

    Marian, Viorica; Bartolotti, James; Chabal, Sarah; Shook, Anthony

    2012-01-01

    Past research has demonstrated cross-linguistic, cross-modal, and task-dependent differences in neighborhood density effects, indicating a need to control for neighborhood variables when developing and interpreting research on language processing. The goals of the present paper are two-fold: (1) to introduce CLEARPOND (Cross-Linguistic Easy-Access Resource for Phonological and Orthographic Neighborhood Densities), a centralized database of phonological and orthographic neighborhood information, both within and between languages, for five commonly-studied languages: Dutch, English, French, German, and Spanish; and (2) to show how CLEARPOND can be used to compare general properties of phonological and orthographic neighborhoods across languages. CLEARPOND allows researchers to input a word or list of words and obtain phonological and orthographic neighbors, neighborhood densities, mean neighborhood frequencies, word lengths by number of phonemes and graphemes, and spoken-word frequencies. Neighbors can be defined by substitution, deletion, and/or addition, and the database can be queried separately along each metric or summed across all three. Neighborhood values can be obtained both within and across languages, and outputs can optionally be restricted to neighbors of higher frequency. To enable researchers to more quickly and easily develop stimuli, CLEARPOND can also be searched by features, generating lists of words that meet precise criteria, such as a specific range of neighborhood sizes, lexical frequencies, and/or word lengths. CLEARPOND is freely-available to researchers and the public as a searchable, online database and for download at http://clearpond.northwestern.edu. PMID:22916227
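
    The neighbor definitions above (one substitution, deletion, or addition) are easy to state in code; a minimal sketch follows in Python, using a toy lexicon rather than CLEARPOND's word lists.

        def neighbors(word, lexicon):
            """Orthographic neighbors: lexicon entries exactly one substitution,
            deletion, or addition away from `word`."""
            alphabet = "abcdefghijklmnopqrstuvwxyz"
            candidates = set()
            for i in range(len(word)):
                candidates.add(word[:i] + word[i + 1:])            # deletion
                for ch in alphabet:
                    candidates.add(word[:i] + ch + word[i + 1:])   # substitution
            for i in range(len(word) + 1):
                for ch in alphabet:
                    candidates.add(word[:i] + ch + word[i:])       # addition
            candidates.discard(word)
            return sorted(candidates & set(lexicon))

        lexicon = {"cat", "cot", "coat", "at", "cart", "cast", "dog"}
        print(neighbors("cat", lexicon))   # ['at', 'cart', 'cast', 'coat', 'cot']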

  2. Query3d: a new method for high-throughput analysis of functional residues in protein structures.

    PubMed

    Ausiello, Gabriele; Via, Allegra; Helmer-Citterich, Manuela

    2005-12-01

    The identification of local similarities between two protein structures can provide clues of a common function. Many different methods exist for searching for similar subsets of residues in proteins of known structure. However, the lack of functional and structural information on single residues, together with the low level of integration of this information in comparison methods, is a limitation that prevents these methods from being fully exploited in high-throughput analyses. Here we describe Query3d, a program that is both a structural DBMS (Database Management System) and a local comparison method. The method conserves a copy of all the residues of the Protein Data Bank annotated with a variety of functional and structural information. New annotations can be easily added from a variety of methods and known databases. The algorithm makes it possible to create complex queries based on the residues' function and then to compare only subsets of the selected residues. Functional information is also essential to speed up the comparison and the analysis of the results. With Query3d, users can easily obtain statistics on how many and which residues share certain properties in all proteins of known structure. At the same time, the method also finds their structural neighbours in the whole PDB. Programs and data can be accessed through the PdbFun web interface.

  3. Information extraction from Italian medical reports: An ontology-driven approach.

    PubMed

    Viani, Natalia; Larizza, Cristiana; Tibollo, Valentina; Napolitano, Carlo; Priori, Silvia G; Bellazzi, Riccardo; Sacchi, Lucia

    2018-03-01

    In this work, we propose an ontology-driven approach to identify events and their attributes from episodes of care included in medical reports written in Italian. For this language, shared resources for clinical information extraction are not easily accessible. The corpus considered in this work includes 5432 non-annotated medical reports belonging to patients with rare arrhythmias. To guide the information extraction process, we built a domain-specific ontology that includes the events and the attributes to be extracted, with related regular expressions. The ontology and the annotation system were constructed on a development set, while the performance was evaluated on an independent test set. As a gold standard, we considered a manually curated hospital database named TRIAD, which stores most of the information written in reports. The proposed approach performs well on the considered Italian medical corpus, with a percentage of correct annotations above 90% for most considered clinical events. We also assessed the possibility to adapt the system to the analysis of another language (i.e., English), with promising results. Our annotation system relies on a domain ontology to extract and link information in clinical text. We developed an ontology that can be easily enriched and translated, and the system performs well on the considered task. In the future, it could be successfully used to automatically populate the TRIAD database. Copyright © 2017 Elsevier B.V. All rights reserved.
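
    A minimal sketch of the ontology-driven extraction step follows: each event in the ontology carries its own trigger and attribute patterns, so extending coverage means editing the ontology rather than the extractor. The Italian patterns below are illustrative only, not the authors' ontology.

        import re

        # Toy ontology: every clinical event stores its own regular expressions.
        ONTOLOGY = {
            "ecg": {
                "trigger": r"\becg\b|elettrocardiogramma",
                "attributes": {"qtc_ms": r"qtc\s*[:=]?\s*(\d{3})\s*ms"},
            },
            "sincope": {"trigger": r"\bsincope\b", "attributes": {}},
        }

        def extract(text):
            """Return (event, attributes) pairs found in a report, ontology-driven."""
            text = text.lower()
            found = []
            for event, spec in ONTOLOGY.items():
                if re.search(spec["trigger"], text):
                    attrs = {name: m.group(1)
                             for name, pat in spec["attributes"].items()
                             if (m := re.search(pat, text))}
                    found.append((event, attrs))
            return found

        print(extract("Episodio di sincope. ECG: QTc = 470 ms."))
        # -> [('ecg', {'qtc_ms': '470'}), ('sincope', {})]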

  4. Query3d: a new method for high-throughput analysis of functional residues in protein structures

    PubMed Central

    Ausiello, Gabriele; Via, Allegra; Helmer-Citterich, Manuela

    2005-01-01

    Background The identification of local similarities between two protein structures can provide clues of a common function. Many different methods exist for searching for similar subsets of residues in proteins of known structure. However, the lack of functional and structural information on single residues, together with the low level of integration of this information in comparison methods, is a limitation that prevents these methods from being fully exploited in high-throughput analyses. Results Here we describe Query3d, a program that is both a structural DBMS (Database Management System) and a local comparison method. The method conserves a copy of all the residues of the Protein Data Bank annotated with a variety of functional and structural information. New annotations can be easily added from a variety of methods and known databases. The algorithm makes it possible to create complex queries based on the residues' function and then to compare only subsets of the selected residues. Functional information is also essential to speed up the comparison and the analysis of the results. Conclusion With Query3d, users can easily obtain statistics on how many and which residues share certain properties in all proteins of known structure. At the same time, the method also finds their structural neighbours in the whole PDB. Programs and data can be accessed through the PdbFun web interface. PMID:16351754

  5. A review of accessibility of administrative healthcare databases in the Asia-Pacific region

    PubMed Central

    Milea, Dominique; Azmi, Soraya; Reginald, Praveen; Verpillat, Patrice; Francois, Clement

    2015-01-01

    Objective We describe and compare the availability and accessibility of administrative healthcare databases (AHDB) in several Asia-Pacific countries: Australia, Japan, South Korea, Taiwan, Singapore, China, Thailand, and Malaysia. Methods The study included hospital records, reimbursement databases, prescription databases, and data linkages. Databases were first identified through PubMed, Google Scholar, and the ISPOR database register. Database custodians were contacted. Six criteria were used to assess the databases and provided the basis for a tool to categorise databases into seven levels ranging from least accessible (Level 1) to most accessible (Level 7). We also categorised overall data accessibility for each country as high, medium, or low based on accessibility of databases as well as the number of academic articles published using the databases. Results Fifty-four administrative databases were identified. Only a limited number of databases allowed access to raw data and were at Level 7 [Medical Data Vision EBM Provider, Japan Medical Data Centre (JMDC) Claims database and Nihon-Chouzai Pharmacy Claims database in Japan, and Medicare, Pharmaceutical Benefits Scheme (PBS), Centre for Health Record Linkage (CHeReL), HealthLinQ, Victorian Data Linkages (VDL), SA-NT DataLink in Australia]. At Levels 3–6 were several databases from Japan [Hamamatsu Medical University Database, Medi-Trend, Nihon University School of Medicine Clinical Data Warehouse (NUSM)], Australia [Western Australia Data Linkage (WADL)], Taiwan [National Health Insurance Research Database (NHIRD)], South Korea [Health Insurance Review and Assessment Service (HIRA)], and Malaysia [United Nations University (UNU)-Casemix]. Countries were categorised as having a high level of data accessibility (Australia, Taiwan, and Japan), medium level of accessibility (South Korea), or a low level of accessibility (Thailand, China, Malaysia, and Singapore). In some countries, data may be available but accessibility was restricted based on requirements by data custodians. Conclusions Compared with previous research, this study describes the landscape of databases in the selected countries with more granularity using an assessment tool developed for this purpose. A high number of databases were identified but most had restricted access, preventing their potential use to support research. We hope that this study helps to improve the understanding of the AHDB landscape, increase data sharing and database research in Asia-Pacific countries. PMID:27123180

  6. EasyKSORD: A Platform of Keyword Search Over Relational Databases

    NASA Astrophysics Data System (ADS)

    Peng, Zhaohui; Li, Jing; Wang, Shan

    Keyword Search Over Relational Databases (KSORD) enables casual users to use keyword queries (a set of keywords) to search relational databases just as they search the Web, without any knowledge of the database schema or any need to write SQL queries. Based on our previous work, we design and implement a novel KSORD platform named EasyKSORD through which users and system administrators can use and manage different KSORD systems in a simple, unified manner. EasyKSORD supports advanced queries, efficient data-graph-based search engines, multiform result presentations, and system logging and analysis. Through EasyKSORD, users can search relational databases easily and read search results conveniently, and system administrators can easily monitor and analyze the operations of KSORD and manage KSORD systems much better.

  7. Using a Semi-Realistic Database to Support a Database Course

    ERIC Educational Resources Information Center

    Yue, Kwok-Bun

    2013-01-01

    A common problem for university relational database courses is to construct effective databases for instructions and assignments. Highly simplified "toy" databases are easily available for teaching, learning, and practicing. However, they do not reflect the complexity and practical considerations that students encounter in real-world…

  8. VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, N.; Sellis, Timos

    1993-01-01

    One of the biggest problems facing NASA today is providing scientists with efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed datasets and directories. VIEWCACHE allows database browsing and searches that perform inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing astrophysics databases that are physically distributed all over the world. Once the search is complete, the set of collected pointers to the desired data is cached. VIEWCACHE includes spatial access methods for image datasets, which allow much easier query formulation by referring directly to the image and support very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate database search.
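    The pointer-caching pattern can be sketched as follows; the class and field names here are a hypothetical illustration of the idea, not the NASA implementation. The search phase collects and caches pointers only, and data crosses database sites only when the cached pointers are finally dereferenced.

```python
# Conceptual VIEWCACHE-style sketch (hypothetical names): searches collect
# pointers to remote records; data moves only on dereference.
from dataclasses import dataclass

@dataclass(frozen=True)
class Pointer:
    site: str       # remote database site
    dataset: str    # dataset or directory name
    key: int        # record identifier at the remote site

class ViewCache:
    def __init__(self, fetch_record):
        self._fetch = fetch_record    # callable that hits the remote site
        self._pointers = []           # cached search results: pointers only

    def search(self, remote_index, predicate):
        """Cache pointers for matching entries; no data movement yet."""
        self._pointers = [p for p in remote_index if predicate(p)]
        return len(self._pointers)

    def materialize(self):
        """Dereference the cached pointers; data moves only now."""
        return [self._fetch(p) for p in self._pointers]

# Toy "remote" site:
index = [Pointer("site-a", "images", k) for k in range(5)]
store = {p: f"payload-{p.key}" for p in index}
cache = ViewCache(lambda p: store[p])
cache.search(index, lambda p: p.key % 2 == 0)   # browse/cross-reference
print(cache.materialize())                      # fetches three records
```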

  9. International Cancer Genome Consortium Data Portal--a one-stop shop for cancer genomics data.

    PubMed

    Zhang, Junjun; Baran, Joachim; Cros, A; Guberman, Jonathan M; Haider, Syed; Hsu, Jack; Liang, Yong; Rivkin, Elena; Wang, Jianxin; Whitty, Brett; Wong-Erasmus, Marie; Yao, Long; Kasprzyk, Arek

    2011-01-01

    The International Cancer Genome Consortium (ICGC) is a collaborative effort to characterize genomic abnormalities in 50 different cancer types. To make these data available, the ICGC has created the ICGC Data Portal. Powered by the BioMart software, the Data Portal allows each ICGC member institution to manage and maintain its own databases locally, while seamlessly presenting all the data through a single access point. The Data Portal currently contains data from 24 cancer projects, including ICGC, The Cancer Genome Atlas (TCGA), Johns Hopkins University, and the Tumor Sequencing Project, comprising 3478 genomes across 13 cancer types and subtypes. Available open access data types include simple somatic mutations, copy number alterations, structural rearrangements, gene expression, microRNAs, DNA methylation and exon junctions. Additionally, simple germline variations are available as controlled access data. The Data Portal uses a web-based graphical user interface (GUI) to offer researchers multiple ways to quickly and easily search and analyze the available data. The web interface can assist in constructing complicated queries across multiple data sets. Several application programming interfaces are also available for programmatic access. Here we describe the organization, functionality, and capabilities of the ICGC Data Portal.
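    Programmatic access to such a portal typically looks like the sketch below; the endpoint and parameter names are hypothetical placeholders, not the documented ICGC API.

```python
# Hedged sketch of scripted portal access; BASE and the query parameters are
# made-up placeholders standing in for a real data portal's REST interface.
import json
import urllib.parse
import urllib.request

BASE = "https://example-icgc-portal.org/api"   # hypothetical URL

def fetch_mutations(project, gene, limit=10):
    """Request simple somatic mutations for one gene in one project."""
    params = urllib.parse.urlencode(
        {"project": project, "gene": gene, "limit": limit})
    with urllib.request.urlopen(f"{BASE}/mutations?{params}") as resp:
        return json.load(resp)

# e.g. fetch_mutations("Pancreatic_Cancer", "KRAS")
```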

  10. Enhancing UCSF Chimera through web services.

    PubMed

    Huang, Conrad C; Meng, Elaine C; Morris, John H; Pettersen, Eric F; Ferrin, Thomas E

    2014-07-01

    Integrating access to web services with desktop applications allows for an expanded set of application features, including performing computationally intensive tasks and convenient searches of databases. We describe how we have enhanced UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), a program for the interactive visualization and analysis of molecular structures and related data, through the addition of several web services (http://www.rbvi.ucsf.edu/chimera/docs/webservices.html). By streamlining access to web services, including the entire job submission, monitoring and retrieval process, Chimera makes it simpler for users to focus on their science projects rather than data manipulation. Chimera uses Opal, a toolkit for wrapping scientific applications as web services, to provide scalable and transparent access to several popular software packages. We illustrate Chimera's use of web services with an example workflow that interleaves use of these services with interactive manipulation of molecular sequences and structures, and we provide an example Python program to demonstrate how easily Opal-based web services can be accessed from within an application. Web server availability: http://webservices.rbvi.ucsf.edu/opal2/dashboard?command=serviceList. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
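    The streamlined job lifecycle (submit, monitor, retrieve) can be sketched generically; the three callables below are hypothetical stand-ins for the operations an Opal-style service exposes, not a verified client for the UCSF servers.

```python
# Generic launch/monitor/retrieve loop around a web-service job. The submit,
# poll, and fetch callables are hypothetical wrappers, not the Opal API.
import time

def run_remote_job(submit, poll, fetch, args, interval=5.0):
    """Run a remote job to completion and return its outputs."""
    job_id = submit(args)               # hand the job to the service
    status = poll(job_id)
    while status not in ("DONE", "FAILED"):
        time.sleep(interval)            # avoid hammering the server
        status = poll(job_id)
    if status == "FAILED":
        raise RuntimeError(f"job {job_id} failed")
    return fetch(job_id)                # retrieve results when finished
```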

  11. Distributed structure-searchable toxicity (DSSTox) public database network: a proposal.

    PubMed

    Richard, Ann M; Williams, ClarLynda R

    2002-01-29

    The ability to assess the potential genotoxicity, carcinogenicity, or other toxicity of pharmaceutical or industrial chemicals based on chemical structure information is a highly coveted and shared goal of varied academic, commercial, and government regulatory groups. These diverse interests often employ different approaches and have different criteria and uses for toxicity assessments, but they share a need for unrestricted access to existing public toxicity data linked with chemical structure information. Currently, there exists no central repository of toxicity information, commercial or public, that adequately meets the data requirements for flexible analogue searching, Structure-Activity Relationship (SAR) model development, or the building of chemical relational databases (CRD). The distributed structure-searchable toxicity (DSSTox) public database network is being proposed as a community-supported, web-based effort to address these shared needs of the SAR and toxicology communities. The DSSTox project has the following major elements: (1) to adopt and encourage the use of a common standard file format (structure data file (SDF)) for public toxicity databases that includes chemical structure, text and property information, and that can easily be imported into available CRD applications; (2) to implement a distributed source approach, managed by a DSSTox Central Website, that will enable decentralized, free public access to structure-toxicity data files, and that will effectively link knowledgeable toxicity data sources with potential users of these data from other disciplines (such as chemistry, modeling, and computer science); and (3) to engage public/commercial/academic/industry groups in contributing to and expanding this community-wide, public data sharing and distribution effort. The DSSTox project's overall aims are to effect the closer association of chemical structure information with existing toxicity data, and to promote and facilitate structure-based exploration of these data within a common chemistry-based framework that spans toxicological disciplines.
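    Element (1) builds on the SDF convention, in which each record pairs a molfile connection table with named data items and ends with "$$$$". The record and field names below are a made-up illustration, not DSSTox content; the parser shows how easily such property fields can be read back.

```python
# Toy SDF record (structure block truncated) with made-up toxicity fields,
# plus a minimal parser for the "> <FIELD>" data items.
SDF_RECORD = """\
benzene
  example

  0  0  0  0  0  0  0  0  0  0999 V2000
M  END
> <CASRN>
71-43-2

> <CARCINOGENICITY_CALL>
positive

$$$$
"""

def sdf_properties(record):
    """Extract named data items from one SDF record."""
    props, field, buf = {}, None, []
    for line in record.splitlines():
        if line.startswith("> <"):               # start of a data item
            field, buf = line[3:line.index(">", 3)], []
        elif field is not None:
            if line.strip() in ("", "$$$$"):     # blank line ends the item
                props[field] = "\n".join(buf)
                field = None
            else:
                buf.append(line)
    return props

print(sdf_properties(SDF_RECORD))
# {'CASRN': '71-43-2', 'CARCINOGENICITY_CALL': 'positive'}
```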

  12. Smiles2Monomers: a link between chemical and biological structures for polymers.

    PubMed

    Dufresne, Yoann; Noé, Laurent; Leclère, Valérie; Pupin, Maude

    2015-01-01

    The monomeric composition of polymers is powerful for structure comparison and synthetic biology, among other applications. Many databases give access to the atomic structure of compounds, but the monomeric structure of polymers is often lacking. We have designed a smart algorithm, implemented in the tool Smiles2Monomers (s2m), to infer efficiently and accurately the monomeric structure of a polymer from its chemical structure. Our strategy is divided into two steps: first, monomers are mapped onto the atomic structure by an efficient subgraph-isomorphism algorithm; second, the best tiling is computed so that non-overlapping monomers cover the whole structure of the target polymer. The mapping is based on a Markovian index built by a dynamic programming algorithm. The index enables s2m to quickly search for all the given monomers on a target polymer. A greedy algorithm then combines the mapped monomers into a consistent monomeric structure, and a local branch-and-cut algorithm refines the structure. We tested this method on two manually annotated databases of polymers and reconstructed the structures de novo with a sensitivity over 90%. The average computation time per polymer is 2 s. s2m automatically creates de novo monomeric annotations for polymers, efficiently in terms of both computation time and sensitivity. s2m allowed us to detect annotation errors in the tested databases and to easily find the accurate structures. s2m could therefore be integrated into the curation process of small-compound databases to verify current entries and accelerate the annotation of new polymers. The full method can be downloaded or accessed via a website for peptide-like polymers at http://bioinfo.lifl.fr/norine/smiles2monomers.jsp.
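    The tiling step can be illustrated with a toy greedy pass (s2m itself refines the result with local branch-and-cut): candidate monomer matches, each covering a set of atom indices, are kept only when they do not overlap monomers already chosen.

```python
# Toy version of the tiling step (not the s2m code): keep non-overlapping
# monomer matches, preferring larger ones, to cover the polymer's atoms.
def greedy_tiling(matches):
    """matches: list of (monomer_name, frozenset_of_atom_indices)."""
    chosen, covered = [], set()
    for name, atoms in sorted(matches, key=lambda m: -len(m[1])):
        if atoms.isdisjoint(covered):      # non-overlap constraint
            chosen.append(name)
            covered |= atoms
    return chosen, covered

matches = [("Ala", frozenset({0, 1, 2})),
           ("Gly", frozenset({2, 3})),     # overlaps Ala -> rejected
           ("Ser", frozenset({3, 4, 5}))]
print(greedy_tiling(matches))
# (['Ala', 'Ser'], {0, 1, 2, 3, 4, 5})
```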

  13. PathwayAccess: CellDesigner plugins for pathway databases.

    PubMed

    Van Hemert, John L; Dickerson, Julie A

    2010-09-15

    CellDesigner provides a user-friendly interface for graphical biochemical pathway description. Many pathway databases are not directly exportable to CellDesigner models. PathwayAccess is an extensible suite of CellDesigner plugins that connect CellDesigner directly to pathway databases using their respective Java application programming interfaces. The process of creating new PathwayAccess plugins for specific pathway databases is streamlined. Three PathwayAccess plugins, MetNetAccess, BioCycAccess and ReactomeAccess, directly connect CellDesigner to the pathway databases MetNetDB, BioCyc and Reactome. PathwayAccess plugins enable CellDesigner users to expose pathway data to analytical CellDesigner functions, curate their pathway databases and visually integrate pathway data from different databases using the standard Systems Biology Markup Language and Systems Biology Graphical Notation. Implemented in Java, PathwayAccess plugins run with CellDesigner version 4.0.1 and were tested on Ubuntu Linux, Windows XP and 7, and Mac OS X. Source code, binaries, documentation and video walkthroughs are freely available at http://vrac.iastate.edu/~jlv.

  14. Students views of integrating web-based learning technology into the nursing curriculum - A descriptive survey.

    PubMed

    Adams, Audrey; Timmins, Fiona

    2006-01-01

    This paper describes students' experiences of a Web-based innovation at one university, reporting on the first phase of the development, in which two Web-based modules were created. Using a survey approach (n=44), students' access to and use of computer technology were explored. Findings revealed that students' prior use of computers and Internet technologies was higher than previously reported, although use of databases was low. Skills in this area increased during the programme, with a significant rise in database, email, search engine and word processing use. Many specific computer skills were learned during the programme, with high numbers reporting the ability to deal adequately with files and folders. Overall, the experience was a positive one for students. While a sense of student isolation was not reported, as many students kept in touch by phone and class attendance continued, some individual students did appear to isolate themselves. This teaching methodology has much to offer in providing convenient, easy-to-access programmes that can be adapted to individual lifestyles. However, student support mechanisms need careful consideration for students who are at risk of becoming isolated. Staff also need to be supported in delivering this methodology, and face-to-face contact with teachers for some part of the programme is preferable.

  15. Computerized literature reference system: use of an optical scanner and optical character recognition software.

    PubMed

    Lossef, S V; Schwartz, L H

    1990-09-01

    A computerized reference system for radiology journal articles was developed by using an IBM-compatible personal computer with a hand-held optical scanner and optical character recognition software. This allows direct entry of scanned text from printed material into word processing or database files. Additionally, line diagrams and photographs of radiographs can be incorporated into these files. A text search and retrieval software program enables rapid searching for keywords in scanned documents. The hand scanner and software programs are commercially available, relatively inexpensive, and easily used. This permits construction of a personalized radiology literature file of readily accessible text and images requiring minimal typing or keystroke entry.

  16. Fast fingerprint database maintenance for indoor positioning based on UGV SLAM.

    PubMed

    Tang, Jian; Chen, Yuwei; Chen, Liang; Liu, Jingbin; Hyyppä, Juha; Kukko, Antero; Kaartinen, Harri; Hyyppä, Hannu; Chen, Ruizhi

    2015-03-04

    Indoor positioning technology has become more and more important in the last two decades. Utilizing Received Signal Strength Indicator (RSSI) fingerprints of Signals of OPportunity (SOP) is a promising alternative navigation solution. However, as RSSIs vary during operation due to their physical nature and are easily affected by environmental change, one challenge of the indoor fingerprinting method is maintaining the RSSI fingerprint database in a timely and effective manner. In this paper, a solution for rapidly updating the fingerprint database is presented, based on a self-developed Unmanned Ground Vehicle (UGV) platform, NAVIS. Several SOP sensors were installed on NAVIS for collecting indoor fingerprint information, including a digital compass collecting magnetic field intensity, a light sensor collecting light intensity, and a smartphone collecting the number of access points and the RSSIs of the pre-installed WiFi network. The NAVIS platform generates a map of the indoor environment and collects the SOPs during mapping, and the SOP fingerprint database is then interpolated and updated in real time. Field tests were carried out to evaluate the effectiveness and efficiency of the proposed method. The results showed that fingerprint databases can be quickly created and updated with a higher sampling frequency (5 Hz) and denser reference points compared with traditional methods, and that the indoor map can be generated without prior information. Moreover, environmental changes can also be detected quickly for fingerprint indoor positioning.
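    The underlying fingerprinting idea can be sketched with hypothetical data (this is not the NAVIS pipeline): reference points pair a position with an RSSI vector, the database is refreshed as the platform remaps, and a query is located by its nearest neighbour in signal space.

```python
# Minimal RSSI fingerprint database sketch with made-up values.
import math

fingerprints = []   # list of ((x, y), {ap_id: rssi_dBm})

def update(position, rssi):
    """Add or refresh a reference point collected during mapping."""
    fingerprints.append((position, rssi))

def locate(query):
    """Return the reference position closest to the query in signal space."""
    def dist(ref):
        aps = set(ref) | set(query)
        # Treat an unseen access point as a weak floor value (-100 dBm).
        return math.sqrt(sum((ref.get(a, -100.0) - query.get(a, -100.0)) ** 2
                             for a in aps))
    pos, _ = min(fingerprints, key=lambda fp: dist(fp[1]))
    return pos

update((0.0, 0.0), {"ap1": -40, "ap2": -70})
update((5.0, 0.0), {"ap1": -70, "ap2": -42})
print(locate({"ap1": -45, "ap2": -68}))   # -> (0.0, 0.0)
```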

  17. GenColors-based comparative genome databases for small eukaryotic genomes.

    PubMed

    Felder, Marius; Romualdi, Alessandro; Petzold, Andreas; Platzer, Matthias; Sühnel, Jürgen; Glöckner, Gernot

    2013-01-01

    Many sequence data repositories give a quick and easily accessible overview of genomes and their annotations. Less widespread is the possibility to compare related genomes with each other in a common database environment. We have previously described the GenColors database system (http://gencolors.fli-leibniz.de) and its applications to a number of bacterial genomes such as Borrelia, Legionella, Leptospira and Treponema. This system has an emphasis on genome comparison. It combines data from related genomes and provides the user with an extensive set of visualization and analysis tools. Eukaryotic genomes are normally larger than prokaryotic genomes and thus pose additional challenges for such a system. We have, therefore, adapted GenColors to also handle larger datasets of small eukaryotic genomes and to display eukaryotic gene structures. Further recent developments include whole genome views, genome list options and, for bacterial genome browsers, the display of horizontal gene transfer predictions. Two new GenColors-based databases for two fungal species (http://fgb.fli-leibniz.de) and for four social amoebas (http://sacgb.fli-leibniz.de) were set up. Both new resources open up a single entry point to related genomes for the amoebozoa and fungal research communities and other interested users. Comparative genomics approaches are greatly facilitated by these resources.

  18. Health in south-eastern Europe: a troubled past, an uncertain future.

    PubMed Central

    Rechel, Bernd; Schwalbe, Nina; McKee, Martin

    2004-01-01

    The political and economic turmoil that occurred in south-eastern Europe in the last decade of the twentieth century left a legacy of physical damage. This aspect of the conflict has received considerable coverage in the media. However, surprisingly little has been reported about the effects of that turmoil on the health of the people living in the region. In an attempt to identify and synthesize data on these effects, we carried out a systematic review and used the results to put together a searchable online database of documents, reports, and published material, the majority of which had not previously been easily accessible (http://www.lshtm.ac.uk/ecohost/see/index.php). The database covers the period from the early 1990s to 2003 and will be of considerable interest to policy-makers. It contains 762 items, many of them annotated and available for downloading. This paper synthesizes the main findings obtained from the material in the database and emphasizes the need for concerted action to improve the health of people in south-eastern Europe. Furthermore, we also recommend that agencies working in post-conflict situations should invest in developing and maintaining online databases that would be useful to others working in the area. PMID:15500286

  19. A novel database of bio-effects from non-ionizing radiation.

    PubMed

    Leach, Victor; Weller, Steven; Redmayne, Mary

    2018-06-06

    A significant amount of electromagnetic field/electromagnetic radiation (EMF/EMR) research is available that examines biological and disease-associated endpoints. The quantity, variety and changing parameters in the available research can be challenging when undertaking a literature review or meta-analysis, preparing a study design, building reference lists or comparing findings between relevant scientific papers. The Oceania Radiofrequency Scientific Advisory Association (ORSAA) has created a comprehensive, non-biased, multi-categorized, searchable database of papers on non-ionizing EMF/EMR to help address these challenges. It is regularly added to, freely accessible online and designed to allow data to be easily retrieved, sorted and analyzed. This paper demonstrates the content and search flexibility of the ORSAA database. Demonstration searches are presented by Effect/No Effect; frequency band; in vitro; in vivo; biological effects; study type; and funding source. As of 15 September 2017, the clear majority of the 2653 papers captured in the database examine outcomes in the 300 MHz-3 GHz range. There are three times more biological "Effect" than "No Effect" papers; nearly a third of papers provide no funding statement; industry-funded studies more often than not find "No Effect", while institutionally funded studies commonly report "Effects". The country in which a study is conducted or funded also appears to strongly influence the likely outcome.

  20. bpRNA: large-scale automated annotation and analysis of RNA secondary structure.

    PubMed

    Danaee, Padideh; Rouches, Mason; Wiley, Michelle; Deng, Dezhong; Huang, Liang; Hendrix, David

    2018-05-09

    While RNA secondary structure prediction from sequence data has made remarkable progress, there is a need for improved strategies for annotating the features of RNA secondary structures. Here, we present bpRNA, a novel annotation tool capable of parsing RNA structures, including complex pseudoknot-containing RNAs, to yield an objective, precise, compact, unambiguous, easily interpretable description of all loops, stems, and pseudoknots, along with the positions, sequence, and flanking base pairs of each such structural feature. We also introduce several new informative representations of RNA structure types to improve structure visualization and interpretation. We have further used bpRNA to generate a web-accessible meta-database, 'bpRNA-1m', of over 100 000 single-molecule, known secondary structures; this is both more fully and accurately annotated and over 20 times larger than existing databases. We use a subset of the database with highly similar (≥90% identical) sequences filtered out to report on statistical trends in sequence, flanking base pairs, and length. Both the bpRNA method and the bpRNA-1m database will be valuable resources, both for specific analysis of individual RNA molecules and for large-scale analyses, such as updating RNA energy parameters for computational thermodynamic predictions, improving machine learning models for structure prediction, and benchmarking structure-prediction algorithms.
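    As a toy sketch of what structural annotation involves (far simpler than bpRNA, which also handles pseudoknots), dot-bracket notation can be parsed with a stack to recover base pairs, and a pair enclosing only unpaired positions delimits a hairpin loop.

```python
# Dot-bracket parsing sketch: recover base pairs and find hairpin loops.
def base_pairs(dot_bracket):
    stack, pairs = [], []
    for i, c in enumerate(dot_bracket):
        if c == "(":
            stack.append(i)
        elif c == ")":
            pairs.append((stack.pop(), i))   # close pairs with last open
    return sorted(pairs)

def hairpin_loops(dot_bracket):
    """Loops whose enclosing pair contains only unpaired positions."""
    loops = []
    for i, j in base_pairs(dot_bracket):
        inner = dot_bracket[i + 1:j]
        if inner and set(inner) == {"."}:
            loops.append((i + 1, j - 1))
    return loops

s = "((..((....))..))"
print(base_pairs(s))      # [(0, 15), (1, 14), (4, 11), (5, 10)]
print(hairpin_loops(s))   # [(6, 9)]
```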

  1. A web Accessible Framework for Discovery, Visualization and Dissemination of Polar Data

    NASA Astrophysics Data System (ADS)

    Kirsch, P. J.; Breen, P.; Barnes, T. D.

    2007-12-01

    A web-accessible information framework, currently under development within the Physical Sciences Division of the British Antarctic Survey, is described. The datasets accessed are generally heterogeneous in nature, coming from fields including space physics, meteorology, atmospheric chemistry, ice physics, and oceanography. Many of these are returned in near real time over a 24/7 limited-bandwidth link from remote Antarctic stations and ships. The requirement is to provide various user groups - each with disparate interests and demands - a system incorporating a browsable and searchable catalogue, bespoke data summary visualization, metadata access facilities and download utilities. The system allows timely access to raw and processed datasets through an easily navigable discovery interface. Once discovered, a summary of a dataset can be visualized in a manner prescribed by the particular projects and user communities, or the dataset may be downloaded, subject to any accessibility restrictions that exist. In addition, access to related ancillary information, including software, documentation, related URLs and information concerning non-electronic media (of particular relevance to some legacy datasets), is made directly available, having automatically been associated with a dataset during the discovery phase. Major components of the framework include the relational database containing the catalogue; the organizational structure of the systems holding the data, which enables automatic updates of the system catalogue and real-time access to data; the user interface design; and the administrative and data management scripts allowing straightforward incorporation of utilities, datasets and system maintenance.

  2. A Data Management System for International Space Station Simulation Tools

    NASA Technical Reports Server (NTRS)

    Betts, Bradley J.; DelMundo, Rommel; Elcott, Sharif; McIntosh, Dawn; Niehaus, Brian; Papasin, Richard; Mah, Robert W.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Groups associated with the design, operational, and training aspects of the International Space Station make extensive use of modeling and simulation tools. Users of these tools often need to access and manipulate large quantities of data associated with the station, ranging from design documents to wiring diagrams. Retrieving and manipulating this data directly within the simulation and modeling environment can provide substantial benefit to users. An approach for providing these kinds of data management services, including a database schema and class structure, is presented. Implementation details are also provided as a data management system is integrated into the Intelligent Virtual Station, a modeling and simulation tool developed by the NASA Ames Smart Systems Research Laboratory. One use of the Intelligent Virtual Station is generating station-related training procedures in a virtual environment. The data management component allows users to quickly and easily retrieve information related to objects on the station, enhancing their ability to generate accurate procedures. Users can associate new information with objects and have that information stored in a database.
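    A hedged sketch of the kind of schema such a data management component might use follows; the table and column names are illustrative, not the Intelligent Virtual Station's actual schema.

```python
# Illustrative schema: station objects linked to documents, so users can
# retrieve and attach information about objects in the virtual station.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE station_object(
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL            -- e.g. a rack or panel in the model
    );
    CREATE TABLE document(
        id        INTEGER PRIMARY KEY,
        object_id INTEGER REFERENCES station_object(id),
        kind      TEXT,               -- 'wiring diagram', 'design doc', ...
        uri       TEXT                -- where the artifact actually lives
    );
""")
conn.execute("INSERT INTO station_object VALUES (1, 'Lab window panel')")
conn.execute("INSERT INTO document VALUES (1, 1, 'design doc', 'file:///dd.pdf')")
print(conn.execute("""
    SELECT o.name, d.kind, d.uri
    FROM station_object o JOIN document d ON d.object_id = o.id""").fetchall())
```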

  3. Using the STOQS Web Application for Access to in situ Oceanographic Data

    NASA Astrophysics Data System (ADS)

    McCann, M. P.

    2012-12-01

    With the increasing measurement and sampling capabilities of autonomous oceanographic platforms (e.g., gliders, autonomous underwater vehicles, Wavegliders), the need to efficiently access and visualize the data they collect is growing. The Monterey Bay Aquarium Research Institute has designed and built the Spatial Temporal Oceanographic Query System (STOQS) specifically to address this issue. The need for STOQS arises from inefficiencies encountered when using CF-NetCDF point observation conventions for these data. The problem is that access efficiency decreases with decreasing dimension of CF-NetCDF data. For example, the Trajectory Common Data Model feature type has only one coordinate dimension, usually Time - positions of the trajectory (Depth, Latitude, Longitude) are stored as non-indexed record variables within the NetCDF file. If client software needs to access data between two depth values or from a bounded geographic area, then the whole data set must be read and the selection made within the client software. This is very inefficient. What is needed is a way to easily select data of interest from an archive given any number of spatial, temporal, or other constraints. Geospatial relational database technology provides this capability. The full STOQS application consists of a Postgres/PostGIS database, Mapserver, and Python-Django running on a server, and Web 2.0 technology (jQuery, OpenLayers, Twitter Bootstrap) running in a modern web browser. The web application provides faceted search capabilities allowing a user to quickly drill into the data of interest. Data selection can be constrained by spatial, temporal, and depth selections as well as by parameter value and platform name. The web application layer also provides a REST (Representational State Transfer) Application Programming Interface, allowing tools such as the Matlab stoqstoolbox to retrieve data directly from the database. STOQS is an open source software project built upon a framework of free and open source software and is available for anyone to use for making their data more accessible and usable. For more information please see http://code.google.com/p/stoqs/. In an accompanying screen capture, a user has selected the "mass_concentration_of_chlorophyll_in_sea_water" parameter and a time-depth range that includes three weeks of AUV missions in just the upper 5 meters.
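    Retrieving constrained data through a STOQS-style REST interface might look like the sketch below; the host and query-parameter names are illustrative placeholders rather than documented STOQS routes.

```python
# Hedged sketch of a constrained REST query; BASE and the parameter names
# are placeholders, not the actual STOQS URL scheme.
import json
import urllib.parse
import urllib.request

BASE = "https://stoqs.example.org/campaign/api"   # hypothetical server

def query(parameter, mindepth, maxdepth):
    params = urllib.parse.urlencode({
        "parameter__name": parameter,   # constrain by parameter name
        "depth__gte": mindepth,         # and by a depth window
        "depth__lte": maxdepth,
    })
    with urllib.request.urlopen(f"{BASE}/measurement.json?{params}") as r:
        return json.load(r)

# e.g. query("mass_concentration_of_chlorophyll_in_sea_water", 0, 5)
```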

  4. The AMMA information system

    NASA Astrophysics Data System (ADS)

    Brissebrat, Guillaume; Fleury, Laurence; Boichard, Jean-Luc; Cloché, Sophie; Eymard, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim; Asencio, Nicole; Favot, Florence; Roussot, Odile

    2013-04-01

    The AMMA information system aims at expediting the communication of data and scientific results inside the AMMA community and beyond. It has already been adopted as the data management system by several projects and is meant to become a reference information system for the West Africa area for the whole scientific community. The AMMA database and the associated online tools have been developed and are managed by two French teams (IPSL Database Centre, Palaiseau and OMP Data Service, Toulouse). The complete system has been fully duplicated and is operated by the AGRHYMET Regional Centre in Niamey, Niger. The AMMA database contains a wide variety of datasets: about 250 local observation datasets covering geophysical components (atmosphere, ocean, soil, vegetation) and human activities (agronomy, health...), which come from either operational networks or scientific experiments and include historical data for West Africa from 1850; 1350 outputs of a socio-economic questionnaire; 60 operational satellite products and several research products; and 10 output sets of meteorological and ocean operational models and 15 of research simulations. Database users can access all the data through either the portal http://database.amma-international.org or http://amma.agrhymet.ne/amma-data. Different modules are available. The complete catalogue provides access to metadata (i.e., information about the datasets) that are compliant with international standards (ISO19115, INSPIRE...). Registration pages enable users to read and sign the data and publication policy and to apply for a user database account. The data access interface enables users to easily build a data extraction request by selecting various criteria such as location, time, and parameters. At present, the AMMA database counts more than 740 registered users and processes about 80 data requests every month. In order to monitor day-to-day meteorological and environmental information over West Africa, several quick-look and report display websites have been developed. They met the operational needs of the observational teams during the AMMA 2006 (http://aoc.amma-international.org) and FENNEC 2011 (http://fenoc.sedoo.fr) campaigns, but they also enable scientific teams to share physical indices along the monsoon season (http://misva.sedoo.fr, from 2011). A collaborative WIKINDX tool has been set online to manage scientific publications and communications of interest to AMMA (http://biblio.amma-international.org). The bibliographic database now counts about 1200 references and is the most exhaustive document collection about the African monsoon available to all. Every scientist is invited to make use of the different AMMA online tools and data. Scientists or project leaders who have data management needs for existing or future datasets over West Africa are welcome to use the AMMA database framework and to contact ammaAdmin@sedoo.fr.

  5. SU-F-T-231: Improving the Efficiency of a Radiotherapy Peer-Review System for Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, S; Basavatia, A; Garg, M

    Purpose: To improve the efficiency of a radiotherapy peer-review system using a commercially available software application for plan quality evaluation and documentation. Methods: A commercial application, FullAccess (Radialogica LLC, Version 1.4.4), was implemented in a Citrix platform for the peer-review process and patient documentation. This application can display images, isodose lines, and dose-volume histograms and create plan reports for the peer-review process. Dose metrics in the report can also be benchmarked for plan quality evaluation. Site-specific templates were generated based on departmental treatment planning policies and procedures for each disease site, which generally follow RTOG protocols as well as published prospective clinical trial data, including both conventional fractionation and hypo-fractionation schema. Once a plan is ready for review, the planner exports the plan to FullAccess, applies the site-specific template, and presents the report for plan review. The plan is still reviewed in the treatment planning system, as that is the legal record. Upon the physician's approval of a plan, the plan is packaged for peer review with the plan report, and dose metrics are saved to the database. Results: The reports show dose metrics of PTVs and critical organs for the plans and indicate whether or not the metrics are within tolerance. Graphical results with green, yellow, and red lights display whether planning objectives have been met. In addition, benchmarking statistics are collected to show where the current plan falls compared with all historical plans on each metric. All physicians in peer review can easily verify constraints using these reports. Conclusion: We have demonstrated an improved radiotherapy peer-review system, which allows physicians to easily verify planning constraints for different disease sites and fractionation schema, allows for standardization in the clinic to ensure that departmental policies are maintained, and builds a comprehensive database for potential clinical outcome evaluation.
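    Conceptually, the traffic-light benchmarking reduces to grading each dose metric against a pair of thresholds; the metric names and tolerance values below are hypothetical, not departmental policy.

```python
# Sketch of a green/yellow/red plan check with made-up tolerances.
def evaluate(metrics, tolerances):
    """Grade each dose metric against (objective, hard_limit)."""
    report = {}
    for name, value in metrics.items():
        goal, hard_limit = tolerances[name]
        if value <= goal:
            report[name] = "green"      # planning objective met
        elif value <= hard_limit:
            report[name] = "yellow"     # minor deviation
        else:
            report[name] = "red"        # constraint violated
    return report

tolerances = {"spinal_cord_Dmax_Gy": (45.0, 50.0),
              "lung_V20_pct": (30.0, 35.0)}
metrics = {"spinal_cord_Dmax_Gy": 44.1, "lung_V20_pct": 33.7}
print(evaluate(metrics, tolerances))
# {'spinal_cord_Dmax_Gy': 'green', 'lung_V20_pct': 'yellow'}
```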

  6. Rice Annotation Project Database (RAP-DB): an integrative and interactive database for rice genomics.

    PubMed

    Sakai, Hiroaki; Lee, Sung Shin; Tanaka, Tsuyoshi; Numa, Hisataka; Kim, Jungsok; Kawahara, Yoshihiro; Wakimoto, Hironobu; Yang, Ching-chia; Iwamoto, Masao; Abe, Takashi; Yamada, Yuko; Muto, Akira; Inokuchi, Hachiro; Ikemura, Toshimichi; Matsumoto, Takashi; Sasaki, Takuji; Itoh, Takeshi

    2013-02-01

    The Rice Annotation Project Database (RAP-DB, http://rapdb.dna.affrc.go.jp/) has been providing a comprehensive set of gene annotations for the genome sequence of rice, Oryza sativa (japonica group) cv. Nipponbare. Since the first release in 2005, RAP-DB has been updated several times along with the genome assembly updates. Here, we present our newest RAP-DB based on the latest genome assembly, Os-Nipponbare-Reference-IRGSP-1.0 (IRGSP-1.0), which was released in 2011. We detected 37,869 loci by mapping transcript and protein sequences of 150 monocot species. To provide plant researchers with highly reliable and up-to-date rice gene annotations, we have been incorporating literature-based manually curated data, and 1,626 loci currently incorporate literature-based annotation data, including commonly used gene names or gene symbols. Transcriptional activities are shown at the nucleotide level by mapping RNA-Seq reads derived from 27 samples. We also mapped the Illumina reads of a leading Japanese japonica cultivar, Koshihikari, and a Chinese indica cultivar, Guangluai-4, to the genome and show the alignments together with single nucleotide polymorphisms (SNPs) and gene functional annotations through a newly developed browser, the Short-Read Assembly Browser (S-RAB). We have developed two satellite databases, the Plant Gene Family Database (PGFD) and the Integrative Database of Cereal Gene Phylogeny (IDCGP), which display gene family and homologous gene relationships among diverse plant species. RAP-DB and the satellite databases offer simple and user-friendly web interfaces, enabling plant and genome researchers to access the data easily and facilitating a broad range of plant research topics.

  7. Cold Climate Foundation Retrofit Experimental Hygrothermal Performance: Cloquet Residential Research Facility Laboratory Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, Louise F.; Harmon, Anna C.

    2015-04-01

    Thermal and moisture problems in existing basements create a unique challenge because the exterior face of the wall is not easily or inexpensively accessible. This approach addresses thermal and moisture management from the interior face of the wall, without disturbing the exterior soil and landscaping, and has the potential to improve durability, comfort, and indoor air quality. This project was funded jointly by the National Renewable Energy Laboratory (NREL) and Oak Ridge National Laboratory (ORNL). ORNL focused on developing a full basement wall system experimental database to enable others to validate hygrothermal simulation codes. NREL focused on testing the moisture durability of practical basement wall interior insulation retrofit solutions for cold climates. The project has produced a physically credible and reliable long-term hygrothermal performance database for retrofit foundation wall insulation systems in zone 6 and 7 climates that are fully compliant with the performance criteria in the 2009 Minnesota Energy Code. The experimental data were configured into a standard format that can be published online and that is compatible with standard commercially available spreadsheet and database software.

  8. Lunar e-Library: Putting Space History to Work

    NASA Technical Reports Server (NTRS)

    McMahan, Tracy A.; Shea, Charlotte A.; Finckenor, Miria

    2006-01-01

    As NASA plans and implements the Vision for Space Exploration, managers, engineers, and scientists need historically important information that is readily available and easily accessed. The Lunar e-Library - a searchable collection of 1100 electronic (.PDF) documents - makes it easy to find critical technical data and lessons learned and puts space history knowledge into action. The Lunar e-Library, a DVD knowledge database, was developed by NASA to shorten research time and put knowledge at users' fingertips. Funded by NASA's Space Environments and Effects (SEE) Program headquartered at Marshall Space Flight Center (MSFC) and the MSFC Materials and Processes Laboratory, the goal of the Lunar e-Library effort was to identify key lessons learned from Apollo and other lunar programs and missions and to provide technical information from those programs in an easy-to-use format. The SEE Program began distributing the Lunar e-Library knowledge database in 2006. This paper describes the Lunar e-Library development process (including a description of the databases and resources used to acquire the documents) and the contents of the DVD product, demonstrates its usefulness with focused searches, and provides information on how to obtain this free resource.

  9. Biological and ecological traits of marine species

    PubMed Central

    Claus, Simon; Dekeyzer, Stefanie; Vandepitte, Leen; Tuama, Éamonn Ó; Lear, Dan; Tyler-Walters, Harvey

    2015-01-01

    This paper reviews the utility and availability of biological and ecological traits for marine species so as to prioritise the development of a world database on marine species traits. In addition, the ‘status’ of species for conservation, that is, whether they are introduced or invasive, of fishery or aquaculture interest, harmful, or used as an ecological indicator, was reviewed, because these attributes are of particular interest to society. Whereas traits are an enduring characteristic of a species and/or population, a species’ status may vary geographically and over time. Criteria for selecting traits were that they could be applied to most taxa, were easily available, and that their inclusion would result in new research and/or management applications. Numerical traits were favoured over categorical ones. Habitat was excluded, as it can be derived from a selection of these traits. Ten traits were prioritized for inclusion in the most comprehensive open access database on marine species (the World Register of Marine Species), namely taxonomic classification, environment, geography, depth, substratum, mobility, skeleton, diet, body size and reproduction. These traits and statuses are being added to the database, and new use cases may further subdivide and expand upon them. PMID:26312188

  10. 75 FR 18255 - Passenger Facility Charge Database System for Air Carrier Reporting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-09

    ... Facility Charge Database System for Air Carrier Reporting AGENCY: Federal Aviation Administration (FAA... the Passenger Facility Charge (PFC) database system to report PFC quarterly report information. In... developed a national PFC database system in order to more easily track the PFC program on a nationwide basis...

  11. UCSC genome browser: deep support for molecular biomedical research.

    PubMed

    Mangan, Mary E; Williams, Jennifer M; Lathe, Scott M; Karolchik, Donna; Lathe, Warren C

    2008-01-01

    The volume and complexity of genomic sequence data, and the additional experimental data required for annotation of the genomic context, pose a major challenge for display and access for biomedical researchers. Genome browsers organize these data and make them available in various ways to extract useful information to advance research projects. The UCSC Genome Browser is one of these resources. The official sequence data for a given species form the framework to display many other types of data, such as expression, variation, cross-species comparisons, and more. Visual representations of the data are available for exploration. Data can be queried with sequences. Complex database queries are also easily achieved with the Table Browser interface. Associated tools permit additional query types or access to additional data sources, such as images of in situ localizations. Support for solving researchers' issues is provided through active discussion mailing lists and updated training materials. The UCSC Genome Browser provides a source of deep support for a wide range of biomedical molecular research (http://genome.ucsc.edu).

  12. WormQTL—public archive and analysis web portal for natural variation data in Caenorhabditis spp

    PubMed Central

    Snoek, L. Basten; Van der Velde, K. Joeri; Arends, Danny; Li, Yang; Beyer, Antje; Elvin, Mark; Fisher, Jasmin; Hajnal, Alex; Hengartner, Michael O.; Poulin, Gino B.; Rodriguez, Miriam; Schmid, Tobias; Schrimpf, Sabine; Xue, Feng; Jansen, Ritsert C.; Kammenga, Jan E.; Swertz, Morris A.

    2013-01-01

    Here, we present WormQTL (http://www.wormqtl.org), an easily accessible database enabling search, comparative analysis and meta-analysis of all data on variation in Caenorhabditis spp. Over the past decade, Caenorhabditis elegans has become instrumental for molecular quantitative genetics and the systems biology of natural variation. These efforts have resulted in a wealth of phenotypic, high-throughput molecular and genotypic data across different developmental worm stages and environments in hundreds of C. elegans strains. WormQTL provides a workbench of analysis tools for genotype–phenotype linkage and association mapping based on, but not limited to, R/qtl (http://www.rqtl.org). All data can be uploaded and downloaded using simple delimited text or Excel formats and are accessible via a public web user interface for biologists and via R statistical and web service interfaces for bioinformaticians, based on open source MOLGENIS and xQTL workbench software. WormQTL welcomes data submissions from other worm researchers. PMID:23180786

  13. WormQTL--public archive and analysis web portal for natural variation data in Caenorhabditis spp.

    PubMed

    Snoek, L Basten; Van der Velde, K Joeri; Arends, Danny; Li, Yang; Beyer, Antje; Elvin, Mark; Fisher, Jasmin; Hajnal, Alex; Hengartner, Michael O; Poulin, Gino B; Rodriguez, Miriam; Schmid, Tobias; Schrimpf, Sabine; Xue, Feng; Jansen, Ritsert C; Kammenga, Jan E; Swertz, Morris A

    2013-01-01

    Here, we present WormQTL (http://www.wormqtl.org), an easily accessible database enabling search, comparative analysis and meta-analysis of all data on variation in Caenorhabditis spp. Over the past decade, Caenorhabditis elegans has become instrumental for molecular quantitative genetics and the systems biology of natural variation. These efforts have resulted in a wealth of phenotypic, high-throughput molecular and genotypic data across different developmental worm stages and environments in hundreds of C. elegans strains. WormQTL provides a workbench of analysis tools for genotype-phenotype linkage and association mapping based on, but not limited to, R/qtl (http://www.rqtl.org). All data can be uploaded and downloaded using simple delimited text or Excel formats and are accessible via a public web user interface for biologists and via R statistical and web service interfaces for bioinformaticians, based on open source MOLGENIS and xQTL workbench software. WormQTL welcomes data submissions from other worm researchers.

  14. Internet (WWW) based system of ultrasonic image processing tools for remote image analysis.

    PubMed

    Zeng, Hong; Fei, Ding-Yu; Fu, Cai-Ting; Kraft, Kenneth A

    2003-07-01

    Ultrasonic Doppler color imaging can provide anatomic information and simultaneously render flow information within blood vessels for diagnostic purposes. Many researchers are currently developing ultrasound image processing algorithms in order to provide physicians with accurate clinical parameters from the images. Because researchers use a variety of computer languages and work on different computer platforms to implement their algorithms, it is difficult for other researchers and physicians to access those programs. A system has been developed using World Wide Web (WWW) technologies and HTTP communication protocols to publish our ultrasonic Angle Independent Doppler Color Image (AIDCI) processing algorithm and several general measurement tools on the Internet, where authorized researchers and physicians can easily access the program using web browsers to carry out remote analysis of their local ultrasonic images or of images provided from the database. In order to overcome potential incompatibility between programs and users' computer platforms, ActiveX technology was used in this project. The technique developed may also be used in other research fields.

  15. The NASA NEESPI Data Portal: Products, Information, and Services

    NASA Technical Reports Server (NTRS)

    Shen, Suhung; Leptoukh, Gregory; Loboda, Tatiana; Csiszar, Ivan; Romanov, Peter; Gerasimov, Irina

    2008-01-01

    Studies have indicated that land cover and land use changes in Northern Eurasia influence the global climate system. However, the processes are not fully understood, and it is challenging to understand the interactions between land changes in this region and the global climate. Integrated data collections from multiple disciplines are important for studies of climate and environmental change. Remotely sensed and model data are particularly important due to sparse in situ measurements in many Eurasian regions, especially Siberia. The NASA GES DISC (Goddard Earth Sciences Data and Information Services Center) NEESPI data portal has built the infrastructure to provide satellite remote sensing and numerical model data for the atmosphere, land surface, and cryosphere. Data searching, subsetting, and downloading functions are available. One useful tool is the Web-based online data analysis and visualization system, Giovanni (Goddard Interactive Online Visualization ANd aNalysis Infrastructure), which allows scientists to easily assess the state and dynamics of terrestrial ecosystems in Northern Eurasia and their interactions with the global climate system. Recently, we have created a metadata database prototype to expand the NASA NEESPI data portal, providing a venue for NEESPI scientists to find the desired data easily and leveraging data sharing within NEESPI projects. The database provides product-level information. The desired data can be found through navigation and free-text search and narrowed down by filtering with a number of constraints. In addition, we have developed a Web Map Service (WMS) prototype to allow access to data and images from different data resources.

  16. Selective access and editing in a database

    NASA Technical Reports Server (NTRS)

    Maluf, David A. (Inventor); Gawdiak, Yuri O. (Inventor)

    2010-01-01

    Method and system for providing selective access to different portions of a database by different subgroups of database users. Where N users are involved, up to 2^N - 1 distinguishable access subgroups in a group space can be formed, where no two access subgroups have the same members. Two or more members of a given access subgroup can edit, substantially simultaneously, a document accessible to each member.
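    The 2^N - 1 bound simply counts the non-empty subsets of N users, since a subgroup is determined by its membership; the short enumeration below confirms it.

```python
# Every non-empty subset of users is a possible access subgroup, and no two
# subgroups share the same membership, giving 2^N - 1 of them.
from itertools import combinations

def access_subgroups(users):
    """All distinguishable (non-empty, distinct-membership) subgroups."""
    return [set(c) for r in range(1, len(users) + 1)
            for c in combinations(users, r)]

users = ["ann", "bo", "cy"]
groups = access_subgroups(users)
print(len(groups), "==", 2 ** len(users) - 1)   # 7 == 7
```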

  17. Publishing FAIR Data: An Exemplar Methodology Utilizing PHI-Base.

    PubMed

    Rodríguez-Iglesias, Alejandro; Rodríguez-González, Alejandro; Irvine, Alistair G; Sesma, Ane; Urban, Martin; Hammond-Kosack, Kim E; Wilkinson, Mark D

    2016-01-01

    Pathogen-Host interaction data is core to our understanding of disease processes and their molecular/genetic bases. Facile access to such core data is particularly important for the plant sciences, where individual genetic and phenotypic observations have the added complexity of being dispersed over a wide diversity of plant species vs. the relatively fewer host species of interest to biomedical researchers. Recently, an international initiative interested in scholarly data publishing proposed that all scientific data should be "FAIR": Findable, Accessible, Interoperable, and Reusable. In this work, we describe the process of migrating a database of notable relevance to the plant sciences, the Pathogen-Host Interaction Database (PHI-base), to a form that conforms to each of the FAIR Principles. We discuss the technical and architectural decisions and the migration pathway, including observations of the difficulty and/or fidelity of each step. We examine how multiple FAIR principles can be addressed simultaneously through careful design decisions, including making data FAIR for both humans and machines with minimal duplication of effort. We note how FAIR data publishing involves more than data reformatting, requiring features beyond those exhibited by most life science Semantic Web or Linked Data resources. We explore the value added by completing this FAIR data transformation, and then test the result through integrative questions that could not easily be asked over traditional Web-based data resources. Finally, we demonstrate the utility of providing explicit and reliable access to provenance information, which we argue enhances citation rates by encouraging and facilitating transparent scholarly reuse of these valuable data holdings.

  18. Publishing FAIR Data: An Exemplar Methodology Utilizing PHI-Base

    PubMed Central

    Rodríguez-Iglesias, Alejandro; Rodríguez-González, Alejandro; Irvine, Alistair G.; Sesma, Ane; Urban, Martin; Hammond-Kosack, Kim E.; Wilkinson, Mark D.

    2016-01-01

    Pathogen-Host interaction data is core to our understanding of disease processes and their molecular/genetic bases. Facile access to such core data is particularly important for the plant sciences, where individual genetic and phenotypic observations have the added complexity of being dispersed over a wide diversity of plant species vs. the relatively fewer host species of interest to biomedical researchers. Recently, an international initiative interested in scholarly data publishing proposed that all scientific data should be “FAIR”—Findable, Accessible, Interoperable, and Reusable. In this work, we describe the process of migrating a database of notable relevance to the plant sciences—the Pathogen-Host Interaction Database (PHI-base)—to a form that conforms to each of the FAIR Principles. We discuss the technical and architectural decisions, and the migration pathway, including observations of the difficulty and/or fidelity of each step. We examine how multiple FAIR principles can be addressed simultaneously through careful design decisions, including making data FAIR for both humans and machines with minimal duplication of effort. We note how FAIR data publishing involves more than data reformatting, requiring features beyond those exhibited by most life science Semantic Web or Linked Data resources. We explore the value-added by completing this FAIR data transformation, and then test the result through integrative questions that could not easily be asked over traditional Web-based data resources. Finally, we demonstrate the utility of providing explicit and reliable access to provenance information, which we argue enhances citation rates by encouraging and facilitating transparent scholarly reuse of these valuable data holdings. PMID:27433158

  19. Onco-STS: a web-based laboratory information management system for sample and analysis tracking in oncogenomic experiments.

    PubMed

    Gavrielides, Mike; Furney, Simon J; Yates, Tim; Miller, Crispin J; Marais, Richard

    2014-01-01

    Whole genomes, whole exomes and transcriptomes of tumour samples are sequenced routinely to identify the drivers of cancer. The systematic sequencing and analysis of tumour samples, as well as other oncogenomic experiments, necessitates the tracking of relevant sample information throughout the investigative process. These metadata of the sequencing and analysis procedures include information about the samples and projects as well as the sequencing centres, platforms, data locations, results locations, alignments, analysis specifications and further information relevant to the experiments. The current work presents a sample tracking system for oncogenomic studies (Onco-STS) to store these data and make them easily accessible to the researchers who work with the samples. The system is a web application, which includes a database and a front-end web page that allows the remote access, submission and updating of the sample data in the database. The web application development framework Grails was used for the development and implementation of the system. The resulting Onco-STS solution is efficient, secure and easy to use and is intended to replace the manual handling of text records. Onco-STS allows simultaneous remote access to the system, making collaboration among researchers more effective. The system stores both information on the samples in oncogenomic studies and details of the analyses conducted on the resulting data. Onco-STS is based on open-source software, is easy to develop and can be modified according to a research group's needs. Hence it is suitable for laboratories that do not require a commercial system.

  20. CoryneRegNet 4.0 – A reference database for corynebacterial gene regulatory networks

    PubMed Central

    Baumbach, Jan

    2007-01-01

    Background Detailed information on DNA-binding transcription factors (the key players in the regulation of gene expression) and on transcriptional regulatory interactions of microorganisms, deduced from literature-derived knowledge, computer predictions and global DNA microarray hybridization experiments, has opened the way for the genome-wide analysis of transcriptional regulatory networks. The large-scale reconstruction of these networks allows the in silico analysis of cell behavior in response to changing environmental conditions. We previously published CoryneRegNet, an ontology-based data warehouse of corynebacterial transcription factors and regulatory networks. Initially, it was designed to provide methods for the analysis and visualization of the gene regulatory network of Corynebacterium glutamicum. Results Now we introduce CoryneRegNet release 4.0, which integrates data on the gene regulatory networks of 4 corynebacteria, 2 mycobacteria and the model organism Escherichia coli K12. As in previous versions, CoryneRegNet provides a web-based user interface to access the database content, to allow various queries, and to support the reconstruction, analysis and visualization of regulatory networks at different hierarchical levels. In this article, we present the further improved database content of CoryneRegNet along with novel analysis features. The network visualization feature GraphVis now allows inter-species comparisons of reconstructed gene regulatory networks and the projection of gene expression levels onto these networks. Therefore, we added stimulon data directly into the database, but also provide Web Service access to the DNA microarray analysis platform EMMA. Additionally, CoryneRegNet now provides a SOAP-based Web Service server, which can easily be consumed by other bioinformatics software systems. Stimulons (imported from the database, or uploaded by the user) can be analyzed in the context of known transcriptional regulatory networks to predict putative contradictions or further gene regulatory interactions. Furthermore, it integrates protein clusters by means of heuristically solving the weighted graph cluster editing problem. In addition, it provides Web Service based access to up-to-date gene annotation data from GenDB. Conclusion Release 4.0 of CoryneRegNet is a comprehensive system for the integrated analysis of prokaryotic gene regulatory networks. It is a versatile systems biology platform to support the efficient and large-scale analysis of transcriptional regulation of gene expression in microorganisms. It is publicly available. PMID:17986320

  1. EML, VEGA, ODM, LTER, GLEON - considerations and technologies for building a buoy information system at an LTER site

    NASA Astrophysics Data System (ADS)

    Gries, C.; Winslow, L.; Shin, P.; Hanson, P. C.; Barseghian, D.

    2010-12-01

    At the North Temperate Lakes Long Term Ecological Research (NTL LTER) site, six buoys and one meteorological station are maintained, each equipped with up to 20 sensors producing up to 45 separate data streams at 1- or 10-minute frequency. Traditionally, this data volume has been managed in many matrix-type tables, each described in the Ecological Metadata Language (EML) and accessed online by a query system based on the provided metadata. To develop a more flexible information system, several technologies are currently being experimented with. We will review, compare and evaluate these technologies and discuss the constraints and advantages of network memberships and the implementation of standards. A Data Turbine server is employed to stream data from data logger files into a database, with the Real-time Data Viewer used to monitor sensor health. The Kepler workflow processor is being explored to introduce quality-control routines into this data stream, taking advantage of the Data Turbine actor; Kepler could replace traditional database triggers while adding visualization and advanced data access functionality for downstream modeling or other analytical applications, as sketched below. The data are currently streamed into the traditional matrix-type tables and into an Observation Data Model (ODM) following the CUAHSI ODM 1.1 specifications. In parallel, these sensor data are managed within the Global Lake Ecological Observatory Network (GLEON), where the software package Ziggy streams the data into a database using the VEGA data model. Contributing data to a network implies compliance with established standards for data delivery and data documentation. ODM- or VEGA-type data models are not easily described in EML, the metadata exchange standard for LTER sites, but provide many advantages from an archival standpoint. Both GLEON and CUAHSI have developed advanced data access capabilities based on their respective data models and data exchange standards, while LTER is currently in a phase of intense technology development that will eventually provide standardized data access covering ecological data set types not yet addressed by either ODM or VEGA.
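    One such quality-control routine can be as simple as a range check applied to each sample before it reaches the database; the sketch below uses hypothetical thresholds, and a real workflow would combine several such tests.

```python
# Toy in-stream quality-control step of the kind a Kepler workflow could
# apply between the Data Turbine feed and the database; thresholds here are
# hypothetical placeholders.
def qc_stream(samples, lo, hi):
    """Yield (timestamp, value, flag) after a simple range check."""
    for ts, value in samples:
        flag = "ok" if lo <= value <= hi else "suspect"
        yield ts, value, flag

water_temp_c = [("2010-07-01T00:00", 21.4),
                ("2010-07-01T00:01", 21.5),
                ("2010-07-01T00:02", 98.6)]   # sensor glitch
for row in qc_stream(water_temp_c, lo=-2.0, hi=40.0):
    print(row)
```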

  2. Doing Your Science While You're in Orbit

    NASA Astrophysics Data System (ADS)

    Green, Mark L.; Miller, Stephen D.; Vazhkudai, Sudharshan S.; Trater, James R.

    2010-11-01

    Large-scale neutron facilities such as the Spallation Neutron Source (SNS) located at Oak Ridge National Laboratory need easy-to-use access to Department of Energy Leadership Computing Facilities and experiment repository data. The Orbiter thick- and thin-clients and their supporting Service Oriented Architecture (SOA) based services (available at https://orbiter.sns.gov) consist of standards-based components that are reusable and extensible for accessing high performance computing, data and computational grid infrastructure, and cluster-based resources easily from a user-configurable interface. The primary Orbiter system goals consist of (1) developing infrastructure for the creation and automation of virtual instrumentation experiment optimization, (2) developing user interfaces for thin- and thick-client access, (3) providing a prototype incorporating major instrument simulation packages, and (4) facilitating neutron science community access and collaboration. Secure Orbiter SOA authentication and authorization is achieved through the developed Virtual File System (VFS) services, which use Role-Based Access Control (RBAC) for data repository file access, thin- and thick-client functionality and application access, and computational job workflow management. The VFS Relational Database Management System (RDBMS) consists of approximately 45 database tables describing 498 user accounts with 495 groups over 432,000 directories with 904,077 repository files. Over 59 million NeXus file metadata records are associated with the 12,800 unique NeXus file field/class names generated from the 52,824 repository NeXus files. Services that enable (a) summary dashboards of data repository status with Quality of Service (QoS) metrics, (b) full-text search of data repository NeXus file field/class names within a Google-like interface, (c) a fully functional RBAC browser for the read-only data repository and shared areas, (d) user/group-defined and shared metadata for data repository files, and (e) user, group, repository, and Web 2.0-based global positioning with additional service capabilities are currently available. The SNS-based Orbiter SOA integration progress with the Distributed Data Analysis for Neutron Scattering Experiments (DANSE) software development project is summarized, with an emphasis on DANSE Central Services and the Virtual Neutron Facility (VNF). Additionally, the DANSE utilization of the Orbiter SOA authentication, authorization, and data transfer services best-practice implementations is presented.

  3. Enhancing SAMOS Data Access in DOMS via a Neo4j Property Graph Database.

    NASA Astrophysics Data System (ADS)

    Stallard, A. P.; Smith, S. R.; Elya, J. L.

    2016-12-01

    The Shipboard Automated Meteorological and Oceanographic System (SAMOS) initiative provides routine access to high-quality marine meteorological and near-surface oceanographic observations from research vessels. The Distributed Oceanographic Match-Up Service (DOMS) under development is a centralized service that allows researchers to easily match in situ and satellite oceanographic data from distributed sources to facilitate satellite calibration, validation, and retrieval algorithm development. The service currently uses Apache Solr as a backend search engine on each node in the distributed network. While Solr is a high-performance solution that facilitates creation and maintenance of indexed data, it is limited in the sense that its schema is fixed. The property graph model escapes this limitation by creating relationships between data objects. The authors will present the development of the SAMOS Neo4j property graph database, including new search possibilities that take advantage of the property graph model, performance comparisons with Apache Solr, and a vision for graph databases as a storage tool for oceanographic data. The integration of the SAMOS Neo4j graph into DOMS will also be described. Currently, Neo4j contains spatial and temporal records from SAMOS, which are modeled into a time tree and an R-tree using the GraphAware and Spatial plugin tools for Neo4j. These extensions provide callable Java procedures within Cypher (Neo4j's query language) that generate in-graph structures. Once generated, these structures can be queried using procedures from these libraries, or directly via Cypher statements. Neo4j excels at performing relationship and path-based queries, which challenge relational SQL databases because their design requires memory-intensive joins. Consider a user who wants to find records over several years, but only for specific months. If a traditional database only stores timestamps, this type of query would be complex and likely prohibitively slow. Using the time tree model, one can specify a path from the root to the data which restricts resolution to certain timeframes (e.g., months). This query can be executed without joins, unions, or other compute-intensive operations, putting Neo4j at a computational advantage over the SQL database alternative.
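
    The month-restricted traversal described in this record can be sketched with the official Neo4j Python driver. The labels and relationship types below (Year, Month, Record, HAS_MONTH, OBSERVED_IN) are hypothetical stand-ins, since the actual SAMOS graph schema is not reproduced here; the point is the shape of the query, which walks the time tree instead of filtering raw timestamps.

    # Minimal sketch, assuming a hypothetical time-tree schema
    # (Year/Month/Record nodes); not the actual SAMOS graph model.
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "password"))

    CYPHER = """
    MATCH (y:Year)-[:HAS_MONTH]->(m:Month {value: $month})
          <-[:OBSERVED_IN]-(r:Record)
    WHERE y.value IN $years
    RETURN r
    """

    def records_for_month(years, month):
        # Walk the time tree from year nodes to one month per year,
        # reaching records without any join over raw timestamps.
        with driver.session() as session:
            return [rec["r"] for rec in session.run(CYPHER, years=years, month=month)]

    print(records_for_month([2014, 2015, 2016], 6))  # all June records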

  4. Challenges in Database Design with Microsoft Access

    ERIC Educational Resources Information Center

    Letkowski, Jerzy

    2014-01-01

    Design, development and exploration of databases are popular topics covered in introductory courses taught at business schools. Microsoft Access is the most popular software used in those courses. Despite the relatively high complexity of Access, it is considered one of the most beginner-friendly database programs. A typical Access textbook…

  5. Ant-App-DB: a smart solution for monitoring arthropods activities, experimental data management and solar calculations without GPS in behavioral field studies.

    PubMed

    Ahmed, Zeeshan; Zeeshan, Saman; Fleischmann, Pauline; Rössler, Wolfgang; Dandekar, Thomas

    2014-01-01

    Field studies on arthropod ecology and behaviour require simple and robust monitoring tools, preferably with direct access to an integrated database. We have developed, and here present, a database tool allowing smartphone-based monitoring of arthropods. This smartphone application provides an easy way to collect, manage and process data in the field, a task that has been very difficult for field biologists using traditional methods. To monitor our example species, the desert ant Cataglyphis fortis, we considered behavior, nest search runs, feeding habits and path segmentations, including detailed information on solar position and azimuth calculation, ant orientation and time of day. For this we established a user-friendly database system integrating the Ant-App-DB with a smartphone and tablet application, combining experimental data manipulation with data management and providing solar position and timing estimations without any GPS or GIS system. Moreover, the new desktop application Dataplus allows efficient data extraction and conversion from the smartphone application to personal computers for further ecological data analysis and sharing. All features, software code and the database, as well as the Dataplus application, are made available completely free of charge and are sufficiently generic to be easily adapted to other field monitoring studies on arthropods or other migratory organisms. The software applications Ant-App-DB and Dataplus described here are developed using the Android SDK, Java, XML, C# and the SQLite database.

  6. Fast Fingerprint Database Maintenance for Indoor Positioning Based on UGV SLAM

    PubMed Central

    Tang, Jian; Chen, Yuwei; Chen, Liang; Liu, Jingbin; Hyyppä, Juha; Kukko, Antero; Kaartinen, Harri; Hyyppä, Hannu; Chen, Ruizhi

    2015-01-01

    Indoor positioning technology has become more and more important in the last two decades. Utilizing Received Signal Strength Indicator (RSSI) fingerprints of Signals of OPportunity (SOP) is a promising alternative navigation solution. However, as RSSIs vary during operation due to their physical nature and are easily affected by environmental changes, one challenge of the indoor fingerprinting method is maintaining the RSSI fingerprint database in a timely and effective manner. In this paper, a solution for rapidly updating the fingerprint database is presented, based on a self-developed Unmanned Ground Vehicle (UGV) platform, NAVIS. Several SOP sensors were installed on NAVIS for collecting indoor fingerprint information, including a digital compass collecting magnetic field intensity, a light sensor collecting light intensity, and a smartphone collecting the access point number and RSSIs of the pre-installed WiFi network. The NAVIS platform generates a map of the indoor environment and collects the SOPs during mapping, and the SOP fingerprint database is then interpolated and updated in real time. Field tests were carried out to evaluate the effectiveness and efficiency of the proposed method. The results showed that the fingerprint databases can be quickly created and updated with a higher sampling frequency (5 Hz) and denser reference points compared with traditional methods, and that the indoor map can be generated without prior information. Moreover, environmental changes could also be detected quickly for fingerprint indoor positioning. PMID:25746096
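
    An RSSI fingerprint database like the one above is typically consumed by k-nearest-neighbour matching in signal space. The sketch below is a generic illustration of that positioning step, not the NAVIS pipeline; the reference points and RSSI values are invented.

    # Generic WiFi fingerprint positioning sketch (unweighted KNN);
    # reference points and RSSI values are invented for illustration.
    import math

    # fingerprint database: reference point (x, y) -> {AP id: mean RSSI in dBm}
    FINGERPRINTS = {
        (0.0, 0.0): {"ap1": -40, "ap2": -70, "ap3": -60},
        (5.0, 0.0): {"ap1": -55, "ap2": -50, "ap3": -65},
        (0.0, 5.0): {"ap1": -65, "ap2": -60, "ap3": -45},
    }

    def signal_distance(observed, reference):
        # Euclidean distance in RSSI space over the APs seen in both.
        common = observed.keys() & reference.keys()
        return math.sqrt(sum((observed[ap] - reference[ap]) ** 2 for ap in common))

    def locate(observed, k=2):
        # Average the k reference points closest in signal space.
        ranked = sorted(FINGERPRINTS,
                        key=lambda p: signal_distance(observed, FINGERPRINTS[p]))
        nearest = ranked[:k]
        return (sum(p[0] for p in nearest) / k, sum(p[1] for p in nearest) / k)

    print(locate({"ap1": -50, "ap2": -55, "ap3": -63}))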

  7. Ant-App-DB: a smart solution for monitoring arthropods activities, experimental data management and solar calculations without GPS in behavioral field studies

    PubMed Central

    Ahmed, Zeeshan; Zeeshan, Saman; Fleischmann, Pauline; Rössler, Wolfgang; Dandekar, Thomas

    2015-01-01

    Field studies on arthropod ecology and behaviour require simple and robust monitoring tools, preferably with direct access to an integrated database. We have developed, and here present, a database tool allowing smartphone-based monitoring of arthropods. This smartphone application provides an easy way to collect, manage and process data in the field, a task that has been very difficult for field biologists using traditional methods. To monitor our example species, the desert ant Cataglyphis fortis, we considered behavior, nest search runs, feeding habits and path segmentations, including detailed information on solar position and azimuth calculation, ant orientation and time of day. For this we established a user-friendly database system integrating the Ant-App-DB with a smartphone and tablet application, combining experimental data manipulation with data management and providing solar position and timing estimations without any GPS or GIS system. Moreover, the new desktop application Dataplus allows efficient data extraction and conversion from the smartphone application to personal computers for further ecological data analysis and sharing. All features, software code and the database, as well as the Dataplus application, are made available completely free of charge and are sufficiently generic to be easily adapted to other field monitoring studies on arthropods or other migratory organisms. The software applications Ant-App-DB and Dataplus described here are developed using the Android SDK, Java, XML, C# and the SQLite database. PMID:25977753

  8. pvsR: An Open Source Interface to Big Data on the American Political Sphere.

    PubMed

    Matter, Ulrich; Stutzer, Alois

    2015-01-01

    Digital data from the political sphere is abundant, omnipresent, and more and more directly accessible through the Internet. Project Vote Smart (PVS) is a prominent example of this big public data and covers various aspects of U.S. politics in astonishing detail. Despite the vast potential of PVS' data for political science, economics, and sociology, it is hardly used in empirical research. The systematic compilation of semi-structured data can be complicated and time-consuming as the data format is not designed for conventional scientific research. This paper presents a new tool that makes the data easily accessible to a broad scientific community. We provide the software called pvsR as an add-on to the R programming environment for statistical computing. This open-source interface (OSI) serves as a direct link between a statistical analysis and the large PVS database. The free and open code is expected to substantially reduce the cost of research with PVS' new big public data in a vast variety of possible applications. We discuss its advantages vis-à-vis traditional methods of data generation as well as already existing interfaces. The validity of the library is documented with an illustration involving female representation in local politics. In addition, pvsR facilitates the replication of research with PVS data at low cost, including the pre-processing of data. Similar OSIs are recommended for other big public databases.

  9. Availability and utility of crop composition data.

    PubMed

    Kitta, Kazumi

    2013-09-04

    The safety assessment of genetically modified (GM) crops is mandatory in many countries. Although the most important factor to take into account in these safety assessments is the primary effects of artificially introduced transgene-derived traits, possible unintended effects attributed to the insertion of transgenes must be carefully examined in parallel. However, foods are complex mixtures of compounds characterized by wide variations in composition and nutritional values. Food components are significantly affected by various factors such as cultivars and the cultivation environment including storage conditions after harvest, and it can thus be very difficult to detect potential adverse effects caused by the introduction of a transgene. A comparative approach focusing on the identification of differences between GM foods and their conventional counterparts has been performed to reveal potential safety issues and is considered the most appropriate strategy for the safety assessment of GM foods. This concept is widely shared by authorities in many countries. For the efficient safety assessment of GM crops, an easily accessible and wide-ranging compilation of crop composition data is required for use by researchers and regulatory agencies. Thus, we developed an Internet-accessible food composition database comprising key nutrients, antinutrients, endogenous toxicants, and physiologically active substances of staple crops such as rice and soybeans. The International Life Sciences Institute has also been addressing the same matter and has provided the public a crop composition database of soybeans, maize, and cotton.

  10. mtDNAmanager: a Web-based tool for the management and quality analysis of mitochondrial DNA control-region sequences

    PubMed Central

    Lee, Hwan Young; Song, Injee; Ha, Eunho; Cho, Sung-Bae; Yang, Woo Ick; Shin, Kyoung-Jin

    2008-01-01

    Background For the past few years, scientific controversy has surrounded the large number of errors in forensic and literature mitochondrial DNA (mtDNA) data. However, recent research has shown that using mtDNA phylogeny and referring to known mtDNA haplotypes can be useful for checking the quality of sequence data. Results We developed a Web-based bioinformatics resource "mtDNAmanager" that offers a convenient interface supporting the management and quality analysis of mtDNA sequence data. The mtDNAmanager performs computations on mtDNA control-region sequences to estimate the most-probable mtDNA haplogroups and retrieves similar sequences from a selected database. By the phased designation of the most-probable haplogroups (both expected and estimated haplogroups), mtDNAmanager enables users to systematically detect errors whilst allowing for confirmation of the presence of clear key diagnostic mutations and accompanying mutations. The query tools of mtDNAmanager also facilitate database screening with two options of "match" and "include the queried nucleotide polymorphism". In addition, mtDNAmanager provides Web interfaces for users to manage and analyse their own data in batch mode. Conclusion The mtDNAmanager will provide systematic routines for mtDNA sequence data management and analysis via easily accessible Web interfaces, and thus should be very useful for population, medical and forensic studies that employ mtDNA analysis. mtDNAmanager can be accessed at . PMID:19014619

  11. MAGA, a new database of gas natural emissions: a collaborative web environment for collecting data.

    NASA Astrophysics Data System (ADS)

    Cardellini, Carlo; Chiodini, Giovanni; Frigeri, Alessandro; Bagnato, Emanuela; Frondini, Francesco; Aiuppa, Alessandro

    2014-05-01

    The data on volcanic and non-volcanic gas emissions available online are, as of today, incomplete and, most importantly, fragmentary. Hence, there is a need for common frameworks to aggregate available data, in order to characterize and quantify the phenomena at various scales. A new and detailed web database (MAGA: MApping GAs emissions) has been developed, and recently improved, to collect data on carbon degassing from volcanic and non-volcanic environments. The MAGA database allows researchers to insert data interactively and dynamically into a spatially referenced relational database management system, as well as to extract data. MAGA kicked off with the database set-up and the ingestion of data from: i) a literature survey of publications on volcanic gas fluxes, including data on active crater degassing, diffuse soil degassing and fumaroles both from dormant closed-conduit volcanoes (e.g., Vulcano, Phlegrean Fields, Santorini, Nisyros, Teide, etc.) and open-vent volcanoes (e.g., Etna, Stromboli, etc.) in the Mediterranean area and the Azores, and ii) the revision and update of the Googas database on non-volcanic emissions of the Italian territory (Chiodini et al., 2008), in the framework of the Deep Earth Carbon Degassing (DECADE) research initiative of the Deep Carbon Observatory (DCO). For each geo-located gas emission site, the database holds images and descriptions of the site and of the emission type (e.g., diffuse emission, plume, fumarole, etc.), gas chemical-isotopic composition (when available), gas temperature and gas flux magnitude. Gas sampling, analysis and flux measurement methods are also reported, together with references and contacts for researchers expert on each site. In this phase, data can be accessed over the network from a web interface; data-driven web services, through which software clients can request data directly from the database, are planned for the near future. This way, Geographical Information Systems (GIS) and Virtual Globes (e.g., Google Earth) could easily access the database, and data could be exchanged with other databases. At the moment the database includes: i) more than 1000 flux data points on volcanic plume degassing from the Etna and Stromboli volcanoes, ii) data from ~ 30 sites of diffuse soil degassing from the Neapolitan volcanoes, Azores, Canary Islands, Etna, Stromboli, and Vulcano Island, and several data on fumarolic emissions (~ 7 sites) with CO2 fluxes; and iii) data from ~ 270 non-volcanic gas emission sites in Italy. We believe the MAGA database is an important starting point for developing a large-scale, expandable database aimed to excite, inspire, and encourage participation among researchers. In addition, the possibility to archive location and qualitative information for gas emission sites not yet investigated could stimulate the scientific community toward future research and will provide an indication of the current uncertainty in global estimates of deep carbon fluxes.

  12. Sting_RDB: a relational database of structural parameters for protein analysis with support for data warehousing and data mining.

    PubMed

    Oliveira, S R M; Almeida, G V; Souza, K R R; Rodrigues, D N; Kuser-Falcão, P R; Yamagishi, M E B; Santos, E H; Vieira, F D; Jardine, J G; Neshich, G

    2007-10-05

    An effective strategy for managing protein databases is to provide mechanisms to transform raw data into consistent, accurate and reliable information. Such mechanisms will greatly reduce operational inefficiencies and improve one's ability to better handle scientific objectives and interpret the research results. To achieve this challenging goal for the STING project, we introduce Sting_RDB, a relational database of structural parameters for protein analysis with support for data warehousing and data mining. In this article, we highlight the main features of Sting_RDB and show how a user can explore it for efficient and biologically relevant queries. Considering its importance for molecular biologists, effort has been made to advance Sting_RDB toward data quality assessment. To the best of our knowledge, Sting_RDB is one of the most comprehensive data repositories for protein analysis, now also capable of providing its users with a data quality indicator. This paper differs from our previous study in many aspects. First, we introduce Sting_RDB, a relational database with mechanisms for efficient and relevant queries using SQL. Sting_RDB evolved from the earlier text-based (flat file) database, in which data consistency and integrity were not guaranteed. Second, we provide support for data warehousing and mining. Third, a data quality indicator was introduced. Finally, and probably most importantly, complex queries that could not be posed on a text-based database are now easily implemented. Further details are accessible at the Sting_RDB demo web page: http://www.cbi.cnptia.embrapa.br/StingRDB.
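
    The kind of declarative join this record contrasts with the old flat-file version can be illustrated as follows. The tables and columns are hypothetical (Sting_RDB's real schema is not reproduced in the abstract); the sketch only shows why a relational layout makes such queries trivial.

    # Hypothetical two-table layout; not the actual Sting_RDB schema.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE protein (pdb_id TEXT PRIMARY KEY, resolution REAL);
    CREATE TABLE residue_params (pdb_id TEXT, chain TEXT, pos INTEGER,
                                 accessibility REAL);
    INSERT INTO protein VALUES ('1ABC', 1.8), ('2XYZ', 3.1);
    INSERT INTO residue_params VALUES ('1ABC', 'A', 42, 85.0),
                                      ('2XYZ', 'A', 7, 90.0);
    """)

    # Exposed residues in high-resolution structures: a single
    # declarative join that a flat-file store cannot express.
    rows = conn.execute("""
        SELECT r.pdb_id, r.chain, r.pos
        FROM residue_params r JOIN protein p ON p.pdb_id = r.pdb_id
        WHERE r.accessibility > 80 AND p.resolution < 2.0
    """).fetchall()
    print(rows)  # [('1ABC', 'A', 42)]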

  13. Pseudonymisation of radiology data for research purposes

    NASA Astrophysics Data System (ADS)

    Noumeir, Rita; Lemay, Alain; Lina, Jean-Marc

    2005-04-01

    Medical image processing methods and algorithms developed by researchers need to be validated and tested. Test data should ideally be real clinical data, especially when that clinical data is varied and exists in large volume. Nowadays, clinical data is accessible electronically and is of great value to researchers. However, the use of clinical data for research purposes must respect data confidentiality, the patient's right to privacy and patient consent. In fact, clinical data is nominative, given that it contains information about the patient such as name, age and identification number. Evidently, clinical data should be de-identified before being exported to research databases. However, the same patient is usually followed over a long period of time, and the disease progression and diagnostic evolution represent extremely valuable information for researchers as well. Our objective is to build a research database from de-identified clinical data while enabling the database to be easily incremented by exporting new pseudonymous data acquired over a long period of time. Pseudonymisation is data de-identification such that data belonging to the same individual in the clinical environment bear the same relation to each other in the de-identified research version. In this paper, we propose a software architecture that enables the implementation of a research database that can be incremented over time. We also evaluate its security and discuss its security pitfalls.
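
    The abstract does not detail how stable pseudonyms are produced; one standard way to obtain the property it describes (the same patient always maps to the same research pseudonym across exports) is a keyed hash of the identifier. The sketch below is a generic illustration under that assumption, not the authors' architecture.

    # Generic pseudonymisation sketch: a keyed hash (HMAC) gives a
    # deterministic pseudonym without exposing the patient identifier.
    import hashlib
    import hmac

    SECRET_KEY = b"kept-only-inside-the-clinical-site"  # never exported

    def pseudonym(patient_id: str) -> str:
        # Deterministic for a fixed key (stable across exports) and
        # infeasible to invert without the key.
        return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

    # Two exports, months apart, still link the same patient's studies:
    assert pseudonym("HOSP-000123") == pseudonym("HOSP-000123")
    print(pseudonym("HOSP-000123"))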

  14. New Version of SeismicHandler (SHX) based on ObsPy

    NASA Astrophysics Data System (ADS)

    Stammler, Klaus; Walther, Marcus

    2016-04-01

    The command line version of SeismicHandler (SH), a scientific analysis tool for seismic waveform data developed around 1990, has been redesigned in recent years, based on a project funded by the Deutsche Forschungsgemeinschaft (DFG). The aim was to address new data access techniques, simplified metadata handling and a modularized software design. As a result, the program was rewritten in Python in its main parts, taking advantage of the simplicity of this scripting language and its variety of well-developed software libraries, including ObsPy. SHX provides easy access to waveforms and metadata via the arclink and FDSN webservice protocols, and access to event catalogs is also implemented. With single commands, whole networks or stations within a certain area may be read in; the metadata are retrieved from the servers and stored in a local database. For data processing, the large set of SH commands is available, as well as the SH scripting language. Via SH-language scripts or additional Python modules, the command set of SHX is easily extendable. The program is open source and tested on Linux operating systems; documentation and downloads are found at https://www.seismic-handler.org/.
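
    SHX's own command set is not shown in this record, but the ObsPy layer it builds on is a documented API; the snippet below fetches waveforms and station metadata over FDSN web services the way a tool like SHX would before caching the metadata locally. The endpoint and station are examples only.

    # FDSN access via ObsPy (documented API); endpoint and station
    # are examples only.
    from obspy import UTCDateTime
    from obspy.clients.fdsn import Client

    client = Client("IRIS")  # any FDSN web service endpoint works
    t0 = UTCDateTime("2016-01-01T00:00:00")

    # One hour of broadband vertical-component data plus the metadata
    # that an analysis tool would store in its local database.
    stream = client.get_waveforms("IU", "ANMO", "00", "BHZ", t0, t0 + 3600)
    inventory = client.get_stations(network="IU", station="ANMO",
                                    level="response")

    stream.plot()  # quick visual check of the retrieved waveforms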

  15. The Healthnet project: extending online information resources to end users in rural hospitals.

    PubMed

    Holtum, E; Zollo, S A

    1998-10-01

    The importance of easily available, high quality, and current biomedical literature within the clinical enterprise is now widely documented and accepted. Access to this information has been shown to have a direct bearing on diagnosis, choices of tests, choices of drugs, and length of hospital stay. However, many health professionals do not have adequate access to current health information, particularly those practicing in rural, isolated, or underserved hospitals. Thanks to a three-year telemedicine award from the National Library of Medicine, The University of Iowa (UI) has developed a high-speed, point-to-point telecommunications network to deliver clinical and educational applications to ten community-based Iowa hospitals. One of the services offered over the network allows health professionals from the site hospitals to access online health databases and order articles via an online document delivery service. Installation, training, and troubleshooting support are provided to the remote sites by UI project staff. To date, 1,339 health professionals from the ten networked hospitals have registered to use the Healthnet program. Despite the friendly interface on the computer workstations installed at the sites, training emerged as the key issue in maximizing health professional utilization of these programs.

  16. The Healthnet project: extending online information resources to end users in rural hospitals.

    PubMed Central

    Holtum, E; Zollo, S A

    1998-01-01

    The importance of easily available, high quality, and current biomedical literature within the clinical enterprise is now widely documented and accepted. Access to this information has been shown to have a direct bearing on diagnosis, choices of tests, choices of drugs, and length of hospital stay. However, many health professionals do not have adequate access to current health information, particularly those practicing in rural, isolated, or underserved hospitals. Thanks to a three-year telemedicine award from the National Library of Medicine, The University of Iowa (UI) has developed a high-speed, point-to-point telecommunications network to deliver clinical and educational applications to ten community-based Iowa hospitals. One of the services offered over the network allows health professionals from the site hospitals to access online health databases and order articles via an online document delivery service. Installation, training, and troubleshooting support are provided to the remote sites by UI project staff. To date, 1,339 health professionals from the ten networked hospitals have registered to use the Healthnet program. Despite the friendly interface on the computer workstations installed at the sites, training emerged as the key issue in maximizing health professional utilization of these programs. PMID:9803302

  17. Integrated Management and Visualization of Electronic Tag Data with Tagbase

    PubMed Central

    Lam, Chi Hin; Tsontos, Vardis M.

    2011-01-01

    Electronic tags have been used widely for more than a decade in studies of diverse marine species. However, despite significant investment in tagging programs and hardware, data management aspects have received insufficient attention, leaving researchers without a comprehensive toolset to manage their data easily. The growing volume of these data holdings, the large diversity of tag types and data formats, and the general lack of data management resources are not only complicating integration and synthesis of electronic tagging data in support of resource management applications but potentially threatening the integrity and longer-term access to these valuable datasets. To address this critical gap, Tagbase has been developed as a well-rounded, yet accessible data management solution for electronic tagging applications. It is based on a unified relational model that accommodates a suite of manufacturer tag data formats in addition to deployment metadata and reprocessed geopositions. Tagbase includes an integrated set of tools for importing tag datasets into the system effortlessly, and provides reporting utilities to interactively view standard outputs in graphical and tabular form. Data from the system can also be easily exported or dynamically coupled to GIS and other analysis packages. Tagbase is scalable and has been ported to a range of database management systems to support the needs of the tagging community, from individual investigators to large scale tagging programs. Tagbase represents a mature initiative with users at several institutions involved in marine electronic tagging research. PMID:21750734

  18. Reactome graph database: Efficient access to complex pathway data

    PubMed Central

    Korninger, Florian; Viteri, Guilherme; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D’Eustachio, Peter

    2018-01-01

    Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high-quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data via object-oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high-performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types. PMID:29377902
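
    A hedged sketch of the kind of traversal the graph model makes cheap: walking a pathway's event hierarchy in Cypher through the Python driver. The hasEvent relationship and the stId/displayName properties follow Reactome's published graph model, but treat the details as assumptions to verify against the current schema.

    # Variable-length traversal of a pathway's sub-events; this replaces
    # the recursive self-joins a relational schema would need.
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "password"))

    CYPHER = """
    MATCH (p:Pathway {stId: $stid})-[:hasEvent*]->(e)
    RETURN DISTINCT e.displayName AS event
    """

    def pathway_events(stid):
        with driver.session() as session:
            return [r["event"] for r in session.run(CYPHER, stid=stid)]

    print(pathway_events("R-HSA-69278"))  # an example stable identifier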

  19. Reactome graph database: Efficient access to complex pathway data.

    PubMed

    Fabregat, Antonio; Korninger, Florian; Viteri, Guilherme; Sidiropoulos, Konstantinos; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D'Eustachio, Peter; Hermjakob, Henning

    2018-01-01

    Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high-quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data via object-oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high-performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types.

  20. The plant phenological online database (PPODB): an online database for long-term phenological data.

    PubMed

    Dierenbach, Jonas; Badeck, Franz-W; Schaber, Jörg

    2013-09-01

    We present an online database that provides unrestricted and free access to over 16 million plant phenological observations from over 8,000 stations in Central Europe between the years 1880 and 2009. Unique features are (1) flexible and unrestricted access to a full-fledged database, allowing for a wide range of individual queries and data retrieval, (2) historical data for Germany before 1951, ranging back to 1880, and (3) more than 480 curated long-term time series covering more than 100 years for individual phenological phases and plants combined over Natural Regions in Germany. Time series for single stations or Natural Regions can be accessed through a user-friendly graphical geo-referenced interface. The joint databases made available with the plant phenological database PPODB render accessible an important data source for further analyses of long-term changes in phenology. The database can be accessed via www.ppodb.de.

  1. Fermilab Security Site Access Request Database

    Science.gov Websites

    Use of the online version of the Fermilab Security Site Access Request Database requires logging in to the ESH&Q Web Site. The page is generated from the ESH&Q Section's Oracle database (last generated on May 27, 2018, 05:48 AM).

  2. Global Location-Based Access to Web Applications Using Atom-Based Automatic Update

    NASA Astrophysics Data System (ADS)

    Singh, Kulwinder; Park, Dong-Won

    We propose an architecture which enables people to enquire about information available in directory services by voice using regular phones. We implement a Virtual User Agent (VUA) which mediates between the human user and a business directory service. The system enables the user to search for the nearest clinic, gas station by price, motel by price, food/coffee, banks/ATMs, etc., and fix an appointment, or automatically establish a call between the user and the business party if the user prefers. The user also has the option to receive appointment confirmation by phone, SMS, or e-mail. The VUA is accessible by a toll-free DID (Direct Inward Dialing) number using a phone by anyone, anywhere, anytime. We use the Euclidean formula for distance measurement, since shorter geodesic distances (on the Earth's surface) correspond to shorter Euclidean distances (measured by a straight line through the Earth). Our proposed architecture uses the Atom XML syndication format for data integration, VoiceXML for creating the voice user interface (VUI) and CCXML for controlling the call components. We also provide an efficient algorithm for parsing the Atom feeds which provide data to the system. Moreover, we describe a cost-effective way of providing global access to the VUA based on Asterisk (an open-source IP-PBX). We also describe how our system can be integrated with GPS for locating the user's coordinates, thereby enhancing the system's responsiveness. Additionally, the system has a mechanism for validating the phone numbers in its database, and it updates the numbers and other information, such as the daily price of gas or motel rates, automatically using an Atom-based feed. Currently, commercial directory services (e.g., 411) do not have facilities to update their listings automatically, which is why callers often get out-of-date phone numbers or other information. Our system can be integrated very easily with an existing web infrastructure, thereby making the wealth of Web information easily available to the user by phone. This kind of system can be deployed as an extension to 911 and 411 services to share the workload with human operators. This paper presents the underlying principles, architecture, and features of our proposed system, along with an example of a real-world deployment. The source code and documentation are available for commercial production.
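
    The distance claim in this record can be made concrete: ranking candidate businesses by the straight-line chord through the Earth gives the same ordering as ranking by surface geodesic, so plain Euclidean distance suffices for a nearest-match search. A minimal sketch, assuming a spherical Earth of radius 6371 km; the coordinates are illustrative.

    # Chord (through-the-Earth) distance; monotone in the geodesic,
    # so it is safe for nearest-neighbour ranking.
    import math

    R = 6371.0  # mean Earth radius in km

    def to_xyz(lat_deg, lon_deg):
        # Spherical lat/lon -> 3-D Cartesian coordinates on the sphere.
        lat, lon = math.radians(lat_deg), math.radians(lon_deg)
        return (R * math.cos(lat) * math.cos(lon),
                R * math.cos(lat) * math.sin(lon),
                R * math.sin(lat))

    def chord_km(a, b):
        return math.dist(to_xyz(*a), to_xyz(*b))

    user = (37.7749, -122.4194)
    businesses = {"clinic": (37.8044, -122.2712), "motel": (37.3382, -121.8863)}
    print(min(businesses, key=lambda name: chord_km(user, businesses[name])))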

  3. Development of a gene expression database and related analysis programs for evaluation of anticancer compounds.

    PubMed

    Ushijima, Masaru; Mashima, Tetsuo; Tomida, Akihiro; Dan, Shingo; Saito, Sakae; Furuno, Aki; Tsukahara, Satomi; Seimiya, Hiroyuki; Yamori, Takao; Matsuura, Masaaki

    2013-03-01

    Genome-wide transcriptional expression analysis is a powerful strategy for characterizing the biological activity of anticancer compounds. It is often instructive to identify gene sets involved in the activity of a given drug compound for comparison with different compounds. Currently, however, there is no comprehensive gene expression database and related application system that is: (i) specialized in anticancer agents; (ii) easy to use; and (iii) open to the public. To develop a public gene expression database of antitumor agents, we first examined gene expression profiles in human cancer cells after exposure to 35 compounds, including 25 clinically used anticancer agents. Gene signatures were extracted and classified as upregulated or downregulated after exposure to each drug. Hierarchical clustering showed that drugs with similar mechanisms of action, such as genotoxic drugs, were clustered. Connectivity map analysis further revealed that our gene signature data reflected the modes of action of the respective agents. Together with the database, we developed analysis programs that calculate scores for ranking changes in gene expression and for searching for statistically significant pathways in the Kyoto Encyclopedia of Genes and Genomes database, in order to analyze the datasets more easily. Our database and the analysis programs are available online at our website (http://scads.jfcr.or.jp/db/cs/). Using these systems, we successfully showed that proteasome inhibitors are selectively classified as endoplasmic reticulum stress inducers and induce atypical endoplasmic reticulum stress. Thus, our public-access database and related analysis programs constitute a set of efficient tools to evaluate the mode of action of novel compounds and identify promising anticancer lead compounds. © 2012 Japanese Cancer Association.

  4. 47 CFR 54.410 - Subscriber eligibility determination and certification.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... eligibility by accessing one or more databases containing information regarding the subscriber's income (“income databases”), the eligible telecommunications carrier must access such income databases and... carrier cannot determine a prospective subscriber's income-based eligibility by accessing income databases...

  5. 47 CFR 54.410 - Subscriber eligibility determination and certification.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... eligibility by accessing one or more databases containing information regarding the subscriber's income (“income databases”), the eligible telecommunications carrier must access such income databases and... carrier cannot determine a prospective subscriber's income-based eligibility by accessing income databases...

  6. 47 CFR 54.410 - Subscriber eligibility determination and certification.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... eligibility by accessing one or more databases containing information regarding the subscriber's income (“income databases”), the eligible telecommunications carrier must access such income databases and... carrier cannot determine a prospective subscriber's income-based eligibility by accessing income databases...

  7. The AAS Working Group on Accessibility and Disability (WGAD) Year 1 Highlights and Database Access

    NASA Astrophysics Data System (ADS)

    Knierman, Karen A.; Diaz Merced, Wanda; Aarnio, Alicia; Garcia, Beatriz; Monkiewicz, Jacqueline A.; Murphy, Nicholas Arnold

    2017-06-01

    The AAS Working Group on Accessibility and Disability (WGAD) was formed in January of 2016 with the express purpose of seeking equity of opportunity and building inclusive practices for disabled astronomers at all educational and career stages. In this presentation, we will provide a summary of current activities, focusing on developing best practices for accessibility with respect to astronomical databases, publications, and meetings. Because the space sciences rely heavily on databases, it is important to have user-centered design in systems for data retrieval. The cognitive overload that may be experienced by users of current databases may be mitigated by multi-modal interfaces such as xSonify. Such interfaces would run in parallel with, or outside, the original database and would not require additional software effort from the original database. WGAD is partnering with the IAU Commission C1 WG Astronomy for Equity and Inclusion to develop such accessibility tools for databases and methods for user testing. To collect data on astronomical conference and meeting accessibility considerations, WGAD solicited feedback from January AAS attendees via a web form. These data, together with upcoming input from the community and analysis of accessibility documents of similar conferences, will be used to create a meeting accessibility document. Additionally, we will report on the progress of journal access guidelines and on our social media presence via Twitter. We recommend that astronomical journals form committees to evaluate the accessibility of their publications by performing user-centered usability studies.

  8. Updated Palaeotsunami Database for Aotearoa/New Zealand

    NASA Astrophysics Data System (ADS)

    Gadsby, M. R.; Goff, J. R.; King, D. N.; Robbins, J.; Duesing, U.; Franz, T.; Borrero, J. C.; Watkins, A.

    2016-12-01

    The updated configuration, design, and implementation of a national palaeotsunami (pre-historic tsunami) database for Aotearoa/New Zealand (A/NZ) is near completion. This tool enables correlation of events along different stretches of the NZ coastline, provides information on frequency and extent of local, regional and distant-source tsunamis, and delivers detailed information on the science and proxies used to identify the deposits. In A/NZ a plethora of data, scientific research and experience surrounds palaeotsunami deposits, but much of this information has been difficult to locate, has variable reporting standards, and lacked quality assurance. The original database was created by Professor James Goff while working at the National Institute of Water & Atmospheric Research in A/NZ, but has subsequently been updated during his tenure at the University of New South Wales. The updating and establishment of the national database was funded by the Ministry of Civil Defence and Emergency Management (MCDEM), led by Environment Canterbury Regional Council, and supported by all 16 regions of A/NZ's local government. Creation of a single database has consolidated a wide range of published and unpublished research contributions from many science providers on palaeotsunamis in A/NZ. The information is now easily accessible and quality assured and allows examination of frequency, extent and correlation of events. This provides authoritative scientific support for coastal-marine planning and risk management. The database will complement the GNS New Zealand Historical Database, and contributes to a heightened public awareness of tsunami by being a "one-stop-shop" for information on past tsunami impacts. There is scope for this to become an international database, enabling the pacific-wide correlation of large events, as well as identifying smaller regional ones. The Australian research community has already expressed an interest, and the database is also compatible with a similar one currently under development in Japan. Expressions of interest in collaborating with the A/NZ team to expand the database are invited from other Pacific nations.

  9. Internet-based profiler system as integrative framework to support translational research

    PubMed Central

    Kim, Robert; Demichelis, Francesca; Tang, Jeffery; Riva, Alberto; Shen, Ronglai; Gibbs, Doug F; Mahavishno, Vasudeva; Chinnaiyan, Arul M; Rubin, Mark A

    2005-01-01

    Background Translational research requires taking basic science observations and developing them into clinically useful tests and therapeutics. We have developed a process to develop molecular biomarkers for diagnosis and prognosis by integrating tissue microarray (TMA) technology and an internet-database tool, Profiler. TMA technology allows investigators to study hundreds of patient samples on a single glass slide, resulting in the conservation of tissue and the reduction of inter-experimental variability. The Profiler system allows investigators to reliably track, store, and evaluate TMA experiments. Herein we describe the process that has evolved through an empirical basis over the past 5 years at two academic institutions. Results The generic design of this system makes it compatible with multiple organ systems (e.g., prostate, breast, lung, renal, and hematopoietic). Studies and folders are restricted to authorized users as required. Over the past 5 years, investigators at the 2 academic institutions have scanned 656 TMA experiments and collected 63,311 digital images of these tissue samples. 68 pathologists from 12 major user groups have accessed the system. Two groups directly link clinical data from over 500 patients for immediate access, and the remaining groups choose to maintain clinical and pathology data on separate systems. Profiler currently has 170 K data points such as staining intensity, tumor grade, and nuclear size. Due to the relational database structure, analysis can easily be performed on single or multiple TMA experimental results. The TMA module of Profiler can maintain images acquired from multiple systems. Conclusion We have developed a robust process to develop molecular biomarkers using TMA technology and an internet-based database system to track all steps of this process. This system is extendable to other types of molecular data as separate modules and is freely available to academic institutions for licensing. PMID:16364175

  10. The GOLM-database standard- a framework for time-series data management based on free software

    NASA Astrophysics Data System (ADS)

    Eichler, M.; Francke, T.; Kneis, D.; Reusser, D.

    2009-04-01

    Monitoring and modelling projects usually involve time series data originating from different sources. File formats, temporal resolution and metadata documentation rarely adhere to a common standard. As a result, much effort is spent on converting, harmonizing, merging, checking, resampling and reformatting these data. Moreover, in work groups or over the course of time, these tasks tend to be carried out redundantly and repeatedly, especially when new data become available. The resulting duplication of data in various formats consumes additional resources. We propose a database structure and complementary scripts for facilitating these tasks. The GOLM (General Observation and Location Management) framework allows for the import and storage of time series data of different types while assisting in metadata documentation, plausibility checking and harmonization. The imported data can be visually inspected, and their coverage among locations and variables may be visualized. Supplementary scripts provide options for data export for selected stations and variables and for resampling of the data to the desired temporal resolution. These tools can, for example, be used for generating model input files or reports. Since GOLM fully supports network access, the system can be used efficiently by distributed working groups accessing the same data over the internet. GOLM's database structure and the complementary scripts can easily be customized to specific needs. All involved software, such as MySQL, R, PHP and OpenOffice, as well as the scripts for building and using the database, including documentation, are free for download. GOLM was developed out of the practical requirements of the OPAQUE project. It has been tested and further refined in the ERANET-CRUE and SESAM projects, all of which used GOLM to manage meteorological, hydrological and/or water quality data.
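
    The GOLM schema itself is not reproduced in this record; below is a generic long-format observations table of the kind such frameworks store (location, variable, timestamp, value), resampled to a coarser resolution with pandas, mirroring the export and resampling tasks described above. The column layout is an assumption.

    # Generic long-format time series table and hourly resampling;
    # the column layout is an assumption, not the actual GOLM schema.
    import pandas as pd

    obs = pd.DataFrame({
        "location": ["gauge_1"] * 4,
        "variable": ["precip_mm"] * 4,
        "time": pd.to_datetime(["2009-04-01 00:10", "2009-04-01 00:20",
                                "2009-04-01 00:50", "2009-04-01 01:10"]),
        "value": [0.2, 0.0, 1.4, 0.6],
    })

    # Hourly totals per location and variable, e.g. as model input.
    hourly = (obs.set_index("time")
                 .groupby(["location", "variable"])["value"]
                 .resample("1h").sum())
    print(hourly)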

  11. Internet-based Profiler system as integrative framework to support translational research.

    PubMed

    Kim, Robert; Demichelis, Francesca; Tang, Jeffery; Riva, Alberto; Shen, Ronglai; Gibbs, Doug F; Mahavishno, Vasudeva; Chinnaiyan, Arul M; Rubin, Mark A

    2005-12-19

    Translational research requires taking basic science observations and developing them into clinically useful tests and therapeutics. We have developed a process to develop molecular biomarkers for diagnosis and prognosis by integrating tissue microarray (TMA) technology and an internet-database tool, Profiler. TMA technology allows investigators to study hundreds of patient samples on a single glass slide, resulting in the conservation of tissue and the reduction of inter-experimental variability. The Profiler system allows investigators to reliably track, store, and evaluate TMA experiments. Herein we describe the process that has evolved through an empirical basis over the past 5 years at two academic institutions. The generic design of this system makes it compatible with multiple organ systems (e.g., prostate, breast, lung, renal, and hematopoietic). Studies and folders are restricted to authorized users as required. Over the past 5 years, investigators at the 2 academic institutions have scanned 656 TMA experiments and collected 63,311 digital images of these tissue samples. 68 pathologists from 12 major user groups have accessed the system. Two groups directly link clinical data from over 500 patients for immediate access, and the remaining groups choose to maintain clinical and pathology data on separate systems. Profiler currently has 170 K data points such as staining intensity, tumor grade, and nuclear size. Due to the relational database structure, analysis can easily be performed on single or multiple TMA experimental results. The TMA module of Profiler can maintain images acquired from multiple systems. We have developed a robust process to develop molecular biomarkers using TMA technology and an internet-based database system to track all steps of this process. This system is extendable to other types of molecular data as separate modules and is freely available to academic institutions for licensing.

  12. Construction of Pará rubber tree genome and multi-transcriptome database accelerates rubber researches.

    PubMed

    Makita, Yuko; Kawashima, Mika; Lau, Nyok Sean; Othman, Ahmad Sofiman; Matsui, Minami

    2018-01-19

    Natural rubber is an economically important material. Currently the Pará rubber tree, Hevea brasiliensis, is the main commercial source. Little is known about rubber biosynthesis at the molecular level. Next-generation sequencing (NGS) technologies have yielded draft genomes of three rubber cultivars and a variety of RNA sequencing (RNA-seq) data. However, no current genome or transcriptome database (DB) is organized by gene. A gene-oriented database provides valuable support for rubber research. Based on our original draft genome sequence of H. brasiliensis RRIM600, we constructed a rubber tree genome and transcriptome DB. Our DB provides genome information including gene functional annotations and multi-transcriptome data comprising RNA-seq, full-length cDNAs including PacBio Isoform sequencing (Iso-Seq), ESTs and genome-wide transcription start sites (TSSs) derived from CAGE technology. Using our original and publicly available RNA-seq data, we calculated co-expressed genes for identifying functionally related gene sets and/or genes regulated by the same transcription factor (TF). Users can access multi-transcriptome data through both a gene-oriented web page and a genome browser. For the gene search system, we provide keyword search, sequence homology search and gene expression search; users can also easily select their expression threshold. The rubber genome and transcriptome DB provides the rubber tree genome sequence and multi-transcriptomics data. This DB is useful for a comprehensive understanding of the rubber transcriptome and will assist both industrial and academic research on rubber and on economically important close relatives such as R. communis, M. esculenta and J. curcas. The Rubber Transcriptome DB release 2017.03 is accessible at http://matsui-lab.riken.jp/rubber/ .

  13. Human- and computer-accessible 2D correlation data for a more reliable structure determination of organic compounds. Future roles of researchers, software developers, spectrometer managers, journal editors, reviewers, publisher and database managers toward artificial-intelligence analysis of NMR spectra.

    PubMed

    Jeannerat, Damien

    2017-01-01

    The introduction of a universal data format to report the correlation data of 2D NMR spectra such as COSY, HSQC and HMBC spectra will have a large impact on the reliability of structure determination of small organic molecules. These lists of assigned cross peaks will bridge the signals found in 1D and 2D NMR spectra and the assigned chemical structure. The record could be very compact, and human- and computer-readable, so that it can be included in the supplementary material of publications and easily transferred into databases of scientific literature and chemical compounds. The records will allow authors, reviewers and future users to test the consistency and, in favorable situations, the uniqueness of the assignment of the correlation data to the associated chemical structures. Ideally, the data format of the correlation data should include direct links to the NMR spectra to make it possible to validate their reliability and allow direct comparison of spectra. To realize their full potential, the correlation data and the NMR spectra should therefore follow any manuscript through the review process and be stored in an open-access database after publication. Keeping all NMR spectra, correlation data and assigned structures together at all times will allow the future development of validation tools, increasing the reliability of past and future NMR data. This will facilitate the development of artificial-intelligence analysis of NMR spectra by providing a source of data that can be used efficiently because they have been validated or can be validated by future users. Copyright © 2016 John Wiley & Sons, Ltd.
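
    No such universal format exists yet (that is the article's point), so the record below is purely illustrative: a compact, human- and machine-readable list of assigned cross peaks keyed to atoms of the proposed structure, with a link back to the raw spectra. Every field name is hypothetical.

    # Illustrative (hypothetical) correlation-data record as JSON.
    import json

    record = {
        "structure": "OCC1=CC=CC=C1",      # SMILES of the assigned compound
        "spectra": {
            "HSQC": [  # 1H shift (ppm), 13C shift (ppm), assigned carbon
                {"h": 4.62, "c": 65.1, "atom": "C7"},
                {"h": 7.35, "c": 128.6, "atom": "C3"},
            ],
            "HMBC": [  # long-range 1H -> 13C correlations
                {"h": 4.62, "c": 140.9, "atom_pair": ["H7", "C1"]},
            ],
        },
        "raw_data": "doi:10.0000/placeholder",  # link back to the spectra
    }
    print(json.dumps(record, indent=2))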

  14. TAPIR--Finnish national geochemical baseline database.

    PubMed

    Jarva, Jaana; Tarvainen, Timo; Reinikainen, Jussi; Eklund, Mikael

    2010-09-15

    In Finland, a Government Decree on the Assessment of Soil Contamination and Remediation Needs has generated a need for reliable and readily accessible data on geochemical baseline concentrations in Finnish soils. According to the Decree, baseline concentrations, referring both to the natural geological background concentrations and to the diffuse anthropogenic input of substances, shall be taken into account in the soil contamination assessment process. This baseline information is provided in a national geochemical baseline database, TAPIR, which is publicly available via the Internet. Geochemical provinces with elevated baseline concentrations were delineated to provide regional geochemical baseline values. The nationwide geochemical datasets were used to divide Finland into geochemical provinces. Several metals (Co, Cr, Cu, Ni, V, and Zn) showed anomalous concentrations in seven regions that were defined as metal provinces. Arsenic did not follow a distribution similar to that of any other element, and four arsenic provinces were determined separately. Nationwide geochemical datasets were not available for some other important elements such as Cd and Pb. Although these elements are included in the TAPIR system, their distribution does not necessarily follow the provinces pre-defined for metals and arsenic. Regional geochemical baseline values, presented as the upper limit of geochemical variation within a region, can be used as trigger values to assess potential soil contamination. Baseline values have also been used to determine upper and lower guideline values that must be taken into account as a tool in basic risk assessment. If regional geochemical baseline values are available, the national guideline values prescribed in the Decree based on ecological risks can be modified accordingly. The national geochemical baseline database provides scientifically sound, easily accessible and generally accepted information on baseline values, and it can be used in various environmental applications. Copyright 2010 Elsevier B.V. All rights reserved.

  15. StatsDB: platform-agnostic storage and understanding of next generation sequencing run metrics

    PubMed Central

    Ramirez-Gonzalez, Ricardo H.; Leggett, Richard M.; Waite, Darren; Thanki, Anil; Drou, Nizar; Caccamo, Mario; Davey, Robert

    2014-01-01

    Modern sequencing platforms generate enormous quantities of data in ever-decreasing amounts of time. Additionally, techniques such as multiplex sequencing allow one run to contain hundreds of different samples. With such data comes a significant challenge to understand its quality and to understand how the quality and yield are changing across instruments and over time. As well as the desire to understand historical data, sequencing centres often have a duty to provide clear summaries of individual run performance to collaborators or customers. We present StatsDB, an open-source software package for storage and analysis of next generation sequencing run metrics. The system has been designed for incorporation into a primary analysis pipeline, either at the programmatic level or via integration into existing user interfaces. Statistics are stored in an SQL database and APIs provide the ability to store and access the data while abstracting the underlying database design. This abstraction allows simpler, wider querying across multiple fields than is possible by the manual steps and calculation required to dissect individual reports, e.g. “provide metrics about nucleotide bias in libraries using adaptor barcode X, across all runs on sequencer A, within the last month”. The software is supplied with modules for storage of statistics from FastQC, a commonly used tool for analysis of sequence reads, but the open nature of the database schema means it can be easily adapted to other tools. Currently at The Genome Analysis Centre (TGAC), reports are accessed through our LIMS system or through a standalone GUI tool, but the API and supplied examples make it easy to develop custom reports and to interface with other packages. PMID:24627795
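
    The example query in the abstract translates naturally into a single SQL statement once the metrics live in one database. The sketch below uses an invented two-table schema (the real StatsDB schema differs) and an in-memory SQLite database so it runs as written.

        import sqlite3

        # Invented miniature schema; StatsDB's actual schema is richer.
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE runs (run_id TEXT, instrument TEXT, run_date TEXT);
            CREATE TABLE position_stats (run_id TEXT, barcode TEXT,
                                         position INTEGER, gc_bias REAL);
        """)

        # "Nucleotide bias for barcode X, on sequencer A, within the last month."
        rows = conn.execute(
            """
            SELECT r.run_id, s.position, s.gc_bias
            FROM position_stats AS s
            JOIN runs AS r ON s.run_id = r.run_id
            WHERE s.barcode = ? AND r.instrument = ?
              AND r.run_date >= date('now', '-1 month')
            """,
            ("X", "sequencer_A"),
        ).fetchall()
        print(rows)  # empty here, since no metrics were loaded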

  16. [Bibliographic survey of the Orvosi Hetilap of Hungary: looking back and moving forward].

    PubMed

    Berhidi, Anna; Margittai, Zsuzsa; Vasas, Lívia

    2012-12-02

    Registration in the Thomson Reuters Web of Science database is the first step in the process of acquiring an impact factor for a scientific journal. The aim of this article is to evaluate the content and structure of Orvosi Hetilap with regard to the selection criteria of Thomson Reuters, in particular the objectives of citation analysis. The authors evaluated the issues of Orvosi Hetilap published in 2011 and calculated the unofficial impact factor of the journal based on systematic searches of various citation index databases. The number of citations, the quality of the citing journals and the scientific output of the editorial board members were evaluated, and adherence to the guidelines of international publishers was assessed as well. The unofficial impact factor of Orvosi Hetilap has risen continuously every year over the past decade (except for 2004 and 2010). The articles of Orvosi Hetilap are widely cited by international authors, including in high-impact-factor journals, and more than half of the cited articles are open access. The most frequently cited categories are original and review articles as well as clinical studies. Orvosi Hetilap is a weekly journal covered by many international databases such as PubMed/Medline, Scopus, Embase, and BIOSIS Previews. Regarding the scientific output of the editorial board members, the truncated means of the number of their publications, citations, independent citations and h-index were 497, 2446, 2014 and 21, respectively. While Orvosi Hetilap fulfils many of the criteria for coverage by Thomson Reuters, it would be worthwhile to implement an online citation system in order to increase the number of citations. In addition, the scientific publications of all editorial board members should be made easily accessible. Finally, the publication of comparative studies by multiple authors is encouraged, as well as of papers containing epidemiological data analyses.

  17. The Transcriptome Analysis and Comparison Explorer--T-ACE: a platform-independent, graphical tool to process large RNAseq datasets of non-model organisms.

    PubMed

    Philipp, E E R; Kraemer, L; Mountfort, D; Schilhabel, M; Schreiber, S; Rosenstiel, P

    2012-03-15

    Next generation sequencing (NGS) technologies allow a rapid and cost-effective compilation of large RNA sequence datasets in model and non-model organisms. However, the storage and analysis of transcriptome information from different NGS platforms is still a significant bottleneck, leading to a delay in data dissemination and subsequent biological understanding. In particular, database interfaces with transcriptome analysis modules that go beyond mere read counts are missing. Here, we present the Transcriptome Analysis and Comparison Explorer (T-ACE), a tool designed for the organization and analysis of large sequence datasets, and especially suited for transcriptome projects of non-model organisms with little or no a priori sequence information. T-ACE offers a TCL-based interface, which accesses a PostgreSQL database via a PHP script. Within T-ACE, information belonging to single sequences or contigs, such as annotation or read coverage, is linked to the respective sequence and immediately accessible. Sequences and assigned information can be searched via keyword or BLAST search. Additionally, T-ACE provides within- and between-transcriptome analysis modules on the level of expression, GO terms, KEGG pathways and protein domains. Results are visualized and can be easily exported for external analysis. We developed T-ACE for laboratory environments with only a limited amount of bioinformatics support, and for collaborative projects in which different partners work on the same dataset from different locations or platforms (Windows/Linux/MacOS). For laboratories with some experience in bioinformatics and programming, the low complexity of the database structure and the open-source code provide a framework that can be customized according to the different needs of the user and transcriptome project.

  18. LexisNexis

    EPA Pesticide Factsheets

    LexisNexis provides access to electronic legal and non-legal research databases to the Agency's attorneys, administrative law judges, law clerks, investigators, and certain non-legal staff (e.g. staff in the Office of Public Affairs). The agency requires access to the following types of electronic databases: Legal databases, Non-legal databases, Public Records databases, and Financial databases.

  19. Using STOQS and stoqstoolbox for in situ Measurement Data Access in Matlab

    NASA Astrophysics Data System (ADS)

    López-Castejón, F.; Schlining, B.; McCann, M. P.

    2012-12-01

    This poster presents the stoqstoolbox, an extension to Matlab that simplifies the loading of in situ measurement data directly from STOQS databases. STOQS (Spatial Temporal Oceanographic Query System) is a geospatial database tool designed to provide efficient access to data following the CF-NetCDF Discrete Samples Geometries convention. Data are loaded from CF-NetCDF files into a STOQS database where indexes are created on depth, spatial coordinates and other parameters, e.g. platform type. STOQS provides consistent, simple and efficient methods to query for data. For example, we can request all measurements with a standard_name of sea_water_temperature between two times and from between two depths. Data access is simpler because the data are retrieved by parameter irrespective of platform or mission file names. Access is more efficient because data are retrieved via the index on depth and only the requested data are retrieved from the database and transferred into the Matlab workspace. Applications in the stoqstoolbox query the STOQS database via an HTTP REST application programming interface; they follow the Data Access Object pattern, enabling highly customizable query construction. Data are loaded into Matlab structures that clearly indicate latitude, longitude, depth, measurement data value, and platform name. The stoqstoolbox is designed to be used in concert with other tools, such as nctoolbox, which can load data from any OPeNDAP data source. With these two toolboxes a user can easily work with in situ and other gridded data, such as from numerical models and remote sensing platforms. To demonstrate the capability of stoqstoolbox, we present an example of model validation using data collected during the May-June 2012 field experiment conducted by the Monterey Bay Aquarium Research Institute (MBARI) in Monterey Bay, California. The data are available from the STOQS server at http://odss.mbari.org/canon/stoqs_may2012/query/. Over 14 million data points of 18 parameters from 6 platforms measured over a 3-week period are available on this server. The model used for comparison is the Regional Ocean Modeling System developed by Jet Propulsion Laboratory for the Monterey Bay. The model output is loaded into Matlab using nctoolbox from the JPL server at http://ourocean.jpl.nasa.gov:8080/thredds/dodsC/MBNowcast. Model validation with in situ measurements can be difficult because of different file formats and because data may be spread across individual data systems for each platform. With stoqstoolbox the researcher must know only the URL of the STOQS server and the OPeNDAP URL of the model output. With selected depth and time constraints, a user's Matlab program searches for all in situ measurements available for the same time, depth and variable of the model. STOQS and stoqstoolbox are open source software projects supported by MBARI and the David and Lucile Packard Foundation. For more information please see http://code.google.com/p/stoqs.
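
    For readers outside Matlab, the same access pattern can be sketched in Python: one HTTP request against the STOQS REST interface returns measurements by parameter, regardless of platform. The query-string keys below are illustrative assumptions; consult the STOQS documentation for the exact interface.

        import requests

        base = "http://odss.mbari.org/canon/stoqs_may2012"
        params = {
            "standard_name": "sea_water_temperature",  # assumed parameter key
            "mindepth": 2,                             # assumed depth filters
            "maxdepth": 30,
        }
        resp = requests.get(f"{base}/measuredparameter.json", params=params, timeout=30)
        resp.raise_for_status()
        # Assuming the endpoint returns a JSON list of measurement records
        for measurement in resp.json()[:5]:
            print(measurement)  # time, position, depth and value per record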

  20. Conditions Database for the Belle II Experiment

    NASA Astrophysics Data System (ADS)

    Wood, L.; Elsethagen, T.; Schram, M.; Stephan, E.

    2017-10-01

    The Belle II experiment at KEK is preparing for first collisions in 2017. Processing the large amounts of data that will be produced will require conditions data to be readily available to systems worldwide in a fast and efficient manner that is straightforward for both the user and maintainer. The Belle II conditions database was designed with a straightforward goal: make it as easily maintainable as possible. To this end, HEP-specific software tools were avoided as much as possible and industry standard tools used instead. HTTP REST services were selected as the application interface, which provide a high-level interface to users through the use of standard libraries such as curl. The application interface itself is written in Java and runs in an embedded Payara-Micro Java EE application server. Scalability at the application interface is provided by use of Hazelcast, an open source In-Memory Data Grid (IMDG) providing distributed in-memory computing and supporting the creation and clustering of new application interface instances as demand increases. The IMDG provides fast and efficient access to conditions data via in-memory caching.
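
    Because the interface is plain HTTP REST, any standard client works; the abstract mentions curl, and the equivalent in Python is a few lines. The URL, query parameters and response fields below are placeholders of ours, not the actual Belle II endpoints.

        import requests

        # Hypothetical conditions-database endpoint and payload fields.
        url = "https://conditions.example.org/rest/v2/payloads"
        resp = requests.get(url, params={"globalTag": "main", "run": 1234}, timeout=30)
        resp.raise_for_status()
        for payload in resp.json():
            print(payload["name"], payload["revision"])  # assumed field names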

  1. BloodChIP: a database of comparative genome-wide transcription factor binding profiles in human blood cells.

    PubMed

    Chacon, Diego; Beck, Dominik; Perera, Dilmi; Wong, Jason W H; Pimanda, John E

    2014-01-01

    The BloodChIP database (http://www.med.unsw.edu.au/CRCWeb.nsf/page/BloodChIP) supports exploration and visualization of combinatorial transcription factor (TF) binding at a particular locus in human CD34-positive and other normal and leukaemic cells or retrieval of target gene sets for user-defined combinations of TFs across one or more cell types. Increasing numbers of genome-wide TF binding profiles are being added to public repositories, and this trend is likely to continue. For the power of these data sets to be fully harnessed by experimental scientists, there is a need for these data to be placed in context and easily accessible for downstream applications. To this end, we have built a user-friendly database that has at its core the genome-wide binding profiles of seven key haematopoietic TFs in human stem/progenitor cells. These binding profiles are compared with binding profiles in normal differentiated and leukaemic cells. We have integrated these TF binding profiles with chromatin marks and expression data in normal and leukaemic cell fractions. All queries can be exported into external sites to construct TF-gene and protein-protein networks and to evaluate the association of genes with cellular processes and tissue expression.

  2. MortalityPredictors.org: a manually-curated database of published biomarkers of human all-cause mortality

    PubMed Central

    Winslow, Ksenia; Ho, Andrew; Fortney, Kristen; Morgen, Eric

    2017-01-01

    Biomarkers of all-cause mortality are of tremendous clinical and research interest. Because of the long potential duration of prospective human lifespan studies, such biomarkers can play a key role in quantifying human aging and quickly evaluating any potential therapies. Decades of research into mortality biomarkers have resulted in numerous associations documented across hundreds of publications. Here, we present MortalityPredictors.org, a manually-curated, publicly accessible database, housing published, statistically-significant relationships between biomarkers and all-cause mortality in population-based or generally healthy samples. To gather the information for this database, we searched PubMed for appropriate research papers and then manually curated relevant data from each paper. We manually curated 1,576 biomarker associations, involving 471 distinct biomarkers. Biomarkers ranged in type from hematologic (red blood cell distribution width) to molecular (DNA methylation changes) to physical (grip strength). Via the web interface, the resulting data can be easily browsed, searched, and downloaded for further analysis. MortalityPredictors.org provides comprehensive results on published biomarkers of human all-cause mortality that can be used to compare biomarkers, facilitate meta-analysis, assist with the experimental design of aging studies, and serve as a central resource for analysis. We hope that it will facilitate future research into human mortality and aging. PMID:28858850

  3. SCPortalen: human and mouse single-cell centric database

    PubMed Central

    Noguchi, Shuhei; Böttcher, Michael; Hasegawa, Akira; Kouno, Tsukasa; Kato, Sachi; Tada, Yuhki; Ura, Hiroki; Abe, Kuniya; Shin, Jay W; Plessy, Charles; Carninci, Piero

    2018-01-01

    Published single-cell datasets are rich resources for investigators who want to address questions not originally asked by the creators of the datasets. The single-cell datasets may have been obtained with different protocols and diverse analysis strategies. The main challenge in utilizing such single-cell data is making the various large-scale datasets comparable and reusable in a different context. To address this issue, we developed the single-cell centric database ‘SCPortalen’ (http://single-cell.clst.riken.jp/). The current version of the database covers human and mouse single-cell transcriptomics datasets that are publicly available from the INSDC sites. The original metadata were manually curated and single-cell samples were annotated with standard ontology terms. Following that, common quality assessment procedures were conducted to check the quality of the raw sequences. Furthermore, primary processing of the raw data, followed by advanced analyses and interpretation, was performed from scratch using our pipeline. In addition to the transcriptomics data, SCPortalen provides access to single-cell image files whenever available. The target users of SCPortalen are all researchers interested in specific cell types or population heterogeneity. Through the web interface of SCPortalen, users can easily search, explore and download the single-cell datasets of their interest. PMID:29045713

  4. MortalityPredictors.org: a manually-curated database of published biomarkers of human all-cause mortality.

    PubMed

    Peto, Maximus V; De la Guardia, Carlos; Winslow, Ksenia; Ho, Andrew; Fortney, Kristen; Morgen, Eric

    2017-08-31

    Biomarkers of all-cause mortality are of tremendous clinical and research interest. Because of the long potential duration of prospective human lifespan studies, such biomarkers can play a key role in quantifying human aging and quickly evaluating any potential therapies. Decades of research into mortality biomarkers have resulted in numerous associations documented across hundreds of publications. Here, we present MortalityPredictors.org, a manually-curated, publicly accessible database, housing published, statistically-significant relationships between biomarkers and all-cause mortality in population-based or generally healthy samples. To gather the information for this database, we searched PubMed for appropriate research papers and then manually curated relevant data from each paper. We manually curated 1,576 biomarker associations, involving 471 distinct biomarkers. Biomarkers ranged in type from hematologic (red blood cell distribution width) to molecular (DNA methylation changes) to physical (grip strength). Via the web interface, the resulting data can be easily browsed, searched, and downloaded for further analysis. MortalityPredictors.org provides comprehensive results on published biomarkers of human all-cause mortality that can be used to compare biomarkers, facilitate meta-analysis, assist with the experimental design of aging studies, and serve as a central resource for analysis. We hope that it will facilitate future research into human mortality and aging.

  5. The plant phenological online database (PPODB): an online database for long-term phenological data

    NASA Astrophysics Data System (ADS)

    Dierenbach, Jonas; Badeck, Franz-W.; Schaber, Jörg

    2013-09-01

    We present an online database that provides unrestricted and free access to over 16 million plant phenological observations from over 8,000 stations in Central Europe between the years 1880 and 2009. Unique features are (1) a flexible and unrestricted access to a full-fledged database, allowing for a wide range of individual queries and data retrieval, (2) historical data for Germany before 1951 ranging back to 1880, and (3) more than 480 curated long-term time series covering more than 100 years for individual phenological phases and plants combined over Natural Regions in Germany. Time series for single stations or Natural Regions can be accessed through a user-friendly graphical geo-referenced interface. The joint databases made available with the plant phenological database PPODB render accessible an important data source for further analyses of long-term changes in phenology. The database can be accessed via www.ppodb.de.

  6. 47 CFR 15.711 - Interference avoidance methods.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... channel availability for a TVBD is determined based on the geo-location and database access method described in paragraphs (a) and (b) of this section. (a) Geo-location and database access. A TVBD shall rely on the geo-location and database access mechanism to identify available television channels...

  7. 47 CFR 15.711 - Interference avoidance methods.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... channel availability for a TVBD is determined based on the geo-location and database access method described in paragraphs (a) and (b) of this section. (a) Geo-location and database access. A TVBD shall rely on the geo-location and database access mechanism to identify available television channels...

  8. 47 CFR 15.711 - Interference avoidance methods.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... channel availability for a TVBD is determined based on the geo-location and database access method described in paragraphs (a) and (b) of this section. (a) Geo-location and database access. A TVBD shall rely on the geo-location and database access mechanism to identify available television channels...

  9. 47 CFR 15.711 - Interference avoidance methods.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... channel availability for a TVBD is determined based on the geo-location and database access method described in paragraphs (a) and (b) of this section. (a) Geo-location and database access. A TVBD shall rely on the geo-location and database access mechanism to identify available television channels...
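
    The geo-location and database access method in these records reduces to a simple exchange: the device reports its coordinates and receives the channels it may use. A minimal sketch, assuming a hypothetical database service and field names:

        import requests

        def available_channels(lat, lon):
            """Ask a (hypothetical) TV-bands database which channels a TVBD
            at the given coordinates may use."""
            resp = requests.get(
                "https://tvws-db.example.org/channels",  # placeholder endpoint
                params={"lat": lat, "lon": lon},
                timeout=10,
            )
            resp.raise_for_status()
            return resp.json()["available_channels"]  # e.g. [21, 24, 31]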

  10. NGL Viewer: a web application for molecular visualization

    PubMed Central

    Rose, Alexander S.; Hildebrand, Peter W.

    2015-01-01

    The NGL Viewer (http://proteinformatics.charite.de/ngl) is a web application for the visualization of macromolecular structures. By fully adopting capabilities of modern web browsers, such as WebGL, for molecular graphics, the viewer can interactively display large molecular complexes and is also unaffected by the retirement of third-party plug-ins like Flash and Java Applets. Generally, the web application offers comprehensive molecular visualization through a graphical user interface so that life scientists can easily access and profit from available structural data. It supports common structural file-formats (e.g. PDB, mmCIF) and a variety of molecular representations (e.g. ‘cartoon, spacefill, licorice’). Moreover, the viewer can be embedded in other web sites to provide specialized visualizations of entries in structural databases or results of structure-related calculations. PMID:25925569

  11. RayPlus: a Web-Based Platform for Medical Image Processing.

    PubMed

    Yuan, Rong; Luo, Ming; Sun, Zhi; Shi, Shuyue; Xiao, Peng; Xie, Qingguo

    2017-04-01

    Medical images can provide valuable information for preclinical research, clinical diagnosis, and treatment. With the widespread use of digital medical imaging, many researchers are currently developing medical image processing algorithms and systems to deliver better results to the clinical community, whether accurate clinical parameters or processed versions of the original images. In this paper, we propose a web-based platform to present and process medical images. Using Internet and novel database technologies, authorized users can easily access medical images and run their processing workflows on powerful server-side hardware without any local installation. We implemented a series of image processing and visualization algorithms in the initial version of RayPlus. The integration of our system allows much flexibility and convenience for both research and clinical communities.

  12. Automated rule-base creation via CLIPS-Induce

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick M.

    1994-01-01

    Many CLIPS rule-bases contain one or more rule groups that perform classification. In this paper we describe CLIPS-Induce, an automated system for the creation of a CLIPS classification rule-base from a set of test cases. CLIPS-Induce consists of two components, a decision tree induction component and a CLIPS production extraction component. ID3, a popular decision tree induction algorithm, is used to induce a decision tree from the test cases. CLIPS production extraction is accomplished through a top-down traversal of the decision tree. Nodes of the tree are used to construct query rules, and branches of the tree are used to construct classification rules. The learned CLIPS productions may easily be incorporated into a larger CLIPS system that performs tasks such as accessing a database or displaying information.
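
    The extraction step is essentially a top-down walk that turns every root-to-leaf path into one rule. The Python sketch below shows the idea with a toy tree encoding of our own; it emits rule text rather than actual CLIPS productions.

        def tree_to_rules(node, conditions=()):
            """Yield (conditions, class label) pairs, one per leaf of the tree."""
            if isinstance(node, str):            # leaf node: a class label
                yield (conditions, node)
            else:                                # internal node: (attribute, branches)
                attribute, branches = node
                for value, child in branches.items():
                    yield from tree_to_rules(child, conditions + ((attribute, value),))

        tree = ("outlook", {"sunny": ("humidity", {"high": "no", "normal": "yes"}),
                            "rain": "yes"})
        for conds, label in tree_to_rules(tree):
            print(" AND ".join(f"{a}={v}" for a, v in conds), "=>", label)
        # outlook=sunny AND humidity=high => no
        # outlook=sunny AND humidity=normal => yes
        # outlook=rain => yes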

  13. Engineering Information Infrastructure for Product Lifecycle Management

    NASA Astrophysics Data System (ADS)

    Kimura, Fumihiko

    For proper management of the total product life cycle, it is fundamentally important to systematize design and engineering information about product systems. For example, maintenance operations could be performed more efficiently if appropriate parts design information were available at the maintenance site. Such information should be available as an information infrastructure for various kinds of engineering operations, and it should be easily accessible during the whole product life cycle, including transportation, marketing, usage, repair/upgrade, take-back and recycling/disposal. Unlike a traditional engineering database, life cycle support information has several characteristic requirements, such as flexible extensibility, distributed architecture, multiple viewpoints, long-time archiving, and the inclusion of product usage information. Basic approaches for managing an engineering information infrastructure are investigated, and various information contents and associated life cycle applications are discussed.

  14. Deciding with the eye: how the visually manipulated accessibility of information in memory influences decision behavior.

    PubMed

    Platzer, Christine; Bröder, Arndt; Heck, Daniel W

    2014-05-01

    Decision situations are typically characterized by uncertainty: Individuals do not know the values of different options on a criterion dimension. For example, consumers do not know which is the healthiest of several products. To make a decision, individuals can use information about cues that are probabilistically related to the criterion dimension, such as sugar content or the concentration of natural vitamins. In two experiments, we investigated how the accessibility of cue information in memory affects which decision strategy individuals rely on. The accessibility of cue information was manipulated by means of a newly developed paradigm, the spatial-memory-cueing paradigm, which is based on a combination of the looking-at-nothing phenomenon and the spatial-cueing paradigm. The results indicated that people use different decision strategies, depending on the validity of easily accessible information. If the easily accessible information is valid, people stop information search and decide according to a simple take-the-best heuristic. If, however, information that comes to mind easily has a low predictive validity, people are more likely to integrate all available cue information in a compensatory manner.

  15. Freely Accessible Chemical Database Resources of Compounds for in Silico Drug Discovery.

    PubMed

    Yang, JingFang; Wang, Di; Jia, Chenyang; Wang, Mengyao; Hao, GeFei; Yang, GuangFu

    2018-05-07

    In silico drug discovery has proved to be a solidly established key component of early drug discovery. However, the task is hampered by limitations in the quantity and quality of compound databases for screening. To overcome these obstacles, freely accessible database resources of compounds have bloomed in recent years. Nevertheless, choosing appropriate tools to work with these freely accessible databases is crucial. To the best of our knowledge, this is the first systematic review of this issue. The advantages and drawbacks of the six categories of freely accessible chemical databases collected from the literature are analyzed and summarized in this review, and suggestions are provided on how, and under which conditions, the use of these databases is reasonable. Tools and procedures for building 3D-structure chemical libraries are also introduced. In this review, we describe the freely accessible chemical database resources for in silico drug discovery. In particular, the chemical information available for building chemical databases appears to be an attractive resource for drug design that can alleviate experimental pressure. Copyright © Bentham Science Publishers; for any queries, please email epub@benthamscience.org.

  16. Dcs Data Viewer, an Application that Accesses ATLAS DCS Historical Data

    NASA Astrophysics Data System (ADS)

    Tsarouchas, C.; Schlenker, S.; Dimitrov, G.; Jahn, G.

    2014-06-01

    The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. The Detector Control System (DCS) of ATLAS is responsible for the supervision of the detector equipment, the reading of operational parameters, the propagation of alarms and the archiving of important operational data in a relational database (DB). DCS Data Viewer (DDV) is an application that provides access to the ATLAS DCS historical data through a web interface. Its design is structured as a client-server architecture. The pythonic server connects to the DB and fetches the data using optimized SQL requests. It communicates with the outside world by accepting HTTP requests and can be used standalone. The client is an AJAX (Asynchronous JavaScript and XML) interactive web application developed under the Google Web Toolkit (GWT) framework. Its web interface is user friendly, platform and browser independent. The selection of metadata is done via a column-tree view or with a powerful search engine. The final visualization of the data is done using Java applets or JavaScript applications as plugins. The default output is a value-over-time chart, but other types of output such as tables, ASCII or ROOT files are supported too. Excessive access or malicious use of the database is prevented by a dedicated protection mechanism, protection against web security attacks is foreseen, and authentication constraints have been taken into account, allowing the exposure of the tool to hundreds of inexperienced users worldwide. The current configuration of the client and of the outputs can be saved in an XML file. Due to its flexible interface and its generic and modular approach, DDV could easily be used for other experiment control systems.

  17. CDSbank: taxonomy-aware extraction, selection, renaming and formatting of protein-coding DNA or amino acid sequences.

    PubMed

    Hazes, Bart

    2014-02-28

    Protein-coding DNA sequences and their corresponding amino acid sequences are routinely used to study relationships between sequence, structure, function, and evolution. The rapidly growing size of sequence databases increases the power of such comparative analyses but makes it more challenging to prepare high-quality sequence data sets with control over redundancy, quality, completeness, formatting, and labeling. Software tools for some individual steps in this process exist, but manual intervention remains a common and time-consuming necessity. CDSbank is a database that stores both the protein-coding DNA sequence (CDS) and amino acid sequence for each protein annotated in GenBank. CDSbank also stores GenBank feature annotation, a flag to indicate incomplete 5' and 3' ends, full taxonomic data, and a heuristic to rank the scientific interest of each species. This rich information allows fully automated data set preparation with a level of sophistication that aims to meet or exceed manual processing. Defaults ensure ease of use for typical scenarios while allowing great flexibility when needed. Access is via a free web server at http://hazeslab.med.ualberta.ca/CDSbank/. CDSbank presents a user-friendly web server to download, filter, format, and name large sequence data sets. Common usage scenarios can be accessed via pre-programmed default choices, while optional sections give full control over the processing pipeline. Particular strengths are: extraction of protein-coding DNA sequences just as easily as amino acid sequences, full access to taxonomy for labeling and filtering, awareness of incomplete sequences, and the ability to take one protein sequence and extract all synonymous CDS or identical protein sequences in other species. Finally, CDSbank can also create labeled property files to, for instance, annotate or re-label phylogenetic trees.

  18. Correlates of Access to Business Research Databases

    ERIC Educational Resources Information Center

    Gottfried, John C.

    2010-01-01

    This study examines potential correlates of business research database access through academic libraries serving top business programs in the United States. Results indicate that greater access to research databases is related to enrollment in graduate business programs, but not to overall enrollment or status as a public or private institution.…

  19. 47 CFR 51.217 - Nondiscriminatory access: Telephone numbers, operator services, directory assistance services...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... to have access to its directory assistance services, including directory assistance databases, so... provider, including transfer of the LECs' directory assistance databases in readily accessible magnetic.... Updates to the directory assistance database shall be made in the same format as the initial transfer...

  20. Interventional nephrology: Physical examination as a tool for surveillance for the hemodialysis arteriovenous access.

    PubMed

    Salman, Loay; Beathard, Gerald

    2013-07-01

    The prospective recognition and treatment of stenosis affecting dialysis vascular access is important in the management of the hemodialysis patient. Surveillance by physical examination is easily learned, easily performed, quickly done, and economical. In addition, it has a level of accuracy and reliability equivalent to other approaches that require special instrumentation. Physical examination should be part of the education of all hemodialysis caregivers. This review presents the basic principles of physical examination of the hemodialysis vascular access and discusses the evidence behind its value.

  1. Data Processing on Database Management Systems with Fuzzy Query

    NASA Astrophysics Data System (ADS)

    Şimşek, Irfan; Topuz, Vedat

    In this study, a fuzzy query tool (SQLf) for non-fuzzy database management systems was developed, and sample fuzzy queries were run on real data with it. The performance of SQLf was tested with data about Marmara University students' food grants. The food grant data were collected in a MySQL database through a web form in which students described their social and economic conditions for the food grant request; the form consists of questions with both fuzzy and crisp answers. The main purpose of the fuzzy query is to determine the students who deserve the grant. SQLf easily found the eligible students through predefined fuzzy values. The tool could just as easily be used with other database systems such as Oracle and SQL Server.
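
    The core of such a tool is a membership function that maps a crisp stored value to a degree of truth, which can then rank the rows an ordinary SQL query returns. A small Python sketch with invented thresholds and data:

        def mu_low_income(income):
            """Trapezoidal membership for 'low income': fully true below 500,
            fully false above 1500, linear in between (thresholds invented)."""
            if income <= 500:
                return 1.0
            if income >= 1500:
                return 0.0
            return (1500 - income) / 1000.0

        # Rows as they might come back from a crisp SELECT: (name, family income).
        students = [("Ali", 400), ("Ayse", 900), ("Can", 1600)]
        ranked = sorted(((mu_low_income(inc), name) for name, inc in students),
                        reverse=True)
        print([(name, round(mu, 2)) for mu, name in ranked if mu > 0])
        # [('Ali', 1.0), ('Ayse', 0.6)] -- graded candidates for the grant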

  2. 2DB: a Proteomics database for storage, analysis, presentation, and retrieval of information from mass spectrometric experiments.

    PubMed

    Allmer, Jens; Kuhlgert, Sebastian; Hippler, Michael

    2008-07-07

    The amount of information stemming from proteomics experiments involving (multi-dimensional) separation techniques, mass spectrometric analysis, and computational analysis is ever-increasing. Data from such an experimental workflow need to be captured, related and analyzed. Biological experiments within this scope produce heterogeneous data, ranging from images of one- or two-dimensional protein maps and spectra recorded by tandem mass spectrometry to text-based identifications made by algorithms which analyze these spectra. Additionally, peptide and corresponding protein information needs to be displayed. In order to handle the large amount of data from computational processing of mass spectrometric experiments, automatic import scripts are available and the necessity for manual input to the database has been minimized. Information is stored in a generic format which abstracts from the specific software tools typically used in such an experimental workflow. The software is therefore capable of storing and cross-analysing results from many algorithms. A novel feature and a focus of this database is to facilitate protein identification by using peptides identified from mass spectrometry and linking this information directly to the respective protein maps. Additionally, our application employs spectral counting for quantitative presentation of the data. All information can be linked to hot spots on images to place the results into an experimental context. A summary of identified proteins, containing all relevant information per hot spot, is automatically generated, usually upon either a change in the underlying protein models or due to newly imported identifications. The supporting information for this report can be accessed in multiple ways using the user interface provided by the application. We present a proteomics database which aims to greatly reduce the evaluation time of results from mass spectrometric experiments and to enhance result quality by allowing consistent data handling. Import functionality, automatic protein detection, and summary creation act together to facilitate data analysis. In addition, supporting information for these findings is readily accessible via the graphical user interface provided. The database schema and the implementation, which can easily be installed on virtually any server, can be downloaded in the form of a compressed file from our project webpage.
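
    Spectral counting, mentioned above as the quantitation method, is conceptually tiny: a protein's relative abundance is approximated by how many MS/MS spectra matched its peptides. A toy Python illustration with invented identifications:

        from collections import Counter

        # One entry per identified spectrum: which protein its peptide maps to.
        spectrum_to_protein = ["P1", "P1", "P2", "P1", "P2", "P3"]

        counts = Counter(spectrum_to_protein)
        total = sum(counts.values())
        for protein, n in counts.most_common():
            print(protein, n, f"{n / total:.0%}")  # e.g. P1 3 50%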

  3. Arabidopsis Gene Family Profiler (aGFP)--user-oriented transcriptomic database with easy-to-use graphic interface.

    PubMed

    Dupl'áková, Nikoleta; Renák, David; Hovanec, Patrik; Honysová, Barbora; Twell, David; Honys, David

    2007-07-23

    Microarray technologies now belong to the standard functional genomics toolbox and have undergone massive development leading to increased genome coverage, accuracy and reliability. The number of experiments exploiting microarray technology has markedly increased in recent years. In parallel with the rapid accumulation of transcriptomic data, on-line analysis tools are being introduced to simplify their use. Global statistical data analysis methods contribute to the development of overall concepts about gene expression patterns and to the querying and composition of working hypotheses. More recently, these applications are being supplemented with more specialized products offering visualization and specific data mining tools. We present a curated gene family-oriented gene expression database, Arabidopsis Gene Family Profiler (aGFP; http://agfp.ueb.cas.cz), which gives the user access to a large collection of normalised Affymetrix ATH1 microarray datasets. The database currently contains NASC Array and AtGenExpress transcriptomic datasets for various tissues at different developmental stages of wild type plants gathered from nearly 350 gene chips. The Arabidopsis GFP database has been designed as an easy-to-use tool for users needing an easily accessible resource for expression data of single genes, pre-defined gene families or custom gene sets, with the further possibility of keyword search. Arabidopsis Gene Family Profiler presents a user-friendly web interface using both graphic and text output. Data are stored on a MySQL server and individual queries are created in PHP scripts. The most distinctive features of the Arabidopsis Gene Family Profiler database are: 1) the presentation of normalized datasets (Affymetrix MAS algorithm and calculation of model-based gene-expression values based on the Perfect Match-only model); 2) the choice between two different normalization algorithms (Affymetrix MAS4 or MAS5); 3) an intuitive interface; 4) an interactive "virtual plant" visualizing the spatial and developmental expression profiles of both gene families and individual genes. Arabidopsis GFP gives users the possibility to analyze current Arabidopsis developmental transcriptomic data, starting with simple global queries that can be expanded and further refined to visualize comparative and highly selective gene expression profiles.

  4. Magnetic Moments in the Past: developing archaeomagnetic dating in the UK

    NASA Astrophysics Data System (ADS)

    Outram, Zoe; Batt, Catherine M.; Linford, Paul

    2010-05-01

    Magnetic studies of archaeological materials have a long history of development in the UK, and the data produced by these studies are a key component of global models of the geomagnetic field. However, archaeomagnetic dating is not a widely used dating technique in UK archaeology, despite the potential to produce archaeologically significant information that directly relates to human activity. This often means that opportunities to improve our understanding of the past geomagnetic field are lost, because archaeologists are unaware of the potential of the method. This presentation discusses a project by the University of Bradford, UK and English Heritage to demonstrate and communicate the potential of archaeomagnetic dating of archaeological materials for routine use within the UK. The aims of the project were achieved through the production of a website and a database for all current and past archaeomagnetic studies carried out in the UK. The website provides archaeologists with the information required to consider the use of archaeomagnetic dating, including a general introduction to the technique, the features that can be sampled, the precision that can be expected from the dates and how much it costs. In addition, all archaeomagnetic studies carried out in the UK have been collated into a database, allowing similar studies to be identified on the basis of the location of the sites, the archaeological period and type of feature sampled. This clearly demonstrates how effective archaeomagnetic dating has been in different archaeological situations. The locations of the sites have been mapped using Google Earth so that studies carried out in a particular region, or from a specific time period, can be easily identified. The database supports the continued development of archaeomagnetic dating in the UK, as the data required to construct the secular variation curves can be extracted easily. This allows the curves to be regularly updated following the production of new magnetic measurements. The information collated within the database will also be added to the global databases, such as MaGIC, contributing to the improvement of the global models of the geomagnetic field. This project demonstrates the benefits that the presentation of clear, accessible information and increased communication with archaeologists can have on the study of the geomagnetic field. It is also hoped that similar approaches will be introduced on a wider geographical scale in the future.

  5. Common Data Acquisition Systems (DAS) Software Development for Rocket Propulsion Test (RPT) Test Facilities

    NASA Technical Reports Server (NTRS)

    Hebert, Phillip W., Sr.; Davis, Dawn M.; Turowski, Mark P.; Holladay, Wendy T.; Hughes, Mark S.

    2012-01-01

    The advent of the commercial space launch industry and NASA's more recent resumption of operation of Stennis Space Center's large test facilities after thirty years of contractor control resulted in a need for non-proprietary data acquisition system (DAS) software to support government and commercial testing. The software is designed for modularity and adaptability to minimize the software development effort for current and future data systems. An additional benefit of the software's architecture is its ability to easily migrate to other testing facilities, thus providing future commonality across Stennis. Adapting the software to other Rocket Propulsion Test (RPT) Centers such as MSFC, White Sands, and Plumbrook Station would provide additional commonality and help reduce testing costs for NASA. Ultimately, the software provides the government with unlimited rights and guarantees privacy of data to commercial entities. The project engaged all RPT Centers and NASA's Independent Verification & Validation facility to enhance product quality. The design consists of a translation layer, which makes the underlying hardware transparent to the software application layers regardless of test facility location, and a flexible, easily accessible database. This presentation addresses the system's technical design, issues encountered, and the status of Stennis development and deployment.

  6. Cloud Based Drive Forensic and DDoS Analysis on Seafile as Case Study

    NASA Astrophysics Data System (ADS)

    Bahaweres, R. B.; Santo, N. B.; Ningsih, A. S.

    2017-01-01

    The rapid development of the Internet, with increasing data rates over both broadband cable networks and 4G wireless mobile, makes it easy for everyone to stay connected. Storage as a Service (StaaS) has become popular: many users want to store their data in one place so that, whenever they need it, they can easily access it anywhere, any place and anytime in the cloud. This also makes such services attractive for crime: cloud storage can be used to store, upload and download illegal files or documents, and the services themselves can be targets of Denial of Service (DoS) attacks. In this study, we implement a private cloud storage service using Seafile on a Raspberry Pi and run simulations in Local Area Network and Wi-Fi environments to analyze, forensically, whether a criminal act can be traced and proved. We identify, collect and analyze the artifacts of both server and client, such as the desktop client registry, the file system, the Seafile logs, the browser cache, and the database.

  7. NCBI2RDF: enabling full RDF-based access to NCBI databases.

    PubMed

    Anguita, Alberto; García-Remesal, Miguel; de la Iglesia, Diana; Maojo, Victor

    2013-01-01

    RDF has become the standard technology for enabling interoperability among heterogeneous biomedical databases. The NCBI provides access to a large set of life sciences databases through a common interface called Entrez. However, the latter does not provide RDF-based access to such databases, and, therefore, they cannot be integrated with other RDF-compliant databases and accessed via SPARQL query interfaces. This paper presents the NCBI2RDF system, aimed at providing RDF-based access to the complete NCBI data repository. This API creates a virtual endpoint for servicing SPARQL queries over the different NCBI repositories and presents the query results to users in SPARQL results format, thus enabling these data to be integrated and/or stored with other RDF-compliant repositories. SPARQL queries are dynamically resolved, decomposed, and forwarded to the NCBI-provided E-utilities programmatic interface to access the NCBI data. Furthermore, we show how our approach increases the expressiveness of the native NCBI querying system, allowing several databases to be accessed simultaneously. This feature significantly boosts productivity when working with complex queries and saves biomedical researchers time and effort. Our approach has been validated with a large number of SPARQL queries, thus proving its reliability and enhanced capabilities in biomedical environments.
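
    The generic client-side pattern is the same as for any SPARQL endpoint. The sketch below uses the SPARQLWrapper Python library with a placeholder endpoint URL and predicate, since the abstract does not give a public deployment address.

        from SPARQLWrapper import SPARQLWrapper, JSON

        sparql = SPARQLWrapper("http://ncbi2rdf.example.org/sparql")  # placeholder URL
        sparql.setQuery("""
            SELECT ?record ?title WHERE {
                ?record <http://example.org/hasTitle> ?title .   # placeholder predicate
            } LIMIT 10
        """)
        sparql.setReturnFormat(JSON)
        for row in sparql.query().convert()["results"]["bindings"]:
            print(row["record"]["value"], row["title"]["value"])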

  8. DBAASP v.2: an enhanced database of structure and antimicrobial/cytotoxic activity of natural and synthetic peptides

    PubMed Central

    Pirtskhalava, Malak; Gabrielian, Andrei; Cruz, Phillip; Griggs, Hannah L.; Squires, R. Burke; Hurt, Darrell E.; Grigolava, Maia; Chubinidze, Mindia; Gogoladze, George; Vishnepolsky, Boris; Alekseev, Vsevolod; Rosenthal, Alex; Tartakovsky, Michael

    2016-01-01

    Antimicrobial peptides (AMPs) are anti-infectives that may represent a novel and untapped class of biotherapeutics. Increasing interest in AMPs means that new peptides (natural and synthetic) are discovered faster than ever before. We describe herein a new version of the Database of Antimicrobial Activity and Structure of Peptides (DBAASP v.2), which is freely accessible at http://dbaasp.org. This iteration of the database reports chemical structures and empirically-determined activities (MICs, IC50, etc.) against more than 4200 specific target microbes for more than 2000 ribosomal, 80 non-ribosomal and 5700 synthetic peptides. Of these, the vast majority are monomeric, but nearly 200 of these peptides are found as homo- or heterodimers. More than 6100 of the peptides are linear, but about 515 are cyclic and more than 1300 have other intra-chain covalent bonds. More than half of the entries in the database were added after the resource was initially described, which reflects the recent sharp uptick of interest in AMPs. New features of DBAASP v.2 include: (i) user-friendly utilities and reporting functions, (ii) a ‘Ranking Search’ function to query the database by target species and return a ranked list of peptides with activity against that target and (iii) structural descriptions of the peptides derived from empirical data or calculated by molecular dynamics (MD) simulations. The three-dimensional structural data are critical components for understanding structure–activity relationships and for design of new antimicrobial drugs. We created more than 300 high-throughput MD simulations specifically for inclusion in DBAASP. The resulting structures are described in the database by novel trajectory analysis plots and movies. Another 200+ DBAASP entries have links to the Protein DataBank. All of the structures are easily visualized directly in the web browser. PMID:26578581

  9. OliveNet™: a comprehensive library of compounds from Olea europaea

    PubMed Central

    Bonvino, Natalie P; Liang, Julia; McCord, Elizabeth D; Zafiris, Elena; Benetti, Natalia; Ray, Nancy B; Hung, Andrew; Boskou, Dimitrios

    2018-01-01

    Accumulated epidemiological, clinical and experimental evidence has indicated the beneficial health effects of the Mediterranean diet, which is typified by the consumption of virgin olive oil (VOO) as the main source of dietary fat. At the cellular level, compounds derived from various olive (Olea europaea) matrices have demonstrated potent antioxidant and anti-inflammatory effects, which are thought to account, at least in part, for their biological effects. Research efforts are expanding into the characterization of compounds derived from Olea europaea; however, the considerable diversity and complexity of the vast array of chemical compounds have made their precise identification and quantification challenging. As such, only a relatively small subset of olive-derived compounds has been explored for biological activity and potential health effects to date. Although there is adequate information describing the identification or isolation of olive-derived compounds, it is not easily searchable, especially when attempting to acquire chemical or biological properties. Therefore, we have created the OliveNet™ database, a comprehensive catalogue of compounds identified in matrices of the olive, including the fruit, leaf and VOO, as well as in the wastewater and pomace accrued during oil production. Of a total of 752 compounds, chemical analysis was sufficient for 676 individual compounds, which have been included in the database. The database is curated and comprehensively referenced, containing information for the 676 compounds, which are divided into 13 main classes and 47 subclasses. Importantly, with respect to current research trends, the database includes 222 olive phenolics, divided into 13 subclasses. To our knowledge, OliveNet™ is currently the only curated open-access database with a comprehensive collection of compounds associated with Olea europaea. Database URL: https://www.mccordresearch.com.au PMID:29688352

  10. The Gene Set Builder: collation, curation, and distribution of sets of genes

    PubMed Central

    Yusuf, Dimas; Lim, Jonathan S; Wasserman, Wyeth W

    2005-01-01

    Background In bioinformatics and genomics, there are many applications designed to investigate the common properties of a set of genes. Often, these multi-gene analysis tools attempt to reveal sequential, functional, and expressional ties. However, while tremendous effort has been invested in developing tools that can analyze a set of genes, minimal effort has been invested in developing tools that can help researchers compile, store, and annotate gene sets in the first place. As a result, the process of making or accessing a set often involves tedious and time-consuming steps such as finding identifiers for each individual gene. These steps are often repeated extensively to shift from one identifier type to another, or to recreate a published set. In this paper, we present a simple online tool which, with the help of the gene catalogs Ensembl and GeneLynx, can help researchers build and annotate sets of genes quickly and easily. Description The Gene Set Builder is a database-driven, web-based tool designed to help researchers compile, store, export, and share sets of genes. This application supports the 17 eukaryotic genomes found in version 32 of the Ensembl database, which includes species from yeast to human. User-created information such as sets and customized annotations are stored to facilitate easy access. Gene sets stored in the system can be "exported" in a variety of output formats, as lists of identifiers, in tables, or as sequences. In addition, gene sets can be "shared" with specific users to facilitate collaborations or fully released to provide access to published results. The application also features a Perl API (Application Programming Interface) for direct connectivity to custom analysis tools. A downloadable Quick Reference guide and an online tutorial are available to help new users learn its functionalities. Conclusion The Gene Set Builder is an Ensembl-facilitated online tool designed to help researchers compile and manage sets of genes in a user-friendly environment. The application can be accessed via the project website. PMID:16371163

  11. BioSearch: a semantic search engine for Bio2RDF

    PubMed Central

    Qiu, Honglei; Huang, Jiacheng

    2017-01-01

    Biomedical data are growing at an incredible pace and require substantial expertise to organize in a manner that makes them easily findable, accessible, interoperable and reusable. Massive effort has been devoted to using Semantic Web standards and technologies to create a network of Linked Data for the life sciences, among others. However, while these data are accessible through programmatic means, effective user interfaces to SPARQL endpoints for non-experts are few and far between. Contributing to user frustration, data are not necessarily described using common vocabularies, thereby making it difficult to aggregate results, especially when they are distributed across multiple SPARQL endpoints. We propose BioSearch, a semantic search engine that uses ontologies to enhance federated query construction and organize search results. BioSearch also features a simplified query interface that allows users to optionally filter their keywords according to classes, properties and datasets. User evaluation demonstrated that BioSearch is more effective and usable than two state-of-the-art search and browsing solutions. Database URL: http://ws.nju.edu.cn/biosearch/ PMID:29220451

  12. The Role of NOAA's National Data Centers in the Earth and Space Science Infrastructure

    NASA Astrophysics Data System (ADS)

    Fox, C. G.

    2008-12-01

    NOAA's National Data Centers (NNDC) provide access to long-term archives of environmental data from NOAA and other sources. The NNDCs face significant challenges in the volume and complexity of modern data sets. Data volume challenges are being addressed using more capable archive systems such as the Comprehensive Large Array-Data Stewardship System (CLASS). Assuring data quality and stewardship is in many ways more challenging. In the past, scientists at the Data Centers could provide reasonable stewardship of data sets in their areas of expertise. As staff levels have decreased and data complexity has increased, the Data Centers depend on their data providers and user communities to provide high-quality metadata and feedback on data problems and improvements. This relationship requires strong partnerships between the NNDCs and academic, commercial, and international partners, as well as advanced data management and access tools that conform to established international standards where available. The NNDCs are looking to geospatial databases, interactive mapping, web services, and other Application Program Interface approaches to help preserve NNDC data and information and to make it easily available to the scientific community.

  13. OntologyWidget – a reusable, embeddable widget for easily locating ontology terms

    PubMed Central

    Beauheim, Catherine C; Wymore, Farrell; Nitzberg, Michael; Zachariah, Zachariah K; Jin, Heng; Skene, JH Pate; Ball, Catherine A; Sherlock, Gavin

    2007-01-01

    Background Biomedical ontologies are being widely used to annotate biological data in a computer-accessible, consistent and well-defined manner. However, due to their size and complexity, annotating data with appropriate terms from an ontology is often challenging for experts and non-experts alike, because there exist few tools that allow one to quickly find relevant ontology terms to easily populate a web form. Results We have produced a tool, OntologyWidget, which allows users to rapidly search for and browse ontology terms. OntologyWidget can easily be embedded in other web-based applications. OntologyWidget is written using AJAX (Asynchronous JavaScript and XML) and has two related elements. The first is a dynamic auto-complete ontology search feature. As a user enters characters into the search box, the appropriate ontology is queried remotely for terms that match the typed-in text, and the query results populate a drop-down list with all potential matches. Upon selection of a term from the list, the user can locate this term within a generic and dynamic ontology browser, which comprises the second element of the tool. The ontology browser shows the paths from a selected term to the root as well as parent/child tree hierarchies. We have implemented web services at the Stanford Microarray Database (SMD), which provide the OntologyWidget with access to over 40 ontologies from the Open Biological Ontology (OBO) website [1]. Each ontology is updated weekly. Adopters of the OntologyWidget can either use SMD's web services, or elect to rely on their own. Deploying the OntologyWidget can be accomplished in three simple steps: (1) install Apache Tomcat [2] on one's web server, (2) download and install the OntologyWidget servlet stub that provides access to the SMD ontology web services, and (3) create an html (HyperText Markup Language) file that refers to the OntologyWidget using a simple, well-defined format. Conclusion We have developed OntologyWidget, an easy-to-use ontology search and display tool that can be used on any web page by creating a simple html description. OntologyWidget provides a rapid auto-complete search function paired with an interactive tree display. We have developed a web service layer that communicates between the web page interface and a database of ontology terms. We currently store 40 of the ontologies from the OBO website [1], as well as several others. These ontologies are automatically updated on a weekly basis. OntologyWidget can be used in any web-based application to take advantage of the ontologies we provide via web services or any other ontology that is provided elsewhere in the correct format. The full source code for the JavaScript and description of the OntologyWidget is available from . PMID:17854506
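
    The widget's two elements, prefix auto-completion and a parent/child browser, reduce to simple operations over a term list and a parent map. Below is a toy Python sketch with an invented three-term ontology fragment, standing in for what the remote services compute:

        # Toy ontology fragment: child -> parent (real data would come
        # from the SMD web services the widget queries).
        PARENT = {
            "mitochondrion": "organelle",
            "organelle": "cellular component",
            "cellular component": None,  # root
        }

        def autocomplete(prefix, terms=PARENT):
            """Return terms matching the typed-in text, as the widget's
            drop-down list does."""
            p = prefix.lower()
            return sorted(t for t in terms if t.startswith(p))

        def path_to_root(term):
            """Walk parent links upward, as the browser's path display does."""
            path = []
            while term is not None:
                path.append(term)
                term = PARENT.get(term)
            return path

        print(autocomplete("mito"))           # ['mitochondrion']
        print(path_to_root("mitochondrion"))  # ['mitochondrion', 'organelle', 'cellular component']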

  14. OntologyWidget - a reusable, embeddable widget for easily locating ontology terms.

    PubMed

    Beauheim, Catherine C; Wymore, Farrell; Nitzberg, Michael; Zachariah, Zachariah K; Jin, Heng; Skene, J H Pate; Ball, Catherine A; Sherlock, Gavin

    2007-09-13

    Biomedical ontologies are being widely used to annotate biological data in a computer-accessible, consistent and well-defined manner. However, due to their size and complexity, annotating data with appropriate terms from an ontology is often challenging for experts and non-experts alike, because there exist few tools that allow one to quickly find relevant ontology terms to easily populate a web form. We have produced a tool, OntologyWidget, which allows users to rapidly search for and browse ontology terms. OntologyWidget can easily be embedded in other web-based applications. OntologyWidget is written using AJAX (Asynchronous JavaScript and XML) and has two related elements. The first is a dynamic auto-complete ontology search feature. As a user enters characters into the search box, the appropriate ontology is queried remotely for terms that match the typed-in text, and the query results populate a drop-down list with all potential matches. Upon selection of a term from the list, the user can locate this term within a generic and dynamic ontology browser, which comprises the second element of the tool. The ontology browser shows the paths from a selected term to the root as well as parent/child tree hierarchies. We have implemented web services at the Stanford Microarray Database (SMD), which provide the OntologyWidget with access to over 40 ontologies from the Open Biological Ontology (OBO) website 1. Each ontology is updated weekly. Adopters of the OntologyWidget can either use SMD's web services, or elect to rely on their own. Deploying the OntologyWidget can be accomplished in three simple steps: (1) install Apache Tomcat 2 on one's web server, (2) download and install the OntologyWidget servlet stub that provides access to the SMD ontology web services, and (3) create an html (HyperText Markup Language) file that refers to the OntologyWidget using a simple, well-defined format. We have developed OntologyWidget, an easy-to-use ontology search and display tool that can be used on any web page by creating a simple html description. OntologyWidget provides a rapid auto-complete search function paired with an interactive tree display. We have developed a web service layer that communicates between the web page interface and a database of ontology terms. We currently store 40 of the ontologies from the OBO website 1, as well as several others. These ontologies are automatically updated on a weekly basis. OntologyWidget can be used in any web-based application to take advantage of the ontologies we provide via web services or any other ontology that is provided elsewhere in the correct format. The full source code for the JavaScript and description of the OntologyWidget is available from http://smd.stanford.edu/ontologyWidget/.

  15. Operational Experience with the Frontier System in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumenfeld, Barry; Dykstra, Dave; Kreuzer, Peter

    2012-06-20

    The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each central server at CERN, called a Frontier Launchpad, uses tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, located close to clients, cache the responses to queries in order to provide high performance data access and to reduce the load on the central Oracle database. Each Frontier Launchpad also has its own reverse-proxy Squid for caching. The three central servers have been delivering about 5 million responses every day since the LHC startup, containing about 40 GB data in total, to more than one hundred Squid servers located worldwide, with an average response time on the order of 10 milliseconds. The Squid caches deployed worldwide process many more requests per day, over 700 million, and deliver over 40 TB of data. Several monitoring tools of the tomcat log files, the accesses of the Squids on the central Launchpad servers, and the availability of remote Squids have been developed to guarantee the performance of the service and make the system easily maintainable. Following a brief introduction of the Frontier framework, we describe the performance of this highly reliable and stable system, detail monitoring concerns and their deployment, and discuss the overall operational experience from the first two years of LHC data-taking.
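
    The caching layer described here works because Frontier speaks ordinary HTTP. Below is a Python sketch of a client request routed through a nearby Squid; the host names and query parameter are placeholders, not the actual Frontier protocol:

        import requests

        # Placeholder host names; Frontier clients send HTTP GETs whose
        # responses any intermediate Squid can cache.
        proxies = {"http": "http://local-squid.example.org:3128"}
        url = "http://frontier-launchpad.example.org/Frontier/query"

        resp = requests.get(url, params={"p1": "encoded-conditions-query"},
                            proxies=proxies, timeout=10)
        print(resp.status_code, len(resp.content))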

  16. The VO-Dance web application at the IA2 data center

    NASA Astrophysics Data System (ADS)

    Molinaro, Marco; Knapic, Cristina; Smareglia, Riccardo

    2012-09-01

    Italian center for Astronomical Archives (IA2, http://ia2.oats.inaf.it) is a national infrastructure project of the Italian National Institute for Astrophysics (Istituto Nazionale di AstroFisica, INAF) that provides services for the astronomical community. Besides data hosting for the Large Binocular Telescope (LBT) Corporation, the Galileo National Telescope (Telescopio Nazionale Galileo, TNG) Consortium and other telescopes and instruments, IA2 offers proprietary and public data access through user portals (both developed and mirrored) and deploys resources complying with the Virtual Observatory (VO) standards. Archiving systems and web interfaces are developed to be extremely flexible about adding new instruments from other telescopes. VO resource publishing, along with data access portals, implements the International Virtual Observatory Alliance (IVOA) protocols, providing astronomers with new ways of analyzing data. Given the large variety of data flavours and IVOA standards, the need for tools to easily accomplish data ingestion and data publishing arises. This paper describes the VO-Dance tool, which IA2 started developing to address VO resource publishing in a dynamical way from already existent database tables or views. The tool consists of a Java web application, potentially DBMS and platform independent, that internally stores the services' metadata and information, exposes RESTful endpoints to accept VO queries for these services, and dynamically translates calls to these endpoints into SQL queries coherent with the published table or view. In response to the call, VO-Dance translates the database answer back in a VO-compliant way.
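
    The core of the tool is the translation of a VO-style call into SQL over an existing table or view. A rough Python sketch of that idea for a cone-search-like request follows; the table and column names are invented, and a real service would use true angular distance rather than this bounding-box shortcut:

        # Invented table/column names; VO-Dance's real translation is
        # driven by stored service metadata, not hard-coded strings.
        def cone_search_sql(table, ra, dec, radius):
            """Rewrite an (ra, dec, radius) request as SQL. A bounding box
            is used here for brevity; a real cone search needs angular
            distance."""
            return (
                f"SELECT * FROM {table} "
                f"WHERE ra BETWEEN {ra - radius} AND {ra + radius} "
                f"AND dec BETWEEN {dec - radius} AND {dec + radius}"
            )

        print(cone_search_sql("lbt_images", ra=150.0, dec=2.2, radius=0.05))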

  17. Where's My Data - WMD

    NASA Technical Reports Server (NTRS)

    Quach, William L.; Sesplaukis, Tadas; Owen-Mankovich, Kyran J.; Nakamura, Lori L.

    2012-01-01

    WMD provides a centralized interface to access data stored in the Mission Data Processing and Control System (MPCS) GDS (Ground Data Systems) databases during MSL (Mars Science Laboratory) Testbeds and ATLO (Assembly, Test, and Launch Operations) test sessions. The MSL project organizes its data based on venue (Testbed, ATLO, Ops), with each venue's data stored on a separate database, making it cumbersome for users to access data across the various venues. WMD allows sessions to be retrieved through a Web-based search using several criteria: host name, session start date, or session ID number. Sessions matching the search criteria will be displayed and users can then select a session to obtain and analyze the associated data. The uniqueness of this software comes from its collection of data retrieval and analysis features provided through a single interface. This allows users to obtain their data and perform the necessary analysis without having to worry about where and how to get the data, which may be stored in various locations. Additionally, this software is a Web application that only requires a standard browser without additional plug-ins, providing a cross-platform, lightweight solution for users to retrieve and analyze their data. This software solves the problem of efficiently and easily finding and retrieving data from thousands of MSL Testbed and ATLO sessions. WMD allows the user to retrieve their session in as little as one mouse click, and then to quickly retrieve additional data associated with the session.

  18. pvsR: An Open Source Interface to Big Data on the American Political Sphere

    PubMed Central

    2015-01-01

    Digital data from the political sphere is abundant, omnipresent, and more and more directly accessible through the Internet. Project Vote Smart (PVS) is a prominent example of this big public data and covers various aspects of U.S. politics in astonishing detail. Despite the vast potential of PVS’ data for political science, economics, and sociology, it is hardly used in empirical research. The systematic compilation of semi-structured data can be complicated and time consuming as the data format is not designed for conventional scientific research. This paper presents a new tool that makes the data easily accessible to a broad scientific community. We provide the software called pvsR as an add-on to the R programming environment for statistical computing. This open source interface (OSI) serves as a direct link between a statistical analysis and the large PVS database. The free and open code is expected to substantially reduce the cost of research with PVS’ new big public data in a vast variety of possible applications. We discuss its advantages vis-à-vis traditional methods of data generation as well as already existing interfaces. The validity of the library is documented based on an illustration involving female representation in local politics. In addition, pvsR facilitates the replication of research with PVS data at low costs, including the pre-processing of data. Similar OSIs are recommended for other big public databases. PMID:26132154

  19. Operational Experience with the Frontier System in CMS

    NASA Astrophysics Data System (ADS)

    Blumenfeld, Barry; Dykstra, Dave; Kreuzer, Peter; Du, Ran; Wang, Weizhen

    2012-12-01

    The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each central server at CERN, called a Frontier Launchpad, uses tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, located close to clients, cache the responses to queries in order to provide high performance data access and to reduce the load on the central Oracle database. Each Frontier Launchpad also has its own reverse-proxy Squid for caching. The three central servers have been delivering about 5 million responses every day since the LHC startup, containing about 40 GB data in total, to more than one hundred Squid servers located worldwide, with an average response time on the order of 10 milliseconds. The Squid caches deployed worldwide process many more requests per day, over 700 million, and deliver over 40 TB of data. Several monitoring tools of the tomcat log files, the accesses of the Squids on the central Launchpad servers, and the availability of remote Squids have been developed to guarantee the performance of the service and make the system easily maintainable. Following a brief introduction of the Frontier framework, we describe the performance of this highly reliable and stable system, detail monitoring concerns and their deployment, and discuss the overall operational experience from the first two years of LHC data-taking.

  20. DoD Identity Matching Engine for Security and Analysis (IMESA) Access to Criminal Justice Information (CJI) and Terrorist Screening Databases (TSDB)

    DTIC Science & Technology

    2016-05-04

    IMESA) Access to Criminal Justice Information (CJI) and Terrorist Screening Databases (TSDB) References: See Enclosure 1 1. PURPOSE. In...CJI database mirror image files. (3) Memorandums of understanding with the FBI CJIS as the data broker for DoD organizations that need access ...not for access determinations. (3) Legal restrictions established by the Sex Offender Registration and Notification Act (SORNA) jurisdictions on

  1. Technical and Organizational Considerations for the Long-Term Maintenance and Development of Digital Brain Atlases and Web-Based Databases

    PubMed Central

    Ito, Kei

    2010-01-01

    A digital brain atlas is a kind of image database that specifically provides information about neurons and glial cells in the brain. It has various advantages that are unmatched by conventional paper-based atlases. Such advantages, however, may become disadvantages if appropriate care is not taken. Because digital atlases can provide an unlimited amount of data, they should be designed to minimize redundancy and keep the records consistent, as records may be added incrementally by different staff members. The fact that digital atlases can easily be revised necessitates a system to assure that users can access previous versions that might have been cited in papers at a particular period. To pass our knowledge on to our descendants, such databases should be maintained for a very long period, well over 100 years, like printed books and papers. Technical and organizational measures to enable long-term archiving should be considered seriously. Compared to the initial development of the database, subsequent efforts to increase the quality and quantity of its contents are not regarded highly, because such tasks do not materialize in the form of publications. This fact strongly discourages continuous expansion of, and external contributions to, digital atlases after their initial launch. To solve these problems, the role of biocurators is vital. Appreciation of the scientific achievements of people who do not write papers, and the establishment of a secure academic career path for them, are indispensable for recruiting talent for this very important job. PMID:20661458

  2. SLIMS--a user-friendly sample operations and inventory management system for genotyping labs.

    PubMed

    Van Rossum, Thea; Tripp, Ben; Daley, Denise

    2010-07-15

    We present the Sample-based Laboratory Information Management System (SLIMS), a powerful and user-friendly open source web application that provides all members of a laboratory with an interface to view, edit and create sample information. SLIMS aims to simplify common laboratory tasks with tools such as a user-friendly shopping cart for subjects, samples and containers that easily generates reports, shareable lists and plate designs for genotyping. Further key features include customizable data views, database change-logging and dynamically filled pre-formatted reports. Along with being feature-rich, SLIMS' power comes from being able to handle longitudinal data from multiple time-points and biological sources. This type of data is increasingly common from studies searching for susceptibility genes for common complex diseases that collect thousands of samples generating millions of genotypes and overwhelming amounts of data. LIMSs provide an efficient way to deal with this data while increasing accessibility and reducing laboratory errors; however, professional LIMS are often too costly to be practical. SLIMS gives labs a feasible alternative that is easily accessible, user-centrically designed and feature-rich. To facilitate system customization, and utilization for other groups, manuals have been written for users and developers. Documentation, source code and manuals are available at http://genapha.icapture.ubc.ca/SLIMS/index.jsp. SLIMS was developed using Java 1.6.0, JSPs, Hibernate 3.3.1.GA, DB2 and mySQL, Apache Tomcat 6.0.18, NetBeans IDE 6.5, Jasper Reports 3.5.1 and JasperSoft's iReport 3.5.1.

  3. WaveNet: A Web-Based Metocean Data Access, Processing and Analysis Tool. Part 4 - GLOS/GLCFS Database

    DTIC Science & Technology

    2014-06-01

    and Coastal Data Information Program ( CDIP ). This User’s Guide includes step-by-step instructions for accessing the GLOS/GLCFS database via WaveNet...access, processing and analysis tool; part 3 – CDIP database. ERDC/CHL CHETN-xx-14. Vicksburg, MS: U.S. Army Engineer Research and Development Center

  4. Evaluation of an Online Instructional Database Accessed by QR Codes to Support Biochemistry Practical Laboratory Classes

    ERIC Educational Resources Information Center

    Yip, Tor; Melling, Louise; Shaw, Kirsty J.

    2016-01-01

    An online instructional database containing information on commonly used pieces of laboratory equipment was created. In order to make the database highly accessible and to promote its use, QR codes were utilized. The instructional materials were available anytime and accessed using QR codes located on the equipment itself and within undergraduate…

  5. Design and implementation of a web directory for medical education (WDME): a tool to facilitate research in medical education.

    PubMed

    Changiz, Tahereh; Haghani, Fariba; Masoomi, Rasoul

    2012-01-01

    Access to medical resources on the web is one of the current challenges for researchers and medical science educators. The purpose of the current project was to design and implement a comprehensive, subject-specific web directory for medical education. First, the categories to be incorporated in the directory were defined through reviewing related directories and obtaining medical education experts' opinions in a focus group. Then, a number of sources such as (meta)search engines, subject directories, databases and library catalogs were searched or browsed to select and collect high-quality resources. Finally, the website was designed and the resources were entered into the directory. The main categories incorporating WDME resources are: Journals, Organizations, Best Evidence in Medical Education, and Textbooks. Each category is divided into sub-categories, and the related resources of each category are briefly described within it. The resources in this directory can be accessed both by browsing and by keyword searching. WDME is accessible at http://medirectory.org. The innovative Web Directory for Medical Education (WDME) presented in this paper is more comprehensive than other existing directories, and expandable through user suggestions. It may help medical educators find the resources they need more quickly and easily, and hence make more informed decisions in education.

  6. Hierarchical data security in a Query-By-Example interface for a shared database.

    PubMed

    Taylor, Merwyn

    2002-06-01

    Whenever a shared database resource, containing critical patient data, is created, protecting the contents of the database is a high priority goal. This goal can be achieved by developing a Query-By-Example (QBE) interface, designed to access a shared database, and embedding within the QBE a hierarchical security module that limits access to the data. The security module ensures that researchers working in one clinic do not get access to data from another clinic. The security can be based on a flexible taxonomy structure that allows ordinary users to access data from individual clinics and super users to access data from all clinics. All researchers submit queries through the same interface and the security module processes the taxonomy and user identifiers to limit access. Using this system, two different users with different access rights can submit the same query and get different results thus reducing the need to create different interfaces for different clinics and access rights.
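
    The mechanism described (one interface, per-user taxonomy-based filtering) can be sketched in a few lines of Python; the user names, clinic labels and rows below are invented for illustration:

        # Sketch of taxonomy-based result filtering: ordinary users see
        # one clinic, super users see all.
        ACCESS = {
            "alice": {"clinic_a"},              # ordinary user
            "admin": {"clinic_a", "clinic_b"},  # super user
        }

        ROWS = [
            {"clinic": "clinic_a", "patient": 1},
            {"clinic": "clinic_b", "patient": 2},
        ]

        def run_query(user, rows=ROWS):
            """Same query, different results, depending on access rights."""
            allowed = ACCESS.get(user, set())
            return [r for r in rows if r["clinic"] in allowed]

        print(run_query("alice"))  # only clinic_a rows
        print(run_query("admin"))  # all rows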

  7. NCBI2RDF: Enabling Full RDF-Based Access to NCBI Databases

    PubMed Central

    Anguita, Alberto; García-Remesal, Miguel; de la Iglesia, Diana; Maojo, Victor

    2013-01-01

    RDF has become the standard technology for enabling interoperability among heterogeneous biomedical databases. The NCBI provides access to a large set of life sciences databases through a common interface called Entrez. However, the latter does not provide RDF-based access to such databases, and, therefore, they cannot be integrated with other RDF-compliant databases and accessed via SPARQL query interfaces. This paper presents the NCBI2RDF system, aimed at providing RDF-based access to the complete NCBI data repository. This API creates a virtual endpoint for servicing SPARQL queries over different NCBI repositories and presenting to users the query results in SPARQL results format, thus enabling this data to be integrated and/or stored with other RDF-compliant repositories. SPARQL queries are dynamically resolved, decomposed, and forwarded to the NCBI-provided E-utilities programmatic interface to access the NCBI data. Furthermore, we show how our approach increases the expressiveness of the native NCBI querying system, allowing several databases to be accessed simultaneously. This feature significantly boosts productivity when working with complex queries and saves time and effort to biomedical researchers. Our approach has been validated with a large number of SPARQL queries, thus proving its reliability and enhanced capabilities in biomedical environments. PMID:23984425
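
    For comparison with the SPARQL layer described above, the E-utilities interface that NCBI2RDF forwards decomposed queries to can be called directly over HTTP. A minimal Python example follows; the search term is arbitrary:

        import requests

        # Plain esearch call to NCBI's E-utilities, the programmatic
        # interface that NCBI2RDF resolves SPARQL queries against.
        resp = requests.get(
            "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
            params={"db": "pubmed", "term": "RDF biomedical", "retmode": "json"},
            timeout=10,
        )
        print(resp.json()["esearchresult"]["idlist"])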

  8. Accessing Cloud Properties and Satellite Imagery: A tool for visualization and data mining

    NASA Astrophysics Data System (ADS)

    Chee, T.; Nguyen, L.; Minnis, P.; Spangenberg, D.; Palikonda, R.

    2016-12-01

    Providing public access to imagery of cloud macro- and microphysical properties and the underlying satellite imagery is a key concern for the NASA Langley Research Center Cloud and Radiation Group. This work describes a tool and system that allows end users to easily browse cloud information and satellite imagery that is otherwise difficult to acquire and manipulate. The tool has two uses, one to visualize the data and the other to access the data directly. It uses a widely used access protocol, the Open Geospatial Consortium's Web Map and Processing Services, to encourage users to access the data we produce. Internally, we leverage our practical experience with large, scalable applications to develop a system with the greatest potential for scalability as well as the ability to be deployed on the cloud. One goal of the tool is to provide a demonstration of the back-end capability to end users so that they can use the dynamically generated imagery and data as an input to their own workflows or to set up data mining constraints. We build upon the NASA Langley Cloud and Radiation Group's experience with making real-time and historical satellite cloud product information and satellite imagery accessible and easily searchable. Increasingly, information is used in a "mash-up" form where multiple sources of information are combined to add value to disparate but related information. In support of NASA strategic goals, our group aims to make as much cutting-edge scientific knowledge, observations and products available to the citizen science, research and interested communities for these kinds of "mash-ups" as well as provide a means for automated systems to data mine our information. This tool and access method serves a wide audience, both as a standalone research tool and as an easily accessed data source that can be mined or used with existing tools.
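
    Because access goes through the OGC Web Map Service protocol, any HTTP client can request imagery with the standard GetMap parameters. A Python sketch follows; the host, layer name and bounding box are placeholders rather than the group's actual service details:

        import requests

        # Standard OGC WMS GetMap parameters; the host and layer name
        # are placeholders, not the group's actual service address.
        params = {
            "SERVICE": "WMS",
            "VERSION": "1.3.0",
            "REQUEST": "GetMap",
            "LAYERS": "cloud_top_temperature",
            "CRS": "EPSG:4326",
            "BBOX": "-90,-180,90,180",
            "WIDTH": 1024,
            "HEIGHT": 512,
            "FORMAT": "image/png",
        }
        resp = requests.get("https://wms.example.gov/clouds", params=params,
                            timeout=30)
        with open("clouds.png", "wb") as f:
            f.write(resp.content)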

  9. Using linked administrative and disease-specific databases to study end-of-life care on a population level.

    PubMed

    Maetens, Arno; De Schreye, Robrecht; Faes, Kristof; Houttekier, Dirk; Deliens, Luc; Gielen, Birgit; De Gendt, Cindy; Lusyne, Patrick; Annemans, Lieven; Cohen, Joachim

    2016-10-18

    The use of full-population databases is under-explored to study the use, quality and costs of end-of-life care. Using the case of Belgium, we explored: (1) which full-population databases provide valid information about end-of-life care, (2) what procedures are there to use these databases, and (3) what is needed to integrate separate databases. Technical and privacy-related aspects of linking and accessing Belgian administrative databases and disease registries were assessed in cooperation with the database administrators and privacy commission bodies. For all relevant databases, we followed procedures in cooperation with database administrators to link the databases and to access the data. We identified several databases as fitting for end-of-life care research in Belgium: the InterMutualistic Agency's national registry of health care claims data, the Belgian Cancer Registry including data on incidence of cancer, and databases administrated by Statistics Belgium including data from the death certificate database, the socio-economic survey and fiscal data. To obtain access to the data, approval was required from all database administrators, supervisory bodies and two separate national privacy bodies. Two Trusted Third Parties linked the databases via a deterministic matching procedure using multiple encrypted social security numbers. In this article we describe how various routinely collected population-level databases and disease registries can be accessed and linked to study patterns in the use, quality and costs of end-of-life care in the full population and in specific diagnostic groups.
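
    The deterministic matching step, linking records via encrypted social security numbers, can be illustrated with a salted hash join. The Python sketch below uses invented identifiers and fields and is not the Trusted Third Parties' actual procedure:

        import hashlib

        # Invented salt and records; deterministic linkage only requires
        # that both databases derive the same pseudonym from the same ID.
        SALT = b"project-specific-secret"

        def pseudonym(ssn: str) -> str:
            return hashlib.sha256(SALT + ssn.encode()).hexdigest()

        claims = {pseudonym("790101-123-45"): {"gp_visits": 12}}
        deaths = {pseudonym("790101-123-45"): {"place_of_death": "home"}}

        linked = {k: {**claims[k], **deaths[k]} for k in claims if k in deaths}
        print(linked)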

  10. Database of Geoscientific References Through 2007 for Afghanistan, Version 2

    USGS Publications Warehouse

    Eppinger, Robert G.; Sipeki, Julianna; Scofield, M.L. Sco

    2007-01-01

    This report describes an accompanying Microsoft Access 2003 database of geoscientific references for the country of Afghanistan. The reference compilation is part of a larger joint study of Afghanistan's energy, mineral, and water resources, and geologic hazards, currently underway by the U.S. Geological Survey, the British Geological Survey, and the Afghanistan Geological Survey. The database includes both published (n = 2,462) and unpublished (n = 174) references compiled through September 2007. The references comprise two separate tables in the Access database. The reference database includes a user-friendly, keyword-searchable interface, and only minimal knowledge of Microsoft Access is required.

  11. A social-ecological database to advance research on infrastructure development impacts in the Brazilian Amazon.

    PubMed

    Tucker Lima, Joanna M; Valle, Denis; Moretto, Evandro Mateus; Pulice, Sergio Mantovani Paiva; Zuca, Nadia Lucia; Roquetti, Daniel Rondinelli; Beduschi, Liviam Elizabeth Cordeiro; Praia, Amanda Salles; Okamoto, Claudia Parucce Franco; da Silva Carvalhaes, Vinicius Leite; Branco, Evandro Albiach; Barbezani, Bruna; Labandera, Emily; Timpe, Kelsie; Kaplan, David

    2016-08-30

    Recognized as one of the world's most vital natural and cultural resources, the Amazon faces a wide variety of threats from natural resource and infrastructure development. Within this context, rigorous scientific study of the region's complex social-ecological system is critical to inform and direct decision-making toward more sustainable environmental and social outcomes. Given the Amazon's tightly linked social and ecological components and the scope of potential development impacts, effective study of this system requires an easily accessible resource that provides a broad and reliable data baseline. This paper brings together multiple datasets from diverse disciplines (including human health, socio-economics, environment, hydrology, and energy) to provide investigators with a variety of baseline data to explore the multiple long-term effects of infrastructure development in the Brazilian Amazon.

  12. A social-ecological database to advance research on infrastructure development impacts in the Brazilian Amazon

    PubMed Central

    Tucker Lima, Joanna M.; Valle, Denis; Moretto, Evandro Mateus; Pulice, Sergio Mantovani Paiva; Zuca, Nadia Lucia; Roquetti, Daniel Rondinelli; Beduschi, Liviam Elizabeth Cordeiro; Praia, Amanda Salles; Okamoto, Claudia Parucce Franco; da Silva Carvalhaes, Vinicius Leite; Branco, Evandro Albiach; Barbezani, Bruna; Labandera, Emily; Timpe, Kelsie; Kaplan, David

    2016-01-01

    Recognized as one of the world’s most vital natural and cultural resources, the Amazon faces a wide variety of threats from natural resource and infrastructure development. Within this context, rigorous scientific study of the region’s complex social-ecological system is critical to inform and direct decision-making toward more sustainable environmental and social outcomes. Given the Amazon’s tightly linked social and ecological components and the scope of potential development impacts, effective study of this system requires an easily accessible resource that provides a broad and reliable data baseline. This paper brings together multiple datasets from diverse disciplines (including human health, socio-economics, environment, hydrology, and energy) to provide investigators with a variety of baseline data to explore the multiple long-term effects of infrastructure development in the Brazilian Amazon. PMID:27575915

  13. Taking control of your digital library: how modern citation managers do more than just referencing.

    PubMed

    Mahajan, Amit K; Hogarth, D Kyle

    2013-12-01

    Physicians are constantly navigating the overwhelming body of medical literature available on the Internet. Although early citation managers were capable of limited searching of index databases and tedious bibliography production, modern versions of citation managers such as EndNote, Zotero, and Mendeley are powerful web-based tools for searching, organizing, and sharing medical literature. Effortless point-and-click functions provide physicians with the ability to develop robust digital libraries filled with literature relevant to their fields of interest. In addition to easily creating manuscript bibliographies, various citation managers allow physicians to readily access medical literature, share references for teaching purposes, collaborate with colleagues, and even participate in social networking. If physicians are willing to invest the time to familiarize themselves with modern citation managers, they will reap great benefits in the future.

  14. Task 1.13 -- Data collection and database development for clean coal technology by-product characteristics and management practices. Semi-annual report, July 1--December 31, 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pflughoeft-Hassett, D.F.

    1997-08-01

    Information from DOE projects and commercial endeavors in fluidized-bed combustion and coal gasification is the focus of this task by the Energy and Environmental Research Center. The primary goal of this task is to provide an easily accessible compilation of characterization information on CCT (Clean Coal Technology) by-products to government agencies and industry to facilitate sound regulatory and management decisions. Supporting objectives are (1) to fully utilize information from previous DOE projects, (2) to coordinate with industry and other research groups, (3) to focus on by-products from pressurized fluidized-bed combustion (PFBC) and gasification, and (4) to provide information relevant to the EPA evaluation criteria for the Phase 2 decision.

  15. Querying and Computing with BioCyc Databases

    PubMed Central

    Krummenacker, Markus; Paley, Suzanne; Mueller, Lukas; Yan, Thomas; Karp, Peter D.

    2006-01-01

    Summary We describe multiple methods for accessing and querying the complex and integrated cellular data in the BioCyc family of databases: access through multiple file formats, access through Application Program Interfaces (APIs) for LISP, Perl and Java, and SQL access through the BioWarehouse relational database. Availability The Pathway Tools software and 20 BioCyc DBs in Tiers 1 and 2 are freely available to academic users; fees apply to some types of commercial use. For download instructions see http://BioCyc.org/download.shtml PMID:15961440

  16. A brief overview of the Chemistry-Aerosol Mediterranean Experiment (ChArMEx) database and campaign operation centre (ChOC)

    NASA Astrophysics Data System (ADS)

    Ferré, Hélène; Dulac, François; Belmahfoud, Nizar; Brissebrat, Guillaume; Cloché, Sophie; Descloitres, Jacques; Fleury, Laurence; Focsa, Loredana; Henriot, Nicolas; Ramage, Karim; Vermeulen, Anne

    2016-04-01

    Initiated in 2010 in the framework of the multidisciplinary research programme MISTRALS (Mediterranean Integrated Studies at Regional and Local Scales; http://www.mistrals-home.org), the Chemistry-Aerosol Mediterranean Experiment (ChArMEx, http://charmex.lsce.ipsl.fr/) aims at federating the scientific community for an updated assessment of the present and future state of the atmospheric environment in the Mediterranean Basin, and of its impacts on the regional climate, air quality, and marine biogeochemistry. The project combines mid- and long-term monitoring, intensive field campaigns, use of satellite data, and modelling studies. In this presentation we provide an overview of the campaign operation centre (http://choc.sedoo.fr/) and project database (http://mistrals.sedoo.fr/ChArMEx), at the end of the first experimental phase of the project, which included a series of large campaigns based on airborne means (including balloons and various aircraft) and a network of surface stations. Those campaigns were performed mainly in the western Mediterranean basin in the summers of 2012, 2013 and 2014 with the help of the ChArMEx Operation Centre (ChOC), an open web site whose objective is to gather and display daily quick-looks from model forecasts and near-real-time in situ and remote sensing observations of physical and chemical weather conditions relevant for everyday campaign operation decisions. The ChOC is also useful for post-campaign analyses and can be completed with a number of quick-looks of campaign results obtained later, in order to offer easy access to, and a comprehensive view of, all available data during the campaign period. The items included are selected according to the objectives and location of the given campaigns. The second experimental phase of ChArMEx, from 2015 on, is more focused on the eastern basin. In addition, the project operation centre is planned to be adapted for a joint MERMEX-ChArMEx oceanographic cruise (PEACETIME) for a study at the air-sea interface focused on the biogeochemical impact of atmospheric deposition. The database includes a wide diversity of data and parameters relevant to atmospheric chemistry. The objective of the database task team is to organize data management, the distribution system and services, such as facilitating the exchange of information and stimulating collaboration between researchers within the ChArMEx community, and beyond. The database relies on a strong collaboration between the ICARE, IPSL and OMP data centers and has been set up in the framework of the MISTRALS programme data portal. ChArMEx data, either produced or used by the project, are documented and made easily accessible through the database website, which offers the expected user-friendly functionalities: a data catalog, a user registration procedure, a search tool to select and access data based on parameters, instruments, countries, platform or project, notification of dataset PIs about downloads, etc. The metadata (data descriptions) are standardized and comply with international standards (ISO 19115-19139; INSPIRE European Directive; Global Change Master Directory Thesaurus). A Digital Object Identifier (DOI) assignment procedure automatically registers the datasets, in order to make them easier to access, cite, reuse and verify.
At present, the ChArMEx database contains about 160 datasets, including more than 120 in situ datasets (from a total of 7 campaigns and various monitoring stations, including the background atmospheric station of Ersa, June 2012-July 2014), 30 model output sets (dust model intercomparison, MEDCORDEX scenarios...), a high-resolution emission inventory over the Mediterranean made available as part of the ECCAD database (http://eccad.sedoo.fr/eccad_extract_interface/JSF/page_charmex.jsf), etc. Some in situ datasets have been inserted in a relational database in order to enable more accurate selection and download of different datasets in a shared format. Many dedicated satellite products (SEVIRI, TRMM, PARASOL...) are processed and will soon be accessible through the database website. Every scientist is welcome to visit the ChArMEx websites, to register and request data, and to contact charmex-database@sedoo.fr with any questions.

  17. X-ray Photoelectron Spectroscopy Database (Version 4.1)

    National Institute of Standards and Technology Data Gateway

    SRD 20 X-ray Photoelectron Spectroscopy Database (Version 4.1) (Web, free access)   The NIST XPS Database gives access to energies of many photoelectron and Auger-electron spectral lines. The database contains over 22,000 line positions, chemical shifts, doublet splittings, and energy separations of photoelectron and Auger-electron lines.

  18. Accounting Data to Web Interface Using PERL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hargeaves, C

    2001-08-13

    This document will explain the process to create a web interface for the accounting information generated by the High Performance Storage Systems (HPSS) accounting report feature. The accounting report contains useful data but it is not easily accessed in a meaningful way. The accounting report is the only way to see summarized storage usage information. The first step is to take the accounting data, make it meaningful and store the modified data in persistent databases. The second step is to generate the various user interfaces, HTML pages, that will be used to access the data. The third step is to transfer all required files to the web server. The web pages pass parameters to Common Gateway Interface (CGI) scripts that generate dynamic web pages and graphs. The end result is a web page with specific information presented in text with or without graphs. The accounting report has a specific format that allows the use of regular expressions to verify if a line is storage data. Each storage data line is stored in a detailed database file with a name that includes the run date. The detailed database is used to create a summarized database file that also uses run date in its name. The summarized database is used to create the group.html web page that includes a list of all storage users. Scripts that query the database folder to build a list of available databases generate two additional web pages. A master script that is run monthly as part of a cron job, after the accounting report has completed, manages all of these individual scripts. All scripts are written in the PERL programming language. Whenever possible data manipulation scripts are written as filters. All scripts are written to be single source, which means they will function properly on both the open and closed networks at LLNL. The master script handles the command line inputs for all scripts, file transfers to the web server and records run information in a log file. The rest of the scripts manipulate the accounting data or use the files created to generate HTML pages. Each script will be described in detail herein. The following is a brief description of HPSS taken directly from an HPSS web site. ''HPSS is a major development project, which began in 1993 as a Cooperative Research and Development Agreement (CRADA) between government and industry. The primary objective of HPSS is to move very large data objects between high performance computers, workstation clusters, and storage libraries at speeds many times faster than is possible with today's software systems. For example, HPSS can manage parallel data transfers from multiple network-connected disk arrays at rates greater than 1 Gbyte per second, making it possible to access high definition digitized video in real time.'' The HPSS accounting report is a canned report whose format is controlled by the HPSS developers.
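
    The filtering step described above, recognizing storage-data lines with regular expressions and accumulating them into a summarized database, is easy to sketch. The report excerpt and field layout below are invented, and Python is used in place of the PERL of the actual scripts to keep the illustration compact:

        import re

        # Invented report excerpt; the real HPSS accounting report has
        # its own canned format, matched here only in spirit.
        REPORT = """
        HPSS Accounting Report 2001-08-01
        acct=1234 user=jdoe   stored=1048576
        acct=5678 user=asmith stored=2097152
        Total users: 2
        """

        LINE = re.compile(r"acct=(\d+)\s+user=(\w+)\s+stored=(\d+)")

        summary = {}
        for line in REPORT.splitlines():
            m = LINE.search(line)
            if m:  # keep only storage-data lines, as the filter scripts do
                acct, user, nbytes = m.groups()
                summary[user] = summary.get(user, 0) + int(nbytes)
        print(summary)  # {'jdoe': 1048576, 'asmith': 2097152}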

  19. Update on NASA Space Shuttle Earth Observations Photography on the laser videodisc for rapid image access

    NASA Technical Reports Server (NTRS)

    Lulla, Kamlesh

    1994-01-01

    There have been many significant improvements in the public access to the Space Shuttle Earth Observations Photography Database. New information is provided for the user community on the recently released videodisc of this database. Topics covered included the following: earlier attempts; our first laser videodisc in 1992; the new laser videodisc in 1994; and electronic database access.

  20. Developing Vocabularies to Improve Understanding and Use of NOAA Observing Systems

    NASA Astrophysics Data System (ADS)

    Austin, M.

    2014-12-01

    The NOAA Observing System Integrated Analysis project (NOSIA II) is an attempt to capture and tell the story of how valuable observing systems are in producing the products and services required to fulfill NOAA's diverse mission. NOAA's goals and mission areas cover a broad range of environmental data; complexity therefore exists in the terms and vocabulary applied to the creation of observing-system-derived products. The NOSIA data collection focused first on decomposing NOAA's goals in the creation and acceptance of Mission Service Areas (MSAs) by NOAA senior leadership. Products and services that supported the MSAs were then identified through the process of interviewing product producers across the NOAA organization. Product data inputs, including models, databases and observing systems, were also identified. The NOSIA model contains over 20,000 nodes, each representing levels in a network connecting products, datasources, users and desired outcomes. It became immediately apparent that the complexity and variety of the data collected required data management to mature the quality and content of the NOSIA model. The NOSIA Analysis Database (ADB) was developed initially to improve the consistency of terms and data types and to allow for the linkage of observing systems, products, and NOAA's goals and mission. The ADB also allowed for the prototyping of reports and product generation in an easily accessible and comprehensive format for the first time. Web-based visualizations of the relationships between products, datasources, users and producers were generated to make the information easily understood. This includes developing ontologies/vocabularies that are used for the development of user-type-specific products for NOAA leadership, Observing System Portfolio managers and the users of NOAA data.

  1. DoGSD: the dog and wolf genome SNP database.

    PubMed

    Bai, Bing; Zhao, Wen-Ming; Tang, Bi-Xia; Wang, Yan-Qing; Wang, Lu; Zhang, Zhang; Yang, He-Chuan; Liu, Yan-Hu; Zhu, Jun-Wei; Irwin, David M; Wang, Guo-Dong; Zhang, Ya-Ping

    2015-01-01

    The rapid advancement of next-generation sequencing technology has generated a deluge of genomic data from domesticated dogs and their wild ancestor, grey wolves, which have simultaneously broadened our understanding of domestication and diseases that are shared by humans and dogs. To address the scarcity of single nucleotide polymorphism (SNP) data provided by authorized databases and to make SNP data more easily and readily usable and available, we propose DoGSD (http://dogsd.big.ac.cn), the first Canidae-specific database, which focuses on whole-genome SNP data from domesticated dogs and grey wolves. The DoGSD is a web-based, open-access resource comprising ∼ 19 million high-quality whole-genome SNPs. In addition to the dbSNP data set (build 139), DoGSD incorporates a comprehensive collection of SNPs from two newly sequenced samples (1 wolf and 1 dog) and collected SNPs from the three latest dog/wolf genetic studies (7 wolves and 68 dogs), which were analyzed together with the population genetic statistic Fst. In addition, DoGSD integrates some closely related information including SNP annotation, summary lists of SNPs located in genes, synonymous and non-synonymous SNPs, sampling location and breed information. All these features make DoGSD a useful resource for in-depth analysis in dog-/wolf-related studies. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  2. "Trust is not something you can reclaim easily": patenting in the field of direct-to-consumer genetic testing.

    PubMed

    Sterckx, Sigrid; Cockbain, Julian; Howard, Heidi; Huys, Isabelle; Borry, Pascal

    2013-05-01

    Recently, 23andMe announced that it had obtained its first patent, related to "polymorphisms associated with Parkinson's disease" (US-B-8187811). This announcement immediately sparked controversy in the community of 23andMe users and research participants, especially with regard to issues of transparency and trust. The purpose of this article was to analyze the patent portfolio of this prominent direct-to-consumer genetic testing company and discuss the potential ethical implications of patenting in this field for public participation in Web-based genetic research. We searched the publicly accessible patent database Espacenet as well as the commercially available database Micropatent for published patents and patent applications of 23andMe. Six patent families were identified for 23andMe. These included patent applications related to: genetic comparisons between grandparents and grandchildren, family inheritance, genome sharing, processing data from genotyping chips, gamete donor selection based on genetic calculations, finding relatives in a database, and polymorphisms associated with Parkinson disease. An important lesson to be drawn from this ongoing controversy seems to be that any (private or public) organization involved in research that relies on human participation, whether by providing information, body material, or both, needs to be transparent, not only about its research goals but also about its strategies and policies regarding commercialization.

  3. Pleurochrysome: A Web Database of Pleurochrysis Transcripts and Orthologs Among Heterogeneous Algae

    PubMed Central

    Fujiwara, Shoko; Takatsuka, Yukiko; Hirokawa, Yasutaka; Tsuzuki, Mikio; Takano, Tomoyuki; Kobayashi, Masaaki; Suda, Kunihiro; Asamizu, Erika; Yokoyama, Koji; Shibata, Daisuke; Tabata, Satoshi; Yano, Kentaro

    2016-01-01

    Pleurochrysis is a coccolithophorid genus, which belongs to the Coccolithales in the Haptophyta. The genus has been used extensively for biological research, together with Emiliania in the Isochrysidales, to understand distinctive features between the two coccolithophorid-including orders. However, molecular biological research on Pleurochrysis such as elucidation of the molecular mechanism behind coccolith formation has not made great progress at least in part because of lack of comprehensive gene information. To provide such information to the research community, we built an open web database, the Pleurochrysome (http://bioinf.mind.meiji.ac.jp/phapt/), which currently stores 9,023 unique gene sequences (designated as UNIGENEs) assembled from expressed sequence tag sequences of P. haptonemofera as core information. The UNIGENEs were annotated with gene sequences sharing significant homology, conserved domains, Gene Ontology, KEGG Orthology, predicted subcellular localization, open reading frames and orthologous relationship with genes of 10 other algal species, a cyanobacterium and the yeast Saccharomyces cerevisiae. This sequence and annotation information can be easily accessed via several search functions. Besides fundamental functions such as BLAST and keyword searches, this database also offers search functions to explore orthologous genes in the 12 organisms and to seek novel genes. The Pleurochrysome will promote molecular biological and phylogenetic research on coccolithophorids and other haptophytes by helping scientists mine data from the primary transcriptome of P. haptonemofera. PMID:26746174

  4. A Simple and Novel Method to Attain Retrograde Ureteral Access after Previous Cohen Cross-Trigonal Ureteral Reimplantation

    PubMed Central

    Adam, Ahmed

    2017-01-01

    Objective To describe a simple, novel method to achieve ureteric access in the Cohen crossed reimplanted ureter, which will allow retrograde working access via the conventional transurethral method. Materials and Methods Under cystoscopic vision, suprapubic needle puncture was performed. The needle was directed (bevel facing) towards the desired ureteric orifice (UO). A guidewire (with a floppy-tip) was then inserted into the suprapubic needle passing into the bladder, and then easily passed into the crossed-reimplanted UO. The distal end of the guidewire was then removed through the urethra with cystoscopic grasping forceps. The straightened ureter then easily facilitated ureteroscopy access, retrograde pyelogram studies, and JJ stent insertion in a conventional transurethral method. Results The UO and ureter were aligned in a more conventional orthotopic course, to allow for conventional transurethral working access. Conclusion A novel method to access the Cohen crossed reimplanted ureter was described. All previously published methods of accessing the crossed ureter were critically appraised. PMID:29463976

  5. Heterogeneous distributed query processing: The DAVID system

    NASA Technical Reports Server (NTRS)

    Jacobs, Barry E.

    1985-01-01

    The objective of the Distributed Access View Integrated Database (DAVID) project is the development of an easy to use computer system with which NASA scientists, engineers and administrators can uniformly access distributed heterogeneous databases. Basically, DAVID will be a database management system that sits alongside already existing database and file management systems. Its function is to enable users to access the data in other languages and file systems without having to learn the data manipulation languages. Given here is an outline of a talk on the DAVID project and several charts.

  6. Evidence generation from healthcare databases: recommendations for managing change.

    PubMed

    Bourke, Alison; Bate, Andrew; Sauer, Brian C; Brown, Jeffrey S; Hall, Gillian C

    2016-07-01

    There is an increasing reliance on databases of healthcare records for pharmacoepidemiology and other medical research, and such resources are often accessed over a long period of time, so it is vital to consider the impact of changes in data, access methodology and the environment. The authors discuss change in communication and management, and provide a checklist of issues to consider for both database providers and users. The scope of the paper is database research, and changes are considered in relation to the three main components of database research: the data content itself, how it is accessed, and the support and tools needed to use the database. Copyright © 2016 John Wiley & Sons, Ltd.

  7. Seismic Search Engine: A distributed database for mining large scale seismic data

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Vaidya, S.; Kuzma, H. A.

    2009-12-01

    The International Monitoring System (IMS) of the CTBTO collects terabytes' worth of seismic measurements from many receiver stations situated around the earth with the goal of detecting underground nuclear testing events and distinguishing them from other benign, but more common events such as earthquakes and mine blasts. The International Data Center (IDC) processes and analyzes these measurements, as they are collected by the IMS, to summarize event detections in daily bulletins. Thereafter, the data measurements are archived into a large-format database. Our proposed Seismic Search Engine (SSE) will facilitate a framework for data exploration of the seismic database as well as the development of seismic data mining algorithms. Analogous to GenBank, the annotated genetic sequence database maintained by the NIH, SSE will provide public access to seismic data and a set of processing and analysis tools, along with community-generated annotations and statistical models to help interpret the data. SSE will implement queries as user-defined functions composed from standard tools and models. Each query is compiled and executed over the database internally before reporting results back to the user. Since queries are expressed with standard tools and models, users can easily reproduce published results within this framework for peer review and for making metric comparisons. As an illustration, an example query is “what are the best receiver stations in East Asia for detecting events in the Middle East?” Evaluating this query involves listing all receiver stations in East Asia, characterizing known seismic events in that region, and constructing a profile for each receiver station to determine how effective its measurements are at predicting each event. The results of this query can be used to help prioritize how data is collected, identify defective instruments, and guide future sensor placements.
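
    Queries in this framework are user-defined functions over the archive. Below is a toy Python version of the example query, ranking receiver stations by the fraction of known events they detected, using entirely invented data:

        # Invented detections and event catalog, standing in for the IMS
        # archive; a real profile would also model signal quality.
        DETECTIONS = {
            "STA1": {"ev1", "ev2", "ev3"},
            "STA2": {"ev1"},
        }
        EVENTS = {"ev1", "ev2", "ev3", "ev4"}

        def rank_stations(detections=DETECTIONS, events=EVENTS):
            """Score each station by the fraction of events it detected."""
            score = {s: len(d & events) / len(events)
                     for s, d in detections.items()}
            return sorted(score.items(), key=lambda kv: -kv[1])

        print(rank_stations())  # [('STA1', 0.75), ('STA2', 0.25)]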

  8. OGDD (Olive Genetic Diversity Database): a microsatellite markers' genotypes database of worldwide olive trees for cultivar identification and virgin olive oil traceability

    PubMed Central

    Ben Ayed, Rayda; Ben Hassen, Hanen; Ennouri, Karim; Ben Marzoug, Riadh; Rebai, Ahmed

    2016-01-01

    Olive (Olea europaea), whose importance is mainly due to its nutritional and health features, is one of the most economically significant oil-producing trees in the Mediterranean region. Unfortunately, the increasing market demand for virgin olive oil can often result in its adulteration with less expensive oils, which is a serious problem for the public and for quality control evaluators of virgin olive oil. Therefore, to avoid fraud, olive cultivar identification and virgin olive oil authentication have become a major issue for producers, consumers, and quality control bodies in the olive chain. Presently, genetic traceability using SSR markers is a cost-effective and powerful technique that can be employed to resolve such problems. However, to identify an unknown monovarietal virgin olive oil cultivar, a reference system has become necessary. Thus, the Olive Genetic Diversity Database (OGDD) (http://www.bioinfo-cbs.org/ogdd/) is presented in this work. It is a genetic, morphologic and chemical database of worldwide olive trees and oils with a double function. Besides being a reference system generated for the identification of unknown olive or virgin olive oil cultivars based on their microsatellite allele size(s), it provides users with additional morphological and chemical information for each identified cultivar. Currently, OGDD is designed to enable users to easily retrieve and visualize biologically important information (SSR markers, and olive tree and oil characteristics of about 200 cultivars worldwide) using a set of efficient query interfaces and analysis tools. It can be accessed through a web service from any modern programming language using a simple hypertext transfer protocol (HTTP) call. The web site is implemented in Java, JavaScript, PHP, HTML and Apache, with all major browsers supported. Database URL: http://www.bioinfo-cbs.org/ogdd/ PMID:26827236
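
    As a sketch of the kind of HTTP call the record describes, the following Python snippet queries a hypothetical OGDD endpoint. The path, parameter names, and JSON response shape are assumptions for illustration, not the documented OGDD interface.

        import requests  # third-party HTTP library

        # Hypothetical query: look up cultivars matching an SSR allele size.
        # Endpoint path and parameters are illustrative only; consult the OGDD
        # site (http://www.bioinfo-cbs.org/ogdd/) for the actual interface.
        response = requests.get(
            "http://www.bioinfo-cbs.org/ogdd/api/identify",   # hypothetical endpoint
            params={"marker": "DCA09", "allele_size": 172},   # hypothetical parameters
            timeout=30,
        )
        response.raise_for_status()
        for cultivar in response.json():                      # assumed JSON response
            print(cultivar)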

  9. Compilation, quality control, analysis, and summary of discrete suspended-sediment and ancillary data in the United States, 1901-2010

    USGS Publications Warehouse

    Lee, Casey J.; Glysson, G. Douglas

    2013-01-01

    Human-induced and natural changes to the transport of sediment and sediment-associated constituents can degrade aquatic ecosystems and limit human uses of streams and rivers. The lack of a dedicated, easily accessible, quality-controlled database of sediment and ancillary data has made it difficult to identify sediment-related water-quality impairments and has limited understanding of how human actions affect suspended-sediment concentrations and transport. The purpose of this report is to describe the creation of a quality-controlled U.S. Geological Survey suspended-sediment database, provide guidance for its use, and summarize characteristics of suspended-sediment data through 2010. The database is provided as an online application at http://cida.usgs.gov/sediment to allow users to view, filter, and retrieve available suspended-sediment and ancillary data. A data recovery, filtration, and quality-control process was performed to expand the availability, representativeness, and utility of existing suspended-sediment data collected by the U.S. Geological Survey in the United States before January 1, 2011. Information on streamflow condition, sediment grain size, and upstream landscape condition was matched to sediment data and sediment-sampling sites to place data in context with factors that may influence sediment transport. Suspended-sediment and selected ancillary data are presented from across the United States with respect to time, streamflow, and landscape condition. Examples of potential uses of this database for identifying sediment-related impairments, assessing trends, and designing new data collection activities are provided. This report and database can support local and national-level decision making, project planning, and data mining activities related to the transport of suspended sediment and sediment-associated constituents.

  10. The NOAO Data Lab PHAT Photometry Database

    NASA Astrophysics Data System (ADS)

    Olsen, Knut; Williams, Ben; Fitzpatrick, Michael; PHAT Team

    2018-01-01

    We present a database containing both the combined photometric object catalog and the single epoch measurements from the Panchromatic Hubble Andromeda Treasury (PHAT). This database is hosted by the NOAO Data Lab (http://datalab.noao.edu), and as such exposes a number of data services to the PHAT photometry, including access through a Table Access Protocol (TAP) service, direct PostgreSQL queries, web-based and programmatic query interfaces, remote storage space for personal database tables and files, and a JupyterHub-based Notebook analysis environment, as well as image access through a Simple Image Access (SIA) service. We show how the Data Lab database and Jupyter Notebook environment allow for straightforward and efficient analyses of PHAT catalog data, including maps of object density, depth, and color, extraction of light curves of variable objects, and proper motion exploration.
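
    A brief, hedged illustration of programmatic access of the kind described above, using the pyvo library's TAP client from Python. The service URL, table, and column names here are assumptions for illustration; the Data Lab documentation (http://datalab.noao.edu) gives the actual endpoint and schema.

        import pyvo  # Virtual Observatory client library

        # Hypothetical TAP endpoint and table/column names.
        service = pyvo.dal.TAPService("https://datalab.noao.edu/tap")
        result = service.search(
            "SELECT TOP 10 ra, dec, f475w_vega_mag "
            "FROM phat_v2.phot_mod "            # assumed table name
            "WHERE f475w_vega_mag < 20"
        )
        print(result.to_table())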

  11. Resourcing the clinical complementary medicine information needs of Australian medical students: Results of a grounded theory study.

    PubMed

    Templeman, Kate; Robinson, Anske; McKenna, Lisa

    2016-09-01

    The aim of this study was to identify Australian medical students' complementary medicine information needs. Thirty medical students from 10 medical education faculties across Australian universities were recruited. Data were generated using in-depth semi-structured interviews, and constructivist grounded theory method was used to analyze and construct data. Students sought complementary medicine information from a range of inadequate sources, such as pharmacological texts, Internet searches, peer-reviewed medical journals, and drug databases. The students identified that many complementary medicine resources may not be regarded as objective, reliable, differentiated, or comprehensive, leaving much that medical education needs to address. Most students sought succinct, easily accessible, evidence-based information to inform safe and appropriate clinical decisions about complementary medicines. A number of preferred resources were identified that can be recommended and actively promoted to medical students. Therefore, medical schools should subscribe to specific, evidence-based complementary medicine databases and secondary resources and recommend them to students, to assist in meeting professional responsibilities regarding complementary medicines. These findings may help inform the development of appropriate medical information resources regarding complementary medicines. © 2016 John Wiley & Sons Australia, Ltd.

  12. Physical Activity and Yoga-Based Approaches for Pregnancy-Related Low Back and Pelvic Pain.

    PubMed

    Kinser, Patricia Anne; Pauli, Jena; Jallo, Nancy; Shall, Mary; Karst, Kailee; Hoekstra, Michelle; Starkweather, Angela

    To conduct an integrative review to evaluate current literature about nonpharmacologic, easily accessible management strategies for pregnancy-related low back and pelvic pain (PR-LBPP). Data sources were PubMed, CINAHL, and the Cochrane Database of Systematic Reviews. Original research articles were considered for review if they were full-length publications written in English and published in peer-reviewed journals from 2005 through 2015, included measures of pain and symptoms related to PR-LBPP, and evaluated treatment modalities that used a physical exercise or yoga-based approach for the described conditions. Electronic database searches yielded 1,435 articles; a total of 15 articles met eligibility criteria for further review. These modalities show preliminary promise for relief of pain and other related symptoms, including stress and depression. However, our findings also indicate several gaps in knowledge about these therapies for PR-LBPP and methodologic issues in the current literature. Although additional research is required, the results of this integrative review suggest that clinicians may consider recommending nonpharmacologic treatment options, such as gentle physical activity and yoga-based interventions, for PR-LBPP and related symptoms. Copyright © 2017 AWHONN, the Association of Women's Health, Obstetric and Neonatal Nurses. Published by Elsevier Inc. All rights reserved.

  13. A Model Based Mars Climate Database for the Mission Design

    NASA Technical Reports Server (NTRS)

    2005-01-01

    A viewgraph presentation on a model-based climate database is shown. The topics include: 1) Why a model-based climate database?; 2) Mars Climate Database v3.1: who uses it? (approx. 60 users!); 3) The new Mars Climate Database MCD v4.0; 4) MCD v4.0: what's new?; 5) Simulation of water ice clouds; 6) Simulation of the water ice cycle; 7) A new tool for surface pressure prediction; 8) Access to the MCD 4.0 database; 9) How to access the database; and 10) New web access.

  14. SIDECACHE: Information access, management and dissemination framework for web services.

    PubMed

    Doderer, Mark S; Burkhardt, Cory; Robbins, Kay A

    2011-06-14

    Many bioinformatics algorithms and data sets are deployed using web services so that the results can be explored via the Internet and easily integrated into other tools and services. These services often include data from other sites that is accessed either dynamically or through file downloads. Developers of these services face several problems because of the dynamic nature of the information from the upstream services. Many publicly available repositories of bioinformatics data frequently update their information. When such an update occurs, the developers of the downstream service may also need to update. For file downloads, this process is typically performed manually, followed by a web service restart. Requests for information obtained by dynamic access of upstream sources are sometimes subject to rate restrictions. SideCache provides a framework for deploying web services that integrate information extracted from other databases and from web sources that are periodically updated. This situation occurs frequently in biotechnology, where new information is being continuously generated and the latest information is important. SideCache provides several types of services, including proxy access and rate control, local caching, and automatic web service updating. We have used the SideCache framework to automate the deployment and updating of a number of bioinformatics web services and tools that extract information from remote primary sources such as NCBI, NCIBI, and Ensembl. The SideCache framework has also been used to share research results through the use of a SideCache-derived web service.
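
    The caching and rate-control pattern the record attributes to SideCache can be sketched in a few lines of Python. This is a generic illustration of the pattern, not SideCache's actual API; the TTL and rate values are arbitrary.

        import time
        import urllib.request

        CACHE = {}            # url -> (fetched_at, body)
        TTL = 3600            # seconds before a cached copy is considered stale
        MIN_INTERVAL = 1.0    # rate control: at most one upstream hit per second
        _last_fetch = 0.0

        def cached_fetch(url):
            """Return a locally cached copy of `url`, refreshing it when stale."""
            global _last_fetch
            now = time.time()
            if url in CACHE and now - CACHE[url][0] < TTL:
                return CACHE[url][1]
            wait = MIN_INTERVAL - (now - _last_fetch)
            if wait > 0:
                time.sleep(wait)              # honor upstream rate restrictions
            body = urllib.request.urlopen(url).read()
            _last_fetch = time.time()
            CACHE[url] = (_last_fetch, body)
            return body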

  15. Chlamydia screening interventions from community pharmacies: a systematic review.

    PubMed

    Gudka, Sajni; Afuwape, Folasade E; Wong, Bessie; Yow, Xuan Li; Anderson, Claire; Clifford, Rhonda M

    2013-07-01

    Chlamydia (Chlamydia trachomatis) is the most commonly notified sexually transmissible infection in Australia. Increasing the number of people aged 16-25 years being tested for chlamydia has become a key objective. The strategy recommends that chlamydia screening sites should be easy to access. Community pharmacies are conveniently located and easily accessible. This review aimed to determine the different types of pharmacy-based chlamydia screening interventions, describe their uptake rates, and understand issues around the acceptability of and barriers to testing. Seven electronic databases were searched for peer-reviewed articles published up to 30 October 2011 for studies that reported chlamydia screening interventions from community pharmacies, or had qualitative evidence on acceptability or barriers linked with interventions. Of the 163 publications identified, 12 met the inclusion criteria. Nine reported chlamydia screening interventions in a pharmacy setting, whereas three focussed on perspectives on chlamydia screening. Pharmacists could offer a chlamydia test to consumers attending the pharmacy for a sexual health-related consultation, or consumers could request a chlamydia test as part of a population-based intervention. Participating consumers said pharmacies were accessible and convenient, and pharmacists were competent when offering a chlamydia test. Pharmacists reported selectively offering tests to women they thought would be most at risk, undermining the principles of opportunistic interventions. Chlamydia screening from community pharmacies is feasible, and can provide an accessible, convenient venue to get a test. Professional implementation support, alongside resources, education and training programs, and incentives may overcome the issue of pharmacists selectively offering the test.

  16. DSSTOX WEBSITE LAUNCH: IMPROVING PUBLIC ACCESS TO DATABASES FOR BUILDING STRUCTURE-TOXICITY PREDICTION MODELS

    EPA Science Inventory

    DSSTox Website Launch: Improving Public Access to Databases for Building Structure-Toxicity Prediction Models
    Ann M. Richard
    US Environmental Protection Agency, Research Triangle Park, NC, USA

    Distributed: Decentralized set of standardized, field-delimited databases,...

  17. Software Engineering Laboratory (SEL) database organization and user's guide, revision 2

    NASA Technical Reports Server (NTRS)

    Morusiewicz, Linda; Bristow, John

    1992-01-01

    The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base table is described. In addition, techniques for accessing the database through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL) are discussed.

  18. Software Engineering Laboratory (SEL) database organization and user's guide

    NASA Technical Reports Server (NTRS)

    So, Maria; Heller, Gerard; Steinberg, Sandra; Spiegel, Douglas

    1989-01-01

    The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base tables is described. In addition, techniques for accessing the database, through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL), are discussed.

  19. Full-Text Linking: Affiliated versus Nonaffiliated Access in a Free Database.

    ERIC Educational Resources Information Center

    Grogg, Jill E.; Andreadis, Debra K.; Kirk, Rachel A.

    2002-01-01

    Presents a comparison of access to full-text articles from a free bibliographic database (PubSCIENCE) for affiliated and unaffiliated users. Found that affiliated users had access to more full-text articles than unaffiliated users had, and that both types of users could increase their level of access through additional searching and greater…

  20. A User-Friendly, Keyword-Searchable Database of Geoscientific References Through 2007 for Afghanistan

    USGS Publications Warehouse

    Eppinger, Robert G.; Sipeki, Julianna; Scofield, M.L.

    2008-01-01

    This report includes a document and accompanying Microsoft Access 2003 database of geoscientific references for the country of Afghanistan. The reference compilation is part of a larger joint study of Afghanistan's energy, mineral, and water resources, and geologic hazards currently underway by the U.S. Geological Survey, the British Geological Survey, and the Afghanistan Geological Survey. The database includes both published (n = 2,489) and unpublished (n = 176) references compiled through calendar year 2007. The references comprise two separate tables in the Access database. The reference database includes a user-friendly, keyword-searchable interface, and only minimal knowledge of Microsoft Access is required.

  1. Nutrient estimation from an FFQ developed for a black Zimbabwean population

    PubMed Central

    Merchant, Anwar T; Dehghan, Mahshid; Chifamba, Jephat; Terera, Getrude; Yusuf, Salim

    2005-01-01

    Background There is little information in the literature on methods of food composition database development to calculate nutrient intake from food frequency questionnaire (FFQ) data. The aim of this study is to describe the development of an FFQ and a food composition table to calculate nutrient intake in a Black Zimbabwean population. Methods Trained interviewers collected 24-hour dietary recalls (24 hr DR) from high- and low-income families in urban and rural Zimbabwe. Based on these data and input from local experts, we developed an FFQ containing a list of frequently consumed foods, standard portion sizes, and categories of consumption frequency. We created a food composition table of the foods found in the FFQ so that we could compute nutrient intake. We used the USDA nutrient database as the main resource because it is relatively complete, updated, and easily accessible. To choose the food item in the USDA nutrient database that most closely matched the nutrient content of the local food, we referred to a local food composition table. Results Almost all the participants ate sadza (maize porridge) at least 5 times a week, and about half had matemba (fish) and caterpillar more than once a month. Nutrient estimates obtained from the FFQ data by using the USDA and Zimbabwean food composition tables were similar for total energy intake (intraclass correlation (ICC) = 0.99) and carbohydrate (ICC = 0.99), but different for vitamin A (ICC = 0.53) and total folate (ICC = 0.68). Conclusion We have described a standardized process of FFQ and food composition database development for a Black Zimbabwean population. PMID:16351722

  2. Second-Tier Database for Ecosystem Focus, 2002-2003 Annual Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Holmes, Chris; Muongchanh, Christine; Anderson, James J.

    2003-11-01

    The Second-Tier Database for Ecosystem Focus (Contract 00004124) provides direct and timely public access to Columbia Basin environmental, operational, fishery and riverine data resources for federal, state, public and private entities. The Second-Tier Database known as Data Access in Realtime (DART) integrates public data for effective access, consideration and application. DART also provides analysis tools and performance measures helpful in evaluating the condition of Columbia Basin salmonid stocks.

  3. A service-oriented data access control model

    NASA Astrophysics Data System (ADS)

    Meng, Wei; Li, Fengmin; Pan, Juchen; Song, Song; Bian, Jiali

    2017-01-01

    The development of mobile computing, cloud computing and distributed computing meets growing individual service needs. Faced with complex application systems, ensuring real-time, dynamic, and fine-grained data access control is an urgent problem. By analyzing common data access control models, and building on the mandatory access control model, the paper proposes a service-oriented access control model. By regarding system services as subjects and database data as objects, the model defines access levels and access identification for subjects and objects, and ensures that system services access databases securely.
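
    A minimal Python sketch of the model's core rule, with system services as subjects and database tables as objects, as the record describes. The service names, table names, and level lattice are invented for illustration.

        from enum import IntEnum

        class Level(IntEnum):
            PUBLIC = 0
            INTERNAL = 1
            CONFIDENTIAL = 2
            SECRET = 3

        # Subjects are system services, objects are database tables (hypothetical).
        SERVICE_LEVELS = {"report-service": Level.INTERNAL,
                          "billing-service": Level.CONFIDENTIAL}
        TABLE_LEVELS = {"public_stats": Level.PUBLIC,
                        "patient_records": Level.CONFIDENTIAL}

        def may_read(service, table):
            """Mandatory access control: a service may read a table only if its
            access level dominates the table's level."""
            return SERVICE_LEVELS[service] >= TABLE_LEVELS[table]

        print(may_read("report-service", "patient_records"))   # False
        print(may_read("billing-service", "patient_records"))  # True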

  4. The unified database for the fixed target experiment BM@N

    NASA Astrophysics Data System (ADS)

    Gertsenberger, K. V.

    2016-09-01

    The article describes the developed database designed as the comprehensive data storage of the fixed-target experiment BM@N [1] at the Joint Institute for Nuclear Research (JINR) in Dubna. The structure and purposes of the BM@N facility are briefly presented. The scheme of the unified database and its parameters are described in detail. The BM@N database, implemented on the PostgreSQL database management system (DBMS), provides user access to the current information of the experiment. The interfaces developed for access to the database are also presented: one is implemented as a set of C++ classes for accessing the data without SQL statements, and the other is a Web interface available on the Web page of the BM@N experiment.
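
    For illustration, a direct PostgreSQL query of the kind the record implies could look like the following Python sketch. The connection parameters and table/column names are assumptions; the supported access paths are the C++ classes and Web interface described above.

        import psycopg2  # PostgreSQL driver

        # Hypothetical connection and schema.
        conn = psycopg2.connect(host="localhost", dbname="bmn_db",
                                user="reader", password="...")
        with conn, conn.cursor() as cur:
            cur.execute("SELECT run_number, beam, energy FROM run_info "
                        "WHERE period = %s", (6,))       # assumed table/columns
            for row in cur.fetchall():
                print(row)
        conn.close()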

  5. Access to digital library databases in higher education: design problems and infrastructural gaps.

    PubMed

    Oswal, Sushil K

    2014-01-01

    After defining accessibility and usability, the author offers a broad survey of the research studies on digital content databases, which have thus far primarily depended on data drawn from studies conducted by sighted researchers with non-disabled users employing screen readers and low-vision devices. This article aims at producing a detailed description of the difficulties confronted by blind screen reader users with online library databases, which now hold most of the academic, peer-reviewed journal and periodical content essential for research and teaching in higher education. The approach taken here is borrowed from descriptive ethnography, which allows the author to create a complete picture of the accessibility and usability problems faced by an experienced academic user of digital library databases and screen readers. The author provides a detailed analysis of the different aspects of accessibility issues in digital databases under several headers, with a special focus on full-text PDF files. The author emphasizes that long-term studies with actual blind screen reader users, employing both qualitative and computerized research tools, can yield meaningful data for designers and developers to improve these databases to the point that they provide equal access to blind users.

  6. Common Data Acquisition Systems (DAS) Software Development for Rocket Propulsion Test (RPT) Test Facilities - A General Overview

    NASA Technical Reports Server (NTRS)

    Hebert, Phillip W., Sr.; Hughes, Mark S.; Davis, Dawn M.; Turowski, Mark P.; Holladay, Wendy T.; Marshall, Peggy L.; Duncan, Michael E.; Morris, Jon A.; Franzl, Richard W.

    2012-01-01

    The advent of the commercial space launch industry and NASA's more recent resumption of operation of Stennis Space Center's large test facilities after thirty years of contractor control resulted in a need for non-proprietary data acquisition system (DAS) software to support government and commercial testing. The software is designed for modularity and adaptability to minimize the software development effort for current and future data systems. An additional benefit of the software's architecture is its ability to easily migrate to other testing facilities, thus providing future commonality across Stennis. Adapting the software to other Rocket Propulsion Test (RPT) Centers such as MSFC, White Sands, and Plumbrook Station would provide additional commonality and help reduce testing costs for NASA. Ultimately, the software provides the government with unlimited rights and guarantees privacy of data to commercial entities. The project engaged all RPT Centers and NASA's Independent Verification & Validation facility to enhance product quality. The design consists of a translation layer, which makes the underlying hardware transparent to the software application layers regardless of test facility location, and a flexible, easily accessible database. This presentation addresses the system's technical design, issues encountered, and the status of Stennis' development and deployment.

  7. Accelerating Cancer Systems Biology Research through Semantic Web Technology

    PubMed Central

    Wang, Zhihui; Sagotsky, Jonathan; Taylor, Thomas; Shironoshita, Patrick; Deisboeck, Thomas S.

    2012-01-01

    Cancer systems biology is an interdisciplinary, rapidly expanding research field in which collaborations are a critical means to advance the field. Yet the prevalent database technologies often isolate data rather than making it easily accessible. The Semantic Web has the potential to help facilitate web-based collaborative cancer research by presenting data in a manner that is self-descriptive, human and machine readable, and easily sharable. We have created a semantically linked online Digital Model Repository (DMR) for storing, managing, executing, annotating, and sharing computational cancer models. Within the DMR, distributed, multidisciplinary, and inter-organizational teams can collaborate on projects, without forfeiting intellectual property. This is achieved by the introduction of a new stakeholder to the collaboration workflow, the institutional licensing officer, part of the Technology Transfer Office. Furthermore, the DMR has achieved silver level compatibility with the National Cancer Institute’s caBIG®, so users can not only interact with the DMR through a web browser but also through a semantically annotated and secure web service. We also discuss the technology behind the DMR leveraging the Semantic Web, ontologies, and grid computing to provide secure inter-institutional collaboration on cancer modeling projects, online grid-based execution of shared models, and the collaboration workflow protecting researchers’ intellectual property. PMID:23188758

  8. Accelerating cancer systems biology research through Semantic Web technology.

    PubMed

    Wang, Zhihui; Sagotsky, Jonathan; Taylor, Thomas; Shironoshita, Patrick; Deisboeck, Thomas S

    2013-01-01

    Cancer systems biology is an interdisciplinary, rapidly expanding research field in which collaborations are a critical means to advance the field. Yet the prevalent database technologies often isolate data rather than making it easily accessible. The Semantic Web has the potential to help facilitate web-based collaborative cancer research by presenting data in a manner that is self-descriptive, human and machine readable, and easily sharable. We have created a semantically linked online Digital Model Repository (DMR) for storing, managing, executing, annotating, and sharing computational cancer models. Within the DMR, distributed, multidisciplinary, and inter-organizational teams can collaborate on projects, without forfeiting intellectual property. This is achieved by the introduction of a new stakeholder to the collaboration workflow, the institutional licensing officer, part of the Technology Transfer Office. Furthermore, the DMR has achieved silver level compatibility with the National Cancer Institute's caBIG, so users can interact with the DMR not only through a web browser but also through a semantically annotated and secure web service. We also discuss the technology behind the DMR leveraging the Semantic Web, ontologies, and grid computing to provide secure inter-institutional collaboration on cancer modeling projects, online grid-based execution of shared models, and the collaboration workflow protecting researchers' intellectual property. Copyright © 2012 Wiley Periodicals, Inc.

  9. Atomic Spectra Database (ASD)

    National Institute of Standards and Technology Data Gateway

    SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access)   This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.

  10. EELAB: an innovative educational resource in occupational medicine.

    PubMed

    Zhou, A Y; Dodman, J; Hussey, L; Sen, D; Rayner, C; Zarin, N; Agius, R

    2017-07-01

    Postgraduate education, training and clinical governance in occupational medicine (OM) require easily accessible yet rigorous, research- and evidence-based tools grounded in actual clinical practice. The aim was to develop and evaluate an online resource helping physicians develop their OM skills using their own cases of work-related ill-health (WRIH). WRIH data reported by general practitioners (GPs) to The Health and Occupation Research (THOR) network were used to identify common OM clinical problems, their reported causes and management. Searches were undertaken for corresponding evidence-based and audit guidelines. A web portal entitled Electronic, Experiential, Learning, Audit and Benchmarking (EELAB) was designed to enable access to interactive resources, preferably by entering data about actual cases. EELAB offered disease-specific online learning and self-assessment, self-audit of clinical management against external standards, and benchmarking against peers' practices as recorded in the research database. The resource was made available to 250 GPs and 224 occupational physicians in the UK, as well as postgraduate OM students, for evaluation. Feedback was generally very favourable, with physicians reporting their EELAB use for case-based assignments. Comments, such as those suggesting a wider range of clinical conditions, have guided further improvement. External peer-reviewed evaluation resulted in accreditation by the Royal College of GPs and by the Faculties of OM (FOM) of London and of Ireland. This innovative resource has been shown to achieve education, self-audit and benchmarking objectives, based on the participants' clinical practice and an extensive research database. © The Author 2017. Published by Oxford University Press on behalf of the Society of Occupational Medicine. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  11. AnatomicalTerms.info: heading for an online solution to the anatomical synonym problem hurdles in data-reuse from the Terminologia Anatomica and the foundational model of anatomy and potentials for future development.

    PubMed

    Gobée, O Paul; Jansma, Daniël; DeRuiter, Marco C

    2011-10-01

    The many synonyms for anatomical structures confuse medical students and complicate medical communication. Easily accessible translations would alleviate this problem. None of the presently available resources (Terminologia Anatomica (TA), digital terminologies such as the Foundational Model of Anatomy (FMA), and websites) are fully satisfactory to this aim. Internet technologies offer new possibilities to solve the problem. Several authors have called for an online TA. An online translation resource should be easily accessible, user-friendly, comprehensive, and expandable, and its quality should be determinable. As a first step towards this goal, we built a translation website, named www.AnatomicalTerms.info, based on the database of the FMA. It translates between English, Latin, eponyms, and to a lesser extent other languages, and presently contains over 31,000 terms for 7,250 structures, covering 95% of TA. In addition, it automatically presents searches for images, documents and anatomical variations regarding the sought structure. Several terminological and conceptual issues were encountered in transferring data from the TA and FMA into AnatomicalTerms.info, resulting from these resources' different set-ups (paper versus digital) and targets (machine versus human user). To the best of our knowledge, AnatomicalTerms.info is unique in its combination of user-friendliness and comprehensiveness. As a next step, wiki-like expandability will be added to enable open contribution of clinical synonyms and terms in different languages. Specific quality measures will be taken to strike a balance between open contribution and quality assurance. AnatomicalTerms.info's mechanism that "translates" terms to structures may furthermore enhance targeted searching by linking images, descriptions, and other anatomical resources to the structures. Copyright © 2011 Wiley-Liss, Inc.

  12. Validation of a New Risk Measure for Chronic Obstructive Pulmonary Disease Exacerbation Using Health Insurance Claims Data.

    PubMed

    Stanford, Richard H; Nag, Arpita; Mapel, Douglas W; Lee, Todd A; Rosiello, Richard; Vekeman, Francis; Gauthier-Loiselle, Marjolaine; Duh, Mei Sheng; Merrigan, J F Philip; Schatz, Michael

    2016-07-01

    Current chronic obstructive pulmonary disease (COPD) exacerbation risk prediction models are based on clinical data not easily accessible to national quality-of-care organizations and payers. Models developed from data sources available to these organizations are needed. This study aimed to validate a risk measure constructed using pharmacy claims in patients with COPD. Administrative claims data were used to construct a risk model to test and validate the ratio of controller (maintenance) medications to total COPD medications (CTR) as an independent risk measure for COPD exacerbations. The ability of the CTR to predict the risk of COPD exacerbations was also assessed. This was a retrospective study using health insurance claims data from the Truven MarketScan database (2006-2011), whereby exacerbation risk factors of patients with COPD were observed over a 12-month period and exacerbations monitored in the following year. Exacerbations were defined as moderate (emergency department or outpatient treatment with oral corticosteroid dispensings within 7 d) or severe (hospital admission) on the basis of diagnosis codes. Models were developed and validated using split-sample data from the MarketScan database and further validated using the Reliant Medical Group database. The performance of prediction models was evaluated using C-statistics. A total of 258,668 patients with COPD from the MarketScan database were included. A CTR of greater than or equal to 0.3 was significantly associated with a reduced risk for any (adjusted odds ratio [OR], 0.91; 95% confidence interval [CI], 0.85-0.97), moderate (OR, 0.93; 95% CI, 0.87-1.00), or severe (OR, 0.87; 95% CI, 0.80-0.95) exacerbation. The CTR, at a ratio of greater than or equal to 0.3, was predictive in various subpopulations, including those without a history of asthma and those with or without a history of moderate/severe exacerbations. The C-statistics ranged from 0.750 to 0.761 for the development set and 0.714 to 0.761 in the validation sets, indicating the CTR performed well in predicting exacerbation risk. The ratio of controller to total medications dispensed for COPD is a measure that can easily be calculated using only pharmacy claims data. A CTR of greater than or equal to 0.3 can potentially be used as a quality-of-care measurement for prevention of exacerbations.
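
    The CTR itself is simple to compute from pharmacy claims. A minimal Python sketch follows, with invented example data, using the 0.3 threshold reported above.

        def controller_to_total_ratio(dispensings):
            """CTR: controller (maintenance) COPD medication dispensings
            divided by all COPD medication dispensings."""
            controller = sum(1 for d in dispensings if d["is_controller"])
            total = len(dispensings)
            return controller / total if total else None

        # Hypothetical 12-month claims history for one patient.
        claims = [{"drug": "tiotropium", "is_controller": True},
                  {"drug": "albuterol", "is_controller": False},
                  {"drug": "fluticasone/salmeterol", "is_controller": True}]
        ctr = controller_to_total_ratio(claims)
        print(ctr, "higher risk" if ctr is not None and ctr < 0.3 else "lower risk")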

  13. Electronic Reference Library: Silverplatter's Database Networking Solution.

    ERIC Educational Resources Information Center

    Millea, Megan

    Silverplatter's Electronic Reference Library (ERL) provides wide area network access to its databases using TCP/IP communications and client-server architecture. ERL has two main components: The ERL clients (retrieval interface) and the ERL server (search engines). ERL clients provide patrons with seamless access to multiple databases on multiple…

  14. 48 CFR 504.602-71 - Federal Procurement Data System-Public access to data.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Procurement Data System—Public access to data. (a) The FPDS database. The General Services Administration awarded a contract for creation and operation of the Federal Procurement Data System (FPDS) database. That database includes information reported by departments and agencies as required by Federal Acquisition...

  15. 48 CFR 504.602-71 - Federal Procurement Data System-Public access to data.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Procurement Data System—Public access to data. (a) The FPDS database. The General Services Administration awarded a contract for creation and operation of the Federal Procurement Data System (FPDS) database. That database includes information reported by departments and agencies as required by Federal Acquisition...

  16. DICOM image integration into an electronic medical record using thin viewing clients

    NASA Astrophysics Data System (ADS)

    Stewart, Brent K.; Langer, Steven G.; Taira, Ricky K.

    1998-07-01

    Purpose -- To integrate radiological DICOM images into our existing web-browsable Electronic Medical Record (MINDscape). Over the last five years the University of Washington has created a clinical data repository combining, in a distributed relational database, information from multiple departmental databases (MIND). A text-based view of this data, called the Mini Medical Record (MMR), has been available for three years. MINDscape, unlike the text-based MMR, provides a platform-independent, web browser view of the MIND dataset that can easily be linked to other information resources on the network. We have now added the integration of radiological images into MINDscape through a DICOM webserver. Methods/New Work -- We have integrated a commercial webserver that acts as a DICOM Storage Class Provider for our computed radiography (CR), computed tomography (CT), digital fluoroscopy (DF), magnetic resonance (MR) and ultrasound (US) scanning devices. These images can be accessed through CGI queries or by linking the image server database using ODBC or SQL gateways. This allows the use of dynamic HTML links to the images on the DICOM webserver from MINDscape, so that the radiology reports already resident in the MIND repository can be married with the associated images through the unique examination accession number generated by our Radiology Information System (RIS). The web browser plug-in used provides a wavelet decompression engine (up to 16 bits per pixel) and performs the following image manipulation functions: window/level, flip, invert, sort, rotate, zoom, cine-loop and save as JPEG. Results -- Radiological DICOM image sets (CR, CT, MR and US) are displayed with associated exam reports for referring physicians and clinicians anywhere within the widespread academic medical center on PCs, Macs, X-terminals and Unix computers. This system is also being used for a home teleradiology application. Conclusion -- Radiological DICOM images can be made available medical center-wide to physicians quickly using low-cost, ubiquitous, thin-client browsing technology and wavelet compression.

  17. Web client and ODBC access to legacy database information: a low cost approach.

    PubMed Central

    Sanders, N. W.; Mann, N. H.; Spengler, D. M.

    1997-01-01

    A new method has been developed for the Department of Orthopaedics of Vanderbilt University Medical Center to access departmental clinical data. Previously this data was stored only in the medical center's mainframe DB2 database; it is now additionally stored in a departmental SQL database. Access to this data is available via any ODBC-compliant front end or a web client. With a small budget and no full-time staff, we were able to give our department on-line access to many years' worth of patient data that was previously inaccessible. PMID:9357735
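
    A hedged sketch of the ODBC route described above, using the pyodbc library from Python. The DSN, credentials, and table layout are hypothetical; any ODBC-compliant front end could issue an equivalent query against the departmental SQL database.

        import pyodbc  # ODBC driver interface

        # Hypothetical DSN and schema.
        conn = pyodbc.connect("DSN=ortho_clinical;UID=reader;PWD=...")
        cursor = conn.cursor()
        cursor.execute("SELECT patient_id, visit_date, diagnosis "
                       "FROM visits WHERE visit_date >= ?", "1995-01-01")
        for row in cursor.fetchall():
            print(row.patient_id, row.visit_date, row.diagnosis)
        conn.close()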

  18. High-Performance Secure Database Access Technologies for HEP Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research, where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities, a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are ongoing in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications." There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture in which secure authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems' security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for tight integration of the secure authorization layer with the database server engine, resulting in both improved performance and improved security. Phase I has focused on the development of a proof-of-concept prototype using Argonne National Laboratory's (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project's current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.

  19. Preview of the BATSE Earth Occultation Catalog of Low Energy Gamma Ray Sources

    NASA Technical Reports Server (NTRS)

    Harmon, B. A.; Wilson, C. A.; Fishman, G. J.; McCollough, M. L.; Robinson, C. R.; Sahi, M.; Paciesas, W. S.; Zhang, S. N.

    1999-01-01

    The Burst and Transient Source Experiment (BATSE) aboard the Compton Gamma Ray Observatory (CGRO) has been detecting and monitoring point sources in the high energy sky since 1991. Although BATSE is best known for gamma ray bursts, it also monitors the sky for longer-lived sources of radiation. Using the Earth occultation technique to extract flux information, a catalog is being prepared of about 150 sources with potential emission in the large area detectors (20-1000 keV). The catalog will contain light curves, representative spectra, and parametric data for black hole and neutron star binaries, active galaxies, and supernova remnants. In this preview, we present light curves for persistent and transient sources, and also show examples of what type of information can be obtained from the BATSE Earth occultation database. Options for making the data easily accessible as an "on-line" WWW document are being explored.

  20. Discovering, Indexing and Interlinking Information Resources

    PubMed Central

    Celli, Fabrizio; Keizer, Johannes; Jaques, Yves; Konstantopoulos, Stasinos; Vudragović, Dušan

    2015-01-01

    The social media revolution is having a dramatic effect on the world of scientific publication. Scientists now publish their research interests, theories and outcomes across numerous channels, including personal blogs and other thematic web spaces where ideas, activities and partial results are discussed. Accordingly, information systems that facilitate access to scientific literature must learn to cope with this valuable and varied data, evolving to make this research easily discoverable and available to end users. In this paper we describe the incremental process of discovering web resources in the domain of agricultural science and technology. Making use of Linked Open Data methodologies, we interlink a wide array of custom-crawled resources with the AGRIS bibliographic database in order to enrich the user experience of the AGRIS website. We also discuss the SemaGrow Stack, a query federation and data integration infrastructure used to estimate the semantic distance between crawled web resources and AGRIS. PMID:26834982

  1. Automated Cough Assessment on a Mobile Platform

    PubMed Central

    2014-01-01

    The development of an Automated System for Asthma Monitoring (ADAM) is described. This consists of a consumer-electronics mobile platform running a custom application. The application acquires an audio signal from an external user-worn microphone connected to the device's analog-to-digital converter (microphone input). This signal is processed to determine the presence or absence of cough sounds. Symptom tallies and raw audio waveforms are recorded and made easily accessible for later review by a healthcare provider. The symptom detection algorithm is based upon standard speech recognition and machine learning paradigms and consists of an audio feature extraction step followed by a Hidden Markov Model-based Viterbi decoder that has been trained on a large database of audio examples from a variety of subjects. Multiple Hidden Markov Model topologies and orders are studied. Performance of the recognizer is presented in terms of sensitivity and false alarm rate as determined in a cross-validation test. PMID:25506590
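
    The pipeline described (audio feature extraction followed by HMM-based decoding) can be sketched with common Python libraries. Note this simplified version compares per-class HMM likelihoods rather than running ADAM's Viterbi decoder, and the file name, feature choice (MFCCs), and model sizes are assumptions.

        import librosa                 # audio feature extraction
        from hmmlearn import hmm       # Hidden Markov Model library

        # Feature extraction: MFCCs are a common audio feature; the record does
        # not specify ADAM's exact features, so treat this as illustrative.
        y, sr = librosa.load("recording.wav", sr=None)       # hypothetical file
        feats = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # (frames, 13)

        # One HMM per sound class ("cough" vs. "other"), trained on labeled
        # examples; classification picks the model with higher log-likelihood.
        cough_model = hmm.GaussianHMM(n_components=3)
        other_model = hmm.GaussianHMM(n_components=3)
        # cough_model.fit(train_cough_feats)   # training data not shown
        # other_model.fit(train_other_feats)
        # is_cough = cough_model.score(feats) > other_model.score(feats)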

  2. NGL Viewer: a web application for molecular visualization.

    PubMed

    Rose, Alexander S; Hildebrand, Peter W

    2015-07-01

    The NGL Viewer (http://proteinformatics.charite.de/ngl) is a web application for the visualization of macromolecular structures. By fully adopting capabilities of modern web browsers, such as WebGL, for molecular graphics, the viewer can interactively display large molecular complexes and is also unaffected by the retirement of third-party plug-ins like Flash and Java Applets. Generally, the web application offers comprehensive molecular visualization through a graphical user interface so that life scientists can easily access and profit from available structural data. It supports common structural file-formats (e.g. PDB, mmCIF) and a variety of molecular representations (e.g. 'cartoon, spacefill, licorice'). Moreover, the viewer can be embedded in other web sites to provide specialized visualizations of entries in structural databases or results of structure-related calculations. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. The LSST metrics analysis framework (MAF)

    NASA Astrophysics Data System (ADS)

    Jones, R. L.; Yoachim, Peter; Chandrasekharan, Srinivasan; Connolly, Andrew J.; Cook, Kem H.; Ivezic, Željko; Krughoff, K. S.; Petry, Catherine; Ridgway, Stephen T.

    2014-07-01

    We describe the Metrics Analysis Framework (MAF), an open-source Python framework developed to provide a user-friendly, customizable, easily extensible set of tools for analyzing data sets. MAF is part of the Large Synoptic Survey Telescope (LSST) Simulations effort. Its initial goal is to provide a tool to evaluate LSST Operations Simulation (OpSim) simulated surveys to help understand the effects of telescope scheduling on survey performance; however, MAF can be applied to a much wider range of datasets. The building blocks of the framework are Metrics (algorithms to analyze a given quantity of data), Slicers (which subdivide the overall data set into smaller data slices as relevant for each Metric), and Database classes (which access the dataset and read data into memory). We describe how these building blocks work together, and provide an example of using MAF to evaluate different dithering strategies. We also outline how users can write their own custom Metrics and use these within the framework.
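
    A sketch of a custom Metric of the kind the record says users can write. The BaseMetric import path, method signature, and column name are assumed from the LSST simulations codebase of the period and should be treated as illustrative.

        import numpy as np
        from lsst.sims.maf.metrics import BaseMetric   # assumed import path

        class MedianSeeingMetric(BaseMetric):
            """Median seeing of all visits in a data slice (hypothetical example)."""
            def __init__(self, seeing_col="finSeeing", **kwargs):
                self.seeing_col = seeing_col
                super().__init__(col=[seeing_col], **kwargs)

            def run(self, dataSlice, slicePoint=None):
                # dataSlice is the subset of visits selected by the Slicer.
                return np.median(dataSlice[self.seeing_col])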

  4. Chemotherapy-Induced Nausea and Vomiting Mitigation With Music Interventions.

    PubMed

    Kiernan, Jason M; Conradi Stark, Jody; Vallerand, April H

    2018-01-01

    Despite three decades of studies examining music interventions as a mitigant of chemotherapy-induced nausea and vomiting (CINV), to date, no systematic review of this literature exists. PubMed, Scopus, PsycInfo®, CINAHL®, Cochrane Library, and Google Scholar were searched; keywords for all databases were music, chemotherapy, and nausea. All studies were appraised for methodology and results. Ten studies met inclusion criteria for review. Sample sizes were generally small and study designs nonrandomized. The locus of control for music selection was more often with the investigator than with the participant. Few studies controlled for the emetogenicity of the chemotherapy administered or for known patient-specific risk factors for CINV. The existing data have been largely generated by nurse scientists, and implications for nursing practice are many, because music interventions are low-cost, easily accessible, and without known adverse effects. However, this specific body of knowledge requires additional substantive inquiry to generate clinically relevant data.

  5. Flexible solution for interoperable cloud healthcare systems.

    PubMed

    Vida, Mihaela Marcella; Lupşe, Oana Sorina; Stoicu-Tivadar, Lăcrămioara; Bernad, Elena

    2012-01-01

    It is extremely important for the healthcare domain to have standardized communication, because it will improve the quality of information and, in the end, the resulting benefits will improve the quality of patients' lives. The standards proposed to be used are HL7 CDA and CCD. For better access to the medical data, a solution based on cloud computing (CC) is investigated. CC is a technology that supports flexibility, seamless care, and reduced costs of the medical act. To ensure interoperability between healthcare information systems, a solution creating a Web Custom Control is presented. The control shows the database tables and fields used to configure the two standards. This control will facilitate the work of the medical staff and hospital administrators, because they can configure the local system easily and prepare it for communication with other systems. The resulting information will have higher quality and will provide knowledge that supports better patient management and diagnosis.

  6. The braingraph.org database of high resolution structural connectomes and the brain graph tools.

    PubMed

    Kerepesi, Csaba; Szalkai, Balázs; Varga, Bálint; Grolmusz, Vince

    2017-10-01

    Based on the data of the NIH-funded Human Connectome Project, we have computed structural connectomes of 426 human subjects in five different resolutions of 83, 129, 234, 463 and 1015 nodes and several edge weights. The graphs are given in an anatomically annotated GraphML format that facilitates further processing and visualization. For 96 subjects, anatomically classified sub-graphs can also be accessed, formed from the vertices corresponding to distinct lobes or even smaller regions of interest of the brain. For example, one can easily download and study the connectomes of these 96 subjects restricted to the frontal lobes or just to the left precuneus. Partially directed connectomes of 423 subjects are also available for download. We also present a GitHub-deposited set of tools, called the Brain Graph Tools, for several connectome-processing tasks, on the site http://braingraph.org.
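
    Because the graphs are distributed as GraphML, they can be loaded directly with standard tools. A short Python sketch using networkx follows; the file name and the node attribute used for the lobe label are assumptions.

        import networkx as nx

        # Hypothetical file name for one downloaded connectome.
        g = nx.read_graphml("subject_0001_connectome_83nodes.graphml")
        print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")

        # Restrict to one lobe using an assumed node attribute name.
        frontal = g.subgraph(n for n, d in g.nodes(data=True)
                             if d.get("lobe") == "frontal")
        print(frontal.number_of_edges(), "edges within the frontal lobe")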

  7. Ebbie: automated analysis and storage of small RNA cloning data using a dynamic web server

    PubMed Central

    Ebhardt, H Alexander; Wiese, Kay C; Unrau, Peter J

    2006-01-01

    Background DNA sequencing is used ubiquitously: from deciphering genomes [1] to determining the primary sequence of small RNAs (smRNAs) [2-5]. The cloning of smRNAs is currently the most conventional method to determine the actual sequence of these important regulators of gene expression. Typical smRNA cloning projects involve the sequencing of hundreds to thousands of smRNA clones that are delimited at their 5' and 3' ends by fixed sequence regions. These primers result from the biochemical protocol used to isolate and convert the smRNA into clonable PCR products. Recently we completed an smRNA cloning project involving tobacco plants, where analysis was required for ~700 smRNA sequences [6]. Finding no easily accessible research tool to enter and analyze smRNA sequences, we developed Ebbie to assist us with our study. Results Ebbie is a semi-automated smRNA cloning data processing algorithm, which initially searches a DNA sequencing text file for any substring that is flanked by two constant strings. The substring, also termed smRNA or insert, is stored in a MySQL and BlastN database. These inserts are then compared using BlastN to locally installed databases, allowing the rapid comparison of the insert to both the growing smRNA database and to other static sequence databases. Our laboratory used Ebbie to analyze scores of DNA sequencing data originating from an smRNA cloning project [6]. Through its built-in instant analysis of all inserts using BlastN, we were able to quickly identify 33 groups of smRNAs from ~700 database entries. This clustering allowed the easy identification of novel and highly expressed clusters of smRNAs. Ebbie is available under GNU GPL and currently implemented on. Conclusion Ebbie was designed for medium-sized smRNA cloning projects with about 1,000 database entries [6-8]. Ebbie can be used for any type of sequence analysis where two constant primer regions flank a sequence of interest. The reliable storage of inserts and their annotation in a MySQL database, and the BlastN [9] comparison of new inserts to dynamic and static databases, make it a powerful new tool in any laboratory using DNA sequencing. Ebbie also prevents manual mistakes during the excision process and speeds up annotation and data entry. Once the server is installed locally, its access can be restricted to protect sensitive new DNA sequencing data. Ebbie was primarily designed for smRNA cloning projects, but can be applied to a variety of RNA and DNA cloning projects [2,3,10,11]. PMID:16584563
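
    Ebbie's core operation, extracting the insert that lies between two constant flanking strings, can be expressed as a short regular expression in Python. The primer sequences below are hypothetical.

        import re

        # Extract the insert (smRNA) lying between two constant flanking
        # sequences in raw sequencing text. Primer sequences are hypothetical.
        FIVE_PRIME = "GGTACC"
        THREE_PRIME = "GAATTC"
        pattern = re.compile(re.escape(FIVE_PRIME) + "([ACGT]+?)" + re.escape(THREE_PRIME))

        read = "NNNGGTACCTAGCTTATCAGACTGATGTTGAGAATTCNNN"
        match = pattern.search(read)
        if match:
            insert = match.group(1)   # the insert, ready for storage and BLAST
            print(insert)             # TAGCTTATCAGACTGATGTTGA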

  8. Mapping Application for Penguin Populations and Projected Dynamics (MAPPPD): Data and Tools for Dynamic Management and Decision Support

    NASA Technical Reports Server (NTRS)

    Humphries, G. R. W.; Naveen, R.; Schwaller, M.; Che-Castaldo, C.; McDowall, P.; Schrimpf, M.; Schrimpf, Michael; Lynch, H. J.

    2017-01-01

    The Mapping Application for Penguin Populations and Projected Dynamics (MAPPPD) is a web-based, open access, decision-support tool designed to assist scientists, non-governmental organizations and policy-makers working to meet the management objectives set forth by the Commission for the Conservation of Antarctic Marine Living Resources (CCAMLR) and other components of the Antarctic Treaty System (ATS) (that is, Consultative Meetings and the ATS Committee on Environmental Protection). MAPPPD was designed specifically to complement existing efforts such as the CCAMLR Ecosystem Monitoring Program (CEMP) and the ATS site guidelines for visitors. The database underlying MAPPPD includes all publicly available (published and unpublished) count data on emperor, gentoo, Adélie and chinstrap penguins in Antarctica. Penguin population models are used to assimilate the available data into estimates of abundance for each site and year. Results are easily aggregated across multiple sites to obtain abundance estimates over any user-defined area of interest. A front-end web interface located at www.penguinmap.com provides free and ready access to the most recent count and modelled data, and can act as a facilitator for data transfer between scientists and Antarctic stakeholders to help inform management decisions for the continent.

  9. Initiating a Human Variome Project Country Node.

    PubMed

    AlAama, Jumana; Smith, Timothy D; Lo, Alan; Howard, Heather; Kline, Alexandria A; Lange, Matthew; Kaput, Jim; Cotton, Richard G H

    2011-05-01

    Genetic diseases are a pressing global health problem, and countering them requires comprehensive access to basic clinical and genetic data. The creation of regional and international databases that can be easily accessed by clinicians and diagnostic labs will greatly improve our ability to accurately diagnose and treat patients with genetic disorders. The Human Variome Project is currently working in conjunction with human genetics societies to achieve this by establishing systems to collect every mutation reported by a diagnostic laboratory, clinic, or research laboratory in a country and store these within a national repository, or HVP Country Node. Nodes have already been initiated in Australia, Belgium, China, Egypt, Malaysia, and Kuwait. Each is examining how to systematically collect and share genetic, clinical, and biochemical information in a country-specific manner that is sensitive to local ethical and cultural issues. This article gathers cases of genetic data collection within countries and takes recommendations from the global community to develop a procedure for countries wishing to establish their own collection system as part of the Human Variome Project. We hope this may lead to standard practices that facilitate global collection of data and allow efficient use in clinical practice, research and therapy. © 2011 Wiley-Liss, Inc.

  10. Understanding the patient perspective on research access to national health records databases for conduct of randomized registry trials.

    PubMed

    Avram, Robert; Marquis-Gravel, Guillaume; Simard, François; Pacheco, Christine; Couture, Étienne; Tremblay-Gravel, Maxime; Desplantie, Olivier; Malhamé, Isabelle; Bibas, Lior; Mansour, Samer; Parent, Marie-Claude; Farand, Paul; Harvey, Luc; Lessard, Marie-Gabrielle; Ly, Hung; Liu, Geoffrey; Hay, Annette E; Marc Jolicoeur, E

    2018-07-01

    Use of health administrative databases is proposed for screening and monitoring of participants in randomized registry trials. However, access to these databases raises privacy concerns. We assessed patients' preferences regarding use of personal information to link their research records with national health databases, as part of a hypothetical randomized registry trial. Cardiology patients were invited to complete an anonymous self-reported survey that ascertained preferences related to the concept of accessing government health databases for research, the type of personal identifiers to be shared, and the type of follow-up preferred as participants in a hypothetical trial. A total of 590 responders completed the survey (90% response rate); the majority were Caucasian (90.4%) and male (70.0%), with a median age of 65 years (interquartile range, 8). Most responders (80.3%) would grant researchers access to health administrative databases for screening and follow-up. To this end, responders endorsed the recording of their personal identifiers by researchers for future record linkage, including their name (90%) and health insurance number (83.9%), but fewer responders agreed with the recording of their social security number (61.4%, p<0.05 with date of birth as reference). Prior participation in a trial predicted agreement to grant researchers access to the administrative databases (OR: 1.69, 95% confidence interval: 1.03-2.90; p=0.04). The majority of cardiology patients surveyed supported use of their personal identifiers to access administrative health databases and conduct long-term monitoring in the context of a randomized registry trial. Copyright © 2018 Elsevier Ireland Ltd. All rights reserved.

  11. IntPath--an integrated pathway gene relationship database for model organisms and important pathogens.

    PubMed

    Zhou, Hufeng; Jin, Jingjing; Zhang, Haojun; Yi, Bo; Wozniak, Michal; Wong, Limsoon

    2012-01-01

    Pathway data are important for understanding the relationships between genes, proteins and many other molecules in living organisms. Pathway gene relationships are crucial information for guidance, prediction, reference and assessment in biochemistry, computational biology, and medicine. Many well-established databases--e.g., KEGG, WikiPathways, and BioCyc--are dedicated to collecting pathway data for public access. However, the effectiveness of these databases is hindered by issues such as incompatible data formats, inconsistent molecular representations, inconsistent molecular relationship representations, inconsistent referrals to pathway names, and incomplete coverage in any single database. In this paper, we overcome these issues through extraction, normalization and integration of pathway data from several major public databases (KEGG, WikiPathways, BioCyc, etc.). We built a database that not only hosts our integrated pathway gene relationship data for public access but also maintains the necessary updates in the long run. This public repository is named IntPath (Integrated Pathway gene relationship database for model organisms and important pathogens). Four organisms--S. cerevisiae, M. tuberculosis H37Rv, H. sapiens and M. musculus--are included in this version (V2.0) of IntPath. IntPath uses the "full unification" approach to ensure that no data are deleted and no noise is introduced in this process. Therefore, IntPath contains much richer pathway-gene and pathway-gene-pair relationships and a much larger number of non-redundant genes and gene pairs than any of the single-source databases. The gene relationships of each gene (measured by average node degree) per pathway are significantly richer, and the gene relationships in each pathway (measured by average number of gene pairs per pathway) are also considerably richer in the integrated pathways. Moderate manual curation is involved to remove errors and noise from the source data (e.g., the gene ID errors in WikiPathways and relationship errors in KEGG). We turn complicated and incompatible XML data formats and inconsistent gene and gene-relationship representations from different source databases into normalized and unified pathway-gene and pathway-gene-pair relationships neatly recorded in simple tab-delimited text format and MySQL tables, which facilitates convenient automatic computation and large-scale referencing in many related studies. IntPath data can be downloaded in text format or as a MySQL dump; they can also be retrieved and analyzed conveniently through a web service by local programs or through a web interface by mouse clicks. Several useful analysis tools are also provided in IntPath. We have overcome in IntPath the issues of compatibility, consistency, and comprehensiveness that often hamper effective use of pathway databases. We have included four organisms in the current release of IntPath. Our methodology and programs described in this work can be easily applied to other organisms, and we will include more model organisms and important pathogens in future releases of IntPath. IntPath maintains regular updates and is freely available at http://compbio.ddns.comp.nus.edu.sg:8080/IntPath.
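
    Since the integrated pathway-gene-pair data are distributed as simple tab-delimited text, the average-node-degree metric mentioned above is straightforward to compute client-side. The Python sketch below assumes a three-column layout (pathway, gene A, gene B) purely for illustration; it is not IntPath's documented schema.

        from collections import defaultdict

        # Assumed layout, one gene pair per line:
        #   pathway_id <TAB> gene_a <TAB> gene_b
        def average_node_degree(path: str) -> dict:
            neighbors = defaultdict(lambda: defaultdict(set))
            with open(path) as fh:
                for line in fh:
                    pathway, gene_a, gene_b = line.rstrip("\n").split("\t")
                    neighbors[pathway][gene_a].add(gene_b)
                    neighbors[pathway][gene_b].add(gene_a)
            # Average node degree = mean number of interaction partners per gene.
            return {p: sum(len(partners) for partners in genes.values()) / len(genes)
                    for p, genes in neighbors.items()}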

  12. CM-DataONE: A Framework for collaborative analysis of climate model output

    NASA Astrophysics Data System (ADS)

    Xu, Hao; Bai, Yuqi; Li, Sha; Dong, Wenhao; Huang, Wenyu; Xu, Shiming; Lin, Yanluan; Wang, Bin

    2015-04-01

    CM-DataONE is a distributed collaborative analysis framework for climate model data that aims to overcome the data-access barriers posed by ever-increasing file sizes and to accelerate the research process. As the data volume involved in projects such as the fifth Coupled Model Intercomparison Project (CMIP5) has reached petabytes, conventional methods for analysis and diagnosis of model outputs have become time-consuming and redundant. CM-DataONE is developed for data publishers and researchers from relevant areas. It enables easy access to distributed data and provides extensible analysis functions based on tools such as the NCAR Command Language, NetCDF Operators (NCO) and Climate Data Operators (CDO). CM-DataONE can be easily installed, configured, and maintained. The main web application has two separate parts which communicate with each other through APIs based on the HTTP protocol. The analytic server is designed to be installed in each data node, while a data portal can be configured anywhere and connect to the nearest node. Functions such as data query, analytic task submission, status monitoring, visualization and product downloading are provided to end users by the data portal. Data conforming to the CMIP5 model output format in each peer node can be scanned by the server and mapped to a global information database. A scheduler included in the server is responsible for task decomposition, distribution and consolidation. Analysis functions are always executed where the data reside. The analysis function package included in the server provides commonly used functions such as EOF analysis, trend analysis and time-series analysis. Functions are coupled with data by XML descriptions and can be easily extended. Various types of results can be obtained by users for further studies. This framework has significantly decreased the amount of data to be transmitted and improved efficiency in model intercomparison jobs by supporting online analysis and multi-node collaboration. For end users, data query is accelerated and the size of data to be downloaded is reduced. Methodology can easily be shared among scientists, avoiding unnecessary replication. Currently, a prototype of CM-DataONE has been deployed on two data nodes at Tsinghua University.
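
    Because the portal and the analytic servers communicate over HTTP APIs, a client-side interaction can be sketched in a few lines of Python. Everything below — the endpoint URL, route names, and JSON fields — is a hypothetical placeholder meant only to illustrate the submit-then-poll pattern the abstract describes.

        import requests

        PORTAL = "http://example-portal/api"  # hypothetical endpoint

        # Submit an analysis task to run where the data reside (e.g., an EOF
        # analysis on one CMIP5 variable), then poll its status.
        task = {
            "function": "eof_analysis",
            "experiment": "historical",
            "variable": "tas",
            "period": "1950-2005",
        }
        job = requests.post(f"{PORTAL}/tasks", json=task, timeout=30).json()
        status = requests.get(f"{PORTAL}/tasks/{job['id']}", timeout=30).json()
        print(status["state"])  # e.g. queued / running / done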

  13. The PMDB Protein Model Database

    PubMed Central

    Castrignanò, Tiziana; De Meo, Paolo D'Onorio; Cozzetto, Domenico; Talamo, Ivano Giuseppe; Tramontano, Anna

    2006-01-01

    The Protein Model Database (PMDB) is a public resource aimed at storing manually built 3D models of proteins. The database is designed to provide access to models published in the scientific literature, together with validating experimental data. It is a relational database and currently contains >74 000 models for ∼240 proteins. The system allows predictors to submit models, along with related supporting evidence, and users to download them through a simple and intuitive interface. Users can navigate the database and retrieve models referring to the same target protein or to different regions of the same protein. Each model is assigned a unique identifier that allows interested users to directly access the data. PMID:16381873

  14. Cloud-Based Computational Tools for Earth Science Applications

    NASA Astrophysics Data System (ADS)

    Arendt, A. A.; Fatland, R.; Howe, B.

    2015-12-01

    Earth scientists are increasingly required to think across disciplines and utilize a wide range of datasets in order to solve complex environmental challenges. Although significant progress has been made in distributing data, researchers must still invest heavily in developing computational tools to accommodate their specific domain. Here we document our development of lightweight computational data systems aimed at enabling rapid data distribution, analytics and problem solving tools for Earth science applications. Our goal is for these systems to be easily deployable, scalable and flexible to accommodate new research directions. As an example we describe "Ice2Ocean", a software system aimed at predicting runoff from snow and ice in the Gulf of Alaska region. Our backend components include relational database software to handle tabular and vector datasets, Python tools (NumPy, pandas and xray) for rapid querying of gridded climate data, and an energy and mass balance hydrological simulation model (SnowModel). These components are hosted in a cloud environment for direct access across research teams, and can also be accessed via API web services using a REST interface. This API is a vital component of our system architecture, as it enables quick integration of our analytical tools across disciplines, and can be accessed by any existing data distribution centers. We will showcase several data integration and visualization examples to illustrate how our system has expanded our ability to conduct cross-disciplinary research.
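
    One of the backend components named above is the Python stack for querying gridded climate data (NumPy, pandas and xray — the latter since renamed xarray). The sketch below shows the kind of spatial/temporal subset such a system would serve; the file name, variable name and coordinate conventions are assumptions for illustration.

        import xarray as xr

        # Open a gridded forcing file (hypothetical path) and pull a subset
        # for the Gulf of Alaska region; "xray" is the earlier name of the
        # package now distributed as xarray.
        ds = xr.open_dataset("gulf_of_alaska_forcing.nc")
        subset = ds["air_temperature"].sel(
            time=slice("2014-01-01", "2014-12-31"),
            lat=slice(55, 62),
            lon=slice(-155, -130),
        )
        print(float(subset.mean()))  # regional annual mean, as a quick check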

  15. [The database server for the medical bibliography database at Charles University].

    PubMed

    Vejvalka, J; Rojíková, V; Ulrych, O; Vorísek, M

    1998-01-01

    In the medical community, bibliographic databases are widely accepted as a most important source of information for both theoretical and clinical disciplines. To improve access to medical bibliographic databases at Charles University, a database server (ERL by SilverPlatter) was set up at the 2nd Faculty of Medicine in Prague. The server, accessible over the Internet 24 hours a day, 7 days a week, now hosts 14 years of MEDLINE and 10 years of EMBASE Paediatrics. Two different strategies are available for connecting to the server: a specialized client program that communicates over the Internet (suitable for professional searching), and web-based access that requires no specialized software (other than a WWW browser) on the client side. The server is now offered to the academic community to host further databases, possibly subscribed to by consortia whose individual members would not subscribe to them on their own.

  16. Database citation in supplementary data linked to Europe PubMed Central full text biomedical articles.

    PubMed

    Kafkas, Şenay; Kim, Jee-Hyub; Pi, Xingjun; McEntyre, Johanna R

    2015-01-01

    In this study, we present an analysis of data citation practices in full text research articles and their corresponding supplementary data files, made available in the Open Access set of articles from Europe PubMed Central. Our aim is to investigate whether supplementary data files should be considered as a source of information for integrating the literature with biomolecular databases. Using text-mining methods to identify and extract a variety of core biological database accession numbers, we found that the supplemental data files contain many more database citations than the body of the article, and that those citations often take the form of a relatively small number of articles citing large collections of accession numbers in text-based files. Moreover, citation of value-added databases derived from submission databases (such as Pfam, UniProt or Ensembl) is common, demonstrating the reuse of these resources as datasets in themselves. All the database accession numbers extracted from the supplementary data are publicly accessible from http://dx.doi.org/10.5281/zenodo.11771. Our study suggests that supplementary data should be considered when linking articles with data, in curation pipelines, and in information retrieval tasks in order to make full use of the entire research article. These observations highlight the need to improve the management of supplemental data in general, in order to make this information more discoverable and useful.
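
    The text-mining step described above — spotting database accession numbers in free text — is typically regex-driven. The Python sketch below uses two deliberately simplified, illustrative patterns; real curation pipelines (including Europe PMC's) use far more careful rules and validation, and the patterns here will produce false positives.

        import re

        # Simplified, illustrative accession patterns (not production rules).
        PATTERNS = {
            "uniprot": re.compile(r"\b[OPQ][0-9][A-Z0-9]{3}[0-9]\b"),
            "pdb":     re.compile(r"\b[1-9][A-Za-z0-9]{3}\b"),
        }

        def find_accessions(text: str) -> dict:
            return {name: pat.findall(text) for name, pat in PATTERNS.items()}

        print(find_accessions("Structures 1ABC and 2XYZ map to UniProt P12345."))
        # {'uniprot': ['P12345'], 'pdb': ['1ABC', '2XYZ']}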

  17. 17 CFR 162.3 - Affiliate marketing opt out and exceptions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... places that information into a common database that the covered affiliate may access. (3) Service... maintains or accesses a common database that the covered affiliate may access) receives eligibility... the notice and opt-out provisions under other privacy rules under the FCRA, the GLB Act or the CEA. ...

  18. 17 CFR 162.3 - Affiliate marketing opt out and exceptions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... places that information into a common database that the covered affiliate may access. (3) Service... maintains or accesses a common database that the covered affiliate may access) receives eligibility... the notice and opt-out provisions under other privacy rules under the FCRA, the GLB Act or the CEA. ...

  19. 17 CFR 162.3 - Affiliate marketing opt out and exceptions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... places that information into a common database that the covered affiliate may access. (3) Service... maintains or accesses a common database that the covered affiliate may access) receives eligibility... the notice and opt-out provisions under other privacy rules under the FCRA, the GLB Act or the CEA. ...

  20. EROS Main Image File: A Picture Perfect Database for Landsat Imagery and Aerial Photography.

    ERIC Educational Resources Information Center

    Jack, Robert F.

    1984-01-01

    Describes Earth Resources Observation System online database, which provides access to computerized images of Earth obtained via satellite. Highlights include retrieval system and commands, types of images, search strategies, other online functions, and interpretation of accessions. Satellite information, sources and samples of accessions, and…

  1. GraQL: A Query Language for High-Performance Attributed Graph Databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Castellana, Vito G.; Morari, Alessandro

    Graph databases have gained increasing interest in the last few years due to the emergence of data sources which are not easily analyzable in traditional relational models or for which a graph data model is the natural representation. In order to understand the design and implementation choices for an attributed graph database backend and query language, we have started to design our infrastructure for attributed graph databases. In this paper, we describe the design considerations of our in-memory attributed graph database system with a particular focus on the data definition and query language components.

  2. E&P data lifecycle: a case study in Petrobras Company

    NASA Astrophysics Data System (ADS)

    Mastella, Laura; Campinho, Vania; Alonso, João

    2013-04-01

    Petrobras, the largest Brazilian petroleum company, has been studying and working on Brazilian sedimentary basins for nearly 60 years. The corporate database currently registers over 25,000 wells and all their associated products (geophysical logs, cores, sidewall samples) and analyses. There are thousands of samples, descriptions, pictures, measurements, and other scientific data resulting from petroleum exploration and production. These data constitute a huge scientific database which supports Petrobras' economic strategy. Geological models built during the exploration phase continue to be refined during both the development and production phases: data must be continually manipulated, correlated and integrated. As E&P assets reach maturity, a new cycle starts: data are re-analyzed and new hypotheses are made in order to increase hydrocarbon productivity. Initial geological models then evolve from accumulated knowledge throughout all the E&P phases. Therefore quality control must be performed in the first phases of data acquisition, i.e., during the exploration phase, to avoid reworking and loss of information. The last decade witnessed a great evolution in petroleum industry technology. As a consequence, the complexity and particulars of the information generated have increased accordingly. Current technology has also facilitated access to networks and databases, making it possible to store large amounts of information. This scenario makes available a large mass of information from different sources, which uses heterogeneous vocabulary as well as different scales and measurement units. In this context, knowledge can become diluted and the total amount of information cannot be applied in the E&P process. In order to provide adequate data governance, data input is controlled by rules, standards and policies, implemented by corporate software systems. Petrobras' integrated E&P database is a centralized repository to which all E&P systems have access. The quality of the data that goes into the database is increased by means of information management practices: data validation; language internationalization; and dictionaries, patterns and metadata. Moreover, stored data must be kept consistent, and any changes to the data should be registered while maintaining, if possible, the original data, associating the modification with its author, timestamp and reason. These practices lead to the creation of a database that serves and benefits the company's knowledge. Information retrieval and visualization are among the main issues concerning petroleum industries. In order to make significant information available to end users, it is fundamental to have an efficient data integration strategy. The integration of E&P data, such as geological, geophysical, geographical and operational data, is the end goal of the exploratory activities. Petrobras' corporate systems are evolving towards it so as to make available various data from diverse sources and to create a dashboard that can be easily accessed at any time by geoscientists and reservoir engineers. The main goal is to maintain the scientific integrity of information, from generators to consumers, during the entire E&P data life cycle.

  3. Database Objects vs Files: Evaluation of alternative strategies for managing large remote sensing data

    NASA Astrophysics Data System (ADS)

    Baru, Chaitan; Nandigam, Viswanath; Krishnan, Sriram

    2010-05-01

    Increasingly, the geoscience user community expects modern IT capabilities to be available in service of their research and education activities, including the ability to easily access and process large remote sensing datasets via online portals such as GEON (www.geongrid.org) and OpenTopography (opentopography.org). However, serving such datasets via online data portals presents a number of challenges. In this talk, we will evaluate the pros and cons of alternative storage strategies for management and processing of such datasets, using binary large objects (BLOBs) in database systems versus files in the Hadoop Distributed File System (HDFS). The storage and I/O requirements for providing online access to large datasets dictate the need for declustering data across multiple disks, for capacity as well as for bandwidth and response-time performance. This requires partitioning larger files into a set of smaller files, and is accompanied by the concomitant requirement of managing large numbers of files. Storing these sub-files as BLOBs in a shared-nothing database implemented across a cluster provides the advantage that all the distributed storage management is done by the DBMS. Furthermore, subsetting and processing routines can be implemented as user-defined functions (UDFs) on these BLOBs and would run in parallel across the set of nodes in the cluster. On the other hand, there are both storage overheads and constraints, and software licensing dependencies created by such an implementation. Another approach is to store the files in an external filesystem with pointers to them from within database tables. The filesystem may be a regular UNIX filesystem, a parallel filesystem, or HDFS. In the HDFS case, HDFS would provide the file management capability, while the subsetting and processing routines would be implemented as Hadoop programs using the MapReduce model. Hadoop and its related software libraries are freely available. Another consideration is the strategy used for partitioning large data collections, and large datasets within collections, using round-robin vs hash vs range partitioning methods. Each has different characteristics in terms of spatial locality of data and the resulting degree of declustering of the computations on the data. Furthermore, we have observed that, in practice, there can be large variations in the frequency of access to different parts of a large data collection and/or dataset, thereby creating "hotspots" in the data. We will evaluate the ability of the different approaches to deal effectively with such hotspots, and alternative strategies for handling them.
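
    The three partitioning strategies compared above can be made concrete with a few toy placement functions. This is a minimal Python sketch of the general techniques, not the authors' implementation; node counts and range boundaries are arbitrary.

        import hashlib

        def round_robin(item_index: int, n_nodes: int) -> int:
            # Spread items evenly in arrival order; no spatial locality.
            return item_index % n_nodes

        def hash_partition(key: str, n_nodes: int) -> int:
            # Deterministic placement by key; declusters hotspots
            # but destroys spatial locality.
            digest = hashlib.md5(key.encode()).hexdigest()
            return int(digest, 16) % n_nodes

        def range_partition(value: float, boundaries: list) -> int:
            # Preserves spatial locality (e.g., tiles ordered by latitude),
            # at the risk of hotspots when access concentrates in one range.
            for node, upper in enumerate(boundaries):
                if value < upper:
                    return node
            return len(boundaries)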

  4. Microcomputer-Based Access to Machine-Readable Numeric Databases.

    ERIC Educational Resources Information Center

    Wenzel, Patrick

    1988-01-01

    Describes the use of microcomputers and relational database management systems to improve access to numeric databases by the Data and Program Library Service at the University of Wisconsin. The internal records management system, in-house reference tools, and plans to extend these tools to the entire campus are discussed. (3 references) (CLB)

  5. Ionic Liquids Database- (ILThermo)

    National Institute of Standards and Technology Data Gateway

    SRD 147 NIST Ionic Liquids Database- (ILThermo) (Web, free access)   IUPAC Ionic Liquids Database, ILThermo, is a free web research tool that allows users worldwide to access an up-to-date data collection from the publications on experimental investigations of thermodynamic, and transport properties of ionic liquids as well as binary and ternary mixtures containing ionic liquids.

  6. FirstSearch and NetFirst--Web and Dial-up Access: Plus Ca Change, Plus C'est la Meme Chose?

    ERIC Educational Resources Information Center

    Koehler, Wallace; Mincey, Danielle

    1996-01-01

    Compares and evaluates the differences between OCLC's dial-up and World Wide Web FirstSearch access methods and their interfaces with the underlying databases. Also examines NetFirst, OCLC's new Internet catalog, the only Internet tracking database from a "traditional" database service. (Author/PEN)

  7. 48 CFR 504.605-70 - Federal Procurement Data System-Public access to data.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Procurement Data System—Public access to data. (a) The FPDS database. The General Services Administration awarded a contract for creation and operation of the Federal Procurement Data System (FPDS) database. That database includes information reported by departments and agencies as required by FAR subpart 4.6. One of...

  8. 48 CFR 504.605-70 - Federal Procurement Data System-Public access to data.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Procurement Data System—Public access to data. (a) The FPDS database. The General Services Administration awarded a contract for creation and operation of the Federal Procurement Data System (FPDS) database. That database includes information reported by departments and agencies as required by FAR subpart 4.6. One of...

  9. 48 CFR 504.605-70 - Federal Procurement Data System-Public access to data.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Procurement Data System—Public access to data. (a) The FPDS database. The General Services Administration awarded a contract for creation and operation of the Federal Procurement Data System (FPDS) database. That database includes information reported by departments and agencies as required by FAR subpart 4.6. One of...

  10. Tao of Gateway: Providing Internet Access to Licensed Databases.

    ERIC Educational Resources Information Center

    McClellan, Gregory A.; Garrison, William V.

    1997-01-01

    Illustrates an approach for providing networked access to licensed databases over the Internet by positioning the library between patron and vendor. Describes how the gateway systems and database connection servers work and discusses how treatment of security has evolved with the introduction of the World Wide Web. Outlines plans to reimplement…

  11. Web Database Development: Implications for Academic Publishing.

    ERIC Educational Resources Information Center

    Fernekes, Bob

    This paper discusses the preliminary planning, design, and development of a pilot project to create an Internet accessible database and search tool for locating and distributing company data and scholarly work. Team members established four project objectives: (1) to develop a Web accessible database and decision tool that creates Web pages on the…

  12. Ocean Drilling Program: Web Site Access Statistics

    Science.gov Websites

    Access statistics for the ODP/TAMU web site and the online Janus database, with separate statistics for JOIDES members; statistics are listed from October 1997 onward, and some are accessible only on www-odp.tamu.edu (spanning the end of ODP and the start of IODP).

  13. Windows on the brain: the emerging role of atlases and databases in neuroscience

    NASA Technical Reports Server (NTRS)

    Van Essen, David C.; VanEssen, D. C. (Principal Investigator)

    2002-01-01

    Brain atlases and associated databases have great potential as gateways for navigating, accessing, and visualizing a wide range of neuroscientific data. Recent progress towards realizing this potential includes the establishment of probabilistic atlases, surface-based atlases and associated databases, combined with improvements in visualization capabilities and internet access.

  14. Pan European Phenological database (PEP725): a single point of access for European data.

    PubMed

    Templ, Barbara; Koch, Elisabeth; Bolmgren, Kjell; Ungersböck, Markus; Paul, Anita; Scheifinger, Helfried; Rutishauser, This; Busto, Montserrat; Chmielewski, Frank-M; Hájková, Lenka; Hodzić, Sabina; Kaspar, Frank; Pietragalla, Barbara; Romero-Fresneda, Ramiro; Tolvanen, Anne; Vučetič, Višnja; Zimmermann, Kirsten; Zust, Ana

    2018-06-01

    The Pan European Phenology (PEP) project is a European infrastructure to promote and facilitate phenological research, education, and environmental monitoring. The main objective is to maintain and develop a Pan European Phenological database (PEP725) with open, unrestricted data access for science and education. PEP725 is the successor of the database developed through the COST action 725 "Establishing a European phenological data platform for climatological applications", working as a single access point for European-wide plant phenological data. So far, 32 European meteorological services and project partners from across Europe have joined and supplied data collected by volunteers from 1868 to the present for the PEP725 database. Most of the partners actively provide data on a regular basis. The database presently holds almost 12 million records, covering about 46 growing stages and 265 plant species (including cultivars), and can be accessed via http://www.pep725.eu/. Users of the PEP725 database have studied a diversity of topics ranging from climate change impacts, plant physiological questions, phenological modeling, and remote sensing of vegetation to ecosystem productivity.

  15. Pan European Phenological database (PEP725): a single point of access for European data

    NASA Astrophysics Data System (ADS)

    Templ, Barbara; Koch, Elisabeth; Bolmgren, Kjell; Ungersböck, Markus; Paul, Anita; Scheifinger, Helfried; Rutishauser, This; Busto, Montserrat; Chmielewski, Frank-M.; Hájková, Lenka; Hodzić, Sabina; Kaspar, Frank; Pietragalla, Barbara; Romero-Fresneda, Ramiro; Tolvanen, Anne; Vučetič, Višnja; Zimmermann, Kirsten; Zust, Ana

    2018-02-01

    The Pan European Phenology (PEP) project is a European infrastructure to promote and facilitate phenological research, education, and environmental monitoring. The main objective is to maintain and develop a Pan European Phenological database (PEP725) with open, unrestricted data access for science and education. PEP725 is the successor of the database developed through the COST action 725 "Establishing a European phenological data platform for climatological applications", working as a single access point for European-wide plant phenological data. So far, 32 European meteorological services and project partners from across Europe have joined and supplied data collected by volunteers from 1868 to the present for the PEP725 database. Most of the partners actively provide data on a regular basis. The database presently holds almost 12 million records, covering about 46 growing stages and 265 plant species (including cultivars), and can be accessed via http://www.pep725.eu/. Users of the PEP725 database have studied a diversity of topics ranging from climate change impacts, plant physiological questions, phenological modeling, and remote sensing of vegetation to ecosystem productivity.

  16. The CoFactor database: organic cofactors in enzyme catalysis.

    PubMed

    Fischer, Julia D; Holliday, Gemma L; Thornton, Janet M

    2010-10-01

    Organic enzyme cofactors are involved in many enzyme reactions. Therefore, the analysis of cofactors is crucial to gain a better understanding of enzyme catalysis. To aid this, we have created the CoFactor database. CoFactor provides a web interface to access hand-curated data extracted from the literature on organic enzyme cofactors in biocatalysis, as well as automatically collected information. CoFactor includes information on the conformational and solvent accessibility variation of the enzyme-bound cofactors, as well as mechanistic and structural information about the hosting enzymes. The database is publicly available and can be accessed at http://www.ebi.ac.uk/thornton-srv/databases/CoFactor.

  17. Implementation of Data Citations and Persistent Identifiers at the ORNL DAAC

    NASA Astrophysics Data System (ADS)

    Cook, R. B.; Santhana Vannan, S.; Devarakonda, Ranjeet; McMurry, B. F.; Kidder, J. H.; Shanafield, H. A.; Palanisamy, G.

    2013-12-01

    As research in Earth Science becomes more data intensive, a critical requirement of data archives is that data need to be easily discovered, accessed, and used. One approach to improving data discovery and access is through data citations coupled with Digital Object Identifiers (DOIs). Beginning in 1998, the Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) has issued data product citations that have been accepted and used in AGU and other peer-reviewed journals. Citation elements established by the ORNL DAAC are similar to those used for journal articles (authors, title, locating information, and version), and beginning in 2007 included a DOI that is persistent, actionable, specific, and complete. The citation approach used at the DAAC also allows referring to specific subsets of the data, by including within the citation the temporal and spatial portions of the data actually used. Citations allow others to find data and reproduce the results of the research article, and also to use those data to test new hypotheses, design new sample collections, or construct or evaluate models. In addition to enhancing discovery and access of the data used in a research article, the citation gives credit to data generators, data centers and their funders, and, through citation indices, makes it possible to determine the scientific impact of a data set. The ORNL DAAC has developed a database that links research articles to the ORNL DAAC data products they use. The database allows determination of who has used the data, in which journals, and how, in a manner analogous to author citation indices. The ORNL DAAC has been an initial contributor to the Thomson Reuters Data Citation Index. In addition, research data products deposited at the ORNL DAAC are linked using DOIs to relevant articles in Elsevier journals available on ScienceDirect. The ultimate goal of this implementation is that citations to data products become a routine part of the scientific process.

  18. Patient-Reported Outcome (PRO) Assessment in Clinical Trials: A Systematic Review of Guidance for Trial Protocol Writers

    PubMed Central

    Calvert, Melanie; Kyte, Derek; Duffy, Helen; Gheorghe, Adrian; Mercieca-Bebber, Rebecca; Ives, Jonathan; Draper, Heather; Brundage, Michael; Blazeby, Jane; King, Madeleine

    2014-01-01

    Background: Evidence suggests there are inconsistencies in patient-reported outcome (PRO) assessment and reporting in clinical trials, which may limit the use of these data to inform patient care. For trials with a PRO endpoint, routine inclusion of key PRO information in the protocol may help improve trial conduct and the reporting and appraisal of PRO results; however, it is currently unclear exactly what PRO-specific information should be included. The aim of this review was to summarize the current PRO-specific guidance for clinical trial protocol developers. Methods and Findings: We searched the MEDLINE, EMBASE, CINAHL and Cochrane Library databases (inception to February 2013) for PRO-specific guidance regarding trial protocol development. Further guidance documents were identified via Google, Google Scholar, requests to members of the UK Clinical Research Collaboration registered clinical trials units, and international experts. Two independent investigators undertook title/abstract screening, full text review and data extraction, with a third involved in the event of disagreement. 21,175 citations were screened and 54 met the inclusion criteria. Guidance documents were difficult to access: electronic database searches identified just 8 documents, with the remaining 46 sourced elsewhere (5 from citation tracking, 27 from hand searching, 7 from the grey literature review and 7 from experts). 162 unique PRO-specific protocol recommendations were extracted from the included documents. A further 10 PRO recommendations were identified relating to supporting trial documentation. Only 5/162 (3%) recommendations appeared in ≥50% of the guidance documents reviewed, indicating a lack of consistency. Conclusions: PRO-specific protocol guidelines were difficult to access, lacked consistency and may be challenging to implement in practice. There is a need to develop easily accessible, consensus-driven PRO protocol guidance. Guidance should be aimed at ensuring key PRO information is routinely included in appropriate trial protocols, in order to facilitate rigorous collection and reporting of PRO data, to effectively inform patient care. PMID:25333995

  19. Foot and Ankle Fellowship Websites: An Assessment of Accessibility and Quality.

    PubMed

    Hinds, Richard M; Danna, Natalie R; Capo, John T; Mroczek, Kenneth J

    2017-08-01

    The Internet has been reported to be the first informational resource for many fellowship applicants. The objective of this study was to assess the accessibility of orthopaedic foot and ankle fellowship websites and to evaluate the quality of information provided via program websites. The American Orthopaedic Foot and Ankle Society (AOFAS) and the Fellowship and Residency Electronic Interactive Database (FREIDA) fellowship databases were accessed to generate a comprehensive list of orthopaedic foot and ankle fellowship programs. The databases were reviewed for links to fellowship program websites and compared with program websites accessed from a Google search. Accessible fellowship websites were then analyzed for the quality of recruitment and educational content pertinent to fellowship applicants. Forty-seven orthopaedic foot and ankle fellowship programs were identified. The AOFAS database featured direct links to 7 (15%) fellowship websites with the independent Google search yielding direct links to 29 (62%) websites. No direct website links were provided in the FREIDA database. Thirty-six accessible websites were analyzed for content. Program websites featured a mean 44% (range = 5% to 75%) of the total assessed content. The most commonly presented recruitment and educational content was a program description (94%) and description of fellow operative experience (83%), respectively. There is substantial variability in the accessibility and quality of orthopaedic foot and ankle fellowship websites. Recognition of deficits in accessibility and content quality may assist foot and ankle fellowships in improving program information online. Level IV.

  20. DBAASP v.2: an enhanced database of structure and antimicrobial/cytotoxic activity of natural and synthetic peptides.

    PubMed

    Pirtskhalava, Malak; Gabrielian, Andrei; Cruz, Phillip; Griggs, Hannah L; Squires, R Burke; Hurt, Darrell E; Grigolava, Maia; Chubinidze, Mindia; Gogoladze, George; Vishnepolsky, Boris; Alekseyev, Vsevolod; Rosenthal, Alex; Tartakovsky, Michael

    2016-01-04

    Antimicrobial peptides (AMPs) are anti-infectives that may represent a novel and untapped class of biotherapeutics. Increasing interest in AMPs means that new peptides (natural and synthetic) are discovered faster than ever before. We describe herein a new version of the Database of Antimicrobial Activity and Structure of Peptides (DBAASPv.2, which is freely accessible at http://dbaasp.org). This iteration of the database reports chemical structures and empirically-determined activities (MICs, IC50, etc.) against more than 4200 specific target microbes for more than 2000 ribosomal, 80 non-ribosomal and 5700 synthetic peptides. Of these, the vast majority are monomeric, but nearly 200 of these peptides are found as homo- or heterodimers. More than 6100 of the peptides are linear, but about 515 are cyclic and more than 1300 have other intra-chain covalent bonds. More than half of the entries in the database were added after the resource was initially described, which reflects the recent sharp uptick of interest in AMPs. New features of DBAASPv.2 include: (i) user-friendly utilities and reporting functions, (ii) a 'Ranking Search' function to query the database by target species and return a ranked list of peptides with activity against that target and (iii) structural descriptions of the peptides derived from empirical data or calculated by molecular dynamics (MD) simulations. The three-dimensional structural data are critical components for understanding structure-activity relationships and for design of new antimicrobial drugs. We created more than 300 high-throughput MD simulations specifically for inclusion in DBAASP. The resulting structures are described in the database by novel trajectory analysis plots and movies. Another 200+ DBAASP entries have links to the Protein DataBank. All of the structures are easily visualized directly in the web browser. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  1. Italian Present-day Stress Indicators: IPSI Database

    NASA Astrophysics Data System (ADS)

    Mariucci, M. T.; Montone, P.

    2017-12-01

    In Italy, research on the contemporary stress field has been carried out at the Istituto Nazionale di Geofisica e Vulcanologia (INGV) since the 1990s, through local- and regional-scale studies. Over the years many data have been collected and analysed; they are now organized and available for easy end-use online. The IPSI (Italian Present-day Stress Indicators) database is the first geo-referenced repository of information on the crustal present-day stress field, maintained at INGV through a web application and database developed by Gabriele Tarabusi. Data consist of horizontal stress orientations analysed and compiled in a standardized format and quality-ranked for reliability and comparability on a global scale with other databases. Our first database release includes 855 data records updated to December 2015. Here we present an updated version that will be released in 2018, after entry of new earthquake data up to December 2017. The IPSI web site (http://ipsi.rm.ingv.it/) allows users to access data on a standard map viewer and to choose which data (category and/or quality) to plot. The main information for each element (type, quality, orientation) can be viewed by hovering over its symbol; full details appear on clicking the element. At the same time, basic information on the different data types, tectonic regime assignment, and quality-ranking method is available in pop-up windows. Data records can be downloaded in several common formats; moreover, it is possible to download a file directly usable with SHINE, a web-based application to interpolate stress orientations (http://shine.rm.ingv.it). IPSI is mainly conceived for those interested in studying the character of the Italian peninsula and its surroundings, although the Italian data are also part of the World Stress Map (http://www.world-stress-map.org/), as evidenced by many links that redirect to this database for more details on standard practices in this field.

  2. Pulling on the Long Tail with Flyover Country, a Mobile App to Expose, Visualize, Discover, and Explore Open Geoscience Data

    NASA Astrophysics Data System (ADS)

    Myrbo, A.; Loeffler, S.; Ai, S.; McEwan, R.

    2015-12-01

    The ultimate EarthCube product has been described as a mobile app that provides all of the known geoscience data for a geographic point or polygon, from the top of the atmosphere to the core of the Earth, throughout geologic time. The database queries are hidden from the user, and the data are visually rendered for easy recognition of patterns and associations. This fanciful vision is not so remote: NSF EarthCube and Geoinformatics support has already fostered major advances in database interoperability and harmonization of APIs; numerous "domain repositories," databases curated by subject matter experts, now provide a vast wealth of open, easily-accessible georeferenced data on rock and sediment chemistry and mineralogy, paleobiology, stratigraphy, rock magnetics, and more. New datasets accrue daily, including many harvested from the literature by automated means. None of these constitute big data - all are part of the long tail of geoscience, heterogeneous data consisting of relatively small numbers of measurements made by a large number of people, typically on physical samples. This vision of mobile data discovery requires a software package to cleverly expose these domain repositories' holdings; currently, queries mainly come from single investigators to single databases. The NSF-funded mobile app Flyover Country (FC; fc.umn.edu), developed for geoscience outreach and education, has been welcomed by data curators and cyberinfrastructure developers as a testing ground for their API services, data provision, and scalability. FC pulls maps and data within a bounding envelope and caches them for offline use; location-based services alert users to nearby points of interest (POI). The incorporation of data from multiple databases across domains requires parsimonious data requests and novel visualization techniques, especially for mapping of data with a time or stratigraphic depth component. The preservation of data provenance and authority is critical for researcher buy-in to all community databases, and further allows exploration and suggestions of collaborators, based upon geography and topical relevance.

  3. Establishing Community Learning and Information Centers (CLICs) in Underserved Malian Communities. Final Report

    ERIC Educational Resources Information Center

    Academy for Educational Development, 2005

    2005-01-01

    The purpose of the Community Learning and Information Center (CLIC) project was "to accelerate economic, social and political growth by providing residents in twelve underserved Malian communities with access to easily accessible development information and affordable access to information and communication technology (ICT), high-value…

  4. The development of a new database of gas emissions: MAGA, a collaborative web environment for collecting data

    NASA Astrophysics Data System (ADS)

    Cardellini, C.; Chiodini, G.; Frigeri, A.; Bagnato, E.; Aiuppa, A.; McCormick, B.

    2013-12-01

    The data on volcanic and non-volcanic gas emissions available online are, as of today, incomplete and, most importantly, fragmentary. Hence, there is a need for common frameworks to aggregate available data, in order to characterize and quantify the phenomena at various spatial and temporal scales. Building on the Googas experience, we are now extending its capability, particularly on the user side, by developing a new web environment for collecting and publishing data. We have started to create a new and detailed web database (MAGA: MApping GAs emissions) for deep carbon degassing in the Mediterranean area. This project is part of the Deep Earth Carbon Degassing (DECADE) research initiative, launched in 2012 by the Deep Carbon Observatory (DCO) to improve the global budget of endogenous carbon from volcanoes. The MAGA database is planned to complement and integrate the work in progress within DECADE on developing the CARD (Carbon Degassing) database. MAGA will allow researchers to insert data interactively and dynamically into a spatially referenced relational database management system, as well as to extract data. MAGA kicked off with the database set-up and a complete survey of publications on volcanic gas fluxes, including data on active crater degassing, diffuse soil degassing and fumaroles, both from dormant closed-conduit volcanoes (e.g., Vulcano, Phlegrean Fields, Santorini, Nisyros, Teide) and open-vent volcanoes (e.g., Etna, Stromboli) in the Mediterranean area and the Azores. For each geo-located gas emission site, the database holds images and descriptions of the site and of the emission type (e.g., diffuse emission, plume, fumarole), gas chemical-isotopic composition (when available), gas temperature and gas flux magnitudes. Gas sampling, analysis and flux measurement methods are also reported, together with references and contacts for researchers expert on the site. Data can be accessed over the network from a web interface or as a data-driven web service, where software clients can request data directly from the database. This way Geographical Information Systems (GIS) and virtual globes (e.g., Google Earth) can easily access the database, and data can be exchanged with other databases. In detail, the database now includes: i) more than 1000 flux measurements of volcanic plume degassing from the Etna (4 summit craters and bulk degassing) and Stromboli volcanoes, with time-averaged CO2 fluxes of ~18000 and 766 t/d, respectively; ii) data from ~30 sites of diffuse soil degassing from the Neapolitan volcanoes, Azores, Canaries, Etna, Stromboli, and Vulcano Island, with a wide range of CO2 fluxes (from less than 1 to 1500 t/d); and iii) several data on fumarolic emissions (~7 sites) with CO2 fluxes up to 1340 t/d (e.g., Stromboli). When available, time series of compositional data have been archived in the database (e.g., for the Campi Flegrei fumaroles). We believe the MAGA database is an important starting point for developing a large-scale, expandable database that aims to excite, inspire, and encourage participation among researchers. In addition, the ability to archive locations and qualitative information for gas emission sites not yet investigated could stimulate future research and will indicate the current uncertainty in global estimates of deep carbon fluxes.

  5. Erectile Dysfunction Herbs: A Natural Treatment for ED?

    MedlinePlus

    Cites supplement monographs from the Natural Medicines Comprehensive Database (http://www.naturaldatabase.com), including DHEA and L-arginine, accessed Nov. 1, 2015.

  6. ERMes: Open Source Simplicity for Your E-Resource Management

    ERIC Educational Resources Information Center

    Doering, William; Chilton, Galadriel

    2009-01-01

    ERMes, the latest version of electronic resource management system (ERM), is a relational database; content in different tables connects to, and works with, content in other tables. ERMes requires Access 2007 (Windows) or Access 2008 (Mac) to operate as the database utilizes functionality not available in previous versions of Microsoft Access. The…

  7. Databases and Electronic Resources - Betty Petersen Memorial Library

    Science.gov Websites

    Lists NOAA-wide and open access databases available on the NOAA Central Library website, including the Open Science Directory, which contains collections of open access journals (e.g., the Directory of Open Access Journals) and journals in special access programs such as Hinari.

  8. DECADE Web Portal: Integrating MaGa, EarthChem and GVP Will Further Our Knowledge on Earth Degassing

    NASA Astrophysics Data System (ADS)

    Cardellini, C.; Frigeri, A.; Lehnert, K. A.; Ash, J.; McCormick, B.; Chiodini, G.; Fischer, T. P.; Cottrell, E.

    2014-12-01

    The release of gases from the Earth's interior to the exosphere takes place in both volcanic and non-volcanic areas of the planet. Fully understanding this complex process requires the integration of geochemical, petrological and volcanological data. At present, the major online data repositories relevant to studies of degassing are neither linked nor interoperable. We are developing interoperability between three of them, which will support more powerful synoptic studies of degassing. The three data systems that will make their data accessible via the DECADE portal are: (1) the Smithsonian Institution's Global Volcanism Program database (GVP) of volcanic activity data, (2) the EarthChem databases for geochemical and geochronological data of rocks and melt inclusions, and (3) the MaGa database (Mapping Gas emissions), which contains compositional and flux data for gases released at volcanic and non-volcanic degassing sites. These databases are developed and maintained by institutions or groups of experts in a specific field, and data are archived in formats specific to each database. In the framework of the Deep Earth Carbon Degassing (DECADE) initiative of the Deep Carbon Observatory (DCO), we are developing a web portal that will provide a powerful search engine over these databases from a single entry point. The portal will return comprehensive multi-component datasets based on the search criteria selected by the user. For example, a single geographic or temporal search will return data on the compositions of emitted gases and erupted products, the age of the erupted products, and coincident activity at the volcano. The development of this level of capability for the DECADE portal requires complete synergy between these databases, including the availability of standards-based web services (WMS, WFS) at all data systems. Data and metadata can thus be extracted from each system without interfering with each database's local schema or being replicated to achieve integration at the DECADE web portal. The DECADE portal will enable new synoptic perspectives on the Earth degassing process. Other data systems can easily be plugged in using the existing framework. Our vision is to explore Earth degassing-related datasets over previously unexplored spatial and temporal ranges.
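
    Because each participating data system is meant to expose standards-based WMS/WFS services, a portal-style client can be sketched with the OWSLib library. The endpoint URL and layer name below are hypothetical placeholders, not actual DECADE or MaGa services.

        from owslib.wfs import WebFeatureService

        # Hypothetical WFS endpoint and layer name, for illustration only.
        wfs = WebFeatureService("http://example.org/geoserver/wfs", version="1.1.0")
        response = wfs.getfeature(
            typename=["maga:gas_emissions"],
            bbox=(12.0, 37.0, 16.0, 39.0),   # lon/lat box around Sicily
            outputFormat="application/json",
        )
        print(response.read()[:200])  # first bytes of the returned payload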

  9. A method to implement fine-grained access control for personal health records through standard relational database queries.

    PubMed

    Sujansky, Walter V; Faus, Sam A; Stone, Ethan; Brennan, Patricia Flatley

    2010-10-01

    Online personal health records (PHRs) enable patients to access, manage, and share certain of their own health information electronically. This capability creates the need for precise access-control mechanisms that restrict the sharing of data to that intended by the patient. The authors describe the design and implementation of an access-control mechanism for PHR repositories that is modeled on the eXtensible Access Control Markup Language (XACML) standard, but intended to reduce the cognitive and computational complexity of XACML. The authors implemented the mechanism entirely in a relational database system using ANSI-standard SQL statements. Based on a set of access-control rules encoded as relational table rows, the mechanism determines via a single SQL query whether a user who accesses patient data from a specific application is authorized to perform a requested operation on a specified data object. Testing of this query on a moderately large database has demonstrated execution times consistently below 100 ms. The authors include the details of the implementation, including algorithms, examples, and a test database, as Supplementary materials. Copyright © 2010 Elsevier Inc. All rights reserved.
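
    The core idea — rules stored as table rows, one query per authorization decision — can be illustrated with a toy schema. This Python/SQLite sketch is a deliberately simplified analogue; the column names and the deny-overrides convention are assumptions for the example, not the authors' actual design.

        import sqlite3

        # Rules live in a table; a single query decides each request.
        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE acl (
            grantee TEXT, operation TEXT, object_type TEXT, allow INTEGER)""")
        db.execute("INSERT INTO acl VALUES ('dr_smith', 'read', 'medication', 1)")

        def authorized(user: str, op: str, obj: str) -> bool:
            row = db.execute(
                """SELECT allow FROM acl
                   WHERE grantee = ? AND operation = ? AND object_type = ?
                   ORDER BY allow ASC LIMIT 1""",  # a deny row outranks an allow
                (user, op, obj)).fetchone()
            return bool(row and row[0])

        print(authorized("dr_smith", "read", "medication"))   # True
        print(authorized("dr_smith", "write", "medication"))  # False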

  10. Crystallography Open Database – an open-access collection of crystal structures

    PubMed Central

    Gražulis, Saulius; Chateigner, Daniel; Downs, Robert T.; Yokochi, A. F. T.; Quirós, Miguel; Lutterotti, Luca; Manakova, Elena; Butkus, Justas; Moeck, Peter; Le Bail, Armel

    2009-01-01

    The Crystallography Open Database (COD), which is a project that aims to gather all available inorganic, metal–organic and small organic molecule structural data in one database, is described. The database adopts an open-access model. The COD currently contains ∼80 000 entries in crystallographic information file format, with nearly full coverage of the International Union of Crystallography publications, and is growing in size and quality. PMID:22477773

  11. DynAstVO : a Europlanet database of NEA orbits

    NASA Astrophysics Data System (ADS)

    Desmars, J.; Thuillot, W.; Hestroffer, D.; David, P.; Le Sidaner, P.

    2017-09-01

    DynAstVO is a new orbital database developed within the Europlanet 2020 RI and Virtual European Solar and Planetary Access (VESPA) frameworks. The database is dedicated to near-Earth asteroids and provides orbit-related parameters: osculating elements, observational information, ephemerides through SPICE kernels and, in particular, orbit uncertainty and the associated covariance matrix. DynAstVO is updated daily by an automatic orbit-determination process based on the Minor Planet Electronic Circulars, which report new observations and newly discovered asteroids. The database conforms to the EPN-TAP environment and is accessible through VO protocols and via the VESPA web portal (http://vespa.obspm.fr/). A comparison with other classical databases such as Astorb, MPCORB, NEODyS and JPL is also presented.
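
    Because the database is exposed through EPN-TAP, it should be queryable with any generic TAP client. The snippet below sketches such a query using astroquery's TapPlus; the service URL and table name are illustrative guesses (EPN-TAP services conventionally expose a <schema>.epn_core table) and are not verified against the actual DynAstVO service.

        from astroquery.utils.tap.core import TapPlus

        # Hypothetical TAP endpoint and table name, for illustration only.
        service = TapPlus(url="http://vo.example.org/tap")
        job = service.launch_job(
            "SELECT TOP 5 target_name, epoch FROM dynastvo.epn_core")
        print(job.get_results())  # an astropy Table of matching records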

  12. SeedStor: A Germplasm Information Management System and Public Database

    PubMed Central

    Horler, RSP; Turner, AS; Fretter, P; Ambrose, M

    2018-01-01

    SeedStor (https://www.seedstor.ac.uk) acts as the publicly available database for the seed collections held by the Germplasm Resources Unit (GRU) based at the John Innes Centre, Norwich, UK. The GRU is a national capability supported by the Biotechnology and Biological Sciences Research Council (BBSRC). The GRU curates germplasm collections of a range of temperate cereal, legume and Brassica crops and their associated wild relatives, as well as precise genetic stocks, near-isogenic lines and mapping populations. With >35,000 accessions, the GRU forms part of the UK's plant conservation contribution to the Multilateral System (MLS) of the International Treaty for Plant Genetic Resources for Food and Agriculture (ITPGRFA) for wheat, barley, oat and pea. SeedStor is a fully searchable system that allows our various collections to be browsed, from species-by-species listings through to complex multipart phenotype-criteria-driven queries. The results from these searches can be downloaded for later analysis or used to order germplasm via our shopping cart. The user community for SeedStor comprises plant science researchers, plant breeders, specialist growers, hobby farmers, amateur gardeners, and educationalists. Furthermore, SeedStor is much more than a database; it has been developed to act internally as a Germplasm Information Management System that allows team members to track and process germplasm requests, determine regeneration priorities, handle cost recovery and Material Transfer Agreement paperwork, manage the seed store holdings, and easily report on a wide range of the aforementioned tasks. PMID:29228298

  13. Telescience Support Center Data System Software

    NASA Technical Reports Server (NTRS)

    Rahman, Hasan

    2010-01-01

    The Telescience Support Center (TSC) team has developed a database-driven, increment-specific Data Requirement Document (DRD) generation tool that automates much of the work required for generating and formatting the DRD. It creates a database to load the required changes to configure the TSC data system, thus eliminating a substantial amount of labor in database entry and formatting. The TSC database contains the TSC systems configuration along with the experimental data, including human physiological data that must be de-commutated in real time. The data for each experiment also must be cataloged and archived for future retrieval. TSC software provides tools and resources for ground operation and data distribution to remote users consisting of PIs (principal investigators), biomedical engineers, scientists, engineers, payload specialists, and computer scientists. Operations support is provided for computer systems access, detailed networking, and mathematical and computational problems of the International Space Station telemetry data. User training is provided for on-site staff, biomedical researchers, and other remote personnel in the usage of the space-bound services via the Internet, which enables significant resource savings for the physical facility along with the time savings versus traveling to NASA sites. The software used in support of the TSC could easily be adapted to other control center applications. This would include not only other NASA payload monitoring facilities, but also other types of control activities, such as monitoring and control of the electric grid, chemical or nuclear plant processes, air traffic control, and the like.

  14. Sequence and structural analyses of nuclear export signals in the NESdb database

    PubMed Central

    Xu, Darui; Farmer, Alicia; Collett, Garen; Grishin, Nick V.; Chook, Yuh Min

    2012-01-01

    We compiled >200 nuclear export signal (NES)–containing CRM1 cargoes in a database named NESdb. We analyzed the sequences and three-dimensional structures of natural, experimentally identified NESs and of false-positive NESs that were generated from the database in order to identify properties that might distinguish the two groups of sequences. Analyses of amino acid frequencies, sequence logos, and agreement with existing NES consensus sequences revealed strong preferences for the Φ1-X3-Φ2-X2-Φ3-X-Φ4 pattern and for negatively charged amino acids in the nonhydrophobic positions of experimentally identified NESs but not of false positives. Strong preferences against certain hydrophobic amino acids in the hydrophobic positions were also revealed. These findings led to a new and more precise NES consensus. More important, three-dimensional structures are now available for 68 NESs within 56 different cargo proteins. Analyses of these structures showed that experimentally identified NESs are more likely than the false positives to adopt α-helical conformations that transition to loops at their C-termini and more likely to be surface accessible within their protein domains or be present in disordered or unobserved parts of the structures. Such distinguishing features for real NESs might be useful in future NES prediction efforts. Finally, we also tested CRM1-binding of 40 NESs that were found in the 56 structures. We found that 16 of the NES peptides did not bind CRM1, hence illustrating how NESs are easily misidentified. PMID:22833565

  15. The Challenges of Searching, Finding, Reading, Understanding and Using Mars Mission Datasets for Science Analysis

    NASA Technical Reports Server (NTRS)

    Johnson, Jeffrey R.

    2006-01-01

    This viewgraph presentation reviews the problems that non-mission researchers have in accessing data to use in their analysis of Mars. The increasing complexity of Mars datasets results in custom software development by instrument teams that is often the only means to visualize and analyze the data. The proposed solutions are to continue efforts toward synergizing data from multiple missions; to make the data, software, and derived products available in standardized, easily accessible formats; to encourage release of "lite" versions of mission-related software prior to end-of-mission; and to process planetary image data systematically and in a coordinated way so that they are available in an easily accessed form. The recommendations of the Mars Environmental GIS Workshop are reviewed.

  16. Slushie World: An In-Class Access Database Tutorial

    ERIC Educational Resources Information Center

    Wynn, Donald E., Jr.; Pratt, Renée M. E.

    2015-01-01

    The Slushie World case study is designed to teach the basics of Microsoft Access and database management over a series of three 75-minute class sessions. Students are asked to build a basic database to track sales and inventory for a small business. Skills to be learned include table creation, data entry and importing, form and report design,…

  17. NATIONAL URBAN DATABASE AND ACCESS PORTAL TOOL (NUDAPT): FACILITATING ADVANCEMENTS IN URBAN METEOROLOGY AND CLIMATE MODELING WITH COMMUNITY-BASED URBAN DATABASES

    EPA Science Inventory

    We discuss the initial design and application of the National Urban Database and Access Portal Tool (NUDAPT). This new project is sponsored by the USEPA and involves collaborations and contributions from many groups from federal and state agencies, and from private and academic i...

  18. Space environment data storage and access: lessons learned and recommendations for the future

    NASA Astrophysics Data System (ADS)

    Evans, Hugh; Heynderickx, Daniel

    2012-07-01

    With the ever increasing volume of space environment data available at present and planned for the near future, the demands on data storage and access methods are increasing as well. In addition, continued access to historical, archived data remains crucial. On the basis of many years of experience, the authors identify the following issues as important for continued and efficient handling of datasets now and in the future: The huge data volumes currently or very soon available from a number of space missions will limit direct Internet download access to even relatively short epoch ranges of data. Therefore, data providers should establish or extend standardised data (post-)processing services so that only data query results need be downloaded. Although a single standardised data format will in all likelihood remain utopia, data providers should at least include extensive metadata with their data products, according to established standards and practices (e.g. ISTP, SPASE). Standardisation of (sets of) metadata greatly facilitates data mining and querying. The use of SQL database storage should be considered instead of, or in parallel with, classic storage of data files. The use of SQL does away with having to handle file parsing and processing, while at the same time standard access protocols can be used to (remotely) connect to such data repositories. Many data holdings are still lacking in extensive descriptions of data provenance (e.g. instrument description), content and format. Unfortunately, detailed data information is usually rejected by scientific and technical journals. Re-processing of historical archived datasets into modern formats, making them easily available and usable, is urgently required, as knowledge is being lost. A global data directory has still not been achieved; policy makers should enforce stricter rules for "broadcasting" dataset information.
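
    The storage recommendation above, SQL tables plus standardized metadata so that only query results travel over the network, can be illustrated with a small sketch. The schema below is invented for illustration, loosely inspired by SPASE-style provenance fields; it is not a published standard.

    ```python
    # Minimal sketch of SQL storage for space environment data with provenance
    # metadata; column names are illustrative assumptions, not a real schema.
    import sqlite3

    con = sqlite3.connect("space_env.db")
    con.executescript("""
    CREATE TABLE IF NOT EXISTS dataset (
        dataset_id TEXT PRIMARY KEY,
        instrument TEXT,   -- provenance: instrument description
        contact    TEXT,   -- provenance: data provider
        units      TEXT    -- physical units of 'value'
    );
    CREATE TABLE IF NOT EXISTS measurement (
        dataset_id TEXT REFERENCES dataset(dataset_id),
        epoch      TEXT,   -- ISO 8601 timestamp
        value      REAL
    );
    """)
    con.execute("INSERT OR IGNORE INTO dataset VALUES (?,?,?,?)",
                ("demo/flux", "demo detector", "provider@example.org", "cm-2 s-1"))
    con.execute("INSERT INTO measurement VALUES (?,?,?)",
                ("demo/flux", "2012-07-01T00:00:00", 42.0))
    con.commit()

    # Only the result of the epoch-range query crosses the wire, not whole files.
    rows = con.execute("""SELECT epoch, value FROM measurement
                          WHERE dataset_id=? AND epoch BETWEEN ? AND ?""",
                       ("demo/flux", "2012-01-01", "2012-12-31")).fetchall()
    print(rows)
    ```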

  19. MOSAIC: An organic geochemical and sedimentological database for marine surface sediments

    NASA Astrophysics Data System (ADS)

    Tavagna, Maria Luisa; Usman, Muhammed; De Avelar, Silvania; Eglinton, Timothy

    2015-04-01

    Modern ocean sediments serve as the interface between the biosphere and the geosphere, play a key role in biogeochemical cycles and provide a window on how contemporary processes are written into the sedimentary record. Research over past decades has resulted in a wealth of information on the content and composition of organic matter in marine sediments, with ever-more sophisticated techniques continuing to yield information in greater detail and at an accelerating pace. However, there has been no attempt to synthesize this wealth of information. We are establishing a new database that incorporates information relevant to local, regional and global-scale assessment of the content, source and fate of organic materials accumulating in contemporary marine sediments. In the MOSAIC (Modern Ocean Sediment Archive and Inventory of Carbon) database, particular emphasis is placed on molecular and isotopic information, coupled with relevant contextual information (e.g., sedimentological properties) relevant to elucidating factors that influence the efficiency and nature of organic matter burial. The main features of MOSAIC include: (i) emphasis on continental margin sediments as major loci of carbon burial, and as the interface between terrestrial and oceanic realms; (ii) bulk to molecular-level organic geochemical properties and parameters, including concentration and isotopic compositions; (iii) inclusion of extensive contextual data regarding the depositional setting, in particular with respect to sedimentological and redox characteristics. The ultimate goal is to create an open-access instrument, available on the web, to be utilized for research and education by the international community, who can both contribute to and interrogate the database. Submission will be accomplished by means of a pre-configured table available on the MOSAIC webpage. The information in the submitted tables will be checked and then imported, via Structured Query Language (SQL), into MOSAIC. MOSAIC is programmed with PostgreSQL, an open-source database management system. To locate the data geographically, each element/datum is associated with a latitude, longitude and depth, facilitating creation of a geospatial database that can be easily interfaced with a Geographic Information System (GIS). In order to make the database broadly accessible, an HTML/PHP-based website will ultimately be created and linked to the database. Consulting the website will allow for both data visualization and export of data in txt format for use with common software solutions (e.g. ODV, Excel, Matlab, Python, Word, PPT, Illustrator…). At this very early stage, MOSAIC contains approximately 10,000 analyses conducted on more than 1800 samples collected from over 1600 different geographical locations around the world. Through participation of the international research community, MOSAIC will rapidly develop into a rich archive and versatile tool for investigation of the distribution and composition of organic matter accumulating in seafloor sediments. The present contribution will outline the structure of MOSAIC, provide examples of data output, and solicit feedback on desirable features to be included in the database and associated software tools.

  20. CERES Search and Subset Tool

    Atmospheric Science Data Center

    2016-06-24

    ... data granules using a high resolution spatial metadata database and directly accessing the archived data granules. Subset results are ...

  1. Fault-tolerant symmetrically-private information retrieval

    NASA Astrophysics Data System (ADS)

    Wang, Tian-Yin; Cai, Xiao-Qiu; Zhang, Rui-Ling

    2016-08-01

    We propose two symmetrically-private information retrieval protocols based on quantum key distribution, which provide a good degree of database and user privacy while being flexible, loss-resistant, and easily generalized to a large database, as in precedent works. Furthermore, one protocol is robust to collective-dephasing noise, and the other is robust to collective-rotation noise.

  2. WaveNet: A Web-Based Metocean Data Access, Processing, and Analysis Tool. Part 3 - CDIP Database

    DTIC Science & Technology

    2014-06-01

    and Analysis Tool; Part 3 – CDIP Database by Zeki Demirbilek, Lihwa Lin, and Derek Wilson PURPOSE: This Coastal and Hydraulics Engineering ... Technical Note (CHETN) describes coupling of the Coastal Data Information Program (CDIP) database to WaveNet, the first module of MetOcnDat (Meteorological ... provides a step-by-step procedure to access, process, and analyze wave and wind data from the CDIP database. BACKGROUND: WaveNet addresses a basic

  3. MIPS: a database for protein sequences, homology data and yeast genome information.

    PubMed Central

    Mewes, H W; Albermann, K; Heumann, K; Liebl, S; Pfeiffer, F

    1997-01-01

    The MIPS group (Martinsried Institute for Protein Sequences) at the Max-Planck-Institute for Biochemistry, Martinsried near Munich, Germany, collects, processes and distributes protein sequence data within the framework of the tripartite association of the PIR-International Protein Sequence Database. MIPS contributes nearly 50% of the data input to the PIR-International Protein Sequence Database. The database is distributed on CD-ROM together with PATCHX, an exhaustive supplement of unique, unverified protein sequences from external sources compiled by MIPS. Through its WWW server (http://www.mips.biochem.mpg.de/) MIPS permits internet access to sequence databases, homology data and to yeast genome information. (i) Sequence similarity results from the FASTA program are stored in the FASTA database for all proteins from PIR-International and PATCHX. The database is dynamically maintained and permits instant access to FASTA results. (ii) Starting with FASTA database queries, proteins have been classified into families and superfamilies (PROT-FAM). (iii) The HPT (hashed position tree) data structure developed at MIPS is a new approach for rapid sequence and pattern searching. (iv) MIPS provides access to the sequence and annotation of the complete yeast genome, the functional classification of yeast genes (FunCat) and its graphical display, the 'Genome Browser'. A CD-ROM based on the JAVA programming language providing dynamic interactive access to the yeast genome and the related protein sequences has been compiled and is available on request. PMID:9016498

  4. NNDC Data Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuli, J.K.; Sonzogni,A.

    The National Nuclear Data Center (NNDC) has provided remote access to the nuclear physics databases it maintains and to other resources since 1986. With considerable innovation, access is now mostly through the Web, and the NNDC Web pages have been modernized to provide a consistent, state-of-the-art style. The major databases, the improved database services and the other resources currently available from the NNDC site at www.nndc.bnl.gov are summarized.

  5. Secure, web-accessible call rosters for academic radiology departments.

    PubMed

    Nguyen, A V; Tellis, W M; Avrin, D E

    2000-05-01

    Traditionally, radiology department call rosters have been posted on paper and bulletin boards. Frequently, changes to these lists are made by multiple people independently and often without synchronization, resulting in confusion among the house staff and technical staff as to who is on call and when. In addition, multiple and disparate copies exist in different sections of the department, and changes made are not propagated to all the schedules. To eliminate such difficulties, a paperless call scheduling application was developed. Our call scheduling program allows Java-enabled web access to a database by designated personnel from each radiology section who have privileges to make the necessary changes. Once a person makes a change, everyone accessing the database sees the modification. This eliminates the chaos resulting from people swapping shifts at the last minute and not having the time to record or broadcast the change. Furthermore, all changes to the database are logged. Users are given a log-in name and password and can only edit their own section; however, all personnel have access to all sections' schedules. Our applet was written in Java 2 using the latest technology in database access. We access our Interbase database through the DataExpress and DB Swing (Borland, Scotts Valley, CA) components. The result is secure access to the call rosters via the web. There are many advantages to web-enabled access, mainly the ability for people to make changes and have those changes recorded and propagated in a single virtual location, available to all who need to know.

  6. Development of an electronic radiation oncology patient information management system.

    PubMed

    Mandal, Abhijit; Asthana, Anupam Kumar; Aggarwal, Lalit Mohan

    2008-01-01

    The quality of patient care is critically influenced by the availability of accurate information and its efficient management. Radiation oncology involves many information components: for example, there may be information related to the patient (e.g., profile, disease site, stage, etc.), to people (radiation oncologists, radiological physicists, technologists, etc.), and to equipment (diagnostic, planning, treatment, etc.). These different data must be integrated, and a comprehensive information management system is essential for efficient storage and retrieval of the enormous amounts of information. A radiation therapy patient information system (RTPIS) has been developed using open source software. PHP and JavaScript were used as the programming languages, MySQL as the database, and HTML and CSS as the design tools. The system utilizes typical web browsing technology on a WAMP5 server. Any user having a unique user ID and password can access the RTPIS. The user ID and password are issued separately to each individual according to the person's job responsibilities and accountability, so that users are only able to access data related to their job responsibilities. With this system, authenticated users are able to use a simple web browsing procedure to gain instant access. All types of users in the radiation oncology department should find it user-friendly. Maintenance of the system will not require large human resources or space. The file storage and retrieval process would be satisfactory, unique, uniform, and easily accessible, with adequate data protection. There would be very little possibility of unauthorized handling with this system, and minimal risk of loss or accidental destruction of information.

  7. Nencki Genomics Database--Ensembl funcgen enhanced with intersections, user data and genome-wide TFBS motifs.

    PubMed

    Krystkowiak, Izabella; Lenart, Jakub; Debski, Konrad; Kuterba, Piotr; Petas, Michal; Kaminska, Bozena; Dabrowski, Michal

    2013-01-01

    We present the Nencki Genomics Database, which extends the functionality of Ensembl Regulatory Build (funcgen) for the three species: human, mouse and rat. The key enhancements over Ensembl funcgen include the following: (i) a user can add private data, analyze them alongside the public data and manage access rights; (ii) inside the database, we provide efficient procedures for computing intersections between regulatory features and for mapping them to the genes. To Ensembl funcgen-derived data, which include data from ENCODE, we add information on conserved non-coding (putative regulatory) sequences, and on genome-wide occurrence of transcription factor binding site motifs from the current versions of two major motif libraries, namely, Jaspar and Transfac. The intersections and mapping to the genes are pre-computed for the public data, and the result of any procedure run on the data added by the users is stored back into the database, thus incrementally increasing the body of pre-computed data. As the Ensembl funcgen schema for the rat is currently not populated, our database is the first database of regulatory features for this frequently used laboratory animal. The database is accessible without registration using the mysql client: mysql -h database.nencki-genomics.org -u public. Registration is required only to add or access private data. A WSDL webservice provides access to the database from any SOAP client, including the Taverna Workbench with a graphical user interface.

  8. Nencki Genomics Database—Ensembl funcgen enhanced with intersections, user data and genome-wide TFBS motifs

    PubMed Central

    Krystkowiak, Izabella; Lenart, Jakub; Debski, Konrad; Kuterba, Piotr; Petas, Michal; Kaminska, Bozena; Dabrowski, Michal

    2013-01-01

    We present the Nencki Genomics Database, which extends the functionality of Ensembl Regulatory Build (funcgen) for the three species: human, mouse and rat. The key enhancements over Ensembl funcgen include the following: (i) a user can add private data, analyze them alongside the public data and manage access rights; (ii) inside the database, we provide efficient procedures for computing intersections between regulatory features and for mapping them to the genes. To Ensembl funcgen-derived data, which include data from ENCODE, we add information on conserved non-coding (putative regulatory) sequences, and on genome-wide occurrence of transcription factor binding site motifs from the current versions of two major motif libraries, namely, Jaspar and Transfac. The intersections and mapping to the genes are pre-computed for the public data, and the result of any procedure run on the data added by the users is stored back into the database, thus incrementally increasing the body of pre-computed data. As the Ensembl funcgen schema for the rat is currently not populated, our database is the first database of regulatory features for this frequently used laboratory animal. The database is accessible without registration using the mysql client: mysql -h database.nencki-genomics.org -u public. Registration is required only to add or access private data. A WSDL webservice provides access to the database from any SOAP client, including the Taverna Workbench with a graphical user interface. Database URL: http://www.nencki-genomics.org. PMID:24089456
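
    The record above documents anonymous read access via the mysql client (mysql -h database.nencki-genomics.org -u public). The same connection can be scripted; this sketch uses mysql-connector-python and assumes the service accepts an empty password for the public user, as the command line in the abstract suggests.

    ```python
    # Scripted equivalent of the documented "mysql -h ... -u public" access.
    import mysql.connector

    con = mysql.connector.connect(host="database.nencki-genomics.org",
                                  user="public", password="")
    cur = con.cursor()
    cur.execute("SHOW DATABASES")   # list the schemas visible to the public user
    for (name,) in cur:
        print(name)
    con.close()
    ```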

  9. Improved Information Retrieval Performance on SQL Database Using Data Adapter

    NASA Astrophysics Data System (ADS)

    Husni, M.; Djanali, S.; Ciptaningtyas, H. T.; Wicaksana, I. G. N. A.

    2018-02-01

    The NoSQL databases, short for Not Only SQL, are increasingly being used as the number of big data applications grows. Most systems still use relational databases (RDBs), but as data volumes increase each year, systems increasingly handle big data with NoSQL databases to analyze and access data more quickly. NoSQL emerged as a result of the exponential growth of the internet and the development of web applications. The query syntax in a NoSQL database differs from that of an SQL database, normally requiring code changes in the application. A data adapter allows applications to keep their SQL query syntax unchanged: it provides methods that synchronize SQL databases with NoSQL databases, and an interface through which applications can run SQL queries. Hence, this research applied a data adapter system to synchronize data between a MySQL database and Apache HBase using a direct-access query approach, in which the system continues to accept application queries while the synchronization process is in progress. Tests of the data adapter showed that it can synchronize the SQL database (MySQL) with the NoSQL database (Apache HBase). The system's memory usage stayed in the range of 40% to 60%, while processor usage varied from 10% to 90%. In these tests, the NoSQL database also outperformed the SQL database.
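
    The core idea, applications keep issuing unchanged SQL while the adapter mirrors data into the NoSQL store, can be sketched as below. This is a toy illustration of the pattern, not the authors' implementation: it assumes a running MySQL server and an HBase Thrift gateway, and the table, column-family, and credential names are invented.

    ```python
    # Toy sketch of the data-adapter pattern: SQL in, MySQL + HBase behind it.
    # Assumes local MySQL and HBase Thrift services; names are illustrative.
    import mysql.connector
    import happybase

    class DataAdapter:
        def __init__(self):
            self.sql = mysql.connector.connect(host="localhost", user="app",
                                               password="secret", database="shop")
            self.nosql = happybase.Connection("localhost")  # HBase Thrift gateway

        def execute(self, query, params=()):
            """Run the application's unchanged SQL; mirror inserts into HBase."""
            cur = self.sql.cursor()
            cur.execute(query, params)
            if query.lstrip().upper().startswith("INSERT"):
                self.sql.commit()
                key, name = params               # naive mirroring of (id, name)
                self.nosql.table("items").put(
                    str(key).encode(), {b"d:name": str(name).encode()})
            return cur

    adapter = DataAdapter()
    adapter.execute("INSERT INTO items (id, name) VALUES (%s, %s)", (1, "bolt"))
    print(adapter.execute("SELECT id, name FROM items").fetchall())
    ```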

  10. Monitoring of small laboratory animal experiments by a designated web-based database.

    PubMed

    Frenzel, T; Grohmann, C; Schumacher, U; Krüll, A

    2015-10-01

    Multiple-parametric small animal experiments require, by their very nature, a sufficient number of animals which may need to be large to obtain statistically significant results.(1) For this reason database-related systems are required to collect the experimental data as well as to support the later (re-) analysis of the information gained during the experiments. In particular, the monitoring of animal welfare is simplified by the inclusion of warning signals (for instance, loss in body weight >20%). Digital patient charts have been developed for human patients but are usually not able to fulfill the specific needs of animal experimentation. To address this problem a unique web-based monitoring system using standard MySQL, PHP, and nginx has been created. PHP was used to create the HTML-based user interface and outputs in a variety of proprietary file formats, namely portable document format (PDF) or spreadsheet files. This article demonstrates its fundamental features and the easy and secure access it offers to the data from any place using a web browser. This information will help other researchers create their own individual databases in a similar way. The use of QR-codes plays an important role for stress-free use of the database. We demonstrate a way to easily identify all animals and samples and data collected during the experiments. Specific ways to record animal irradiations and chemotherapy applications are shown. This new analysis tool allows the effective and detailed analysis of huge amounts of data collected through small animal experiments. It supports proper statistical evaluation of the data and provides excellent retrievable data storage. © The Author(s) 2015.
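
    The QR-code labelling described in the record is straightforward to reproduce; the sketch below uses the qrcode package, with a made-up animal identifier and URL scheme for illustration.

    ```python
    # Generate a cage-card QR label; the ID and URL scheme are hypothetical.
    import qrcode

    animal_id = "EXP042-MOUSE-0137"
    img = qrcode.make(f"https://labdb.example.org/animal/{animal_id}")
    img.save(f"{animal_id}.png")   # print the image and fix it to the cage card
    ```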

  11. SpecDB: The AAVSO’s Public Repository for Spectra of Variable Stars

    NASA Astrophysics Data System (ADS)

    Kafka, Stella; Weaver, John; Silvis, George; Beck, Sara

    2018-01-01

    SpecDB is the American Association of Variable Star Observers (AAVSO) spectral database. Accessible to any astronomer with the capability to perform spectroscopy, SpecDB provides an unprecedented scientific opportunity for amateur and professional astronomers around the globe. Backed by the Variable Star Index, one of the most utilized variable star catalogs, SpecDB is expected to become one of the world's leading databases of its kind. Once verified by a team of expert spectroscopists, an observer can upload spectra of variable star targets easily and efficiently. Uploaded spectra can then be searched for, previewed, and downloaded for inclusion in publications. Close community development and involvement will ensure a user-friendly and versatile database, compatible with the needs of 21st century astrophysics. Observations of 1D spectra are submitted as FITS files. All spectra are required to be preprocessed with wavelength calibration and dark subtraction; bias and flat-field corrections are strongly recommended. First-time observers are required to submit a spectrum of a standard (non-variable) star to be checked for errors in technique or equipment. Regardless of user validation, FITS headers must include several value cards detailing the observation, as well as information regarding the observer, equipment, and observing site in accordance with existing AAVSO records. This enforces consistency and provides necessary details for follow-up analysis. Requirements are provided to users in a comprehensive guidebook and accompanying technical manual. Upon submission, FITS headers are automatically checked for errors and any anomalies are immediately fed back to the user. Successful candidates can then submit at will, including multiple simultaneous submissions. All published observations can be searched and interactively previewed. Community involvement will be enhanced by an associated forum where users can discuss observation techniques and suggest improvements to the database.
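
    A submission with the kinds of header cards described above can be drafted with astropy. The keywords below are common FITS conventions used for illustration; the actual required cards are defined in the SpecDB guidebook, not here.

    ```python
    # Sketch of a 1-D spectrum FITS file with observation metadata in the header.
    # Keyword choices are assumptions; consult the AAVSO guidebook for the rules.
    import numpy as np
    from astropy.io import fits

    flux = np.ones(1024, dtype=np.float32)          # placeholder calibrated flux
    hdu = fits.PrimaryHDU(flux)
    hdu.header["OBJECT"] = "SS Cyg"                 # target (example)
    hdu.header["DATE-OBS"] = "2018-01-01T03:14:15"  # observation start
    hdu.header["OBSERVER"] = "A. Observer"
    hdu.header["INSTRUME"] = "LISA spectrograph"    # equipment description
    hdu.header["SITELAT"] = 34.20                   # observing site (example keys)
    hdu.header["SITELONG"] = -118.17
    hdu.writeto("sscyg_20180101.fits", overwrite=True)
    ```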

  12. Space Images for NASA JPL Android Version

    NASA Technical Reports Server (NTRS)

    Nelson, Jon D.; Gutheinz, Sandy C.; Strom, Joshua R.; Arca, Jeremy M.; Perez, Martin; Boggs, Karen; Stanboli, Alice

    2013-01-01

    This software addresses the demand for easily accessible NASA JPL images and videos by providing a user-friendly and simple graphical user interface that can be run via the Android platform from any location where an Internet connection is available. This app is complementary to the iPhone version of the application. A backend infrastructure stores, tracks, and retrieves space images from the JPL Photojournal and Institutional Communications Web server, and catalogs the information into a streamlined rating infrastructure. This system consists of four distinguishing components: image repository, database, server-side logic, and Android mobile application. The image repository contains images from various JPL flight projects. The database stores the image information as well as the user rating. The server-side logic retrieves the image information from the database and categorizes each image for display. The Android mobile application is an interfacing delivery system that retrieves the image information from the server for each Android mobile device user. Also created is a reporting and tracking system for charting and monitoring usage. Unlike other Android mobile image applications, this system uses the latest emerging technologies to produce image listings based directly on user input. This allows for countless combinations of images returned. The backend infrastructure uses industry-standard coding and database methods, enabling future software improvement and technology updates. The flexibility of the system design framework permits multiple levels of display possibilities and provides integration capabilities. Unique features of the software include image/video retrieval from a selected set of categories, image Web links that can be shared among e-mail users, sharing to Facebook/Twitter, marking as user's favorites, and image metadata searchable for instant results.

  13. Supplement to the Carcinogenic Potency Database (CPDB): Results of animal bioassays published in the general literature through 1997 and by the National Toxicology Program in 1997-1998

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gold, Lois Swirsky; Manley, Neela B.; Slone, Thomas H.

    2005-04-08

    The Carcinogenic Potency Database (CPDB) is a systematic and unifying resource that standardizes the results of chronic, long-term animal cancer tests which have been conducted since the 1950s. The analyses include sufficient information on each experiment to permit research into many areas of carcinogenesis. Both qualitative and quantitative information is reported on positive and negative experiments that meet a set of inclusion criteria. A measure of carcinogenic potency, TD50 (daily dose rate in mg/kg body weight/day to induce tumors in half of test animals that would have remained tumor-free at zero dose), is estimated for each tissue-tumor combination reported. This article is the ninth publication of a chronological plot of the CPDB; it presents results on 560 experiments of 188 chemicals in mice, rats, and hamsters from 185 publications in the general literature updated through 1997, and from 15 Reports of the National Toxicology Program in 1997-1998. The test agents cover a wide variety of uses and chemical classes. The CPDB Web Site (http://potency.berkeley.edu/) presents the combined database of all published plots in a variety of formats as well as summary tables by chemical and by target organ, supplemental materials on dosing and survival, a detailed guide to using the plot formats, and documentation of methods and publications. The overall CPDB, including the results in this article, presents easily accessible results of 6153 experiments on 1485 chemicals from 1426 papers and 429 NCI/NTP (National Cancer Institute/National Toxicology Program) Technical Reports. A tab-separated format of the full CPDB for reading the data into spreadsheets or database applications is available on the Web Site.
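
    As a worked illustration of the TD50 definition quoted above: under a simple one-hit dose-response model p(d) = 1 - exp(-b*d), the daily dose at which half of otherwise tumor-free animals develop tumors is TD50 = ln(2)/b. The one-hit form is an assumption made here for illustration; the CPDB documentation describes the actual estimation methodology.

    ```python
    # TD50 under an assumed one-hit model p(d) = 1 - exp(-b*d): TD50 = ln(2)/b.
    import math

    b = 0.0021                # hypothetical potency slope, (mg/kg/day)^-1
    td50 = math.log(2) / b    # daily dose giving 50% lifetime tumor incidence
    print(f"TD50 = {td50:.0f} mg/kg body weight/day")   # -> 330
    ```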

  14. Advanced technologies for scalable ATLAS conditions database access on the grid

    NASA Astrophysics Data System (ADS)

    Basset, R.; Canali, L.; Dimitrov, G.; Girone, M.; Hawkings, R.; Nevski, P.; Valassi, A.; Vaniachine, A.; Viegas, F.; Walker, R.; Wong, A.

    2010-04-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic workflows, ATLAS database scalability tests provided feedback for Conditions Db software optimization and allowed precise determination of required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing, characterized by peak loads that can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This was achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of the database stress tests is to detect the scalability limits of the hardware deployed at the Tier-1 sites, so that server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions Db data access is limited by disk I/O throughput. An unacceptable side effect of disk I/O saturation is degradation of the WLCG 3D Services that update Conditions Db data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach for database peak-load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library first sends a pilot query to the database server.
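
    The pilot-query idea can be sketched as follows: probe the server with a trivial query first and back off while it looks overloaded, submitting the expensive conditions query only once the probe responds quickly. The probe statement, latency threshold, and back-off schedule below are invented for illustration (sqlite3 stands in for the production database client).

    ```python
    # Sketch of peak-load avoidance via a cheap pilot query before the real one.
    import time
    import sqlite3   # stands in for the production DB client

    def query_with_pilot(con, real_sql, max_wait_s=60.0):
        delay, waited = 1.0, 0.0
        while waited < max_wait_s:
            t0 = time.monotonic()
            con.execute("SELECT 1")              # pilot query: trivial probe
            if time.monotonic() - t0 < 0.050:    # responsive -> run the real query
                return con.execute(real_sql).fetchall()
            time.sleep(delay)                    # overloaded -> exponential back-off
            waited += delay
            delay = min(delay * 2, 16.0)
        raise TimeoutError("server stayed overloaded; try another replica")

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE conditions (run INTEGER, payload TEXT)")
    print(query_with_pilot(con, "SELECT * FROM conditions"))
    ```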

  15. Are There Any Natural Remedies That Reduce Chronic Fatigue Associated with Chronic Fatigue Syndrome?

    MedlinePlus

    ... management of chronic fatigue syndrome. Natural Medicines Comprehensive Database. http://www.naturaldatabase.com. Accessed Feb. 23, 2015. Magnesium. Natural Medicines Comprehensive Database. http://www.naturaldatabase.com. Accessed Feb. 24, 2015. ...

  16. Pan European Phenological database (PEP725): a single point of access for European data

    NASA Astrophysics Data System (ADS)

    Templ, Barbara; Koch, Elisabeth; Bolmgren, Kjell; Ungersböck, Markus; Paul, Anita; Scheifinger, Helfried; Rutishauser, This; Busto, Montserrat; Chmielewski, Frank-M.; Hájková, Lenka; Hodzić, Sabina; Kaspar, Frank; Pietragalla, Barbara; Romero-Fresneda, Ramiro; Tolvanen, Anne; Vučetič, Višnja; Zimmermann, Kirsten; Zust, Ana

    2018-06-01

    The Pan European Phenology (PEP) project is a European infrastructure to promote and facilitate phenological research, education, and environmental monitoring. The main objective is to maintain and develop a Pan European Phenological database (PEP725) with open, unrestricted data access for science and education. PEP725 is the successor of the database developed through the COST action 725, "Establishing a European phenological data platform for climatological applications", and works as a single access point for European-wide plant phenological data. So far, 32 European meteorological services and project partners from across Europe have joined and supplied data collected by volunteers from 1868 to the present for the PEP725 database. Most of the partners actively provide data on a regular basis. The database presently holds almost 12 million records covering 46 growing stages and 265 plant species (including cultivars), and can be accessed via http://www.pep725.eu/ . Users of the PEP725 database have studied a diversity of topics ranging from climate change impact, plant physiological questions, phenological modeling, and remote sensing of vegetation to ecosystem productivity.

  17. SkyDOT: a publicly accessible variability database, containing multiple sky surveys and real-time data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Starr, D. L.; Wozniak, P. R.; Vestrand, W. T.

    2002-01-01

    SkyDOT (Sky Database for Objects in Time-Domain) is a Virtual Observatory currently comprised of data from the RAPTOR, ROTSE I, and OGLE II survey projects. This makes it a very large time domain database. In addition, the RAPTOR project provides SkyDOT with real-time variability data as well as stereoscopic information. With its web interface, we believe SkyDOT will be a very useful tool for both astronomers and the public. Our main task has been to construct an efficient relational database containing all existing data, while handling a real-time inflow of data. We also provide a useful web interface allowing easy access to both astronomers and the public. Initially, this server will allow common searches, specific queries, and access to light curves. In the future we will include machine learning classification tools and access to spectral information.

  18. Intelligent Access to Sequence and Structure Databases (IASSD) - an interface for accessing information from major web databases.

    PubMed

    Ganguli, Sayak; Gupta, Manoj Kumar; Basu, Protip; Banik, Rahul; Singh, Pankaj Kumar; Vishal, Vineet; Bera, Abhisek Ranjan; Chakraborty, Hirak Jyoti; Das, Sasti Gopal

    2014-01-01

    With the advent of the age of big data and advances in high-throughput technology, accessing data has become one of the most important steps in the entire knowledge discovery process. Most users are not able to decipher the query result that is obtained when non-specific keywords or a combination of keywords are used. Intelligent Access to Sequence and Structure Databases (IASSD) is a desktop application for the Windows operating system. It is written in Java and utilizes the Web Service Description Language (WSDL) files and Jar files of the E-utilities of various databases, such as the National Centre for Biotechnology Information (NCBI) and the Protein Data Bank (PDB). In addition, IASSD allows the user to view protein structures using a Jmol application that supports conditional editing. The Jar file is freely available through e-mail from the corresponding author.

  19. Migration of legacy mumps applications to relational database servers.

    PubMed

    O'Kane, K C

    2001-07-01

    An extended implementation of the Mumps language is described that facilitates vendor-neutral migration of legacy Mumps applications to SQL-based relational database servers. Implemented as a compiler, this system translates Mumps programs to operating-system-independent, standard C code for subsequent compilation to fully stand-alone, binary executables. Added built-in functions and support modules extend the native hierarchical Mumps database with access to industry-standard, networked, relational database management servers (RDBMS), thus freeing Mumps applications from dependence upon vendor-specific, proprietary, unstandardized database models. Unlike Mumps systems that have added captive, proprietary RDBMS access, the programs generated by this development environment can be used with any RDBMS system that supports common network access protocols. Additional features include a built-in web server interface and the ability to interoperate directly with programs and functions written in other languages.

  20. Correspondence: World Wide Web access to the British Universities Human Embryo Database

    PubMed Central

    AITON, JAMES F.; MCDONOUGH, ARIANA; MCLACHLAN, JOHN C.; SMART, STEVEN D.; WHITEN, SUSAN C.

    1997-01-01

    The British Universities Human Embryo Database has been created by merging information from the Walmsley Collection of Human Embryos at the School of Biological and Medical Sciences, University of St Andrews and from the Boyd Collection of Human Embryos at the Department of Anatomy, University of Cambridge. The database has been made available electronically on the Internet and World Wide Web browsers can be used to implement interactive access to the information stored in the British Universities Human Embryo Database. The database can, therefore, be accessed and searched from remote sites and specific embryos can be identified in terms of their location, age, developmental stage, plane of section, staining technique, and other parameters. It is intended to add information from other similar collections in the UK as it becomes available. PMID:9034891

  1. DNAproDB: an interactive tool for structural analysis of DNA–protein complexes

    PubMed Central

    Sagendorf, Jared M.

    2017-01-01

    Abstract Many biological processes are mediated by complex interactions between DNA and proteins. Transcription factors, various polymerases, nucleases and histones recognize and bind DNA with different levels of binding specificity. To understand the physical mechanisms that allow proteins to recognize DNA and achieve their biological functions, it is important to analyze structures of DNA–protein complexes in detail. DNAproDB is a web-based interactive tool designed to help researchers study these complexes. DNAproDB provides an automated structure-processing pipeline that extracts structural features from DNA–protein complexes. The extracted features are organized in structured data files, which are easily parsed with any programming language or viewed in a browser. We processed a large number of DNA–protein complexes retrieved from the Protein Data Bank and created the DNAproDB database to store this data. Users can search the database by combining features of the DNA, protein or DNA–protein interactions at the interface. Additionally, users can upload their own structures for processing privately and securely. DNAproDB provides several interactive and customizable tools for creating visualizations of the DNA–protein interface at different levels of abstraction that can be exported as high quality figures. All functionality is documented and freely accessible at http://dnaprodb.usc.edu. PMID:28431131
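
    Since the processed output is described as structured data files "easily parsed with any programming language", reading one reduces to ordinary JSON handling. The file name and field names in this sketch are hypothetical; the real schema is documented at http://dnaprodb.usc.edu.

    ```python
    # Hedged sketch: parsing a DNAproDB-style structured output file.
    # File and key names are invented; see the DNAproDB docs for the real schema.
    import json

    with open("1abc_processed.json") as fh:       # hypothetical output file
        record = json.load(fh)

    # e.g. list which protein residues contact which DNA nucleotides
    for contact in record.get("interface_contacts", []):   # hypothetical key
        print(contact.get("residue"), "--", contact.get("nucleotide"))
    ```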

  2. Protein structural similarity search by Ramachandran codes

    PubMed Central

    Lo, Wei-Cheng; Huang, Po-Jung; Chang, Chih-Hung; Lyu, Ping-Chiang

    2007-01-01

    Background Protein structural data has increased exponentially, such that fast and accurate structure similarity search tools are necessary. To improve search speed, several methods have been designed to reduce three-dimensional protein structures to one-dimensional text strings that are then analyzed by traditional sequence alignment methods; however, accuracy is usually sacrificed and the speed is still unable to match sequence similarity search tools. Here, we aimed to improve the linear encoding methodology and develop efficient search tools that can rapidly retrieve structural homologs from large protein databases. Results We propose a new linear encoding method, SARST (Structural similarity search Aided by Ramachandran Sequential Transformation). SARST transforms protein structures into text strings through a Ramachandran map organized by nearest-neighbor clustering and uses a regenerative approach to produce substitution matrices. Classical sequence similarity search methods can then be applied to structural similarity search. Its accuracy is similar to Combinatorial Extension (CE) and it works over 243,000 times faster, searching 34,000 proteins in 0.34 sec with a 3.2-GHz CPU. SARST provides statistically meaningful expectation values to assess the retrieved information. It has been implemented as a web service and a stand-alone Java program that runs on many different platforms. Conclusion As a database search method, SARST can rapidly distinguish high from low similarities and efficiently retrieve homologous structures. It demonstrates that the easily accessible linear encoding methodology has the potential to serve as a foundation for efficient protein structural similarity search tools. Such search tools should be applicable to automated, high-throughput functional annotation or prediction for the ever-increasing number of published protein structures in this post-genomic era. PMID:17716377
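
    The encoding step can be illustrated with a toy version: map each residue's (phi, psi) angles to the letter of the nearest cluster center on the Ramachandran map, so a 3-D backbone becomes a searchable string. The cluster centers below are invented placeholders, not SARST's actual nearest-neighbor-clustered codebook.

    ```python
    # Toy Ramachandran encoder in the spirit of SARST; centers are placeholders.
    CENTERS = {                # letter -> representative (phi, psi) in degrees
        "A": (-60.0, -45.0),   # roughly alpha-helical region (illustrative)
        "B": (-120.0, 130.0),  # roughly beta-sheet region (illustrative)
        "L": (60.0, 45.0),     # left-handed helical region (illustrative)
    }

    def encode(angles):
        """Encode a list of (phi, psi) pairs as a Ramachandran-code string."""
        def dist(a, b):        # squared angular distance, wrapping at +/-180
            return sum(min(abs(x - y), 360 - abs(x - y)) ** 2
                       for x, y in zip(a, b))
        return "".join(min(CENTERS, key=lambda c: dist(CENTERS[c], ab))
                       for ab in angles)

    print(encode([(-57, -47), (-119, 127), (63, 41)]))   # -> "ABL"
    ```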

  3. rEHR: An R package for manipulating and analysing Electronic Health Record data.

    PubMed

    Springate, David A; Parisi, Rosa; Olier, Ivan; Reeves, David; Kontopantelis, Evangelos

    2017-01-01

    Research with structured Electronic Health Records (EHRs) is expanding as data become more accessible, analytic methods advance, and the scientific validity of such studies is increasingly accepted. However, data science methodology to enable the rapid searching/extraction, cleaning and analysis of these large, often complex, datasets is less well developed. In addition, commonly used software is inadequate, resulting in bottlenecks in research workflows and in obstacles to increased transparency and reproducibility of the research. Preparing a research-ready dataset from EHRs is a complex and time-consuming task requiring substantial data science skills, even for simple designs. In addition, certain aspects of the workflow are computationally intensive, for example extraction of longitudinal data and matching controls to a large cohort, which may take days or even weeks to run using standard software. The rEHR package simplifies and accelerates the process of extracting ready-for-analysis datasets from EHR databases. It has a simple import function to a database backend that greatly accelerates data access times. A set of generic query functions allows users to extract data efficiently without needing detailed knowledge of SQL queries. Longitudinal data extractions can also be made in a single command, making use of parallel processing. The package also contains functions for cutting data by time-varying covariates, matching controls to cases, unit conversion and construction of clinical code lists. There are also functions to synthesise dummy EHR data. The package has been tested with one of the largest primary care EHRs, the Clinical Practice Research Datalink (CPRD), but allows for a common interface to other EHRs. This simplified and accelerated workflow for EHR data extraction results in simpler, cleaner scripts that are more easily debugged, shared and reproduced.

  4. 78 FR 48177 - Submission for OMB Review; 30-day Comment Request: National Institute of Mental Health Data...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-07

    ... Access Request and Use Certification (previously National Database for Autism Research Data Access... approval for use of the National Database for Autism Research (NDAR) Data Use Certification (DUC) Form...

  5. Interactive, Automated Management of Icing Data

    NASA Technical Reports Server (NTRS)

    Levinson, Laurie H.

    2009-01-01

    IceVal DatAssistant is software that provides an automated, interactive solution for the management of data from research on aircraft icing. This software consists primarily of (1) a relational database component used to store ice shape and airfoil coordinates and associated data on operational and environmental test conditions and (2) a graphically oriented database access utility used to upload, download, process, and/or display data selected by the user. The relational database component consists of a Microsoft Access 2003 database file with nine tables containing data of different types. Included in the database are the data for all publicly releasable ice tracings with complete and verifiable test conditions from experiments conducted to date in the Glenn Research Center Icing Research Tunnel. Ice shapes from computational simulations with the corresponding conditions, performed utilizing the latest version of the LEWICE ice shape prediction code, are likewise included and are linked to the equivalent experimental runs. The database access component includes ten Microsoft Visual Basic 6.0 (VB) form modules and three VB support modules. Together, these modules enable uploading, downloading, processing, and display of all data contained in the database. This component also affords the capability to perform various database maintenance functions, for example, compacting the database or creating a new, fully initialized but empty database file.

  6. The "GeneTrustee": a universal identification system that ensures privacy and confidentiality for human genetic databases.

    PubMed

    Burnett, Leslie; Barlow-Stewart, Kris; Proos, Anné L; Aizenberg, Harry

    2003-05-01

    This article describes a generic model for access to samples and information in human genetic databases. The model utilises a "GeneTrustee", a third-party intermediary independent of the subjects and of the investigators or database custodians. The GeneTrustee model has been implemented successfully in various community genetics screening programs and has facilitated research access to genetic databases while protecting the privacy and confidentiality of research subjects. The GeneTrustee model could also be applied to various types of non-conventional genetic databases, including neonatal screening Guthrie card collections, and to forensic DNA samples.

  7. Open Geoscience Database

    NASA Astrophysics Data System (ADS)

    Bashev, A.

    2012-04-01

    Currently there is an enormous number of geoscience databases. Unfortunately, the only users of most of these databases are their creators. There are several reasons for this: incompatibility, the specificity of tasks and objects, and so on. However, the main obstacles to wide usage of geoscience databases are their complexity for developers and their complication for users. Complex architecture leads to high costs that block public access, and complication prevents users from understanding when and how to use the database. Only databases associated with GoogleMaps avoid these drawbacks, but they can hardly be called "geoscience" databases. Nevertheless, an open and simple geoscience database is necessary, at least for educational purposes (see our abstract for ESSI20/EOS12). We developed a database and a web interface to work with it, now accessible at maps.sch192.ru. In this database, a result is the value of a parameter (no matter which) at a station with a certain position, associated with metadata: the date when the result was obtained, the type of station (lake, soil, etc.), and the contributor that sent the result. Each contributor has a profile, which allows the reliability of the data to be estimated. Results can be represented on a GoogleMaps space image as a point at a certain position, coloured according to the value of the parameter. There are default colour scales, and each registered user can create their own. Results can also be exported to a *.csv file. For both types of representation, one can select data by date, object type, parameter type, area, and contributor. The data are uploaded in *.csv format: Name of the station; Latitude (dd.dddddd); Longitude (ddd.dddddd); Station type; Parameter type; Parameter value; Date (yyyy-mm-dd). The contributor is identified at login. This is the minimal set of features required to connect the value of a parameter with a position and see the results. All complicated data treatment can be conducted in other programs after extracting the filtered data into a *.csv file. This makes the database understandable for non-experts. The database employs an open data format (*.csv) and widespread tools: PHP as the programming language, MySQL as the database management system, JavaScript for interaction with GoogleMaps, and jQuery UI for the user interface. The database is multilingual: association tables connect translations with elements of the database. In total, development required about 150 hours. The database still has several open problems. The main one is the reliability of the data: properly estimating reliability would require an expert system, but elaborating such a system would take more resources than the database itself. The second is stream selection: how to select stations that are connected with each other (for example, belonging to one water stream) and indicate their sequence. Some problems have already been solved. For example, the "same station" problem (sometimes the distance between stations is smaller than the positional error): when a new station is added, the application automatically finds existing stations near that place. The problem of object and parameter types (how to regard "EC" and "electrical conductivity" as the same parameter) has also been solved using associative tables. Currently the interface is available in English and Russian, but it can easily be translated into other languages.
If you would like to see the interface in your language, just contact us and we will send you the list of terms and phrases for translation. The main advantage of the database is that it is totally open: everybody can view and extract data from the database and use it for non-commercial purposes at no charge, and registered users can contribute to the database without payment. We hope that it will be widely used, first of all for educational purposes, though professional scientists may find it useful as well.
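
    For concreteness, producing an upload file in the field order listed above is a one-liner with the csv module; the semicolon delimiter and the absence of a header line are assumptions based on the format description in the abstract.

    ```python
    # Sketch of writing one result row in the documented upload field order.
    # Delimiter and no-header convention are assumptions from the description.
    import csv

    row = ["Lake Ladoga st.3",         # Name of the station
           "60.841211",                # Latitude (dd.dddddd)
           "031.522394",               # Longitude (ddd.dddddd)
           "lake",                     # Station type
           "electrical conductivity",  # Parameter type
           "112",                      # Parameter value
           "2011-07-14"]               # Date (yyyy-mm-dd)

    with open("upload.csv", "w", newline="") as fh:
        csv.writer(fh, delimiter=";").writerow(row)
    ```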

  8. An Introduction to Database Structure and Database Machines.

    ERIC Educational Resources Information Center

    Detweiler, Karen

    1984-01-01

    Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…

  9. Querying XML Data with SPARQL

    NASA Astrophysics Data System (ADS)

    Bikakis, Nikos; Gioldasis, Nektarios; Tsinaraki, Chrisa; Christodoulakis, Stavros

    SPARQL is today the standard access language for Semantic Web data. In recent years XML databases have also acquired industrial importance due to the widespread applicability of XML on the Web. In this paper we present a framework that bridges the heterogeneity gap and creates an interoperable environment where SPARQL queries are used to access XML databases. Our approach assumes that fairly generic mappings between ontology constructs and XML Schema constructs have been automatically derived or manually specified. The mappings are used to automatically translate SPARQL queries to semantically equivalent XQuery queries, which are used to access the XML databases. We present the algorithms and the implementation of the SPARQL2XQuery framework, which is used for answering SPARQL queries over XML databases.
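
    A toy version of the translation step: given mappings from ontology terms to XML Schema paths, a SPARQL basic graph pattern can be rewritten as a semantically equivalent XQuery. The mapping and query below are invented for illustration; the actual framework derives such mappings automatically or takes manually specified ones.

    ```python
    # Toy illustration of SPARQL-to-XQuery rewriting via ontology/XML mappings.
    MAPPING = {
        "ex:Book": "/library/book",   # ontology class    -> element path
        "ex:title": "title/text()",   # ontology property -> relative path
    }

    def triple_to_xquery(subject_class, prop):
        scope = MAPPING[subject_class]
        field = MAPPING[prop]
        return f"for $x in doc('data.xml'){scope} return $x/{field}"

    # SPARQL:  SELECT ?t WHERE { ?b a ex:Book ; ex:title ?t }
    print(triple_to_xquery("ex:Book", "ex:title"))
    # -> for $x in doc('data.xml')/library/book return $x/title/text()
    ```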

  10. Second-Tier Database for Ecosystem Focus, 2003-2004 Annual Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    University of Washington, Columbia Basin Research, DART Project Staff,

    2004-12-01

    The Second-Tier Database for Ecosystem Focus (Contract 00004124) provides direct and timely public access to Columbia Basin environmental, operational, fishery and riverine data resources for federal, state, public and private entities essential to sound operational and resource management. The database also assists with juvenile and adult mainstem passage modeling supporting federal decisions affecting the operation of the FCRPS. The Second-Tier Database known as Data Access in Real Time (DART) integrates public data for effective access, consideration and application. DART also provides analysis tools and performance measures for evaluating the condition of Columbia Basin salmonid stocks. These services are critical to BPA's implementation of its fish and wildlife responsibilities under the Endangered Species Act (ESA).

  11. Forest Vegetation Simulator translocation techniques with the Bureau of Land Management's Forest Vegetation Information system database

    Treesearch

    Timothy A. Bottomley

    2008-01-01

    The BLM uses a database, called the Forest Vegetation Information System (FORVIS), to store, retrieve, and analyze forest resource information on a majority of their forested lands. FORVIS also has the capability of easily transferring appropriate data electronically into Forest Vegetation Simulator (FVS) for simulation runs. Only minor additional data inputs or...

  12. The importance of data quality for generating reliable distribution models for rare, elusive, and cryptic species

    Treesearch

    Keith B. Aubry; Catherine M. Raley; Kevin S. McKelvey

    2017-01-01

The availability of spatially referenced environmental data and species occurrence records in online databases enables practitioners to easily generate species distribution models (SDMs) for a broad array of taxa. Such databases often include occurrence records of unknown reliability, yet little information is available on the influence of data quality on SDMs generated...

  13. Generation of standard gas mixtures of halogenated, aliphatic, and aromatic compounds and prediction of the individual output rates based on molecular formula and boiling point.

    PubMed

    Thorenz, Ute R; Kundel, Michael; Müller, Lars; Hoffmann, Thorsten

    2012-11-01

In this work, we describe a simple diffusion capillary device for the generation of various organic test gases. Using a set of basic equations, the output rate of the test gas devices can easily be predicted based only on the molecular formula and the boiling point of the compounds of interest. Since these parameters are easily accessible for a large number of potential analytes, even for those compounds which are typically not listed in physico-chemical handbooks or internet databases, the adjustment of the test gas source to the concentration range required for the individual analytical application is straightforward. The agreement of the predicted and measured values is shown to be valid for different groups of chemicals, such as halocarbons, alkanes, alkenes, and aromatic compounds, and for different dimensions of the diffusion capillaries. The limits of the predictability of the output rates are explored and observed to result in an underprediction of the output rates when very thin capillaries are used. It is demonstrated that pressure variations are responsible for the observed deviation of the output rates. To overcome the influence of pressure variations and at the same time to establish a suitable test gas source for highly volatile compounds, the usability of permeation sources is also explored, for example for the generation of molecular bromine test gases.
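
    The abstract does not reproduce the authors' "basic equations", so the sketch below is a stand-in built from the classic diffusion-tube emission equation plus a Trouton-rule vapor-pressure estimate from the boiling point; it uses the same inputs (molar mass, boiling point) but is an assumption, not the paper's exact formulation.

    ```python
    # Classic diffusion-tube output-rate estimate (an assumption; the
    # paper's own equations may differ):
    #   Q = D * M * P * A / (R * T * L) * ln(P / (P - p))
    # with analyte vapor pressure p estimated from the boiling point Tb via
    # Clausius-Clapeyron and Trouton's rule (dHvap ~ 88 J mol^-1 K^-1 * Tb).
    import math

    R = 8.314  # gas constant, J mol^-1 K^-1

    def vapor_pressure(T, Tb, P0=101325.0):
        """Estimate vapor pressure (Pa) at temperature T from boiling point Tb (K)."""
        dHvap = 88.0 * Tb  # Trouton's rule, J/mol
        return P0 * math.exp(-dHvap / R * (1.0 / T - 1.0 / Tb))

    def output_rate(D, M, T, Tb, A, L, P=101325.0):
        """Mass output rate (g/s): D in m^2/s, M in g/mol, A in m^2, L in m."""
        p = vapor_pressure(T, Tb)
        return D * M * P * A / (R * T * L) * math.log(P / (P - p))

    # Example: a benzene-like compound (Tb = 353 K) in a 1 mm i.d.,
    # 5 cm long capillary held at 303 K -> roughly 1e-7 g/s.
    print(output_rate(D=9e-6, M=78.1, T=303.0, Tb=353.0,
                      A=math.pi * (0.5e-3) ** 2, L=0.05))
    ```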

  14. The Alaska Volcano Observatory Website a Tool for Information Management and Dissemination

    NASA Astrophysics Data System (ADS)

    Snedigar, S. F.; Cameron, C. E.; Nye, C. J.

    2006-12-01

The Alaska Volcano Observatory's (AVO's) website served as a primary information management tool during the 2006 eruption of Augustine Volcano. The AVO website is dynamically generated from a database back-end. This system enabled AVO to quickly and easily update the website, and provide content based on user queries to the database. During the Augustine eruption, the new AVO website was heavily used by members of the public (up to 19 million hits per day), and this was largely because the AVO public pages were an excellent source of up-to-date information. There are two different, yet fully integrated parts of the website. An external, public site (www.avo.alaska.edu) allows the general public to track eruptive activity by viewing the latest photographs, webcam images, webicorder graphs, and official information releases about activity at the volcano, as well as maps, previous eruption information, bibliographies, and rich information about other Alaska volcanoes. The internal half of the website hosts diverse geophysical and geological data (as browse images) in a format equally accessible by AVO staff in different locations. In addition, an observation log allows users to enter information about anything from satellite passes to seismic activity to ash fall reports into a searchable database. The individual(s) on duty at the watch office use forms on the internal website to post a summary of the latest activity directly to the public website, ensuring that the public website is always up to date. The internal website also serves as a starting point for monitoring Alaska's volcanoes. AVO's extensive image database allows AVO personnel to upload many photos, diagrams, and videos which are then available to be browsed by anyone in the AVO community. Selected images are viewable from the public page. The primary webserver is housed at the University of Alaska Fairbanks, and holds a MySQL database with over 200 tables and several thousand lines of php code gluing the database and website together. The database currently holds 95 GB of data. Webcam images and webicorder graphs are pulled from servers in Anchorage every few minutes. Other servers in Fairbanks generate earthquake location plots and spectrograms.

  15. A Database as a Service for the Healthcare System to Store Physiological Signal Data.

    PubMed

    Chang, Hsien-Tsung; Lin, Tsai-Huei

    2016-01-01

Wearable devices that measure physiological signals to help develop self-health management habits have become increasingly popular in recent years. These records are conducive to follow-up health and medical care. In this study, based on the characteristics of the observed physiological signal records, namely 1) a large number of users, 2) a large amount of data, 3) low information variability, 4) data privacy authorization, and 5) data access by designated users, we wish to resolve physiological signal record-relevant issues by utilizing the advantages of the Database as a Service (DaaS) model. Storing a large amount of data using file patterns can reduce database load, allowing users to access data efficiently; the privacy control settings allow users to store data securely. The results of the experiment show that the proposed system has better database access performance than a traditional relational database, with a small difference in database volume, thus proving that the proposed system can improve data storage performance.
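
    The storage pattern described (bulk signal samples in files, only metadata in the database) is easy to sketch. In the illustration below, SQLite stands in for the service's actual store, and every table, column, and function name is hypothetical.

    ```python
    # Sketch of file-pattern storage: raw samples go to a flat file, only
    # metadata and the file path go to the database. All names hypothetical.
    import sqlite3, json, pathlib, datetime

    DATA_DIR = pathlib.Path("signals")
    DATA_DIR.mkdir(exist_ok=True)

    db = sqlite3.connect("daas_meta.db")
    db.execute("""CREATE TABLE IF NOT EXISTS signal_meta (
                    id INTEGER PRIMARY KEY,
                    user_id TEXT, kind TEXT, recorded_at TEXT, path TEXT)""")

    def store_signal(user_id, kind, samples):
        """Write samples to a file and register the metadata in the database."""
        ts = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%S%f")
        path = DATA_DIR / f"{user_id}_{kind}_{ts}.json"
        path.write_text(json.dumps(samples))  # bulk data stays out of the DB
        db.execute("INSERT INTO signal_meta (user_id, kind, recorded_at, path) "
                   "VALUES (?, ?, ?, ?)", (user_id, kind, ts, str(path)))
        db.commit()

    store_signal("u001", "heart_rate", [72, 74, 71, 73])
    ```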

  16. A Database as a Service for the Healthcare System to Store Physiological Signal Data

    PubMed Central

    Lin, Tsai-Huei

    2016-01-01

Wearable devices that measure physiological signals to help develop self-health management habits have become increasingly popular in recent years. These records are conducive to follow-up health and medical care. In this study, based on the characteristics of the observed physiological signal records, namely 1) a large number of users, 2) a large amount of data, 3) low information variability, 4) data privacy authorization, and 5) data access by designated users, we wish to resolve physiological signal record-relevant issues by utilizing the advantages of the Database as a Service (DaaS) model. Storing a large amount of data using file patterns can reduce database load, allowing users to access data efficiently; the privacy control settings allow users to store data securely. The results of the experiment show that the proposed system has better database access performance than a traditional relational database, with a small difference in database volume, thus proving that the proposed system can improve data storage performance. PMID:28033415

  17. Insert Modifications Improve Access to Artificial Red-Cockaded Woodpecker Nest Cavities

    Treesearch

    John W. Edwards; Ernest E. Stevens; Charles A. Dachelet

    1997-01-01

A design for a modified, artificial Red-cockaded Woodpecker (Picoides borealis) cavity insert is presented. This modification allowed eggs and young to be inspected easily, removed, and replaced throughout the nesting period. Modifications to cavity inserts are best done before installation, but can be easily retrofitted in existing artificial cavities...

  18. Comparison of Online Agricultural Information Services.

    ERIC Educational Resources Information Center

    Reneau, Fred; Patterson, Richard

    1984-01-01

    Outlines major online agricultural information services--agricultural databases, databases with agricultural services, educational databases in agriculture--noting services provided, access to the database, and costs. Benefits of online agricultural database sources (availability of agricultural marketing, weather, commodity prices, management…

  19. Enhanced DIII-D Data Management Through a Relational Database

    NASA Astrophysics Data System (ADS)

    Burruss, J. R.; Peng, Q.; Schachter, J.; Schissel, D. P.; Terpstra, T. B.

    2000-10-01

A relational database is being used to serve data about DIII-D experiments. The database is optimized for queries across multiple shots, allowing for rapid data mining by SQL-literate researchers. The relational database relates different experiments and datasets, thus providing a big picture of DIII-D operations. Users are encouraged to add their own tables to the database. Summary physics quantities about DIII-D discharges are collected and stored in the database automatically. Metadata about code runs, MDSplus usage, and visualization tool usage are collected, stored in the database, and later analyzed to improve computing. The database may be accessed through programming languages such as C, Java, and IDL, or through ODBC-compliant applications such as Excel and Access. A database-driven web page also provides a convenient means for viewing database quantities through the World Wide Web. Demonstrations will be given at the poster.
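
    The cross-shot data mining the abstract describes is, at bottom, ordinary SQL. The sketch below shows the query style with SQLite standing in for the DIII-D database; the table and column names are invented for illustration.

    ```python
    # Multi-shot query sketch; "shots", "shot", "plasma_current" are
    # hypothetical names, and SQLite stands in for the real database.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE shots (shot INTEGER, date TEXT, plasma_current REAL)")
    db.executemany("INSERT INTO shots VALUES (?, ?, ?)",
                   [(100001, "2000-06-01", 1.2),
                    (100002, "2000-06-01", 1.9),
                    (100003, "2000-06-02", 2.1)])

    # Data mining across many shots is a single SQL statement:
    rows = db.execute("""SELECT shot, plasma_current FROM shots
                         WHERE plasma_current > 1.5 ORDER BY shot""").fetchall()
    for shot, ip in rows:
        print(f"shot {shot}: Ip = {ip} MA")
    ```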

  20. NATIONAL URBAN DATABASE AND ACCESS PORTAL TOOL

    EPA Science Inventory

    Current mesoscale weather prediction and microscale dispersion models are limited in their ability to perform accurate assessments in urban areas. A project called the National Urban Database with Access Portal Tool (NUDAPT) is beginning to provide urban data and improve the para...

  1. VO Access to BASECOL Database

    NASA Astrophysics Data System (ADS)

    Moreau, N.; Dubernet, M. L.

    2006-07-01

Basecol is a combination of a website (using PHP and HTML) and a MySQL database concerning molecular ro-vibrational transitions induced by collisions with atoms or molecules. This database has been created in view of the scientific preparation of the Heterodyne Instrument for the Far-Infrared on board the Herschel Space Observatory (HSO). Basecol offers access to numerical and bibliographic data through various output methods such as ASCII, HTML or VOTable (which is a first step towards a VO-compliant system). A web service using Apache Axis has been developed in order to provide direct access to data for external applications.

  2. Designing a Multi-Petabyte Database for LSST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becla, Jacek; Hanushevsky, Andrew; Nikolaev, Sergei

    2007-01-10

The 3.2 giga-pixel LSST camera will produce approximately half a petabyte of archive images every month. These data need to be reduced in under a minute to produce real-time transient alerts, and then added to the cumulative catalog for further analysis. The catalog is expected to grow by about three hundred terabytes per year. The data volume, the real-time transient alerting requirements of the LSST, and its spatio-temporal aspects require innovative techniques to build an efficient data access system at reasonable cost. As currently envisioned, the system will rely on a database for catalogs and metadata. Several database systems are being evaluated to understand how they perform at these data rates, data volumes, and access patterns. This paper describes the LSST requirements, the challenges they impose, the data access philosophy, results to date from evaluating available database technologies against LSST requirements, and the proposed database architecture to meet the data challenges.

  3. IntPath--an integrated pathway gene relationship database for model organisms and important pathogens

    PubMed Central

    2012-01-01

Background Pathway data are important for understanding the relationship between genes, proteins and many other molecules in living organisms. Pathway gene relationships are crucial information for guidance, prediction, reference and assessment in biochemistry, computational biology, and medicine. Many well-established databases--e.g., KEGG, WikiPathways, and BioCyc--are dedicated to collecting pathway data for public access. However, the effectiveness of these databases is hindered by issues such as incompatible data formats, inconsistent molecular representations, inconsistent molecular relationship representations, inconsistent referrals to pathway names, and incomplete coverage of data across databases. Results In this paper, we overcome these issues through extraction, normalization and integration of pathway data from several major public databases (KEGG, WikiPathways, BioCyc, etc.). We build a database that not only hosts our integrated pathway gene relationship data for public access but also maintains the necessary updates in the long run. This public repository is named IntPath (Integrated Pathway gene relationship database for model organisms and important pathogens). Four organisms--S. cerevisiae, M. tuberculosis H37Rv, H. sapiens and M. musculus--are included in this version (V2.0) of IntPath. IntPath uses the "full unification" approach to ensure no deletion and no introduced noise in this process. Therefore, IntPath contains much richer pathway-gene and pathway-gene pair relationships and a much larger number of non-redundant genes and gene pairs than any of the single-source databases. The gene relationships of each gene (measured by average node degree) per pathway are significantly richer. The gene relationships in each pathway (measured by average number of gene pairs per pathway) are also considerably richer in the integrated pathways. Moderate manual curation is involved to remove errors and noise from the source data (e.g., the gene ID errors in WikiPathways and relationship errors in KEGG). We turn complicated and incompatible XML data formats and inconsistent gene and gene relationship representations from different source databases into normalized and unified pathway-gene and pathway-gene pair relationships neatly recorded in simple tab-delimited text format and MySQL tables, which facilitates convenient automatic computation and large-scale referencing in many related studies. IntPath data can be downloaded in text format or as a MySQL dump. IntPath data can also be retrieved and analyzed conveniently through a web service by local programs or through the web interface by mouse clicks. Several useful analysis tools are also provided in IntPath. Conclusions We have overcome in IntPath the issues of compatibility, consistency, and comprehensiveness that often hamper effective use of pathway databases. We have included four organisms in the current release of IntPath. Our methodology and programs described in this work can be easily applied to other organisms, and we will include more model organisms and important pathogens in future releases of IntPath. IntPath maintains regular updates and is freely available at http://compbio.ddns.comp.nus.edu.sg:8080/IntPath. PMID:23282057
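
    As a rough illustration of the "average node degree" metric over the tab-delimited pair format mentioned above, the sketch below assumes a simple three-column layout (pathway, gene A, gene B); the real IntPath schema may differ.

    ```python
    # Compute per-pathway average node degree from a tab-delimited
    # pathway-gene pair file. The three-column layout is an assumption.
    from collections import defaultdict

    def average_node_degree(path):
        degree = defaultdict(lambda: defaultdict(int))  # pathway -> gene -> degree
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                pathway, gene_a, gene_b = line.rstrip("\n").split("\t")
                degree[pathway][gene_a] += 1
                degree[pathway][gene_b] += 1
        return {pw: sum(d.values()) / len(d) for pw, d in degree.items()}
    ```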

  4. BIOSPIDA: A Relational Database Translator for NCBI.

    PubMed

    Hagen, Matthew S; Lee, Eva K

    2010-11-13

As the volume and availability of biological databases continue widespread growth, it has become increasingly difficult for research scientists to identify all relevant information for biological entities of interest. Details of nucleotide sequences, gene expression, molecular interactions, and three-dimensional structures are maintained across many different databases. Retrieving all necessary information requires an integrated system that can query multiple databases with minimized overhead. This paper introduces a universal parser and relational schema translator that can be utilized for all NCBI databases in Abstract Syntax Notation (ASN.1). The data models for OMIM, Entrez-Gene, PubMed, MMDB and GenBank have been successfully converted into relational databases, and all are easily linkable, helping to answer complex biological questions. These tools enable research scientists to locally integrate databases from NCBI without significant workload or development time.
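
    The heart of such a translator, turning a nested ASN.1-style record into linked relational rows, can be shown schematically. The record shape and table layout below are invented for illustration and are not BIOSPIDA's actual schema.

    ```python
    # Nested-to-relational sketch: one parent row per record, one child row
    # per repeated sub-structure, linked by a foreign key. Illustrative only.
    def flatten(records):
        genes, xrefs = [], []
        for gid, rec in enumerate(records, start=1):
            genes.append({"gene_id": gid, "symbol": rec["symbol"]})
            for xr in rec.get("xrefs", []):  # repeated field -> child table
                xrefs.append({"gene_id": gid, "db": xr["db"], "acc": xr["acc"]})
        return genes, xrefs

    genes, xrefs = flatten([
        {"symbol": "TP53", "xrefs": [{"db": "MIM", "acc": "191170"},
                                     {"db": "HGNC", "acc": "11998"}]},
    ])
    print(genes)  # [{'gene_id': 1, 'symbol': 'TP53'}]
    print(xrefs)  # two child rows keyed back to gene_id 1
    ```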

  5. Visualizing Cloud Properties and Satellite Imagery: A Tool for Visualization and Information Integration

    NASA Astrophysics Data System (ADS)

    Chee, T.; Nguyen, L.; Smith, W. L., Jr.; Spangenberg, D.; Palikonda, R.; Bedka, K. M.; Minnis, P.; Thieman, M. M.; Nordeen, M.

    2017-12-01

Providing public access to research products, including cloud macro- and microphysical properties and satellite imagery, is a key concern for the NASA Langley Research Center Cloud and Radiation Group. This work describes a web-based visualization tool and API that allow end users to easily create customized cloud product and satellite imagery, ground site data and satellite ground track information that is generated dynamically. The tool has two uses: one to visualize the dynamically created imagery and the other to provide access to the dynamically generated imagery directly at a later time. Internally, we leverage our practical experience with large, scalable application practices to develop a system that has the largest potential for scalability as well as the ability to be deployed on the cloud to accommodate scalability issues. We build upon NASA Langley Cloud and Radiation Group's experience with making real-time and historical satellite cloud product information, satellite imagery, ground site data and satellite track information accessible and easily searchable. This tool is the culmination of our prior experience with dynamic imagery generation and provides a way to build a "mash-up" of dynamically generated imagery and related kinds of information that are visualized together to add value to disparate but related information. In support of NASA strategic goals, our group aims to make as much scientific knowledge, observations and products as possible available to the citizen science, research and interested communities, as well as to automated systems that can acquire the same information for data mining or other analytic purposes. This tool and the underlying APIs provide a valuable research tool to a wide audience, both as a standalone research tool and as an easily accessed data source that can be mined or used with existing tools.

  6. Database Reports Over the Internet

    NASA Technical Reports Server (NTRS)

    Smith, Dean Lance

    2002-01-01

Most of the summer was spent developing software that would permit existing test report forms to be printed over the web on a printer that is supported by Adobe Acrobat Reader. The data are stored in a DBMS (Data Base Management System). The client asks for the information from the database using an HTML (Hyper Text Markup Language) form in a web browser. JavaScript is used with the forms to assist the user and verify the integrity of the entered data. Queries to a database are made in SQL (Structured Query Language), a widely supported standard for making queries to databases. Java servlets, programs written in the Java programming language running under the control of network server software, interrogate the database and complete a PDF form template kept in a file. The completed report is sent to the browser requesting the report. Some errors are sent to the browser in an HTML web page, others are reported to the server. Access to the databases was restricted since the data are being transported to new DBMS software that will run on new hardware. However, the SQL queries were made to Microsoft Access, a DBMS that is available on most PCs (Personal Computers). Access does support the SQL commands that were used, and a database was created with Access that contained typical data for the report forms. Some of the problems and features are discussed below.

  7. For 481 biomedical open access journals, articles are not searchable in the Directory of Open Access Journals nor in conventional biomedical databases.

    PubMed

    Liljekvist, Mads Svane; Andresen, Kristoffer; Pommergaard, Hans-Christian; Rosenberg, Jacob

    2015-01-01

Background. Open access (OA) journals allow access to research papers free of charge to the reader. Traditionally, biomedical researchers use databases like MEDLINE and EMBASE to discover new advances. However, biomedical OA journals might not fulfill such databases' criteria, hindering dissemination. The Directory of Open Access Journals (DOAJ) is a database exclusively listing OA journals. The aim of this study was to investigate DOAJ's coverage of biomedical OA journals compared with the conventional biomedical databases. Methods. Information on all journals listed in four conventional biomedical databases (MEDLINE, PubMed Central, EMBASE and SCOPUS) and DOAJ was gathered. Journals were included if they were (1) actively publishing, (2) full OA, (3) prospectively indexed in one or more database, and (4) of biomedical subject. Impact factor and journal language were also collected. DOAJ was compared with conventional databases regarding the proportion of journals covered, along with their impact factor and publishing language. The proportion of journals with articles indexed by DOAJ was determined. Results. In total, 3,236 biomedical OA journals were included in the study. Of the included journals, 86.7% were listed in DOAJ. Combined, the conventional biomedical databases listed 75.0% of the journals; 18.7% in MEDLINE; 36.5% in PubMed Central; 51.5% in SCOPUS and 50.6% in EMBASE. Of the journals in DOAJ, 88.7% published in English and 20.6% had received impact factor for 2012 compared with 93.5% and 26.0%, respectively, for journals in the conventional biomedical databases. A subset of 51.1% and 48.5% of the journals in DOAJ had articles indexed from 2012 and 2013, respectively. Of journals exclusively listed in DOAJ, one journal had received an impact factor for 2012, and 59.6% of the journals had no content from 2013 indexed in DOAJ. Conclusions. DOAJ is the most complete registry of biomedical OA journals compared with five conventional biomedical databases. However, DOAJ only indexes articles for half of the biomedical journals listed, making it an incomplete source for biomedical research papers in general.

  8. Alternative treatment technology information center computer database system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sullivan, D.

    1995-10-01

The Alternative Treatment Technology Information Center (ATTIC) computer database system was developed pursuant to the 1986 Superfund law amendments. It provides up-to-date information on innovative treatment technologies to clean up hazardous waste sites. ATTIC v2.0 provides access to several independent databases as well as a mechanism for retrieving full-text documents of key literature. It can be accessed with a personal computer and modem 24 hours a day, and there are no user fees. ATTIC provides "one-stop shopping" for information on alternative treatment options by accessing several databases: (1) treatment technology database; this contains abstracts from the literature on all types of treatment technologies, including biological, chemical, physical, and thermal methods. The best literature as viewed by experts is highlighted. (2) treatability study database; this provides performance information on technologies to remove contaminants from wastewaters and soils. It is derived from treatability studies. This database is available through ATTIC or separately as a disk that can be mailed to you. (3) underground storage tank database; this presents information on underground storage tank corrective actions, surface spills, emergency response, and remedial actions. (4) oil/chemical spill database; this provides abstracts on treatment and disposal of spilled oil and chemicals. In addition to these separate databases, ATTIC allows immediate access to other disk-based systems such as the Vendor Information System for Innovative Treatment Technologies (VISITT) and the Bioremediation in the Field Search System (BFSS). The user may download these programs to their own PC via a high-speed modem. Also via modem, users are able to download entire documents through the ATTIC system. Currently, about fifty publications are available, including Superfund Innovative Technology Evaluation (SITE) program documents.

  9. OLS Client and OLS Dialog: Open Source Tools to Annotate Public Omics Datasets.

    PubMed

    Perez-Riverol, Yasset; Ternent, Tobias; Koch, Maximilian; Barsnes, Harald; Vrousgou, Olga; Jupp, Simon; Vizcaíno, Juan Antonio

    2017-10-01

    The availability of user-friendly software to annotate biological datasets and experimental details is becoming essential in data management practices, both in local storage systems and in public databases. The Ontology Lookup Service (OLS, http://www.ebi.ac.uk/ols) is a popular centralized service to query, browse and navigate biomedical ontologies and controlled vocabularies. Recently, the OLS framework has been completely redeveloped (version 3.0), including enhancements in the data model, like the added support for Web Ontology Language based ontologies, among many other improvements. However, the new OLS is not backwards compatible and new software tools are needed to enable access to this widely used framework now that the previous version is no longer available. We here present the OLS Client as a free, open-source Java library to retrieve information from the new version of the OLS. It enables rapid tool creation by providing a robust, pluggable programming interface and common data model to programmatically access the OLS. The library has already been integrated and is routinely used by several bioinformatics resources and related data annotation tools. Secondly, we also introduce an updated version of the OLS Dialog (version 2.0), a Java graphical user interface that can be easily plugged into Java desktop applications to access the OLS. The software and related documentation are freely available at https://github.com/PRIDE-Utilities/ols-client and https://github.com/PRIDE-Toolsuite/ols-dialog. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
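
    Since the OLS Client is a Java library, a quick way to see what it wraps is to call the OLS REST API directly. The sketch below uses the /api/search endpoint and parameters as recalled from the OLS 3 documentation; treat both as assumptions to verify against the current docs.

    ```python
    # Query the OLS REST API directly; endpoint path and parameter names
    # are recalled from OLS 3 docs and should be verified before use.
    import requests

    resp = requests.get("https://www.ebi.ac.uk/ols/api/search",
                        params={"q": "heart", "ontology": "efo", "rows": 5},
                        timeout=30)
    resp.raise_for_status()
    for doc in resp.json()["response"]["docs"]:
        print(doc.get("obo_id"), doc.get("label"))
    ```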

  10. Using information communication technologies to increase the institutional capacity of local health organisations in Africa: a case study of the Kenya Civil Society Portal for Health.

    PubMed

    Juma, Charles; Sundsmo, Aaron; Maket, Boniface; Powell, Richard; Aluoch, Gilbert

    2015-01-01

Achieving the healthcare components of the United Nations' Millennium Development Goals is significantly premised on effective service delivery by civil society organisations (CSOs). However, many CSOs across Africa lack the necessary capacity to perform this role robustly. This paper reports on an evaluation of the use, and perceived impact, of a knowledge management tool upon institutional strengthening among CSOs working in Kenya's health sector. Three methods were used: analytics data; user satisfaction surveys; and a further key informant survey. Satisfaction with the portal was consistently high, with 99% finding the quality and relevance of the content very good or good for institutional strengthening standards, governance, and planning and resource mobilisation. Critical facilitators to the success of knowledge management for CSO institutional strengthening were identified as people/culture (developed resources and organisational narratives) and technology (easily accessible, enabling information exchange, tools/resources available, access to consultants/partners). Critical barriers were identified as people/culture (database limitations, materials limitations, and lack of active users), and process (limited access, limited interactions, and limited approval process). This pilot study demonstrated the perceived utility of a web-based knowledge management portal among developing nations' CSOs, with widespread satisfaction across multiple domains, which increased over time. Providing increased opportunities for collective mutual learning, promoting a culture of data use for decision making, and encouraging all health organisations to be learning institutions should be a priority for those interested in promoting sustainable long-term solutions for Africa.

  11. Extending key sharing: how to generate a key tightly coupled to a network security policy

    NASA Astrophysics Data System (ADS)

    Kazantzidis, Matheos

    2006-04-01

Current state-of-the-art security policy technologies, besides their small-scale limitations and the largely manual nature of the accompanying management methods, are lacking a) in the real-timeliness of policy implementation and b) in vulnerabilities and inflexibility stemming from centralized policy decision making; even if, for example, a policy description or access control database is distributed, the actual decision is often a centralized action and forms a single point of failure in the system. In this paper we present a new fundamental concept that allows a security policy to be implemented by a systematic and efficient key distribution procedure. Specifically, we extend polynomial Shamir key splitting, in which a global key is split into n parts, any k of which can re-construct the original key. We present a method in which, instead of "any k parts" being able to re-construct the original key, the latter can only be reconstructed if keys are combined as an access control policy describes. This leads to an easily deployable key generation procedure that results in a single key per entity that "knows" its role in the specific access control policy from which it was derived. The system is considered efficient as it may be used to avoid expensive PKI operations or pairwise key distributions, and it provides superior security due to its distributed nature, the fact that the key is tightly coupled to the policy, and the fact that policy changes may be implemented more easily and quickly.
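
    For context, the polynomial splitting the paper extends is Shamir's (k, n) scheme. The sketch below implements that baseline over a prime field; the paper's policy-coupled reconstruction is not shown.

    ```python
    # Baseline Shamir (k, n) secret splitting over a prime field.
    import secrets

    P = 2**127 - 1  # prime modulus (a Mersenne prime)

    def split(secret, n, k):
        """Return n shares; any k of them reconstruct the secret."""
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
        def f(x):
            acc = 0
            for c in reversed(coeffs):  # Horner evaluation mod P
                acc = (acc * x + c) % P
            return acc
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange-interpolate the hidden polynomial at x = 0."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = split(123456789, n=5, k=3)
    assert reconstruct(shares[:3]) == 123456789
    ```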

  12. Earth in Space: A CD-ROM Version for Pre-College Teachers

    NASA Astrophysics Data System (ADS)

    Pedigo, P.

    2003-12-01

    Earth in Space, a magazine about the Earth and space sciences for pre-college science teachers, was published by AGU between 1987 and 2001 (9 issues each year). The goal of Earth in Space was to make research at the frontiers of the geosciences accessible to teachers and students and engage them in thinking about scientific careers. Each issue contained two or three recent research articles, rewritten for a high school level audience from the original version published in peer-reviewed AGU journals, which were supplemented with short news items and biographic information about the authors. As part of a 2003 summer internship with AGU, sponsored by the AGU Committee on Education and Human Resources (CEHR) and the American Institute of Physics, this collection of Earth in Space magazines was converted into an easily accessible electronic resource for K-12 teachers and students. Every issue was scanned into a PDF file. The entire collection of articles was cataloged in a database indexed to key topic terms (e.g., volcanoes, global climate change, space weather). A front-page was designed in order to facilitate rapid access to articles concerning specific topics within the Earth and space sciences of particular interest to high school students. A compact CD-ROM version of this resource will be distributed to science teachers at future meetings of the National Science Teachers Association and will be made available through AGU's Outreach and Research Support program.

  13. OGDD (Olive Genetic Diversity Database): a microsatellite markers' genotypes database of worldwide olive trees for cultivar identification and virgin olive oil traceability.

    PubMed

    Ben Ayed, Rayda; Ben Hassen, Hanen; Ennouri, Karim; Ben Marzoug, Riadh; Rebai, Ahmed

    2016-01-01

Olive (Olea europaea), whose importance is mainly due to nutritional and health features, is one of the most economically significant oil-producing trees in the Mediterranean region. Unfortunately, the increasing market demand for virgin olive oil can often result in its adulteration with less expensive oils, which is a serious problem for the public and for quality control evaluators of virgin olive oil. Therefore, to avoid fraud, olive cultivar identification and virgin olive oil authentication have become a major issue for the producers and consumers of quality control in the olive chain. Presently, genetic traceability using SSRs is a cost-effective and powerful marker technique that can be employed to resolve such problems. However, to identify an unknown monovarietal virgin olive oil cultivar, a reference system has become necessary. Thus, an Olive Genetic Diversity Database (OGDD) (http://www.bioinfo-cbs.org/ogdd/) is presented in this work. It is a genetic, morphologic and chemical database of worldwide olive trees and oils having a double function. In fact, besides being a reference system generated for the identification of unknown olive or virgin olive oil cultivars based on their microsatellite allele size(s), it provides users with additional morphological and chemical information for each identified cultivar. Currently, OGDD is designed to enable users to easily retrieve and visualize biologically important information (SSR markers and olive tree and oil characteristics of about 200 cultivars worldwide) using a set of efficient query interfaces and analysis tools. It can be accessed through a web service from any modern programming language using a simple hypertext transfer protocol call. The web site is implemented in Java, JavaScript, PHP, HTML and Apache, with all major browsers supported. Database URL: http://www.bioinfo-cbs.org/ogdd/. © The Author(s) 2016. Published by Oxford University Press.
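
    The identification step, matching a query SSR profile against reference cultivar profiles by allele size, amounts to a simple lookup. In the sketch below the marker names are real olive SSR loci, but the allele sizes and cultivar profiles are invented for illustration.

    ```python
    # Match a query SSR profile (allele sizes per marker) against reference
    # cultivar profiles. All sizes and profiles below are made up.
    REFERENCE = {
        "Chemlali":  {"DCA3": (237, 243), "DCA9": (172, 182)},
        "Picholine": {"DCA3": (231, 243), "DCA9": (172, 206)},
    }

    def identify(query, max_mismatches=0):
        """Return cultivars disagreeing with the query on <= max_mismatches markers."""
        hits = []
        for cultivar, profile in REFERENCE.items():
            mismatches = sum(1 for marker, alleles in query.items()
                             if profile.get(marker) != alleles)
            if mismatches <= max_mismatches:
                hits.append(cultivar)
        return hits

    print(identify({"DCA3": (237, 243), "DCA9": (172, 182)}))  # ['Chemlali']
    ```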

  14. LD Hub: a centralized database and web interface to perform LD score regression that maximizes the potential of summary level GWAS data for SNP heritability and genetic correlation analysis.

    PubMed

    Zheng, Jie; Erzurumluoglu, A Mesut; Elsworth, Benjamin L; Kemp, John P; Howe, Laurence; Haycock, Philip C; Hemani, Gibran; Tansey, Katherine; Laurin, Charles; Pourcain, Beate St; Warrington, Nicole M; Finucane, Hilary K; Price, Alkes L; Bulik-Sullivan, Brendan K; Anttila, Verneri; Paternoster, Lavinia; Gaunt, Tom R; Evans, David M; Neale, Benjamin M

    2017-01-15

LD score regression is a reliable and efficient method of using genome-wide association study (GWAS) summary-level results data to estimate the SNP heritability of complex traits and diseases, partition this heritability into functional categories, and estimate the genetic correlation between different phenotypes. Because the method relies on summary-level results data, LD score regression is computationally tractable even for very large sample sizes. However, publicly available GWAS summary-level data are typically stored in different databases and have different formats, making it difficult to apply LD score regression to estimate genetic correlations across many different traits simultaneously. In this manuscript, we describe LD Hub, a centralized database of summary-level GWAS results for 173 diseases/traits from different publicly available resources/consortia, and a web interface that automates the LD score regression analysis pipeline. To demonstrate functionality and validate our software, we replicated previously reported LD score regression analyses of 49 traits/diseases using LD Hub, and estimated SNP heritability and the genetic correlation across the different phenotypes. We also present new results obtained by uploading a recent atopic dermatitis GWAS meta-analysis to examine the genetic correlation between the condition and other potentially related traits. In response to the growing availability of publicly accessible GWAS summary-level results data, our database and the accompanying web interface will ensure maximal uptake of the LD score regression methodology, provide a useful database for the public dissemination of GWAS results, and provide a method for easily screening hundreds of traits for overlapping genetic aetiologies. The web interface and instructions for using LD Hub are available at http://ldsc.broadinstitute.org/. Contact: jie.zheng@bristol.ac.uk. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
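
    The relation behind the pipeline is the LD score regression equation of Bulik-Sullivan et al. (2015), E[chi2_j] = 1 + N*a + (N*h2/M)*l_j, so SNP heritability falls out of the slope of the chi-squared statistics on LD scores. The sketch below recovers h2 with a plain unweighted least-squares fit, a deliberate simplification of the weighted two-step estimator the actual ldsc software uses.

    ```python
    # Recover SNP heritability from the slope of chi2 on LD score:
    #   E[chi2_j] = 1 + N*a + (N*h2/M) * l_j
    # Unweighted least squares stands in for ldsc's weighted estimator.
    import numpy as np

    def estimate_h2(chi2, ld_scores, N, M):
        slope, intercept = np.polyfit(ld_scores, chi2, deg=1)
        return slope * M / N, intercept  # (h2 estimate, confounding intercept)

    # Synthetic check: simulate summary statistics with h2 = 0.4, intercept 1.
    rng = np.random.default_rng(0)
    M, N, h2 = 100_000, 50_000, 0.4
    l = rng.uniform(1, 200, size=M)
    chi2 = 1 + N * h2 * l / M + rng.normal(0, 0.5, size=M)
    print(estimate_h2(chi2, l, N, M))  # approximately (0.4, 1.0)
    ```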

  15. RCDB: Renal Cancer Gene Database.

    PubMed

    Ramana, Jayashree

    2012-05-18

Renal cell carcinoma or RCC is one of the common and most lethal urological cancers, with 40% of the patients succumbing to death because of metastatic progression of the disease. Treatment of metastatic RCC remains highly challenging because of its resistance to chemotherapy as well as radiotherapy, besides surgical resection. Whereas RCC comprises tumors with differing histological types, clear cell RCC remains the most common. A major problem in the clinical management of patients presenting with localized ccRCC is the inability to determine tumor aggressiveness and accurately predict the risk of metastasis following surgery. As a measure to improve the diagnosis and prognosis of RCC, researchers have identified several molecular markers through a number of techniques. However, the wealth of information available is scattered in the literature and not easily amenable to data mining. To reduce this gap, this work describes a comprehensive repository called the Renal Cancer Gene Database, as an integrated gateway to study renal cancer related data. The Renal Cancer Gene Database is a manually curated compendium of 240 protein-coding and 269 miRNA genes contributing to the etiology and pathogenesis of various forms of renal cell carcinomas. The protein-coding genes have been classified according to the kind of gene alteration observed in RCC. RCDB also includes the miRNAs dysregulated in RCC, along with the corresponding information regarding the type of RCC and/or metastatic or prognostic significance. While some of the miRNA genes showed an association with other types of cancer, a few were unique to RCC. Users can query the database using keywords, category and chromosomal location of the genes. The knowledgebase can be freely accessed via a user-friendly web interface at http://www.juit.ac.in/attachments/jsr/rcdb/homenew.html. It is hoped that this database would serve as a useful complement to the existing public resources and as a good starting point for researchers and physicians interested in RCC genetics.

  16. The Protein Disease Database of human body fluids: II. Computer methods and data issues.

    PubMed

    Lemkin, P F; Orr, G A; Goldstein, M P; Creed, G J; Myrick, J E; Merril, C R

    1995-01-01

    The Protein Disease Database (PDD) is a relational database of proteins and diseases. With this database it is possible to screen for quantitative protein abnormalities associated with disease states. These quantitative relationships use data drawn from the peer-reviewed biomedical literature. Assays may also include those observed in high-resolution electrophoretic gels that offer the potential to quantitate many proteins in a single test as well as data gathered by enzymatic or immunologic assays. We are using the Internet World Wide Web (WWW) and the Web browser paradigm as an access method for wide distribution and querying of the Protein Disease Database. The WWW hypertext transfer protocol and its Common Gateway Interface make it possible to build powerful graphical user interfaces that can support easy-to-use data retrieval using query specification forms or images. The details of these interactions are totally transparent to the users of these forms. Using a client-server SQL relational database, user query access, initial data entry and database maintenance are all performed over the Internet with a Web browser. We discuss the underlying design issues, mapping mechanisms and assumptions that we used in constructing the system, data entry, access to the database server, security, and synthesis of derived two-dimensional gel image maps and hypertext documents resulting from SQL database searches.

  17. Access to Success

    ERIC Educational Resources Information Center

    Brunken, Anna; Delly, Pamela

    2011-01-01

Changes to education in Australia have seen new government legislation increasing educational pathways so that students can more easily enter university, the aim being to increase participation. Now, many domestic students utilise various pathways to access university. Some have undertaken basic Further Education Diplomas, received subject credits,…

  18. NASA PDS IMG: Accessing Your Planetary Image Data

    NASA Astrophysics Data System (ADS)

    Padams, J.; Grimes, K.; Hollins, G.; Lavoie, S.; Stanboli, A.; Wagstaff, K.

    2018-04-01

    The Planetary Data System Cartography and Imaging Sciences Node provides a number of tools and services to integrate the 700+ TB of image data so information can be correlated across missions, instruments, and data sets and easily accessed by the science community.

  19. Breakthrough Science Enabled by Regular Access to Orbits Beyond Earth

    NASA Astrophysics Data System (ADS)

    Gorjian, V.

    2018-02-01

    Regular launches to the Deep Space Gateway (DSG) will enable smallsats to access orbits not currently easily available to low cost missions. These orbits will allow great new science, especially when using the DSG as an optical hub for downlink.

  20. Integration of red cell genotyping into the blood supply chain: a population-based study.

    PubMed

    Flegel, Willy A; Gottschall, Jerome L; Denomme, Gregory A

    2015-07-01

    When problems with compatibility arise, transfusion services often use time-consuming serological tests to identify antigen-negative red cell units for safe transfusion. New methods have made red cell genotyping possible for all clinically relevant blood group antigens. We did mass-scale genotyping of donor blood and provided hospitals with access to a large red cell database to meet the demand for antigen-negative red cell units beyond ABO and Rh blood typing. We established a red cell genotype database at the BloodCenter of Wisconsin on July 17, 2010. All self-declared African American, Asian, Hispanic, and Native American blood donors were eligible irrespective of their ABO and Rh type or history of donation. Additionally, blood donors who were groups O, A, and B, irrespective of their Rh phenotype, were eligible for inclusion only if they had a history of at least three donations in the previous 3 years, with one donation in the previous 12 months at the BloodCenter of Wisconsin. We did red cell genotyping with a nanofluidic microarray system, using 32 single nucleotide polymorphisms to predict 42 blood group antigens. An additional 14 antigens were identified via serological phenotype. We monitored the ability of the red cell genotype database to meet demand for compatible blood during 3 years. In addition to the central database at the BloodCenter of Wisconsin, we gave seven hospitals online access to a web-based antigen query portal on May 1, 2013, to help them to locate antigen-negative red cell units in their own inventories. We analysed genotype data for 43,066 blood donors. Requests were filled for 5661 (99.8%) of 5672 patient encounters in which antigen-negative red cell units were needed. Red cell genotyping met the demand for antigen-negative blood in 5339 (94.1%) of 5672 patient encounters, and the remaining 333 (5.9%) requests were filled by use of serological data. Using the 42 antigens represented in our red cell genotype database, we were able to fill 14,357 (94.8%) of 15,140 requests for antigen-negative red cell units from hospitals served by the BloodCenter of Wisconsin. In the pilot phase, the seven hospitals identified 71 units from 52 antigen-negative red cell unit requests. Red cell genotyping has the potential to transform the way antigen-negative red cell units are provided. An antigen query portal could reduce the need for transportation of blood and serological screening. If this wealth of genotype data can be made easily accessible online, it will help with the supply of affordable antigen-negative red cell units to ensure patient safety. BloodCenter of Wisconsin Diagnostic Laboratories Strategic Initiative and the NIH Clinical Center Intramural Research Program. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Towards a Selenographic Information System: Apollo 15 Mission Digitization

    NASA Astrophysics Data System (ADS)

    Votava, J. E.; Petro, N. E.

    2012-12-01

    The Apollo missions represent some of the most technically complex and extensively documented explorations ever endeavored by mankind. The surface experiments performed and the lunar samples collected in-situ have helped form our understanding of the Moon's geologic history and the history of our Solar System. Unfortunately, a complication exists in the analysis and accessibility of these large volumes of lunar data and historical Apollo Era documents due to their multiple formats and disconnected web and print locations. Described here is a project to modernize, spatially reference, and link the lunar data into a comprehensive SELENOGRAPHIC INFORMATION SYSTEM, starting with the Apollo 15 mission. Like its terrestrial counter-parts, Geographic Information System (GIS) programs, such as ArcGIS, allow for easy integration, access, analysis, and display of large amounts of spatially-related data. Documentation in this new database includes surface photographs, panoramas, samples and their laboratory studies (major element and rare earth element weight percents), planned and actual vehicle traverses, and field notes. Using high-resolution (<0.25 m/pixel) images from the Lunar Reconnaissance Orbiter Camera (LROC) the rover (LRV) tracks and astronaut surface activities, along with field sketches from the Apollo 15 Preliminary Science Report (Swann, 1972), were digitized and mapped in ArcMap. Point features were created for each documented sample within the Lunar Sample Compendium (Meyer, 2010) and hyperlinked to the appropriate Compendium file (.PDF) at the stable archive site: http://curator.jsc.nasa.gov/lunar/compendium.cfm. Historical Apollo Era photographs and assembled panoramas were included as point features at each station that have been hyperlinked to the Apollo Lunar Surface Journal (ALSJ) online image library. The database has been set up to allow for the easy display of spatial variation of select attributes between samples. Attributes of interest that have data from the Compendium added directly into the database include age (Ga), mass, texture, major oxide elements (weight %), and Th and U (ppm). This project will produce an easily accessible and linked database that can offer technical and scientific information in its spatial context. While it is not possible given the enormous amounts of data, and the small allotment of time, to enter and/or link every detail to its map layer, the links that have been made here direct the user to rich, stable archive websites and web-based databases that are easy to navigate. While this project only created a product for the Apollo 15 mission, it is the model for spatially-referencing the other Apollo missions. Such a comprehensive lunar surface-activities database, a Selenographic Information System, will likely prove invaluable for future lunar studies. References: Meyer, C. (2010), The lunar sample compendium, June 2012 to August 2012, http://curator.jsc.nasa.gov/lunar/compendium.cfm, Astromaterials Res. & Exploration Sci., NASA L. B. Johnson Space Cent., Houston, TX. Swann, G. A. (1972), Preliminary geologic investigation of the Apollo 15 landing site, in Apollo 15 Preliminary Science Report, [NASA SP-289], pp. 5-1 - 5-112, NASA Manned Spacecraft Cent., Washington, D.C.

  2. Ursgal, Universal Python Module Combining Common Bottom-Up Proteomics Tools for Large-Scale Analysis.

    PubMed

    Kremer, Lukas P M; Leufken, Johannes; Oyunchimeg, Purevdulam; Schulze, Stefan; Fufezan, Christian

    2016-03-04

Proteomics data integration has become a broad field with a variety of programs offering innovative algorithms to analyze increasing amounts of data. Unfortunately, this software diversity leads to many problems as soon as the data is analyzed using more than one algorithm for the same task. Although it was shown that the combination of multiple peptide identification algorithms yields more robust results, it is only recently that unified approaches are emerging; however, workflows that, for example, aim to optimize search parameters or that employ cascaded-style searches can only be made accessible if data analysis becomes not only unified but also, most importantly, scriptable. Here we introduce Ursgal, a Python interface to many commonly used bottom-up proteomics tools and to additional auxiliary programs. Complex workflows can thus be composed using the Python scripting language in a few lines of code. Ursgal is easily extensible, and we have made several database search engines (X!Tandem, OMSSA, MS-GF+, Myrimatch, MS Amanda), statistical postprocessing algorithms (qvality, Percolator), and one algorithm that combines statistically postprocessed outputs from multiple search engines ("combined FDR") accessible as an interface in Python. Furthermore, we have implemented a new algorithm ("combined PEP") that combines multiple search engines employing elements of "combined FDR", PeptideShaker, and Bayes' theorem.
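
    The abstract's claim that workflows take only a few lines of Python can be illustrated with a hypothetical sketch. The UController entry point, its keyword arguments, and the engine identifiers below are recalled from Ursgal's documentation and may differ between versions; treat every name as an assumption to verify against the installed release.

    ```python
    # Hypothetical Ursgal workflow; class, parameter, and engine names are
    # assumptions recalled from the project's docs, not verified here.
    import ursgal

    uc = ursgal.UController(
        profile="LTQ XL low res",            # assumed instrument profile name
        params={"database": "human.fasta"},  # target protein database
    )

    # Run one search engine, then statistically post-process the result.
    search_result = uc.search(input_file="sample.mzML",
                              engine="xtandem_vengeance")  # assumed identifier
    validated = uc.validate(input_file=search_result,
                            engine="percolator_2_08")      # assumed identifier
    print(validated)  # path to the validated result file
    ```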

  3. What Can OpenEI Do For You?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2010-12-10

Open Energy Information (OpenEI) is an open source web platform—similar to the one used by Wikipedia—developed by the US Department of Energy (DOE) and the National Renewable Energy Laboratory (NREL) to make the large amounts of energy-related data and information more easily searched, accessed, and used both by people and automated machine processes. Built utilizing the standards and practices of the Linked Open Data community, the OpenEI platform is much more robust and powerful than typical web sites and databases. As an open platform, all users can search, edit, add, and access data in OpenEI for free. The user community contributes the content and ensures its accuracy and relevance; as the community expands, so does the content's comprehensiveness and quality. The data are structured and tagged with descriptors to enable cross-linking among related data sets, advanced search functionality, and consistent, usable formatting. Data input protocols and quality standards help ensure the content is structured and described properly and derived from a credible source. Although DOE/NREL is developing OpenEI and seeding it with initial data, it is designed to become a true community model with millions of users, a large core of active contributors, and numerous sponsors.

  4. Lunar e-Library: A Research Tool Focused on the Lunar Environment

    NASA Technical Reports Server (NTRS)

    McMahan, Tracy A.; Shea, Charlotte A.; Finckenor, Miria; Ferguson, Dale

    2007-01-01

    As NASA plans and implements the Vision for Space Exploration, managers, engineers, and scientists need lunar environment information that is readily available and easily accessed. For this effort, lunar environment data was compiled from a variety of missions from Apollo to more recent remote sensing missions, such as Clementine. This valuable information comes not only in the form of measurements and images but also from the observations of astronauts who have visited the Moon and people who have designed spacecraft for lunar missions. To provide a research tool that makes the voluminous lunar data more accessible, the Space Environments and Effects (SEE) Program, managed at NASA's Marshall Space Flight Center (MSFC) in Huntsville, AL, organized the data into a DVD knowledgebase: the Lunar e-Library. This searchable collection of 1100 electronic (.PDF) documents and abstracts makes it easy to find critical technical data and lessons learned from past lunar missions and exploration studies. The SEE Program began distributing the Lunar e-Library DVD in 2006. This paper describes the Lunar e-Library development process (including a description of the databases and resources used to acquire the documents) and the contents of the DVD product, demonstrates its usefulness with focused searches, and provides information on how to obtain this free resource.

  5. Health literacy in the eHealth era: A systematic review of the literature.

    PubMed

    Kim, Henna; Xie, Bo

    2017-06-01

This study aimed to identify studies on online health service use by people with limited health literacy, as the findings could provide insights into how health literacy has been, and should be, addressed in the eHealth era. To identify the relevant literature published since 2010, we performed four rounds of selection: database selection, keyword search, screening of the titles and abstracts, and screening of full texts. This process produced a final set of 74 publications. The themes addressed in the 74 publications fell into five categories: evaluation of health-related content, development and evaluation of eHealth services, development and evaluation of health literacy measurement tools, interventions to improve health literacy, and online health information seeking behavior. Barriers to access to and use of online health information can result from the readability of content and poor usability of eHealth services. We need new health literacy screening tools to identify skills for adequate use of eHealth services. Mobile apps hold great potential for eHealth and mHealth services tailored to people with low health literacy. Efforts should be made to make eHealth services easily accessible to low-literacy individuals and to enhance individual health literacy through educational programs. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Space Environments and Effects Program (SEE)

    NASA Technical Reports Server (NTRS)

    Yhisreal-Rivas, David M.

    2013-01-01

NASA works and documented articles are preserved via the collection of various Space Environments and Effects (SEE) related articles. The SEE program lists the various projects that are ongoing, or have been conducted, with the help of NASA. The goal of the SEE program is to make publicly available the environment technologies that are required to design, manufacture and operate reliable, cost-effective spacecraft for the government and commercial sectors. Of the many projects contained within the SEE program, the Lunar e-Library and Spacecraft Materials Selector (SMS) have been selected for a more user-friendly means of making the tools easily available to the public. This information is still available, but previously required a person or entity to request access from a point of contact at NASA and wait for the requested bundled software DVD via postal service. The material's presentation and availability have been redesigned into a single-step process with a faster turnaround time via the Materials and Processes Technical Information System (MAPTIS) database. This process requires users to register and be verified in order to gain access to the information contained within. Making the software tools and documents available required a combination of specialized in-house data-gathering software tools and software archeology.

  7. What Can OpenEI Do For You?

    ScienceCinema

    None

    2018-02-06

    Open Energy Information (OpenEI) is an open source web platform—similar to the one used by Wikipedia—developed by the US Department of Energy (DOE) and the National Renewable Energy Laboratory (NREL) to make the large amounts of energy-related data and information more easily searched, accessed, and used both by people and automated machine processes. Built utilizing the standards and practices of the Linked Open Data community, the OpenEI platform is much more robust and powerful than typical web sites and databases. As an open platform, all users can search, edit, add, and access data in OpenEI for free. The user community contributes the content and ensures its accuracy and relevance; as the community expands, so does the content's comprehensiveness and quality. The data are structured and tagged with descriptors to enable cross-linking among related data sets, advanced search functionality, and consistent, usable formatting. Data input protocols and quality standards help ensure the content is structured and described properly and derived from a credible source. Although DOE/NREL is developing OpenEI and seeding it with initial data, it is designed to become a true community model with millions of users, a large core of active contributors, and numerous sponsors.

  8. Open-access databases as unprecedented resources and drivers of cultural change in fisheries science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McManamay, Ryan A; Utz, Ryan

    2014-01-01

    Open-access databases with utility in fisheries science have grown exponentially in quantity and scope over the past decade, with profound impacts on our discipline. The management, distillation, and sharing of an exponentially growing stream of open-access data present several fundamental challenges in fisheries science. Many of the currently available open-access resources may not be universally known among fisheries scientists. We therefore introduce many national- and global-scale open-access databases with applications in fisheries science and provide an example of how they can be harnessed to perform valuable analyses without additional field efforts. We also discuss how the development, maintenance, and utilization of open-access data are likely to pose technical, financial, and educational challenges to fisheries scientists. Such cultural implications, which will coincide with the rapidly increasing availability of free data, should compel the American Fisheries Society to actively address these problems now to help ease the forthcoming cultural transition.

  9. Using Quasi-Horizontal Alignment in the absence of the actual alignment.

    PubMed

    Banihashemi, Mohamadreza

    2016-10-01

    Horizontal alignment is a major roadway characteristic used in safety and operational evaluations of many facility types. The Highway Safety Manual (HSM) uses this characteristic in crash prediction models for rural two-lane highways, freeway segments, and freeway ramps/C-D roads. Traffic simulation models use this characteristic in their processes on almost all types of facilities. However, a good portion of roadway databases do not include horizontal alignment data; instead, many contain point coordinate data along the roadways. The SHRP 2 Roadway Information Database (RID) is a good example of this type of data: only about 5% of this geodatabase contains alignment information, while point data can easily be produced for the rest. Although the point data can be used to extract the actual horizontal alignment, doing so is a cumbersome and costly process, especially for a database covering miles and miles of highways. This research introduces a so-called "Quasi-Horizontal Alignment" that can be produced easily and automatically from point coordinate data and can be used in the safety and operational evaluations of highways. SHRP 2 RID data for rural two-lane highways in Washington State are used in this study. This paper presents a process through which Quasi-Horizontal Alignments are produced from point coordinates along highways using spreadsheet software such as MS EXCEL. It is shown that the safety and operational evaluations of highways with Quasi-Horizontal Alignments are almost identical to those with the actual alignments. In the absence of the actual alignment, the Quasi-Horizontal Alignment can easily be produced from any type of database that contains highway coordinates, such as geodatabases and digital maps. Copyright © 2016 Elsevier Ltd. All rights reserved.
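
    The abstract does not reproduce the paper's spreadsheet formulas, but the core idea, deriving heading and approximate curvature directly from consecutive centerline coordinates, can be sketched in Python under the assumption of planar (x, y) point data; the function name and sample points are illustrative only.

    ```python
    import math

    def quasi_alignment(points):
        """From centerline (x, y) points, estimate each segment's heading and
        an approximate curvature (heading change per unit length)."""
        headings = [math.atan2(y1 - y0, x1 - x0)
                    for (x0, y0), (x1, y1) in zip(points, points[1:])]
        curvatures = []
        for i in range(1, len(headings)):
            (x0, y0), (x1, y1) = points[i], points[i + 1]
            seg_len = math.hypot(x1 - x0, y1 - y0)
            # Wrap heading change to [-pi, pi) before dividing by length.
            dh = (headings[i] - headings[i - 1] + math.pi) % (2 * math.pi) - math.pi
            curvatures.append(dh / seg_len if seg_len else 0.0)
        return headings, curvatures

    # A straight run followed by a gentle curve: curvature rises from ~0.
    pts = [(0, 0), (10, 0), (20, 0), (30, 1), (40, 3)]
    print(quasi_alignment(pts))
    ```

    Runs of near-zero curvature can then be treated as tangents and runs of sustained curvature as curves, which is the kind of alignment information that safety and operational evaluations consume.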

  10. SORTEZ: a relational translator for NCBI's ASN.1 database.

    PubMed

    Hart, K W; Searls, D B; Overton, G C

    1994-07-01

    The National Center for Biotechnology Information (NCBI) has created a database collection that includes several protein and nucleic acid sequence databases, a biosequence-specific subset of MEDLINE, as well as value-added information such as links between similar sequences. Information in the NCBI database is modeled in Abstract Syntax Notation 1 (ASN.1), an Open Systems Interconnection protocol designed for exchanging structured data between software applications rather than as a data model for database systems. While the NCBI database is distributed with an easy-to-use information retrieval system, ENTREZ, the ASN.1 data model currently lacks an ad hoc query language for general-purpose data access. For that reason, we have developed a software package, SORTEZ, that transforms the ASN.1 database (or other databases with nested data structures) to a relational data model and subsequently to a relational database management system (Sybase), where information can be accessed through the relational query language SQL. Because the need to transform data from one data model and schema to another arises naturally in several important contexts, including efficient execution of specific applications, access to multiple databases, and adaptation to database evolution, this work also serves as a practical study of the issues involved in the various stages of database transformation. We show that transformation from the ASN.1 data model to a relational data model can be largely automated, but that schema transformation and data conversion require considerable domain expertise and would greatly benefit from additional support tools.
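
    As a concrete illustration of the nested-to-relational mapping the abstract describes, the sketch below flattens a nested record into parent and child tables linked by foreign keys. This is a generic sketch of the technique, not the actual SORTEZ schema; all table and field names are invented for the example.

    ```python
    import itertools

    _ids = itertools.count(1)

    def flatten(record, table, tables, parent_id=None):
        """Flatten one nested record: scalars become columns of this table;
        nested dicts and lists become rows in child tables keyed to the parent."""
        row_id = next(_ids)
        row = {"id": row_id, "parent_id": parent_id}
        for field, value in record.items():
            if isinstance(value, list):            # repeating group -> child table
                for child in value:
                    flatten(child, f"{table}_{field}", tables, row_id)
            elif isinstance(value, dict):          # nested struct -> child table
                flatten(value, f"{table}_{field}", tables, row_id)
            else:
                row[field] = value                 # scalar -> column
        tables.setdefault(table, []).append(row)

    tables = {}
    entry = {"accession": "U00096", "organism": {"name": "Escherichia coli"},
             "refs": [{"pmid": 123}, {"pmid": 456}]}
    flatten(entry, "seq_entry", tables)
    for name, rows in tables.items():
        print(name, rows)
    ```

    Each resulting table maps directly onto a CREATE TABLE statement, after which the data become queryable with ordinary SQL joins along the parent_id keys, which is exactly the ad hoc access the ASN.1 model itself lacks.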

  11. Transcription Factor Information System (TFIS): A Tool for Detection of Transcription Factor Binding Sites.

    PubMed

    Narad, Priyanka; Kumar, Abhishek; Chakraborty, Amlan; Patni, Pranav; Sengupta, Abhishek; Wadhwa, Gulshan; Upadhyaya, K C

    2017-09-01

    Transcription factors are trans-acting proteins that interact with specific nucleotide sequences known as transcription factor binding sites (TFBS), and these interactions are implicated in the regulation of gene expression. Regulation of transcriptional activation of a gene often involves multiple interactions of transcription factors with various sequence elements, and identifying these elements is the first step in understanding the underlying molecular mechanism(s) that regulate gene expression. For in silico identification of these sequence elements, we have developed an online computational tool, the Transcription Factor Information System (TFIS), built as a collection of Java programs and based mainly on TFBS detection using position weight matrices (PWM). Position frequency matrices (PFM) are obtained from JASPAR and HOCOMOCO, open-access databases of transcription factor binding profiles. Pseudo-counts are used when converting a PFM to a PWM, and TFBS detection is carried out on the basis of a percent score taken as the threshold value. TFIS is equipped with advanced features such as retrieving sequences directly from the NCBI database using a gene identification number or accession number, detecting binding sites for a common TF in a batch of gene sequences, and detecting TFBS after generating a PWM from known raw binding sequences, in addition to the general detection methods. TFIS can detect potential TFBSs in both orientations at the same time, which increases its efficiency; the results of this dual detection are presented in colors specific to the orientation of the binding site. Result pages are detailed and specific to the detected TFs, integrating informative links to related web servers such as Gene Ontology, the PAZAR database, and the Transcription Factor Encyclopedia, in addition to NCBI and UniProt. Common TFs such as SP1, AP1, and NF-KB of the amyloid beta precursor gene are easily detected using TFIS, along with their multiple binding sites; in another scenario, TFs of the FOX family (FOXL1 and FOXC1) involved in embryonic development were also identified. TFIS is platform-independent and publicly available, along with its support and documentation, at http://tfistool.appspot.com and http://www.bioinfoplus.com/tfis/ . TFIS is licensed under the GNU General Public License, version 3 (GPL-3.0).
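
    The abstract names the main algorithmic ingredients (PFM-to-PWM conversion with pseudo-counts, percent-score thresholding, and scanning of both strands), so a compact sketch of that pipeline is given below. This is a generic PWM-scanning illustration, not TFIS's Java source; the pseudo-count value and the toy matrix are assumptions.

    ```python
    import math

    def pfm_to_pwm(pfm, pseudo=0.8, bg=0.25):
        """Convert a position frequency matrix to a log-odds PWM,
        adding a pseudo-count to every cell to avoid log(0)."""
        width = len(pfm["A"])
        pwm = {b: [] for b in "ACGT"}
        for i in range(width):
            total = sum(pfm[b][i] for b in "ACGT") + 4 * pseudo
            for b in "ACGT":
                pwm[b].append(math.log2(((pfm[b][i] + pseudo) / total) / bg))
        return pwm

    def scan(seq, pwm, threshold=0.8):
        """Score every window on both strands; report hits whose percent score
        (raw score rescaled between the matrix min and max) passes the threshold."""
        comp = str.maketrans("ACGT", "TGCA")
        w = len(pwm["A"])
        smax = sum(max(pwm[b][i] for b in "ACGT") for i in range(w))
        smin = sum(min(pwm[b][i] for b in "ACGT") for i in range(w))
        hits = []
        for strand, s in (("+", seq), ("-", seq.translate(comp)[::-1])):
            for i in range(len(s) - w + 1):
                score = sum(pwm[s[i + j]][j] for j in range(w))
                pct = (score - smin) / (smax - smin)
                if pct >= threshold:
                    hits.append((strand, i, round(pct, 3)))
        return hits

    # Toy 3-column PFM (illustrative counts, not a JASPAR profile).
    pfm = {"A": [8, 0, 1], "C": [0, 1, 8], "G": [1, 8, 0], "T": [0, 0, 0]}
    print(scan("TTAGCCATAGC", pfm_to_pwm(pfm)))
    ```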

  12. Integrated Space Asset Management Database and Modeling

    NASA Technical Reports Server (NTRS)

    MacLeod, Todd; Gagliano, Larry; Percy, Thomas; Mason, Shane

    2015-01-01

    Effective space asset management is one key to addressing the ever-growing issue of space congestion. It is imperative that agencies around the world have access to data regarding the numerous active assets and pieces of space junk currently tracked in orbit around the Earth. At the center of this issue is the effective management of many types of data related to orbiting objects. As the population of tracked objects grows, so too should the data management structure used to catalog technical specifications, orbital information, and metadata related to those populations. Marshall Space Flight Center's Space Asset Management Database (SAM-D) was implemented in order to effectively catalog a broad set of data related to known objects in space by ingesting information from a variety of databases and processing that data into useful technical information. Using the universal NORAD number as a unique identifier, SAM-D processes two-line element data into orbital characteristics and cross-references this technical data with metadata related to functional status, country of ownership, and application category. SAM-D began as an Excel spreadsheet and was later upgraded to an Access database. While SAM-D performs its task very well, it is limited by its current platform and is not available outside of the local user base. Further, while modeling and simulation (M&S) can be powerful tools for exploiting the information contained in SAM-D, the current system does not allow proper integration options for combining the data with both legacy and new M&S tools. This paper provides a summary of SAM-D development efforts to date and outlines a proposed data management infrastructure that extends SAM-D to support the larger data sets to be generated. A service-oriented architecture model using an information sharing platform named SIMON will allow the system to expand easily to incorporate new capabilities, including advanced analytics, M&S tools, fusion techniques, and user interfaces for visualization. In addition, tight control of information sharing policy will increase confidence in the system, which would encourage industry partners to provide commercial data. Combined with the integration of new and legacy M&S tools, a SIMON-based architecture will provide a robust environment that can be extended and expanded indefinitely.
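
    As an illustration of the two-line element processing the abstract mentions, the sketch below derives basic orbital characteristics from a TLE, keyed by NORAD catalog number. It is a simplified stand-in for SAM-D's internal processing, which the abstract does not publish; the field names and the sample TLE are illustrative.

    ```python
    import math

    MU = 398600.4418      # Earth's gravitational parameter, km^3/s^2
    R_EARTH = 6378.137    # mean equatorial radius, km

    def parse_tle(line1, line2):
        """Extract elements from the fixed TLE column layout and derive the
        semi-major axis and apogee/perigee altitudes from the mean motion."""
        n_rev_day = float(line2[52:63])              # mean motion, rev/day
        n = n_rev_day * 2 * math.pi / 86400.0        # rad/s
        a = (MU / n ** 2) ** (1.0 / 3.0)             # semi-major axis, km
        e = float("0." + line2[26:33].strip())       # eccentricity, implied decimal
        return {
            "norad_id": int(line1[2:7]),
            "inclination_deg": float(line2[8:16]),
            "raan_deg": float(line2[17:25]),
            "eccentricity": e,
            "semi_major_axis_km": round(a, 1),
            "apogee_alt_km": round(a * (1 + e) - R_EARTH, 1),
            "perigee_alt_km": round(a * (1 - e) - R_EARTH, 1),
        }

    # Illustrative ISS-like TLE (not a real epoch; trailing digits arbitrary).
    l1 = "1 25544U 98067A   20029.54791667  .00016717  00000-0  10270-3 0  9000"
    l2 = "2 25544  51.6400 208.9163 0006317  69.9862  25.2906 15.49115461 10000"
    print(parse_tle(l1, l2))
    ```

    Cross-referencing then amounts to joining these derived records with the status, ownership, and application-category metadata tables on norad_id.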

  13. [Status of libraries and databases for natural products abroad].

    PubMed

    Zhao, Li-Mei; Tan, Ning-Hua

    2015-01-01

    Because natural products are an important source for drug discovery, libraries and databases of natural products are significant for their research and development. At present, most compound libraries abroad consist of synthetic or combinatorially synthesized molecules, making natural products difficult to access; and because information on natural products is scattered across sources with different standards, it is difficult to construct convenient, comprehensive, large-scale databases for natural products. This paper reviews the status of currently accessible libraries and databases for natural products abroad and provides important information for the development of natural product libraries and databases.

  14. FRED, a Front End for Databases.

    ERIC Educational Resources Information Center

    Crystal, Maurice I.; Jakobson, Gabriel E.

    1982-01-01

    FRED (a Front End for Databases) was conceived to alleviate data access difficulties posed by the heterogeneous nature of online databases. A hardware/software layer interposed between users and databases, it consists of three subsystems: user-interface, database-interface, and knowledge base. Architectural alternatives for this database machine…

  15. Universal Index System

    NASA Technical Reports Server (NTRS)

    Kelley, Steve; Roussopoulos, Nick; Sellis, Timos; Wallace, Sarah

    1993-01-01

    The Universal Index System (UIS) is an index management system that uses a uniform interface to solve the heterogeneity problem among database management systems. UIS provides an easy-to-use common interface for accessing all underlying data while still accommodating different underlying database management systems, storage representations, and access methods.
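
    The abstract does not detail UIS internals, but the uniform-interface idea it describes can be sketched as a thin dispatch layer over interchangeable index back ends; all class and method names below are invented for illustration.

    ```python
    from abc import ABC, abstractmethod

    class IndexBackend(ABC):
        """Common contract every underlying index implementation must satisfy."""
        @abstractmethod
        def insert(self, key, record_id): ...
        @abstractmethod
        def lookup(self, key): ...

    class HashBackend(IndexBackend):
        """One possible storage representation behind the common interface."""
        def __init__(self):
            self._data = {}
        def insert(self, key, record_id):
            self._data.setdefault(key, []).append(record_id)
        def lookup(self, key):
            return self._data.get(key, [])

    class UniversalIndex:
        """Single entry point: callers name an index, never a back end."""
        def __init__(self):
            self._indexes = {}
        def register(self, name, backend: IndexBackend):
            self._indexes[name] = backend
        def insert(self, name, key, record_id):
            self._indexes[name].insert(key, record_id)
        def lookup(self, name, key):
            return self._indexes[name].lookup(key)

    ui = UniversalIndex()
    ui.register("by_author", HashBackend())
    ui.insert("by_author", "Kelley", 42)
    print(ui.lookup("by_author", "Kelley"))   # -> [42]
    ```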

  16. Village Green Project: Web-accessible Database

    EPA Science Inventory

    The purpose of this web-accessible database is for the public to be able to view instantaneous readings from a solar-powered air monitoring station located in a public location (prototype pilot test is outside of a library in Durham County, NC). The data are wirelessly transmitte...

  17. 47 CFR 64.623 - Administrator requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... administrator of the TRS User Registration Database, the administrator of the VRS Access Technology Reference... parties with a vested interest in the outcome of TRS-related numbering administration and activities. (4) None of the administrator of the TRS User Registration Database, the administrator of the VRS Access...

  18. 47 CFR 64.623 - Administrator requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... administrator of the TRS User Registration Database, the administrator of the VRS Access Technology Reference... parties with a vested interest in the outcome of TRS-related numbering administration and activities. (4) None of the administrator of the TRS User Registration Database, the administrator of the VRS Access...

  19. Designing a multi-petabyte database for LSST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becla, J; Hanushevsky, A

    2005-12-21

    The 3.2 giga-pixel LSST camera will produce over half a petabyte of raw images every month. This data needs to be reduced in under a minute to produce real-time transient alerts, and then cataloged and indexed to allow efficient access and simplify further analysis. The indexed catalogs alone are expected to grow at a speed of about 600 terabytes per year. The sheer volume of data, the real-time transient alerting requirements of the LSST, and its spatio-temporal aspects require cutting-edge techniques to build an efficient data access system at reasonable cost. As currently envisioned, the system will rely on a database for catalogs and metadata. Several database systems are being evaluated to understand how they will scale and perform at these data volumes in anticipated LSST access patterns. This paper describes the LSST requirements, the challenges they impose, the data access philosophy, and the database architecture that is expected to be adopted in order to meet the data challenges.
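
    A quick back-of-envelope check of the quoted volumes helps put the design pressure in perspective; the arithmetic below assumes 30-day months, decimal units (1 PB = 1000 TB), and steady ingest rates, none of which are stated in the abstract.

    ```python
    # Rates quoted in the abstract, converted to per-day figures.
    raw_pb_per_month = 0.5
    catalog_tb_per_year = 600

    raw_tb_per_day = raw_pb_per_month * 1000 / 30      # ~17 TB of raw images/day
    catalog_tb_per_day = catalog_tb_per_year / 365     # ~1.6 TB of catalog growth/day
    print(f"raw: ~{raw_tb_per_day:.0f} TB/day; catalogs: ~{catalog_tb_per_day:.1f} TB/day")
    ```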

  20. Spatial data available on the web at http://mrdata.usgs.gov/

    USGS Publications Warehouse

    Johnson, Bruce

    2002-01-01

    Earth science information is important to decisionmakers who formulate public policy related to mineral resource sustainability, land stewardship, environmental hazards, the economy, and public health. To meet the growing demand for easily accessible data, the Mineral Resources Program has developed, in cooperation with other Federal and State agencies, an Internet-based, data-delivery system that allows interested customers worldwide to download accurate, up-to-date mineral resource-related data at any time. All data in the system are spatially located and customers with Internet access and a modern Web browser can easily produce maps having user-defined overlays for any region of interest.
