Sample records for scientific database models

  1. An Overview of the Object Protocol Model (OPM) and the OPM Data Management Tools.

    ERIC Educational Resources Information Center

    Chen, I-Min A.; Markowitz, Victor M.

    1995-01-01

    Discussion of database management tools for scientific information focuses on the Object Protocol Model (OPM) and data management tools based on OPM. Topics include the need for new constructs for modeling scientific experiments, modeling object structures and experiments in OPM, queries and updates, and developing scientific database applications…

  2. Data structures and organisation: Special problems in scientific applications

    NASA Astrophysics Data System (ADS)

    Read, Brian J.

    1989-12-01

In this paper we discuss and offer answers to the following questions: What, really, are the benefits of databases in physics? Are scientific databases essentially different from conventional ones? What are the drawbacks of a commercial database management system for use with scientific data? Do they outweigh the advantages? Do database systems have adequate graphics facilities, or is a separate graphics package necessary? SQL as a standard language has deficiencies, but what are they for scientific data in particular? Indeed, is the relational model appropriate anyway? Or should we turn to object-oriented databases?

  3. Exploration of options for publishing databases and supplemental material in society journals

USDA-ARS's Scientific Manuscript database

As scientific information becomes increasingly abundant, there is growing interest among members of our societies in sharing databases. These databases have great value, for example, in providing long-term perspectives of various scientific problems and for use by modelers to extend the inform...

  4. The Changing Face of Scientific Discourse: Analysis of Genomic and Proteomic Database Usage and Acceptance.

    ERIC Educational Resources Information Center

    Brown, Cecelia

    2003-01-01

    Discusses the growth in use and acceptance of Web-based genomic and proteomic databases (GPD) in scholarly communication. Confirms the role of GPD in the scientific literature cycle, suggests GPD are a storage and retrieval mechanism for molecular biology information, and recommends that existing models of scientific communication be updated to…

5. Applying an Object-Oriented Database Model to a Scientific Database Problem: Managing Experimental Data at CEBAF.

    NASA Astrophysics Data System (ADS)

    Ehlmann, Bryon K.

    Current scientific experiments are often characterized by massive amounts of very complex data and the need for complex data analysis software. Object-oriented database (OODB) systems have the potential of improving the description of the structure and semantics of this data and of integrating the analysis software with the data. This dissertation results from research to enhance OODB functionality and methodology to support scientific databases (SDBs) and, more specifically, to support a nuclear physics experiments database for the Continuous Electron Beam Accelerator Facility (CEBAF). This research to date has identified a number of problems related to the practical application of OODB technology to the conceptual design of the CEBAF experiments database and other SDBs: the lack of a generally accepted OODB design methodology, the lack of a standard OODB model, the lack of a clear conceptual level in existing OODB models, and the limited support in existing OODB systems for many common object relationships inherent in SDBs. To address these problems, the dissertation describes an Object-Relationship Diagram (ORD) and an Object-oriented Database Definition Language (ODDL) that provide tools that allow SDB design and development to proceed systematically and independently of existing OODB systems. These tools define multi-level, conceptual data models for SDB design, which incorporate a simple notation for describing common types of relationships that occur in SDBs. ODDL allows these relationships and other desirable SDB capabilities to be supported by an extended OODB system. A conceptual model of the CEBAF experiments database is presented in terms of ORDs and the ODDL to demonstrate their functionality and use and provide a foundation for future development of experimental nuclear physics software using an OODB approach.

  6. NoSQL data model for semi-automatic integration of ethnomedicinal plant data from multiple sources.

    PubMed

    Ningthoujam, Sanjoy Singh; Choudhury, Manabendra Dutta; Potsangbam, Kumar Singh; Chetia, Pankaj; Nahar, Lutfun; Sarker, Satyajit D; Basar, Norazah; Das Talukdar, Anupam

    2014-01-01

Sharing traditional knowledge with the scientific community could refine scientific approaches to phytochemical investigation and conservation of ethnomedicinal plants. As such, integration of traditional knowledge with scientific data using a single platform for sharing is greatly needed. However, ethnomedicinal data are available in heterogeneous formats, which depend on cultural aspects, survey methodology and focus of the study. Phytochemical and bioassay data are also available from many open sources in various standard and customised formats. The objective was to design a flexible data model that could integrate both primary and curated ethnomedicinal plant data from multiple sources. The current model is based on MongoDB, one of the Not only Structured Query Language (NoSQL) databases. Although MongoDB does not enforce a schema, modifications were made so that the model could incorporate both standard and customised ethnomedicinal plant data formats from different sources. The model presented can integrate both primary and secondary data related to ethnomedicinal plants. Accommodation of disparate data was accomplished by a feature of this database that supports a different set of fields for each document. It also allowed storage of similar data having different properties. The model presented is scalable to a highly complex level with continuing maturation of the database, and is applicable for storing, retrieving and sharing ethnomedicinal plant data. It can also serve as a flexible alternative to a relational and normalised database. Copyright © 2014 John Wiley & Sons, Ltd.
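    The schemaless-document idea described in this abstract can be sketched in a few lines. Plain Python dicts stand in for MongoDB documents here (a real deployment would use pymongo's insert_one/find); the field names and plant data are illustrative, not taken from the paper.

```python
# Minimal sketch: one "collection" holds documents with different field
# sets, mimicking how a schemaless store accommodates heterogeneous
# ethnomedicinal records. Plain dicts stand in for BSON documents.

collection = []  # stands in for a MongoDB collection

def insert(doc):
    collection.append(doc)

def find(**criteria):
    """Return documents whose fields match all given criteria."""
    return [d for d in collection
            if all(d.get(k) == v for k, v in criteria.items())]

# A primary ethnobotanical survey record and a curated phytochemical
# record can live in the same collection despite disjoint field sets.
insert({"plant": "Ocimum sanctum", "source": "field_survey",
        "local_name": "tulsi", "ailments": ["cough", "fever"]})
insert({"plant": "Ocimum sanctum", "source": "curated",
        "compound": "eugenol", "assay": "antimicrobial"})

hits = find(plant="Ocimum sanctum")      # both documents
surveys = find(source="field_survey")    # only the survey record
```

    The point mirrored from the abstract is that "accommodation of disparate data" costs nothing up front: no schema migration is needed when a new source adds fields.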

  7. Governance and oversight of researcher access to electronic health data: the role of the Independent Scientific Advisory Committee for MHRA database research, 2006-2015.

    PubMed

    Waller, P; Cassell, J A; Saunders, M H; Stevens, R

    2017-03-01

    In order to promote understanding of UK governance and assurance relating to electronic health records research, we present and discuss the role of the Independent Scientific Advisory Committee (ISAC) for MHRA database research in evaluating protocols proposing the use of the Clinical Practice Research Datalink. We describe the development of the Committee's activities between 2006 and 2015, alongside growth in data linkage and wider national electronic health records programmes, including the application and assessment processes, and our approach to undertaking this work. Our model can provide independence, challenge and support to data providers such as the Clinical Practice Research Datalink database which has been used for well over 1,000 medical research projects. ISAC's role in scientific oversight ensures feasible and scientifically acceptable plans are in place, while having both lay and professional membership addresses governance issues in order to protect the integrity of the database and ensure that public confidence is maintained.

  8. The PMDB Protein Model Database

    PubMed Central

    Castrignanò, Tiziana; De Meo, Paolo D'Onorio; Cozzetto, Domenico; Talamo, Ivano Giuseppe; Tramontano, Anna

    2006-01-01

    The Protein Model Database (PMDB) is a public resource aimed at storing manually built 3D models of proteins. The database is designed to provide access to models published in the scientific literature, together with validating experimental data. It is a relational database and it currently contains >74 000 models for ∼240 proteins. The system is accessible at and allows predictors to submit models along with related supporting evidence and users to download them through a simple and intuitive interface. Users can navigate in the database and retrieve models referring to the same target protein or to different regions of the same protein. Each model is assigned a unique identifier that allows interested users to directly access the data. PMID:16381873

  9. SAADA: Astronomical Databases Made Easier

    NASA Astrophysics Data System (ADS)

    Michel, L.; Nguyen, H. N.; Motch, C.

    2005-12-01

Many astronomers wish to share datasets with their community but lack the manpower to develop databases with the functionalities required for high-level scientific applications. The SAADA project aims at automating the creation and deployment of such databases. A generic but scientifically relevant data model has been designed which allows one to build databases by providing only a limited number of product mapping rules. Databases created by SAADA rely on a relational database supporting JDBC, covered by a Java layer that includes a large amount of generated code. Such databases can simultaneously host spectra, images, source lists and plots. Data are grouped in user-defined collections whose content can be seen as one unique set per data type even if their formats differ. Datasets can be correlated with each other using qualified links. These links help, for example, to characterize the nature of a cross-identification (e.g., a distance or a likelihood) or to describe their scientific content (e.g., by associating a spectrum with a catalog entry). The SAADA query engine is based on a language well suited to the data model which can handle constraints on linked data, in addition to classical astronomical queries. These constraints can be applied to the linked objects (number, class and attributes) and/or to the link qualifier values. Databases created by SAADA are accessed through a rich Web interface or a Java API. We are currently developing an interoperability module implementing VO protocols.
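    The "qualified link" concept from this abstract (a cross-match edge carrying qualifier values that queries can constrain) can be illustrated with a small sketch. The identifiers and the distance qualifier below are invented for illustration and are not the SAADA API.

```python
# Sketch of qualified links: each link between two datasets carries
# qualifier values (here a cross-identification distance) that a query
# can constrain, in the spirit of SAADA's link-aware query language.

links = [
    {"from": "catalog:src42", "to": "spectrum:sp7",
     "qualifier": {"distance_arcsec": 1.2}},
    {"from": "catalog:src42", "to": "spectrum:sp9",
     "qualifier": {"distance_arcsec": 8.5}},
]

def linked(from_id, max_distance):
    """Follow links from one object, constrained on the link qualifier."""
    return [l["to"] for l in links
            if l["from"] == from_id
            and l["qualifier"]["distance_arcsec"] <= max_distance]

matches = linked("catalog:src42", 2.0)  # only the close cross-match
```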

  10. Gene regulation knowledge commons: community action takes care of DNA binding transcription factors

    PubMed Central

    Tripathi, Sushil; Vercruysse, Steven; Chawla, Konika; Christie, Karen R.; Blake, Judith A.; Huntley, Rachael P.; Orchard, Sandra; Hermjakob, Henning; Thommesen, Liv; Lægreid, Astrid; Kuiper, Martin

    2016-01-01

A large gap remains between the amount of knowledge in scientific literature and the fraction that gets curated into standardized databases, despite many curation initiatives. Yet the availability of comprehensive knowledge in databases is crucial for exploiting existing background knowledge, both for designing follow-up experiments and for interpreting new experimental data. Structured resources also underpin the computational integration and modeling of regulatory pathways, which further aids our understanding of regulatory dynamics. We argue how cooperation between the scientific community and professional curators can increase the capacity for capturing precise knowledge from literature. We demonstrate this with a project in which we mobilize biological domain experts to curate large numbers of DNA binding transcription factors, and show that, although new to the field of curation, they can make valuable contributions by harvesting reported knowledge from scientific papers. Such community curation can enhance the scientific epistemic process. Database URL: http://www.tfcheckpoint.org PMID:27270715

  11. Quantitative evaluation of Iranian radiology papers and its comparison with selected countries.

    PubMed

    Ghafoori, Mahyar; Emami, Hasan; Sedaghat, Abdolrasoul; Ghiasi, Mohammad; Shakiba, Madjid; Alavi, Manijeh

    2014-01-01

Recent technological developments in medicine, including modern radiology, have increased the impact of scientific research on social life. Scientific outputs such as articles and patents are products that reflect scientists' efforts toward these achievements. In the current study, we evaluate the current situation of Iranian scientists in the field of radiology and compare it with selected countries in terms of scientific papers. For this purpose, we used scientometric tools to quantitatively assess scientific papers in the field of radiology. Radiology papers were evaluated in the context of a medical field audit using a retrospective model. We used the related biomedical databases to extract articles on radiology. In the next step, the standing of the country's scientific output in radiology was determined with respect to the regional countries under study. Results showed a ratio of 0.19% for Iranian papers in the PubMed database published in 2009. In addition, in 2009, Iranian papers constituted 0.29% of the Scopus scientific database. The proportion of Iranian papers in the region under study was 7.6%. To diminish the gap between Iranian scientific radiology papers and those of competitor countries in the region, and to achieve the goals of the 2025 vision document, a manifold effort by the society of radiology is necessary.

  12. Immediate Dissemination of Student Discoveries to a Model Organism Database Enhances Classroom-Based Research Experiences

    PubMed Central

    Wiley, Emily A.; Stover, Nicholas A.

    2014-01-01

    Use of inquiry-based research modules in the classroom has soared over recent years, largely in response to national calls for teaching that provides experience with scientific processes and methodologies. To increase the visibility of in-class studies among interested researchers and to strengthen their impact on student learning, we have extended the typical model of inquiry-based labs to include a means for targeted dissemination of student-generated discoveries. This initiative required: 1) creating a set of research-based lab activities with the potential to yield results that a particular scientific community would find useful and 2) developing a means for immediate sharing of student-generated results. Working toward these goals, we designed guides for course-based research aimed to fulfill the need for functional annotation of the Tetrahymena thermophila genome, and developed an interactive Web database that links directly to the official Tetrahymena Genome Database for immediate, targeted dissemination of student discoveries. This combination of research via the course modules and the opportunity for students to immediately “publish” their novel results on a Web database actively used by outside scientists culminated in a motivational tool that enhanced students’ efforts to engage the scientific process and pursue additional research opportunities beyond the course. PMID:24591511

  14. Earth System Model Development and Analysis using FRE-Curator and Live Access Servers: On-demand analysis of climate model output with data provenance.

    NASA Astrophysics Data System (ADS)

    Radhakrishnan, A.; Balaji, V.; Schweitzer, R.; Nikonov, S.; O'Brien, K.; Vahlenkamp, H.; Burger, E. F.

    2016-12-01

There are distinct phases in the development cycle of an Earth system model. During the model development phase, scientists make changes to code and parameters and require rapid access to results for evaluation. During the production phase, scientists may make an ensemble of runs with different settings and produce large quantities of output that must be further analyzed and quality controlled for scientific papers and for submission to international projects such as the Climate Model Intercomparison Project (CMIP). During this phase, provenance is a key concern: being able to track back from outputs to inputs. We will discuss one of the paths taken at GFDL in delivering tools across this lifecycle, offering on-demand analysis of data by integrating the use of GFDL's in-house FRE-Curator, Unidata's THREDDS, and NOAA PMEL's Live Access Servers (LAS). Experience over this lifecycle suggests that a major difficulty in developing analysis capabilities lies only partially in the scientific content; much of the effort is devoted to answering the questions "where is the data?" and "how do I get to it?". FRE-Curator is the name of a database-centric paradigm used at NOAA GFDL to ingest information about model runs into an RDBMS (the Curator database). The components of FRE-Curator are integrated into the Flexible Runtime Environment workflow and can be invoked during climate model simulation. The front end to FRE-Curator, known as the Model Development Database Interface (MDBI), provides in-house web-based access to GFDL experiments: metadata, analysis output and more. In order to provide on-demand visualization, MDBI uses the Live Access Server, a highly configurable web server designed to provide flexible access to geo-referenced scientific data, which makes use of OPeNDAP. Model output saved in GFDL's tape archive, the size of the database and experiments, and continuous model development initiatives with more dynamic configurations all add complexity and challenges to providing an on-demand visualization experience to our GFDL users.

  15. Gene annotation from scientific literature using mappings between keyword systems.

    PubMed

    Pérez, Antonio J; Perez-Iratxeta, Carolina; Bork, Peer; Thode, Guillermo; Andrade, Miguel A

    2004-09-01

The description of genes in databases by keywords helps the non-specialist to quickly grasp the properties of a gene and increases the efficiency of computational tools that are applied to gene data (e.g. searching a gene database for sequences related to a particular biological process). However, the association of keywords with genes or protein sequences is a difficult process that ultimately implies examination of the literature related to a gene. To support this task, we present a procedure to derive keywords from the set of scientific abstracts related to a gene. Our system is based on the automated extraction of mappings between related terms from different databases using a model of fuzzy associations that can be applied in full generality to any pair of linked databases. We tested the system by annotating genes of the SWISS-PROT database with keywords derived from the abstracts linked to their entries (stored in the MEDLINE database of scientific references). The performance of the annotation procedure was much better for SWISS-PROT keywords (recall of 47%, precision of 68%) than for Gene Ontology terms (recall of 8%, precision of 67%). The algorithm can be publicly accessed and used for the annotation of sequences through a web server at http://www.bork.embl.de/kat
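    The fuzzy-association idea in this abstract can be sketched as a co-occurrence score between terms of two keyword systems that annotate the same entries. The scoring rule below (co-occurrence count divided by source-term frequency) is one common simple choice, not the paper's exact formula, and the toy annotations are invented.

```python
# Toy sketch: map terms between two keyword systems by how often they
# co-annotate the same database entry (e.g. MEDLINE-derived abstract
# terms vs. SWISS-PROT keywords for the same protein entries).
from collections import Counter

def fuzzy_mappings(annotations_a, annotations_b):
    """annotations_*: dict entry_id -> set of terms in that system.
    Returns {(term_a, term_b): score} for each co-occurring term pair,
    where score = P(term_b | term_a) estimated from co-occurrence."""
    count_a = Counter()
    count_ab = Counter()
    for entry, terms_a in annotations_a.items():
        terms_b = annotations_b.get(entry, set())
        for ta in terms_a:
            count_a[ta] += 1
            for tb in terms_b:
                count_ab[(ta, tb)] += 1
    return {(ta, tb): c / count_a[ta] for (ta, tb), c in count_ab.items()}

# Three hypothetical entries with abstract terms and curated keywords.
abstract_terms = {"P1": {"kinase"}, "P2": {"kinase"},
                  "P3": {"kinase", "membrane"}}
swissprot_kw = {"P1": {"Transferase"}, "P2": {"Transferase"},
                "P3": {"Transmembrane"}}
scores = fuzzy_mappings(abstract_terms, swissprot_kw)
```

    A new gene's abstracts could then be annotated by proposing the target-system terms with the highest scores, which is the spirit of the annotation procedure evaluated in the abstract.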

  16. MANAGEMENT AND DISSEMINATION OF HUMAN EXPOSURE DATABASES AND OTHER DATABASES NEEDED FOR HUMAN EXPOSURE MODELING AND ANALYSIS

    EPA Science Inventory

    Researchers in the National Exposure Research Laboratory (NERL) have performed a number of large human exposure measurement studies during the past decade. It is the goal of the NERL to make the data available to other researchers for analysis in order to further the scientific ...

  17. Scientific Use Cases for the Virtual Atomic and Molecular Data Center

    NASA Astrophysics Data System (ADS)

    Dubernet, M. L.; Aboudarham, J.; Ba, Y. A.; Boiziot, M.; Bottinelli, S.; Caux, E.; Endres, C.; Glorian, J. M.; Henry, F.; Lamy, L.; Le Sidaner, P.; Møller, T.; Moreau, N.; Rénié, C.; Roueff, E.; Schilke, P.; Vastel, C.; Zwoelf, C. M.

    2014-12-01

The VAMDC Consortium is a worldwide consortium which federates interoperable atomic and molecular databases through an e-science infrastructure. The contained data are of the highest scientific quality and are crucial for many applications: astrophysics, atmospheric physics, fusion, plasma and lighting technologies, health, etc. In this paper we present astrophysical scientific use cases in relation to the use of the VAMDC e-infrastructure. These cover very different applications, such as: (i) modeling the spectra of interstellar objects using the myXCLASS software tool implemented in the Common Astronomy Software Applications package (CASA), or using the CASSIS software tool in its stand-alone version or implemented in the Herschel Interactive Processing Environment (HIPE); (ii) the use of Virtual Observatory tools accessing VAMDC databases; (iii) the access of VAMDC from the Paris solar BASS2000 portal; (iv) the combination of tools and databases from the APIS service (Auroral Planetary Imaging and Spectroscopy); and (v) the combination of heterogeneous data for application to the interstellar medium from the SPECTCOL tool.

  18. Scientific information repository assisting reflectance spectrometry in legal medicine.

    PubMed

    Belenki, Liudmila; Sterzik, Vera; Bohnert, Michael; Zimmermann, Klaus; Liehr, Andreas W

    2012-06-01

    Reflectance spectrometry is a fast and reliable method for the characterization of human skin if the spectra are analyzed with respect to a physical model describing the optical properties of human skin. For a field study performed at the Institute of Legal Medicine and the Freiburg Materials Research Center of the University of Freiburg, a scientific information repository has been developed, which is a variant of an electronic laboratory notebook and assists in the acquisition, management, and high-throughput analysis of reflectance spectra in heterogeneous research environments. At the core of the repository is a database management system hosting the master data. It is filled with primary data via a graphical user interface (GUI) programmed in Java, which also enables the user to browse the database and access the results of data analysis. The latter is carried out via Matlab, Python, and C programs, which retrieve the primary data from the scientific information repository, perform the analysis, and store the results in the database for further usage.

  19. Outreach and online training services at the Saccharomyces Genome Database.

    PubMed

    MacPherson, Kevin A; Starr, Barry; Wong, Edith D; Dalusag, Kyla S; Hellerstedt, Sage T; Lang, Olivia W; Nash, Robert S; Skrzypek, Marek S; Engel, Stacia R; Cherry, J Michael

    2017-01-01

The Saccharomyces Genome Database (SGD; www.yeastgenome.org), the primary genetics and genomics resource for the budding yeast S. cerevisiae, provides free public access to expertly curated information about the yeast genome and its gene products. As the central hub for the yeast research community, SGD engages in a variety of social outreach efforts to inform our users about new developments, promote collaboration, increase public awareness of the importance of yeast to biomedical research, and facilitate scientific discovery. Here we describe these various outreach methods, from networking at scientific conferences to the use of online media such as blog posts and webinars, and include our perspectives on the benefits provided by outreach activities for model organism databases. © The Author(s) 2017. Published by Oxford University Press.

  20. The Dartmouth Database of Children’s Faces: Acquisition and Validation of a New Face Stimulus Set

    PubMed Central

    Dalrymple, Kirsten A.; Gomez, Jesse; Duchaine, Brad

    2013-01-01

Facial identity and expression play critical roles in our social lives. Faces are therefore frequently used as stimuli in a variety of areas of scientific research. Although several extensive and well-controlled databases of adult faces exist, few databases include children's faces. Here we present the Dartmouth Database of Children's Faces, a set of photographs of 40 male and 40 female Caucasian children between 6 and 16 years of age. Models posed eight facial expressions and were photographed from five camera angles under two lighting conditions. Models wore black hats and black gowns to minimize extra-facial variables. To validate the images, independent raters identified facial expressions, rated their intensity, and provided an age estimate for each model. The Dartmouth Database of Children's Faces is freely available for research purposes and can be downloaded by contacting the corresponding author by email. PMID:24244434

  1. Groundwater modeling in integrated water resources management--visions for 2020.

    PubMed

    Refsgaard, Jens Christian; Højberg, Anker Lajer; Møller, Ingelise; Hansen, Martin; Søndergaard, Verner

    2010-01-01

    Groundwater modeling is undergoing a change from traditional stand-alone studies toward being an integrated part of holistic water resources management procedures. This is illustrated by the development in Denmark, where comprehensive national databases for geologic borehole data, groundwater-related geophysical data, geologic models, as well as a national groundwater-surface water model have been established and integrated to support water management. This has enhanced the benefits of using groundwater models. Based on insight gained from this Danish experience, a scientifically realistic scenario for the use of groundwater modeling in 2020 has been developed, in which groundwater models will be a part of sophisticated databases and modeling systems. The databases and numerical models will be seamlessly integrated, and the tasks of monitoring and modeling will be merged. Numerical models for atmospheric, surface water, and groundwater processes will be coupled in one integrated modeling system that can operate at a wide range of spatial scales. Furthermore, the management systems will be constructed with a focus on building credibility of model and data use among all stakeholders and on facilitating a learning process whereby data and models, as well as stakeholders' understanding of the system, are updated to currently available information. The key scientific challenges for achieving this are (1) developing new methodologies for integration of statistical and qualitative uncertainty; (2) mapping geological heterogeneity and developing scaling methodologies; (3) developing coupled model codes; and (4) developing integrated information systems, including quality assurance and uncertainty information that facilitate active stakeholder involvement and learning.

  2. Heterogeneous database integration in biomedicine.

    PubMed

    Sujansky, W

    2001-08-01

    The rapid expansion of biomedical knowledge, reduction in computing costs, and spread of internet access have created an ocean of electronic data. The decentralized nature of our scientific community and healthcare system, however, has resulted in a patchwork of diverse, or heterogeneous, database implementations, making access to and aggregation of data across databases very difficult. The database heterogeneity problem applies equally to clinical data describing individual patients and biological data characterizing our genome. Specifically, databases are highly heterogeneous with respect to the data models they employ, the data schemas they specify, the query languages they support, and the terminologies they recognize. Heterogeneous database systems attempt to unify disparate databases by providing uniform conceptual schemas that resolve representational heterogeneities, and by providing querying capabilities that aggregate and integrate distributed data. Research in this area has applied a variety of database and knowledge-based techniques, including semantic data modeling, ontology definition, query translation, query optimization, and terminology mapping. Existing systems have addressed heterogeneous database integration in the realms of molecular biology, hospital information systems, and application portability.
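    The uniform-conceptual-schema idea in this abstract can be sketched as a tiny mediator: one query against a shared field name is translated to each source's own column names, and the hits are merged. The source names, schemas, and data below are invented for illustration; real systems add query optimization and terminology mapping on top of this skeleton.

```python
# Minimal mediator sketch for heterogeneous database integration: a
# schema map resolves representational heterogeneity, and a federated
# query aggregates distributed results under one conceptual field name.

SCHEMA_MAP = {  # uniform field -> per-source field name
    "gene_symbol": {"clinical_db": "gene", "genome_db": "symbol"},
}

SOURCES = {  # each source has its own schema for the same concept
    "clinical_db": [{"gene": "TP53", "patient_count": 12}],
    "genome_db": [{"symbol": "TP53", "chromosome": "17"}],
}

def federated_query(field, value):
    """Translate a uniform-schema query to each source and merge hits."""
    hits = []
    for source, rows in SOURCES.items():
        local_field = SCHEMA_MAP[field][source]
        for row in rows:
            if row.get(local_field) == value:
                hits.append({"source": source, **row})
    return hits

results = federated_query("gene_symbol", "TP53")  # hits from both sources
```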

  3. Interdisciplinary Collaboration amongst Colleagues and between Initiatives with the Magnetics Information Consortium (MagIC) Database

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Tauxe, L.; Constable, C.; Jonestrask, L.; Shaar, R.

    2014-12-01

    Earth science grand challenges often require interdisciplinary and geographically distributed scientific collaboration to make significant progress. However, this organic collaboration between researchers, educators, and students only flourishes with the reduction or elimination of technological barriers. The Magnetics Information Consortium (http://earthref.org/MagIC/) is a grass-roots cyberinfrastructure effort envisioned by the geo-, paleo-, and rock magnetic scientific community to archive their wealth of peer-reviewed raw data and interpretations from studies on natural and synthetic samples. MagIC is dedicated to facilitating scientific progress towards several highly multidisciplinary grand challenges and the MagIC Database team is currently beta testing a new MagIC Search Interface and API designed to be flexible enough for the incorporation of large heterogeneous datasets and for horizontal scalability to tens of millions of records and hundreds of requests per second. In an effort to reduce the barriers to effective collaboration, the search interface includes a simplified data model and upload procedure, support for online editing of datasets amongst team members, commenting by reviewers and colleagues, and automated contribution workflows and data retrieval through the API. This web application has been designed to generalize to other databases in MagIC's umbrella website (EarthRef.org) so the Geochemical Earth Reference Model (http://earthref.org/GERM/) portal, Seamount Biogeosciences Network (http://earthref.org/SBN/), EarthRef Digital Archive (http://earthref.org/ERDA/) and EarthRef Reference Database (http://earthref.org/ERR/) will benefit from its development.

  4. DataHub: Knowledge-based data management for data discovery

    NASA Astrophysics Data System (ADS)

    Handley, Thomas H.; Li, Y. Philip

    1993-08-01

Currently available database technology is largely designed for business data-processing applications and seems inadequate for scientific applications. The research described in this paper, the DataHub, will address the issues associated with this shortfall in technology utilization and development. The DataHub development is addressing the key issues in scientific data management of scientific database models and resource sharing in a geographically distributed, multi-disciplinary science research environment. Thus, the DataHub will be a server between the data suppliers and data consumers to facilitate data exchanges, to assist science data analysis, and to provide a systematic approach for science data management. More specifically, the DataHub's objectives are to provide support for (1) exploratory data analysis (i.e., data-driven analysis); (2) data transformations; (3) data semantics capture and usage; (4) analysis-related knowledge capture and usage; and (5) data discovery, ingestion, and extraction. Applying technologies that range from deductive databases, semantic data models, data discovery, knowledge representation and inferencing, and exploratory data analysis techniques to modern man-machine interfaces, DataHub will provide a prototype, integrated environment to support research scientists' needs in multiple disciplines (i.e., oceanography, geology, and atmospheric science) while addressing the more general science data management issues. Additionally, the DataHub will provide data management services to exploratory data analysis applications such as LinkWinds and NCSA's XIMAGE.

  5. Why open drug discovery needs four simple rules for licensing data and models.

    PubMed

    Williams, Antony J; Wilbanks, John; Ekins, Sean

    2012-01-01

    When we look at the rapid growth of scientific databases on the Internet in the past decade, we tend to take the accessibility and provenance of the data for granted. As we see a future of increased database integration, the licensing of the data may be a hurdle that hampers progress and usability. We have formulated four rules for licensing data for open drug discovery, which we propose as a starting point for consideration by databases and for their ultimate adoption. This work could also be extended to the computational models derived from such data. We suggest that scientists in the future will need to consider data licensing before they embark upon re-using such content in databases they construct themselves.

  6. Object-oriented structures supporting remote sensing databases

    NASA Technical Reports Server (NTRS)

    Wichmann, Keith; Cromp, Robert F.

    1995-01-01

    Object-oriented databases show promise for modeling the complex interrelationships pervasive in scientific domains. To examine the utility of this approach, we have developed an Intelligent Information Fusion System based on this technology, and applied it to the problem of managing an active repository of remotely-sensed satellite scenes. The design and implementation of the system is compared and contrasted with conventional relational database techniques, followed by a presentation of the underlying object-oriented data structures used to enable fast indexing into the data holdings.

  7. Organization of Heterogeneous Scientific Data Using the EAV/CR Representation

    PubMed Central

    Nadkarni, Prakash M.; Marenco, Luis; Chen, Roland; Skoufos, Emmanouil; Shepherd, Gordon; Miller, Perry

    1999-01-01

    Entity-attribute-value (EAV) representation is a means of organizing highly heterogeneous data using a relatively simple physical database schema. EAV representation is widely used in the medical domain, most notably in the storage of data related to clinical patient records. Its potential strengths suggest its use in other biomedical areas, in particular research databases whose schemas are complex as well as constantly changing to reflect evolving knowledge in rapidly advancing scientific domains. When deployed for such purposes, the basic EAV representation needs to be augmented significantly to handle the modeling of complex objects (classes) as well as to manage interobject relationships. The authors refer to their modification of the basic EAV paradigm as EAV/CR (EAV with classes and relationships). They describe EAV/CR representation with examples from two biomedical databases that use it. PMID:10579606
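    The EAV organization described above can be sketched minimally in Python with SQLite. This is an illustrative sketch only: the table layout, class names, and attributes are assumptions for demonstration, not the EAV/CR schema of Nadkarni et al. The key property it shows is that adding a new attribute requires no schema change, only a new row.

    ```python
    import sqlite3

    # Minimal EAV/CR-style sketch: one generic "eav" table holds attribute values
    # for any entity; "objects" records each entity's class, and "relations" links
    # objects to one another. All names here are illustrative.
    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE objects  (id INTEGER PRIMARY KEY, class TEXT);
    CREATE TABLE eav      (entity INTEGER, attribute TEXT, value TEXT);
    CREATE TABLE relations(subject INTEGER, predicate TEXT, object INTEGER);
    """)

    # A neuron object with heterogeneous attributes, related to a brain region.
    db.execute("INSERT INTO objects VALUES (1, 'Neuron'), (2, 'BrainRegion')")
    db.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
        (1, "name", "mitral cell"),
        (1, "resting_potential_mV", "-65"),
        (2, "name", "olfactory bulb"),
    ])
    db.execute("INSERT INTO relations VALUES (1, 'located_in', 2)")

    # Retrieving an entity pivots its rows back into attribute/value pairs.
    rows = db.execute(
        "SELECT attribute, value FROM eav WHERE entity = 1 ORDER BY attribute"
    ).fetchall()
    print(rows)  # [('name', 'mitral cell'), ('resting_potential_mV', '-65')]
    ```

    The trade-off, as the abstract notes, is that class and relationship metadata must be managed on top of the bare triples, which is what the CR extension supplies.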

  8. ENVIRONMENTAL INFORMATION MANAGEMENT SYSTEM (EIMS)

    EPA Science Inventory

    The Environmental Information Management System (EIMS) organizes descriptive information (metadata) for data sets, databases, documents, models, projects, and spatial data. The EIMS design provides a repository for scientific documentation that can be easily accessed with standar...

  9. Production and distribution of scientific and technical databases - Comparison among Japan, US and Europe

    NASA Astrophysics Data System (ADS)

    Onodera, Natsuo; Mizukami, Masayuki

This paper estimates several quantitative indices of the production and distribution of scientific and technical databases based on various recent publications and attempts to compare the indices internationally. Raw data used for the estimation come mainly from the Database Directory (published by MITI) for database production and from some domestic and foreign study reports for database revenues. The ratios of the indices among Japan, the US, and Europe for database usage are similar to those for general scientific and technical activities such as population and R&D expenditures. But Japanese contributions to the production, revenue, and cross-country distribution of databases are still lower than those of the US and European countries. An international comparison of relative database activities between the public and private sectors is also discussed.

  10. NCI at Frederick Scientific Library Reintroduces Scientific Publications Database | Poster

    Cancer.gov

    A 20-year-old database of scientific publications by NCI at Frederick, FNLCR, and affiliated employees has gotten a significant facelift. Maintained by the Scientific Library, the redesigned database—which is linked from each of the Scientific Library’s web pages—offers features that were not available in previous versions, such as additional search limits and non-traditional metrics for scholarly and scientific publishing known as altmetrics.

  12. eBASIS (Bioactive Substances in Food Information Systems) and Bioactive Intakes: Major Updates of the Bioactive Compound Composition and Beneficial Bioeffects Database and the Development of a Probabilistic Model to Assess Intakes in Europe.

    PubMed

    Plumb, Jenny; Pigat, Sandrine; Bompola, Foteini; Cushen, Maeve; Pinchen, Hannah; Nørby, Eric; Astley, Siân; Lyons, Jacqueline; Kiely, Mairead; Finglas, Paul

    2017-03-23

    eBASIS (Bioactive Substances in Food Information Systems), a web-based database that contains compositional and biological effects data for bioactive compounds of plant origin, has been updated with new data on fruits and vegetables, wheat and, due to some evidence of potential beneficial effects, extended to include meat bioactives. eBASIS remains one of only a handful of comprehensive and searchable databases, with up-to-date coherent and validated scientific information on the composition of food bioactives and their putative health benefits. The database has a user-friendly, efficient, and flexible interface facilitating use by both the scientific community and food industry. Overall, eBASIS contains data for 267 foods, covering the composition of 794 bioactive compounds, from 1147 quality-evaluated peer-reviewed publications, together with information from 567 publications describing beneficial bioeffect studies carried out in humans. This paper highlights recent updates and expansion of eBASIS and the newly-developed link to a probabilistic intake model, allowing exposure assessment of dietary bioactive compounds to be estimated and modelled in human populations when used in conjunction with national food consumption data. This new tool could assist small- and medium-sized enterprises (SMEs) in the development of food product health claim dossiers for submission to the European Food Safety Authority (EFSA).
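    The probabilistic intake model linked to eBASIS can be illustrated with a small Monte Carlo sketch. This is not the actual eBASIS model: the foods, distributions, and parameter values below are invented for demonstration. The idea it shows is the general one from the abstract: intake is the sum over foods of consumption times compound concentration, with both sampled from distributions so that population exposure percentiles can be estimated.

    ```python
    import random
    import statistics

    # Illustrative Monte Carlo intake sketch (assumed foods and parameters):
    # daily intake of a bioactive compound = sum over foods of
    # (consumption in g/day) x (concentration in mg/g), both sampled.
    random.seed(42)

    foods = {
        # food: (mean consumption g/day, sd, mean concentration mg/g, sd)
        "apple": (120.0, 40.0, 0.05, 0.01),
        "tea":   (300.0, 100.0, 0.8, 0.2),
        "wheat": (150.0, 50.0, 0.02, 0.005),
    }

    def simulate_intake() -> float:
        """One simulated person-day of intake, truncating draws at zero."""
        total = 0.0
        for mean_c, sd_c, mean_k, sd_k in foods.values():
            consumption = max(0.0, random.gauss(mean_c, sd_c))
            concentration = max(0.0, random.gauss(mean_k, sd_k))
            total += consumption * concentration
        return total

    intakes = [simulate_intake() for _ in range(10_000)]
    p95 = statistics.quantiles(intakes, n=20)[-1]  # 95th percentile exposure
    print(round(statistics.mean(intakes), 1), round(p95, 1))
    ```

    In a real assessment, the consumption distributions would come from national food consumption surveys and the concentrations from the quality-evaluated eBASIS composition data.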

  13. Protein Simulation Data in the Relational Model.

    PubMed

    Simms, Andrew M; Daggett, Valerie

    2012-10-01

High performance computing is leading to unprecedented volumes of data. Relational databases offer a robust and scalable model for storing and analyzing scientific data. However, these features do not come without a cost: significant design effort is required to build a functional and efficient repository. Modeling protein simulation data in a relational database presents several challenges: the data captured from individual simulations are large, multi-dimensional, and must integrate with both simulation software and external data sites. Here we present the dimensional design and relational implementation of a comprehensive data warehouse for storing and analyzing molecular dynamics simulations using SQL Server.
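    The dimensional (star-schema) design mentioned above can be sketched in miniature. This is a hedged illustration using SQLite rather than SQL Server, and the table and column names are invented, not the warehouse's actual schema: a fact table of per-frame measurements joined to dimension tables describing the simulation and the time step.

    ```python
    import sqlite3

    # Illustrative star schema for MD simulation data: one fact table of
    # per-frame energies, with simulation and time dimensions. Names are
    # assumptions for demonstration only.
    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE dim_simulation (sim_id INTEGER PRIMARY KEY, protein TEXT, temp_K REAL);
    CREATE TABLE dim_time       (time_id INTEGER PRIMARY KEY, frame INTEGER, ps REAL);
    CREATE TABLE fact_energy (
        sim_id  INTEGER REFERENCES dim_simulation,
        time_id INTEGER REFERENCES dim_time,
        potential_kcal REAL
    );
    """)
    db.execute("INSERT INTO dim_simulation VALUES (1, 'example protein', 298.0)")
    db.executemany("INSERT INTO dim_time VALUES (?, ?, ?)",
                   [(t, t, float(t)) for t in range(5)])
    db.executemany("INSERT INTO fact_energy VALUES (1, ?, ?)",
                   [(t, -1000.0 - t) for t in range(5)])

    # Typical warehouse query: aggregate the fact table, sliced by a dimension.
    avg = db.execute("""
        SELECT AVG(f.potential_kcal)
        FROM fact_energy f JOIN dim_simulation s USING (sim_id)
        WHERE s.temp_K = 298.0
    """).fetchone()[0]
    print(avg)  # -1002.0
    ```

    The design effort the abstract alludes to lies in choosing which measurements become facts and which descriptive attributes become dimensions, so that analysis queries reduce to joins and aggregates like this one.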

  14. Protein Simulation Data in the Relational Model

    PubMed Central

    Simms, Andrew M.; Daggett, Valerie

    2011-01-01

    High performance computing is leading to unprecedented volumes of data. Relational databases offer a robust and scalable model for storing and analyzing scientific data. However, these features do not come without a cost—significant design effort is required to build a functional and efficient repository. Modeling protein simulation data in a relational database presents several challenges: the data captured from individual simulations are large, multi-dimensional, and must integrate with both simulation software and external data sites. Here we present the dimensional design and relational implementation of a comprehensive data warehouse for storing and analyzing molecular dynamics simulations using SQL Server. PMID:23204646

  15. Why Open Drug Discovery Needs Four Simple Rules for Licensing Data and Models

    PubMed Central

    Williams, Antony J.; Wilbanks, John; Ekins, Sean

    2012-01-01

    When we look at the rapid growth of scientific databases on the Internet in the past decade, we tend to take the accessibility and provenance of the data for granted. As we see a future of increased database integration, the licensing of the data may be a hurdle that hampers progress and usability. We have formulated four rules for licensing data for open drug discovery, which we propose as a starting point for consideration by databases and for their ultimate adoption. This work could also be extended to the computational models derived from such data. We suggest that scientists in the future will need to consider data licensing before they embark upon re-using such content in databases they construct themselves. PMID:23028298

  16. M4AST - A Tool for Asteroid Modelling

    NASA Astrophysics Data System (ADS)

    Birlan, Mirel; Popescu, Marcel; Irimiea, Lucian; Binzel, Richard

    2016-10-01

M4AST (Modelling for Asteroids) is an online tool devoted to the analysis and interpretation of reflection spectra of asteroids in the visible and near-infrared spectral intervals. It consists of a spectral database of individual objects and a set of analysis routines that address scientific aspects such as taxonomy, curve matching with laboratory spectra, space weathering models, and mineralogical diagnosis. Spectral data were obtained using ground-based facilities; part of these data are compiled from the literature [1]. The database is composed of permanent and temporary files. Each permanent file contains a header and two or three columns (wavelength, spectral reflectance, and the error on spectral reflectance). Temporary files can be uploaded anonymously and are purged periodically to protect the ownership of the submitted data. The computing routines are organized around several scientific objectives: visualize spectra, compute the asteroid taxonomic class, compare an asteroid spectrum with similar spectra of meteorites, and compute mineralogical parameters. A facility for using the Virtual Observatory protocols was also developed. A new version of the service was released in June 2016. This release of M4AST contains a database and facilities to model more than 6,000 asteroid spectra. A new web interface was designed, allowing new functionality in a user-friendly environment. A bridge system for accessing and exploiting the SMASS-MIT database (http://smass.mit.edu) allows these data to be treated and analysed within the M4AST environment. Reference: [1] M. Popescu, M. Birlan, and D.A. Nedelcu, "Modeling of asteroids: M4AST," Astronomy & Astrophysics 544, A130, 2012.

  17. Mining and Indexing Graph Databases

    ERIC Educational Resources Information Center

    Yuan, Dayu

    2013-01-01

    Graphs are widely used to model structures and relationships of objects in various scientific and commercial fields. Chemical molecules, proteins, malware system-call dependencies and three-dimensional mechanical parts are all modeled as graphs. In this dissertation, we propose to mine and index those graph data to enable fast and scalable search.…
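    One common family of techniques for the indexing problem above is feature-based filtering, sketched here as a hedged illustration (not necessarily the dissertation's exact method): each graph is summarized by its labeled edges, an inverted index maps each feature to the graphs containing it, and a subgraph query intersects posting lists to get candidates, which a full matcher would then verify.

    ```python
    from collections import defaultdict

    # Toy graph corpus: each graph is a bag of labeled edges. Names and
    # structures are invented for illustration.
    graphs = {
        "aspirin-like": [("C", "C"), ("C", "O"), ("C", "O")],
        "ethanol-like": [("C", "C"), ("C", "O"), ("O", "H")],
        "methane-like": [("C", "H")],
    }

    # Inverted index: edge feature -> set of graph ids containing it.
    index = defaultdict(set)
    for gid, edges in graphs.items():
        for edge in edges:
            index[edge].add(gid)

    def candidates(query_edges):
        """Graphs containing every edge feature of the query (a filter only;
        a real system would verify candidates with subgraph isomorphism)."""
        result = None
        for edge in query_edges:
            posting = index.get(edge, set())
            result = posting if result is None else result & posting
        return result or set()

    print(sorted(candidates([("C", "C"), ("C", "O")])))  # ['aspirin-like', 'ethanol-like']
    ```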

  18. Scientific Data Services -- A High-Performance I/O System with Array Semantics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Byna, Surendra; Rotem, Doron

    2011-09-21

As high-performance computing approaches exascale, the existing I/O system design is having trouble keeping pace in both performance and scalability. We propose to address this challenge by adopting database principles and techniques in parallel I/O systems. First, we propose to adopt an array data model because many scientific applications represent their data in arrays. This strategy follows a cardinal principle from database research, which separates the logical view from the physical layout of data. This high-level data model gives the underlying implementation more freedom to optimize the physical layout and to choose the most effective way of accessing the data. For example, knowing that a set of write operations is working on a single multi-dimensional array makes it possible to keep the subarrays in a log structure during the write operations and reassemble them later into another physical layout as resources permit. While maintaining the high-level view, the storage system could compress the user data to reduce the physical storage requirement, collocate data records that are frequently used together, or replicate data to increase availability and fault-tolerance. Additionally, the system could generate secondary data structures such as database indexes and summary statistics. We expect the proposed Scientific Data Services approach to create a “live” storage system that dynamically adjusts to user demands and evolves with the massively parallel storage hardware.
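    The log-then-reassemble idea in the abstract can be shown in a minimal, pure-Python sketch (illustrative only; a real system would operate on binary chunks on parallel storage): subarray writes against a logical 2-D array are appended to a log as (offset, subarray) records, and only later replayed into a contiguous row-major layout.

    ```python
    # Minimal sketch of log-structured subarray writes and later reassembly.
    # Function names and the list-of-lists representation are illustrative.

    def log_write(log, row0, col0, subarray):
        """Append a subarray write without touching the final layout."""
        log.append((row0, col0, subarray))

    def reassemble(log, nrows, ncols):
        """Replay the log into one contiguous row-major array (list of lists)."""
        grid = [[0.0] * ncols for _ in range(nrows)]
        for row0, col0, sub in log:
            for i, row in enumerate(sub):
                for j, value in enumerate(row):
                    grid[row0 + i][col0 + j] = value
        return grid

    log = []
    log_write(log, 0, 0, [[1.0, 2.0], [3.0, 4.0]])  # top-left 2x2 tile
    log_write(log, 2, 2, [[9.0]])                   # single element
    grid = reassemble(log, 4, 4)
    print(grid[0][1], grid[2][2])  # 2.0 9.0
    ```

    Because writers only append, they never contend on the final layout; the reassembly step is where the system is free to choose compression, collocation, or replication, as the abstract describes.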

  19. Optical components damage parameters database system

    NASA Astrophysics Data System (ADS)

    Tao, Yizheng; Li, Xinglan; Jin, Yuquan; Xie, Dongmei; Tang, Dingyong

    2012-10-01

Optical components are key elements of large-scale laser devices; their load capacity directly determines the device's output capability and depends on many factors. By digitizing the various factors affecting optical component load capacity into a damage-parameters database, the system provides data support as a scientific basis for assessing load capacity. Using business-process and model-driven approaches, a component damage parameter information model and database system were established. Application results show that the system meets the business-process and data-management requirements of optical component damage testing; its parameters are flexible and configurable, and the system is simple and easy to use, improving the efficiency of optical component damage testing.

  20. Modernizing the MagIC Paleomagnetic and Rock Magnetic Database Technology Stack to Encourage Code Reuse and Reproducible Science

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Jonestrask, L.; Tauxe, L.; Constable, C.

    2016-12-01

    The Magnetics Information Consortium (https://earthref.org/MagIC/) develops and maintains a database and web application for supporting the paleo-, geo-, and rock magnetic scientific community. Historically, this objective has been met with an Oracle database and a Perl web application at the San Diego Supercomputer Center (SDSC). The Oracle Enterprise Cluster at SDSC, however, was decommissioned in July of 2016 and the cost for MagIC to continue using Oracle became prohibitive. This provided MagIC with a unique opportunity to reexamine the entire technology stack and data model. MagIC has developed an open-source web application using the Meteor (http://meteor.com) framework and a MongoDB database. The simplicity of the open-source full-stack framework that Meteor provides has improved MagIC's development pace and the increased flexibility of the data schema in MongoDB encouraged the reorganization of the MagIC Data Model. As a result of incorporating actively developed open-source projects into the technology stack, MagIC has benefited from their vibrant software development communities. This has translated into a more modern web application that has significantly improved the user experience for the paleo-, geo-, and rock magnetic scientific community.

  1. Database assessment of CMIP5 and hydrological models to determine flood risk areas

    NASA Astrophysics Data System (ADS)

    Limlahapun, Ponthip; Fukui, Hiromichi

    2016-11-01

Water-related disaster problems may not be solved with a single scientific method. Based on this premise, we combined logical concepts, sequential linkage of results amongst models, and database applications in an attempt to analyse historical and future scenarios in the context of flooding. The three main models used in this study are (1) the fifth phase of the Coupled Model Intercomparison Project (CMIP5), to derive precipitation; (2) the Integrated Flood Analysis System (IFAS), to extract the amount of discharge; and (3) the Hydrologic Engineering Center (HEC) model, to generate inundated areas. This research notably focused on integrating data regardless of system-design complexity; database approaches are flexible, manageable, and well suited to system data transfer, which makes them suitable for monitoring a flood. The resulting flood maps, together with real-time stream data, can help local communities identify areas at risk of flooding in advance.

  2. Causal biological network database: a comprehensive platform of causal biological network models focused on the pulmonary and vascular systems

    PubMed Central

    Boué, Stéphanie; Talikka, Marja; Westra, Jurjen Willem; Hayes, William; Di Fabio, Anselmo; Park, Jennifer; Schlage, Walter K.; Sewer, Alain; Fields, Brett; Ansari, Sam; Martin, Florian; Veljkovic, Emilija; Kenney, Renee; Peitsch, Manuel C.; Hoeng, Julia

    2015-01-01

    With the wealth of publications and data available, powerful and transparent computational approaches are required to represent measured data and scientific knowledge in a computable and searchable format. We developed a set of biological network models, scripted in the Biological Expression Language, that reflect causal signaling pathways across a wide range of biological processes, including cell fate, cell stress, cell proliferation, inflammation, tissue repair and angiogenesis in the pulmonary and cardiovascular context. This comprehensive collection of networks is now freely available to the scientific community in a centralized web-based repository, the Causal Biological Network database, which is composed of over 120 manually curated and well annotated biological network models and can be accessed at http://causalbionet.com. The website accesses a MongoDB, which stores all versions of the networks as JSON objects and allows users to search for genes, proteins, biological processes, small molecules and keywords in the network descriptions to retrieve biological networks of interest. The content of the networks can be visualized and browsed. Nodes and edges can be filtered and all supporting evidence for the edges can be browsed and is linked to the original articles in PubMed. Moreover, networks may be downloaded for further visualization and evaluation. Database URL: http://causalbionet.com PMID:25887162
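    The abstract describes networks stored as JSON documents whose nodes and edges can be searched and filtered. A hedged sketch of that kind of document and query follows; the field names and content are assumptions for illustration, not the actual Causal Biological Network schema.

    ```python
    # Hypothetical JSON-style network document with evidence-annotated edges.
    network = {
        "name": "inflammation-demo",
        "nodes": [
            {"id": "TNF",   "type": "protein"},
            {"id": "NFKB1", "type": "protein"},
            {"id": "IL6",   "type": "protein"},
        ],
        "edges": [
            {"source": "TNF", "relation": "increases", "target": "NFKB1",
             "evidence": ["PMID:0000001"]},
            {"source": "NFKB1", "relation": "increases", "target": "IL6",
             "evidence": ["PMID:0000002"]},
        ],
    }

    def edges_touching(network, gene):
        """Return edges whose source or target matches a queried gene,
        analogous to searching the repository for a gene of interest."""
        return [e for e in network["edges"]
                if gene in (e["source"], e["target"])]

    hits = edges_touching(network, "NFKB1")
    print([(e["source"], e["target"]) for e in hits])  # [('TNF', 'NFKB1'), ('NFKB1', 'IL6')]
    ```

    In the real repository, a document like this would be one versioned object in MongoDB, with each edge's `evidence` list linking back to the supporting PubMed articles.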

  3. eBASIS (Bioactive Substances in Food Information Systems) and Bioactive Intakes: Major Updates of the Bioactive Compound Composition and Beneficial Bioeffects Database and the Development of a Probabilistic Model to Assess Intakes in Europe

    PubMed Central

    Plumb, Jenny; Pigat, Sandrine; Bompola, Foteini; Cushen, Maeve; Pinchen, Hannah; Nørby, Eric; Astley, Siân; Lyons, Jacqueline; Kiely, Mairead; Finglas, Paul

    2017-01-01

    eBASIS (Bioactive Substances in Food Information Systems), a web-based database that contains compositional and biological effects data for bioactive compounds of plant origin, has been updated with new data on fruits and vegetables, wheat and, due to some evidence of potential beneficial effects, extended to include meat bioactives. eBASIS remains one of only a handful of comprehensive and searchable databases, with up-to-date coherent and validated scientific information on the composition of food bioactives and their putative health benefits. The database has a user-friendly, efficient, and flexible interface facilitating use by both the scientific community and food industry. Overall, eBASIS contains data for 267 foods, covering the composition of 794 bioactive compounds, from 1147 quality-evaluated peer-reviewed publications, together with information from 567 publications describing beneficial bioeffect studies carried out in humans. This paper highlights recent updates and expansion of eBASIS and the newly-developed link to a probabilistic intake model, allowing exposure assessment of dietary bioactive compounds to be estimated and modelled in human populations when used in conjunction with national food consumption data. This new tool could assist small- and medium-sized enterprises (SMEs) in the development of food product health claim dossiers for submission to the European Food Safety Authority (EFSA). PMID:28333085

  4. Environment Online: The Greening of Databases. Part 2. Scientific and Technical Databases.

    ERIC Educational Resources Information Center

    Alston, Patricia Gayle

    1991-01-01

    This second in a series of articles about online sources of environmental information describes scientific and technical databases that are useful for searching environmental data. Topics covered include chemicals and hazardous substances; agriculture; pesticides; water; forestry, oil, and energy resources; air; environmental and occupational…

  5. [Presence of the biomedical periodicals of Hungarian editions in international databases].

    PubMed

    Vasas, Lívia; Hercsel, Imréné

    2006-01-15

The majority of Hungarian scientific results in medicine and related sciences are published in foreign-edited scientific periodicals with high impact factor (IF) values, and they appear in the international scientific literature in foreign languages. In this study the authors dealt only with the presence and registered citation in international databases of those periodicals that are published in Hungary and/or in cooperation with foreign publishing companies. The survey went back to 1980 and covered a 25-year period. 110 periodicals were selected for more detailed examination. The authors analyzed the situation of the current periodicals in the three most often visited databases (MEDLINE, EMBASE, Web of Science) and found that the biomedical scientific periodicals of Hungarian interest were not represented with reasonable emphasis in the relevant international bibliographic databases. Owing to the great amount of data, the scientific literature of medicine and related sciences could not be represented in its entirety; this publication, however, may give useful information to inquirers and call the attention of those concerned.

  6. A design for the geoinformatics system

    NASA Astrophysics Data System (ADS)

    Allison, M. L.

    2002-12-01

    Informatics integrates and applies information technologies with scientific and technical disciplines. A geoinformatics system targets the spatially based sciences. The system is not a master database, but will collect pertinent information from disparate databases distributed around the world. Seamless interoperability of databases promises quantum leaps in productivity not only for scientific researchers but also for many areas of society including business and government. The system will incorporate: acquisition of analog and digital legacy data; efficient information and data retrieval mechanisms (via data mining and web services); accessibility to and application of visualization, analysis, and modeling capabilities; online workspace, software, and tutorials; GIS; integration with online scientific journal aggregates and digital libraries; access to real time data collection and dissemination; user-defined automatic notification and quality control filtering for selection of new resources; and application to field techniques such as mapping. In practical terms, such a system will provide the ability to gather data over the Web from a variety of distributed sources, regardless of computer operating systems, database formats, and servers. Search engines will gather data about any geographic location, above, on, or below ground, covering any geologic time, and at any scale or detail. A distributed network of digital geolibraries can archive permanent copies of databases at risk of being discontinued and those that continue to be maintained by the data authors. The geoinformatics system will generate results from widely distributed sources to function as a dynamic data network. Instead of posting a variety of pre-made tables, charts, or maps based on static databases, the interactive dynamic system creates these products on the fly, each time an inquiry is made, using the latest information in the appropriate databases. 
Thus, in the dynamic system, a map generated today may differ from one created yesterday and one to be created tomorrow, because the databases used to make it are constantly (and sometimes automatically) being updated.

  7. Mathematical models for exploring different aspects of genotoxicity and carcinogenicity databases.

    PubMed

    Benigni, R; Giuliani, A

    1991-12-01

    One great obstacle to understanding and using the information contained in the genotoxicity and carcinogenicity databases is the very size of such databases. Their vastness makes them difficult to read; this leads to inadequate exploitation of the information, which becomes costly in terms of time, labor, and money. In its search for adequate approaches to the problem, the scientific community has, curiously, almost entirely neglected an existent series of very powerful methods of data analysis: the multivariate data analysis techniques. These methods were specifically designed for exploring large data sets. This paper presents the multivariate techniques and reports a number of applications to genotoxicity problems. These studies show how biology and mathematical modeling can be combined and how successful this combination is.

  8. Nonuniversal power law scaling in the probability distribution of scientific citations

    PubMed Central

    Peterson, George J.; Pressé, Steve; Dill, Ken A.

    2010-01-01

    We develop a model for the distribution of scientific citations. The model involves a dual mechanism: in the direct mechanism, the author of a new paper finds an old paper A and cites it. In the indirect mechanism, the author of a new paper finds an old paper A only via the reference list of a newer intermediary paper B, which has previously cited A. By comparison to citation databases, we find that papers having few citations are cited mainly by the direct mechanism. Papers already having many citations (“classics”) are cited mainly by the indirect mechanism. The indirect mechanism gives a power-law tail. The “tipping point” at which a paper becomes a classic is about 25 citations for papers published in the Institute for Scientific Information (ISI) Web of Science database in 1981, 31 for Physical Review D papers published from 1975–1994, and 37 for all publications from a list of high h-index chemists assembled in 2007. The power-law exponent is not universal. Individuals who are highly cited have a systematically smaller exponent than individuals who are less cited. PMID:20805513
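    The dual mechanism in this abstract lends itself to a toy simulation, given here as an illustrative sketch rather than the authors' fitted model: each new paper either cites a uniformly chosen older paper (direct) or follows the reference list of a randomly chosen newer paper (indirect), which preferentially lands on already-cited "classics" and produces a heavy tail.

    ```python
    import random

    # Toy simulation of the direct/indirect citation mechanism. The mixing
    # probability and one-reference-per-paper simplification are assumptions.
    random.seed(7)
    P_DIRECT = 0.5
    references = [[]]   # references[i] = papers cited by paper i
    citations = [0]     # citations[i] = times paper i has been cited

    for new in range(1, 5000):
        if random.random() < P_DIRECT or new < 2:
            cited = random.randrange(new)       # direct: uniform over old papers
        else:
            via = random.randrange(1, new)      # indirect: follow a reference
            cited = (random.choice(references[via])
                     if references[via] else random.randrange(new))
        references.append([cited])
        citations[cited] += 1
        citations.append(0)

    # The indirect mechanism skews the distribution: a few papers gather many
    # citations while the typical paper gets almost none.
    print(max(citations), sorted(citations)[len(citations) // 2])
    ```

    With the indirect branch disabled (P_DIRECT = 1), the same loop yields a nearly uniform citation count, which is the qualitative contrast the model uses to explain the "tipping point" into the power-law tail.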

  9. Comparison Study of Overlap among 21 Scientific Databases in Searching Pesticide Information.

    ERIC Educational Resources Information Center

    Meyer, Daniel E.; And Others

    1983-01-01

    Evaluates overlapping coverage of 21 scientific databases used in 10 online pesticide searches in an attempt to identify minimum number of databases needed to generate 90 percent of unique, relevant citations for given search. Comparison of searches combined under given pesticide usage (herbicide, fungicide, insecticide) is discussed. Nine…

  10. NRLMSISE-00 Empirical Model of the Atmosphere: Statistical Comparisons and Scientific Issues

    NASA Technical Reports Server (NTRS)

    Aikin, A. C.; Picone, J. M.; Hedin, A. E.; Drob, D. P.

    2001-01-01

    The new NRLMSISE-00 model and the associated NRLMSIS database now include the following data: (1) total mass density from satellite accelerometers and from orbit determination, including the Jacchia and Barlier data; (2) temperature from incoherent scatter radar, and; (3) molecular oxygen number density, [O2], from solar ultraviolet occultation aboard the Solar Maximum Mission (SMM). A new component, 'anomalous oxygen,' allows for appreciable O(+) and hot atomic oxygen contributions to the total mass density at high altitudes and applies primarily to drag estimation above 500 km. Extensive tables compare our entire database to the NRLMSISE-00, MSISE-90, and Jacchia-70 models for different altitude bands and levels of geomagnetic activity. We also investigate scientific issues related to the new data sets in the NRLMSIS database. Especially noteworthy is the solar activity dependence of the Jacchia data, with which we investigate a large O(+) contribution to the total mass density under the combination of summer, low solar activity, high latitudes, and high altitudes. Under these conditions, except at very low solar activity, the Jacchia data and the Jacchia-70 model indeed show a significantly higher total mass density than does MSISE-90. However, under the corresponding winter conditions, the MSIS-class models represent a noticeable improvement relative to Jacchia-70 over a wide range of F(sub 10.7). Considering the two regimes together, NRLMSISE-00 achieves an improvement over both MSISE-90 and Jacchia-70 by incorporating advantages of each.

  11. The AMMA information system

    NASA Astrophysics Data System (ADS)

    Brissebrat, Guillaume; Fleury, Laurence; Boichard, Jean-Luc; Cloché, Sophie; Eymard, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim; Asencio, Nicole; Favot, Florence; Roussot, Odile

    2013-04-01

The AMMA information system aims at expediting the communication of data and scientific results inside the AMMA community and beyond. It has already been adopted as the data management system by several projects and is meant to become a reference information system about the West Africa region for the whole scientific community. The AMMA database and the associated online tools have been developed and are managed by two French teams (IPSL Database Centre, Palaiseau and OMP Data Service, Toulouse). The complete system has been fully duplicated and is operated by the AGRHYMET Regional Centre in Niamey, Niger. The AMMA database contains a wide variety of datasets: - about 250 local observation datasets, covering geophysical components (atmosphere, ocean, soil, vegetation) and human activities (agronomy, health...); they come from either operational networks or scientific experiments, and include historical data in West Africa from 1850; - 1350 outputs of a socio-economic questionnaire; - 60 operational satellite products and several research products; - 10 output sets of meteorological and ocean operational models and 15 of research simulations. Database users can access all the data using either the portal http://database.amma-international.org or http://amma.agrhymet.ne/amma-data. Different modules are available. The complete catalogue enables access to metadata (i.e., information about the datasets) that are compliant with international standards (ISO19115, INSPIRE...). Registration pages enable users to read and sign the data and publication policy, and to apply for a user database account. The data access interface enables users to easily build a data extraction request by selecting various criteria like location, time, parameters...
    At present the AMMA database counts more than 740 registered users and processes about 80 data requests every month. To support day-to-day monitoring of meteorological and environmental conditions over West Africa, several quick-look and report display websites have been developed. They met the operational needs of the observational teams during the AMMA 2006 (http://aoc.amma-international.org) and FENNEC 2011 (http://fenoc.sedoo.fr) campaigns, and they also enable scientific teams to share physical indices throughout the monsoon season (http://misva.sedoo.fr, since 2011). A collaborative WIKINDX tool has been put online to manage scientific publications and communications of interest to AMMA (http://biblio.amma-international.org). The bibliographic database now counts about 1200 references and is the most exhaustive openly available document collection on the African monsoon. Every scientist is invited to make use of the different AMMA online tools and data. Scientists or project leaders who have data management needs for existing or future datasets over West Africa are welcome to use the AMMA database framework and to contact ammaAdmin@sedoo.fr.
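
    As an illustration of the data access interface described above, the following sketch assembles an extraction request from selection criteria (bounding box, time range, parameter list). The base URL path and all query field names are assumptions invented for illustration; the AMMA portal's actual request format is not documented here.

```python
from urllib.parse import urlencode

# Illustrative sketch only: the portal's request fields are not documented
# here, so the "/extract" path and every parameter name are assumptions.
BASE_URL = "http://database.amma-international.org/extract"

def build_extraction_request(bbox, start, end, parameters):
    """Assemble a hypothetical data-extraction query from selection
    criteria: bounding box, time range and geophysical parameters."""
    query = {
        "bbox": ",".join(str(v) for v in bbox),  # lon_min, lat_min, lon_max, lat_max
        "start": start,
        "end": end,
        "params": ",".join(parameters),
    }
    return BASE_URL + "?" + urlencode(query)

url = build_extraction_request(
    bbox=(-10.0, 5.0, 10.0, 20.0),
    start="2006-06-01", end="2006-09-30",
    parameters=["precipitation", "air_temperature"],
)
print(url)
```

    A real client would submit the resulting URL and receive the extracted dataset; the point is only that a request is a composition of independent selection criteria.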

  12. Design of web platform for science and engineering in the model of open market

    NASA Astrophysics Data System (ADS)

    Demichev, A. P.; Kryukov, A. P.

    2016-09-01

    This paper presents the design and operation algorithms of a web platform for convenient, secure and effective remote interaction, on open-market principles, between users and providers of scientific application software and databases.

  13. Creating a FIESTA (Framework for Integrated Earth Science and Technology Applications) with MagIC

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Tauxe, L.; Constable, C.

    2017-12-01

    The Magnetics Information Consortium (https://earthref.org/MagIC) has recently developed a containerized web application that considerably reduces the friction in contributing, exploring and combining valuable and complex datasets for the paleo-, geo- and rock magnetic scientific community. The data produced in this scientific domain are inherently hierarchical, and the community's evolving approaches to the scientific workflow, from sampling to taking measurements to multiple levels of interpretation, require a large and flexible data model to adequately annotate the results and ensure reproducibility. Historically, contributing such detail in a consistent format has been prohibitively time consuming and often resulted in publishing only the highly derived interpretations. The new open-source application (https://github.com/earthref/MagIC) provides a flexible upload tool, integrated with the data model, for easily creating a validated contribution, and a powerful search interface for discovering datasets and combining them to enable transformative science. MagIC is hosted at EarthRef.org along with several interdisciplinary geoscience databases. FIESTA (Framework for Integrated Earth Science and Technology Applications) is being created by generalizing MagIC's web application for reuse in other domains. The application relies on a single configuration document that describes the routing, data model, component settings and external service integrations. The container hosts an isomorphic Meteor JavaScript application, a MongoDB database and an ElasticSearch search engine. Multiple containers can be configured as microservices that serve portions of the application, or can rely on externally hosted MongoDB, ElasticSearch or third-party services to scale efficiently with computational demand.
    FIESTA is particularly well suited to many Earth Science disciplines, offering a flexible data model, mapping, account management, an upload tool with private workspaces, reference metadata, image galleries, full-text search and detailed filters. EarthRef's Seamount Catalog of bathymetry and morphology data, EarthRef's Geochemical Earth Reference Model (GERM) databases, and Oregon State University's Marine and Geology Repository (http://osu-mgr.org) will benefit from custom adaptations of FIESTA.
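
    The "single configuration document" idea can be sketched as follows. Every key name below is a hypothetical placeholder, not FIESTA's published schema; the point is that routing, data model, component settings and service integrations all live in one document that the container can validate at startup.

```python
# Hypothetical sketch of a FIESTA-style configuration document. The actual
# schema is not published here; all key names are illustrative assumptions.
fiesta_config = {
    "routing": {"/": "Home", "/search": "SearchInterface", "/upload": "UploadTool"},
    "data_model": {
        "tables": ["contribution", "sites", "samples", "measurements"],
        "version": "3.0",
    },
    "components": {"map": {"enabled": True}, "image_gallery": {"enabled": True}},
    "services": {"search_engine": "elasticsearch", "database": "mongodb"},
}

REQUIRED_SECTIONS = ("routing", "data_model", "components", "services")

def validate_config(config):
    """Return the required top-level sections missing from a config."""
    return [s for s in REQUIRED_SECTIONS if s not in config]

# A complete document passes validation; an incomplete one reports the gaps.
assert validate_config(fiesta_config) == []
```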

  14. Validated environmental and physiological data from the CELSS Breadboard Projects Biomass Production Chamber. BWT931 (Wheat cv. Yecora Rojo)

    NASA Technical Reports Server (NTRS)

    Stutte, G. W.; Mackowiak, C. L.; Markwell, G. A.; Wheeler, R. M.; Sager, J. C.

    1993-01-01

    This KSC database is being made available to the scientific research community to facilitate the development of crop development models, to test monitoring and control strategies, and to identify environmental limitations in crop production systems. The KSC validated dataset consists of 17 parameters necessary to maintain bioregenerative life support functions: water purification, CO2 removal, O2 production, and biomass production. The data are available on disk as either a DATABASE SUBSET (one week of 5-minute data) or DATABASE SUMMARY (daily averages of parameters). Online access to the VALIDATED DATABASE will be made available to institutions with specific programmatic requirements. Availability and access to the KSC validated database are subject to approval and limitations implicit in KSC computer security policies.
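
    The relationship between the two distribution formats can be sketched as follows: a DATABASE SUMMARY (daily averages) is derivable from a DATABASE SUBSET (5-minute records) by averaging per day and parameter. The field names and values below are illustrative only, not the actual KSC parameters.

```python
from collections import defaultdict
from statistics import mean

# Toy 5-minute records: (day, parameter, value). Values are invented.
records = [
    ("1993-01-01", "co2_ppm", 1190.0),
    ("1993-01-01", "co2_ppm", 1210.0),
    ("1993-01-02", "co2_ppm", 1100.0),
]

def daily_averages(rows):
    """Collapse 5-minute records into one average per (day, parameter)."""
    grouped = defaultdict(list)
    for day, param, value in rows:
        grouped[(day, param)].append(value)
    return {key: mean(values) for key, values in grouped.items()}

print(daily_averages(records)[("1993-01-01", "co2_ppm")])  # 1200.0
```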

  15. Causal biological network database: a comprehensive platform of causal biological network models focused on the pulmonary and vascular systems.

    PubMed

    Boué, Stéphanie; Talikka, Marja; Westra, Jurjen Willem; Hayes, William; Di Fabio, Anselmo; Park, Jennifer; Schlage, Walter K; Sewer, Alain; Fields, Brett; Ansari, Sam; Martin, Florian; Veljkovic, Emilija; Kenney, Renee; Peitsch, Manuel C; Hoeng, Julia

    2015-01-01

    With the wealth of publications and data available, powerful and transparent computational approaches are required to represent measured data and scientific knowledge in a computable and searchable format. We developed a set of biological network models, scripted in the Biological Expression Language, that reflect causal signaling pathways across a wide range of biological processes, including cell fate, cell stress, cell proliferation, inflammation, tissue repair and angiogenesis in the pulmonary and cardiovascular context. This comprehensive collection of networks is now freely available to the scientific community in a centralized web-based repository, the Causal Biological Network database, which is composed of over 120 manually curated and well-annotated biological network models and can be accessed at http://causalbionet.com. The website is backed by a MongoDB database, which stores all versions of the networks as JSON objects and allows users to search for genes, proteins, biological processes, small molecules and keywords in the network descriptions to retrieve biological networks of interest. The content of the networks can be visualized and browsed. Nodes and edges can be filtered, and all supporting evidence for the edges can be browsed and is linked to the original articles in PubMed. Moreover, networks may be downloaded for further visualization and evaluation. Database URL: http://causalbionet.com © The Author(s) 2015. Published by Oxford University Press.
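
    The storage-and-retrieval idea described above, networks kept as JSON documents and fetched by keyword or gene, can be sketched in a few lines. The document fields shown (name, description, nodes, edges) are assumptions for illustration, not the CBN schema, and the two toy networks are invented.

```python
# Networks as JSON-like documents, mimicking a MongoDB collection in memory.
networks = [
    {
        "name": "Oxidative Stress",
        "description": "Causal network of oxidative stress signaling in lung tissue",
        "nodes": ["NFE2L2", "SOD1", "CAT"],
        "edges": [("NFE2L2", "increases", "SOD1"), ("NFE2L2", "increases", "CAT")],
    },
    {
        "name": "Angiogenesis",
        "description": "Causal network of angiogenic signaling",
        "nodes": ["VEGFA", "KDR"],
        "edges": [("VEGFA", "increases", "KDR")],
    },
]

def search_networks(keyword):
    """Return networks whose description or node list mentions the keyword."""
    k = keyword.lower()
    return [
        n for n in networks
        if k in n["description"].lower() or any(k == g.lower() for g in n["nodes"])
    ]

print([n["name"] for n in search_networks("VEGFA")])  # ['Angiogenesis']
```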

  16. Open innovation: Towards sharing of data, models and workflows.

    PubMed

    Conrado, Daniela J; Karlsson, Mats O; Romero, Klaus; Sarr, Céline; Wilkins, Justin J

    2017-11-15

    Sharing of resources across organisations to support open innovation is an old idea, but one that is being taken up by the scientific community at increasing speed, particularly with regard to public sharing. The ability to address new questions or provide more precise answers to old questions through merged information is among the attractive features of sharing. Increased efficiency through reuse and increased reliability of scientific findings through enhanced transparency are expected outcomes of sharing. In the field of pharmacometrics, efforts to publicly share data, models and workflows have recently started. Sharing of individual-level longitudinal data for modelling requires solving legal, ethical and proprietary issues similar to many other fields, but there are also pharmacometric-specific aspects regarding data formats, exchange standards and database properties. Several organisations (CDISC, C-Path, IMI, ISoP) are working to solve these issues and propose standards. There are also a number of initiatives aimed at collecting disease-specific databases - Alzheimer's Disease (ADNI, CAMD), malaria (WWARN), oncology (PDS), Parkinson's Disease (PPMI), tuberculosis (CPTR, TB-PACTS, ReSeqTB) - suitable for drug-disease modelling. Organized sharing of pharmacometric executable model code and associated information has in the past been sparse, but a model repository (DDMoRe Model Repository) intended for the purpose has recently been launched. In addition, several other services can facilitate model sharing more generally. Pharmacometric workflows have matured over the last decades, and initiatives to more fully capture those applied to analyses are ongoing. In order to maximize both the impact of pharmacometrics and the knowledge extracted from clinical data, the scientific community needs to take ownership of and create opportunities for open innovation. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. GeNNet: an integrated platform for unifying scientific workflows and graph databases for transcriptome data analysis

    PubMed Central

    Costa, Raquel L; Gadelha, Luiz; Ribeiro-Alves, Marcelo; Porto, Fábio

    2017-01-01

    There are many steps in analyzing transcriptome data, from the acquisition of raw data to the selection of a subset of representative genes that explain a scientific hypothesis. The data produced can be represented as networks of interactions among genes, and these may additionally be integrated with other biological databases, such as protein-protein interactions, transcription factors and gene annotation. However, the results of these analyses remain fragmented, imposing difficulties either for subsequent inspection of results or for meta-analysis through the incorporation of new related data. Integrating databases and tools into scientific workflows, orchestrating their execution, and managing the resulting data and their respective metadata are challenging tasks. Additionally, a great amount of effort is equally required to run in-silico experiments to structure and compose the information as needed for analysis. Different programs may need to be applied and different files are produced during the experiment cycle. In this context, the availability of a platform supporting experiment execution is paramount. We present GeNNet, an integrated transcriptome analysis platform that unifies scientific workflows with graph databases for selecting relevant genes according to the evaluated biological systems. It includes GeNNet-Wf, a scientific workflow that pre-loads biological data, pre-processes raw microarray data and conducts a series of analyses including normalization, differential expression inference, clustering and gene set enrichment analysis. A user-friendly web interface, GeNNet-Web, allows for setting parameters, executing, and visualizing the results of GeNNet-Wf executions. To demonstrate the features of GeNNet, we performed case studies with data retrieved from GEO, particularly using a single-factor experiment in different analysis scenarios. As a result, we obtained differentially expressed genes for which biological functions were analyzed. 
    The results are integrated into GeNNet-DB, a database about genes, clusters, experiments and their properties and relationships. The resulting graph database is explored with queries that demonstrate the expressiveness of this data model for reasoning about gene interaction networks. GeNNet is the first platform to integrate the analytical process of transcriptome data with graph databases. It provides a comprehensive set of tools that would otherwise be challenging for non-expert users to install and use. Developers can add new functionality to components of GeNNet. The derived data allow for testing previous hypotheses about an experiment and exploring new ones through the interactive graph database environment. It enables the analysis of different data on human, rhesus monkey, mouse and rat coming from Affymetrix platforms. GeNNet is available as an open-source platform at https://github.com/raquele/GeNNet and can be retrieved as a software container with the command docker pull quelopes/gennet. PMID:28695067
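
    The kind of reasoning over gene interaction networks described above can be sketched with a minimal property graph: nodes carry labels and properties, edges carry relationship types, and a query traverses both. The labels, relationship types and expression values below are invented assumptions, not GeNNet-DB's actual schema.

```python
# Minimal in-memory property graph; values are illustrative only.
nodes = {
    "TP53":  {"label": "Gene", "log2fc": 2.1, "de": True},
    "BRCA1": {"label": "Gene", "log2fc": -0.2, "de": False},
    "MYC":   {"label": "Gene", "log2fc": 1.7, "de": True},
    "C1":    {"label": "Cluster"},
}
edges = [
    ("TP53", "BELONGS_TO", "C1"),
    ("MYC", "BELONGS_TO", "C1"),
    ("BRCA1", "BELONGS_TO", "C1"),
    ("TP53", "INTERACTS_WITH", "MYC"),
]

def de_genes_in_cluster(cluster):
    """Genes in a cluster that are flagged as differentially expressed,
    i.e. a traversal combining edge type and node property filters."""
    return sorted(
        src for src, rel, dst in edges
        if rel == "BELONGS_TO" and dst == cluster and nodes[src]["de"]
    )

print(de_genes_in_cluster("C1"))  # ['MYC', 'TP53']
```

    A graph database expresses the same traversal declaratively, which is what makes hypotheses about cluster membership and interaction easy to re-test as new data arrive.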

  19. Lost in translation: the impact of publication language on citation frequency in the scientific dental literature.

    PubMed

    Poomkottayil, Deepak; Bornstein, Michael M; Sendi, Pedram

    2011-01-28

    Citation metrics are commonly used as a proxy for scientific merit and relevance. Papers published in English, however, may exhibit a higher citation frequency than research articles published in other languages, an issue that has not yet been investigated from the perspective of Switzerland, where English is not the native language. To assess the impact of publication language on citation frequency, we focused on oral surgery papers indexed in PubMed MEDLINE that were published by Swiss dental schools between 2002 and 2007. The citation frequency of research papers was extracted from the Institute for Scientific Information (ISI) and Google Scholar databases. Univariate and multivariate logistic regression models were used to assess the impact of publication language (English versus German/French) on citation frequency, adjusted for journal impact factor, number of authors and research topic. Papers published in English showed 6 times (ISI database) and 7 times (Google Scholar) higher odds of being cited than research articles published in German or French. Our results suggest that publication language substantially influences the citation frequency of a research paper. Researchers should publish their work in English to render it accessible to the international scientific community.
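
    The reported effect can be illustrated with a small worked example: in the univariate case, the odds ratio from a 2x2 table of language by citation status equals exp(beta) of the language coefficient in a logistic regression on language alone. The counts below are invented for illustration and are not the study's data.

```python
import math

# Invented 2x2 table:        (cited, not cited)
english = (60, 20)
german_french = (15, 30)

def odds_ratio(group_a, group_b):
    """Odds of being cited in group_a relative to group_b."""
    (a_yes, a_no), (b_yes, b_no) = group_a, group_b
    return (a_yes / a_no) / (b_yes / b_no)

or_en = odds_ratio(english, german_french)
print(round(or_en, 1))            # 6.0: six times higher odds for English
print(round(math.log(or_en), 2))  # 1.79: the equivalent logistic coefficient
```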

  20. Community-Supported Data Repositories in Paleobiology: A 'Middle Tail' Between the Geoscientific and Informatics Communities

    NASA Astrophysics Data System (ADS)

    Williams, J. W.; Ashworth, A. C.; Betancourt, J. L.; Bills, B.; Blois, J.; Booth, R.; Buckland, P.; Charles, D.; Curry, B. B.; Goring, S. J.; Davis, E.; Grimm, E. C.; Graham, R. W.; Smith, A. J.

    2015-12-01

    Community-supported data repositories (CSDRs) in paleoecology and paleoclimatology have a decades-long tradition and serve multiple critical scientific needs. CSDRs facilitate synthetic large-scale scientific research by providing open-access and curated data that employ community-supported metadata and data standards. CSDRs serve as a 'middle tail' or boundary organization between information scientists and the long-tail community of individual geoscientists collecting and analyzing paleoecological data. Over the past decades, a distributed network of CSDRs has emerged, each serving a particular suite of data and research communities, e.g. the Neotoma Paleoecology Database, Paleobiology Database, International Tree Ring Database, NOAA NCEI for Paleoclimatology, Morphobank, iDigPaleo, and Integrated Earth Data Alliance. Recently, these groups have organized into a common Paleobiology Data Consortium dedicated to improving interoperability and sharing best practices and protocols. The Neotoma Paleoecology Database offers one example of an active and growing CSDR, designed to facilitate research into ecological and evolutionary dynamics during recent past global change. Neotoma combines a centralized database structure with distributed scientific governance via multiple virtual constituent data working groups. The Neotoma data model is flexible and can accommodate a variety of paleoecological proxies from many depositional contexts. Data input into Neotoma is performed by trained Data Stewards drawn from their respective research communities. Neotoma data can be searched, viewed, and returned to users through multiple interfaces, including the interactive Neotoma Explorer map interface, RESTful Application Programming Interfaces (APIs), the neotoma R package, and the Tilia stratigraphic software. Neotoma is governed by geoscientists and provides community engagement through training workshops for data contributors, stewards, and users. 
    Neotoma is engaged in the Paleobiology Data Consortium and other efforts to improve interoperability among cyberinfrastructure in the paleogeosciences.
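
    Programmatic access via the RESTful APIs mentioned above might look like the sketch below. The endpoint path and parameter names follow Neotoma's publicly documented v2.0 API as commonly described, but treat them as assumptions and check the current API reference before relying on them; the sketch only builds the request URL rather than performing the network call.

```python
from urllib.parse import urlencode

# Assumed API root; verify against the current Neotoma API documentation.
API_ROOT = "https://api.neotomadb.org/v2.0/data"

def sites_request(sitename=None, limit=25):
    """Build a request URL for the (assumed) sites endpoint."""
    query = {"limit": limit}
    if sitename:
        query["sitename"] = sitename
    return f"{API_ROOT}/sites?{urlencode(query)}"

url = sites_request(sitename="Marion Lake", limit=5)
print(url)
# A client would then fetch the URL (e.g. urllib.request.urlopen) and parse
# the returned JSON; the R user gets the same records via the neotoma package.
```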

  1. BioModels Database: An enhanced, curated and annotated resource for published quantitative kinetic models

    PubMed Central

    2010-01-01

    Background Quantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification. Description BioModels Database http://www.ebi.ac.uk/biomodels/ is aimed at addressing exactly these needs. It is a freely-accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database. Conclusions BioModels Database has become a recognised reference resource for systems biology. 
    It is being used by the community in a variety of ways; for example, to benchmark different simulation systems and to study the clustering of models based upon their annotations. Deposition of models in the database is now advised by several publishers of scientific journals. The models in BioModels Database are freely distributed and reusable; the underlying software infrastructure is also available from SourceForge https://sourceforge.net/projects/biomodels/ under the GNU General Public License. PMID:20587024
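
    The web services mentioned above let external software retrieve models by identifier. The sketch below validates an accession and builds the corresponding entry URL; the URL pattern and the identifier format (curated `BIOMD` / non-curated `MODEL` prefixes with ten digits) are assumptions based on the public site, so consult the current BioModels API documentation before use.

```python
import re

BIOMODELS_ROOT = "https://www.ebi.ac.uk/biomodels"

def model_url(model_id):
    """Build the entry URL for a model accession (assumed URL pattern)."""
    if not re.fullmatch(r"(BIOMD|MODEL)\d{10}", model_id):
        raise ValueError(f"unexpected BioModels identifier: {model_id}")
    return f"{BIOMODELS_ROOT}/{model_id}"

print(model_url("BIOMD0000000012"))  # a curated model entry
```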

  2. The NBER-Rensselaer Scientific Papers Database: Form, Nature, and Function. NBER Working Paper No. 14575

    ERIC Educational Resources Information Center

    Adams, James D.; Clemmons, J. Roger

    2008-01-01

    This article is a guide to the NBER-Rensselaer Scientific Papers Database, which includes more than 2.5 million scientific publications and over 21 million citations to those papers. The data cover an important sample of 110 top U.S. universities and 200 top U.S.-based R&D-performing firms during the period 1981-1999. This article describes the…

  3. Workshop Report on a Future Information Infrastructure for the Physical Sciences. The Facts of the Matter: Finding, Understanding, and Using Information about our Physical World, Held at the National Academy of Sciences on May 30-31, 2000

    DTIC Science & Technology

    2000-05-31

    Grey Literature Network Service (Farace, Dominic, 1997) as, "that which is produced on all levels of government, academics, business and industry in... literature is available, on-line, to scientific workers throughout the world, for a world scientific database." These reports served as the base to begin... all the world's formal scientific literature is available, on-line, to scientific workers throughout the world, for a world scientific database

  4. Finding mouse models of human lymphomas and leukemias using the Jackson Laboratory Mouse Tumor Biology Database.

    PubMed

    Begley, Dale A; Sundberg, John P; Krupke, Debra M; Neuhauser, Steven B; Bult, Carol J; Eppig, Janan T; Morse, Herbert C; Ward, Jerrold M

    2015-12-01

    Many mouse models have been created to study hematopoietic cancer types. There are over thirty hematopoietic tumor types and subtypes, both human and mouse, with various origins, characteristics and clinical prognoses. Determining the specific type of hematopoietic lesion produced in a mouse model, and identifying mouse models that correspond to the human subtypes of these lesions, has been a continuing challenge for the scientific community. The Mouse Tumor Biology Database (MTB; http://tumor.informatics.jax.org) is designed to facilitate the use of mouse models of human cancer by providing detailed histopathologic and molecular information on lymphoma subtypes, including expertly annotated online whole-slide scans, and by providing a repository for storing and querying information on specific lymphoma models. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Implementing a Community-Driven Cyberinfrastructure Platform for the Paleo- and Rock Magnetic Scientific Fields that Generalizes to Other Geoscience Disciplines

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Jarboe, N.; Koppers, A. A.; Tauxe, L.; Constable, C.

    2013-12-01

    EarthRef.org is a geoscience umbrella website for several databases and data and model repository portals. These portals, unified in the mandate to preserve their respective data and promote scientific collaboration in their fields, are also disparate in their schemata. The Magnetics Information Consortium (http://earthref.org/MagIC/) is a grass-roots cyberinfrastructure effort envisioned by the paleo- and rock magnetic scientific community to archive their wealth of peer-reviewed raw data and interpretations from studies on natural and synthetic samples and relies on a partially strict subsumptive hierarchical data model. The Geochemical Earth Reference Model (http://earthref.org/GERM/) portal focuses on the chemical characterization of the Earth and relies on two data schemata: a repository of peer-reviewed reservoir geochemistry, and a database of partition coefficients for rocks, minerals, and elements. The Seamount Biogeosciences Network (http://earthref.org/SBN/) encourages the collaboration between the diverse disciplines involved in seamount research and includes the Seamount Catalog (http://earthref.org/SC/) of bathymetry and morphology. All of these portals also depend on the EarthRef Reference Database (http://earthref.org/ERR/) for publication reference metadata and the EarthRef Digital Archive (http://earthref.org/ERDA/), a generic repository of data objects and their metadata. The development of the new MagIC Search Interface (http://earthref.org/MagIC/search/) centers on a reusable platform designed to be flexible enough for largely heterogeneous datasets and to scale up to datasets with tens of millions of records. The HTML5 web application and Oracle 11g database residing at the San Diego Supercomputer Center (SDSC) support the online contribution and editing of complex datasets in a spreadsheet environment and the browsing and filtering of these contributions in the context of thousands of other datasets. 
EarthRef.org is in the process of implementing this platform across all of its data portals in spite of the wide variety of data schemata and is dedicated to serving the geoscience community with as little effort from the end-users as possible.

  6. Making species checklists understandable to machines - a shift from relational databases to ontologies.

    PubMed

    Laurenne, Nina; Tuominen, Jouni; Saarenmaa, Hannu; Hyvönen, Eero

    2014-01-01

    The scientific names of plants and animals play a major role in the Life Sciences, as information is indexed, integrated, and searched using scientific names. The main problem with names is their ambiguous nature: more than one name may point to the same taxon, and multiple taxa may share the same name. In addition, scientific names change over time, which makes them open to various interpretations. Applying machine-understandable semantics to these names enables efficient processing of biological content in information systems. The first step is to use unique persistent identifiers instead of name strings when referring to taxa. The most commonly used identifiers are Life Science Identifiers (LSIDs), which are traditionally used in relational databases, and more recently HTTP URIs, which are applied on the Semantic Web by Linked Data applications. We introduce two models for expressing taxonomic information in the form of species checklists. First, we show how species checklists are presented in a relational database system using LSIDs. Then, in order to gain a more detailed representation of taxonomic information, we introduce the meta-ontology TaxMeOn to model the same content as Semantic Web ontologies, where taxa are identified using HTTP URIs. We also explore how changes in scientific names can be managed over time. The use of HTTP URIs is preferable for presenting the taxonomic information of species checklists. An HTTP URI identifies a taxon and, unlike an LSID, operates as a web address from which additional information about the taxon can be located. This enables the integration of biological data from different sources on the web using Linked Data principles and prevents the formation of information silos. The Linked Data approach allows a user to assemble information and evaluate the complexity of taxonomical data based on conflicting views of taxonomic classifications. 
    Using HTTP URIs and Semantic Web technologies also facilitates the representation of the semantics of biological data and, in this way, the creation of more "intelligent" biological applications and services.
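
    The ambiguity problem motivating persistent identifiers can be sketched as a many-to-many mapping between name strings and taxon URIs: a query by name may return several candidate taxa, while a query by URI gathers every name ever applied to that taxon. The URIs below are invented placeholders, not real identifiers; "Corvus pica" is an older synonym of the magpie's current name.

```python
# (taxon HTTP URI, scientific name string) pairs; URIs are placeholders.
name_of = [
    ("http://example.org/taxon/1001", "Pica pica"),    # the Eurasian magpie
    ("http://example.org/taxon/2002", "Pica pica"),    # hypothetical homonym
    ("http://example.org/taxon/1001", "Corvus pica"),  # older synonym
]

def taxa_for(name):
    """All taxon URIs a name string may refer to (name -> taxa)."""
    return sorted({uri for uri, n in name_of if n == name})

def names_for(uri):
    """All name strings recorded for a taxon URI (taxon -> names)."""
    return sorted({n for u, n in name_of if u == uri})

print(taxa_for("Pica pica"))  # two candidate taxa: the bare name is ambiguous
print(names_for("http://example.org/taxon/1001"))
```

    Because each URI is also a web address, a Linked Data client can dereference it to fetch further statements about the taxon, which is exactly what an LSID in a closed relational database cannot offer.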

  7. Safety of phase I clinical trials with monoclonal antibodies in Germany--the regulatory requirements viewed in the aftermath of the TGN1412 disaster.

    PubMed

    Liedert, B; Bassus, S; Schneider, C K; Kalinke, U; Löwer, J

    2007-01-01

    This review summarizes scientific, ethical and regulatory aspects of Phase I clinical trials with monoclonal antibodies. The current standard requirements for pre-clinical testing and for clinical study design are presented. The scientific considerations discussed herein are generally applicable; the discussion of legal requirements for clinical trials refers to German jurisdiction only. The adverse effects associated with the TGN1412 Phase I trial indicate that the predictive value of pre-clinical animal models requires reevaluation and that, in certain cases, some aspects of clinical trial protocols, such as dose fixing, may need refinement or redesign. Concrete safety measures that have been proposed as a consequence of the TGN1412 event include the introduction of criteria for high-risk antibodies, sequential inclusion of trial participants, and implementation of pre-Phase I studies in which dose calculation is based on the pre-clinical No Effect Level instead of the No Observed Adverse Effect Level. The recently established European clinical trials database (EUDRACT Database) is a further safety tool to expedite the sharing of relevant information between scientific authorities.

  8. A Visual Database System for Image Analysis on Parallel Computers and its Application to the EOS Amazon Project

    NASA Technical Reports Server (NTRS)

    Shapiro, Linda G.; Tanimoto, Steven L.; Ahrens, James P.

    1996-01-01

    The goal of this task was to create a design and prototype implementation of a database environment particularly suited to handling the image, vision and scientific data associated with NASA's EOS Amazon project. The focus was on a data model and query facilities designed to execute efficiently on parallel computers. A key feature of the environment is an interface that allows a scientist to specify high-level directives about how query execution should occur.

  9. A DBMS architecture for global change research

    NASA Astrophysics Data System (ADS)

    Hachem, Nabil I.; Gennert, Michael A.; Ward, Matthew O.

    1993-08-01

    The goal of this research is the design and development of an integrated system for the management of very large scientific databases, cartographic/geographic information processing, and exploratory scientific data analysis for global change research. The system will represent both spatial and temporal knowledge about natural and man-made entities on the earth's surface, following an object-oriented paradigm. A user will be able to derive, modify, and apply procedures to perform operations on the data, including comparison, derivation, prediction, validation, and visualization. This work represents an effort to extend database technology with an intrinsic class of operators, which is extensible and responds to the growing needs of scientific research. Of significance is the integration of many diverse forms of data into the database, including cartography, geography, hydrography, hypsography, images, and urban planning data. Equally important is the maintenance of metadata, that is, data about the data, such as coordinate transformation parameters, map scales, and audit trails of previous processing operations. This project will impact the fields of geographical information systems and global change research as well as the database community. It will provide an integrated database management testbed for scientific research, and a testbed for the development of analysis tools to understand and predict global change.

  10. The Model Parameter Estimation Experiment (MOPEX): Its structure, connection to other international initiatives and future directions

    USGS Publications Warehouse

    Wagener, T.; Hogue, T.; Schaake, J.; Duan, Q.; Gupta, H.; Andreassian, V.; Hall, A.; Leavesley, G.

    2006-01-01

    The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrological models and in land surface parameterization schemes connected to atmospheric models. The MOPEX science strategy involves: database creation, a priori parameter estimation methodology development, parameter refinement or calibration, and the demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrological basins in the United States (US) and in other countries. This database is being continuously expanded to include basins from various hydroclimatic regimes throughout the world. MOPEX research has largely been driven by a series of international workshops that have brought interested hydrologists and land surface modellers together to exchange knowledge and experience in developing and applying parameter estimation techniques. With its focus on parameter estimation, MOPEX plays an important role in the international context of other initiatives such as GEWEX, HEPEX, PUB and PILPS. This paper outlines the MOPEX initiative, discusses its role in the scientific community, and briefly states future directions.

  11. NASA scientific and technical information for the 1990s

    NASA Technical Reports Server (NTRS)

    Cotter, Gladys A.

    1990-01-01

    Projections for NASA scientific and technical information (STI) in the 1990s are outlined. NASA STI for the 1990s will maintain a quality bibliographic and full-text database, emphasizing electronic input and products supplemented by networked access to a wide variety of sources, particularly numeric databases.

  12. Patent Citation Networks

    NASA Astrophysics Data System (ADS)

    Strandburg, Katherine; Tobochnik, Jan; Csardi, Gabor

    2005-03-01

    Patent applications contain citations which are similar to but different from those found in published scientific papers. In particular, patent citations are governed by legal rules. Moreover, a large fraction of citations are made not by the patent inventor, but by a patent examiner during the application procedure. Using a patent database, which contains the patent citations, assignees and inventors, we have applied network analysis and built network models. Our work includes determining the structure of the patent citation network and comparing it to existing results for scientific citation networks; identifying differences between various technological fields and comparing the observed differences to expectations based on anecdotal evidence about patenting practice; and developing models to explain the results.
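    The structural comparison described here starts from a simple quantity: each patent's in-degree (how often it is cited) and the distribution of those in-degrees across the network. A minimal, illustrative sketch (the function names and toy data are assumptions, not the authors' code):

```python
from collections import Counter

def citation_indegrees(citations):
    """Count how often each patent is cited (its in-degree) from an
    iterable of (citing, cited) pairs -- the basic quantity used when
    comparing patent and scientific citation networks."""
    return Counter(cited for _citing, cited in citations)

def degree_distribution(indegrees):
    """Histogram of in-degree values; a heavy tail here is what
    preferential-attachment-style network models try to reproduce."""
    return Counter(indegrees.values())
```

In practice the (citing, cited) pairs would be extracted from the patent database, optionally split by technological field or by examiner- versus inventor-added citations before comparing distributions.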

  13. Biocuration at the Saccharomyces genome database.

    PubMed

    Skrzypek, Marek S; Nash, Robert S

    2015-08-01

    Saccharomyces Genome Database is an online resource dedicated to managing information about the biology and genetics of the model organism, yeast (Saccharomyces cerevisiae). This information is derived primarily from scientific publications through a process of human curation that involves manual extraction of data and their organization into a comprehensive system of knowledge. This system provides a foundation for further analysis of experimental data coming from research on yeast as well as other organisms. In this review we will demonstrate how biocuration and biocurators add a key component, the biological context, to our understanding of how genes, proteins, genomes and cells function and interact. We will explain the role biocurators play in sifting through the wealth of biological data to incorporate and connect key information. We will also discuss the many ways we assist researchers with their various research needs. We hope to convince the reader that manual curation is vital in converting the flood of data into organized and interconnected knowledge, and that biocurators play an essential role in the integration of scientific information into a coherent model of the cell. © 2015 Wiley Periodicals, Inc.

  14. Biocuration at the Saccharomyces Genome Database

    PubMed Central

    Skrzypek, Marek S.; Nash, Robert S.

    2015-01-01

    Saccharomyces Genome Database is an online resource dedicated to managing information about the biology and genetics of the model organism, yeast (Saccharomyces cerevisiae). This information is derived primarily from scientific publications through a process of human curation that involves manual extraction of data and their organization into a comprehensive system of knowledge. This system provides a foundation for further analysis of experimental data coming from research on yeast as well as other organisms. In this review we will demonstrate how biocuration and biocurators add a key component, the biological context, to our understanding of how genes, proteins, genomes and cells function and interact. We will explain the role biocurators play in sifting through the wealth of biological data to incorporate and connect key information. We will also discuss the many ways we assist researchers with their various research needs. We hope to convince the reader that manual curation is vital in converting the flood of data into organized and interconnected knowledge, and that biocurators play an essential role in the integration of scientific information into a coherent model of the cell. PMID:25997651

  15. Arthroplasty knee registry of Catalonia: What scientific evidence supports the implantation of our prosthesis?

    PubMed

    Samaniego Alonso, R; Gaviria Parada, E; Pons Cabrafiga, M; Espallargues Carreras, M; Martinez Cruz, O

    2018-02-28

    In our environment it is increasingly necessary to base clinical practice on scientific evidence, and the field of prosthetic surgery should be governed by the same principles. National arthroplasty registries allow us to obtain a large amount of data with which to evaluate this technique. The aim of our study was to analyse the scientific evidence that supports the primary total knee arthroplasties implanted in Catalonian public hospitals, based on the Arthroplasty Registry of Catalonia (RACat). A review of the literature was carried out on the knee prostheses (cruciate-retaining, posterior-stabilized, constrained and rotating) recorded in RACat over the period 2005-2013, using the following databases: Orthopedic Data Evaluation Panel, PubMed, TripDatabase and Google Scholar. Prostheses implanted in fewer than 10 units (1,358 prostheses corresponding to 62 models) were excluded. Of the 43,305 prostheses implanted, 41,947 (96.86%) were analysed, corresponding to 74 different models. For 13 models (n = 4,715; 11.24%) no clinical evidence to support their use was found. In the remaining 36 models (n = 13,609; 32.45%), level IV studies were the predominant evidence. A significant number of implanted prostheses (11.24%) thus had no supporting clinical evidence. The number of models with fewer than 10 units implanted, 36 out of 110, should also be noted. Arthroplasty registries have proved extremely useful tools for analysing and drawing conclusions that improve the efficiency of this surgical technique. Copyright © 2018 SECOT. Published by Elsevier España, S.L.U. All rights reserved.

  16. Multiple imputation as one tool to provide longitudinal databases for modelling human height and weight development.

    PubMed

    Aßmann, C

    2016-06-01

    Besides the large effort required for field work, provision of valid databases requires statistical and informational infrastructure to enable long-term access to longitudinal data sets on height, weight and related issues. To foster use of longitudinal data sets within the scientific community, provision of valid databases has to address data-protection regulations. It is, therefore, of major importance to hinder identifiability of individuals from publicly available databases. To reach this goal, one possible strategy is to provide a synthetic database to the public, allowing for pretesting of strategies for data analysis. The synthetic databases can be established using multiple imputation tools. Given approval of the strategy, verification is based on the original data. Multiple imputation by chained equations is illustrated to facilitate provision of synthetic databases, as it allows for capturing a wide range of statistical interdependencies. Missing values, which typically occur within longitudinal databases for reasons of item non-response, can also be addressed via multiple imputation when providing databases. The provision of synthetic databases using multiple imputation techniques is one possible strategy to ensure data protection, increase visibility of longitudinal databases and enhance the analytical potential.
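    The chained-equations idea can be illustrated with a toy height/weight example: regress the incomplete variable on the observed one, then draw imputations (adding residual noise) several times to obtain multiple completed, synthetic copies. A minimal sketch under those assumptions (the single-predictor setup and all names are illustrative, not the paper's implementation):

```python
import random
import statistics

def ols(xs, ys):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

def impute_once(records, rng):
    """One chained-equations pass over (height, weight) pairs: fit weight
    on height over the observed cases, then fill each missing weight with
    a draw from the fitted line plus residual noise."""
    obs = [(h, w) for h, w in records if w is not None]
    a, b = ols([h for h, _ in obs], [w for _, w in obs])
    resid_sd = statistics.pstdev([w - (a + b * h) for h, w in obs])
    return [(h, w if w is not None else a + b * h + rng.gauss(0, resid_sd))
            for h, w in records]

def multiple_impute(records, m=5, seed=0):
    """Produce m completed (synthetic) copies of the data set."""
    rng = random.Random(seed)
    return [impute_once(records, rng) for _ in range(m)]
```

A full implementation would cycle over several incomplete variables in turn (hence "chained" equations); this sketch shows only one link of that chain.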

  17. Illuminating the Depths of the MagIC (Magnetics Information Consortium) Database

    NASA Astrophysics Data System (ADS)

    Koppers, A. A. P.; Minnett, R.; Jarboe, N.; Jonestrask, L.; Tauxe, L.; Constable, C.

    2015-12-01

    The Magnetics Information Consortium (http://earthref.org/MagIC/) is a grass-roots cyberinfrastructure effort envisioned by the paleo-, geo-, and rock magnetic scientific community. Its mission is to archive their wealth of peer-reviewed raw data and interpretations from magnetics studies on natural and synthetic samples. Many of these valuable data are legacy datasets that were never published in their entirety, some resided in other databases that are no longer maintained, and others were never digitized from the field notebooks and lab work. Due to the volume of data collected, most studies, modern and legacy, only publish the interpreted results and, occasionally, a subset of the raw data. MagIC is making an extraordinary effort to archive these data in a single data model, including the raw instrument measurements if possible. This facilitates the reproducibility of the interpretations, the re-interpretation of the raw data as the community introduces new techniques, and the compilation of heterogeneous datasets that are otherwise distributed across multiple formats and physical locations. MagIC has developed tools to assist the scientific community in many stages of their workflow. Contributors easily share studies (in a private mode if so desired) in the MagIC Database with colleagues and reviewers prior to publication, publish the data online after the study is peer reviewed, and visualize their data in the context of the rest of the contributions to the MagIC Database. From organizing their data in the MagIC Data Model with an online editable spreadsheet, to validating the integrity of the dataset with automated plots and statistics, MagIC is continually lowering the barriers to transforming dark data into transparent and reproducible datasets. 
Additionally, this web application generalizes to other databases in MagIC's umbrella website (EarthRef.org) so that the Geochemical Earth Reference Model (http://earthref.org/GERM/) portal, Seamount Biogeosciences Network (http://earthref.org/SBN/), EarthRef Digital Archive (http://earthref.org/ERDA/) and EarthRef Reference Database (http://earthref.org/ERR/) benefit from its development.

  18. Recent Developments of the GLIMS Glacier Database

    NASA Astrophysics Data System (ADS)

    Raup, B. H.; Berthier, E.; Bolch, T.; Kargel, J. S.; Paul, F.; Racoviteanu, A.

    2017-12-01

    Earth's glaciers are shrinking almost without exception, leading to changes in water resources, timing of runoff, sea level, and hazard potential. Repeat mapping of glacier outlines, lakes, and glacier topography, along with glacial processes, is critically needed to understand how glaciers will react to a changing climate, and how those changes will impact humans. To understand the impacts and processes behind the observed changes, it is crucial to monitor glaciers through time by mapping their areal extent, snow lines, ice flow velocities, associated water bodies, and thickness changes. The glacier database of the Global Land Ice Measurements from Space (GLIMS) initiative is the only multi-temporal glacier database capable of tracking all these glacier measurements and providing them to the scientific community and broader public. Recent developments in GLIMS include improvements in the database and web applications and new activities in the international GLIMS community. The coverage of the GLIMS database has recently grown geographically and temporally by drawing on the Randolph Glacier Inventory (RGI) and other new data sets. The GLIMS database is globally complete, and approximately one third of glaciers have outlines from more than one time. New tools for visualizing and downloading GLIMS data in a choice of formats and data models have been developed, and a new data model for handling multiple glacier records through time while avoiding double-counting of glacier number or area is nearing completion. A GLIMS workshop was held in Boulder, Colorado this year to facilitate two-way communication with the greater community on future needs. The result of this work is a more complete and accurate glacier data repository that shows both the current state of glaciers on Earth and how they have changed in recent decades. Needs for future scientific and technical developments were identified and prioritized at the GLIMS Workshop, and are reported here.

  19. eSciMart: Web Platform for Scientific Software Marketplace

    NASA Astrophysics Data System (ADS)

    Kryukov, A. P.; Demichev, A. P.

    2016-10-01

    In this paper we suggest the design of a web marketplace in which both the users of scientific application software and databases, presented in the form of web services, and their providers are present. The model underlying the marketplace is close to the customer-to-customer (C2C) model that has been used successfully, for example, by auction sites such as eBay (ebay.com). Unlike the classical C2C model, the suggested marketplace focuses on application software in the form of web services, and on standardization of the API through which application software is integrated into the marketplace. A prototype of such a platform, entitled eSciMart, is currently being developed at SINP MSU.

  20. Performance analysis of a dual-tree algorithm for computing spatial distance histograms

    PubMed Central

    Chen, Shaoping; Tu, Yi-Cheng; Xia, Yuni

    2011-01-01

    Many scientific and engineering fields produce large volume of spatiotemporal data. The storage, retrieval, and analysis of such data impose great challenges to database systems design. Analysis of scientific spatiotemporal data often involves computing functions of all point-to-point interactions. One such analytics, the Spatial Distance Histogram (SDH), is of vital importance to scientific discovery. Recently, algorithms for efficient SDH processing in large-scale scientific databases have been proposed. These algorithms adopt a recursive tree-traversing strategy to process point-to-point distances in the visited tree nodes in batches, thus require less time when compared to the brute-force approach where all pairwise distances have to be computed. Despite the promising experimental results, the complexity of such algorithms has not been thoroughly studied. In this paper, we present an analysis of such algorithms based on a geometric modeling approach. The main technique is to transform the analysis of point counts into a problem of quantifying the area of regions where pairwise distances can be processed in batches by the algorithm. From the analysis, we conclude that the number of pairwise distances that are left to be processed decreases exponentially with more levels of the tree visited. This leads to the proof of a time complexity lower than the quadratic time needed for a brute-force algorithm and builds the foundation for a constant-time approximate algorithm. Our model is also general in that it works for a wide range of point spatial distributions, histogram types, and space-partitioning options in building the tree. PMID:21804753
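    The batching idea is easiest to see against the brute-force baseline: the quadratic algorithm bins every pairwise distance, while tree-based algorithms skip whole node pairs whenever the minimum and maximum distances between their bounding boxes land in the same histogram bucket. A small sketch of both pieces (illustrative only, not the paper's code):

```python
import math

def sdh_bruteforce(points, bucket_width, num_buckets):
    """O(n^2) spatial distance histogram: bin every pairwise distance."""
    hist = [0] * num_buckets
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = math.dist(points[i], points[j])
            hist[min(int(d / bucket_width), num_buckets - 1)] += 1
    return hist

def resolvable(box_a, box_b, bucket_width):
    """Core dual-tree pruning test: two axis-aligned bounding boxes are
    'resolvable' when every point-to-point distance between them falls in
    one bucket, so the whole pair count can be added in a single batch."""
    dmin_sq = dmax_sq = 0.0
    for (lo_a, hi_a), (lo_b, hi_b) in zip(box_a, box_b):
        gap = max(lo_b - hi_a, lo_a - hi_b, 0.0)  # per-axis min separation
        span = max(hi_a - lo_b, hi_b - lo_a)      # per-axis max separation
        dmin_sq += gap * gap
        dmax_sq += span * span
    return (int(math.sqrt(dmin_sq) / bucket_width)
            == int(math.sqrt(dmax_sq) / bucket_width))
```

The analysis in the paper quantifies how quickly unresolvable pairs die out as the tree is descended, which is what yields the sub-quadratic bound.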

  1. A service-based framework for pharmacogenomics data integration

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Bai, Xiaoying; Li, Jing; Ding, Cong

    2010-08-01

    Data are central to scientific research and practices. The advance of experiment methods and information retrieval technologies leads to explosive growth of scientific data and databases. However, due to the heterogeneous problems in data formats, structures and semantics, it is hard to integrate the diversified data that grow explosively and analyse them comprehensively. As more and more public databases are accessible through standard protocols like programmable interfaces and Web portals, Web-based data integration becomes a major trend to manage and synthesise data that are stored in distributed locations. Mashup, a Web 2.0 technique, presents a new way to compose content and software from multiple resources. The paper proposes a layered framework for integrating pharmacogenomics data in a service-oriented approach using the mashup technology. The framework separates the integration concerns from three perspectives including data, process and Web-based user interface. Each layer encapsulates the heterogeneous issues of one aspect. To facilitate the mapping and convergence of data, the ontology mechanism is introduced to provide consistent conceptual models across different databases and experiment platforms. To support user-interactive and iterative service orchestration, a context model is defined to capture information of users, tasks and services, which can be used for service selection and recommendation during a dynamic service composition process. A prototype system is implemented and cases studies are presented to illustrate the promising capabilities of the proposed approach.
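    At the data layer, the ontology-backed mapping described above amounts to translating each source's field names into one shared conceptual schema. A deliberately simplified sketch (the field names, maps, and `to_canonical` are illustrative assumptions, not the framework's API):

```python
def to_canonical(record, field_map):
    """Rename a source-specific record's fields into the shared schema;
    each database contributes its own field_map, so downstream code sees
    one consistent conceptual model regardless of the data's origin."""
    return {canon: record[src] for src, canon in field_map.items() if src in record}

# Hypothetical per-source mappings onto shared pharmacogenomics concepts
PHARMGKB_MAP = {"geneSymbol": "gene_symbol", "drugName": "drug"}
DRUGBANK_MAP = {"gene-name": "gene_symbol", "name": "drug"}
```

In the paper's architecture this mapping would sit behind the data layer's service interface, with the ontology supplying the canonical concept names.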

  2. International Soil Carbon Network (ISCN) Database v3-1

    DOE Data Explorer

    Nave, Luke [University of Michigan] (ORCID:0000000182588335); Johnson, Kris [USDA-Forest Service]; van Ingen, Catharine [Microsoft Research]; Agarwal, Deborah [Lawrence Berkeley National Laboratory] (ORCID:0000000150452396); Humphrey, Marty [University of Virginia]; Beekwilder, Norman [University of Virginia]

    2016-01-01

    The ISCN is an international scientific community devoted to the advancement of soil carbon research. The ISCN manages an open-access, community-driven soil carbon database. This is version 3-1 of the ISCN Database, released in December 2015. It gathers 38 separate dataset contributions, totalling 67,112 sites with data from 71,198 soil profiles and 431,324 soil layers. For more information about the ISCN, its scientific community and resources, data policies and partner networks visit: http://iscn.fluxdata.org/.

  3. Analysis and interpretation of diffuse x-ray emission using data from the Einstein satellite

    NASA Technical Reports Server (NTRS)

    Helfand, David J.

    1991-01-01

    An ambitious program to create a powerful and accessible archive of the HEAO-2 Imaging Proportional Counter (IPC) database was outlined. The scientific utility of that database for studies of diffuse x-ray emissions was explored. Technical and scientific accomplishments are reviewed. Three papers were presented which have major new scientific findings relevant to the global structure of the interstellar medium and the origin of the cosmic x-ray background. An all-sky map of diffuse x-ray emission was constructed.

  4. How Do You Like Your Science, Wet or Dry? How Two Lab Experiences Influence Student Understanding of Science Concepts and Perceptions of Authentic Scientific Practice

    PubMed Central

    Munn, Maureen; Knuth, Randy; Van Horne, Katie; Shouse, Andrew W.; Levias, Sheldon

    2017-01-01

    This study examines how two kinds of authentic research experiences related to smoking behavior—genotyping human DNA (wet lab) and using a database to test hypotheses about factors that affect smoking behavior (dry lab)—influence students’ perceptions and understanding of scientific research and related science concepts. The study used pre and post surveys and a focus group protocol to compare students who conducted the research experiences in one of two sequences: genotyping before database and database before genotyping. Students rated the genotyping experiment to be more like real science than the database experiment, in spite of the fact that they associated more scientific tasks with the database experience than genotyping. Independent of the order of completing the labs, students showed gains in their understanding of science concepts after completion of the two experiences. There was little change in students’ attitudes toward science pre to post, as measured by the Scientific Attitude Inventory II. However, on the basis of their responses during focus groups, students developed more sophisticated views about the practices and nature of science after they had completed both research experiences, independent of the order in which they experienced them. PMID:28572181

  5. Designing a data portal for synthesis modeling

    NASA Astrophysics Data System (ADS)

    Holmes, M. A.

    2006-12-01

    Processing of field and model data in multi-disciplinary integrated science studies is a vital part of synthesis modeling. Collection and storage techniques for field data vary greatly between the participating scientific disciplines due to the nature of the data being collected, whether it be in situ, remotely sensed, or recorded by automated data logging equipment. Spreadsheets, personal databases, text files and binary files are used in the initial storage and processing of the raw data. In order to be useful to scientists, engineers and modelers the data need to be stored in a format that is easily identifiable, accessible and transparent to a variety of computing environments. The Model Operations and Synthesis (MOAS) database and associated web portal were created to provide such capabilities. The industry standard relational database is comprised of spatial and temporal data tables, shape files and supporting metadata accessible over the network, through a menu driven web-based portal or spatially accessible through ArcSDE connections from the user's local GIS desktop software. A separate server provides public access to spatial data and model output in the form of attributed shape files through an ArcIMS web-based graphical user interface.

  6. Immediate Dissemination of Student Discoveries to a Model Organism Database Enhances Classroom-Based Research Experiences

    ERIC Educational Resources Information Center

    Wiley, Emily A.; Stover, Nicholas A.

    2014-01-01

    Use of inquiry-based research modules in the classroom has soared over recent years, largely in response to national calls for teaching that provides experience with scientific processes and methodologies. To increase the visibility of in-class studies among interested researchers and to strengthen their impact on student learning, we have…

  7. EOSCUBE: A Constraint Database System for High-Level Specification and Efficient Generation of EOSDIS Products. Phase 1; Proof-of-Concept

    NASA Technical Reports Server (NTRS)

    Brodsky, Alexander; Segal, Victor E.

    1999-01-01

    The EOSCUBE constraint database system is designed to be a software productivity tool for high-level specification and efficient generation of EOSDIS and other scientific products. These products are typically derived from large volumes of multidimensional data which are collected via a range of scientific instruments.

  8. A Distributed Web-based Solution for Ionospheric Model Real-time Management, Monitoring, and Short-term Prediction

    NASA Astrophysics Data System (ADS)

    Kulchitsky, A.; Maurits, S.; Watkins, B.

    2006-12-01

    With the widespread availability of the Internet today, many people can monitor various scientific research activities. It is important to accommodate this interest by providing on-line access to dynamic and illustrative Web resources, which can demonstrate different aspects of ongoing research. It is especially important to explain these research activities to high school and undergraduate students, thereby providing more information for making decisions concerning their future studies. Such Web resources are also important to clarify scientific research for the general public, in order to achieve better awareness of research progress in various fields. Particularly rewarding is dissemination of information about ongoing projects within universities and research centers to their local communities. The benefits of this type of scientific outreach are mutual, since development of Web-based automatic systems is a prerequisite for many research projects targeting real-time monitoring and/or modeling of natural conditions. Continuous operation of such systems provides ongoing research opportunities for the statistically massive validation of the models, as well. We have developed a Web-based system to run the University of Alaska Fairbanks Polar Ionospheric Model in real-time. This model makes use of networking and computational resources at the Arctic Region Supercomputing Center. This system was designed to be portable among various operating systems and computational resources. Its components can be installed across different computers, separating Web servers and computational engines. The core of the system is a Real-Time Management module (RMM) written in Python, which facilitates interactions of remote input data transfers, the ionospheric model runs, MySQL database filling, and PHP scripts for the Web-page preparations. The RMM downloads current geophysical inputs as soon as they become available at different on-line repositories.
This information is processed to provide inputs for the next ionospheric model time step and then stored in a MySQL database as the first part of the time-specific record. The RMM then synchronizes the input times with the current model time, prepares a decision on initialization of the next model time step, and monitors its execution. As soon as the model completes computations for the next time step, the RMM visualizes the current model output into various short-term (about 1-2 hours) forecasting products and compares prior results with available ionospheric measurements. The RMM places the prepared images into the MySQL database, which can be located on a different computer node, and then proceeds to the next time interval, continuing the time loop. The upper-level interface of this real-time system is a PHP-based Web site (http://www.arsc.edu/SpaceWeather/new). This site provides general information about the Earth's polar and adjacent mid-latitude ionosphere, allows for monitoring of current developments and short-term forecasts, and facilitates access to the comparisons archive stored in the database.
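    The RMM's time loop — fetch the latest inputs, advance the model one step, persist the outputs — reduces to a small pluggable cycle. A schematic sketch only; the function names are assumptions, and the real system's callables wrap the on-line input downloads, the ionospheric model run, and the MySQL/visualization writes:

```python
def run_cycle(state, fetch_inputs, step_model, store_outputs):
    """One pass of a real-time management loop like the RMM's."""
    inputs = fetch_inputs()                     # latest geophysical inputs
    state, outputs = step_model(state, inputs)  # advance model one time step
    store_outputs(outputs)                      # e.g. write record + plots to MySQL
    return state

def run_forecast(state, n_steps, fetch, step, store):
    """Drive the cycle over n_steps, as the RMM does continuously."""
    for _ in range(n_steps):
        state = run_cycle(state, fetch, step, store)
    return state
```

Separating the cycle from its callables is what lets the Web server, database, and computational engine live on different machines, as the abstract describes.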

  9. The Mouse Tumor Biology Database: A Comprehensive Resource for Mouse Models of Human Cancer.

    PubMed

    Krupke, Debra M; Begley, Dale A; Sundberg, John P; Richardson, Joel E; Neuhauser, Steven B; Bult, Carol J

    2017-11-01

    Research using laboratory mice has led to fundamental insights into the molecular genetic processes that govern cancer initiation, progression, and treatment response. Although thousands of scientific articles have been published about mouse models of human cancer, collating information and data for a specific model is hampered by the fact that many authors do not adhere to existing annotation standards when describing models. The interpretation of experimental results in mouse models can also be confounded when researchers do not factor in the effect of genetic background on tumor biology. The Mouse Tumor Biology (MTB) database is an expertly curated, comprehensive compendium of mouse models of human cancer. Through the enforcement of nomenclature and related annotation standards, MTB supports aggregation of data about a cancer model from diverse sources and assessment of how genetic background of a mouse strain influences the biological properties of a specific tumor type and model utility. Cancer Res; 77(21); e67-70. ©2017 American Association for Cancer Research.

  10. The rate of growth in scientific publication and the decline in coverage provided by Science Citation Index.

    PubMed

    Larsen, Peder Olesen; von Ins, Markus

    2010-09-01

    The growth rate of scientific publication has been studied from 1907 to 2007 using available data from a number of literature databases, including Science Citation Index (SCI) and Social Sciences Citation Index (SSCI). Traditional scientific publishing, that is publication in peer-reviewed journals, is still increasing although there are big differences between fields. There are no indications that the growth rate has decreased in the last 50 years. At the same time publication using new channels, for example conference proceedings, open archives and home pages, is growing fast. The growth rate for SCI up to 2007 is smaller than for comparable databases. This means that SCI was covering a decreasing part of the traditional scientific literature. There are also clear indications that the coverage by SCI is especially low in some of the scientific areas with the highest growth rate, including computer science and engineering sciences. The role of conference proceedings, open access archives and publications published on the net is increasing, especially in scientific fields with high growth rates, but this has only partially been reflected in the databases. The new publication channels challenge the use of the big databases in measurements of scientific productivity or output and of the growth rate of science. Because of the declining coverage and this challenge it is problematic that SCI has been used and is used as the dominant source for science indicators based on publication and citation numbers. The limited data available for social sciences show that the growth rate in SSCI was remarkably low and indicate that the coverage by SSCI was declining over time. National Science Indicators from Thomson Reuters is based solely on SCI, SSCI and Arts and Humanities Citation Index (AHCI). Therefore the declining coverage of the citation databases problematizes the use of this source.

  11. The rate of growth in scientific publication and the decline in coverage provided by Science Citation Index

    PubMed Central

    von Ins, Markus

    2010-01-01

    The growth rate of scientific publication has been studied from 1907 to 2007 using available data from a number of literature databases, including Science Citation Index (SCI) and Social Sciences Citation Index (SSCI). Traditional scientific publishing, that is, publication in peer-reviewed journals, is still increasing, although there are big differences between fields. There are no indications that the growth rate has decreased in the last 50 years. At the same time, publication through new channels, for example conference proceedings, open archives and home pages, is growing fast. The growth rate for SCI up to 2007 is smaller than for comparable databases, which means that SCI was covering a decreasing part of the traditional scientific literature. There are also clear indications that the coverage by SCI is especially low in some of the scientific areas with the highest growth rates, including computer science and the engineering sciences. The role of conference proceedings, open access archives and publications published on the net is increasing, especially in scientific fields with high growth rates, but this has only partially been reflected in the databases. The new publication channels challenge the use of the big databases in measurements of scientific productivity or output and of the growth rate of science. Because of the declining coverage and this challenge, it is problematic that SCI has been, and still is, used as the dominant source for science indicators based on publication and citation numbers. The limited data available for the social sciences show that the growth rate in SSCI was remarkably low and indicate that the coverage by SSCI was declining over time. National Science Indicators from Thomson Reuters is based solely on SCI, SSCI and Arts and Humanities Citation Index (AHCI); the declining coverage of the citation databases therefore calls the use of this source into question. PMID:20700371

  12. An adaptable XML based approach for scientific data management and integration

    NASA Astrophysics Data System (ADS)

    Wang, Fusheng; Thiel, Florian; Furrer, Daniel; Vergara-Niedermayr, Cristobal; Qin, Chen; Hackenberg, Georg; Bourgue, Pierre-Emmanuel; Kaltschmidt, David; Wang, Mo

    2008-03-01

    The increased complexity of scientific research poses new challenges for scientific data management. Meanwhile, scientific collaboration is becoming increasingly important and relies on integrating and sharing data from distributed institutions. We develop SciPort, a Web-based platform supporting scientific data management and integration through a central-server-based distributed architecture, in which researchers can easily collect, publish, and share their complex scientific data across multiple institutions. SciPort provides an XML-based general approach to modeling complex scientific data by representing them as XML documents. The documents capture not only hierarchically structured data but also images and raw data, through references. In addition, SciPort provides an XML-based hierarchical organization of the overall data space to make quick browsing convenient. To provide generality, schemas and hierarchies are customizable with XML-based definitions, so the system can be quickly adapted to different applications. While each institution can manage documents on a Local SciPort Server independently, selected documents can be published to a Central Server to form a global view of shared data across all sites. By storing documents in a native XML database, SciPort provides high schema extensibility and supports comprehensive queries through XQuery. By providing a unified and effective means for data modeling, data access and customization with XML, SciPort offers a flexible and powerful platform for sharing scientific data among research communities, and it has been successfully used in both biomedical research and clinical trials.

  13. An Adaptable XML Based Approach for Scientific Data Management and Integration.

    PubMed

    Wang, Fusheng; Thiel, Florian; Furrer, Daniel; Vergara-Niedermayr, Cristobal; Qin, Chen; Hackenberg, Georg; Bourgue, Pierre-Emmanuel; Kaltschmidt, David; Wang, Mo

    2008-02-20

    The increased complexity of scientific research poses new challenges for scientific data management. Meanwhile, scientific collaboration is becoming increasingly important and relies on integrating and sharing data from distributed institutions. We develop SciPort, a Web-based platform supporting scientific data management and integration through a central-server-based distributed architecture, in which researchers can easily collect, publish, and share their complex scientific data across multiple institutions. SciPort provides an XML-based general approach to modeling complex scientific data by representing them as XML documents. The documents capture not only hierarchically structured data but also images and raw data, through references. In addition, SciPort provides an XML-based hierarchical organization of the overall data space to make quick browsing convenient. To provide generality, schemas and hierarchies are customizable with XML-based definitions, so the system can be quickly adapted to different applications. While each institution can manage documents on a Local SciPort Server independently, selected documents can be published to a Central Server to form a global view of shared data across all sites. By storing documents in a native XML database, SciPort provides high schema extensibility and supports comprehensive queries through XQuery. By providing a unified and effective means for data modeling, data access and customization with XML, SciPort offers a flexible and powerful platform for sharing scientific data among research communities, and it has been successfully used in both biomedical research and clinical trials.
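
    The document model described in this abstract can be illustrated with a small sketch. All element names, values, and the file reference below are invented for illustration, and Python's ElementTree path queries stand in for the XQuery a native XML database would actually run:

```python
import xml.etree.ElementTree as ET

# Hypothetical SciPort-style document: one XML document nests hierarchically
# structured data, while images and raw data are captured through references.
doc = ET.Element("experiment", id="exp-001")
meta = ET.SubElement(doc, "metadata")
ET.SubElement(meta, "institution").text = "Site A"
ET.SubElement(meta, "date").text = "2008-02-20"
results = ET.SubElement(doc, "results")
vol = ET.SubElement(results, "measurement", name="tumor_volume", unit="mm3")
vol.text = "142.5"
# Raw data are referenced, not embedded:
ET.SubElement(doc, "rawData", href="file://siteA/scans/exp-001.dcm")

# Path query over the document (a native XML database would use XQuery):
volumes = [float(e.text)
           for e in doc.findall(".//measurement[@name='tumor_volume']")]
print(volumes)  # [142.5]
```

    A Central Server aggregating such documents would evaluate the same kind of query across all published documents rather than over a single in-memory tree.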

  14. Oceans of Data: In what ways can learning research inform the development of electronic interfaces and tools for use by students accessing large scientific databases?

    NASA Astrophysics Data System (ADS)

    Krumhansl, R. A.; Foster, J.; Peach, C. L.; Busey, A.; Baker, I.

    2012-12-01

    The practice of science and engineering is being revolutionized by the development of cyberinfrastructure for accessing near real-time and archived observatory data. Large cyberinfrastructure projects have the potential to transform the way science is taught in high school classrooms, making enormous quantities of scientific data available, giving students opportunities to analyze and draw conclusions from many kinds of complex data, and providing students with experiences using state-of-the-art resources and techniques for scientific investigations. However, online interfaces to scientific data are built by scientists for scientists, and their design can significantly impede broad use by novices. Knowledge relevant to the design of student interfaces to complex scientific databases is broadly dispersed among disciplines ranging from cognitive science to computer science and cartography and is not easily accessible to designers of educational interfaces. To inform efforts at bridging scientific cyberinfrastructure to the high school classroom, Education Development Center, Inc. and the Scripps Institution of Oceanography conducted an NSF-funded 2-year interdisciplinary review of literature and expert opinion pertinent to making interfaces to large scientific databases accessible to and usable by precollege learners and their teachers. Project findings are grounded in the fundamentals of Cognitive Load Theory, visual perception, schema formation and Universal Design for Learning. The Knowledge Status Report (KSR) presents cross-cutting and visualization-specific guidelines that highlight how interface design features can address or ameliorate the challenges novice high school students face as they navigate complex databases to find data, and construct and look for patterns in maps, graphs, animations and other data visualizations. The guidelines present ways to make scientific databases more broadly accessible by: 1) adjusting the cognitive load imposed by the user interface and visualizations so that it does not exceed the amount of information the learner can actively process; 2) drawing attention to important features and patterns; and 3) enabling customization of visualizations and tools to meet the needs of diverse learners.

  15. The AMMA database

    NASA Astrophysics Data System (ADS)

    Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim

    2010-05-01

    The AMMA project includes aircraft, ground-based and ocean measurements, an intensive use of satellite data and diverse modelling studies. The AMMA database therefore aims at storing a great amount and a large variety of data, and at providing the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: - AMMA field campaign datasets; - historical data in West Africa from 1850 (operational networks and previous scientific programs); - satellite products from past and future satellites, (re-)mapped onto a regular latitude/longitude grid and stored in NetCDF format (CF Convention); - model outputs from atmosphere or ocean operational (re-)analyses and forecasts, and from research simulations; these outputs are processed in the same way as the satellite products. Before accessing the data, every user has to sign the AMMA data and publication policy. This charter covers only the use of data for scientific objectives and categorically excludes the redistribution of data to third parties and usage for commercial applications. Collaboration between data producers and users, and mention of the AMMA project in any publication, are also required. The AMMA database and the associated on-line tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris, and OMP, Toulouse). Users can access the data of both data centres through a single web portal. This website is composed of different modules: - Registration: forms to register and to read and sign the data use charter when a user visits for the first time; - Data access interface: a user-friendly tool for building a data extraction request by selecting various criteria like location, time, parameters... The request can concern local, satellite and model data. - Documentation: a catalogue of all the available data and their metadata. These tools have been developed using standard and free languages and software: - a Linux system with an Apache web server and a Tomcat application server; - J2EE tools: the JSF and Struts frameworks, Hibernate; - relational database management systems: PostgreSQL and MySQL; - an OpenLDAP directory. In order to facilitate access to the data by African scientists, the complete system has been mirrored at the AGRHYMET Regional Centre in Niamey and has been operational there since January 2009. Users can now access metadata and request data through either of two equivalent portals: http://database.amma-international.org or http://amma.agrhymet.ne/amma-data.

  16. SciELO, Scientific Electronic Library Online, a Database of Open Access Journals

    ERIC Educational Resources Information Center

    Meneghini, Rogerio

    2013-01-01

    This essay discusses SciELO, a scientific journal database operating in 14 countries. It covers over 1000 journals providing open access to full text and table sets of scientometrics data. In Brazil it is responsible for a collection of nearly 300 journals, selected along 15 years as the best Brazilian periodicals in natural and social sciences.…

  17. Instruments of scientific visual representation in atomic databases

    NASA Astrophysics Data System (ADS)

    Kazakov, V. V.; Kazakov, V. G.; Meshkov, O. I.

    2017-10-01

    Graphic tools for the representation of spectral data provided by the operating information systems on atomic spectroscopy—ASD NIST, VAMDC, SPECTR-W3, and Electronic Structure of Atoms—in support of scientific research and human-resource development are presented. Tools for the visual representation of scientific data, such as spectrogram and Grotrian diagram plotting, are considered. The possibility of comparative analysis of experimentally obtained spectra against reference spectra of atomic systems generated from a resource's database is described. Techniques for accessing the mentioned graphic tools are presented.

  18. Simple re-instantiation of small databases using cloud computing.

    PubMed

    Tan, Tin Wee; Xie, Chao; De Silva, Mark; Lim, Kuan Siong; Patro, C Pawan K; Lim, Shen Jean; Govindarajan, Kunde Ramamoorthy; Tong, Joo Chuan; Choo, Khar Heng; Ranganathan, Shoba; Khan, Asif M

    2013-01-01

    Small bioinformatics databases, unlike institutionally funded large databases, are vulnerable to discontinuation, and many reported in publications are no longer accessible. This leads to irreproducible scientific work and redundant effort, impeding the pace of scientific progress. We describe a Web-accessible system, available online at http://biodb100.apbionet.org, for archival and future on-demand re-instantiation of small databases within minutes. Depositors can rebuild their databases by downloading a Linux live operating system (http://www.bioslax.com), preinstalled with bioinformatics and UNIX tools. The database and its dependencies can be compressed into an ".lzm" file for deposition. End-users can search for archived databases and activate them on dynamically re-instantiated BioSlax instances, run as virtual machines on two popular full-virtualization cloud-computing platforms, Xen Hypervisor and vSphere. The system is adaptable to increasing demand for disk storage or computational load and allows database developers to use the re-instantiated databases for integration and development of new databases. Herein, we demonstrate that a relatively inexpensive solution can be implemented for the archival of bioinformatics databases and their rapid re-instantiation should the live databases disappear.

  19. Simple re-instantiation of small databases using cloud computing

    PubMed Central

    2013-01-01

    Background Small bioinformatics databases, unlike institutionally funded large databases, are vulnerable to discontinuation, and many reported in publications are no longer accessible. This leads to irreproducible scientific work and redundant effort, impeding the pace of scientific progress. Results We describe a Web-accessible system, available online at http://biodb100.apbionet.org, for archival and future on-demand re-instantiation of small databases within minutes. Depositors can rebuild their databases by downloading a Linux live operating system (http://www.bioslax.com), preinstalled with bioinformatics and UNIX tools. The database and its dependencies can be compressed into an ".lzm" file for deposition. End-users can search for archived databases and activate them on dynamically re-instantiated BioSlax instances, run as virtual machines on two popular full-virtualization cloud-computing platforms, Xen Hypervisor and vSphere. The system is adaptable to increasing demand for disk storage or computational load and allows database developers to use the re-instantiated databases for integration and development of new databases. Conclusions Herein, we demonstrate that a relatively inexpensive solution can be implemented for the archival of bioinformatics databases and their rapid re-instantiation should the live databases disappear. PMID:24564380

  20. Toward the Assessment of Scientific and Public Health Impacts of the National Institute of Environmental Health Sciences Extramural Asthma Research Program Using Available Data

    PubMed Central

    Liebow, Edward; Phelps, Jerry; Van Houten, Bennett; Rose, Shyanika; Orians, Carlyn; Cohen, Jennifer; Monroe, Philip; Drew, Christina H.

    2009-01-01

    Background In the past 15 years, asthma prevalence has increased and is disproportionately distributed among children, minorities, and low-income persons. The National Institute of Environmental Health Sciences (NIEHS) Division of Extramural Research and Training developed a framework to measure the scientific and health impacts of its extramural asthma research to improve the scientific basis for reducing the health effects of asthma. Objectives Here we apply the framework to characterize the NIEHS asthma portfolio’s impact in terms of publications, clinical applications of findings, community interventions, and technology developments. Methods A logic model was tailored to inputs, outputs, and outcomes of the NIEHS asthma portfolio. Data from existing National Institutes of Health (NIH) databases are used, along with publicly available bibliometric data and structured elicitation of expert judgment. Results NIEHS is the third largest source of asthma-related research grant funding within the NIH between 1975 and 2005, after the National Heart, Lung, and Blood Institute and the National Institute of Allergy and Infectious Diseases. Much of NIEHS-funded asthma research focuses on basic research, but results are often published in journals focused on clinical investigation, increasing the likelihood that the work is moved into practice along the “bench to bedside” continuum. NIEHS support has led to key breakthroughs in scientific research concerning susceptibility to asthma, environmental conditions that heighten asthma symptoms, and cellular mechanisms that may be involved in treating asthma. Conclusions If gaps and limitations in publicly available data receive adequate attention, further linkages can be demonstrated between research activities and public health improvements. This logic model approach to research impact assessment demonstrates that it is possible to conceptualize program components, mine existing databases, and begin to show longer-term impacts of program results. The next challenges will be to modify current data structures, improve the linkages among relevant databases, incorporate as much electronically available data as possible, and determine how to improve the quality and health impact of the science that we support. PMID:19654926

  1. Dam Removal Information Portal (DRIP)—A map-based resource linking scientific studies and associated geospatial information about dam removals

    USGS Publications Warehouse

    Duda, Jeffrey J.; Wieferich, Daniel J.; Bristol, R. Sky; Bellmore, J. Ryan; Hutchison, Vivian B.; Vittum, Katherine M.; Craig, Laura; Warrick, Jonathan A.

    2016-08-18

    The removal of dams has recently increased over historical levels due to aging infrastructure, changing societal needs, and modern safety standards rendering some dams obsolete. Where the possibilities for river restoration or improved safety exceed the benefits of retaining a dam, removal is increasingly considered a viable option. Yet, because this is a relatively new development in the history of river management, science is just beginning to guide our understanding of the physical and ecological implications of dam removal. Ultimately, the “lessons learned” from previous scientific studies on the outcomes of dam removal could inform future scientific understanding of ecosystem outcomes, as well as aid decision-making by stakeholders. We created a database visualization tool, the Dam Removal Information Portal (DRIP), to display map-based, interactive information about the scientific studies associated with dam removals. Serving both as a bibliographic source and as a link to other existing databases, such as the National Hydrography Dataset, the derived National Dam Removal Science Database is the foundation for a Web-based application that synthesizes the existing scientific studies associated with dam removals. Using the DRIP application, users can explore information about completed dam removal projects (for example, their location, height, and date removed), as well as discover the sources and details of associated scientific studies. As such, DRIP is intended to be a dynamic collection of scientific information related to dams that have been removed in the United States and elsewhere. This report describes the architecture and concepts of this “metaknowledge” database and the DRIP visualization tool.

  2. Domain fusion analysis by applying relational algebra to protein sequence and domain databases

    PubMed Central

    Truong, Kevin; Ikura, Mitsuhiko

    2003-01-01

    Background Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method to perform domain fusion analysis that leverages these efforts will become increasingly powerful. Results This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From the scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypotheses. Results can be viewed at . Conclusion As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time. PMID:12734020
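
    The relational-algebra idea behind domain fusion analysis can be sketched in standard SQL. The schema and data below are invented for illustration: a "fusion" protein carrying two domains in one organism predicts a functional link between the separate proteins that carry those domains individually in another organism.

```python
import sqlite3

# Toy domain-assignment table (invented data): P_fused carries DomA and
# DomB on one polypeptide; P1 and P2 carry them separately elsewhere.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE assignment (protein TEXT, organism TEXT, domain TEXT);
INSERT INTO assignment VALUES
  ('P_fused', 'E. coli',    'DomA'),
  ('P_fused', 'E. coli',    'DomB'),
  ('P1',      'H. sapiens', 'DomA'),
  ('P2',      'H. sapiens', 'DomB');
""")

# Pairs of distinct same-organism proteins whose domains co-occur on a
# single (fusion) protein elsewhere -> putative functional linkage.
rows = con.execute("""
SELECT DISTINCT a1.protein, a2.protein
FROM assignment a1
JOIN assignment a2 ON a1.organism = a2.organism AND a1.protein < a2.protein
JOIN assignment f1 ON f1.domain = a1.domain
JOIN assignment f2 ON f2.domain = a2.domain AND f2.protein = f1.protein
WHERE f1.protein NOT IN (a1.protein, a2.protein)
""").fetchall()
print(rows)  # [('P1', 'P2')]
```

    Because the whole analysis is a single join expression, it can be recomputed cheaply whenever the underlying sequence and domain tables are updated, which is the dynamic-update property the authors emphasize.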

  3. A systematic review of model-based economic evaluations of diagnostic and therapeutic strategies for lower extremity artery disease.

    PubMed

    Vaidya, Anil; Joore, Manuela A; ten Cate-Hoek, Arina J; Kleinegris, Marie-Claire; ten Cate, Hugo; Severens, Johan L

    2014-01-01

    Lower extremity artery disease (LEAD) is a sign of widespread atherosclerosis that also affects the coronary, cerebral and renal arteries and is associated with an increased risk of cardiovascular events. Many economic evaluations have been published for LEAD because of its clinical, social and economic importance. The aim of this systematic review was to assess the modelling methods used in published economic evaluations in the field of LEAD. Our review appraised and compared the general characteristics, model structure and methodological quality of published models. The electronic databases MEDLINE and EMBASE were searched up to February 2013 via the OVID interface. The Cochrane Database of Systematic Reviews, the Health Technology Assessment database hosted by the National Institute for Health Research, and the NHS Economic Evaluation Database (NHSEED) were also searched. The methodological quality of the included studies was assessed using the Philips checklist. Sixteen model-based economic evaluations were identified and included. Eleven models compared therapeutic health technologies; three models compared diagnostic tests and two models compared a combination of diagnostic and therapeutic options for LEAD. The results of this systematic review revealed an acceptable to low methodological quality in the included studies. Methodological diversity and insufficient information posed a challenge for valid comparison of the included studies. In conclusion, there is a need for transparent, methodologically comparable and scientifically credible model-based economic evaluations in the field of LEAD. Future modelling studies should include clinically and economically important cardiovascular outcomes to reflect the wider impact of LEAD on individual patients and on society.

  4. Reef Ecosystem Services and Decision Support Database

    EPA Science Inventory

    This scientific and management information database utilizes systems thinking to describe the linkages between decisions, human activities, and provisioning of reef ecosystem goods and services. This database provides: (1) Hierarchy of related topics - Click on topics to navigat...

  5. The Neotoma Paleoecology Database: An International Community-Curated Resource for Paleoecological and Paleoenvironmental Data

    NASA Astrophysics Data System (ADS)

    Williams, J. W.; Grimm, E. C.; Ashworth, A. C.; Blois, J.; Charles, D. F.; Crawford, S.; Davis, E.; Goring, S. J.; Graham, R. W.; Miller, D. A.; Smith, A. J.; Stryker, M.; Uhen, M. D.

    2017-12-01

    The Neotoma Paleoecology Database supports global change research at the intersection of geology and ecology by providing a high-quality, community-curated data repository for paleoecological data. These data are widely used to study biological responses and feedbacks to past environmental change at local to global scales. The Neotoma data model is flexible and can store multiple kinds of fossil, biogeochemical, or physical variables measured from sedimentary archives. Data additions to Neotoma are growing and include >3.5 million observations, >16,000 datasets, and >8,500 sites. Dataset types include fossil pollen, vertebrates, diatoms, ostracodes, macroinvertebrates, plant macrofossils, insects, testate amoebae, geochronological data, and the recently added organic biomarkers, stable isotopes, and specimen-level data. Neotoma data can be found and retrieved in multiple ways, including the Explorer map-based interface, a RESTful Application Programming Interface, the neotoma R package, and digital object identifiers. Neotoma has partnered with the Paleobiology Database to produce a common data portal for paleobiological data, called the Earth Life Consortium. A new embargo management system is designed to allow investigators to put their data into Neotoma and then make use of Neotoma's value-added services. Neotoma's distributed scientific governance model is flexible and scalable, with many open pathways for welcoming new members, data contributors, stewards, and research communities. As the volume and variety of scientific data grow, community-curated data resources such as Neotoma have become foundational infrastructure for big data science.

  6. NASA's experience in the international exchange of scientific and technical information in the aerospace field

    NASA Technical Reports Server (NTRS)

    Thibideau, Philip A.

    1989-01-01

    The early NASA international scientific and technical information (STI) exchange arrangements were usually detailed in correspondence with the librarians of the institutions involved. While this type of exchange, which involved only hardcopy (paper) products, grew to include some 220 organizations in 43 countries, NASA's main focus shifted substantially to the STI relationship with the European Space Agency (ESA), which began in 1964. The NASA/ESA Tripartite Exchange Program, which now has more than 500 participants, provides more than 4,000 highly relevant technical reports, fully processed, for the NASA-produced 'Aerospace Database'. In turn, NASA provides an updated copy of this Database, known in Europe as the 'NASA File', for access, through ESA's Information Retrieval Service, by participating European organizations. Our experience in the evolving cooperation with ESA has established the 'model' for our more recent exchange agreements with Israel, Australia, and Canada, and for one under negotiation with Japan. The results of these agreements are made available to participating European organizations through the NASA File.

  7. IRIS Toxicological Review of Ethylene Glycol Mono-Butyl ...

    EPA Pesticide Factsheets

    EPA has conducted a peer review of the scientific basis supporting the human health hazard and dose-response assessment of ethylene glycol monobutyl ether that will appear on the Integrated Risk Information System (IRIS) database. EPA is conducting a peer review of the scientific basis supporting the human health hazard and dose-response assessment of propionaldehyde that will appear on the Integrated Risk Information System (IRIS) database.

  8. Modeling and Databases for Teaching Petrology

    NASA Astrophysics Data System (ADS)

    Asher, P.; Dutrow, B.

    2003-12-01

    With the widespread availability of high-speed computers with massive storage and the ready transport capability of large amounts of data, computational and petrologic modeling and the use of databases provide new tools with which to teach petrology. Modeling can be used to gain insights into a system, predict system behavior, describe a system's processes, compare with a natural system, or simply to illustrate. These aspects result from data-driven or empirical, analytical or numerical models, or the concurrent examination of multiple lines of evidence. At the same time, the use of models can enhance core foundations of the geosciences by improving critical thinking skills and by reinforcing prior knowledge. However, the use of modeling to teach petrology is dictated by the level of expectation we have for students and their facility with modeling approaches. For example, do we expect students to push buttons and navigate a program, understand the conceptual model, and/or evaluate the results of a model? Whatever the desired level of sophistication, specific elements of design should be incorporated into a modeling exercise for effective teaching. These include, but are not limited to: use of the scientific method, use of prior knowledge, a clear statement of purpose and goals, attainable goals, a connection to the natural/actual system, a demonstration that complex heterogeneous natural systems are amenable to analysis by these techniques, and, ideally, connections to other disciplines and the larger earth system. Databases offer another avenue with which to explore petrology. Large datasets are available that allow the integration of multiple lines of evidence to attack a petrologic problem or understand a petrologic process. These are collected into a database that offers a tool for exploring, organizing and analyzing the data. For example, datasets may be geochemical, mineralogic, experimental and/or visual in nature, covering global, regional and local scales. These datasets provide students with access to large amounts of related data through space and time. Goals of the database working group include educating earth scientists about information systems in general, about the importance of metadata, about ways of using databases and datasets as educational tools, and about the availability of existing datasets and databases. The modeling and database groups hope to create additional petrologic teaching tools along these lines and invite the community to contribute to the effort.

  9. Comprehensive European dietary exposure model (CEDEM) for food additives.

    PubMed

    Tennant, David R

    2016-05-01

    European methods for assessing dietary exposures to nutrients, additives and other substances in food are limited by the availability of detailed food consumption data for all member states. A proposed comprehensive European dietary exposure model (CEDEM) applies summary data published by the European Food Safety Authority (EFSA) in a deterministic model based on an algorithm from the EFSA intake method for food additives. The proposed approach can reproduce the estimates of food additive exposure provided in previous EFSA scientific opinions that were based on the full European food consumption database.
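
    The deterministic calculation underlying such a model can be sketched in a few lines. All food categories, use levels, and consumption figures below are invented for illustration; the general form multiplies consumption in each food category by the additive's use level in that category and sums, per kilogram of body weight:

```python
# Minimal deterministic exposure sketch (invented figures):
# exposure (mg/kg bw/day) =
#   sum over categories of consumption (g/kg bw/day) * use level (mg/g)
use_level_mg_per_g = {"soft drinks": 0.30, "confectionery": 1.0}
consumption_g_per_kg_bw = {"soft drinks": 5.0, "confectionery": 0.8}

exposure_mg_per_kg_bw = sum(
    consumption_g_per_kg_bw[cat] * use_level_mg_per_g[cat]
    for cat in use_level_mg_per_g
)
print(exposure_mg_per_kg_bw)  # 2.3
```

    A deterministic model of this kind trades the precision of individual-level consumption records for the availability of published summary statistics, which is the compromise the abstract describes.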

  10. Mouse Genome Database: From sequence to phenotypes and disease models

    PubMed Central

    Richardson, Joel E.; Kadin, James A.; Smith, Cynthia L.; Blake, Judith A.; Bult, Carol J.

    2015-01-01

    Summary The Mouse Genome Database (MGD, www.informatics.jax.org) is the international scientific database for genetic, genomic, and biological data on the laboratory mouse to support the research requirements of the biomedical community. To accomplish this goal, MGD provides broad data coverage, serves as the authoritative standard for mouse nomenclature for genes, mutants, and strains, and curates and integrates many types of data from literature and electronic sources. Among the key data sets MGD supports are: the complete catalog of mouse genes and genome features, comparative homology data for mouse and vertebrate genes, the authoritative set of Gene Ontology (GO) annotations for mouse gene functions, a comprehensive catalog of mouse mutations and their phenotypes, and a curated compendium of mouse models of human diseases. Here, we describe the data acquisition process, specifics about MGD's key data areas, methods to access and query MGD data, and outreach and user help facilities. genesis 53:458–473, 2015. © 2015 The Authors. Genesis Published by Wiley Periodicals, Inc. PMID:26150326

  11. Non-Coding RNA Analysis Using the Rfam Database.

    PubMed

    Kalvari, Ioanna; Nawrocki, Eric P; Argasinska, Joanna; Quinones-Olvera, Natalia; Finn, Robert D; Bateman, Alex; Petrov, Anton I

    2018-06-01

    Rfam is a database of non-coding RNA families in which each family is represented by a multiple sequence alignment, a consensus secondary structure, and a covariance model. Using a combination of manual and literature-based curation and a custom software pipeline, Rfam converts descriptions of RNA families found in the scientific literature into computational models that can be used to annotate RNAs belonging to those families in any DNA or RNA sequence. Valuable research outputs that are often locked up in figures and supplementary information files are encapsulated in Rfam entries and made accessible through the Rfam Web site. The data produced by Rfam have a broad application, from genome annotation to providing training sets for algorithm development. This article gives an overview of how to search and navigate the Rfam Web site, and how to annotate sequences with RNA families. The Rfam database is freely available at http://rfam.org. © 2018 by John Wiley & Sons, Inc.

  12. International Collaboration in Data Management for Scientific Ocean Drilling: Preserving Legacy Data While Implementing New Requirements.

    NASA Astrophysics Data System (ADS)

    Rack, F. R.

    2005-12-01

    The Integrated Ocean Drilling Program (IODP: 2003-2013 initial phase) is the successor to the Deep Sea Drilling Project (DSDP: 1968-1983) and the Ocean Drilling Program (ODP: 1985-2003). These earlier scientific drilling programs amassed collections of sediment and rock cores (over 300 kilometers stored in four repositories) and data organized in distributed databases and in print or electronic publications. International members of the IODP have established, through memoranda, the right to have access to: (1) all data, samples, scientific and technical results, all engineering plans, data or other information produced under contract to the program; and (2) all data from geophysical and other site surveys performed in support of the program which are used for drilling planning. The challenge that faces the individual platform operators and management of IODP is to find the right balance and appropriate synergies among the needs, expectations and requirements of stakeholders. The evolving model for IODP database services consists of the management and integration of data collected onboard the various IODP platforms (including downhole logging and syn-cruise site survey information), legacy data from DSDP and ODP, data derived from post-cruise research and publications, and other IODP-relevant information types, to form a common, program-wide IODP information system (e.g., IODP Portal) which will be accessible to both researchers and the public. The JANUS relational database of ODP was introduced in 1997 and the bulk of ODP shipboard data has been migrated into this system, whose relational data model comprises over 450 tables. The JANUS database includes paleontological, lithostratigraphic, chemical, physical, sedimentological, and geophysical data from a global distribution of sites. 
For ODP Legs 100 through 210, and including IODP Expeditions 301 through 308, JANUS has been used to store data from 233,835 meters of recovered core, comprising 38,039 cores and 202,281 core sections stored in repositories, from which 2,299,180 samples have been taken for scientists and other users (http://iodp.tamu.edu/janusweb/general/dbtable.cgi). JANUS and other IODP databases are viewed as components of an evolving distributed network of databases, supported by metadata catalogs and middleware with XML workflows, that are intended to provide access to DSDP/ODP/IODP cores and sample-based data as well as other distributed geoscience data collections (e.g., CHRONOS, PetDB, SedDB). These data resources can be explored through the use of emerging data visualization environments, such as GeoWall; CoreWall (http://www.evl.uic.edu/cavern/corewall), a multi-screen display for viewing cores and related data; GeoWall-2 and LambdaVision, a very-high-resolution, networked environment for data exploration and visualization; and others. The U.S. Implementing Organization (USIO) for the IODP, also known as the JOI Alliance, is a partnership between Joint Oceanographic Institutions (JOI), Texas A&M University, and Lamont-Doherty Earth Observatory of Columbia University. JOI is a consortium of 20 premier oceanographic research institutions that serves the U.S. scientific community by leading large-scale, global research programs in scientific ocean drilling and ocean observing. For more than 25 years, JOI has helped facilitate discovery and advance global understanding of the Earth and its oceans through excellence in program management.

  13. Annual Review of Database Developments: 1993.

    ERIC Educational Resources Information Center

    Basch, Reva

    1993-01-01

    Reviews developments in the database industry for 1993. Topics addressed include scientific and technical information; environmental issues; social sciences; legal information; business and marketing; news services; documentation; databases and document delivery; electronic bulletin boards and the Internet; and information industry organizational…

  14. Scaling behavior in the dynamics of citations to scientific journals

    NASA Astrophysics Data System (ADS)

    Picoli, S., Jr.; Mendes, R. S.; Malacarne, L. C.; Lenzi, E. K.

    2006-08-01

    We analyze a database comprising the impact factor (citations per recent items published) of scientific journals over a 13-year period (1992-2004). We find that (i) the distribution of impact factors follows asymptotic power-law behavior, (ii) the distribution of annual logarithmic growth rates has an exponential form, and (iii) the width of this distribution decays with the impact factor as a power law with exponent β ≈ 0.22. Results (ii) and (iii) are surprisingly similar to those observed in the growth dynamics of organizations with complex internal structure, suggesting the existence of common mechanisms underlying the dynamics of these systems. We propose a general model for such systems, an extension of the simplest model for firm growth, and compare its predictions with our empirical results.
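    The annual logarithmic growth rates analyzed in this record are simple to reproduce. Below is a minimal sketch in Python; the impact-factor series is hypothetical and purely illustrative, not taken from the paper's database:

```python
import math

def log_growth_rates(series):
    """Annual logarithmic growth rates r_t = ln(X_{t+1} / X_t)."""
    return [math.log(b / a) for a, b in zip(series, series[1:])]

# Hypothetical impact-factor series for one journal over four years.
jif = [2.0, 2.2, 2.1, 2.5]
rates = log_growth_rates(jif)
```

    Log growth rates are the natural quantity here because multiplicative year-on-year changes become additive, which makes the distribution of rates directly comparable across journals of very different impact.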

  15. ODP Legacy

    Science.gov Websites

    ODP Legacy website sections: Overview | Program Administration | Scientific Results | Engineering and Science Operations | Samples & Databases | Outreach.

  16. An Array Library for Microsoft SQL Server with Astrophysical Applications

    NASA Astrophysics Data System (ADS)

    Dobos, L.; Szalay, A. S.; Blakeley, J.; Falck, B.; Budavári, T.; Csabai, I.

    2012-09-01

    Today's scientific simulations produce output on the 10-100 TB scale. This unprecedented amount of data requires data handling techniques beyond what is used for ordinary files. Relational database systems have been successfully used to store and process scientific data, but the new requirements constantly generate new challenges. Moving terabytes of data among servers on a timely basis is a tough problem, even with the newest high-throughput networks. Thus, moving the computations as close to the data as possible and minimizing the client-server overhead are absolutely necessary. At least data subsetting and preprocessing have to be done inside the server process. Out-of-the-box commercial database systems perform very well in scientific applications from the perspective of data storage optimization, data retrieval, and memory management, but lack basic functionality like handling scientific data structures or enabling advanced math inside the database server. The most important gap in Microsoft SQL Server is the lack of a native array data type. Fortunately, the technology exists to extend the database server with custom-written code that enables us to address these problems. We present the prototype of a custom-built extension to Microsoft SQL Server that adds array handling functionality to the database system. With our Array Library, fixed-size arrays of all basic numeric data types can be created and manipulated efficiently. The library is also designed to integrate seamlessly with the most common math libraries, such as BLAS, LAPACK, FFTW, etc. With the help of these libraries, complex operations, such as matrix inversions or Fourier transformations, can be done on the fly, from SQL code, inside the database server process. 
We are currently testing the prototype with two different scientific data sets: The Indra cosmological simulation will use it to store particle and density data from N-body simulations, and the Milky Way Laboratory project will use it to store galaxy simulation data.
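    The core idea behind such an array extension — serializing a fixed-size numeric array into a single binary column value so it can live inside the database — can be illustrated with a short sketch. The length-prefixed little-endian layout below is an assumption chosen for illustration, not the Array Library's actual storage format:

```python
import struct

def pack_array(values, fmt="d"):
    # A 4-byte little-endian length header, followed by the elements as
    # fixed-size values (doubles by default) -- one blob per array column.
    header = struct.pack("<i", len(values))
    body = struct.pack(f"<{len(values)}{fmt}", *values)
    return header + body

def unpack_array(blob, fmt="d"):
    # Read the length header, then decode exactly that many elements.
    (n,) = struct.unpack_from("<i", blob)
    return list(struct.unpack_from(f"<{n}{fmt}", blob, 4))
```

    A layout like this lets server-side code slice or transform the array without round-tripping it to a client, which is the whole point of pushing computation into the database process.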

  17. The Crisis in Scholarly Communication, Open Access, and Open Data Policies: The Libraries' Perspective

    NASA Astrophysics Data System (ADS)

    Besara, Rachel

    2015-03-01

    For years the cost of STEM databases has risen faster than inflation. Libraries have reallocated funds to continue supporting their scientific communities, but many institutions are reaching the point where they can no longer provide access to databases considered standard for supporting research. A possible, if partial, alleviation of this problem is the federal open access mandate. However, this shift challenges the current model of publishing and data management in the sciences. This talk will discuss these topics from the perspective of research libraries supporting physics and the STEM disciplines.

  18. Simulation Platform: a cloud-based online simulation environment.

    PubMed

    Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro

    2011-09-01

    For multi-scale and multi-modal neural modeling, it is necessary to handle multiple neural models described at different levels seamlessly. Database technology will become more important for these studies, specifically for downloading and handling neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have been designed solely to archive model files, but databases should also give users a chance to validate models before downloading them. In this paper, we report our ongoing project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On each virtual machine, various software is pre-installed, including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation, without needing to install any software other than a web browser on the user's own computer. Therefore, Simulation Platform is expected to eliminate impediments to handling multiple neural models that require different software. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Reprint of: Simulation Platform: a cloud-based online simulation environment.

    PubMed

    Yamazaki, Tadashi; Ikeno, Hidetoshi; Okumura, Yoshihiro; Satoh, Shunji; Kamiyama, Yoshimi; Hirata, Yutaka; Inagaki, Keiichiro; Ishihara, Akito; Kannon, Takayuki; Usui, Shiro

    2011-11-01

    For multi-scale and multi-modal neural modeling, it is necessary to handle multiple neural models described at different levels seamlessly. Database technology will become more important for these studies, specifically for downloading and handling neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have been designed solely to archive model files, but databases should also give users a chance to validate models before downloading them. In this paper, we report our ongoing project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. On each virtual machine, various software is pre-installed, including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON and NEST, and scientific software such as Gnuplot, R and Octave. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation, without needing to install any software other than a web browser on the user's own computer. Therefore, Simulation Platform is expected to eliminate impediments to handling multiple neural models that require different software. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Geoinformatics in the public service: building a cyberinfrastructure across the geological surveys

    USGS Publications Warehouse

    Allison, M. Lee; Gundersen, Linda C.; Richard, Stephen M.; Keller, G. Randy; Baru, Chaitanya

    2011-01-01

    Advanced information technology infrastructure is increasingly being employed in the Earth sciences to provide researchers with efficient access to massive central databases and to integrate diversely formatted information from a variety of sources. These geoinformatics initiatives enable manipulation, modeling and visualization of data in a consistent way, and are helping to develop integrated Earth models at various scales, and from the near surface to the deep interior. This book uses a series of case studies to demonstrate computer and database use across the geosciences. Chapters are thematically grouped into sections that cover data collection and management; modeling and community computational codes; visualization and data representation; knowledge management and data integration; and web services and scientific workflows. Geoinformatics is a fascinating and accessible introduction to this emerging field for readers across the solid Earth sciences and an invaluable reference for researchers interested in initiating new cyberinfrastructure projects of their own.

  1. STI Handbook: Guidelines for Producing, Using, and Managing Scientific and Technical Information in the Department of the Navy. A Handbook for Navy Scientists and Engineers on the Use of Scientific and Technical Information

    DTIC Science & Technology

    1992-02-01

    6 What Information Should Be Included in the TR Database? 2-6 What Types of Media Can Be Used to Submit Information to the TR Database? 2-9 How Is...reports. Contract administration documents. Regulations. Commercially published books. WHAT TYPES OF MEDIA CAN BE USED TO SUBMIT INFORMATION TO THE TR...TOWARD DTIC'S WUIS DATABASE? The WUIS database, used to control and report technical and management data, summarizes ongoing research and technology

  2. Computer Databases as an Educational Tool in the Basic Sciences.

    ERIC Educational Resources Information Center

    Friedman, Charles P.; And Others

    1990-01-01

    The University of North Carolina School of Medicine developed a computer database, INQUIRER, containing scientific information in bacteriology, and then integrated the database into routine educational activities for first-year medical students in their microbiology course. (Author/MLW)

  3. Recent Advances in the GLIMS Glacier Database

    NASA Astrophysics Data System (ADS)

    Raup, Bruce; Cogley, Graham; Zemp, Michael; Glaus, Ladina

    2017-04-01

    Glaciers are shrinking almost without exception. Glacier losses have impacts on local water availability and hazards, and contribute to sea level rise. To understand these impacts and the processes behind them, it is crucial to monitor glaciers through time by mapping their areal extent, changes in volume, elevation distribution, snow lines, ice flow velocities, and changes to associated water bodies. The glacier database of the Global Land Ice Measurements from Space (GLIMS) initiative is the only multi-temporal glacier database capable of tracking all these glacier measurements and providing them to the scientific community and broader public. Here we present recent results in 1) expansion of the geographic and temporal coverage of the GLIMS Glacier Database by drawing on the Randolph Glacier Inventory (RGI) and other new data sets; 2) improved tools for visualizing and downloading GLIMS data in a choice of formats and data models; and 3) a new data model for handling multiple glacier records through time while avoiding double-counting of glacier number or area. The result of this work is a more complete glacier data repository that shows not only the current state of glaciers on Earth, but how they have changed in recent decades. The database is useful for tracking changes in water resources, hazards, and mass budgets of the world's glaciers.

  4. Scientific Communication of Geochemical Data and the Use of Computer Databases.

    ERIC Educational Resources Information Center

    Le Bas, M. J.; Durham, J.

    1989-01-01

    Describes a scheme in the United Kingdom that coordinates geochemistry publications with a computerized geochemistry database. The database comprises not only data published in the journals but also the remainder of the pertinent data set. The discussion covers the database design; collection, storage and retrieval of data; and plans for future…

  5. filltex: Automatic queries to ADS and INSPIRE databases to fill LaTex bibliography

    NASA Astrophysics Data System (ADS)

    Gerosa, Davide; Vallisneri, Michele

    2017-05-01

    filltex is a simple tool to fill LaTeX reference lists with records from the ADS and INSPIRE databases. ADS and INSPIRE are the most common databases used among the astronomy and theoretical physics scientific communities, respectively. filltex automatically looks for all citation labels present in a tex document and, by means of web-scraping, downloads all the required citation records from either of the two databases. filltex significantly speeds up the LaTeX scientific writing workflow, as all required actions (compile the tex file, fill the bibliography, compile the bibliography, compile the tex file again) are automated in a single command. We also provide an integration of filltex for the macOS LaTeX editor TexShop.
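    The first step such a tool automates — collecting all citation labels from a tex source — can be sketched as follows. This is a minimal illustration, not filltex's actual implementation; the regex covers only basic \cite, \citet and \citep forms with at most one optional argument:

```python
import re

# Matches \cite{...}, \citet{...}, \citep{...}, starred variants,
# and one optional [..] argument; group 1 is the comma-separated key list.
CITE_RE = re.compile(r"\\cite[tp]?\*?(?:\[[^\]]*\])?\{([^}]*)\}")

def citation_keys(tex_source):
    """Return citation keys in first-appearance order, without duplicates."""
    keys = []
    for match in CITE_RE.finditer(tex_source):
        for key in match.group(1).split(","):
            key = key.strip()
            if key and key not in keys:
                keys.append(key)
    return keys
```

    With the key list in hand, the remaining work is fetching one BibTeX record per key and writing them into the .bib file, which is what the web-scraping stage of the tool handles.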

  6. USGS cold-water coral geographic database-Gulf of Mexico and western North Atlantic Ocean, version 1.0

    USGS Publications Warehouse

    Scanlon, Kathryn M.; Waller, Rhian G.; Sirotek, Alexander R.; Knisel, Julia M.; O'Malley, John; Alesandrini, Stian

    2010-01-01

    The USGS Cold-Water Coral Geographic Database (CoWCoG) provides a tool for researchers and managers interested in studying, protecting, and/or utilizing cold-water coral habitats in the Gulf of Mexico and western North Atlantic Ocean.  The database makes information about the locations and taxonomy of cold-water corals available to the public in an easy-to-access form while preserving the scientific integrity of the data.  The database includes over 1700 entries, mostly from published scientific literature, museum collections, and other databases.  The CoWCoG database is easy to search in a variety of ways, and data can be quickly displayed in table form and on a map by using only the software included with this publication.  Subsets of the database can be selected on the basis of geographic location, taxonomy, or other criteria and exported to one of several available file formats.  Future versions of the database are being planned to cover a larger geographic area and additional taxa.

  7. Methods for structuring scientific knowledge from many areas related to aging research.

    PubMed

    Zhavoronkov, Alex; Cantor, Charles R

    2011-01-01

    Aging and age-related disease represents a substantial quantity of current natural, social and behavioral science research efforts. Presently, no centralized system exists for tracking aging research projects across numerous research disciplines. The multidisciplinary nature of this research complicates the understanding of underlying project categories, the establishment of project relations, and the development of a unified project classification scheme. We have developed a highly visual database, the International Aging Research Portfolio (IARP), available at AgingPortfolio.org, to address this issue. The database integrates information on research grants, peer-reviewed publications, and issued patent applications from multiple sources. Additionally, the database uses flexible project classification mechanisms and tools for analyzing project associations and trends. This system enables scientists to search the centralized project database, to classify and categorize aging projects, and to analyze funding across multiple research disciplines. The IARP is designed to improve the allocation and prioritization of scarce research funding, to reduce project overlap, and to improve scientific collaboration, thereby accelerating scientific and medical progress in a rapidly growing area of research. Grant applications often precede publications, and some grants do not result in publications; thus, this system offers an earlier and broader view of research activity in many research disciplines. This project is a first attempt to provide a centralized database system for research grants and to categorize aging research projects into multiple subcategories utilizing both advanced machine algorithms and a hierarchical environment for scientific collaboration.

  8. Mathematical Notation in Bibliographic Databases.

    ERIC Educational Resources Information Center

    Pasterczyk, Catherine E.

    1990-01-01

    Discusses ways in which using mathematical symbols to search online bibliographic databases in scientific and technical areas can improve search results. The representations used for Greek letters, relations, binary operators, arrows, and miscellaneous special symbols in the MathSci, Inspec, Compendex, and Chemical Abstracts databases are…

  9. On the frequency-magnitude distribution of converging boundaries

    NASA Astrophysics Data System (ADS)

    Marzocchi, W.; Laura, S.; Heuret, A.; Funiciello, F.

    2011-12-01

    The occurrence of the last mega-thrust earthquake in Japan has starkly highlighted the high risk posed to society by such events in terms of social and economic losses, even at large spatial scales. The primary component for a balanced and objective mitigation of the impact of these earthquakes is the correct forecast of where such events may occur in the future. To date, there is a wide range of opinions about where mega-thrust earthquakes can occur. Here we present a detailed statistical analysis of a database of worldwide interplate earthquakes occurring at current subduction zones. The database has been recently published in the framework of the EURYI Project 'Convergent margins and seismogenesis: defining the risk of great earthquakes by using statistical data and modelling', and it provides a unique opportunity to explore in detail the seismogenic process in subducting lithosphere. In particular, the statistical analysis of this database allows us to explore many interesting scientific issues, such as the existence of different frequency-magnitude distributions across the trenches, the quantitative characterization of subduction zones that are more likely to produce mega-thrust earthquakes, the prominent features that characterize converging boundaries with different seismic activity, and so on. Beyond their scientific importance, such issues may improve our mega-thrust earthquake forecasting capability.

  10. Biological data integration: wrapping data and tools.

    PubMed

    Lacroix, Zoé

    2002-06-01

    Nowadays scientific data is inevitably digital and stored in a wide variety of formats in heterogeneous systems. Scientists need to access an integrated view of remote or local heterogeneous data sources with advanced data accessing, analyzing, and visualization tools. Building a digital library for scientific data requires accessing and manipulating data extracted from flat files or databases, documents retrieved from the Web, as well as data generated by software. We present an approach to wrapping web data sources, databases, flat files, or data generated by tools through a database view mechanism. Generally, a wrapper has two tasks: it first sends a query to the source to retrieve data and, second, builds the expected output with respect to the virtual structure. Our wrappers are composed of a retrieval component based on an intermediate object view mechanism called search views, mapping the source capabilities to attributes, and an eXtensible Markup Language (XML) engine, respectively, to perform these two tasks. The originality of the approach consists of: 1) a generic view mechanism to seamlessly access data sources with limited capabilities and 2) the ability to wrap data sources as well as the useful specific tools they may provide. Our approach has been developed and demonstrated as part of the multidatabase system supporting queries via uniform object protocol model (OPM) interfaces.

  11. From data point timelines to a well curated data set, data mining of experimental data and chemical structure data from scientific articles, problems and possible solutions.

    PubMed

    Ruusmann, Villu; Maran, Uko

    2013-07-01

    The scientific literature is an important source of experimental and chemical structure data. Very often these data have been harvested into smaller or larger data collections, leaving data quality and curation issues on the shoulders of users. The current research presents a systematic and reproducible workflow for collecting series of data points from the scientific literature and assembling a database that is suitable for the purposes of high-quality modelling and decision support. The quality assurance aspect of the workflow is concerned with the curation of both chemical structures and associated toxicity values at (1) the single data point level and (2) the collection of data points level. The assembly of a database employs a novel "timeline" approach. The workflow is implemented as a software solution and its applicability is demonstrated on the example of the Tetrahymena pyriformis acute aquatic toxicity endpoint. A literature collection of 86 primary publications for T. pyriformis was found to contain 2,072 chemical compounds and 2,498 unique toxicity values, which divide into 2,440 numerical and 58 textual values. Every chemical compound was assigned a preferred toxicity value. Examples of the most common chemical and toxicological data curation scenarios are discussed.
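    The final curation step described here — assigning each compound one preferred toxicity value from its timeline of reported data points — can be sketched under a simplified rule. The rule below (latest numeric report wins) is a hypothetical stand-in; the paper's actual curation scenarios are richer:

```python
def preferred_value(points):
    """Pick one toxicity value per compound from its data-point timeline.

    Hypothetical rule for illustration: keep only numeric values and
    prefer the most recently reported one. Textual values (e.g. "toxic")
    never win, and a compound with no numeric value gets None.
    """
    numeric = [p for p in points if isinstance(p["value"], (int, float))]
    if not numeric:
        return None
    return max(numeric, key=lambda p: p["year"])["value"]
```

    Whatever the actual precedence rule, making it an explicit function is what turns an ad-hoc collection of literature values into a reproducible, re-runnable curation workflow.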

  12. Rapid Scientific Promotion of Scientific Productions in Stem Cells According to The Indexed Papers in The ISI (web of knowledge).

    PubMed

    Alijani, Rahim

    2015-01-01

    In recent years emphasis has been placed on evaluation studies and the publication of scientific papers in national and international journals. In this regard the publication of scientific papers in journals indexed in the Institute for Scientific Information (ISI) database is highly recommended. The evaluation of scientific output via articles in journals indexed in the ISI database will enable the Iranian research authorities to allocate and organize research budgets and human resources in a way that maximises efficient science production. The purpose of the present paper is to publish a general and valid view of science production in the field of stem cells. In this research, outputs in the field of stem cell research are evaluated using scientometrics, the survey-based method of science assessment. A total of 1528 documents was extracted from the ISI database and analysed using descriptive statistics software in Excel. The results of this research showed that 1528 papers in the stem cell field in the Web of Knowledge database were produced by Iranian researchers. The top ten Iranian researchers in this field have produced 936 of these papers, equivalent to 61.3% of the total. Among the top ten, Soleimani M. ranks first with 181 papers. Regarding international scientific participation, Iranian researchers have cooperated to publish papers with researchers from 50 countries. Nearly 32% (452 papers) of the total research output in this field has been published in the top 10 journals. These results show that a small number of researchers have published the majority of papers in the stem cell field. International participation in this field of research is unacceptably low. Such participation provides the opportunity to import modern science and international experience into Iran. This not only drives scientific growth, but also improves research quality and enhances opportunities for employment and professional development. 
Iranian scientific outputs from stem cell research should not be limited to only a few specific journals.

  13. The Master Lens Database and The Orphan Lenses Project

    NASA Astrophysics Data System (ADS)

    Moustakas, Leonidas

    2012-10-01

    Strong gravitational lenses are uniquely suited for the study of dark matter structure and substructure within massive halos of many scales, act as gravitational telescopes for distant faint objects, and can give powerful and competitive cosmological constraints. While hundreds of strong lenses are known to date, spanning five orders of magnitude in mass scale, thousands will be identified this decade. To fully exploit the power of these objects presently, and in the near future, we are creating the Master Lens Database. This is a clearinghouse of all known strong lens systems, with a sophisticated and modern database of uniformly measured and derived observational and lens-model derived quantities, using archival Hubble data across several instruments. This Database enables new science that can be done with a comprehensive sample of strong lenses. The operational goal of this proposal is to develop the process and the code to semi-automatically stage Hubble data of each system, create appropriate masks of the lensing objects and lensing features, and derive gravitational lens models, to provide a uniform and fairly comprehensive information set that is ingested into the Database. The scientific goal for this team is to use the properties of the ensemble of lenses to make a new study of the internal structure of lensing galaxies, and to identify new objects that show evidence of strong substructure lensing, for follow-up study. All data, scripts, masks, model setup files, and derived parameters, will be public, and free. The Database will be accessible online and through a sophisticated smartphone application, which will also be free.

  14. Questions to Ask Your Doctor

    MedlinePlus

    ... Scientific Peer Review Award Process Post-Award Grant Management AHRQ Grantee Profiles Getting Recognition for Your AHRQ-Funded Study Contracts Project Research Online Database (PROD) Searchable database of AHRQ ...

  15. Public understanding of climate change in the United States.

    PubMed

    Weber, Elke U; Stern, Paul C

    2011-01-01

    This article considers scientific and public understandings of climate change and addresses the following question: Why is it that while scientific evidence has accumulated to document global climate change and scientific opinion has solidified about its existence and causes, U.S. public opinion has not and has instead become more polarized? Our review supports a constructivist account of human judgment. Public understanding is affected by the inherent difficulty of understanding climate change, the mismatch between people's usual modes of understanding and the task, and, particularly in the United States, a continuing societal struggle to shape the frames and mental models people use to understand the phenomena. We conclude by discussing ways in which psychology can help to improve public understanding of climate change and link a better understanding to action. (PsycINFO Database Record (c) 2011 APA, all rights reserved).

  16. IRIS Toxicological Review of Methanol (Non-Cancer) ...

    EPA Pesticide Factsheets

    EPA is conducting a peer review and public comment of the scientific basis supporting the human health hazard and dose-response assessment of methanol (non-cancer) that, when finalized, will appear in the Integrated Risk Information System (IRIS) database.

  17. A database application for wilderness character monitoring

    Treesearch

    Ashley Adams; Peter Landres; Simon Kingston

    2012-01-01

    The National Park Service (NPS) Wilderness Stewardship Division, in collaboration with the Aldo Leopold Wilderness Research Institute and the NPS Inventory and Monitoring Program, developed a database application to facilitate tracking and trend reporting in wilderness character. The Wilderness Character Monitoring Database allows consistent, scientifically based...

  18. Monitoring outcomes with relational databases: does it improve quality of care?

    PubMed

    Clemmer, Terry P

    2004-12-01

    There are 3 key ingredients in improving quality of medical care: 1) using a scientific process of improvement, 2) executing the process at the lowest possible level in the organization, and 3) measuring the results of any change reliably. Relational databases, when used within these guidelines, are of great value in these efforts if they contain reliable information that is pertinent to the project and is used in a scientific process of quality improvement by a front-line team. Unfortunately, the data are frequently unreliable and/or not pertinent to the local process, and are used by persons at very high levels in the organization without a scientific process and without reliable measurement of the outcome. Under these circumstances the effectiveness of relational databases in improving care is marginal at best, frequently wasteful, and potentially harmful. This article explores examples of these concepts.

  19. Hosting and publishing astronomical data in SQL databases

    NASA Astrophysics Data System (ADS)

    Galkin, Anastasia; Klar, Jochen; Riebe, Kristin; Matokevic, Gal; Enke, Harry

    2017-04-01

    In astronomy, terabytes and petabytes of data are produced by ground instruments, satellite missions and simulations. At the Leibniz-Institute for Astrophysics Potsdam (AIP) we host and publish terabytes of cosmological simulation and observational data. The public archive at AIP has now reached 60 TB, continues to grow, and has helped produce numerous scientific papers. The web framework Daiquiri offers a dedicated web interface for each of the hosted scientific databases. Scientists around the world run SQL queries, which include specific astrophysical functions, and retrieve their desired data in reasonable time. Daiquiri supports the scientific projects by offering a number of administration tools, such as database and user management, contact messages to the staff, and support for organizing meetings and workshops. The web pages can be customized, and the WordPress integration helps the participating scientists maintain the documentation and the projects' news sections.

  20. [Public scientific knowledge distribution in health information, communication and information technology indexed in MEDLINE and LILACS databases].

    PubMed

    Packer, Abel Laerte; Tardelli, Adalberto Otranto; Castro, Regina Célia Figueiredo

    2007-01-01

    This study explores the distribution of international, regional and national scientific output in health information and communication, indexed in the MEDLINE and LILACS databases, between 1996 and 2005. A selection of articles was based on the hierarchical structure of Information Science in MeSH vocabulary. Four specific domains were determined: health information, medical informatics, scientific communications on healthcare and healthcare communications. The variables analyzed were: most-covered subjects and journals, author affiliation and publication countries and languages, in both databases. The Information Science category is represented in nearly 5% of MEDLINE and LILACS articles. The four domains under analysis showed a relative annual increase in MEDLINE. The Medical Informatics domain showed the highest number of records in MEDLINE, representing about half of all indexed articles. The importance of Information Science as a whole is more visible in publications from developed countries and the findings indicate the predominance of the United States, with significant growth in scientific output from China and South Korea and, to a lesser extent, Brazil.

  1. Domain fusion analysis by applying relational algebra to protein sequence and domain databases.

    PubMed

    Truong, Kevin; Ikura, Mitsuhiko

    2003-05-06

    Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method to perform domain fusion analysis that leverages these efforts will become increasingly powerful. This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From the scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypotheses. Results can be viewed at http://calcium.uhnres.utoronto.ca/pi. As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time.
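    The relational step of such an analysis can be sketched in standard SQL. Below is a minimal, hypothetical sketch over an in-memory SQLite table (the schema, organism names, and toy data are invented; the paper's actual SWISS-PROT+TrEMBL and Pfam tables differ): self-joins report pairs of distinct proteins whose separate domains co-occur on a single "fusion" protein in another organism.

    ```python
    import sqlite3

    # Toy schema and data (invented): one row per (protein, organism, domain)
    # prediction. The real analysis runs over far larger tables but queries
    # in the same relational style.
    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE hit (protein TEXT, organism TEXT, domain TEXT);
    -- A fusion ("Rosetta stone") protein carrying both domains:
    INSERT INTO hit VALUES ('P_fused', 'org1', 'A'), ('P_fused', 'org1', 'B');
    -- The same two domains on separate proteins in another organism:
    INSERT INTO hit VALUES ('P1', 'org2', 'A'), ('P2', 'org2', 'B');
    """)

    # Pairs of distinct proteins (same organism) whose domains co-occur on
    # one protein elsewhere -> predicted functional link.
    query = """
    SELECT DISTINCT h1.protein, h2.protein
    FROM hit h1
    JOIN hit h2 ON h1.organism = h2.organism AND h1.protein < h2.protein
    JOIN hit f1 ON f1.domain = h1.domain
    JOIN hit f2 ON f2.protein = f1.protein
               AND f2.domain = h2.domain
               AND f2.organism != h1.organism
    """
    rows = con.execute(query).fetchall()
    print(rows)  # [('P1', 'P2')]
    ```

    Because the join is plain SQL, re-running it after a database update refreshes the predictions, which is the dynamic-update property the abstract highlights.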

  2. Host range, host ecology, and distribution of more than 11800 fish parasite species

    USGS Publications Warehouse

    Strona, Giovanni; Palomares, Maria Lourdes D.; Bailly, Nicholas; Galli, Paolo; Lafferty, Kevin D.

    2013-01-01

    Our data set includes 38 008 fish parasite records (for Acanthocephala, Cestoda, Monogenea, Nematoda, Trematoda) compiled from the scientific literature, Internet databases, and museum collections paired to the corresponding host ecological, biogeographical, and phylogenetic traits (maximum length, growth rate, life span, age at maturity, trophic level, habitat preference, geographical range size, taxonomy). The data focus on host features, because specific parasite traits are not consistently available across records. For this reason, the data set is intended as a flexible resource able to extend the principles of ecological niche modeling to the host–parasite system, providing researchers with the data to model parasite niches based on their distribution in host species and the associated host features. In this sense, the database offers a framework for testing general ecological, biogeographical, and phylogenetic hypotheses based on the identification of hosts as parasite habitat. Potential applications of the data set are, for example, the investigation of species–area relationships or the taxonomic distribution of host-specificity. The provided host–parasite list is that currently used by Fish Parasite Ecology Software Tool (FishPEST, http://purl.oclc.org/fishpest), which is a website that allows researchers to model several aspects of the relationships between fish parasites and their hosts. The database is intended for researchers who wish to have more freedom to analyze the database than currently possible with FishPEST. However, for readers who have not seen FishPEST, we recommend using this as a starting point for interacting with the database.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolery, Thomas J.; Tayne, Andrew; Jove-Colon, Carlos F.

    Thermodynamic data are essential for understanding and evaluating geochemical processes, as in speciation-solubility calculations, reaction-path modeling, or reactive transport simulation. These data are required to evaluate both equilibrium states and the kinetic approach to such states (via the affinity term in rate laws). The development of thermodynamic databases for these purposes has a long history in geochemistry (e.g., Garrels and Christ, 1965; Helgeson et al., 1969; Helgeson et al., 1978; Johnson et al., 1992; Robie and Hemingway, 1995), paralleled by related and applicable work in the larger scientific community (e.g., Wagman et al., 1982, 1989; Cox et al., 1989; Barin and Platzki, 1995; Binneweis and Milke, 1999). The Yucca Mountain Project developed two qualified thermodynamic databases to model geochemical processes, including ones involving repository components such as spent fuel. The first of the two (BSC, 2007a) was for systems containing dilute aqueous solutions only; the other (BSC, 2007b) was for systems involving concentrated aqueous solutions, incorporating a model for such solutions based on Pitzer's (1991) equations. A 25°C-only database with similarities to the latter was also developed for WIPP (cf. Xiong, 2005). The YMP dilute-systems database is widely used in the geochemistry community for a variety of applications involving rock/water interactions. The purpose of the present task is to improve these databases for work on the Used Fuel Disposition Project and to maintain them in a manner that will support qualification for the development of future underground high-level nuclear waste disposal.
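    The speciation-solubility calculations these databases feed reduce, for a single mineral, to comparing an ion activity product (IAP) against the equilibrium solubility constant Ksp. A minimal sketch of the standard saturation-index relation (the calcite Ksp and the IAP value below are illustrative assumptions, not entries from the YMP databases):

    ```python
    import math

    def saturation_index(iap: float, ksp: float) -> float:
        """SI = log10(IAP / Ksp).  SI < 0: undersaturated (mineral tends
        to dissolve); SI = 0: equilibrium; SI > 0: supersaturated (mineral
        tends to precipitate)."""
        return math.log10(iap / ksp)

    # Calcite at 25 C, with a commonly cited log10(Ksp) near -8.48; the
    # ion activity product here is an invented example value.
    si = saturation_index(1.0e-8, 10.0 ** -8.48)
    print(round(si, 2))  # 0.48 -> mildly supersaturated
    ```

    The quality of SI values computed this way depends directly on the tabulated Ksp data, which is why database consistency and qualification matter for the applications named above.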

  4. Geoscience research databases for coastal Alabama ecosystem management

    USGS Publications Warehouse

    Hummell, Richard L.

    1995-01-01

    Effective management of complex coastal ecosystems necessitates access to scientific knowledge that can be acquired through a multidisciplinary approach involving Federal and State scientists, taking advantage of agency expertise and resources for the benefit of all participants working toward a set of common research and management goals. Cooperative geoscientific investigations have led toward building databases of fundamental scientific knowledge that can be utilized to manage coastal Alabama's natural resources and future development. These databases have been used to assess the occurrence and economic potential of hard mineral resources in the Alabama EEZ, and to support oil spill contingency planning and environmental analysis for coastal Alabama.

  5. Community Engagement to Drive Best Practices and Scientific Advancement

    NASA Astrophysics Data System (ADS)

    Goring, S. J.; Williams, J. W.; Uhen, M. D.; McClennen, M.; Jenkins, J.; Peters, S. E.; Grimm, E. C.; Anderson, M.; Fils, D.; Lehnert, K.; Carter, M.

    2016-12-01

    The development of databases, data models, and tools around Earth Science data requires constant feedback from user communities. Users must be engaged in all aspects of data upload and access, curation and governance, and, particularly, in highlighting future opportunities for scientific discovery using the data resources. A challenge for data repositories, many of which have evolved organically and independently, is moving from Systems of Record - data silos with only limited input and output options - to Systems of Engagement that respond to users and interact with other user communities and data repositories across the geosciences and beyond. The Cyber4Paleo Community Development Workshop (http://cyber4paleo.github.io), held June 20 & 21st in Boulder, CO, was organized by the EarthCube Research Coordination Network C4P (Cyber4Paleo) to bring together disciplinary researchers and principals of data collectives in an effort to drive scientific applications of the collective data resources. C4P focuses on coordinating data and user groups within the allied paleogeoscientific disciplines. Over the course of two days, researchers developed research projects that examined standards for 210Pb dating in the published literature, a framework for implementing a common geological time scale across resources, the continued development of underlying data resources, tools to integrate climate and occupation data from paleoecological resources, and the implementation of harmonizing standards across databases. Scientific outcomes of the workshop serve to underpin our understanding of the interrelations between paleoecological data and geophysical components of the Earth System at short and long time scales. These tools enhance our ability to understand connections between and among proxies across space and time; they serve as outreach tools for training and education; and, importantly, they help to define and improve best practices within the databases by engaging directly with user communities to fill unanticipated needs.

  6. Applications of Precipitation Feature Databases from GPM core and constellation Satellites

    NASA Astrophysics Data System (ADS)

    Liu, C.

    2017-12-01

    Using observations from the Global Precipitation Measurement (GPM) core and constellation satellites, global precipitation was quantitatively described from the perspective of precipitation systems and their properties. This presentation will introduce the development of precipitation feature databases and several scientific questions that have been tackled using these databases, including the topics of global snow precipitation, extremely intense convection, hail storms, extreme precipitation, and microphysical properties derived with dual-frequency radars at the top of convective cores. As more and more observations from constellation satellites become available, it is anticipated that the precipitation feature approach will help to address a large variety of scientific questions in the future. For anyone who is interested, all the current precipitation feature databases are freely open to the public at: http://atmos.tamucc.edu/trmm/.

  7. Database of potential sources for earthquakes larger than magnitude 6 in Northern California

    USGS Publications Warehouse

    ,

    1996-01-01

    The Northern California Earthquake Potential (NCEP) working group, composed of many contributors and reviewers in industry, academia and government, has pooled its collective expertise and knowledge of regional tectonics to identify potential sources of large earthquakes in northern California. We have created a map and database of active faults, both surficial and buried, that forms the basis for the northern California portion of the national map of probabilistic seismic hazard. The database contains 62 potential sources, including fault segments and areally distributed zones. The working group has integrated constraints from broadly based plate tectonic and VLBI models with local geologic slip rates, geodetic strain rate, and microseismicity. Our earthquake source database derives from a scientific consensus that accounts for conflict in the diverse data. Our preliminary product, as described in this report, brings to light many gaps in the data, including a need for better information on the proportion of deformation in fault systems that is aseismic.

  8. An X-Ray Analysis Database of Photoionization Cross Sections Including Variable Ionization

    NASA Technical Reports Server (NTRS)

    Wang, Ping; Cohen, David H.; MacFarlane, Joseph J.; Cassinelli, Joseph P.

    1997-01-01

    Results of research efforts in the following areas are discussed: review of the major theoretical and experimental data on subshell photoionization cross sections and ionization edges of atomic ions, to assess the accuracy of the data and to compile the most reliable of these data in our own database; detailed atomic physics calculations to complement the database for all ions of 17 cosmically abundant elements; reconciling the data from various sources and our own calculations; and fitting cross sections with functional approximations and incorporating these functions into a compact computer code. Also, efforts included adapting an ionization equilibrium code, tabulating results, incorporating them into the overall program, and testing the code (both the ionization equilibrium and opacity codes) against existing observational data. The background and scientific applications of this work are discussed. Atomic physics cross-section models and calculations are described. Calculation results are compared with available experimental data and other theoretical data. The functional approximations used for fitting cross sections are outlined, and applications of the database are discussed.
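    Where the record mentions "fitting cross sections with functional approximations", a minimal sketch is a power-law fit in log-log space: sigma(E) = sigma0 * (E/E0)^(-p) becomes a straight line after taking logarithms, so ordinary least squares recovers the exponent and normalization. All values below are invented, and real photoionization fits use richer functional forms than a single power law.

    ```python
    import math

    # Toy power-law cross section, "fitted" by least squares on
    # log-transformed data (threshold E0, sigma0, and p are invented).
    E0, sigma0_true, p_true = 0.5, 6.3, 3.0
    energies = [0.6, 0.8, 1.0, 1.5, 2.0, 3.0]
    sigmas = [sigma0_true * (E / E0) ** (-p_true) for E in energies]

    # log(sigma) = log(sigma0) - p * log(E / E0): a line with slope -p.
    xs = [math.log(E / E0) for E in energies]
    ys = [math.log(s) for s in sigmas]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    sigma0_fit = math.exp(ybar - slope * xbar)

    print(round(-slope, 6), round(sigma0_fit, 6))  # recovers 3.0 and 6.3
    ```

    On noisy tabulated data the same regression gives the best-fitting exponent rather than an exact recovery, which is the spirit of the compact functional approximations the record describes.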

  9. The Vocational Guidance Research Database: A Scientometric Approach

    ERIC Educational Resources Information Center

    Flores-Buils, Raquel; Gil-Beltran, Jose Manuel; Caballer-Miedes, Antonio; Martinez-Martinez, Miguel Angel

    2012-01-01

    The scientometric study of scientific output through publications in specialized journals cannot be undertaken exclusively with the databases available today. For this reason, the objective of this article is to introduce the "Base de Datos de Investigacion en Orientacion Vocacional" [Vocational Guidance Research Database], based on the…

  10. A quarter-long exercise that introduces general education students to neurophysiology and scientific writing.

    PubMed

    Krilowicz, B I; Henter, H; Kamhi-Stein, L

    1997-06-01

    Providing large numbers of general education students with an introduction to science is a challenge. To meet this challenge, a quarter-long neurophysiology project was developed for use in an introductory biology course. The primary goals of this multistep project were to introduce students to the scientific method, scientific writing, on-line scientific bibliographic databases, and the scientific literature, while improving their academic literacy skills. Students began by collecting data on their own circadian rhythms in autonomic, motor, and cognitive function, reliably demonstrating the predicted circadian changes in heart rate, eye-hand coordination, and adding speed. Students wrote a journal-style article using pooled class data. Students were prepared to write the paper by several methods that were designed to improve academic language skills, including a library training exercise, "modeling" of the writing assignment, and drafting of subsections of the paper. This multistep neurophysiology project represents a significant commitment of time by both students and instructors, but produces a valuable finished product and ideally gives introductory students a positive first experience with science.

  11. 25 Years of GenBank

    MedlinePlus

    ... this page please turn Javascript on. Unique DNA database has helped advance scientific discoveries worldwide Since its origin 25 years ago, the database of nucleic acid sequences known as GenBank has ...

  12. Roadblocks to Scientific Thinking in Educational Decision Making

    ERIC Educational Resources Information Center

    Yates, Gregory C. R.

    2008-01-01

    Principles of scientific data accumulation and evidence-based practices are vehicles of professional enhancement. In this article, the author argues that a scientific knowledge base exists descriptive of the relationship between teachers' activities and student learning. This database appears barely recognised however, for reasons including (a)…

  13. How Do You Like Your Science, Wet or Dry? How Two Lab Experiences Influence Student Understanding of Science Concepts and Perceptions of Authentic Scientific Practice.

    PubMed

    Munn, Maureen; Knuth, Randy; Van Horne, Katie; Shouse, Andrew W; Levias, Sheldon

    2017-01-01

    This study examines how two kinds of authentic research experiences related to smoking behavior-genotyping human DNA (wet lab) and using a database to test hypotheses about factors that affect smoking behavior (dry lab)-influence students' perceptions and understanding of scientific research and related science concepts. The study used pre and post surveys and a focus group protocol to compare students who conducted the research experiences in one of two sequences: genotyping before database and database before genotyping. Students rated the genotyping experiment to be more like real science than the database experiment, in spite of the fact that they associated more scientific tasks with the database experience than genotyping. Independent of the order of completing the labs, students showed gains in their understanding of science concepts after completion of the two experiences. There was little change in students' attitudes toward science pre to post, as measured by the Scientific Attitude Inventory II. However, on the basis of their responses during focus groups, students developed more sophisticated views about the practices and nature of science after they had completed both research experiences, independent of the order in which they experienced them. © 2017 M. Munn et al. CBE—Life Sciences Education © 2017 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  14. EXPOSURES AND INTERNAL DOSES OF ...

    EPA Pesticide Factsheets

    The National Center for Environmental Assessment (NCEA) has released a final report that presents and applies a method to estimate distributions of internal concentrations of trihalomethanes (THMs) in humans resulting from a residential drinking water exposure. The report presents simulations of oral, dermal and inhalation exposures and demonstrates the feasibility of linking the US EPA’s Information Collection Rule database with other databases on external exposure factors and physiologically based pharmacokinetic modeling to refine population-based estimates of exposure. Review Draft - by 2010, develop scientifically sound data and approaches to assess and manage risks to human health posed by exposure to specific regulated waterborne pathogens and chemicals, including those addressed by the Arsenic, M/DBP and Six-Year Review Rules.

  15. A Bayesian network approach to the database search problem in criminal proceedings

    PubMed Central

    2012-01-01

    Background The ‘database search problem’, that is, the strengthening of a case - in terms of probative value - against an individual who is found as a result of a database search, has been approached during the last two decades with substantial mathematical analyses, accompanied by lively debate and centrally opposing conclusions. This represents a challenging obstacle in teaching but also hinders a balanced and coherent discussion of the topic within the wider scientific and legal community. This paper revisits and tracks the associated mathematical analyses in terms of Bayesian networks. Their derivation and discussion for capturing probabilistic arguments that explain the database search problem are outlined in detail. The resulting Bayesian networks offer a distinct view on the main debated issues, along with further clarity. Methods As a general framework for representing and analyzing formal arguments in probabilistic reasoning about uncertain target propositions (that is, whether or not a given individual is the source of a crime stain), this paper relies on graphical probability models, in particular, Bayesian networks. This graphical probability modeling approach is used to capture, within a single model, a series of key variables, such as the number of individuals in a database, the size of the population of potential crime stain sources, and the rarity of the corresponding analytical characteristics in a relevant population. Results This paper demonstrates the feasibility of deriving Bayesian network structures for analyzing, representing, and tracking the database search problem. The output of the proposed models can be shown to agree with existing but exclusively formulaic approaches. Conclusions The proposed Bayesian networks allow one to capture and analyze the currently most well-supported but reputedly counter-intuitive and difficult solution to the database search problem in a way that goes beyond the traditional, purely formulaic expressions. 
The method’s graphical environment, along with its computational and probabilistic architectures, represents a rich package that offers analysts and discussants additional modes of interaction, concise representation, and coherent communication. PMID:22849390
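    The target proposition the networks reason about (whether a given individual is the source of the crime stain) reduces, in its simplest form, to one application of Bayes' rule. The toy sketch below captures only that core step; it deliberately omits the database-size, population-size, and profile-rarity variables that the paper's Bayesian networks model explicitly.

    ```python
    def posterior_source(prior: float, rmp: float) -> float:
        """Bayes' rule for the simplest one-stain, one-candidate case:
        posterior probability that a matching individual is the source,
        given a prior probability of being the source and a random-match
        probability (rmp) for a non-source. Assumes P(match | source) = 1."""
        numerator = prior
        denominator = prior + (1.0 - prior) * rmp
        return numerator / denominator

    # A modest prior of 1/1000 combined with a rare profile (rmp = 1e-6)
    # already yields a posterior close to 1:
    p = posterior_source(1 / 1000, 1e-6)
    print(round(p, 4))
    ```

    The debated "database search" corrections enter exactly where this sketch stops: how searching n profiles and excluding n - 1 of them should move the prior and the likelihoods, which is what the graphical models in the paper make explicit.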

  16. Enabling comparative modeling of closely related genomes: Example genus Brucella

    DOE PAGES

    Faria, José P.; Edirisinghe, Janaka N.; Davis, James J.; ...

    2014-03-08

    For many scientific applications, it is highly desirable to be able to compare metabolic models of closely related genomes. In this study, we attempt to raise awareness of the fact that taking annotated genomes from public repositories and using them for metabolic model reconstructions is far from trivial due to annotation inconsistencies. We propose a protocol for comparative analysis of metabolic models of closely related genomes, using fifteen strains of the genus Brucella, which contains pathogens of both humans and livestock. This study led to the identification and subsequent correction of inconsistent annotations in the SEED database, as well as the identification of 31 biochemical reactions that are common to Brucella but are not originally identified by automated metabolic reconstructions. We are currently implementing this protocol for improving automated annotations within the SEED database, and these improvements have been propagated into PATRIC, Model-SEED, KBase and RAST. This method is an enabling step for the future creation of consistent annotation systems and high-quality model reconstructions that will support the prediction of accurate phenotypes such as pathogenicity, media requirements, or type of respiration.
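    The heart of such a comparison can be sketched with plain set operations: functions annotated in every strain form a conserved core, while functions present in only some closely related strains are candidates for annotation review. A hedged sketch with invented strain and function names (not the SEED annotations used in the study):

    ```python
    # Invented per-strain annotation sets; the real protocol compares
    # automated reconstructions of fifteen Brucella strains.
    annotations = {
        "strain_A": {"urease", "flagellar_gene", "cytochrome_bd"},
        "strain_B": {"urease", "cytochrome_bd"},
        "strain_C": {"urease", "flagellar_gene", "cytochrome_bd"},
    }

    core = set.intersection(*annotations.values())   # present in every strain
    union = set.union(*annotations.values())         # present in any strain
    inconsistent = union - core                      # review candidates

    print(sorted(core))          # ['cytochrome_bd', 'urease']
    print(sorted(inconsistent))  # ['flagellar_gene']
    ```

    In closely related genomes, a function present in most strains but absent from one is more often an annotation inconsistency than a true biological difference, which is what makes this simple difference set a useful curation queue.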

  17. Enabling comparative modeling of closely related genomes: Example genus Brucella

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faria, José P.; Edirisinghe, Janaka N.; Davis, James J.

    For many scientific applications, it is highly desirable to be able to compare metabolic models of closely related genomes. In this study, we attempt to raise awareness of the fact that taking annotated genomes from public repositories and using them for metabolic model reconstructions is far from trivial due to annotation inconsistencies. We propose a protocol for comparative analysis of metabolic models of closely related genomes, using fifteen strains of the genus Brucella, which contains pathogens of both humans and livestock. This study led to the identification and subsequent correction of inconsistent annotations in the SEED database, as well as the identification of 31 biochemical reactions that are common to Brucella but are not originally identified by automated metabolic reconstructions. We are currently implementing this protocol for improving automated annotations within the SEED database, and these improvements have been propagated into PATRIC, Model-SEED, KBase and RAST. This method is an enabling step for the future creation of consistent annotation systems and high-quality model reconstructions that will support the prediction of accurate phenotypes such as pathogenicity, media requirements, or type of respiration.

  18. Mouse Tumor Biology (MTB): a database of mouse models for human cancer.

    PubMed

    Bult, Carol J; Krupke, Debra M; Begley, Dale A; Richardson, Joel E; Neuhauser, Steven B; Sundberg, John P; Eppig, Janan T

    2015-01-01

    The Mouse Tumor Biology (MTB; http://tumor.informatics.jax.org) database is a unique online compendium of mouse models for human cancer. MTB provides online access to expertly curated information on diverse mouse models for human cancer and interfaces for searching and visualizing data associated with these models. The information in MTB is designed to facilitate the selection of strains for cancer research and is a platform for mining data on tumor development and patterns of metastases. MTB curators acquire data through manual curation of peer-reviewed scientific literature and from direct submissions by researchers. Data in MTB are also obtained from other bioinformatics resources including PathBase, the Gene Expression Omnibus and ArrayExpress. Recent enhancements to MTB improve the association between mouse models and human genes commonly mutated in a variety of cancers as identified in large-scale cancer genomics studies, provide new interfaces for exploring regions of the mouse genome associated with cancer phenotypes and incorporate data and information related to Patient-Derived Xenograft models of human cancers. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. Public Regulatory Databases as a Source of Insight for Neuromodulation Devices Stimulation Parameters

    PubMed Central

    Kumsa, Doe; Steinke, G. Karl; Molnar, Gregory F.; Hudak, Eric M.; Montague, Fred W.; Kelley, Shawn C.; Untereker, Darrel F.; Shi, Alan; Hahn, Benjamin P.; Condit, Chris; Lee, Hyowon; Bardot, Dawn; Centeno, Jose A.; Krauthamer, Victor; Takmakov, Pavel A.

    2017-01-01

    Objective The Shannon model is often used to define an expected boundary between non-damaging and damaging modes of electrical neurostimulation. Numerous preclinical studies have been performed by manufacturers of neuromodulation devices using different animal models and a broad range of stimulation parameters while developing devices for clinical use. These studies are mostly absent from peer-reviewed literature, which may lead to this information being overlooked by the scientific community. We aimed to locate summaries of these studies accessible via public regulatory databases and to add them to a body of knowledge available to a broad scientific community. Methods We employed web search terms describing device type, intended use, neural target, therapeutic application, company name, and submission number to identify summaries for premarket approval (PMA) devices and 510(k) devices. We filtered these records to a subset of entries that have sufficient technical information relevant to safety of neurostimulation. Results We identified 13 product codes for 8 types of neuromodulation devices. These led us to devices that have 22 PMAs and 154 510(k)s and six transcripts of public panel meetings. We found one PMA for a brain, peripheral nerve, and spinal cord stimulator and five 510(k) spinal cord stimulators with enough information to plot in Shannon coordinates of charge and charge density per phase. Conclusions Analysis of relevant entries from public regulatory databases reveals use of pig, sheep, monkey, dog, and goat animal models with deep brain, peripheral nerve, muscle and spinal cord electrode placement with a variety of stimulation durations (hours to years); frequencies (10–10,000 Hz) and magnitudes (Shannon k from below zero to 4.47). 
Data from located entries indicate that a feline cortical model that employs acute stimulation might have limitations for assessing tissue damage in diverse anatomical locations, particularly for peripheral nerve and spinal cord stimulation. PMID:28782181
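The "Shannon coordinates" referred to in this record come from the Shannon (1992) relation k = log10(Q) + log10(D), where Q is charge per phase (µC) and D is charge density per phase (µC/cm²); the k values quoted above ("below zero to 4.47") are points in that plane. A minimal sketch (function names and the threshold default are illustrative; commonly cited boundary values of k range from roughly 1.5 to 2.0):

```python
import math

def shannon_k(charge_uC, charge_density_uC_cm2):
    """Shannon (1992) parameter k = log10(Q) + log10(D),
    with Q in uC/phase and D in uC/cm^2/phase."""
    return math.log10(charge_uC) + math.log10(charge_density_uC_cm2)

def is_above_boundary(charge_uC, charge_density_uC_cm2, k_threshold=1.85):
    """True if a stimulation point lies above the chosen Shannon line."""
    return shannon_k(charge_uC, charge_density_uC_cm2) > k_threshold
```

Plotting each regulatory entry's (Q, D) pair in this plane is what the abstract means by "Shannon coordinates of charge and charge density per phase".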

  20. Public Regulatory Databases as a Source of Insight for Neuromodulation Devices Stimulation Parameters.

    PubMed

    Kumsa, Doe; Steinke, G Karl; Molnar, Gregory F; Hudak, Eric M; Montague, Fred W; Kelley, Shawn C; Untereker, Darrel F; Shi, Alan; Hahn, Benjamin P; Condit, Chris; Lee, Hyowon; Bardot, Dawn; Centeno, Jose A; Krauthamer, Victor; Takmakov, Pavel A

    2018-02-01

The Shannon model is often used to define an expected boundary between non-damaging and damaging modes of electrical neurostimulation. Numerous preclinical studies have been performed by manufacturers of neuromodulation devices using different animal models and a broad range of stimulation parameters while developing devices for clinical use. These studies are mostly absent from peer-reviewed literature, which may lead to this information being overlooked by the scientific community. We aimed to locate summaries of these studies accessible via public regulatory databases and to add them to a body of knowledge available to a broad scientific community. We employed web search terms describing device type, intended use, neural target, therapeutic application, company name, and submission number to identify summaries for premarket approval (PMA) devices and 510(k) devices. We filtered these records to a subset of entries that have sufficient technical information relevant to safety of neurostimulation. We identified 13 product codes for 8 types of neuromodulation devices. These led us to devices that have 22 PMAs and 154 510(k)s and six transcripts of public panel meetings. We found one PMA for a brain, peripheral nerve, and spinal cord stimulator and five 510(k) spinal cord stimulators with enough information to plot in Shannon coordinates of charge and charge density per phase. Analysis of relevant entries from public regulatory databases reveals use of pig, sheep, monkey, dog, and goat animal models with deep brain, peripheral nerve, muscle and spinal cord electrode placement with a variety of stimulation durations (hours to years); frequencies (10-10,000 Hz) and magnitudes (Shannon k from below zero to 4.47). Data from located entries indicate that a feline cortical model that employs acute stimulation might have limitations for assessing tissue damage in diverse anatomical locations, particularly for peripheral nerve and spinal cord stimulation.
© 2017 International Neuromodulation Society.

  1. NASA's experience in the international exchange of scientific and technical information in the aerospace field

    NASA Technical Reports Server (NTRS)

    Thibideau, Philip A.

    1990-01-01

    The early NASA international scientific and technical information exchange arrangements were usually detailed in correspondence with the librarians of the institutions involved. While this type of exchange grew to include some 200 organizations in 43 countries, NASA's main focus shifted to the relationship with the European Space Agency (ESA), which began in 1964. The NASA/ESA Tripartite Exchange Program provides more than 4000 technical reports from the NASA-produced Aerospace Database. The experience in the evolving cooperation between NASA and ESA has established the model for more recent exchange agreements with Israel, Australia, and Canada. The results of these agreements are made available to participating European organizations through the NASA File.

  2. Transitioning Newborns from NICU to Home: Family Information Packet

    MedlinePlus


  3. Next Steps After Your Diagnosis: Finding Information and Support

    MedlinePlus


  4. Blood Thinner Pills: Your Guide to Using Them Safely

    MedlinePlus


  5. Question Builder: Be Prepared for Your Next Medical Appointment

    MedlinePlus


  6. Database Software Selection for the Egyptian National STI Network.

    ERIC Educational Resources Information Center

    Slamecka, Vladimir

The evaluation and selection of information/data management system software for the Egyptian National Scientific and Technical Information (STI) Network are described. An overview of the state-of-the-art of database technology elaborates on the differences between information retrieval and database management systems (DBMS). The desirable characteristics of…

  7. [New bibliometric indicators for the scientific literature: an evolving panorama].

    PubMed

    La Torre, G; Sciarra, I; Chiappetta, M; Monteduro, A

    2017-01-01

Bibliometrics is a science which evaluates the impact of the scientific work of a journal or of an author, using mathematical and statistical tools. The Impact Factor (IF) was the first bibliometric parameter created, and many others have since been conceived to go beyond its limits. Currently bibliometric indexes are used for academic purposes, among them evaluating the eligibility of a researcher to compete for the National Scientific Qualification, which grants access to competitive exams to become a professor. The aim of this study is to identify the most relevant bibliometric indexes and to summarize their characteristics. A review of bibliometric indexes was conducted, starting from the classic ones and ending with the most recent. The two most used bibliometric indexes are the IF, which measures the scientific impact of a periodical and is based on the Web of Science citation database, and the h-index, which measures the impact of the scientific work of a researcher, based on the Scopus database. Other indexes have been created more recently, such as the SCImago Journal Rank indicator (SJR), the Source Normalized Impact per Paper (SNIP) and the CiteScore index. They are all based on the Scopus database and evaluate, in different ways, the citational impact of a periodical. The i10-index, instead, is provided by the Google Scholar database and evaluates the impact of the scientific production of a researcher. Recently two software tools have been introduced: the first, Publish or Perish, evaluates the scientific work of a researcher through the assessment of many indexes; the second, Altmetric, measures the use of academic papers on the Web by means of metrics alternative to traditional citation counts. Each analyzed index shows advantages but also critical issues. 
Therefore the combined use of more than one index, citational and not, should be preferred, in order to correctly evaluate the work of researchers and ultimately improve the quality and development of scientific research.
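The h-index mentioned above is simple to compute from a researcher's per-paper citation counts: h is the largest number such that h of the papers have at least h citations each. A minimal sketch:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h
```

For example, a researcher with citation counts [10, 8, 5, 4, 3] has h = 4: four papers have at least 4 citations, but not five papers with at least 5.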

  8. Be More Involved in Your Health Care: Tips for Patients

    MedlinePlus


  9. Building health behavior models to guide the development of just-in-time adaptive interventions: A pragmatic framework.

    PubMed

    Nahum-Shani, Inbal; Hekler, Eric B; Spruijt-Metz, Donna

    2015-12-01

Advances in wireless devices and mobile technology offer many opportunities for delivering just-in-time adaptive interventions (JITAIs): suites of interventions that adapt over time to an individual's changing status and circumstances, with the goal of addressing the individual's need for support whenever it arises. A major challenge confronting behavioral scientists aiming to develop a JITAI concerns the selection and integration of existing empirical, theoretical and practical evidence into a scientific model that can inform the construction of a JITAI and help identify scientific gaps. The purpose of this paper is to establish a pragmatic framework that can be used to organize existing evidence into a useful model for JITAI construction. This framework involves clarifying the conceptual purpose of a JITAI, namely, the provision of just-in-time support via adaptation, as well as describing the components of a JITAI and articulating a list of concrete questions to guide the establishment of a useful model for JITAI construction. The proposed framework includes an organizing scheme for translating the relatively static scientific models underlying many health behavior interventions into a more dynamic model that better incorporates the element of time. This framework will help to guide the next generation of empirical work to support the creation of effective JITAIs. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  10. Outside Mainstream Electronic Databases: Review of Studies Conducted in the USSR and Post-Soviet Countries on Electric Current-Assisted Consolidation of Powder Materials

    PubMed Central

    Olevsky, Eugene A.; Aleksandrova, Elena V.; Ilyina, Alexandra M.; Dudina, Dina V.; Novoselov, Alexander N.; Pelve, Kirill Y.; Grigoryev, Eugene G.

    2013-01-01

This paper reviews research articles published in the former USSR and post-Soviet countries on the consolidation of powder materials using electric current that passes through the powder sample and/or a conductive die-punch set-up. Having been published in Russian, many of the reviewed papers are not included in the mainstream electronic databases of scientific articles and thus are not known to the scientific community. The present review is aimed at filling this information gap. In the paper, the electric current-assisted sintering techniques based on high- and low-voltage approaches are presented. The main results of the theoretical modeling of the processes of electromagnetic field-assisted consolidation of powder materials are discussed. Sintering experiments and related equipment are described and the major experimental results are analyzed. Sintering conditions required to achieve the desired properties of the sintered materials are provided for selected material systems. Tooling materials used in the electric current-assisted consolidation set-ups are also described. PMID:28788337

  11. The utilization of neural nets in populating an object-oriented database

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Hill, Scott E.; Cromp, Robert F.

    1989-01-01

Existing NASA supported scientific databases are usually developed, managed and populated in a tedious, error prone and self-limiting way in terms of what can be described in a relational Database Management System (DBMS). The next generation of Earth remote sensing platforms (i.e., the Earth Observing System (EOS)) will be capable of generating data at a rate of over 300 megabits per second from a suite of instruments designed for different applications. What is needed is an innovative approach that creates object-oriented databases that segment, characterize, catalog and are manageable in a domain-specific context and whose contents are available interactively and in near-real-time to the user community. Described here is work in progress that utilizes an artificial neural net approach to characterize satellite imagery of undefined objects into high-level data objects. The characterized data is then dynamically allocated to an object-oriented database where it can be reviewed and assessed by a user. The definition, development, and evolution of the overall data system model are steps in the creation of an application-driven knowledge-based scientific information system.
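The pipeline described above, characterizing imagery into high-level objects and then allocating them to an object-oriented database, can be caricatured in a few lines. Everything here is illustrative: a nearest-centroid classifier stands in for the neural net, and an in-memory catalog stands in for the object-oriented DBMS.

```python
from dataclasses import dataclass

@dataclass
class DataObject:
    """A catalogued high-level object derived from raw imagery."""
    object_id: int
    label: str
    features: tuple

class ObjectCatalog:
    """Toy stand-in for an object-oriented database, queried by class label."""
    def __init__(self):
        self._store = {}
        self._next_id = 0

    def insert(self, label, features):
        obj = DataObject(self._next_id, label, tuple(features))
        self._store.setdefault(label, []).append(obj)
        self._next_id += 1
        return obj

    def query(self, label):
        return list(self._store.get(label, []))

def classify(features, centroids):
    """Nearest-centroid classifier standing in for the neural net."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist(features, centroids[lbl]))

def populate(catalog, tiles, centroids):
    """Characterize each image tile, then allocate it to the catalog."""
    for feats in tiles:
        catalog.insert(classify(feats, centroids), feats)
```

The point of the sketch is the division of labor: the classifier assigns a domain-specific class, and the database organizes and serves objects by that class rather than by raw pixels.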

  12. Toward server-side, high performance climate change data analytics in the Earth System Grid Federation (ESGF) eco-system

    NASA Astrophysics Data System (ADS)

    Fiore, Sandro; Williams, Dean; Aloisio, Giovanni

    2016-04-01

In many scientific domains such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and eco-systems where petabytes (PB) of data can be available and data can be distributed and/or replicated (e.g., the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5 PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5)). Most of the tools currently available for scientific data analysis in the climate domain fail at large scale since they: (1) are desktop based and need the data locally; (2) are sequential, so do not benefit from available multicore/parallel machines; (3) do not provide declarative languages to express scientific data analysis tasks; (4) are domain-specific, which ties their adoption to a specific domain; and (5) do not provide workflow support to enable the definition of complex "experiments". The Ophidia project aims at facing most of the challenges highlighted above by providing a big data analytics framework for eScience. Ophidia provides declarative, server-side, and parallel data analysis, jointly with an internal storage model able to efficiently deal with multidimensional data and a hierarchical data organization to manage large data volumes ("datacubes"). The project relies on a strong background of high performance database management and OLAP systems to manage large scientific data sets. It also provides native workflow management support, to define processing chains and workflows with tens to hundreds of data analytics operators to build real scientific use cases. 
With regard to interoperability aspects, the talk will present the contribution provided both to the RDA Working Group on Array Databases, and the Earth System Grid Federation (ESGF) Compute Working Team. Also highlighted will be the results of large scale climate model intercomparison data analysis experiments, for example: (1) defined in the context of the EU H2020 INDIGO-DataCloud project; (2) implemented in a real geographically distributed environment involving CMCC (Italy) and LLNL (US) sites; (3) exploiting Ophidia as server-side, parallel analytics engine; and (4) applied on real CMIP5 data sets available through ESGF.
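A server-side datacube operator of the kind Ophidia provides can be illustrated by a reduction over the time axis of a (time, lat, lon) cube; this sketch uses plain Python lists and invented names, not Ophidia's actual operator API:

```python
def reduce_time_mean(datacube):
    """Average a (time, lat, lon) cube over its time axis, yielding a
    (lat, lon) grid. Stands in for a server-side datacube 'reduce'
    operator: the client states *what* to compute, not how to loop."""
    nt = len(datacube)
    nlat = len(datacube[0])
    nlon = len(datacube[0][0])
    return [[sum(datacube[t][i][j] for t in range(nt)) / nt
             for j in range(nlon)]
            for i in range(nlat)]
```

In a real deployment the reduction runs in parallel next to the data; only the reduced (lat, lon) grid travels back to the user, which is the main point of the server-side approach.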

  13. Inter-University Upper Atmosphere Global Observation Network (IUGONET) Metadata Database and Its Interoperability

    NASA Astrophysics Data System (ADS)

    Yatagai, A. I.; Iyemori, T.; Ritschel, B.; Koyama, Y.; Hori, T.; Abe, S.; Tanaka, Y.; Shinbori, A.; Umemura, N.; Sato, Y.; Yagi, M.; Ueno, S.; Hashiguchi, N. O.; Kaneda, N.; Belehaki, A.; Hapgood, M. A.

    2013-12-01

The IUGONET is a Japanese program to build a metadata database for ground-based observations of the upper atmosphere [1]. The project began in 2009 with five Japanese institutions which archive data observed by radars, magnetometers, photometers, radio telescopes and helioscopes, and so on, at various altitudes from the Earth's surface to the Sun. Systems have been developed to allow searching of the above-described metadata. We have been updating the system and adding new and updated metadata. The IUGONET development team adopted the SPASE metadata model [2] to describe the upper atmosphere data. This model is used as the common metadata format by the virtual observatories for solar-terrestrial physics. It includes metadata referring to each data file (called a 'Granule'), enabling a search for data files as well as data sets. Further details are described in [2] and [3]. Currently, three additional Japanese institutions are being incorporated in IUGONET. Furthermore, metadata of observations of the troposphere, taken at the observatories of the middle and upper atmosphere radar at Shigaraki and the Meteor radar in Indonesia, have been incorporated. These additions will contribute to efficient interdisciplinary scientific research. In the beginning of 2013, the registration of the 'Observatory' and 'Instrument' metadata was completed, which makes it easy to get an overview of the metadata database. The number of registered metadata entries, as of the end of July, totalled 8.8 million, including 793 observatories and 878 instruments. It is important to promote interoperability and/or metadata exchange between the database development groups. A memorandum of agreement has been signed with the European Near-Earth Space Data Infrastructure for e-Science (ESPAS) project, which has similar objectives to IUGONET with regard to a framework for formal collaboration. 
Furthermore, observations by satellites and the International Space Station are being incorporated with a view for making/linking metadata databases. The development of effective data systems will contribute to the progress of scientific research on solar terrestrial physics, climate and the geophysical environment. Any kind of cooperation, metadata input and feedback, especially for linkage of the databases, is welcomed. References 1. Hayashi, H. et al., Inter-university Upper Atmosphere Global Observation Network (IUGONET), Data Sci. J., 12, WDS179-184, 2013. 2. King, T. et al., SPASE 2.0: A standard data model for space physics. Earth Sci. Inform. 3, 67-73, 2010, doi:10.1007/s12145-010-0053-4. 3. Hori, T., et al., Development of IUGONET metadata format and metadata management system. J. Space Sci. Info. Jpn., 105-111, 2012. (in Japanese)
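The 'Granule' metadata described above is what enables file-level search across the federated archives. A minimal sketch of such a search, with an illustrative record type loosely modeled on SPASE concepts (the field names here are assumptions, not the SPASE schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Granule:
    """Minimal stand-in for a SPASE 'Granule' record (fields illustrative)."""
    observatory: str
    instrument: str
    start: date
    stop: date
    url: str

def search(granules, observatory=None, instrument=None, day=None):
    """Filter granule metadata the way a cross-archive portal might:
    every given criterion must match; omitted criteria match anything."""
    hits = []
    for g in granules:
        if observatory and g.observatory != observatory:
            continue
        if instrument and g.instrument != instrument:
            continue
        if day and not (g.start <= day <= g.stop):
            continue
        hits.append(g)
    return hits
```

Because each granule carries its own time coverage and source, a single query can locate individual data files across institutions without touching the files themselves.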

  14. Freva - Freie Univ Evaluation System Framework for Scientific Infrastructures in Earth System Modeling

    NASA Astrophysics Data System (ADS)

    Kadow, Christopher; Illing, Sebastian; Kunst, Oliver; Schartner, Thomas; Kirchner, Ingo; Rust, Henning W.; Cubasch, Ulrich; Ulbrich, Uwe

    2016-04-01

The Freie Univ Evaluation System Framework (Freva - freva.met.fu-berlin.de) is a software infrastructure for standardized data and tool solutions in Earth system science. Freva runs on high performance computers to handle customizable evaluation systems of research projects, institutes or universities. It combines different software technologies into one common hybrid infrastructure, including all features present in the shell and web environment. The database interface satisfies the international standards provided by the Earth System Grid Federation (ESGF). Freva indexes different data projects into one common search environment by storing the metadata information of the self-describing model, reanalysis and observational data sets in a database. This metadata system, with its advanced but easy-to-handle search tool, supports users, developers and their plugins in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitation of the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. The integrated web-shell (shellinabox) adds a degree of freedom in the choice of the working environment and can be used as a gate to the research project's HPC. Plugins are able to integrate their results, e.g. post-processed data, into the user's database. This allows, for example, post-processing plugins to feed statistical analysis plugins, which fosters an active exchange between plugin developers of a research project. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a database. 
Configurations and results of the tools can be shared among scientists via shell or web system. Therefore, plugged-in tools benefit from transparency and reproducibility. Furthermore, if configurations match while starting an evaluation plugin, the system suggests reusing results already produced by other users, saving CPU hours, I/O, disk space and time. The efficient interaction between different technologies improves the Earth system modeling science framed by Freva.
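The result-reuse behavior described above, suggesting existing results when plugin configurations match, is essentially a cache keyed by a canonical hash of the configuration. A minimal sketch under that assumption; class and method names are invented here, not Freva's API:

```python
import hashlib
import json

class ResultCache:
    """Suggest previously computed plugin results when configurations match.
    Keys are hashes of a canonical JSON form, so dict key order is irrelevant."""
    def __init__(self):
        self._history = {}

    @staticmethod
    def _key(plugin, config):
        payload = json.dumps({"plugin": plugin, "config": config}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def record(self, plugin, config, result_path):
        """Store the result path of a finished analysis run."""
        self._history[self._key(plugin, config)] = result_path

    def suggest(self, plugin, config):
        """Return an earlier result path if an identical run exists, else None."""
        return self._history.get(self._key(plugin, config))
```

Because the key is derived from the full configuration, any changed parameter produces a different hash and triggers a fresh run, while an identical configuration is offered the stored result.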

  15. Freva - Freie Univ Evaluation System Framework for Scientific HPC Infrastructures in Earth System Modeling

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Schartner, T.; Grieger, J.; Kirchner, I.; Rust, H.; Cubasch, U.; Ulbrich, U.

    2017-12-01

The Freie Univ Evaluation System Framework (Freva - freva.met.fu-berlin.de) is a software infrastructure for standardized data and tool solutions in Earth system science (e.g. www-miklip.dkrz.de, cmip-eval.dkrz.de). Freva runs on high performance computers to handle customizable evaluation systems of research projects, institutes or universities. It combines different software technologies into one common hybrid infrastructure, including all features present in the shell and web environment. The database interface satisfies the international standards provided by the Earth System Grid Federation (ESGF). Freva indexes different data projects into one common search environment by storing the metadata information of the self-describing model, reanalysis and observational data sets in a database. This metadata system, with its advanced but easy-to-handle search tool, supports users, developers and their plugins in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. The integrated web-shell (shellinabox) adds a degree of freedom in the choice of the working environment and can be used as a gate to the research project's HPC. Plugins are able to integrate their results, e.g. post-processed data, into the user's database. This allows, for example, post-processing plugins to feed statistical analysis plugins, which fosters an active exchange between plugin developers of a research project. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a database. Configurations and results of the tools can be shared among scientists via shell or web system. 
Furthermore, if configurations match while starting an evaluation plugin, the system suggests reusing results already produced by other users, saving CPU hours, I/O, disk space and time. The efficient interaction between different technologies improves the Earth system modeling science framed by Freva.

  16. [Toward a model of communications in public health in Latin America and the Caribbean].

    PubMed

    Macías-Chapula, César A

    2005-12-01

    So far, there have been no bibliometric or scientometric studies that make it possible to examine, with quantitative, retrospective, and comprehensive criteria, the scientific output on public health in Latin America and the Caribbean (LAC). Further, the weakness of the existing information systems makes it impossible to examine the relevance, quality, and impact of this scientific output, with a view to evaluating it in terms of societal needs and existing patterns of scientific communication. This article presents the results of a bibliographic analysis of the scientific output in the area of public health in Latin America and the Caribbean. The ultimate goal of the analysis is to build a model of scientific communication in this field, to help researchers, managers, and others working in the area of public health to make decisions and choose actions to take. We conducted a literature review in order to identify the distribution of publications on public health that were produced by LAC researchers and published in each of the LAC countries from 1980 through 2002. The review used the Literatura Latino-Americana e do Caribe em Saúde Pública (LILACS-SP) (Latin American and Caribbean Literature on Public Health) bibliographic database. That database is operated by the Latin American and Caribbean Center on Health Sciences Information (BIREME), which is in São Paulo, Brazil. We processed the LILACS-SP data using two software packages, Microsoft Excel and Bibexcel, to obtain indicators of the scientific output, the type of document, the language, the number of authors for each publication, the thematic content, and the participating institutions. For the 1980-2002 period, there were 97,605 publications registered, from a total of 37 LAC countries. 
For the analysis presented in this article, we limited the sample to the 8 countries in Latin America and the Caribbean that had at least 3,000 documents each registered in the LILACS-SP database over the 1980-2002 study period. In descending order of the number of publications registered, the 8 nations were: Argentina, Brazil, Chile, Colombia, Cuba, Mexico, Peru, and Venezuela. Those 8 countries were responsible for 83,054 publications (85.10% of the total of 97,605 registered documents produced by the 37 LAC countries). Of those 83,054 publications from the 8 countries, 56,253 of them (67.73%) were articles published in scientific journals and 24,488 were monographs (29.48%). The proportion of works produced by two or more coauthors was relatively high (56.48%). The 56,253 articles appeared in a total of 929 different journals. Of the 929 journals, 91 of them published at least 150 articles over the study period. In descending order, the LAC journals with the largest number of articles on public health were: Revista de Saúde Pública (Brazil); Cadernos de Saúde Pública (Brazil); Revista Médica de Chile; Archivos Latinoamericanos de Nutrición (Venezuela); and Salud Pública de México. The 91 journals that published at least 150 articles represented 29 different specialties. The most common of the specialties for the 91 journals were general medicine (18 journals) and pediatrics (10 journals). The populations that the publications primarily dealt with were, in descending order, human beings in general, females, males, and adults; a relatively small number of publications dealt with pregnant women and middle-aged or elderly persons. The topics most often covered in the publications were risk factors, health policy, and primary health care, as well as family doctors in the case of Cuba. 
This research produced a preliminary model of communications in public health in LAC countries that will hopefully help lay the groundwork for further research to develop a model of scientific communication in LAC nations.

  17. [Bibliometric and thematic analysis of the scientific literature about omega-3 fatty acids indexed in international databases on health sciences].

    PubMed

    Sanz-Valero, J; Gil, Á; Wanden-Berghe, C; Martínez de Victoria, E

    2012-11-01

To evaluate by bibliometric and thematic analysis the scientific literature on omega-3 fatty acids indexed in international databases on health sciences and to establish a comparative base for future analysis. Searches were conducted with the descriptor (MeSH, as Major Topic) "Fatty Acids, Omega-3" from the first date available until December 31, 2010. Databases consulted: MEDLINE (via PubMed), EMBASE, ISI Web of Knowledge, CINAHL and LILACS. The most common type of document was original articles. Obsolescence was set at 5 years. In the geographical distribution of first authors, the USA predominated, and the articles were written predominantly in English. The study population was 90.98% (95% CI 89.25 to 92.71) adult humans. The documents were classified into 59 subject areas, and the most studied topic associated with omega-3, at 16.24% (95% CI 14.4 to 18.04) of the documents, was cardiovascular disease. This study indicates that the scientific literature on omega-3 fatty acids is a fully active area of knowledge. Anglo-Saxon institutions dominate the scientific production, which is mainly oriented to the study of cardiovascular disease.

  18. Network-based statistical comparison of citation topology of bibliographic databases

    PubMed Central

    Šubelj, Lovro; Fiala, Dalibor; Bajec, Marko

    2014-01-01

Modern bibliographic databases provide the basis for scientific research and its evaluation. While their content and structure differ substantially, only informal notions of their reliability exist. Here we compare the topological consistency of citation networks extracted from six popular bibliographic databases including Web of Science, CiteSeer and arXiv.org. The networks are assessed through a rich set of local and global graph statistics. We first reveal statistically significant inconsistencies between some of the databases with respect to individual statistics. For example, the introduced field bow-tie decomposition of the DBLP Computer Science Bibliography substantially differs from the rest due to the coverage of the database, while the citation information within arXiv.org is the most exhaustive. Finally, we compare the databases over multiple graph statistics using the critical difference diagram. The citation topology of the DBLP Computer Science Bibliography is the least consistent with the rest, while, not surprisingly, Web of Science is significantly more reliable from the perspective of consistency. This work can serve either as a reference for scholars in bibliometrics and scientometrics or as a scientific evaluation guideline for governments and research agencies. PMID:25263231
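The bow-tie decomposition used in this comparison splits a citation digraph into a largest strongly connected CORE, an IN set of nodes that can reach the core, and an OUT set reachable from the core. A brute-force sketch (quadratic reachability, suitable only for small graphs):

```python
from collections import deque

def _reachable(graph, start):
    """All nodes reachable from start by BFS (including start)."""
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def bow_tie(graph):
    """Split a citation digraph {node: [cited, ...]} into the classic
    bow-tie parts: largest strongly connected CORE, IN (reaches the
    core) and OUT (reachable from the core)."""
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    reach = {u: _reachable(graph, u) for u in nodes}
    # Strongly connected component of u = nodes mutually reachable with u.
    scc = {u: frozenset(v for v in reach[u] if u in reach[v]) for u in nodes}
    core = max(scc.values(), key=len)
    in_set = {u for u in nodes - core if reach[u] & core}
    out_set = set().union(*(reach[c] for c in core)) - core
    return set(core), in_set, out_set
```

The relative sizes of CORE, IN and OUT are exactly the kind of global statistic on which, per the abstract, DBLP diverges from the other databases.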

  19. The development of a prototype intelligent user interface subsystem for NASA's scientific database systems

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Roelofs, Larry H.; Short, Nicholas M., Jr.

    1987-01-01

The National Space Science Data Center (NSSDC) has initiated an Intelligent Data Management (IDM) research effort which has as one of its components the development of an Intelligent User Interface (IUI). The intent of the latter is to develop a friendly and intelligent user interface service that is based on expert systems and natural language processing technologies. The purpose is to support the large number of potential scientific and engineering users presently having need of space and land related research and technical data but who have little or no experience in query languages or understanding of the information content or architecture of the databases involved. This technical memorandum presents a prototype Intelligent User Interface Subsystem (IUIS) using the Crustal Dynamics Project Database as a test bed for the implementation of CRUDDES (Crustal Dynamics Expert System). The knowledge base has more than 200 rules and represents a single application view and the architectural view. Operational performance using CRUDDES has allowed non-database users to obtain useful information from the database previously accessible only to an expert database user or the database designer.
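A rule-based interface of the kind described can be caricatured as a set of trigger-word rules mapping a user's question to a canned database query; the rules, trigger words and table names below are invented for illustration and are not taken from CRUDDES:

```python
# Hypothetical rules: each pairs a set of trigger words with a query template.
RULES = [
    ({"station", "location"}, "SELECT site_id, lat, lon FROM stations"),
    ({"baseline", "length"}, "SELECT pair, length_m FROM baselines"),
]

def translate(question):
    """Return the first query whose trigger words all appear in the question,
    or None when no rule fires."""
    words = set(question.lower().split())
    for triggers, query in RULES:
        if triggers <= words:
            return query
    return None
```

A production system like the one in the memorandum layers hundreds of such rules, plus natural language processing, so that a non-database user never has to write the query language directly.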

  20. High-Performance Secure Database Access Technologies for HEP Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities, a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist’s computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications.” There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing.
We believe that an innovative database architecture in which secure authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems’ security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for tight integration of the secure authorization layer with the database server engine, resulting in both improved performance and improved security. Phase I has focused on the development of a proof-of-concept prototype using Argonne National Laboratory’s (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project’s current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.

  1. [A web-based integrated clinical database for laryngeal cancer].

    PubMed

    E, Qimin; Liu, Jialin; Li, Yong; Liang, Chuanyu

    2014-08-01

    To establish an integrated database for laryngeal cancer and to provide an information platform for clinical and fundamental research on laryngeal cancer, meeting both clinical and scientific needs. Under the guidance of clinical experts, we constructed a web-based integrated clinical database for laryngeal carcinoma on the basis of clinical data standards and Apache+PHP+MySQL technology, incorporating laryngeal cancer specialist characteristics and tumor genetic information. A web-based integrated clinical database for laryngeal carcinoma was developed. The database has a user-friendly interface, and data can be entered and queried conveniently. In addition, the system follows clinical data standards and exchanges information with the existing electronic medical record system to avoid information silos. Furthermore, the database forms integrate laryngeal cancer specialist characteristics and tumor genetic information. The web-based integrated clinical database for laryngeal carcinoma offers comprehensive specialist information, strong expandability, and high technical feasibility, and it conforms to the clinical characteristics of the laryngeal cancer specialty. By using clinical data standards and structured handling of clinical data, the database can better meet the needs of scientific research and facilitate information exchange, and the information collected about tumor patients is highly informative. In addition, users can access and manipulate the database conveniently over the Internet.

  2. Aldrin and dieldrin: a review of research on their production, environmental deposition and fate, bioaccumulation, toxicology, and epidemiology in the United States.

    PubMed Central

    Jorgenson, J L

    2001-01-01

    In the last decade four international agreements have focused on a group of chemical substances known as persistent organic pollutants (POPs). Global agreement on the reduction and eventual elimination of these substances by banning their production and trade is a long-term goal. Negotiations for these agreements have focused on the need to correlate data from scientists working on soil and water sampling and air pollution monitoring. Toxicologists and epidemiologists have focused on wildlife and human health effects, and understanding patterns of disease requires better access to these data. In the last 20 years, substantial databases have been created and are now becoming available on the Internet. This review is a detailed examination of 2 of the 12 POPs, aldrin and dieldrin, and how scientific groups identify and measure their effects. It draws on research findings from a variety of environmental monitoring networks in the United States. An overview of the ecologic and health effects of aldrin and dieldrin provides examples of how to streamline some of the programs and improve access to mutually useful scientific data. The research groups are located in many government departments, universities, and private organizations. Identifying databases can provide an "information accelerator" useful to a larger audience and can help build better plant and animal research models across scientific fields. PMID:11250811

  3. An environmental database for Venice and tidal zones

    NASA Astrophysics Data System (ADS)

    Macaluso, L.; Fant, S.; Marani, A.; Scalvini, G.; Zane, O.

    2003-04-01

    The natural environment is a complex, highly variable, and physically non-reproducible system (neither in the laboratory nor in a confined territory). Environmental experimental studies are thus necessarily based on field measurements distributed in time and space. Only extensive data collections can provide the representative samples of system behavior that are essential for scientific advancement. The assimilation of large data collections into accessible archives must necessarily be implemented in electronic databases. In the case of tidal environments in general, and of the Venice lagoon in particular, it is useful to establish a database, freely accessible to the scientific community, documenting the dynamics of such systems and their response to anthropic pressures and climatic variability. At the Istituto Veneto di Scienze, Lettere ed Arti in Venice (Italy), two internet environmental databases have been developed: one collects detailed information on the Venice lagoon; the other coordinates the research consortium of the "TIDE" EU RTD project, which covers three different tidal areas: the Venice Lagoon (Italy), Morecambe Bay (England), and the Forth Estuary (Scotland). The archives may be accessed through the URL: www.istitutoveneto.it. The first is freely available to anyone interested. It is continuously updated and has been structured to document the Venetian environment and to disseminate this information for educational purposes (see the "Dissemination" section). The second is supplied by scientists and engineers working on these tidal systems for various purposes (scientific, management, conservation, etc.); it is aimed at interested researchers and grows with their own contributions.
Both intend to promote scientific communication, to contribute to the realization of a distributed information system collecting homogeneous themes, and to initiate interconnection among databases covering different kinds of environments.

  4. An Update of the Bodeker Scientific Vertically Resolved, Global, Gap-Free Ozone Database

    NASA Astrophysics Data System (ADS)

    Kremser, S.; Bodeker, G. E.; Lewis, J.; Hassler, B.

    2016-12-01

    High vertical resolution ozone measurements from multiple satellite-based instruments have been merged with measurements from the global ozonesonde network to calculate monthly mean ozone values in 5° latitude zones. Ozone number densities and ozone mixing ratios are provided on 70 altitude levels (1 to 70 km) and on 70 pressure levels spaced approximately 1 km apart (878.4 hPa to 0.046 hPa). These data are sparse and do not cover the entire globe or altitude range. To provide a gap-free database, a least squares regression model is fitted to these data and then evaluated globally. By applying a single fit at each level, and using the approach of allowing the regression fits to change only slightly from one level to the next, the regression is less sensitive to measurement anomalies at individual stations or to individual satellite-based instruments. Particular attention is paid to ensuring that the low ozone abundances in the polar regions are captured. This presentation reports on updates to an earlier version of the vertically resolved ozone database, including the incorporation of new ozone measurements and new techniques for combining the data. Compared to previous versions of the database, particular attention is paid to avoiding spatial and temporal sampling biases and tracing uncertainties through to the final product. This updated database, developed within the New Zealand Deep South National Science Challenge, is suitable for assessing ozone fields from chemistry-climate model simulations or for providing the ozone boundary conditions for global climate model simulations that do not treat stratospheric chemistry interactively.
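The idea of letting the regression fit change only slightly from one level to the next can be sketched as a ridge-style penalty that tethers each level's coefficients to those of the level below. This is a hedged illustration only: the basis functions, data, and penalty weight below are invented, and the actual database uses a far richer regression model.

```python
import numpy as np

def coupled_level_fits(X, Y, lam=5.0):
    """Fit beta[k] per level k, minimising
       ||Y[k] - X @ beta[k]||^2 + lam * ||beta[k] - beta[k-1]||^2,
    with the first level fitted by plain least squares."""
    n_levels, n_coef = Y.shape[0], X.shape[1]
    betas, prev, eye = [], None, np.eye(X.shape[1])
    for k in range(n_levels):
        if prev is None:
            beta, *_ = np.linalg.lstsq(X, Y[k], rcond=None)
        else:
            # Closed-form ridge solution tethered to the previous level.
            beta = np.linalg.solve(X.T @ X + lam * eye,
                                   X.T @ Y[k] + lam * prev)
        betas.append(beta)
        prev = beta
    return np.array(betas)

rng = np.random.default_rng(0)
months = np.arange(24)
X = np.column_stack([np.ones_like(months, dtype=float),
                     np.sin(2 * np.pi * months / 12),   # annual cycle
                     np.cos(2 * np.pi * months / 12)])
true = np.array([[5.0, 1.0, 0.5], [5.2, 1.1, 0.4]])     # two nearby levels
Y = true @ X.T + 0.1 * rng.standard_normal((2, months.size))
betas = coupled_level_fits(X, Y)
print(betas.shape)
```

Because the second level is tethered to the first, an outlier at a single level perturbs the fitted coefficients less than independent per-level fits would.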

  5. PaperBLAST: Text Mining Papers for Information about Homologs

    DOE PAGES

    Price, Morgan N.; Arkin, Adam P.

    2017-08-15

    Large-scale genome sequencing has identified millions of protein-coding genes whose function is unknown. Many of these proteins are similar to characterized proteins from other organisms, but much of this information is missing from annotation databases and is hidden in the scientific literature. To make this information accessible, PaperBLAST uses EuropePMC to search the full text of scientific articles for references to genes. PaperBLAST also takes advantage of curated resources (Swiss-Prot, GeneRIF, and EcoCyc) that link protein sequences to scientific articles. PaperBLAST’s database includes over 700,000 scientific articles that mention over 400,000 different proteins. Given a protein of interest, PaperBLAST quickly finds similar proteins that are discussed in the literature and presents snippets of text from relevant articles or from the curators. With the recent explosion of genome sequencing data, there are now millions of uncharacterized proteins. If a scientist becomes interested in one of these proteins, it can be very difficult to find information as to its likely function. Often a protein whose sequence is similar, and which is likely to have a similar function, has been studied already, but this information is not available in any database. To help find articles about similar proteins, PaperBLAST searches the full text of scientific articles for protein identifiers or gene identifiers, and it links these articles to protein sequences. Then, given a protein of interest, it can quickly find similar proteins in its database by using standard software (BLAST), and it can show snippets of text from relevant papers. We hope that PaperBLAST will make it easier for biologists to predict proteins’ functions.

  6. PaperBLAST: Text Mining Papers for Information about Homologs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Price, Morgan N.; Arkin, Adam P.

    Large-scale genome sequencing has identified millions of protein-coding genes whose function is unknown. Many of these proteins are similar to characterized proteins from other organisms, but much of this information is missing from annotation databases and is hidden in the scientific literature. To make this information accessible, PaperBLAST uses EuropePMC to search the full text of scientific articles for references to genes. PaperBLAST also takes advantage of curated resources (Swiss-Prot, GeneRIF, and EcoCyc) that link protein sequences to scientific articles. PaperBLAST’s database includes over 700,000 scientific articles that mention over 400,000 different proteins. Given a protein of interest, PaperBLAST quickly finds similar proteins that are discussed in the literature and presents snippets of text from relevant articles or from the curators. With the recent explosion of genome sequencing data, there are now millions of uncharacterized proteins. If a scientist becomes interested in one of these proteins, it can be very difficult to find information as to its likely function. Often a protein whose sequence is similar, and which is likely to have a similar function, has been studied already, but this information is not available in any database. To help find articles about similar proteins, PaperBLAST searches the full text of scientific articles for protein identifiers or gene identifiers, and it links these articles to protein sequences. Then, given a protein of interest, it can quickly find similar proteins in its database by using standard software (BLAST), and it can show snippets of text from relevant papers. We hope that PaperBLAST will make it easier for biologists to predict proteins’ functions.

  7. PaperBLAST: Text Mining Papers for Information about Homologs

    PubMed Central

    Arkin, Adam P.

    2017-01-01

    ABSTRACT Large-scale genome sequencing has identified millions of protein-coding genes whose function is unknown. Many of these proteins are similar to characterized proteins from other organisms, but much of this information is missing from annotation databases and is hidden in the scientific literature. To make this information accessible, PaperBLAST uses EuropePMC to search the full text of scientific articles for references to genes. PaperBLAST also takes advantage of curated resources (Swiss-Prot, GeneRIF, and EcoCyc) that link protein sequences to scientific articles. PaperBLAST’s database includes over 700,000 scientific articles that mention over 400,000 different proteins. Given a protein of interest, PaperBLAST quickly finds similar proteins that are discussed in the literature and presents snippets of text from relevant articles or from the curators. PaperBLAST is available at http://papers.genomics.lbl.gov/. IMPORTANCE With the recent explosion of genome sequencing data, there are now millions of uncharacterized proteins. If a scientist becomes interested in one of these proteins, it can be very difficult to find information as to its likely function. Often a protein whose sequence is similar, and which is likely to have a similar function, has been studied already, but this information is not available in any database. To help find articles about similar proteins, PaperBLAST searches the full text of scientific articles for protein identifiers or gene identifiers, and it links these articles to protein sequences. Then, given a protein of interest, it can quickly find similar proteins in its database by using standard software (BLAST), and it can show snippets of text from relevant papers. We hope that PaperBLAST will make it easier for biologists to predict proteins’ functions. PMID:28845458

  8. PaperBLAST: Text Mining Papers for Information about Homologs.

    PubMed

    Price, Morgan N; Arkin, Adam P

    2017-01-01

    Large-scale genome sequencing has identified millions of protein-coding genes whose function is unknown. Many of these proteins are similar to characterized proteins from other organisms, but much of this information is missing from annotation databases and is hidden in the scientific literature. To make this information accessible, PaperBLAST uses EuropePMC to search the full text of scientific articles for references to genes. PaperBLAST also takes advantage of curated resources (Swiss-Prot, GeneRIF, and EcoCyc) that link protein sequences to scientific articles. PaperBLAST's database includes over 700,000 scientific articles that mention over 400,000 different proteins. Given a protein of interest, PaperBLAST quickly finds similar proteins that are discussed in the literature and presents snippets of text from relevant articles or from the curators. PaperBLAST is available at http://papers.genomics.lbl.gov/. IMPORTANCE With the recent explosion of genome sequencing data, there are now millions of uncharacterized proteins. If a scientist becomes interested in one of these proteins, it can be very difficult to find information as to its likely function. Often a protein whose sequence is similar, and which is likely to have a similar function, has been studied already, but this information is not available in any database. To help find articles about similar proteins, PaperBLAST searches the full text of scientific articles for protein identifiers or gene identifiers, and it links these articles to protein sequences. Then, given a protein of interest, it can quickly find similar proteins in its database by using standard software (BLAST), and it can show snippets of text from relevant papers. We hope that PaperBLAST will make it easier for biologists to predict proteins' functions.
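The text-mining step these abstracts describe, scanning article full text for gene or protein identifiers and keeping snippets of surrounding context, can be sketched in a few lines. The identifiers, article text, and snippet radius below are invented for illustration; PaperBLAST itself works at a far larger scale and couples the results to a BLAST search over protein sequences.

```python
import re

def find_mentions(text, identifiers, radius=40):
    """Map each identifier to snippets of text in which it is mentioned."""
    mentions = {}
    for ident in identifiers:
        # Word-boundary match so 'recA' does not also hit 'recA1' etc.
        for m in re.finditer(rf"\b{re.escape(ident)}\b", text):
            lo, hi = max(0, m.start() - radius), m.end() + radius
            mentions.setdefault(ident, []).append(text[lo:hi].strip())
    return mentions

article = ("We deleted recA and observed increased UV sensitivity, "
           "consistent with a role for recA in DNA repair. The lexA "
           "repressor was unaffected.")
hits = find_mentions(article, ["recA", "lexA", "uvrA"])
print({k: len(v) for k, v in hits.items()})  # → {'recA': 2, 'lexA': 1}
```

Identifiers with no mentions (here `uvrA`) simply do not appear in the result, which keeps the article-to-protein index sparse.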

  9. Seabird databases and the new paradigm for scientific publication and attribution

    USGS Publications Warehouse

    Hatch, Scott A.

    2010-01-01

    For more than 300 years, the peer-reviewed journal article has been the principal medium for packaging and delivering scientific data. With new tools for managing digital data, a new paradigm is emerging—one that demands open and direct access to data and that enables and rewards a broad-based approach to scientific questions. Ground-breaking papers in the future will increasingly be those that creatively mine and synthesize vast stores of data available on the Internet. This is especially true for conservation science, in which essential data can be readily captured in standard record formats. For seabird professionals, a number of globally shared databases are in the offing, or should be. These databases will capture the salient results of inventories and monitoring, pelagic surveys, diet studies, and telemetry. A number of real or perceived barriers to data sharing exist, but none is insurmountable. Our discipline should take an important stride now by adopting a specially designed markup language for annotating and sharing seabird data.

  10. "Mr. Database" : Jim Gray and the History of Database Technologies.

    PubMed

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the development of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e.g., leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  11. Scientific production of medical sciences universities in north of iran.

    PubMed

    Siamian, Hasan; Firooz, Mousa Yamin; Vahedi, Mohammad; Aligolbandi, Kobra

    2013-01-01

    The study of scientific production as indexed by the world's major databases is one of the important indicators used to evaluate and rank universities. This study investigated the scientific production of the medical sciences universities of northern Iran in Scopus from 2005 through 2010. This survey used scientometric techniques. The samples under study were the scientific products of four northern Iran medical universities. In terms of quantity of scientific products, Mazandaran University of Medical Sciences stands first and Babol University of Medical Sciences ranks last; in terms of quality, considering the H-index and the number of cited papers, Mazandaran University of Medical Sciences is also ahead of the other universities under study. By subject, the highest scientific production belonged to the Faculty of Pharmacy affiliated with Mazandaran University of Medical Sciences, while for the three other universities it belonged to genetics and biochemistry. Results showed that Mazandaran University of Medical Sciences ranked higher than the other universities under study in the number of articles, cited articles, number of productive authors, and H-index in the Scopus database from 2005 through 2010.

  12. Coordinating Council. Fourth Meeting: NACA Documents Database Project

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This NASA Scientific and Technical Information Coordination Council meeting dealt with the topic 'NACA Documents Database Project'. The following presentations were made and reported on: NACA documents database project study plan, AIAA study, the Optimal NACA database, Deficiencies in online file, NACA documents: Availability and Preservation, the NARA Collection: What is in it? and What to do about it?, and NACA foreign documents and availability. Visuals are available for most presentations.

  13. Use of a secure Internet Web site for collaborative medical research.

    PubMed

    Marshall, W W; Haley, R W

    2000-10-11

    Researchers who collaborate on clinical research studies from diffuse locations need a convenient, inexpensive, secure way to record and manage data. The Internet, with its World Wide Web, provides a vast network that enables researchers with diverse types of computers and operating systems anywhere in the world to log data through a common interface. Development of a Web site for scientific data collection can be organized into 10 steps, including planning the scientific database, choosing a database management software system, setting up database tables for each collaborator's variables, developing the Web site's screen layout, choosing a middleware software system to tie the database software to the Web site interface, embedding data editing and calculation routines, setting up the database on the central server computer, obtaining a unique Internet address and name for the Web site, applying security measures to the site, and training staff who enter data. Ensuring the security of an Internet database requires limiting the number of people who have access to the server, setting up the server on a stand-alone computer, requiring user-name and password authentication for server and Web site access, installing a firewall computer to prevent break-ins and block bogus information from reaching the server, verifying the identity of the server and client computers with certification from a certificate authority, encrypting information sent between server and client computers to avoid eavesdropping, establishing audit trails to record all accesses into the Web site, and educating Web site users about security techniques. When these measures are carefully undertaken, in our experience, information for scientific studies can be collected and maintained on Internet databases more efficiently and securely than through conventional systems of paper records protected by filing cabinets and locked doors. JAMA. 2000;284:1843-1849.
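Two of the safeguards listed above, password authentication and careful handling of entered data, can be sketched with salted, iterated password hashing (so no clear-text passwords are stored) and parameterized SQL (so a malicious value cannot inject SQL). The table and column names are invented for illustration; a production system would add TLS, firewalls, certificates, and audit trails as the article describes.

```python
import hashlib
import hmac
import os
import sqlite3

def hash_password(password, salt=None, rounds=100_000):
    """Derive a salted, iterated hash; never store the clear-text password."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT PRIMARY KEY, salt BLOB, hash BLOB)")
db.execute("CREATE TABLE measurements (user TEXT, variable TEXT, value REAL)")

# Registration: store only the salt and the derived hash.
salt, digest = hash_password("correct horse")
db.execute("INSERT INTO users VALUES (?, ?, ?)", ("alice", salt, digest))

def authenticate(name, password):
    row = db.execute("SELECT salt, hash FROM users WHERE name = ?",
                     (name,)).fetchone()
    if row is None:
        return False
    _, attempt = hash_password(password, salt=row[0])
    return hmac.compare_digest(attempt, row[1])  # constant-time comparison

# Data entry uses placeholders, so values are never spliced into the SQL text.
if authenticate("alice", "correct horse"):
    db.execute("INSERT INTO measurements VALUES (?, ?, ?)",
               ("alice", "systolic_bp", 121.0))

ok = authenticate("alice", "correct horse")
bad = authenticate("alice", "wrong")
print(ok, bad)  # → True False
```

Parameterized queries also double as the "data editing" hook: the driver handles quoting and typing, so validation code can focus on scientific plausibility checks.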

  14. Study of Scientific Production of Community Medicines' Department Indexed in ISI Citation Databases.

    PubMed

    Khademloo, Mohammad; Khaseh, Ali Akbar; Siamian, Hasan; Aligolbandi, Kobra; Latifi, Mahsoomeh; Yaminfirooz, Mousa

    2016-10-01

    In scientometrics, the main criterion for determining the scientific position and ranking of scientific centers, particularly universities, is the rate of scientific production and innovation and the extent of participation in global scientific development. Medicine is among the fields most closely involved with science and technology and most influential on the improvement of health. In this research, using scientometric and citation analysis, we studied the rate of scientific production in the field of community medicine, measured as the number of articles published and indexed in the ISI database from 2000 to 2010. This is a scientometric study using survey and citation analysis. The study sample included all of the articles in the ISI database from 2000 to 2010. For data collection, the advanced search method of the ISI database was used. The ISI analysis software and descriptive statistics were used for data analysis. Results showed that among the five top universities in producing documents, Tehran University of Medical Sciences, with 88 (22.22%) documents, ranked first in scientific production. M. Askarian, with 36 (90/9%) published documents, was the most active author in community medicine in the international arena. In collaboration with other authors, Iranian departments of community medicine, with 27 published articles, had the greatest participation with English authors. Regarding the trend of scientific output, production was at its lowest from 2000 to 2004, while 2009 accounted for the largest share. The Iranian Journal of Public Health and the Saudi Medical Journal, with 16 articles each, had the highest participation rates in publishing community medicine research. 
Regarding document type, community medicine departments presented most of their scientific production, 340 (85.86%) items, in the format of journal articles. Among community medicine outputs, the article entitled "Iron loading and erythrophagocytosis increase ferroportin 1 (FPN1) expression in J774 macrophages" (1), with 81 citations, ranked first among cited articles. The subject areas of occupational health, with 70 articles, and general medicine, with 69 articles, were the most active research areas in the production of community medicine departments. The obtained data showed strong growth of scientific production. Tehran University of Medical Sciences ranked first in publishing articles in community medicine, collaboration was most frequent with English community medicine authors, and most authors presented their work in article format.

  15. Study of Scientific Production of Community Medicines’ Department Indexed in ISI Citation Databases

    PubMed Central

    Khademloo, Mohammad; Khaseh, Ali Akbar; Siamian, Hasan; Aligolbandi, Kobra; Latifi, Mahsoomeh; Yaminfirooz, Mousa

    2016-01-01

    Background: In scientometrics, the main criterion for determining the scientific position and ranking of scientific centers, particularly universities, is the rate of scientific production and innovation and the extent of participation in global scientific development. Medicine is among the fields most closely involved with science and technology and most influential on the improvement of health. In this research, using scientometric and citation analysis, we studied the rate of scientific production in the field of community medicine, measured as the number of articles published and indexed in the ISI database from 2000 to 2010. Methods: This is a scientometric study using survey and citation analysis. The study sample included all of the articles in the ISI database from 2000 to 2010. For data collection, the advanced search method of the ISI database was used. The ISI analysis software and descriptive statistics were used for data analysis. Results: Results showed that among the five top universities in producing documents, Tehran University of Medical Sciences, with 88 (22.22%) documents, ranked first in scientific production. M. Askarian, with 36 (90/9%) published documents, was the most active author in community medicine in the international arena. In collaboration with other authors, Iranian departments of community medicine, with 27 published articles, had the greatest participation with English authors. Regarding the trend of scientific output, production was at its lowest from 2000 to 2004, while 2009 accounted for the largest share. The Iranian Journal of Public Health and the Saudi Medical Journal, with 16 articles each, had the highest participation rates in publishing community medicine research. 
Regarding document type, community medicine departments presented most of their scientific production, 340 (85.86%) items, in the format of journal articles. Among community medicine outputs, the article entitled “Iron loading and erythrophagocytosis increase ferroportin 1 (FPN1) expression in J774 macrophages” (1), with 81 citations, ranked first among cited articles. The subject areas of occupational health, with 70 articles, and general medicine, with 69 articles, were the most active research areas in the production of community medicine departments. Conclusion: The obtained data showed strong growth of scientific production. Tehran University of Medical Sciences ranked first in publishing articles in community medicine, collaboration was most frequent with English community medicine authors, and most authors presented their work in article format. PMID:28077896

  16. Coordinating Council. First Meeting: NASA/RECON database

    NASA Technical Reports Server (NTRS)

    1990-01-01

    A Council of NASA Headquarters, American Institute of Aeronautics and Astronautics (AIAA), and the NASA Scientific and Technical Information (STI) Facility management met (1) to review and discuss issues of NASA concern, and (2) to promote new and better ways to collect and disseminate scientific and technical information. Topics mentioned for study and discussion at subsequent meetings included the pros and cons of transferring the NASA/RECON database to the commercial sector, the quality of the database, and developing ways to increase foreign acquisitions. The input systems at AIAA and the STI Facility were described. Also discussed were the proposed RECON II retrieval system, the transmittal of document orders received by the Facility and sent to AIAA, and the handling of multimedia input by the Departments of Defense and Commerce. A second meeting was scheduled for six weeks later to discuss database quality and international foreign input.

  17. Analysis of Landslide Hazard Impact Using the Landslide Database for Germany

    NASA Astrophysics Data System (ADS)

    Klose, M.; Damm, B.

    2014-12-01

    The Federal Republic of Germany has long been among the few European countries lacking a national landslide database. The systematic collection and inventorying of landslide data nonetheless has a comprehensive research history in Germany, but one focused on the development of databases with local or regional coverage. This has changed in recent years with the launch of a database initiative aimed at closing the data gap at the national level. The present contribution reports on this project, which is based on a landslide database that evolved over the last 15 years into one covering large parts of Germany. A strategy of systematic retrieval, extraction, and fusion of landslide data is at the heart of the methodology, providing the basis for a database with a broad potential of application. The database offers a data pool of more than 4,200 landslide data sets with over 13,000 single data files and dates back to the 12th century. All types of landslides are covered by the database, which stores not only core attributes but also various complementary data, including data on landslide causes, impacts, and mitigation. The current database migration to PostgreSQL/PostGIS is focused on unlocking the full scientific potential of the database while enabling data sharing and knowledge transfer via a web GIS platform. In this contribution, the goals and research strategy of the database project are highlighted first, with a summary of best practices in database development providing perspective. Next, the focus is on key aspects of the methodology, followed by the results of different case studies in the German Central Uplands. The case study results exemplify database application in the analysis of vulnerability to landslides, impact statistics, and hazard or cost modeling.
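    The PostgreSQL/PostGIS migration described above implies a spatially enabled schema in which core attributes, complementary data (causes, impacts, mitigation) and a geometry column sit side by side. A minimal sketch of what such a table and a bounding-box query might look like; all table and column names are hypothetical, not taken from the project:

```python
# Hypothetical sketch of a spatially enabled landslide inventory schema,
# expressed as SQL strings that a PostgreSQL/PostGIS client could execute.
# Table and column names are illustrative, not the project's actual schema.

LANDSLIDE_DDL = """
CREATE TABLE landslide_event (
    event_id     SERIAL PRIMARY KEY,
    event_date   DATE,                  -- records date back to the 12th century
    slide_type   TEXT,                  -- fall, slide, flow, ...
    cause        TEXT,                  -- complementary attribute: trigger/cause
    impact_eur   NUMERIC,               -- complementary attribute: damage cost
    mitigation   TEXT,                  -- complementary attribute: countermeasures
    geom         GEOMETRY(Point, 4326)  -- PostGIS point geometry (WGS 84)
);
"""

def events_in_bbox(min_lon, min_lat, max_lon, max_lat):
    """Build a spatial filter query for events inside a bounding box."""
    return (
        "SELECT event_id, event_date, slide_type FROM landslide_event "
        f"WHERE geom && ST_MakeEnvelope({min_lon}, {min_lat}, {max_lon}, {max_lat}, 4326);"
    )
```

    A web GIS front end would run such bounding-box queries against the database for each map viewport.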

  18. The Marshall Islands Data Management Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoker, A.C.; Conrado, C.L.

    1995-09-01

    This report is a resource document of the methods and procedures currently used in the Data Management Program of the Marshall Islands Dose Assessment and Radioecology Project. Since 1973, over 60,000 environmental samples have been collected. Our program includes relational database design, programming and maintenance; sample and information management; sample tracking; quality control; and data entry, evaluation and reduction. The usefulness of scientific databases requires careful planning in order to fulfill the requirements of any large research program. Compilation of scientific results requires consolidation of information from several databases and incorporation of new information as it is generated. The success in combining and organizing all radionuclide analysis, sample information and statistical results into a readily accessible form is critical to our project.

  19. Making your database available through Wikipedia: the pros and cons.

    PubMed

    Finn, Robert D; Gardner, Paul P; Bateman, Alex

    2012-01-01

    Wikipedia, the online encyclopedia, is the most famous wiki in use today. It contains over 3.7 million pages of content, with many pages written on scientific subject matters that include peer-reviewed citations, yet are written in an accessible manner and generally reflect the consensus opinion of the community. In this, the 19th Annual Database Issue of Nucleic Acids Research, there are 11 articles that describe the use of a wiki in relation to a biological database. In this commentary, we discuss how biological databases can be integrated with Wikipedia, thereby utilising the pre-existing infrastructure, tools and, above all, large community of authors (or Wikipedians). The limitations on the content that can be included in Wikipedia are highlighted, with examples drawn from articles found in this issue and other wiki-based resources, indicating why other wiki solutions are necessary. We discuss the merits of using open wikis, like Wikipedia, versus other models, with particular reference to potential vandalism. Finally, we raise the question of the future role of dedicated database biocurators in the context of the thousands of crowdsourced, community annotations that are now being stored in wikis.

  20. MiCroKit 3.0: an integrated database of midbody, centrosome and kinetochore.

    PubMed

    Ren, Jian; Liu, Zexian; Gao, Xinjiao; Jin, Changjiang; Ye, Mingliang; Zou, Hanfa; Wen, Longping; Zhang, Zhaolei; Xue, Yu; Yao, Xuebiao

    2010-01-01

    During cell division/mitosis, a specific subset of proteins is spatially and temporally assembled into protein super-complexes in three distinct regions, i.e. the centrosome/spindle pole, kinetochore/centromere and midbody/cleavage furrow/phragmoplast/bud neck, and faithfully modulates the cell division process. Although many experimental efforts have been carried out to investigate the characteristics of these proteins, no integrated database was available. Here, we present the MiCroKit database (http://microkit.biocuckoo.org) of proteins that localize to the midbody, centrosome and/or kinetochore. We collected into the MiCroKit database experimentally verified microkit proteins from the scientific literature that have unambiguous supportive evidence for subcellular localization under the fluorescence microscope. The current version of MiCroKit 3.0 provides detailed information for 1489 microkit proteins from seven model organisms, including Saccharomyces cerevisiae, Schizosaccharomyces pombe, Caenorhabditis elegans, Drosophila melanogaster, Xenopus laevis, Mus musculus and Homo sapiens. Moreover, orthologous information is provided for these microkit proteins and could be a useful resource for further experimental identification. The online service of the MiCroKit database was implemented in PHP + MySQL + JavaScript, while the local packages were developed in Java 1.5 (J2SE 5.0).

  1. MiCroKit 3.0: an integrated database of midbody, centrosome and kinetochore

    PubMed Central

    Liu, Zexian; Gao, Xinjiao; Jin, Changjiang; Ye, Mingliang; Zou, Hanfa; Wen, Longping; Zhang, Zhaolei; Xue, Yu; Yao, Xuebiao

    2010-01-01

    During cell division/mitosis, a specific subset of proteins is spatially and temporally assembled into protein super-complexes in three distinct regions, i.e. the centrosome/spindle pole, kinetochore/centromere and midbody/cleavage furrow/phragmoplast/bud neck, and faithfully modulates the cell division process. Although many experimental efforts have been carried out to investigate the characteristics of these proteins, no integrated database was available. Here, we present the MiCroKit database (http://microkit.biocuckoo.org) of proteins that localize to the midbody, centrosome and/or kinetochore. We collected into the MiCroKit database experimentally verified microkit proteins from the scientific literature that have unambiguous supportive evidence for subcellular localization under the fluorescence microscope. The current version of MiCroKit 3.0 provides detailed information for 1489 microkit proteins from seven model organisms, including Saccharomyces cerevisiae, Schizosaccharomyces pombe, Caenorhabditis elegans, Drosophila melanogaster, Xenopus laevis, Mus musculus and Homo sapiens. Moreover, orthologous information is provided for these microkit proteins and could be a useful resource for further experimental identification. The online service of the MiCroKit database was implemented in PHP + MySQL + JavaScript, while the local packages were developed in Java 1.5 (J2SE 5.0). PMID:19783819

  2. Making your database available through Wikipedia: the pros and cons

    PubMed Central

    Finn, Robert D.; Gardner, Paul P.; Bateman, Alex

    2012-01-01

    Wikipedia, the online encyclopedia, is the most famous wiki in use today. It contains over 3.7 million pages of content, with many pages written on scientific subject matters that include peer-reviewed citations, yet are written in an accessible manner and generally reflect the consensus opinion of the community. In this, the 19th Annual Database Issue of Nucleic Acids Research, there are 11 articles that describe the use of a wiki in relation to a biological database. In this commentary, we discuss how biological databases can be integrated with Wikipedia, thereby utilising the pre-existing infrastructure, tools and, above all, large community of authors (or Wikipedians). The limitations on the content that can be included in Wikipedia are highlighted, with examples drawn from articles found in this issue and other wiki-based resources, indicating why other wiki solutions are necessary. We discuss the merits of using open wikis, like Wikipedia, versus other models, with particular reference to potential vandalism. Finally, we raise the question of the future role of dedicated database biocurators in the context of the thousands of crowdsourced, community annotations that are now being stored in wikis. PMID:22144683

  3. Implementation and enforcement of the 3Rs principle in the field of transgenic animals used for scientific purposes. Report and recommendations of the BfR expert workshop, May 18-20, 2009, Berlin, Germany.

    PubMed

    Kretlow, Ariane; Butzke, Daniel; Goetz, Mario E; Grune, Barbara; Halder, Marlies; Henkler, Frank; Liebsch, Manfred; Nobiling, Rainer; Oelgeschlaeger, Michael; Reifenberg, Kurt; Schaefer, Bernd; Seiler, Andrea; Luch, Andreas

    2010-01-01

    In 2007, 2.7 million vertebrates were used for animal experiments and other scientific purposes in Germany alone. Since 1998 there has been an increase in the number of animals used for research purposes, which is partly attributable to the growing use of transgenic animals. These animals are, for instance, used as in vivo models to mimic human diseases like diabetes, cancer or Alzheimer's disease. Here, transgenic model organisms serve as valuable tools, being instrumental in facilitating the analysis of the molecular mechanisms underlying human diseases, and might contribute to the development of novel therapeutic approaches. Due to the variable, and sometimes low, efficiency of generation (depending on the species used), however, producing such animals often requires a large number of embryo donors and recipients. The experts evaluated methods that could possibly be utilised to reduce, refine or even replace experiments with transgenic vertebrates in the mid-term future. Among the promising alternative model organisms available at the moment are the fruit fly Drosophila melanogaster and the roundworm Caenorhabditis elegans. Specific cell culture experiments or three-dimensional (3D) tissue models also offer valuable opportunities to replace experiments with transgenic animals or to reduce the number of laboratory animals required by assisting in decision-making processes. Furthermore, an in vitro technique was presented at the workshop which permits the production of complete human antibodies without using genetically modified ("humanised") animals; up to now, genetically modified mice have been widely used for this purpose. Improved breeding protocols, enhanced efficiency of mutagenesis, and training of laboratory personnel and animal keepers can also help to reduce the numbers of laboratory animals.
Well-trained staff in particular can help to minimise the pain, suffering and discomfort of animals and, at the same time, improve the quality of data obtained from animal experiments. This, in turn, can lead to a reduction in the numbers of animals needed for each experiment. The experts also came to the conclusion that the numbers of laboratory animals can be reduced by open access to a central database that provides detailed documentation of completed experiments involving transgenic animals. This documentation should not be restricted to experiments with substantial scientific results that warrant publication, but should also include those with "negative" outcome, which are usually not published. Capturing all kinds of results within such a database provides added value to the respective scientists and the scientific community as a whole; it could also help to stimulate collaborations and to ensure funding for future research. An important aspect to be considered in the generation of this kind of database is the quality and standardisation of the information provided on existing in vitro models and the respective opportunities for their use. The experts felt that the greatest potential for reducing the numbers of laboratory animals in the near future realistically might not be offered by the complete replacement of transgenic animal models but by opportunities to examine specific questions to a greater degree using in vitro models, such as cell and tissue cultures including organotypic models. The use of these models would considerably reduce the number of in vivo experiments using transgenic animals. However, the overall number of experimental animals may still be increasing or remain unaffected, e.g. when transgenic animals continue to serve as the source of primary cells and organs/tissues for in vitro experiments.

  4. A survey of the current status of web-based databases indexing Iranian journals.

    PubMed

    Merat, Shahin; Khatibzadeh, Shahab; Mesgarpour, Bita; Malekzadeh, Reza

    2009-05-01

    The scientific output of Iran has been increasing rapidly in recent years. Unfortunately, most papers are published in journals that are not indexed by popular indexing systems, and many of them are in Persian without English translation. This makes the results of Iranian scientific research unavailable to other researchers, including Iranians. The aim of this study was to evaluate the quality of current web-based databases indexing scientific articles published in Iran. We identified web-based databases which index scientific journals published in Iran using popular search engines. The sites were then subjected to a series of tests to evaluate their coverage, search capabilities, stability, accuracy of information, consistency, accessibility, ease of use, and other features. Results were compared with each other to identify the strengths and shortcomings of each site. Five websites were identified. None had complete coverage of Iranian scientific journals. The search capabilities were less than optimal in most sites. English translations of research titles, author names, keywords, and abstracts of Persian-language articles did not follow standards. Some sites did not cover abstracts. Numerous typing errors make searches ineffective and citation indexing unreliable. None of the currently available indexing sites is capable of presenting Iranian research to the international scientific community. The government should intervene by enforcing policies designed to facilitate indexing through a systematic approach. The policies should address Iranian journals, authors, and indexing sites: Iranian journals should be required to provide their indexing data, including references, electronically; authors should provide correct indexing information to journals; and indexing sites should improve their software to meet standards set by the government.

  5. E&P data lifecycle: a case study in Petrobras Company

    NASA Astrophysics Data System (ADS)

    Mastella, Laura; Campinho, Vania; Alonso, João

    2013-04-01

    Petrobras, the biggest Brazilian petroleum company, has been studying and working on Brazilian sedimentary basins for nearly 60 years. The corporate database currently registers over 25,000 wells and all their associated products (geophysical logs, cores, sidewall samples) and analyses. There are thousands of samples, descriptions, pictures, measurements, and other scientific data resulting from petroleum exploration and production. These data constitute a huge scientific database which is applied to support Petrobras' economic strategy. Geological models built during the exploration phase continue to be refined during both the development and production phases: data must be continually manipulated, correlated and integrated. As E&P assets reach maturity, a new cycle starts: data are re-analyzed and new hypotheses are made in order to increase hydrocarbon productivity. Initial geological models then evolve from the knowledge accumulated throughout all the E&P phases. Therefore, quality control must be performed in the first phases of data acquisition, i.e., during the exploration phase, to avoid rework and loss of information. The last decade witnessed a great evolution in petroleum industry technology. As a consequence, the complexity and particulars of the information generated have increased accordingly. Current technology has also facilitated access to networks and databases, making it possible to store large amounts of information. This scenario makes available a large mass of information from different sources, which uses heterogeneous vocabulary as well as different scales and measurement units. In this context, knowledge might be diluted and the total amount of information cannot be applied in the E&P process. In order to provide adequate data governance, data input is controlled by rules, standards and policies, implemented by corporate software systems. Petrobras' integrated E&P database is a centralized repository to which all E&P systems have access.
    The quality of the data that goes into the database can be increased by means of information management practices:
    • data validation,
    • language internationalization,
    • dictionaries, patterns, and metadata.
    Moreover, stored data must be kept consistent, and any changes to the data should be registered while maintaining, if possible, the original data, associating each modification with its author, timestamp and reason. These practices lead to a database that serves and benefits the company's knowledge. Information retrieval and visualization is one of the main issues concerning petroleum companies. In order to make significant information available to end-users, it is fundamental to have an efficient data integration strategy. The integration of E&P data, such as geological, geophysical, geographical and operational data, is the end goal of the exploratory activities. Petrobras' corporate systems are evolving towards it so as to make available various data from diverse sources and to create a dashboard that can be easily accessed at any time by geoscientists and reservoir engineers. The main goal is to maintain the scientific integrity of information, from generators to consumers, throughout the E&P data life cycle.
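    The data-governance practices listed above (validation rules, controlled dictionaries, standard units) can be sketched in miniature. The field names and rules below are illustrative assumptions, not Petrobras' actual corporate systems:

```python
# Minimal sketch of rule-based validation at data entry, in the spirit of the
# governance practices described above. Field names and rules are hypothetical.

UNIT_DICTIONARY = {"depth": "m", "pressure": "kPa"}  # controlled vocabulary

def validate_well_record(record):
    """Return a list of validation errors for one well record (empty if valid)."""
    errors = []
    if not record.get("well_id"):
        errors.append("missing well_id")
    depth = record.get("depth_m")
    if depth is not None and not (0 <= depth <= 15000):
        errors.append(f"depth_m out of range: {depth}")
    if record.get("unit_depth", "m") != UNIT_DICTIONARY["depth"]:
        errors.append("depth must be stored in metres")
    return errors

record = {"well_id": "W-001", "depth_m": 3250, "unit_depth": "m"}
assert validate_well_record(record) == []
```

    In a corporate system such checks would run at input time, so that errors are caught in the exploration phase rather than propagated into later models.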

  6. Making the MagIC (Magnetics Information Consortium) Web Application Accessible to New Users and Useful to Experts

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Koppers, A.; Jarboe, N.; Tauxe, L.; Constable, C.; Jonestrask, L.

    2017-12-01

    Challenges are faced by both new and experienced users interested in contributing their data to community repositories, in data discovery, or engaged in potentially transformative science. The Magnetics Information Consortium (https://earthref.org/MagIC) has recently simplified its data model and developed a new containerized web application to reduce the friction in contributing, exploring, and combining valuable and complex datasets for the paleo-, geo-, and rock magnetic scientific community. The new data model more closely reflects the hierarchical workflow in paleomagnetic experiments to enable adequate annotation of scientific results and ensure reproducibility. The new open-source (https://github.com/earthref/MagIC) application includes an upload tool that is integrated with the data model to provide early data validation feedback and ease the friction of contributing and updating datasets. The search interface provides a powerful full text search of contributions indexed by ElasticSearch and a wide array of filters, including specific geographic and geological timescale filtering, to support both novice users exploring the database and experts interested in compiling new datasets with specific criteria across thousands of studies and millions of measurements. The datasets are not large, but they are complex, with many results from evolving experimental and analytical approaches. These data are also extremely valuable due to the cost in collecting or creating physical samples and the, often, destructive nature of the experiments. MagIC is heavily invested in encouraging young scientists as well as established labs to cultivate workflows that facilitate contributing their data in a consistent format. This eLightning presentation includes a live demonstration of the MagIC web application, developed as a configurable container hosting an isomorphic Meteor JavaScript application, MongoDB database, and ElasticSearch search engine. 
Visitors can explore the MagIC Database through maps and image or plot galleries or search and filter the raw measurements and their derived hierarchy of analytical interpretations.
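    The search interface described above combines full-text search with geographic and value-range filters. In Elasticsearch's query DSL this maps naturally onto a bool query; the field names used below ("summary", "location", "age_ma") are assumptions for illustration, not MagIC's actual index mapping:

```python
# Sketch of an Elasticsearch bool query combining full-text search with a
# geographic bounding box and an age-range filter, built as a plain dict.
# Field names are hypothetical, not MagIC's actual mapping.

def build_query(text, bbox, age_range_ma):
    """bbox = (top lat, left lon, bottom lat, right lon); ages in Ma."""
    return {
        "query": {
            "bool": {
                "must": [{"match": {"summary": text}}],
                "filter": [
                    {"geo_bounding_box": {"location": {
                        "top_left": {"lat": bbox[0], "lon": bbox[1]},
                        "bottom_right": {"lat": bbox[2], "lon": bbox[3]},
                    }}},
                    {"range": {"age_ma": {"gte": age_range_ma[0],
                                          "lte": age_range_ma[1]}}},
                ],
            }
        }
    }

q = build_query("basalt paleointensity", (60, -30, 30, 40), (0, 5))
```

    A client would send such a body to the search endpoint; scoring applies only to the `must` clause, while the filters cheaply restrict the candidate set.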

  7. LiverTox: Clinical and Research Information on Drug-Induced Liver Injury

    MedlinePlus

    ... Information presented in the LiverTox database is derived from the scientific literature and public ...

  8. Theory-based interventions in physical activity: a systematic review of literature in Iran.

    PubMed

    Abdi, Jalal; Eftekhar, Hassan; Estebsari, Fatemeh; Sadeghi, Roya

    2014-11-30

    Lack of physical activity ranks fourth among the causes of human death and chronic diseases. Using models and theories to design, implement, and evaluate health education and health promotion interventions has many advantages. Using models and theories of physical activity, we systematically studied the educational and promotional interventions carried out in Iran from 2003 to 2013. Three information databases were used to systematically select papers using keywords: the Iranian Magazine Database (MAGIRAN), the Iran Medical Library (MEDLIB), and the Scientific Information Database (SID). Twenty papers were selected and studied. Having been applied in 9 studies, the Transtheoretical Model (TTM) was the most widespread model in Iran (PENDER in 3 studies, BASNEF in 2, and the Theory of Planned Behavior in 2 studies). With regard to educational methods, almost all studies used a combination of methods; the most widely used was group discussion. Only one integrated study was done. Behavior maintenance was not addressed in 75% of the studies. Almost all studies used self-reporting instruments. The effectiveness of the educational methods was assessed in none of the studies. Most of the included studies had several methodological weaknesses, which hinder the validity and applicability of their results. According to the findings, we suggest needs assessment when using models, consultation on epidemiology and methodology, addressing maintenance of physical activity, using other theories and models such as social marketing and social-cognitive theory, and other educational methods such as empirical and complementary ones.

  9. The new modern era of yeast genomics: community sequencing and the resulting annotation of multiple Saccharomyces cerevisiae strains at the Saccharomyces Genome Database

    PubMed Central

    Engel, Stacia R.; Cherry, J. Michael

    2013-01-01

    The first completed eukaryotic genome sequence was that of the yeast Saccharomyces cerevisiae, and the Saccharomyces Genome Database (SGD; http://www.yeastgenome.org/) is the original model organism database. SGD remains the authoritative community resource for the S. cerevisiae reference genome sequence and its annotation, and continues to provide comprehensive biological information correlated with S. cerevisiae genes and their products. A diverse set of yeast strains have been sequenced to explore commercial and laboratory applications, and a brief history of those strains is provided. The publication of these new genomes has motivated the creation of new tools, and SGD will annotate and provide comparative analyses of these sequences, correlating changes with variations in strain phenotypes and protein function. We are entering a new era at SGD, as we incorporate these new sequences and make them accessible to the scientific community, all in an effort to continue in our mission of educating researchers and facilitating discovery. Database URL: http://www.yeastgenome.org/ PMID:23487186

  10. The Effect of Geographical Proximity on Scientific Cooperation among Chinese Cities from 1990 to 2010

    PubMed Central

    Ma, Haitao; Fang, Chuanglin; Pang, Bo; Li, Guangdong

    2014-01-01

    Background The relations between geographical proximity and spatial distance constitute a popular topic of concern. Thus, how geographical proximity affects scientific cooperation, and whether geographically proximate scientific cooperation activities in fact exhibit geographic scale features, should be investigated. Methodology Statistics from the ISI database on cooperatively authored papers, whose authors resided in 60 typical cities in China, published in the years 1990, 1995, 2000, 2005, and 2010, were used to establish matrices of geographic distance and cooperation levels between cities. By constructing a distance-cooperation model, the degree of scientific cooperation as a function of spatial distance was calculated. The relationship between geographical proximity and scientific cooperation, as well as changes in that relationship, was explored using the fitted function. Results (1) Instead of declining, the role of geographical proximity in inter-city scientific cooperation has increased gradually but significantly with the popularization of telecommunication technologies; (2) the relationship between geographical proximity and scientific cooperation has not followed a perfect declining curve, and at certain spatial scales the distance-decay regularity does not hold; (3) the Chinese scientific cooperation network gathers around different regional center cities, showing a trend towards a regional network; within this cooperation network, the amount of inter-city cooperation occurring at close range increased greatly. Conclusion The relationship between inter-city geographical distance and scientific cooperation has been enhanced and strengthened over time. PMID:25365449
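    The abstract does not specify the form of the distance-cooperation model, but distance-decay relationships of the form C = a * d^(-b) are commonly fitted by linear regression in log-log space. A stdlib-only sketch on synthetic data (the paper's actual distance and cooperation matrices are not reproduced here):

```python
import math

# Sketch of fitting a distance-decay law C = a * d**(-b) to cooperation counts
# by ordinary least squares in log-log space. The data below are synthetic,
# generated from a = 100, b = 1.5 with no noise, so the fit recovers them exactly.

def fit_decay(distances, cooperation):
    """Return (a, b) for C = a * d**(-b) via log-log linear regression."""
    xs = [math.log(d) for d in distances]
    ys = [math.log(c) for c in cooperation]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - slope * mx)
    b = -slope  # decay exponent
    return a, b

dists = [100, 200, 400, 800, 1600]          # inter-city distances (km)
coop = [100 * d ** -1.5 for d in dists]     # synthetic cooperation levels
a, b = fit_decay(dists, coop)
```

    A weakening distance-decay effect would show up as a smaller fitted exponent b over successive years; the study's finding that proximity still matters corresponds to b remaining significantly positive.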

  11. Atmospheric Effects of Subsonic Aircraft: Interim Assessment Report of the Advanced Subsonic Technology Program

    NASA Technical Reports Server (NTRS)

    Friedl, Randall R. (Editor)

    1997-01-01

    This first interim assessment of the Subsonic Assessment (SASS) project attempts to summarize concisely the status of our knowledge concerning the impacts of present and future subsonic aircraft fleets. It also highlights the major areas of scientific uncertainty, through review of existing databases and model-based sensitivity studies. In view of the need for substantial improvements in both model formulations and experimental databases, this interim assessment cannot provide confident numerical predictions of aviation impacts. However, a number of quantitative estimates are presented, which provide some guidance to policy makers.

  12. A new generation of intelligent trainable tools for analyzing large scientific image databases

    NASA Technical Reports Server (NTRS)

    Fayyad, Usama M.; Smyth, Padhraic; Atkinson, David J.

    1994-01-01

    The focus of this paper is on the detection of natural, as opposed to human-made, objects. The distinction is important because, in the context of image analysis, natural objects tend to possess much greater variability in appearance than human-made objects. Hence, we shall focus primarily on the use of algorithms that 'learn by example' as the basis for image exploration. The 'learn by example' approach is potentially more generally applicable than model-based vision methods, since domain scientists find it easier to provide examples of what they are searching for than to describe a model.
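    The 'learn by example' idea can be illustrated with the simplest example-based classifier, 1-nearest-neighbour over feature vectors: the scientist supplies a few labelled examples and the algorithm labels new data by similarity. The feature values and class labels below are toy illustrations, not the paper's actual system:

```python
import math

# Toy sketch of "learn by example": classify a feature vector by the label of
# its nearest labelled example (1-NN). The two features stand in for image
# measurements (e.g. brightness, texture); values are illustrative only.

def nearest_label(examples, query):
    """examples: list of (feature_vector, label); return label of the closest one."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return min(examples, key=lambda ex: dist(ex[0], query))[1]

# A scientist labels a handful of examples instead of writing an object model.
labelled = [([0.9, 0.1], "crater"), ([0.2, 0.8], "dune"), ([0.85, 0.2], "crater")]
```

    The appeal for large image databases is that adding a new target class needs only new examples, not a hand-built geometric model.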

  13. Classification of ECG beats using deep belief network and active learning.

    PubMed

    G, Sayantan; T, Kien P; V, Kadambari K

    2018-04-12

    A new semi-supervised approach based on deep learning and active learning for the classification of electrocardiogram (ECG) signals is proposed. The objective of the proposed work is to model a scientific method for the classification of cardiac irregularities using ECG beats. The model follows the Association for the Advancement of Medical Instrumentation (AAMI) standards and consists of three phases. In phase I, a feature representation of the ECG is learnt using a Gaussian-Bernoulli deep belief network, followed by linear support vector machine (SVM) training in the consecutive phase. This yields three deep models based on the AAMI-defined classes N, V, S, and F. In the last phase, a query generator is introduced to interact with the expert to label a few beats to improve accuracy and sensitivity. The proposed approach shows a significant improvement in accuracy with minimal queries posed to the expert and fast online training, as tested on the MIT-BIH Arrhythmia Database and the MIT-BIH Supraventricular Arrhythmia Database (SVDB). With 100 queries labeled by the expert in phase III, the method achieves an accuracy of 99.5% in "S" versus all classifications (SVEB) and 99.4% accuracy in "V" versus all classifications (VEB) on the MIT-BIH Arrhythmia Database. Similarly, an accuracy of 97.5% for SVEB and 98.6% for VEB is achieved on the SVDB database. Graphical Abstract: Deep belief network augmented by active learning for efficient prediction of arrhythmia.
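    The phase III query generator is, in essence, an uncertainty-sampling step: the beats the classifier is least certain about are sent to the expert for labelling. A minimal sketch, where the scores stand in for SVM decision-function margins (the values are illustrative, and the paper's exact query strategy may differ):

```python
# Sketch of an active-learning query step: pick the unlabelled samples the
# classifier is least certain about (smallest absolute margin) and route them
# to a human expert. Margin values below are hypothetical.

def select_queries(decision_scores, k):
    """Return indices of the k samples with the smallest |margin| (most uncertain)."""
    ranked = sorted(range(len(decision_scores)),
                    key=lambda i: abs(decision_scores[i]))
    return ranked[:k]

scores = [2.3, -0.1, 1.8, 0.05, -2.7, 0.4]  # hypothetical SVM margins per beat
queries = select_queries(scores, 2)          # beats to show the expert
```

    Labelling only these uncertain beats, then retraining, is what lets the reported accuracies be reached with as few as 100 expert queries.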

  14. Information And Data-Sharing Plan of IPY China Activity

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Cheng, W.

    2007-12-01

    Polar data sharing is an effective means of addressing global-system and polar science problems and of supporting interdisciplinary and sustainable study, as well as an important way to handle IPY scientific heritage and realize IPY goals. In accordance with IPY data-sharing policies, the Information and Data-Sharing Plan was listed among the five sub-plans of the IPY Chinese Programme launched in March 2007: the scientific research program of the Prydz Bay, Amery Ice Shelf and Dome A transects (short title: 'PANDA'), the Arctic Scientific Research Expedition Plan, the International Cooperation Plan, the Information and Data-Sharing Plan, and Education and Outreach. Since the foundation of the Antarctic Zhongshan Station in 1989, China has carried out systematic scientific expeditions and research in the Larsemann Hills, Prydz Bay and the neighbouring sea areas, organizing 14 Prydz Bay oceanographic investigations, 3 Amery Ice Shelf expeditions, 4 Grove Mountains expeditions and 5 inland ice cap scientific expeditions. Two comprehensive oceanographic investigations in the Arctic Ocean were conducted in 1999 and 2003, acquiring a large amount of data and samples along the PANDA section and in the Pacific-sector fan areas of the Arctic Ocean. A mechanism for submitting, sharing and archiving basic data has gradually been set up since 2000. Presently, the Polar Science Database and the Polar Sample Resource Sharing Platform of China, which aim at sharing polar data and samples, have been initially established and have begun to provide sharing services to domestic and overseas users. Under the IPY Chinese Activity, 2 scientific expeditions in the Arctic Ocean, 3 in the Southern Ocean, 2 at the Amery Ice Shelf, 1 in the Grove Mountains and 2 inland ice cap expeditions to Dome A will be carried out during the IPY period.
According to the experiences accumulated in the past and the jobs in the future, the Information and Data- Sharing Plan, during 2007-2010, will save, archive, and provide exchange and sharing services upon the data obtained by scientific expeditions on the site of IPY Chinese Programme. Meanwhile, focusing on areas in east Antarctic Dome A-Grove Mountain-Zhongshan Station-Amery Ice Shelf-Prydz Bay Section and the fan areas of Pacific Ocean in the Arctic Ocean, the Plan will also collect and integrate IPY data and historical data and establish database of PANDA Section and the Arctic Ocean. The details are as follows: On the basis of integrating the observed data acquired during the expeditions of China, the Plan will, adopting portal technology, develop 5 subject databases (English version included):(1) Database of Zhongshan Station- Dome A inner land ice cap section;(2) Database of interaction of ocean-ice-atmosphere-ice shelf in east Antarctica;(3) Database of geological and glaciological advance and retreat evolvement in Grove Mountains; (4) Database of Solar Terrestrial Physics at Zhongshan Station; (5) Oceanographic database of fan area of Pacific Ocean in the Arctic Ocean. CN-NADC of PRIC is the institute which assumes the responsibility for the Plan, specifically, it coordinates and organizes the operation of the Plan which includes data management, developing the portal of data and information sharing, and international exchanges. The specific assignments under the Plan will be carried out by research institutes under CAS (Chinese Academy of Sciences), SOA ( State Oceanic Administration), State Bureau of Surveying and Mapping and Ministry of Education.

  15. A data model and database for high-resolution pathology analytical image informatics.

    PubMed

    Wang, Fusheng; Kong, Jun; Cooper, Lee; Pan, Tony; Kurc, Tahsin; Chen, Wenjin; Sharma, Ashish; Niedermayr, Cristobal; Oh, Tae W; Brat, Daniel; Farris, Alton B; Foran, David J; Saltz, Joel

    2011-01-01

    The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a relational database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). (1) Development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information. (2) Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slide tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole slides and TMAs within several minutes. 
Hence, it is becoming increasingly feasible for basic, clinical, and translational research studies to produce thousands of whole-slide images. Systematic analysis of these large datasets requires efficient data management support for representing and indexing results from hundreds of interrelated analyses generating very large volumes of quantifications such as shape and texture and of classifications of the quantified features. We have designed a data model and a database to address the data management requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines. The data model represents virtual slide related image, annotation, markup and feature information. The database supports a wide range of metadata and spatial queries on images, annotations, markups, and features. We currently have three databases running on a Dell PowerEdge T410 server with the CentOS 5.5 Linux operating system. The database server is IBM DB2 Enterprise Edition 9.7.2. The set of databases consists of 1) a TMA database containing image analysis results from 4740 cases of breast cancer, with 641 MB storage size; 2) an algorithm validation database, which stores markups and annotations from two segmentation algorithms and two parameter sets on 18 selected slides, with 66 GB storage size; and 3) an in silico brain tumor study database comprising results from 307 TCGA slides, with 365 GB storage size. The latter two databases also contain human-generated annotations and markups for regions and nuclei. Modeling and managing pathology image analysis results in a database provides immediate benefits to the value and usability of data in a research study. The database provides powerful query capabilities, which are otherwise difficult or cumbersome to support by other approaches such as programming languages. 
Standardized, semantically annotated data representations and interfaces also make it possible to share image data and analysis results more efficiently.
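    The kind of metadata and spatial queries the PAIS database supports can be illustrated with a much-simplified relational sketch. The table layout and the bounding-box overlap test below are invented for illustration and are far cruder than the PAIS model itself.

```python
import sqlite3

# Hypothetical, much-simplified tables inspired by the PAIS description:
# images, markups (segmented regions as bounding boxes), and computed features.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE image  (image_id INTEGER PRIMARY KEY, slide TEXT, magnification REAL);
CREATE TABLE markup (markup_id INTEGER PRIMARY KEY, image_id INTEGER,
                     algorithm TEXT, xmin REAL, ymin REAL, xmax REAL, ymax REAL);
CREATE TABLE feature (markup_id INTEGER, name TEXT, value REAL);
""")
con.execute("INSERT INTO image VALUES (1, 'slide-01', 40.0)")
con.executemany("INSERT INTO markup VALUES (?,?,?,?,?,?,?)",
    [(1, 1, 'segA', 0, 0, 10, 10), (2, 1, 'segB', 5, 5, 15, 15),
     (3, 1, 'segA', 100, 100, 110, 110)])
con.executemany("INSERT INTO feature VALUES (?,?,?)",
    [(1, 'area', 100.0), (2, 'area', 100.0), (3, 'area', 100.0)])

# A crude spatial query: pairs of markups from two algorithms whose boxes
# overlap, the kind of comparison used for algorithm validation.
rows = con.execute("""
SELECT a.markup_id, b.markup_id
FROM markup a JOIN markup b ON a.image_id = b.image_id
WHERE a.algorithm = 'segA' AND b.algorithm = 'segB'
  AND a.xmin < b.xmax AND b.xmin < a.xmax
  AND a.ymin < b.ymax AND b.ymin < a.ymax
""").fetchall()
```

    Real implementations (the paper uses DB2) would use a spatial extension rather than raw coordinate comparisons, but the query pattern is the same.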

  16. Analysis of commercial and public bioactivity databases.

    PubMed

    Tiikkainen, Pekka; Franke, Lutz

    2012-02-27

    Activity data for small molecules are invaluable in chemoinformatics. Various bioactivity databases exist containing detailed information of target proteins and quantitative binding data for small molecules extracted from journals and patents. In the current work, we have merged several public and commercial bioactivity databases into one bioactivity metabase. The molecular presentation, target information, and activity data of the vendor databases were standardized. The main motivation of the work was to create a single relational database which allows fast and simple data retrieval by in-house scientists. Second, we wanted to know the amount of overlap between databases by commercial and public vendors to see whether the former contain data complementing the latter. Third, we quantified the degree of inconsistency between data sources by comparing data points derived from the same scientific article cited by more than one vendor. We found that each data source contains unique data which is due to different scientific articles cited by the vendors. When comparing data derived from the same article we found that inconsistencies between the vendors are common. In conclusion, using databases of different vendors is still useful since the data overlap is not complete. It should be noted that this can be partially explained by the inconsistencies and errors in the source data.
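    The overlap and inconsistency checks described can be sketched by keying each activity value on (article, compound, target) and comparing vendors. The record layout, identifiers, and the 0.3-unit tolerance below are assumptions for illustration only.

```python
from collections import defaultdict

# Toy records: (vendor, article_doi, compound, target, activity value)
records = [
    ("vendorA", "10.1/x", "CPD1", "COX-1", 4.5),
    ("vendorB", "10.1/x", "CPD1", "COX-1", 4.5),   # same article, agrees
    ("vendorA", "10.1/y", "CPD1", "COX-2", 5.0),
    ("vendorB", "10.1/y", "CPD1", "COX-2", 6.0),   # same article, inconsistent
    ("vendorB", "10.1/z", "CPD2", "COX-2", 7.1),   # article cited by one vendor only
]

by_key = defaultdict(dict)
for vendor, doi, cpd, tgt, val in records:
    by_key[(doi, cpd, tgt)][vendor] = val

# Data points cited by more than one vendor, and those where vendors disagree.
overlap = {k: v for k, v in by_key.items() if len(v) > 1}
inconsistent = {k: v for k, v in overlap.items()
                if max(v.values()) - min(v.values()) > 0.3}  # arbitrary tolerance
unique = {k: v for k, v in by_key.items() if len(v) == 1}
```

    The `unique` set corresponds to the finding that each source contains data from articles the others did not curate; `inconsistent` corresponds to the disagreements found when comparing points derived from the same article.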

  17. A user-friendly phytoremediation database: creating the searchable database, the users, and the broader implications.

    PubMed

    Famulari, Stevie; Witz, Kyla

    2015-01-01

    Designers, students, teachers, gardeners, farmers, landscape architects, architects, engineers, homeowners, and others have uses for the practice of phytoremediation. This research looks at the creation of a phytoremediation database which is designed for ease of use for a non-scientific user, as well as for students in an educational setting ( http://www.steviefamulari.net/phytoremediation ). During 2012, Environmental Artist & Professor of Landscape Architecture Stevie Famulari, with assistance from Kyla Witz, a landscape architecture student, created an online searchable database designed for high public accessibility. The database is a record of research on plant species that aid in the uptake of contaminants, including metals, organic materials, biodiesels & oils, and radionuclides. The database consists of multiple interconnected indexes categorized into common and scientific plant name, contaminant name, and contaminant type. It includes photographs, hardiness zones, specific plant qualities, full citations to the original research, and other relevant information intended to aid those designing with phytoremediation in searching for potential plants which may be used to address their site's needs. The objective of the terminology section is to remove uncertainty for more inexperienced users, and to clarify terms for a more user-friendly experience. Implications of the work, including education and ease of browsing, as well as use of the database in teaching, are discussed.

  18. Proposals to conserve the names Balansia claviceps against Ephelis mexicana,……,and Tolypocladium inflatum against Cordyceps subsessilis (Ascomycota: Sordariomycetes: Hypocreales)

    USDA-ARS?s Scientific Manuscript database

    In the course of updating the scientific names of plant-associated fungi in the U.S. National Fungus Collections Databases to conform with the requirement of one scientific name for each fungal species, several scientific names currently in use were identified that should be changed to the oldest ep...

  19. The use of the Hirsch index in benchmarking hepatic surgery research.

    PubMed

    Cucchetti, Alessandro; Mazzotti, Federico; Pellegrini, Sara; Cescon, Matteo; Maroni, Lorenzo; Ercolani, Giorgio; Pinna, Antonio Daniele

    2013-10-01

    The Hirsch index (h-index) is recognized as an effective way to summarize an individual's scientific research output. However, a benchmark for evaluating surgeon scientists in the field of hepatic surgery is still not available. A total of 3,251 authors who published between 1949 and 2011 were identified using the Scopus identification number. The h-index, the total number of cited documents, the total number of citations, and the scientific age were calculated for each author using both Scopus and Google Scholar. The median h-index was 6 and the median scientific age, assessed with Google Scholar, was 19 years. The numbers of cited documents, numbers of citations, and h-indexes obtained from Scopus and Google Scholar showed good correlation with one another; however, the results from the 2 databases were modified in different ways by scientific age. By plotting scientific age against h-index percentiles, an h-index growth chart was provided for both the Scopus database and Google Scholar. This analysis provides a first benchmark to assess surgeon scientists' productivity in the field of liver surgery. Copyright © 2013 Elsevier Inc. All rights reserved.
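    For reference, the h-index used throughout this record is the largest h such that an author has h papers each cited at least h times. A minimal sketch:

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:      # paper at this rank still has enough citations
            h = rank
        else:
            break
    return h

# An author with papers cited [10, 8, 5, 4, 3] times has h-index 4:
# four papers each have at least 4 citations, but not five with at least 5.
example = h_index([10, 8, 5, 4, 3])
```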

  20. Thomson Scientific's expanding Web of Knowledge: beyond citation databases and current awareness services.

    PubMed

    London, Sue; Brahmi, Frances A

    2005-01-01

    As end-user demand for easy access to electronic full text continues to climb, an increasing number of information providers are combining that access with their other products and services, making navigating their Web sites by librarians seeking information on a given product or service more daunting than ever. One such provider of a complex array of products and services is Thomson Scientific. This paper looks at some of the many products and tools available from two of Thomson Scientific's businesses, Thomson ISI and Thomson ResearchSoft. Among the items of most interest to health sciences and veterinary librarians and their users are the variety of databases available via the ISI Web of Knowledge platform and the information management products available from ResearchSoft.

  1. JoVE: the Journal of Visualized Experiments.

    PubMed

    Vardell, Emily

    2015-01-01

    The Journal of Visualized Experiments (JoVE) is the world's first scientific video journal and is designed to communicate research and scientific methods in an innovative, intuitive way. JoVE includes a wide range of biomedical videos, from biology to immunology and bioengineering to clinical and translational medicine. This column describes the browsing and searching capabilities of JoVE, as well as its additional features (including the JoVE Scientific Education Database designed for students in scientific fields).

  2. Kristin Munch | NREL

    Science.gov Websites

    Presentation: Information Management System, Materials Research Society Fall Meeting (2013). Expertise: photovoltaics informatics, scientific data management, database and data systems design, database clusters, storage systems integration, and distributed data analytics. She has used her experience in laboratory data management systems, lab

  3. Development of USDA's expanded flavonoid database: A Tool for Epidemiological Research

    USDA-ARS?s Scientific Manuscript database

    The scientific community continues to be interested in potential links between flavonoid intakes and beneficial health effects associated with certain chronic diseases such as cardiovascular diseases, some cancers and type 2 diabetes. Three separate flavonoid databases (Flavonoids (5 subclasses: fl...

  4. Bibliometric analysis of theses and dissertations on prematurity in the Capes database.

    PubMed

    Pizzani, Luciana; Lopes, Juliana de Fátima; Manzini, Mariana Gurian; Martinez, Claudia Maria Simões

    2012-01-01

    To perform a bibliometric analysis of theses and dissertations on prematurity in the Capes database from 1987 to 2009. This is a descriptive study that used the bibliometric approach for the production of indicators of scientific production. Operationally, the methodology was developed in four steps: 1) construction of the theoretical framework; 2) data collection sourced from the abstracts of theses and dissertations available in the Capes Thesis Database which addressed the issue of prematurity in the period 1987 to 2009; 3) organization, processing and construction of bibliometric indicators; 4) analysis and interpretation of results. The scientific literature on prematurity increased during the period 1987 to 2009; the production is represented mostly by dissertations, and the most prominent institution was the Universidade de São Paulo. The studies are directed toward low birth weight and very low birth weight preterm newborns, encompassing the social, biological and multifactorial causes of prematurity. There is a qualified, diverse and substantial scientific literature on prematurity developed in various graduate programs of higher education institutions in Brazil.

  5. Scientific thinking in elementary school: Children's social cognition and their epistemological understanding promote experimentation skills.

    PubMed

    Osterhaus, Christopher; Koerber, Susanne; Sodian, Beate

    2017-03-01

    Do social cognition and epistemological understanding promote elementary school children's experimentation skills? To investigate this question, 402 children (ages 8, 9, and 10) in 2nd, 3rd, and 4th grades were assessed for their experimentation skills, social cognition (advanced theory of mind [AToM]), epistemological understanding (understanding the nature of science), and general information-processing skills (inhibition, intelligence, and language abilities) in a whole-class testing procedure. A multiple indicators multiple causes model revealed a significant influence of social cognition (AToM) on epistemological understanding, and a McNemar test suggested that children's development of AToM is an important precursor for the emergence of an advanced, mature epistemological understanding. Children's epistemological understanding, in turn, predicted their experimentation skills. Importantly, this relation was independent of the common influences of general information processing. Significant relations between experimentation skills and inhibition, and between epistemological understanding, intelligence, and language abilities emerged, suggesting that general information processing contributes to the conceptual development that is involved in scientific thinking. The model of scientific thinking that was tested in this study (social cognition and epistemological understanding promote experimentation skills) fitted the data significantly better than 2 alternative models, which assumed nonspecific, equally strong relations between all constructs under investigation. Our results support the conclusion that social cognition plays a foundational role in the emergence of children's epistemological understanding, which in turn is closely related to the development of experimentation skills. 
Our findings have significant implications for the teaching of scientific thinking in elementary school and they stress the importance of children's epistemological understanding in scientific-thinking processes. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Konnichi Wa, Nihon (Hello, Japan!): Best Databases for Business, Technology and News.

    ERIC Educational Resources Information Center

    Hoetker, Glenn

    1994-01-01

    Describes online information sources for Japanese business, scientific, and technical developments. Highlights include English language materials versus the need for translation from Japanese; government research; scientific and technical information; patent information; corporate financial information; business information from newswires and…

  7. Astronomical Publishing: Yesterday, Today and Tomorrow

    NASA Astrophysics Data System (ADS)

    Huchra, John

    Just in the last few years scientific publishing has moved rapidly away from the modes that served it well for over two centuries. As "digital natives" take over the field and rapid and open access comes to dominate the way we communicate, both scholarly journals and libraries need to adopt new business models to serve their communities. This is best done by identifying new "added value" such as databases, full text searching, full cross indexing while at the same time retaining the high quality of peer reviewed publication.

  8. Abstracting data warehousing issues in scientific research.

    PubMed

    Tews, Cody; Bracio, Boris R

    2002-01-01

    This paper presents the design and implementation of the Idaho Biomedical Data Management System (IBDMS). This system preprocesses biomedical data from the IMPROVE (Improving Control of Patient Status in Critical Care) library via an Open Database Connectivity (ODBC) connection. The ODBC connection allows for local and remote simulations to access filtered, joined, and sorted data using the Structured Query Language (SQL). The tool is capable of providing an overview of available data in addition to user defined data subset for verification of models of the human respiratory system.

  9. Large-scale extraction of brain connectivity from the neuroscientific literature

    PubMed Central

    Richardet, Renaud; Chappelier, Jean-Cédric; Telefont, Martin; Hill, Sean

    2015-01-01

    Motivation: In neuroscience, as in many other scientific domains, the primary form of knowledge dissemination is through published articles. One challenge for modern neuroinformatics is finding methods to make the knowledge from the tremendous backlog of publications accessible for search, analysis and the integration of such data into computational models. A key example of this is metascale brain connectivity, where results are not reported in a normalized repository. Instead, these experimental results are published in natural language, scattered among individual scientific publications. This lack of normalization and centralization hinders the large-scale integration of brain connectivity results. In this article, we present text-mining models to extract and aggregate brain connectivity results from 13.2 million PubMed abstracts and 630 216 full-text publications related to neuroscience. The brain regions are identified with three different named entity recognizers (NERs) and then normalized against two atlases: the Allen Brain Atlas (ABA) and the atlas from the Brain Architecture Management System (BAMS). We then use three different extractors to assess inter-region connectivity. Results: NERs and connectivity extractors are evaluated against a manually annotated corpus. The complete in litero extraction models are also evaluated against in vivo connectivity data from ABA with an estimated precision of 78%. The resulting database contains over 4 million brain region mentions and over 100 000 (ABA) and 122 000 (BAMS) potential brain region connections. This database drastically accelerates connectivity literature review, by providing a centralized repository of connectivity data to neuroscientists. Availability and implementation: The resulting models are publicly available at github.com/BlueBrain/bluima. Contact: renaud.richardet@epfl.ch Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25609795
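    The normalization-and-extraction pipeline described above (named entity recognition, atlas normalization, connectivity extraction) can be caricatured in a few lines. The synonym table and the sentence-level co-mention rule below are invented stand-ins for the paper's three NERs and trained extractors.

```python
import re
from collections import Counter

# Tiny stand-in atlas: surface forms mapped to canonical region codes.
# (The paper normalizes against ABA and BAMS; these synonyms are invented.)
atlas = {"hippocampus": "HIP", "ca1": "HIP",
         "prefrontal cortex": "PFC", "pfc": "PFC",
         "thalamus": "TH"}

sentences = [
    "Projections from the hippocampus to the prefrontal cortex were traced.",
    "CA1 neurons target the PFC.",
    "The thalamus was unaffected.",
]

# Count normalized region pairs co-mentioned in a sentence as candidate
# connections; real extractors score the relation, not just the co-mention.
connections = Counter()
for s in sentences:
    found = {canon for form, canon in atlas.items()
             if re.search(r"\b" + re.escape(form) + r"\b", s.lower())}
    for a in sorted(found):
        for b in sorted(found):
            if a < b:
                connections[(a, b)] += 1
```

    Normalization is what lets the two differently worded sentences contribute to the same (HIP, PFC) aggregate, which is the core of the aggregation the paper performs at scale.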

  10. Towards structured sharing of raw and derived neuroimaging data across existing resources

    PubMed Central

    Keator, D.B.; Helmer, K.; Steffener, J.; Turner, J.A.; Van Erp, T.G.M.; Gadde, S.; Ashish, N.; Burns, G.A.; Nichols, B.N.

    2013-01-01

    Data sharing efforts increasingly contribute to the acceleration of scientific discovery. Neuroimaging data are accumulating in distributed domain-specific databases and there is currently no integrated access mechanism nor an accepted format for the critically important meta-data that is necessary for making use of the combined, available neuroimaging data. In this manuscript, we present work from the Derived Data Working Group, an open-access group sponsored by the Biomedical Informatics Research Network (BIRN) and the International Neuroinformatics Coordinating Facility (INCF), focused on practical tools for distributed access to neuroimaging data. The working group develops models and tools facilitating the structured interchange of neuroimaging meta-data and is making progress towards a unified set of tools for such data and meta-data exchange. We report on the key components required for integrated access to raw and derived neuroimaging data as well as associated meta-data and provenance across neuroimaging resources. The components include (1) a structured terminology that provides semantic context to data, (2) a formal data model for neuroimaging with robust tracking of data provenance, (3) a web service-based application programming interface (API) that provides a consistent mechanism to access and query the data model, and (4) a provenance library that can be used for the extraction of provenance data by image analysts and imaging software developers. We believe that the framework and set of tools outlined in this manuscript have great potential for solving many of the issues the neuroimaging community faces when sharing raw and derived neuroimaging data across the various existing database systems for the purpose of accelerating scientific discovery. PMID:23727024

  11. Data Mining Research with the LSST

    NASA Astrophysics Data System (ADS)

    Borne, Kirk D.; Strauss, M. A.; Tyson, J. A.

    2007-12-01

    The LSST catalog database will exceed 10 petabytes, comprising several hundred attributes for 5 billion galaxies, 10 billion stars, and over 1 billion variable sources (optical variables, transients, or moving objects), extracted from over 20,000 square degrees of deep imaging in 5 passbands with thorough time domain coverage: 1000 visits over the 10-year LSST survey lifetime. The opportunities are enormous for novel scientific discoveries within this rich time-domain ultra-deep multi-band survey database. Data Mining, Machine Learning, and Knowledge Discovery research opportunities with the LSST are now under study, with the potential for new collaborations to develop and contribute to these investigations. We will describe features of the LSST science database that are amenable to scientific data mining, object classification, outlier identification, anomaly detection, image quality assurance, and survey science validation. We also give some illustrative examples of current scientific data mining research in astronomy, and point out where new research is needed. In particular, the data mining research community will need to address several issues in the coming years as we prepare for the LSST data deluge. The data mining research agenda includes: scalability (at petabyte scales) of existing machine learning and data mining algorithms; development of grid-enabled parallel data mining algorithms; designing a robust system for brokering classifications from the LSST event pipeline (which may produce 10,000 or more event alerts per night); multi-resolution methods for exploration of petascale databases; visual data mining algorithms for visual exploration of the data; indexing of multi-attribute multi-dimensional astronomical databases (beyond RA-Dec spatial indexing) for rapid querying of petabyte databases; and more. 
Finally, we will identify opportunities for synergistic collaboration between the data mining research group and the LSST Data Management and Science Collaboration teams.
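    As a small illustration of the per-attribute outlier identification mentioned above, a robust z-score based on the median absolute deviation flags catalog anomalies cheaply and scales linearly with catalog size. The synthetic "color" column and the 5-sigma cut are assumptions, not LSST pipeline specifics.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic catalog "color" attribute: a common stellar locus plus a few
# injected anomalies appended at the end of the column.
color = np.concatenate([rng.normal(0.8, 0.05, 10000), [2.5, -1.0, 3.1]])

# Robust z-score via the median absolute deviation (MAD): unlike the mean
# and standard deviation, the median and MAD are barely moved by outliers.
med = np.median(color)
mad = np.median(np.abs(color - med))
robust_z = 0.6745 * (color - med) / mad   # 0.6745 rescales MAD to ~sigma
outliers = np.flatnonzero(np.abs(robust_z) > 5)
```

    A screen like this is only a first pass; multi-attribute methods (and the classification brokering the record discusses) are needed for events that look normal in every single column.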

  12. [Historical, social and cultural aspects of the deaf population].

    PubMed

    Duarte, Soraya Bianca Reis; Chaveiro, Neuma; Freitas, Adriana Ribeiro de; Barbosa, Maria Alves; Porto, Celmo Celeno; Fleck, Marcelo Pio de Almeida

    2013-10-01

    This work recovers, contextualizes and characterizes the social, historical and cultural aspects of the deaf community that uses the Brazilian Sign Language, focusing on the social and anthropological model. The scope of this study was to conduct a bibliographical review of scientific textbooks and articles available in the Virtual Health Library, irrespective of the date of publication. 102 articles and 53 books were located, of which 33 textbooks and 26 articles (four from the Lilacs database and 22 from the Medline database) constituted the sample. Today, in contrast with the past, there are laws that guarantee the right to communication and attendance by means of the Brazilian Sign Language. The repercussion, acceptance and inclusion in health policies of the decrees enshrined in Brazilian laws is a major priority.

  13. Models@Home: distributed computing in bioinformatics using a screensaver based approach.

    PubMed

    Krieger, Elmar; Vriend, Gert

    2002-02-01

    Due to the steadily growing computational demands in bioinformatics and related scientific disciplines, one is forced to make optimal use of the available resources. A straightforward solution is to build a network of idle computers and let each of them work on a small piece of a scientific challenge, as done by Seti@Home (http://setiathome.berkeley.edu), the world's largest distributed computing project. We developed a generally applicable distributed computing solution that uses a screensaver system similar to Seti@Home. The software exploits the coarse-grained nature of typical bioinformatics projects. Three major considerations for the design were: (1) often, many different programs are needed, while the time is lacking to parallelize them. Models@Home can run any program in parallel without modifications to the source code; (2) in contrast to the Seti project, bioinformatics applications are normally more sensitive to lost jobs. Models@Home therefore includes stringent control over job scheduling; (3) to allow use in heterogeneous environments, Linux and Windows based workstations can be combined with dedicated PCs to build a homogeneous cluster. We present three practical applications of Models@Home, running the modeling programs WHAT IF and YASARA on 30 PCs: force field parameterization, molecular dynamics docking, and database maintenance.
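    The stringent job control described in consideration (2) can be sketched as a dispatcher that re-queues any job whose result never arrives from a volunteer node. This toy scheduler is a sketch of the idea, not the Models@Home implementation.

```python
import time
from collections import deque

class JobScheduler:
    """Toy version of timeout-based job control: jobs handed to volunteer
    nodes are silently re-queued if no result arrives within the timeout."""

    def __init__(self, jobs, timeout=0.01):
        self.pending = deque(jobs)
        self.in_flight = {}            # job -> dispatch time
        self.done = []
        self.timeout = timeout

    def dispatch(self):
        self._requeue_lost()
        if self.pending:
            job = self.pending.popleft()
            self.in_flight[job] = time.monotonic()
            return job
        return None

    def complete(self, job):
        if job in self.in_flight:      # ignore late results for re-queued jobs
            del self.in_flight[job]
            self.done.append(job)

    def _requeue_lost(self):
        now = time.monotonic()
        for job, t0 in list(self.in_flight.items()):
            if now - t0 > self.timeout:
                del self.in_flight[job]
                self.pending.append(job)

sched = JobScheduler(["j1", "j2"])
sched.dispatch()                    # j1 goes to a node that silently dies
sched.complete(sched.dispatch())    # j2 completes normally
time.sleep(0.02)                    # j1 times out...
recovered = sched.dispatch()        # ...and is re-dispatched
sched.complete(recovered)
```

    Because every job eventually completes or is re-queued, no work is lost, which is the sensitivity-to-lost-jobs property the record contrasts with Seti@Home.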

  14. Spatial Databases for CalVO Volcanoes: Current Status and Future Directions

    NASA Astrophysics Data System (ADS)

    Ramsey, D. W.

    2013-12-01

    The U.S. Geological Survey (USGS) California Volcano Observatory (CalVO) aims to advance scientific understanding of volcanic processes and to lessen harmful impacts of volcanic activity in California and Nevada. Within CalVO's area of responsibility, ten volcanoes or volcanic centers have been identified by a national volcanic threat assessment in support of developing the U.S. National Volcano Early Warning System (NVEWS) as posing moderate, high, or very high threats to surrounding communities based on their recent eruptive histories and their proximity to vulnerable people, property, and infrastructure. To better understand the extent of potential hazards at these and other volcanoes and volcanic centers, the USGS Volcano Science Center (VSC) is continually compiling spatial databases of volcano information, including: geologic mapping, hazards assessment maps, locations of geochemical and geochronological samples, and the distribution of volcanic vents. This digital mapping effort has been ongoing for over 15 years and early databases are being converted to match recent datasets compiled with new data models designed for use in: 1) generating hazard zones, 2) evaluating risk to population and infrastructure, 3) numerical hazard modeling, and 4) display and query on the CalVO as well as other VSC and USGS websites. In these capacities, spatial databases of CalVO volcanoes and their derivative map products provide an integrated and readily accessible framework of VSC hazards science to colleagues, emergency managers, and the general public.

  15. Impact factor evolution of nursing research journals: 2009 to 2014.

    PubMed

    Cáceres, Macarena C; Guerrero-Martín, Jorge; González-Morales, Borja; Pérez-Civantos, Demetrio V; Carreto-Lemus, Maria A; Durán-Gómez, Noelia

    The use of bibliometric indicators (impact factor [IF], impact index, h-index, etc.) is now believed to be a fundamental measure of the quality of scientific research output. In this context, the presence of scientific nursing journals in international databases and the factors influencing their impact ratings is being widely analyzed. The aim of this study was to analyze the presence of scientific nursing journals in international databases and track the changes in their IF. A secondary analysis was carried out on data for the years 2009 to 2014 held in the JCR database (subject category: nursing). Additionally, the presence of scientific nursing journals in Medline, CINAHL, Scopus, and SJR was analyzed. During the period studied, the number of journals indexed in the JCR under the nursing subject category increased from 70 in 2009 (mean IF: 0.99, standard deviation: 0.53) to 115 in 2014 (mean IF: 1.04, standard deviation: 0.42), of which only 70 were listed for the full six years. Although mean IF showed an upward trend throughout this time, no statistically significant differences were found in the variations to this figure. Although IF and other bibliometric indicators have their limitations, it is nonetheless true that bibliometry is now the most widely used tool for evaluating scientific output in all disciplines, including nursing, highlighting the importance of being familiar with how they are calculated and their significance when deciding the journal or journals in which to publish the results of our research. That said, it is also necessary to consider other possible alternative ways of assessing the quality and impact of scientific contributions. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Nutritional Status Driving Infection by Trypanosoma cruzi: Lessons from Experimental Animals

    PubMed Central

    Malafaia, Guilherme; Talvani, André

    2011-01-01

    This paper reviews the scientific knowledge about protein-energy and micronutrient malnutrition in the context of Chagas disease, especially in experimental models. The search for articles was conducted using the electronic databases SciELO (Scientific Electronic Library Online), PubMed and MEDLINE, covering publications between 1960 and March 2010. The review shows that nutritional deficiencies (protein-energy malnutrition and micronutrient malnutrition) exert a direct effect on infection by T. cruzi. However, little is known about the immunological mechanisms involved in the relationship between nutritional deficiencies and infection by T. cruzi. A hundred years after the discovery of Chagas disease, many aspects of this illness still require clarification, including the effects of nutritional deficiencies on immune and pathological mechanisms of T. cruzi infection. PMID:21577255

  17. [Construction of chemical information database based on optical structure recognition technique].

    PubMed

    Lv, C Y; Li, M N; Zhang, L R; Liu, Z M

    2018-04-18

    To create a protocol for constructing a chemical information database from the scientific literature quickly and automatically. Scientific literature, patents and technical reports from different chemical disciplines were collected and stored in PDF format as the fundamental dataset. Chemical structures were transformed from published documents and images into machine-readable data using name-conversion technology and the optical structure recognition tool CLiDE. During molecular structure extraction, Markush structures were enumerated into well-defined monomer molecules by means of the QueryTools in the molecule editor ChemDraw. The document management software EndNote X8 was applied to acquire bibliographic references including title, author, journal and year of publication. The text mining toolkit ChemDataExtractor was adopted to retrieve information from figures, tables, and textual paragraphs to populate the structured chemical database. After this step, detailed manual revision and annotation were conducted to ensure the accuracy and completeness of the data. In addition to the literature data, the computing simulation platform Pipeline Pilot 7.5 was utilized to calculate physical and chemical properties and predict molecular attributes. Furthermore, the open database ChEMBL was linked to fetch known bioactivities, such as indications and targets. After information extraction and data expansion, five separate metadata files were generated: the molecular structure data file, molecular information, bibliographic references, predicted attributes and known bioactivities. Using the canonical simplified molecular-input line-entry specification (SMILES) as the primary key, the metadata files were associated through common key nodes, including molecular number and PDF number, to construct an integrated chemical information database. A reasonable construction protocol for a chemical information database was created successfully. A total of 174 research articles and 25 reviews published in Marine Drugs from January 2015 to June 2016 were collected as the essential data source, and an elementary marine natural product database named PKU-MNPD was built in accordance with this protocol, containing 3 262 molecules and 19 821 records. This data aggregation protocol greatly helps ensure the accuracy, comprehensiveness and efficiency of chemical information database construction based on original documents. The structured chemical information database can facilitate access to medical intelligence and accelerate the transformation of scientific research achievements.
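    The association step in this protocol joins separate metadata tables on a shared canonical-SMILES primary key. A minimal sketch of that join follows; the field names and example molecule below are hypothetical placeholders, not taken from PKU-MNPD:

```python
# Join separate metadata tables on a shared canonical-SMILES primary key.
# Field names and records are illustrative placeholders.
structures = {"CCO": {"mol_number": "M001"}}   # molecular structure data
properties = {"CCO": {"logp": -0.31}}          # predicted attributes
references = {"CCO": {"pdf_number": "P017"}}   # bibliographic link

def merge_records(*tables):
    """Merge per-molecule dicts keyed by canonical SMILES."""
    merged = {}
    for table in tables:
        for smiles, fields in table.items():
            merged.setdefault(smiles, {}).update(fields)
    return merged

db = merge_records(structures, properties, references)
```

    Keying every table on the same canonical string means new metadata files can be merged into the database without renumbering existing records.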

  18. Cost-efficient scheduling of FAST observations

    NASA Astrophysics Data System (ADS)

    Luo, Qi; Zhao, Laiping; Yu, Ce; Xiao, Jian; Sun, Jizhou; Zhu, Ming; Zhong, Yi

    2018-03-01

    A cost-efficient schedule for the Five-hundred-meter Aperture Spherical radio Telescope (FAST) needs to maximize the number of observable proposals and the overall scientific priority, and minimize the overall slew-cost generated by telescope shifting, while taking into account constraints including astronomical object visibility, user-defined observable times, and avoidance of Radio Frequency Interference (RFI). In this contribution, we first solve the problem of maximizing the number of observable proposals and scientific priority by modeling it as a Minimum Cost Maximum Flow (MCMF) problem; the optimal schedule can then be found by any MCMF solution algorithm. Then, to minimize the slew-cost of the generated schedule, we devise a maximally-matchable edge detection-based method to reduce the problem size, and propose a backtracking algorithm to find the perfect matching with minimum slew-cost. Experiments on a real dataset from the NASA/IPAC Extragalactic Database (NED) show that the proposed scheduler can increase the usage of available times with high scientific priority and reduce the slew-cost significantly in a very short time.
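    The second stage described in this abstract, a backtracking search for the perfect matching with minimum total slew-cost, can be sketched as below. The toy cost matrix is illustrative only, and the paper's scheduler additionally prunes the search space via maximally-matchable edge detection, which this sketch omits:

```python
# Backtracking search for a minimum-cost perfect matching between
# proposals (rows) and time slots (columns). Entries are illustrative
# slew-costs; float('inf') would mark a forbidden assignment.
INF = float("inf")

def min_cost_matching(cost):
    n = len(cost)
    best = [INF, None]   # [best total cost, best assignment]

    def backtrack(row, used, total, assign):
        if total >= best[0]:            # prune branches that cannot improve
            return
        if row == n:                    # all proposals assigned
            best[0], best[1] = total, assign[:]
            return
        for col in range(n):
            if col not in used and cost[row][col] < INF:
                used.add(col)
                assign.append(col)
                backtrack(row + 1, used, total + cost[row][col], assign)
                assign.pop()
                used.remove(col)

    backtrack(0, set(), 0, [])
    return best[0], best[1]

slew = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
cost, assignment = min_cost_matching(slew)  # cost 5, proposals -> slots [1, 0, 2]
```

    The branch-and-bound pruning (`total >= best[0]`) keeps the exponential search tractable once a reasonably cheap matching has been found early.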

  19. An International Aerospace Information System: A Cooperative Opportunity.

    ERIC Educational Resources Information Center

    Blados, Walter R.; Cotter, Gladys A.

    1992-01-01

    Introduces and discusses ideas and issues relevant to the international unification of scientific and technical information (STI) through development of an international aerospace database (IAD). Specific recommendations for improving the National Aeronautics and Space Administration Aerospace Database (NAD) and for implementing IAD are given.…

  20. ECOS E-MATRIX Methane and Volatile Organic Carbon (VOC) Emissions Best Practices Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parisien, Lia

    2016-01-31

    This final scientific/technical report on the ECOS e-MATRIX Methane and Volatile Organic Carbon (VOC) Emissions Best Practices Database provides a disclaimer and acknowledgement, table of contents, executive summary, description of project activities, and briefing/technical presentation link.

  1. Subject and authorship of records related to the Organization for Tropical Studies (OTS) in BINABITROP, a comprehensive database about Costa Rican biology.

    PubMed

    Monge-Nájera, Julián; Nielsen-Muñoz, Vanessa; Azofeifa-Mora, Ana Beatriz

    2013-06-01

    BINABITROP is a bibliographical database of more than 38000 records about the ecosystems and organisms of Costa Rica. In contrast with commercial databases, such as Web of Knowledge and Scopus, which exclude most of the scientific journals published in tropical countries, BINABITROP is a comprehensive record of knowledge on the tropical ecosystems and organisms of Costa Rica. We analyzed its contents for three sites (La Selva, Palo Verde and Las Cruces) and recorded scientific field, taxonomic group and authorship. We found that most records dealt with ecology and systematics, and that most authors published only one article in the study period (1963-2011). Most research was published in four journals: Biotropica, Revista de Biología Tropical / International Journal of Tropical Biology and Conservation, Zootaxa and Brenesia. This may be the first study of such a comprehensive database for tropical biology literature.

  2. A community effort to construct a gravity database for the United States and an associated Web portal

    USGS Publications Warehouse

    Keller, Gordon R.; Hildenbrand, T.G.; Kucks, R.; Webring, M.; Briesacher, A.; Rujawitz, K.; Hittleman, A.M.; Roman, D.R.; Winester, D.; Aldouri, R.; Seeley, J.; Rasillo, J.; Torres, R.; Hinze, W. J.; Gates, A.; Kreinovich, V.; Salayandia, L.

    2006-01-01

    Potential field data (gravity and magnetic measurements) are both useful and cost-effective tools for many geologic investigations. Significant amounts of these data are traditionally in the public domain. A new magnetic database for North America was released in 2002, and as a result, a cooperative effort between government agencies, industry, and universities to compile an upgraded digital gravity anomaly database, grid, and map for the conterminous United States was initiated and is the subject of this paper. This database is being crafted into a data system that is accessible through a Web portal. This data system features the database, software tools, and convenient access. The Web portal will enhance the quality and quantity of data contributed to the gravity database that will be a shared community resource. The system's totally digital nature ensures that it will be flexible so that it can grow and evolve as new data, processing procedures, and modeling and visualization tools become available. Another goal of this Web-based data system is facilitation of the efforts of researchers and students who wish to collect data from regions currently not represented adequately in the database. The primary goal of upgrading the United States gravity database and this data system is to provide more reliable data that support societal and scientific investigations of national importance. An additional motivation is the international intent to compile an enhanced North American gravity database, which is critical to understanding regional geologic features, the tectonic evolution of the continent, and other issues that cross national boundaries. © 2006 Geological Society of America. All rights reserved.

  3. The landslide database for Germany: Closing the gap at national level

    NASA Astrophysics Data System (ADS)

    Damm, Bodo; Klose, Martin

    2015-11-01

    The Federal Republic of Germany has long been among the few European countries lacking a national landslide database. Systematic collection and inventory of landslide data has a long research history in Germany, but one focused on the development of databases with local or regional coverage. This has changed in recent years with the launch of a database initiative aimed at closing the data gap at national level. The present paper reports on this project, which is based on a landslide database that evolved over the last 15 years into a database covering large parts of Germany. A strategy of systematic retrieval, extraction, and fusion of landslide data is at the heart of the methodology, providing the basis for a database with a broad potential of application. The database offers a data pool of more than 4,200 landslide data sets with over 13,000 single data files, dating back to the 12th century. All types of landslides are covered by the database, which stores not only core attributes but also various complementary data, including data on landslide causes, impacts, and mitigation. The current database migration to PostgreSQL/PostGIS is focused on unlocking the full scientific potential of the database, while enabling data sharing and knowledge transfer via a web GIS platform. In this paper, the goals and the research strategy of the database project are highlighted first, with a summary of best practices in database development providing perspective. Next, the focus is on key aspects of the methodology, followed by the results of three case studies in the German Central Uplands. The case study results exemplify database application in the analysis of landslide frequency and causes, impact statistics, and landslide susceptibility modeling. Using these case studies as examples, strengths and weaknesses of the database are discussed in detail. 
The paper concludes with a summary of the database project with regard to previous achievements and the strategic roadmap.

  4. Digital Earth system based river basin data integration

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Li, Wanqing; Lin, Chao

    2014-12-01

    Digital Earth is an integrated approach to building scientific infrastructure. Digital Earth systems provide a three-dimensional visualization and integration platform for river basin data, which include management data, in situ observation data, remote sensing observation data and model output data. This paper studies Digital Earth-based river basin data integration technology. Firstly, the construction of the Digital Earth-based three-dimensional river basin data integration environment is discussed. Then the river basin management data integration technology is presented, which is realized through a general database access interface, web services and ActiveX controls. Thirdly, in situ data stored as database records are integrated with three-dimensional models of the corresponding observation apparatus displayed in the Digital Earth system, linked by a shared ID code. In the next two parts, the remote sensing data and model output data integration technologies are discussed in detail. The application in the Digital Zhang River Basin System of China shows that the method can effectively improve the usage efficiency and visualization effect of the data.

  5. Nanopublications for exposing experimental data in the life-sciences: a Huntington's Disease case study.

    PubMed

    Mina, Eleni; Thompson, Mark; Kaliyaperumal, Rajaram; Zhao, Jun; van der Horst, Eelke; Tatum, Zuotian; Hettne, Kristina M; Schultes, Erik A; Mons, Barend; Roos, Marco

    2015-01-01

    Data from high throughput experiments often produce far more results than can ever appear in the main text or tables of a single research article. In these cases, the majority of new associations are often archived either as supplemental information in an arbitrary format or in publisher-independent databases that can be difficult to find. These data are not only lost from scientific discourse, but are also elusive to automated search, retrieval and processing. Here, we use the nanopublication model to make scientific assertions that were concluded from a workflow analysis of Huntington's Disease data machine-readable, interoperable, and citable. We followed the nanopublication guidelines to semantically model our assertions as well as their provenance metadata and authorship. We demonstrate interoperability by linking nanopublication provenance to the Research Object model. These results indicate that nanopublications can provide an incentive for researchers to expose data that is interoperable and machine-readable for future use and preservation, for which they can receive credit for their effort. Nanopublications can play a leading role in hypothesis generation, offering opportunities for large-scale data integration.
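    A nanopublication packages a single assertion together with its provenance and publication info as named graphs. A minimal sketch of that structure, assembled as TriG text, is shown below; the example triple and all `ex:` URIs are illustrative placeholders, not assertions from the Huntington's Disease study:

```python
# Assemble a minimal nanopublication (head + assertion + provenance +
# publication info as named graphs) in TriG syntax. All ex: URIs are
# illustrative placeholders.
PREFIXES = """@prefix np: <http://www.nanopub.org/nschema#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix ex: <http://example.org/> .
"""

def nanopub(assertion, derived_from, author):
    """Return a TriG document with the four standard nanopub graphs."""
    return PREFIXES + f"""
ex:head {{
  ex:pub a np:Nanopublication ;
    np:hasAssertion ex:assertion ;
    np:hasProvenance ex:provenance ;
    np:hasPublicationInfo ex:pubinfo .
}}
ex:assertion {{ {assertion} }}
ex:provenance {{ ex:assertion prov:wasDerivedFrom <{derived_from}> . }}
ex:pubinfo {{ ex:pub prov:wasAttributedTo <{author}> . }}
"""

trig = nanopub("ex:geneX ex:associatedWith ex:HuntingtonsDisease .",
               "http://example.org/workflow-run-1",
               "http://example.org/researcher-1")
```

    Because the assertion, its provenance, and its authorship live in separate named graphs, each part can be queried, cited, or credited independently.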

  6. Kameleon Live: An Interactive Cloud Based Analysis and Visualization Platform for Space Weather Researchers

    NASA Astrophysics Data System (ADS)

    Pembroke, A. D.; Colbert, J. A.

    2015-12-01

    The Community Coordinated Modeling Center (CCMC) provides hosting for many of the simulations used by the space weather community of scientists, educators, and forecasters. CCMC users may submit model runs through the Runs on Request system, which produces static visualizations of model output in the browser, while further analysis may be performed off-line via Kameleon, CCMC's cross-language access and interpolation library. Off-line analysis may be suitable for power-users, but storage and coding requirements present a barrier to entry for non-experts. Moreover, a lack of a consistent framework for analysis hinders reproducibility of scientific findings. To that end, we have developed Kameleon Live, a cloud based interactive analysis and visualization platform. Kameleon Live allows users to create scientific studies built around selected runs from the Runs on Request database, perform analysis on those runs, collaborate with other users, and disseminate their findings among the space weather community. In addition to showcasing these novel collaborative analysis features, we invite feedback from CCMC users as we seek to advance and improve on the new platform.

  7. [Bibliometric analysis of scientific articles on rehabilitation nursing for adult burn patients in China].

    PubMed

    Ying, Sun; Jie, Cao; Ping, Feng; Lingjuan, Zhang

    2015-06-01

    To analyze the current research status of rehabilitation nursing for adult burn patients in China, and to discuss the related strategies. Chinese scientific articles on adult burn patients' rehabilitation nursing published from January 2003 to December 2013 were retrieved from 3 databases, namely the China Biology Medicine disc, the Chinese Journals Full-text Database, and the Chinese Science and Technology Journals Database. From the results retrieved, data with regard to publication year, journal distribution, research type, region of affiliation of the first author, and the main research content were collected. Data were processed with Microsoft Excel software. A total of 417 articles conforming with the criteria were retrieved. During the 11 years, the number of relevant articles per year was on the rise, and the increasing rates in 2005, 2008, 2009, and 2013 were all above 30%. Regarding the distribution among journals, these 417 articles were published in 151 journals, with 188 articles in a Source Journal for Chinese Scientific and Technical Papers, accounting for 45.08%. Regarding the research type, 173 of the 417 articles dealt with clinical experiences, accounting for 41.49%; 172 dealt with experimental studies, accounting for 41.25%. The regions of affiliation of the first author were mainly Guangdong, Shandong, Hunan, and Jiangsu provinces, with Guangdong province contributing 58 articles, accounting for 13.91%. The research content of these articles was mainly focused on psychological nursing, nursing models, and health education, with 188, 101, and 85 articles respectively, accounting for 45.08%, 24.22%, and 20.38%. Research on rehabilitation nursing for adult burn patients in China has been carried out nationwide. Although the number of relevant papers is on the rise, the quality of these papers needs to be further improved. 
There is an urgent need for the guideline on rehabilitation nursing for adult burn patients in China so as to standardize the content and procedure of rehabilitation nursing.

  8. Critical thinking in nursing: Scoping review of the literature.

    PubMed

    Zuriguel Pérez, Esperanza; Lluch Canut, Maria Teresa; Falcó Pegueroles, Anna; Puig Llobet, Montserrat; Moreno Arroyo, Carmen; Roldán Merino, Juan

    2015-12-01

    This article seeks to analyse the current state of scientific knowledge concerning critical thinking in nursing. The methodology used consisted of a scoping review of the main scientific databases using an applied search strategy. A total of 1518 studies published from January 1999 to June 2013 were identified, of which 90 met the inclusion criteria. The main conclusion drawn is that critical thinking in nursing is attracting growing interest, both in the study of its concepts and dimensions and in the development of training strategies to further its development among students and professionals. Furthermore, the analysis reveals that critical thinking has been investigated principally in the university setting, independent of conceptual models, with a variety of instruments used for its measurement. We recommend (i) the investigation of critical thinking among working professionals, (ii) the designing of evaluative instruments linked to conceptual models and (iii) the identification of strategies to promote critical thinking in the context of providing nursing care. © 2014 Wiley Publishing Asia Pty Ltd.

  9. Advances in Data Management in Remote Sensing and Climate Modeling

    NASA Astrophysics Data System (ADS)

    Brown, P. G.

    2014-12-01

    Recent commercial interest in "Big Data" information systems has yielded little more than a sense of deja vu among scientists whose work has always required getting their arms around extremely large databases and writing programs to explore and analyze them. On the flip side, there are some commercial DBMS startups building "Big Data" platforms using techniques taken from earth science, astronomy, high energy physics and high performance computing. In this talk, we will introduce one such platform: Paradigm4's SciDB, the first DBMS designed from the ground up to combine the kinds of quality-of-service guarantees made by SQL DBMS platforms—high level data model, query languages, extensibility, transactions—with the kinds of functionality familiar to scientific users—arrays as structural building blocks, integrated linear algebra, and client language interfaces that minimize the learning curve. We will review how SciDB is used to manage and analyze earth science data by several teams of scientific users.

  10. THE HUMAN EXPOSURE DATABASE SYSTEM (HEDS)-PUTTING THE NHEXAS DATA ON-LINE

    EPA Science Inventory

    The EPA's National Exposure Research Laboratory (NERL) has developed an Internet accessible Human Exposure Database System (HEDS) to provide the results of NERL human exposure studies to both the EPA and the external scientific communities. The first data sets that will be ava...

  11. Application of the intelligent techniques in transplantation databases: a review of articles published in 2009 and 2010.

    PubMed

    Sousa, F S; Hummel, A D; Maciel, R F; Cohrs, F M; Falcão, A E J; Teixeira, F; Baptista, R; Mancini, F; da Costa, T M; Alves, D; Pisa, I T

    2011-05-01

    The replacement of defective organs with healthy ones is an old problem, but only in recent years has it been put into practice at scale. Improvements in the whole transplantation process have become increasingly important in clinical practice. In this context are clinical decision support systems (CDSSs), which reflect a significant amount of work applying mathematical and intelligent techniques. The aim of this article was to review the intelligent techniques used in recent years (2009 and 2010) to analyze organ transplant databases. To this end, we performed a search of the PubMed and Institute for Scientific Information (ISI) Web of Knowledge databases to find articles published in 2009 and 2010 about intelligent techniques applied to transplantation databases. From the 69 retrieved articles, we selected studies according to inclusion and exclusion criteria. The main techniques were: Artificial Neural Networks (ANN), Logistic Regression (LR), Decision Trees (DT), Markov Models (MM), and Bayesian Networks (BN). Most articles used ANN. Some publications described comparisons between techniques or the use of various techniques together. The use of intelligent techniques to extract knowledge from healthcare databases is increasingly common. Although authors preferred to use ANN, statistical techniques were equally effective for this enterprise. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. dbPAF: an integrative database of protein phosphorylation in animals and fungi.

    PubMed

    Ullah, Shahid; Lin, Shaofeng; Xu, Yang; Deng, Wankun; Ma, Lili; Zhang, Ying; Liu, Zexian; Xue, Yu

    2016-03-24

    Protein phosphorylation is one of the most important post-translational modifications (PTMs) and regulates a broad spectrum of biological processes. Recent progress in phosphoproteomic identification has generated a flood of phosphorylation sites, and the integration of these sites is an urgent need. In this work, we developed a curated database, dbPAF, containing known phosphorylation sites in H. sapiens, M. musculus, R. norvegicus, D. melanogaster, C. elegans, S. pombe and S. cerevisiae. From the scientific literature and public databases, we collected and integrated a total of 54,148 phosphoproteins with 483,001 phosphorylation sites. Multiple options are provided for accessing the data, and original references and other annotations are also present for each phosphoprotein. Based on the new data set, we computationally detected significantly over-represented sequence motifs around phosphorylation sites, predicted potential kinases responsible for the modification of the collected phospho-sites, and evolutionarily analyzed phosphorylation conservation states across different species. Besides being largely consistent with previous reports, our results also propose new features of phospho-regulation. Taken together, our database can be useful for further analyses of protein phosphorylation in human and other model organisms. The dbPAF database was implemented in PHP + MySQL and is freely available at http://dbpaf.biocuckoo.org.
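    Motif detection of the kind described here starts by tallying position-specific residue frequencies in fixed-width windows centred on each phosphosite, which can then be compared against background frequencies for an over-representation test. A minimal sketch of the counting step, using a toy sequence rather than dbPAF data:

```python
# Count position-specific residue frequencies in fixed-width windows
# centred on phosphorylated residues -- the raw material for motif
# over-representation tests. Toy sequence, window of +/-3 residues.
from collections import Counter

def site_windows(seq, sites, flank=3):
    """Yield windows of width 2*flank+1 centred on each phospho-site index."""
    for i in sites:
        if flank <= i < len(seq) - flank:   # skip sites too near the ends
            yield seq[i - flank:i + flank + 1]

def position_counts(windows):
    """Residue counts at each window position (0 = leftmost)."""
    counts = {}
    for w in windows:
        for pos, aa in enumerate(w):
            counts.setdefault(pos, Counter())[aa] += 1
    return counts

wins = list(site_windows("AKRRRSPLEEK", [5]))   # phospho-S at index 5
counts = position_counts(wins)                  # centre position is index 3
```

    With counts from many sites, each position's residue frequencies can be compared to the proteome background (e.g. with a binomial test) to flag motifs such as proline at +1.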

  13. A new improved database to support spanish phenological observations

    NASA Astrophysics Data System (ADS)

    Romero-Fresneda, Ramiro; Martínez-Núñez, Lourdes; Botey-Fullat, Roser; Gallego-Abaroa, Teresa; De Cara-García, Juan Antonio; Rodríguez-Ballesteros, César

    2017-04-01

    Over the last 30 years, phenology has regained scientific interest as the most reported biological indicator of anthropogenic climate change. AEMET (the Spanish National Meteorological Agency) has long records of phenological observations, dating back to the 1940s. However, a large variety of paper records still need to be digitized. On the other hand, it has been necessary to adapt our methods to the World Meteorological Organization (WMO) guidelines (BBCH code, data documentation, metadata…) and to standardize phenological stages and species in order to provide information to PEP725 (the Pan European Phenology Database). Consequently, AEMET is developing a long-term, multi-taxa phenological database to support research and scientific studies on climate, its variability, and its influence on natural ecosystems, agriculture, etc. This paper presents the steps being carried out to achieve this goal.

  14. PTAL Database and Website: Developing a Novel Information System for the Scientific Exploitation of the Planetary Terrestrial Analogues Library

    NASA Astrophysics Data System (ADS)

    Veneranda, M.; Negro, J. I.; Medina, J.; Rull, F.; Lantz, C.; Poulet, F.; Cousin, A.; Dypvik, H.; Hellevang, H.; Werner, S. C.

    2018-04-01

    The PTAL website will store multispectral analyses of samples collected from several terrestrial analogue sites and aims to become a cornerstone tool for the scientific community interested in deepening knowledge of geological processes on Mars.

  15. Global and Local Collaborators: A Study of Scientific Collaboration.

    ERIC Educational Resources Information Center

    Pao, Miranda Lee

    1992-01-01

    Describes an empirical study that was conducted to examine the relationship among scientific co-authorship (i.e., collaboration), research funding, and productivity. Bibliographic records from the MEDLINE database that used the subject heading for schistosomiasis are analyzed, global and local collaborators are discussed, and scientific…

  16. Knowledge Discovery and Data Mining in Iran's Climatic Researches

    NASA Astrophysics Data System (ADS)

    Karimi, Mostafa

    2013-04-01

    Advances in measurement technology and data collection mean that databases are getting larger, and large databases require powerful tools for data analysis. The iterative process of acquiring knowledge from processed data takes various forms in all scientific fields; however, when data volumes are large, traditional methods cannot cope with many of the problems. In recent years, the use of databases in various scientific fields, especially atmospheric databases in climatology, has expanded. In addition, the increasing amount of data generated by climate models poses a challenge for the extraction of hidden patterns and knowledge. The approach to this problem developed in recent years uses the process of knowledge discovery and data mining techniques, drawing on concepts from machine learning, artificial intelligence and expert systems. Data mining is an analytical process for mining massive volumes of data, and its ultimate goal is access to information and, finally, knowledge. Climatology is a science that uses a large variety and volume of data, and the goal of climate data mining is to derive information from diverse and massive atmospheric and non-atmospheric data. In fact, knowledge discovery performs these activities in a logical, predetermined and almost automatic process. The goal of this research is to study the use of knowledge discovery and data mining techniques in Iranian climate research. To achieve this goal, a content (descriptive) analysis was carried out, classified by method and issue. The results show that Iranian climatic research most often applies clustering, k-means and Ward's method, and that the most common topics are precipitation and atmospheric circulation patterns. 
Although several studies on geographic and climate issues have used statistical techniques such as clustering and pattern extraction, given the nature of statistics and data mining it cannot be said that Iranian climate studies genuinely employ data mining and knowledge discovery techniques. It is nevertheless necessary to use the KDD approach and DM techniques in climatic studies, particularly for the interpretation of climate modeling results.

  17. Evaluation of scientific periodicals and the Brazilian production of nursing articles.

    PubMed

    Erdmann, Alacoque Lorenzini; Marziale, Maria Helena Palucci; Pedreira, Mavilde da Luz Gonçalves; Lana, Francisco Carlos Félix; Pagliuca, Lorita Marlena Freitag; Padilha, Maria Itayra; Fernandes, Josicelia Dumêt

    2009-01-01

    This study aimed to identify nursing journals edited in Brazil indexed in the main bibliographic databases in the areas of health and nursing. It also aimed to classify the production of nursing graduate programs in 2007 according to the QUALIS/CAPES criteria used to classify scientific periodicals that disseminate the intellectual production of graduate programs in Brazil. This exploratory study used data from reports and documents available from CAPES to map scientific production and from searching the main international and national indexing databases. The findings from this research can help students, professors and coordinators of graduate programs in several ways: to understand the criteria of classifying periodicals; to be aware of the current production of graduate programs in the area of nursing; and to provide information that authors can use to select periodicals in which to publish their articles.

  18. How to Search, Write, Prepare and Publish the Scientific Papers in the Biomedical Journals

    PubMed Central

    Masic, Izet

    2011-01-01

    This article describes the methodology of preparing, writing and publishing scientific papers in biomedical journals. A concise overview is given of the concept and structure of the system of biomedical scientific and technical information and of the way biomedical literature is retrieved from worldwide biomedical databases. The scientific and professional medical journals currently published in Bosnia and Herzegovina are described. Also given is a comparative review of the number and structure of papers published in indexed journals in Bosnia and Herzegovina that are listed in the MEDLINE database. Three B&H journals indexed in the MEDLINE database were analyzed for 2010: Medical Archives (Medicinski Arhiv), Bosnian Journal of Basic Medical Sciences and Medical Gazette (Medicinski Glasnik). The largest number of original papers was published in the Medical Archives. There is a statistically significant difference in the number of papers published by local authors relative to international journals, in favor of the Medical Archives. Admittedly, the Bosnian Journal of Basic Medical Sciences does not categorize its articles, so we could not make comparisons. The Medical Archives and the Bosnian Journal of Basic Medical Sciences published the largest percentage of articles by authors from Sarajevo and Tuzla, the two oldest and largest university medical centers in Bosnia and Herzegovina. The author believes that qualitative changes are necessary in the reception and reviewing of papers for publication in biomedical journals published in Bosnia and Herzegovina, and that this should be the responsibility of a separate scientific authority/committee composed of experts in the field of medicine at the state level. PMID:23572850

  19. Dermatological manifestations in hemodialysis patients in Iran: A systematic review and meta-analysis.

    PubMed

    Asayesh, Hamid; Peykari, Niloofar; Pavaresh-Masoud, Mohammad; Esmaeili Abdar, Mohammad; Tajbakhsh, Ramin; Mousavi, Seyed Mojtaba; Djalalinia, Shirin; Noroozi, Mehdi; Qorbani, Mostafa; Mahdavi-Gorabi, Armita

    2018-03-25

    Dermatologic complications are common in patients with end-stage renal disease and are highly diverse. This meta-analysis reviews the prevalence of dermatological manifestations among hemodialysis patients in Iran. Using PubMed and NLM Gateway (for MEDLINE), Institute of Scientific Information (ISI), and SCOPUS as the main international electronic data sources, and Iran-Medex, Irandoc, and the Scientific Information Database as the main domestic databases with systematic search capability, we systematically searched surveys, papers, and reports on the prevalence of dermatological manifestations (until February 2016). Heterogeneity of the reported prevalences between studies was assessed using the Q test; the overall prevalence of dermatological manifestations was estimated using a random-effects meta-analysis model. We found 1229 records; from them, a total of eight studies comprising 917 hemodialysis patients were included. In all of the studies, skin discoloration, pruritus and xerosis had the highest prevalence. According to the random-effects meta-analysis model, the pooled prevalences of skin discoloration, pruritus, ecchymosis, xerosis, and half-and-half nail in hemodialysis patients were 48.03% (95% CI: 45.09-51.01), 52.85% (95% CI: 49.23-56.47), 19.88% (95% CI: 17.57-22.19), 51.14% (95% CI: 48.25-54.02), and 18.50% (95% CI: 16.0-21.0), respectively. This study shows that the prevalence of dermatological manifestations is high among hemodialysis patients in Iran, and that skin discoloration, pruritus, and xerosis are the most common. © 2018 Wiley Periodicals, Inc.
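    The random-effects pooling described above can be sketched numerically. The abstract does not name the exact estimator, so the DerSimonian-Laird method, a common choice for random-effects meta-analysis of proportions, is assumed here; the function name and the normal-approximation within-study variance are illustrative, not the authors' actual code.

```python
import math

def dl_pooled_prevalence(events, totals):
    """DerSimonian-Laird random-effects pooling of prevalence estimates.

    events[i]/totals[i] is the prevalence observed in study i; the
    within-study variance uses the normal approximation p*(1-p)/n.
    Returns (pooled_prevalence, ci_low, ci_high) with a 95% CI.
    """
    p = [e / n for e, n in zip(events, totals)]
    v = [max(pi * (1 - pi) / n, 1e-9) for pi, n in zip(p, totals)]
    w = [1 / vi for vi in v]                                   # fixed-effect weights
    p_fixed = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
    q = sum(wi * (pi - p_fixed) ** 2 for wi, pi in zip(w, p))  # Cochran's Q
    df = len(p) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max((q - df) / c, 0.0) if c > 0 else 0.0            # between-study variance
    w_re = [1 / (vi + tau2) for vi in v]                       # random-effects weights
    pooled = sum(wi * pi for wi, pi in zip(w_re, p)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se
```

    With zero between-study heterogeneity (tau² = 0) this reduces to inverse-variance fixed-effect pooling; heterogeneity widens the interval, as in the CIs reported above.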

  20. The establishment of the atmospheric emission inventories of the ESCOMPTE program

    NASA Astrophysics Data System (ADS)

    François, S.; Grondin, E.; Fayet, S.; Ponche, J.-L.

    2005-03-01

    Within the frame of the ESCOMPTE program, a spatial emission inventory and an emission database aimed at tropospheric photochemistry intercomparison modeling have been developed under the scientific supervision of the LPCA, with the help of the regional air quality network AIRMARAIX. This inventory has been established for all categories of sources (stationary, mobile and biogenic) over a domain of 19,600 km² centered on the cities of Marseilles and Aix-en-Provence in the southeastern part of France, with a spatial resolution of 1 km². A yearly inventory for 1999 has been established, and hourly emission inventories have been produced for 23 days of June and July 2000 and 2001, corresponding to the intensive measurement periods. The 104 chemical species in the inventory have been selected to be relevant to photochemistry modeling according to the available data. The full species list numbers 216, which will allow other future applications of this database. This database is presently the most detailed and complete regional emission database in France. In addition, the database structure and the emission calculation modules have been designed to ensure better sustainability and upgradeability, being provided with appropriate maintenance software. The general organization and method are summarized, and the results obtained for both yearly and hourly emissions are detailed and discussed. Comparisons have been performed with existing results for this region; they confirm the relevance and consistency of the ESCOMPTE emission inventory.

  1. Bibliographical database of radiation biological dosimetry and risk assessment: Part 1, through June 1988

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straume, T.; Ricker, Y.; Thut, M.

    1988-08-29

    This database was constructed to support research in radiation biological dosimetry and risk assessment. Relevant publications were identified through detailed searches of national and international electronic databases and through our personal knowledge of the subject. Publications were numbered and keyworded, and referenced in an electronic data-retrieval system that permits quick access through computerized searches on publication number, authors, key words, title, year, and journal name. Photocopies of all publications contained in the database are maintained in a file that is numerically arranged by citation number. This report of the database is provided as a useful reference and overview. It should be emphasized that the database will grow as new citations are added to it. With that in mind, we arranged this report in order of ascending citation number so that follow-up reports will simply extend this document. The database cites 1212 publications drawn from 119 different scientific journals; 27 of these journals are cited at least 5 times. It also contains references to 42 books and published symposia, and 129 reports. Information relevant to radiation biological dosimetry and risk assessment is widely distributed across the scientific literature, although a few journals clearly dominate. The four journals publishing the largest number of relevant papers are Health Physics, Mutation Research, Radiation Research, and International Journal of Radiation Biology. Publications in Health Physics make up almost 10% of the current database.

  2. Review of telehealth stuttering management.

    PubMed

    Lowe, Robyn; O'Brian, Sue; Onslow, Mark

    2013-01-01

    Telehealth is the use of communication technology to provide health care services by means other than typical in-clinic attendance models. Telehealth is increasingly used for the management of speech, language and communication disorders. The aim of this article is to review telehealth applications to stuttering management. We conducted a search of peer-reviewed literature for the past 20 years using the Institute for Scientific Information Web of Science database, PubMed: The Bibliographic Database and a search for articles by hand. Outcomes for telehealth stuttering treatment were generally positive, but there may be a compromise of treatment efficiency with telehealth treatment of young children. Our search found no studies dealing with stuttering assessment procedures using telehealth models. No economic analyses of this delivery model have been reported. This review highlights the need for continued research about telehealth for stuttering management. Evidence from research is needed to inform the efficacy of assessment procedures using telehealth methods as well as guide the development of improved treatment procedures. Clinical and technical guidelines are urgently needed to ensure that the evolving and continued use of telehealth to manage stuttering does not compromise the standards of care afforded with standard in-clinic models.

  3. Data, Data Everywhere but Not a Byte to Read: Managing Monitoring Information.

    ERIC Educational Resources Information Center

    Stafford, Susan G.

    1993-01-01

    Describes the Forest Science Data Bank that contains 2,400 data sets from over 350 existing ecological studies. Database features described include involvement of the scientific community; database documentation; data quality assurance; security; data access and retrieval; and data import/export flexibility. Appendices present the Quantitative…

  4. Utilizing the Web in the Classroom: Linking Student Scientists with Professional Data.

    ERIC Educational Resources Information Center

    Seitz, Kristine; Leake, Devin

    1999-01-01

    Describes how information gathered from a computer database can be used as a springboard to scientific discovery. Specifies directions for studying the homeobox gene PAX-6 using GenBank, a database maintained by the National Center for Biotechnology Information (NCBI). Contains 16 references. (WRM)

  5. The BioMart community portal: an innovative alternative to large, centralized data repositories

    USDA-ARS?s Scientific Manuscript database

    The BioMart Community Portal (www.biomart.org) is a community-driven effort to provide a unified interface to biomedical databases that are distributed worldwide. The portal provides access to numerous database projects supported by 30 scientific organizations. It includes over 800 different biologi...

  6. Specific character of citations in historiography (using the example of Polish history).

    PubMed

    Kolasa, Władysław Marek

    2012-03-01

    The first part of the paper deals with the assessment of international databases in relation to the number of historical publications (representation and relevance in comparison with the model database). The second part focuses on answering the question whether historiography is governed by bibliometric rules similar to those of the exact sciences or whether it has its own specific character. The empirical basis for this part of the research was a database prepared ad hoc: the Citation Index of the History of Polish Media (CIHPM). Among numerous typically historical features, the main focus was put on: linguistic localism, the specific character of publishing forms, differences in the citing of various sources (contributions and syntheses) and the specific character of authorship (the Lorenz curve and Lotka's law). Somewhat more attention was devoted to the half-life indicator and its role in a diachronic study of a scientific field; also, a new indicator (HL14), depicting the distribution of citations younger than the half-life, was introduced. Additionally, comparisons and correlations of selected parameters for the body of historical science (citations, HL14, the Hirsch index, number of publications, volume and others) were conducted.
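    The half-life indicator mentioned above can be sketched in a few lines. The abstract describes HL14 only loosely (a share of citations younger than the half-life), so the reading below is an assumption and the function names are illustrative, not the author's actual definition.

```python
def citation_half_life(ages):
    """Median age (in years) of the cited references: the cited half-life."""
    s = sorted(ages)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

def hl14_share(ages):
    """Share of citations strictly younger than the half-life.

    Assumed reading of the HL14 indicator described in the abstract;
    the exact formula is not given there.
    """
    hl = citation_half_life(ages)
    return sum(1 for a in ages if a < hl) / len(ages)
```

    For a smooth age distribution this share hovers near one half; in a field like historiography, where citation ages cluster heavily on very old sources, ties pull it well below that, which is what makes it informative diachronically.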

  7. Making proteomics data accessible and reusable: Current state of proteomics databases and repositories

    PubMed Central

    Perez-Riverol, Yasset; Alpi, Emanuele; Wang, Rui; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2015-01-01

    Compared to other data-intensive disciplines such as genomics, public deposition and storage of MS-based proteomics data are still less developed due to, among other reasons, the inherent complexity of the data and the variety of data types and experimental workflows. In order to address this need, several public repositories for MS proteomics experiments have been developed, each with different purposes in mind. The most established resources are the Global Proteome Machine Database (GPMDB), PeptideAtlas, and the PRIDE database. Additionally, there are other useful (in many cases recently developed) resources such as ProteomicsDB, Mass Spectrometry Interactive Virtual Environment (MassIVE), Chorus, MaxQB, PeptideAtlas SRM Experiment Library (PASSEL), Model Organism Protein Expression Database (MOPED), and the Human Proteinpedia. In addition, the ProteomeXchange consortium has recently been developed to enable better integration of public repositories and the coordinated sharing of proteomics information, maximizing its benefit to the scientific community. Here, we will review each of the major proteomics resources independently, along with some tools that enable the integration, mining and reuse of the data. We will also discuss some of the major challenges and current pitfalls in the integration and sharing of the data. PMID:25158685

  8. EKPD: a hierarchical database of eukaryotic protein kinases and protein phosphatases.

    PubMed

    Wang, Yongbo; Liu, Zexian; Cheng, Han; Gao, Tianshun; Pan, Zhicheng; Yang, Qing; Guo, Anyuan; Xue, Yu

    2014-01-01

    We present here EKPD (http://ekpd.biocuckoo.org), a hierarchical database of eukaryotic protein kinases (PKs) and protein phosphatases (PPs), the key molecules responsible for the reversible phosphorylation of proteins that are involved in almost all aspects of biological processes. As extensive experimental and computational efforts have been carried out to identify PKs and PPs, an integrative resource with detailed classification and annotation information would be of great value for both experimentalists and computational biologists. In this work, we first collected 1855 PKs and 347 PPs from the scientific literature and various public databases. Based on previously established rationales, we classified all of the known PKs and PPs into a hierarchical structure with three levels, i.e. group, family and individual PK/PP. There are 10 groups with 149 families for the PKs and 10 groups with 33 families for the PPs. We constructed 139 and 27 Hidden Markov Model profiles for PK and PP families, respectively. Then we systematically characterized ∼50,000 PKs and >10,000 PPs in eukaryotes. In addition, >500 PKs and >400 PPs were computationally identified by ortholog search. Finally, the online service of the EKPD database was implemented in PHP + MySQL + JavaScript.
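    The family profiles above are Hidden Markov Model profiles. As a much simpler stand-in, the sketch below scores sequences against a position-specific log-odds matrix built from an ungapped family alignment; all names, the pseudocount, and the uniform background are illustrative assumptions, not EKPD's actual pipeline (which in practice would use full profile HMMs).

```python
import math
from collections import Counter

AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def build_pssm(aligned_seqs, alphabet=AA, pseudocount=1.0):
    """Position-specific scoring matrix from an ungapped family alignment.

    A toy stand-in for a profile HMM: log-odds of each residue at each
    column versus a uniform background, with a pseudocount for unseen
    residues.
    """
    bg = 1.0 / len(alphabet)
    total = len(aligned_seqs) + pseudocount * len(alphabet)
    pssm = []
    for col in range(len(aligned_seqs[0])):
        counts = Counter(seq[col] for seq in aligned_seqs)
        pssm.append({a: math.log(((counts[a] + pseudocount) / total) / bg)
                     for a in alphabet})
    return pssm

def score(pssm, seq):
    """Ungapped log-odds score of seq against the profile."""
    return sum(col[res] for col, res in zip(pssm, seq))
```

    Sequences resembling the family alignment score above background; classifying a candidate PK/PP against every family profile and taking the best score mirrors, very roughly, the hierarchical assignment described above.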

  9. Improved Dust Forecast Products for Southwest Asia Forecasters through Dust Source Database Advancements

    NASA Astrophysics Data System (ADS)

    Brooks, G. R.

    2011-12-01

    Dust storm forecasting is a critical part of military theater operations in Afghanistan and Iraq, as well as in other strategic areas of the globe. The Air Force Weather Agency (AFWA) has been using the Dust Transport Application (DTA) as a forecasting tool since 2001. Initially developed by The Johns Hopkins University Applied Physics Laboratory (JHUAPL), its output products include dust concentration and the reduction of visibility due to dust. The performance of the products depends on several factors, including the underlying dust source database, the treatment of soil moisture, the parameterization of dust processes, and the validity of the input atmospheric model data. Over many years of analysis, seasonal dust forecast biases of the DTA have been observed and documented. As these products are unique and indispensable for U.S. and NATO forces, amendments were required to provide the best forecasts possible. One of the quickest ways to scientifically address the dust concentration biases noted over time was to analyze the weaknesses in, and adjust, the dust source database. The strengths and weaknesses of the dust source database, the satellite analysis and adjustment process, and the tests that confirmed the resulting improvements in the final dust concentration and visibility products will be shown.

  10. Issues for SME Credit Information DB Institutionsand Expectations for the Econophysics ---Scientific Economic Policies for Avoiding Moral Hazard---

    NASA Astrophysics Data System (ADS)

    Tanabe, T.

    The CRD database has been accumulating financial data on SMEs over the ten years since its founding: approximately 12 million records for around 2 million SMEs and approximately 3 million records for around 900,000 sole proprietors, together with default data on these companies and sole proprietors. The CRD database's weakness is anonymity. Going forward, therefore, the CRD Association faces the questions of how to enhance the attractiveness of its database and whether new knowledge should be sought through econophysics or other research approaches. We have already seen several examples of knowledge gained through econophysical analyses using the CRD database. I hope we will eventually see greater application of the SME credit information database and econophysical analysis to the development of Japan's SME policies, which are scientific economic policies for avoiding moral hazard, and that such work will help elucidate risk scenarios for the global financial, natural disaster, and other shocks expected to occur with greater frequency. The role played by econophysics will therefore become increasingly important, and we have high expectations for the field.

  11. [Bibliometric analysis of Revista Médica del IMSS in the Scopus database for the period between 2005-2013].

    PubMed

    García-Gómez, Francisco; Ramírez-Méndez, Fernando

    2015-01-01

    To analyze the articles of Revista Médica del Instituto Mexicano del Seguro Social (Rev Med Inst Mex Seguro Soc) in the Scopus database and describe the principal quantitative bibliometric indicators of its scientific publications for the period 2005-2013. The Scopus database was searched, limited to the period 2005-2013. The analysis covers mainly articles published under the title Revista Médica del Instituto Mexicano del Seguro Social and its variants. Scopus, Excel and Access were used for the analysis. 864 articles published during the period 2005-2013 were found in the Scopus database. We identified the authors with the highest number of contributions, including the articles with the highest citation rates and the forms of the documents cited. We also divided articles by subject, document type and other bibliometric indicators that characterize the publications. The use of Scopus makes it possible to analyze, with an external tool, the visibility of the scientific production published in the Revista Médica del IMSS. The use of this database also contributes to identifying the state of science in Mexico, as well as in other developing countries.

  12. A comprehensive view of the web-resources related to sericulture

    PubMed Central

    Singh, Deepika; Chetia, Hasnahana; Kabiraj, Debajyoti; Sharma, Swagata; Kumar, Anil; Sharma, Pragya; Deka, Manab; Bora, Utpal

    2016-01-01

    Recent progress in the field of sequencing and analysis has led to a tremendous spike in data and the development of data science tools. One of the outcomes of this scientific progress is the development of numerous databases, which are gaining popularity in all disciplines of biology, including sericulture. As economically important organisms, silkworms are studied extensively for their numerous applications in the fields of textiles, biomaterials, biomimetics, etc. Similarly, host plants, pests, pathogens, etc. are also being probed to understand seri-resources more efficiently. These studies have led to the generation of numerous sericulture-related databases which are extremely helpful for the scientific community. In this article, we have reviewed all the available online resources on the silkworm and its related organisms, including databases as well as informative websites. We have studied their basic features and their impact on research through citation count analysis, finally discussing the role of emerging sequencing and analysis technologies in the field of seri-data science. As an outcome of this review, a web portal named SeriPort has been created, which will act as an index for the various sericulture-related databases and web resources available in cyberspace. Database URL: http://www.seriport.in/ PMID:27307138

  13. [The theme of disaster in health care: profile of technical and scientific production in the specialized database on disasters of the Virtual Health Library - VHL].

    PubMed

    Rocha, Vania; Ximenes, Elisa Francioli; Carvalho, Mauren Lopes de; Alpino, Tais de Moura Ariza; Freitas, Carlos Machado de

    2014-09-01

    In the specialized databases of the Virtual Health Library (VHL), the DISASTER database highlights the importance of the theme for the health sector. The scope of this article is to identify the profile of technical and scientific publications in this specialized database. Based on systematic searches and the analysis of the results, it is possible to determine: the type of publication; the main topics addressed; the most common types of disasters mentioned in the published materials; the countries and regions covered; the historic periods with the most publications; and the current trend of publications. When examining the specialized data in detail, it soon becomes clear that the number of major topics is very high, making a specific search in this database a challenging exercise. On the other hand, it is encouraging that the disaster topic is discussed and assessed in a broad and diversified manner, associated with different aspects of the natural and social sciences. The disaster issue requires interdisciplinary knowledge production to reduce the impacts of disasters and to support risk management. Since the health sector is itself an interdisciplinary area, it can contribute to this knowledge production.

  14. Academic impact of a public electronic health database: bibliometric analysis of studies using the general practice research database.

    PubMed

    Chen, Yu-Chun; Wu, Jau-Ching; Haschler, Ingo; Majeed, Azeem; Chen, Tzeng-Ji; Wetter, Thomas

    2011-01-01

    Studies that use electronic health databases as research material are becoming popular, but the influence of a single electronic health database has not been well investigated. The United Kingdom's General Practice Research Database (GPRD) is one of the few electronic health databases publicly available to academic researchers. This study analyzed studies that used the GPRD to demonstrate the scientific production and academic impact of a single public health database. A total of 749 studies published between 1995 and 2009 with 'General Practice Research Database' as their topic, defined as GPRD studies, were extracted from Web of Science. By the end of 2009, the GPRD had attracted 1251 authors from 22 countries and been used extensively in 749 studies published in 193 journals across 58 study fields. Each GPRD study was cited on average 2.7 times by successive studies. Moreover, the total number of GPRD studies increased rapidly and is expected to reach 1500 by 2015, twice the number accumulated by the end of 2009. Since 17 of the most prolific authors (1.4% of all authors) contributed nearly half (47.9%) of GPRD studies, success in conducting GPRD studies may accumulate. The GPRD was used mainly in, but not limited to, the three study fields of "Pharmacology and Pharmacy", "General and Internal Medicine", and "Public, Environmental and Occupational Health". The UK and the United States were the two most active regions of GPRD studies. One-third of GPRD studies were internationally co-authored. A public electronic health database such as the GPRD can promote scientific production in many ways. Data owners of electronic health databases at a national level should consider how to reduce access barriers and make data more available for research.

  15. DoSSiER: Database of scientific simulation and experimental results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenzel, Hans; Yarba, Julia; Genser, Krzystof

    The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER can be easily accessed via a web application. In addition, a web service allows for programmatic access to the repository to extract records in json or xml exchange formats. In this paper, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.
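    The web service mentioned above returns records in json or xml exchange formats. The abstract does not document the schema, so the sketch below parses a purely hypothetical DoSSiER-style JSON payload to illustrate separating simulation results from the experimental data they are validated against; every field name here is an assumption.

```python
import json

# A hypothetical DoSSiER-style JSON record. The actual exchange schema
# of the web service is not specified in the abstract.
payload = """
{
  "test": "Geant4 validation",
  "observable": "energy deposition",
  "records": [
    {"beam": "proton", "energy_gev": 1.0, "value": 0.42, "source": "simulation"},
    {"beam": "proton", "energy_gev": 1.0, "value": 0.40, "source": "experiment"}
  ]
}
"""

doc = json.loads(payload)
# Split the records so a regression test can compare simulation
# against the experimental reference point by point.
sim = [r for r in doc["records"] if r["source"] == "simulation"]
exp = [r for r in doc["records"] if r["source"] == "experiment"]
```

    A validation client would fetch such a document over HTTP and compare the paired values; only the parsing step is shown here, since the endpoint URL and record layout are not given in the source.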

  16. DB90: A Fortran Callable Relational Database Routine for Scientific and Engineering Computer Programs

    NASA Technical Reports Server (NTRS)

    Wrenn, Gregory A.

    2005-01-01

    This report describes a database routine called DB90 which is intended for use with scientific and engineering computer programs. The software is written in the Fortran 90/95 programming language standard with file input and output routines written in the C programming language. These routines should be completely portable to any computing platform and operating system that has Fortran 90/95 and C compilers. DB90 allows a program to supply relation names and up to 5 integer key values to uniquely identify each record of each relation. This permits the user to select records or retrieve data in any desired order.
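    The access pattern described above (a relation name plus up to 5 integer keys uniquely identifying each record) can be mimicked in a few lines. DB90 itself is Fortran 90/95 with C file I/O; the Python class below is only an illustrative analogue of the keying scheme, not a port, and its names are invented for the sketch.

```python
class KeyedRelationStore:
    """In-memory analogue of the DB90 access pattern: each record is
    addressed by a relation name plus up to five integer keys.
    Illustrative only; not the actual DB90 implementation."""

    MAX_KEYS = 5

    def __init__(self):
        self._store = {}

    def put(self, relation, keys, record):
        if len(keys) > self.MAX_KEYS:
            raise ValueError("at most 5 integer keys per record")
        # (relation, key tuple) uniquely identifies the record
        self._store[(relation, tuple(keys))] = record

    def get(self, relation, keys):
        return self._store[(relation, tuple(keys))]
```

    Because the key tuple orders naturally, records can also be retrieved sorted by key, matching the report's claim that data can be retrieved "in any desired order".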

  17. DoSSiER: Database of scientific simulation and experimental results

    DOE PAGES

    Wenzel, Hans; Yarba, Julia; Genser, Krzystof; ...

    2016-08-01

    The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER can be easily accessed via a web application. In addition, a web service allows for programmatic access to the repository to extract records in json or xml exchange formats. In this paper, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.

  18. Vascular knowledge in medieval times was the turning point for the humanistic trend.

    PubMed

    Ducasse, E; Speziale, F; Baste, J C; Midy, D

    2006-06-01

    Knowledge of the history of our surgical specialty may broaden our viewpoint for everyday practice. We illustrate the scientific progress made in medieval times relevant to the vascular system and blood circulation, progress made despite prevailing religious and philosophical dogma. We located all articles concerning vascular knowledge and historical reviews in databases such as MEDLINE, EMBASE and the database of abstracts of reviews (DARE). We also explored the register database of the French National Library, the French Medical Inter-University library (BIUM), the Italian National Library and the French and Italian libraries in the Vatican. All data were collected and analysed in chronological order. Medieval vascular knowledge was inherited from the Greeks via Byzantine and Arabic writings, with the first controversies against the recognized vascular schema emanating from an Arab physician in the 13th century. Dissection was forbidden and clerical rules instilled a fear of blood. The major contributions to scientific progress in the vascular field in medieval times came from Ibn al-Nafis and Harvey. Vascular specialists today may feel proud to recall that once religious dogma declined in early medieval times, vascular anatomic and physiological discoveries led the way to scientific progress.

  19. Déjà vu: a database of highly similar citations in the scientific literature

    PubMed Central

    Errami, Mounir; Sun, Zhaohui; Long, Tara C.; George, Angela C.; Garner, Harold R.

    2009-01-01

    In the scientific research community, plagiarism and covert multiple publications of the same data are considered unacceptable because they undermine the public confidence in the scientific integrity. Yet, little has been done to help authors and editors to identify highly similar citations, which sometimes may represent cases of unethical duplication. For this reason, we have made available Déjà vu, a publicly available database of highly similar Medline citations identified by the text similarity search engine eTBLAST. Following manual verification, highly similar citation pairs are classified into various categories ranging from duplicates with different authors to sanctioned duplicates. Déjà vu records also contain user-provided commentary and supporting information to substantiate each document's categorization. Déjà vu and eTBLAST are available to authors, editors, reviewers, ethicists and sociologists to study, intercept, annotate and deter questionable publication practices. These tools are part of a sustained effort to enhance the quality of Medline as ‘the’ biomedical corpus. The Déjà vu database is freely accessible at http://spore.swmed.edu/dejavu. The tool eTBLAST is also freely available at http://etblast.org. PMID:18757888
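    The similarity flagging that eTBLAST performs can be illustrated with a deliberately crude stand-in: word-set Jaccard similarity between abstracts. eTBLAST's actual engine is far more sophisticated; the functions and the 0.6 threshold below are assumptions made only for the sketch.

```python
def jaccard_similarity(a, b):
    """Word-set Jaccard similarity between two texts (0.0 to 1.0).
    A toy stand-in for eTBLAST's text similarity engine."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def flag_duplicates(citations, threshold=0.6):
    """Return index pairs of citations whose similarity meets the
    threshold (threshold value is illustrative)."""
    pairs = []
    for i in range(len(citations)):
        for j in range(i + 1, len(citations)):
            if jaccard_similarity(citations[i], citations[j]) >= threshold:
                pairs.append((i, j))
    return pairs
```

    As in Déjà vu's workflow, flagged pairs would then be manually verified and classified, since high textual similarity alone does not establish unethical duplication.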

  20. Deja vu: a database of highly similar citations in the scientific literature.

    PubMed

    Errami, Mounir; Sun, Zhaohui; Long, Tara C; George, Angela C; Garner, Harold R

    2009-01-01

    In the scientific research community, plagiarism and covert multiple publications of the same data are considered unacceptable because they undermine the public confidence in the scientific integrity. Yet, little has been done to help authors and editors to identify highly similar citations, which sometimes may represent cases of unethical duplication. For this reason, we have made available Déjà vu, a publicly available database of highly similar Medline citations identified by the text similarity search engine eTBLAST. Following manual verification, highly similar citation pairs are classified into various categories ranging from duplicates with different authors to sanctioned duplicates. Déjà vu records also contain user-provided commentary and supporting information to substantiate each document's categorization. Déjà vu and eTBLAST are available to authors, editors, reviewers, ethicists and sociologists to study, intercept, annotate and deter questionable publication practices. These tools are part of a sustained effort to enhance the quality of Medline as 'the' biomedical corpus. The Déjà vu database is freely accessible at http://spore.swmed.edu/dejavu. The tool eTBLAST is also freely available at http://etblast.org.

  1. Crème de la crème in forensic science and legal medicine. The most highly cited articles, authors and journals 1981-2003.

    PubMed

    Jones, Alan Wayne

    2005-03-01

    The importance and prestige of a scientific journal is increasingly judged by the number of times the articles it publishes are cited or referenced in articles published in other scientific journals. Citation counting is also used to assess the merits of individual scientists when academic promotion and tenure are decided. With the help of the Thomson Institute for Scientific Information (Thomson ISI), a citation database was created for six leading forensic science and legal medicine journals. This database was used to determine the most highly cited articles, authors and journals and the most prolific authors of articles in the forensic sciences. The forensic science and legal medicine journals evaluated were: Journal of Forensic Sciences (JFS), Forensic Science International (FSI), International Journal of Legal Medicine (IJLM), Medicine, Science and the Law (MSL), American Journal of Forensic Medicine and Pathology (AJFMP), and Science and Justice (S&J). The resulting forensics database contained 14,210 papers published between 1981 and 2003. This in-depth bibliometric analysis has identified the crème de la crème in forensic science and legal medicine in a quantitative and objective way, by citation analysis with a focus on articles, authors and journals.

  2. Canopies to Continents: What spatial scales are needed to represent landcover distributions in earth system models?

    NASA Astrophysics Data System (ADS)

    Guenther, A. B.; Duhl, T.

    2011-12-01

    Increasing computational resources have enabled a steady improvement in the spatial resolution used for earth system models. Land surface models and landcover distributions have kept ahead by providing higher spatial resolution than is typically used in these models. Satellite observations have played a major role in providing high-resolution landcover distributions over large regions or the entire earth surface, but ground observations are needed to calibrate these data and provide accurate inputs for models. As our ability to resolve individual landscape components improves, it is important to consider what scale is sufficient for providing inputs to earth system models. The required spatial scale depends on the processes being represented and the scientific questions being addressed. This presentation will describe the development of a contiguous U.S. landcover database using high-resolution imagery (1 to 1000 meters) and surface observations of species composition and other landcover characteristics. The database includes plant functional types and species composition and is suitable for driving land surface models (CLM and MEGAN) that predict land surface exchange of carbon, water, energy and biogenic reactive gases (e.g., isoprene, sesquiterpenes, and NO). We investigate the sensitivity of model results to landcover distributions with spatial scales ranging over six orders of magnitude (1 meter to 1,000,000 meters). The implications for predictions of regional climate and air quality will be discussed, along with recommendations for regional and global earth system modeling.

  3. Towards Monitoring-as-a-service for Scientific Computing Cloud applications using the ElasticSearch ecosystem

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Guarise, A.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

    The INFN computing centre in Torino hosts a private Cloud, managed with the OpenNebula cloud controller. The infrastructure offers Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) services to different scientific computing applications. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at the LHC, an interactive analysis facility for the same experiment and a grid Tier-2 site for the BESIII collaboration, plus an increasing number of other small tenants. The dynamic allocation of resources to tenants is partially automated; this feature requires detailed monitoring and accounting of resource usage. We set up a monitoring framework to inspect the site activities both in terms of IaaS and of the applications running on the hosted virtual instances. For this purpose we used the ElasticSearch, Logstash and Kibana (ELK) stack. The infrastructure relies on a MySQL database back-end for data preservation and to retain the flexibility to adopt a different monitoring solution if needed. The heterogeneous accounting information is transferred from the database to the ElasticSearch engine via a custom Logstash plugin. Each use case is indexed separately in ElasticSearch, and we set up Kibana dashboards with pre-defined queries in order to monitor the relevant information in each case. For the IaaS metering, we developed sensors for the OpenNebula API. The IaaS-level information gathered through the API is sent to the MySQL database through a purpose-built RESTful web service. Moreover, we have developed a billing system for our private Cloud, which relies on the RabbitMQ message queue for asynchronous communication with the database and on the ELK stack for its graphical interface. The Italian Grid accounting framework is also migrating to a similar set-up. Concerning the application level, we used the Root plugin TProofMonSenderSQL to collect accounting data from the interactive analysis facility. 
The BESIII virtual instances used to be monitored with Zabbix; as a proof of concept, we also retrieve the information contained in the Zabbix database. In this way we have achieved a uniform monitoring interface for both the IaaS and the scientific applications, mostly leveraging off-the-shelf tools. At present, we are working to define a model for monitoring-as-a-service, based on the tools described above, which Cloud tenants can easily configure to suit their specific needs.
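
    The transfer step in this abstract (heterogeneous accounting rows moving from MySQL into ElasticSearch via a custom Logstash plugin) can be illustrated with a toy sketch. The code below only builds the NDJSON payload that ElasticSearch's bulk API expects; the index name and accounting fields are hypothetical, and the real plugin's internals are not public in this abstract:

```python
import json

def rows_to_bulk_ndjson(rows, index):
    """Convert accounting rows (dicts) into an ElasticSearch bulk-API body.

    Each document is preceded by an action line naming the target index,
    mirroring what a Logstash elasticsearch output emits over HTTP.
    """
    lines = []
    for row in rows:
        lines.append(json.dumps({"index": {"_index": index, "_id": row["id"]}}))
        lines.append(json.dumps(row))
    return "\n".join(lines) + "\n"  # bulk bodies must end with a newline

# Hypothetical IaaS accounting records, one per VM per hour.
rows = [
    {"id": 1, "tenant": "alice-tier2", "vcpus": 8, "hours": 1.0},
    {"id": 2, "tenant": "besiii-tier2", "vcpus": 4, "hours": 1.0},
]
payload = rows_to_bulk_ndjson(rows, "iaas-accounting")
print(payload.count("\n"))  # 4 lines: one action + one document per row
```

    In a real pipeline this payload would be POSTed to the cluster's `_bulk` endpoint; keeping MySQL as the source of truth, as the authors do, means the index can be rebuilt at any time if the monitoring stack changes.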

  4. NeMedPlant: a database of therapeutic applications and chemical constituents of medicinal plants from north-east region of India

    PubMed Central

    Meetei, Potshangbam Angamba; Singh, Pankaj; Nongdam, Potshangbam; Prabhu, N Prakash; Rathore, RS; Vindal, Vaibhav

    2012-01-01

    The North-East region of India is one of the twelve mega-biodiversity regions, containing many rare and endangered species. A curated database of medicinal and aromatic plants from the region, called NeMedPlant, has been developed. The database contains traditional, scientific and medicinal information about plants and their active constituents, obtained from scholarly literature and local sources. The database is cross-linked with major biochemical databases and analytical tools. The integrated database provides a resource for investigations into hitherto unexplored medicinal plants and serves to speed up the discovery of natural product-based drugs. Availability: the database is freely available at http://bif.uohyd.ac.in/nemedplant/ or http://202.41.85.11/nemedplant/ PMID:22419844

  5. Inferring rupture characteristics using new databases for 3D slab geometry and earthquake rupture models

    NASA Astrophysics Data System (ADS)

    Hayes, G. P.; Plescia, S. M.; Moore, G.

    2017-12-01

    The U.S. Geological Survey National Earthquake Information Center has recently published a database of finite fault models for globally distributed M7.5+ earthquakes since 1990. Concurrently, we have also compiled a database of three-dimensional slab geometry models for all global subduction zones, to update and replace Slab1.0. Here, we use these two new and valuable resources to infer characteristics of earthquake rupture and propagation in subduction zones, where the vast majority of large-to-great-sized earthquakes occur. For example, we can test questions that are fairly prevalent in seismological literature. Do large ruptures preferentially occur where subduction zones are flat (e.g., Bletery et al., 2016)? Can `flatness' be mapped to understand and quantify earthquake potential? Do the ends of ruptures correlate with significant changes in slab geometry, and/or bathymetric features entering the subduction zone? Do local subduction zone geometry changes spatially correlate with areas of low slip in rupture models (e.g., Moreno et al., 2012)? Is there a correlation between average seismogenic zone dip, and/or seismogenic zone width, and earthquake size? (e.g., Hayes et al., 2012; Heuret et al., 2011). These issues are fundamental to the understanding of earthquake rupture dynamics and subduction zone seismogenesis, and yet many are poorly understood or are still debated in scientific literature. We attempt to address these questions and similar issues in this presentation, and show how these models can be used to improve our understanding of earthquake hazard in subduction zones.
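
    One of the questions posed above, whether "flatness" can be mapped and quantified, admits many candidate metrics. The sketch below uses one plausible choice, the inverse of local dip-angle spread over a fault patch; this is an illustration only, not the metric used by Bletery et al. (2016) or in the presentation:

```python
from statistics import pstdev

def flatness(dips_deg):
    """One plausible 'flatness' metric for a megathrust patch: the inverse
    of dip-angle variability. Low spread in local dips = flatter geometry.
    Purely illustrative; not the metric from the cited studies.
    """
    return 1.0 / (1.0 + pstdev(dips_deg))

flat_patch = [10, 10.5, 10.2, 9.8]  # nearly planar slab section
kinked_patch = [8, 14, 21, 27]      # strong downdip curvature
print(flatness(flat_patch) > flatness(kinked_patch))  # → True
```

    Any such scalar could then be compared patch-by-patch against rupture extents from the finite fault database to test the correlation the authors describe.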

  6. Experiences with the Application of Services Oriented Approaches to the Federation of Heterogeneous Geologic Data Resources

    NASA Astrophysics Data System (ADS)

    Cervato, C.; Fils, D.; Bohling, G.; Diver, P.; Greer, D.; Reed, J.; Tang, X.

    2006-12-01

    The federation of databases is not a new endeavor; great strides have been made, for example, in the health and astrophysics communities. Reviews of those successes indicate that they were able to leverage key cross-community core concepts. In its simplest implementation, a federation of databases with identical base schemas that can be extended to address individual efforts is relatively easy to accomplish. Efforts of groups like the Open Geospatial Consortium have shown methods to geospatially relate data between different sources. We present here a summary of CHRONOS's (http://www.chronos.org) experience with highly heterogeneous data. Our experience with the federation of very diverse databases shows that the wide variety of encoding options for items like locality, time scale, taxon ID, and other key parameters makes it difficult to effectively join data across them. However, the response to this is not to develop one large, monolithic database, which would suffer growing pains due to social, national, and operational issues, but rather to systematically develop an architecture that enables cross-resource (database, repository, tool, interface) interaction. CHRONOS has accomplished the major hurdle of federating small IT database efforts with service-oriented and XML-based approaches. The application of easy-to-use procedures that allow groups of all sizes to implement and experiment with searches across various databases and to use externally created tools is vital. We share with the geoinformatics community the difficulties with application frameworks, user authentication, standards compliance, and data storage encountered in setting up web sites and portals for various science initiatives (e.g., ANDRILL, EARTHTIME). 
The ability to incorporate CHRONOS data, services, and tools into the existing framework of a group is crucial to the development of a model that supports and extends the vitality of the small- to medium-sized research effort that is essential for a vibrant scientific community. This presentation will directly address issues of portal development related to JSR-168 and other portal APIs, as well as issues related to both federated and local directory-based authentication. The application of service-oriented architecture in connection with ReST-based approaches is vital to facilitating service use by both experienced and less experienced information technology groups. Applying these services with XML-based schemas allows for connection to third-party tools such as GIS-based tools and software designed to perform a specific scientific analysis. The connection of all these capabilities into a combined framework based on the standard XHTML Document Object Model and CSS 2.0 standards used in traditional web development will be demonstrated. CHRONOS also utilizes newer client techniques such as AJAX and cross-domain scripting along with traditional server-side database, application, and web servers. The combination of the various components of this architecture creates an environment based on open and free standards that allows for the discovery, retrieval, and integration of tools and data.

  7. [Conceptual foundations of creation of branch database of technology and intellectual property rights owned by scientific institutions, organizations, higher medical educational institutions and enterprises of healthcare sphere of Ukraine].

    PubMed

    Horban', A Ie

    2013-09-01

    The article considers the implementation of state policy in the field of technology transfer in the medical branch, pursuant to the law of Ukraine of 02.10.2012 No 5407-VI "On Amendments to the Law of Ukraine 'On State Regulation of Activity in the Field of Technology Transfer'", namely ensuring the formation of a branch database of technology and intellectual property rights owned by scientific institutions, organizations, higher medical educational institutions and enterprises of the healthcare sphere of Ukraine and created with budget funds. An analysis of international and domestic experience in processing information about intellectual property rights, and of systems supporting the transfer of new technologies, is presented. The main conceptual principles for creating this branch database of technology transfer and a branch technology transfer network are defined.

  8. Creating a High-Frequency Electronic Database in the PICU: The Perpetual Patient.

    PubMed

    Brossier, David; El Taani, Redha; Sauthier, Michael; Roumeliotis, Nadia; Emeriaud, Guillaume; Jouvet, Philippe

    2018-04-01

    Our objective was to construct a prospective high-quality and high-frequency database combining patient therapeutics and clinical variables in real time, automatically fed by the information system and network architecture available through fully electronic charting in our PICU. The purpose of this article is to describe the data acquisition process from the bedside to the research electronic database. Descriptive report and analysis of a prospective database. A 24-bed PICU (medical, surgical, and cardiac ICU) in a tertiary care free-standing maternal-child health center in Canada. All patients less than 18 years old were included at admission to the PICU. None. Between May 21, 2015, and December 31, 2016, 1,386 consecutive PICU stays from 1,194 patients were recorded in the database. Data were prospectively collected from admission to discharge, every 5 seconds from monitors and every 30 seconds from mechanical ventilators and infusion pumps. These data were linked to the patient's electronic medical record. The database's total volume was 241 GB. The patients' median age was 2.0 years (interquartile range, 0.0-9.0). Data were available for all mechanically ventilated patients (n = 511; recorded duration, 77,678 hr), and respiratory failure was the most frequent reason for admission (n = 360). The complete pharmacologic profile was synced to the database for all PICU stays. Following this implementation, a validation phase is in process and several research projects are ongoing using this high-fidelity database. Using the existing bedside information system and network architecture of our PICU, we implemented an ongoing, high-fidelity, prospectively collected electronic database, preventing the continuous loss of scientific information. This offers the opportunity to develop research on, for example, clinical decision support systems and computational models of cardiorespiratory physiology.
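
    The mixed acquisition rates described in this abstract (5 s monitor samples versus 30 s ventilator and pump records) imply an alignment step before the streams can be joined. A minimal sketch, assuming simple bucket averaging onto the coarser 30 s grid (the paper's actual pipeline is not described at this level of detail):

```python
from statistics import mean

def downsample(samples, period):
    """Average (timestamp_s, value) samples into buckets of `period` seconds.

    samples: list of (t, v) pairs. Returns {bucket_start_s: mean value},
    aligning a 5 s monitor stream to the 30 s grid used by ventilator
    and infusion-pump records so the streams can be joined row-wise.
    """
    buckets = {}
    for t, v in samples:
        buckets.setdefault(t - t % period, []).append(v)
    return {b: mean(vs) for b, vs in sorted(buckets.items())}

# Hypothetical heart-rate samples, one every 5 s over one minute.
hr = [(t, 120 + (t // 5) % 3) for t in range(0, 60, 5)]
print(downsample(hr, 30))  # → {0: 121, 30: 121}
```

    The same bucketing, run the other way (keeping the 5 s grid and forward-filling the 30 s records), would preserve the full monitor resolution at the cost of duplicated ventilator values.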

  9. The National Landslide Database and GIS for Great Britain: construction, development, data acquisition, application and communication

    NASA Astrophysics Data System (ADS)

    Pennington, Catherine; Dashwood, Claire; Freeborough, Katy

    2014-05-01

    The National Landslide Database has been developed by the British Geological Survey (BGS) and is the focus for national geohazard research for landslides in Great Britain. The history and structure of the geospatial database and associated Geographical Information System (GIS) are explained, along with the future developments of the database and its applications. The database is the most extensive source of information on landslides in Great Britain with over 16,500 records of landslide events, each documented as fully as possible. Data are gathered through a range of procedures, including: incorporation of other databases; automated trawling of current and historical scientific literature and media reports; new field- and desk-based mapping technologies with digital data capture, and crowd-sourcing information through social media and other online resources. This information is invaluable for the investigation, prevention and mitigation of areas of unstable ground in accordance with Government planning policy guidelines. The national landslide susceptibility map (GeoSure) and a national landslide domain map currently under development rely heavily on the information contained within the landslide database. Assessing susceptibility to landsliding requires knowledge of the distribution of failures and an understanding of causative factors and their spatial distribution, whilst understanding the frequency and types of landsliding present is integral to modelling how rainfall will influence the stability of a region. Communication of landslide data through the Natural Hazard Partnership (NHP) contributes to national hazard mitigation and disaster risk reduction with respect to weather and climate. Daily reports of landslide potential are published by BGS through the NHP and data collected for the National Landslide Database is used widely for the creation of these assessments. 
The National Landslide Database is freely available via an online GIS and is used by a variety of stakeholders for research purposes.

  10. GigaTON: an extensive publicly searchable database providing a new reference transcriptome in the pacific oyster Crassostrea gigas.

    PubMed

    Riviere, Guillaume; Klopp, Christophe; Ibouniyamine, Nabihoudine; Huvet, Arnaud; Boudry, Pierre; Favrel, Pascal

    2015-12-02

    The Pacific oyster, Crassostrea gigas, is one of the most important aquaculture shellfish resources worldwide. Important efforts have been undertaken towards a better knowledge of its genome and transcriptome, which are now making C. gigas a model organism among lophotrochozoans, the under-described sister clade of ecdysozoans within protostomes. These massive sequencing efforts offer the opportunity to assemble gene expression data and to make such a resource accessible and exploitable by the scientific community. We therefore undertook this assembly into an up-to-date, publicly available transcriptome database: the GigaTON (Gigas TranscriptOme pipeliNe) database. We assembled 2,204 million sequences obtained from 114 publicly available RNA-seq libraries covering all embryo-larval development stages, adult organs, and different environmental stressors including heavy metals, temperature, salinity and exposure to air, mostly performed as part of the Crassostrea gigas genome project. These data were analyzed in silico, resulting in 56,621 newly assembled contigs that were deposited into a publicly available database, the GigaTON database. This database also provides powerful and user-friendly query tools to browse and retrieve information about annotation, expression level, UTRs, splicing, polymorphism and gene ontology associated with all the contigs, within each library and across all libraries. The GigaTON database provides a convenient, powerful and versatile interface to browse, retrieve and compare massive transcriptomic information across an extensive range of conditions, tissues and developmental stages in Crassostrea gigas. To our knowledge, the GigaTON database constitutes the most extensive transcriptomic database to date in marine invertebrates, and thereby a new reference transcriptome in the oyster and a highly valuable resource for physiologists and evolutionary biologists.

  11. Biomedical science journals in the Arab world.

    PubMed

    Tadmouri, Ghazi O

    2004-10-01

    Medieval Arab scientists established the basis of medical practice and gave important attention to the publication of scientific results. At present, modern scientific publishing in the Arab world is in its developmental stage. There are fewer than 300 Arab biomedical journals, most of which are published in Egypt, Lebanon, and the Kingdom of Saudi Arabia. Yet many of these journals lack online access or indexing in major bibliographic databases. The majority of indexed journals, moreover, do not have a stable presence in the popular PubMed database, and their indexing has been discontinued since 2001. The exposure of Arab biomedical journals in international indices undoubtedly plays an important role in improving the scientific quality of these journals. The successful examples discussed in this review encourage us to call for the formation of a consortium of Arab biomedical journal publishers to assist in redressing the balance of the region from biomedical data consumption to data production.

  12. [Over- or underestimated? Bibliographic survey of the biomedical periodicals published in Hungary].

    PubMed

    Berhidi, Anna; Horváth, Katalin; Horváth, Gabriella; Vasas, Lívia

    2013-06-30

    This publication, building on an article published in 2006, assesses the quality of current Hungarian biomedical periodicals. The aim of this study was to analyse how Hungarian journals meet the requirements of scientific rigour and international visibility. The authors evaluated 93 Hungarian biomedical periodicals against 4 viewpoints for each of the two criteria mentioned above. Of the analysed journals, 35% satisfy the scientific criteria, 5% the international visibility criteria, 6% fulfill all examined criteria, and 25% are indexed in international databases. The 6 Hungarian biomedical periodicals covered by all three main bibliographic databases (Medline, Scopus, Web of Science) have the best quality indicators. The authors recommend improving both scientific rigour and international visibility. The basis of qualitative adequacy is accurate author guidelines, an English-language title, abstract and keywords for each article, and the ability to publish on time.

  13. [25 Years in nutrition and food research in the Iberoamerican knowledge area].

    PubMed

    Wanden-Berghe, C; Martín-Rodero, H

    2012-11-01

    Research output is usually considered a reliable indicator of the degree of development. Research in a problem area such as food and nutrition should, for a given region, have an impact on scientific production commensurate with the importance of the problem, the research capacity and the resources available for generating such research. The aim was to identify indicators of Iberoamerican research in nutrition and food. Retrospective study of Iberoamerican scientific production in nutrition and food over the last 25 years. The data were obtained from the bibliographic database Science Citation Index Expanded, the Journal Citation Reports Science Edition Database 2011, both included in the Web of Knowledge (Thomson Reuters), and the database of the World Bank. In total, 49,808 papers were registered, 3.20% of the Health Sciences collection in SCI. The evolution fitted an exponential model for both N&D (R² = 0.962) and FS&T (R² = 0.995). The average production in N&D per average population was highest in Spain, with 0.659 papers/million. The highest rates of productivity and profitability were found in Guatemala, with 12.963 papers/1,000 researchers and 1.486 papers/million $, respectively. The average production in FS&T per average population was highest in Cuba, with 21.624 papers/million. The productivity index was highest in Uruguay, with 25.999 papers/thousand researchers. The profitability index was highest in Guatemala, with 0.271 papers/million $. There is exponential growth in the two categories studied, N&D and FS&T. Productivity and profitability were higher in countries with low R&D (Research & Development) budgets.

  14. The Atmospheric Mercury Network: measurement and initial examination of an ongoing atmospheric mercury record across North America

    NASA Astrophysics Data System (ADS)

    Gay, D. A.; Schmeltz, D.; Prestbo, E.; Olson, M.; Sharac, T.; Tordon, R.

    2013-04-01

    The National Atmospheric Deposition Program (NADP) developed and operates a collaborative network of atmospheric mercury monitoring sites based in North America - the Atmospheric Mercury Network (AMNet). The justification for the network was growing interest and demand from many scientists and policy makers for a robust database of measurements to improve model development, assess policies and programs, and improve estimates of mercury dry deposition. Many different agencies and groups support the network, including federal, state, tribal, and international governments, academic institutions, and private companies. AMNet has added two high elevation sites outside of continental North America in Hawaii and Taiwan because of new partnerships forged within NADP. Network sites measure concentrations of atmospheric mercury fractions using automated, continuous mercury speciation systems. The procedures that NADP developed for field operations, data management, and quality assurance ensure that the network makes scientifically valid and consistent measurements. AMNet reports concentrations of hourly gaseous elemental mercury (GEM), two-hour gaseous oxidized mercury (GOM), and two-hour particulate-bound mercury less than 2.5 microns in size (PBM2.5). As of January 2012, over 450 000 valid observations are available from 30 stations. The AMNet also collects ancillary meteorological data and information on land-use and vegetation, when available. We present atmospheric mercury data comparisons by time (3 yr) at 22 unique site locations. Highlighted are contrasting values for site locations across the network: urban versus rural, coastal versus high-elevation and the range of maximum observations. The data presented should catalyze the formation of many scientific questions that may be answered through further in-depth analysis and modeling studies of the AMNet database. 
All data and methods are publicly available through an online database on the NADP website (http://nadp.isws.illinois.edu/amn/). Future network directions are to foster new network partnerships and to continue to collect, quality-assure, and post data, including dry deposition estimates, for each fraction.

  15. The Atmospheric Mercury Network: measurement and initial examination of an ongoing atmospheric mercury record across North America

    NASA Astrophysics Data System (ADS)

    Gay, D. A.; Schmeltz, D.; Prestbo, E.; Olson, M.; Sharac, T.; Tordon, R.

    2013-11-01

    The National Atmospheric Deposition Program (NADP) developed and operates a collaborative network of atmospheric-mercury-monitoring sites based in North America - the Atmospheric Mercury Network (AMNet). The justification for the network was growing interest and demand from many scientists and policy makers for a robust database of measurements to improve model development, assess policies and programs, and improve estimates of mercury dry deposition. Many different agencies and groups support the network, including federal, state, tribal, and international governments, academic institutions, and private companies. AMNet has added two high-elevation sites outside of continental North America in Hawaii and Taiwan because of new partnerships forged within NADP. Network sites measure concentrations of atmospheric mercury fractions using automated, continuous mercury speciation systems. The procedures that NADP developed for field operations, data management, and quality assurance ensure that the network makes scientifically valid and consistent measurements. AMNet reports concentrations of hourly gaseous elemental mercury (GEM), two-hour gaseous oxidized mercury (GOM), and two-hour particulate-bound mercury less than 2.5 microns in size (PBM2.5). As of January 2012, over 450 000 valid observations are available from 30 stations. AMNet also collects ancillary meteorological data and information on land use and vegetation, when available. We present atmospheric mercury data comparisons by time (3 yr) at 21 individual sites and instruments. Highlighted are contrasting values for site locations across the network: urban versus rural, coastal versus high elevation and the range of maximum observations. The data presented should catalyze the formation of many scientific questions that may be answered through further in-depth analysis and modeling studies of the AMNet database. 
All data and methods are publicly available through an online database on the NADP website (http://nadp.sws.uiuc.edu/amn/). Future network directions are to foster new network partnerships and to continue to collect, quality-assure, and post data, including dry deposition estimates, for each fraction.
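
    The aggregation from raw speciation readings into the hourly GEM and two-hour GOM/PBM2.5 values reported above can be sketched as windowed averaging with a completeness check. The window length below comes from the abstract, but the 5-min raw cadence, 75% validity threshold and field layout are assumptions, not AMNet's documented QA criteria:

```python
from statistics import mean

def aggregate(readings, window_s, expected, min_frac=0.75):
    """Aggregate raw (timestamp_s, concentration) readings into fixed windows.

    A window mean is reported only if at least `min_frac` of the `expected`
    samples are present; otherwise the window is flagged invalid (None).
    The completeness threshold is illustrative only.
    """
    buckets = {}
    for t, c in readings:
        buckets.setdefault(t - t % window_s, []).append(c)
    return {w: (mean(vs) if len(vs) >= min_frac * expected else None)
            for w, vs in sorted(buckets.items())}

# Hypothetical 5-min GEM readings (ng/m^3); the second hour is mostly missing.
gem = [(t, 1.5) for t in range(0, 3600, 300)] + [(3600, 1.6)]
print(aggregate(gem, 3600, expected=12))  # → {0: 1.5, 3600: None}
```

    Flagging incomplete windows rather than silently averaging them is what makes the posted hourly and two-hour records "valid observations" in the network's sense.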

  16. Evaluation of the Inhalation Carcinogenicity of Ethylene Oxide ...

    EPA Pesticide Factsheets

    EPA is seeking peer review of the scientific basis supporting the human health hazard and dose-response assessment of ethylene oxide (cancer) that will appear in the Integrated Risk Information System (IRIS) database. EPA seeks external peer review on how the Agency responded to the SAB panel recommendations, the exposure-response modeling of epidemiologic data, including new analyses since the 2007 external peer review, and on the adequacy, transparency, and clarity of the revised draft. The peer review will include an opportunity for the public to address the peer reviewers.

  17. Pharmacognosy, Phytochemistry and Pharmacological Properties of Achillea millefolium L.: A Review.

    PubMed

    Ali, Sofi Imtiyaz; Gopalakrishnan, B; Venkatesalu, V

    2017-08-01

    Achillea millefolium L. (yarrow) is an important species of the Asteraceae family, commonly used in the traditional medicine of several cultures from Europe to Asia for the treatment of spasmodic gastrointestinal, hepatobiliary and gynecological disorders, against inflammation and for wound healing. An extensive review of the literature on A. millefolium L. was made using ethnobotanical textbooks, published articles in peer-reviewed journals, unpublished materials and scientific databases. The Plant List, International Plant Name Index and Kew Botanical Garden databases were used to authenticate the scientific names. Monoterpenes are the most representative metabolites, constituting 90% of the essential oils relative to the sesquiterpenes, and a wide range of other chemical compounds have also been reported. Different pharmacological experiments in many in-vitro and in-vivo models have proved the potential of A. millefolium, with anti-inflammatory, antiulcer and anticancer activities, among others, lending support to the rationale behind many of its traditional uses. Owing to these noteworthy pharmacological activities, A. millefolium is a promising candidate for new drug discovery. The present review comprehensively summarizes the pharmacognosy, phytochemistry and ethnopharmacology of A. millefolium reported to date, with emphasis on the further in-vitro, clinical and pathological studies needed to investigate the unexploited potential of this plant. Copyright © 2017 John Wiley & Sons, Ltd.

  18. The National Landslide Database of Great Britain: Acquisition, communication and the role of social media

    NASA Astrophysics Data System (ADS)

    Pennington, Catherine; Freeborough, Katy; Dashwood, Claire; Dijkstra, Tom; Lawrie, Kenneth

    2015-11-01

    The British Geological Survey (BGS) is the national geological agency for Great Britain that provides geoscientific information to government, other institutions and the public. The National Landslide Database has been developed by the BGS and is the focus for national geohazard research for landslides in Great Britain. The history and structure of the geospatial database and associated Geographical Information System (GIS) are explained, along with the future developments of the database and its applications. The database is the most extensive source of information on landslides in Great Britain with over 17,000 records of landslide events to date, each documented as fully as possible for inland, coastal and artificial slopes. Data are gathered through a range of procedures, including: incorporation of other databases; automated trawling of current and historical scientific literature and media reports; new field- and desk-based mapping technologies with digital data capture, and using citizen science through social media and other online resources. This information is invaluable for directing the investigation, prevention and mitigation of areas of unstable ground in accordance with Government planning policy guidelines. The national landslide susceptibility map (GeoSure) and a national landslide domains map currently under development, as well as regional mapping campaigns, rely heavily on the information contained within the landslide database. Assessing susceptibility to landsliding requires knowledge of the distribution of failures, an understanding of causative factors, their spatial distribution and likely impacts, whilst understanding the frequency and types of landsliding present is integral to modelling how rainfall will influence the stability of a region. 
Communication of landslide data through the Natural Hazard Partnership (NHP) and Hazard Impact Model contributes to national hazard mitigation and disaster risk reduction with respect to weather and climate. Daily reports of landslide potential are published by BGS through the NHP partnership and data collected for the National Landslide Database are used widely for the creation of these assessments. The National Landslide Database is freely available via an online GIS and is used by a variety of stakeholders for research purposes.

  19. Systematically Retrieving Research: A Case Study Evaluating Seven Databases

    ERIC Educational Resources Information Center

    Taylor, Brian; Wylie, Emma; Dempster, Martin; Donnelly, Michael

    2007-01-01

    Objective: Developing the scientific underpinnings of social welfare requires effective and efficient methods of retrieving relevant items from the increasing volume of research. Method: We compared seven databases by running the nearest equivalent search on each. The search topic was chosen for relevance to social work practice with older people.…

  20. Extracting Databases from Dark Data with DeepDive.

    PubMed

    Zhang, Ce; Shin, Jaeho; Ré, Christopher; Cafarella, Michael; Niu, Feng

    2016-01-01

    DeepDive is a system for extracting relational databases from dark data: the mass of text, tables, and images that are widely collected and stored but which cannot be exploited by standard relational tools. If the information in dark data - scientific papers, Web classified ads, customer service notes, and so on - were instead in a relational database, it would give analysts a massive and valuable new set of "big data." DeepDive is distinctive when compared to previous information extraction systems in its ability to obtain very high precision and recall at reasonable engineering cost; in a number of applications, we have used DeepDive to create databases with accuracy that matches that of human annotators. To date we have successfully deployed DeepDive to create data-centric applications for insurance, materials science, genomics, paleontology, law enforcement, and others. The data unlocked by DeepDive represents a massive opportunity for industry, government, and scientific researchers. DeepDive is enabled by an unusual design that combines large-scale probabilistic inference with a novel developer interaction cycle. This design is enabled by several core innovations around probabilistic training and inference.
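
    As a rough illustration of the first stage of such an extraction pipeline (cheap candidate generation over free text, before any probabilistic scoring), the toy sketch below pulls (person, title) pairs from unstructured notes. The relation and pattern are invented for illustration; this is not DeepDive's actual interface, which expresses extraction and inference rules declaratively:

```python
import re

def extract_candidates(text):
    """Toy candidate extraction for a (person, work) relation from dark text.

    A stand-in for the candidate-generation stage of a DeepDive-style
    pipeline; in the real system candidates are then weighted by
    large-scale probabilistic inference rather than accepted outright.
    """
    pat = re.compile(r'(Dr\.\s+[A-Z][a-z]+)\s+authored\s+"([^"]+)"')
    return [(m.group(1), m.group(2)) for m in pat.finditer(text)]

notes = 'Dr. Smith authored "On Dark Data". Dr. Jones authored "Deep Extraction".'
print(extract_candidates(notes))
```

    The separation matters: candidate generation can afford to over-produce, because the downstream statistical model, not the pattern, decides which tuples enter the final relational database.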

  1. TAPAS, a VO archive at the IRAM 30-m telescope

    NASA Astrophysics Data System (ADS)

    Leon, Stephane; Espigares, Victor; Ruíz, José Enrique; Verdes-Montenegro, Lourdes; Mauersberger, Rainer; Brunswig, Walter; Kramer, Carsten; Santander-Vela, Juan de Dios; Wiesemeyer, Helmut

    2012-07-01

Astronomical observatories today generate increasingly large volumes of data. To use these data efficiently, databases have been built following the standards proposed by the International Virtual Observatory Alliance (IVOA), which provide a common query protocol and make the databases interoperable. The IRAM 30-m radio telescope, located in Sierra Nevada (Granada, Spain), is a millimeter-wavelength telescope with a constantly renewed, extensive choice of instruments, capable of covering the frequency range between 80 and 370 GHz. It continuously produces a large amount of data thanks to the more than 200 scientific projects observed each year. The TAPAS archive at the IRAM 30-m telescope aims to provide public access to the headers describing the observations performed with the telescope, according to a defined data policy, while also making the technical data available to IRAM staff members. Special emphasis has been placed on making it Virtual Observatory (VO) compliant and on offering a VO-compliant web interface that makes the information available to the scientific community. TAPAS is built using the Django Python framework on top of a relational MySQL database and is fully integrated with the telescope control system. The TAPAS data model (DM) is based on the Radio Astronomical DAta Model for Single dish radio telescopes (RADAMS), allowing easy integration into the VO infrastructure. A metadata modeling layer is used by the data-filler so that the implementation is free from assumptions about the control system and the underlying database. TAPAS and its public web interface ( http://tapas.iram.es ) provide a scalable system that can evolve with new instruments and observing modes. A meta-description of the DM has been introduced in TAPAS both to avoid undesired coupling between the code and the DM and to provide better management of the archive. A subset of the header data stored in TAPAS will be made available at the CDS.
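To make the header-archive idea concrete, here is a minimal relational sketch of an observation-header table and a typical archive query. The table name, columns, and sample values are hypothetical simplifications in the spirit of TAPAS/RADAMS (the real archive uses MySQL behind Django, with a far richer data model); SQLite stands in for MySQL so the sketch is self-contained.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical, heavily simplified header table; real TAPAS/RADAMS
# field names and units may differ.
con.execute("""
    CREATE TABLE observation_header (
        scan_id       INTEGER PRIMARY KEY,
        project_id    TEXT,
        source_name   TEXT,
        frequency_ghz REAL,
        obs_date      TEXT
    )""")
con.executemany(
    "INSERT INTO observation_header VALUES (?, ?, ?, ?, ?)",
    [(1, "T01-12", "M82",  86.2, "2012-03-01"),
     (2, "T01-12", "M51", 230.5, "2012-03-02"),
     (3, "E02-12", "M82", 345.8, "2012-03-03")])

# Typical archive query: all headers for a source within a frequency band.
rows = con.execute(
    "SELECT scan_id, frequency_ghz FROM observation_header "
    "WHERE source_name = ? AND frequency_ghz BETWEEN ? AND ?",
    ("M82", 80.0, 120.0)).fetchall()
```

Exposing only headers (not the raw spectra) through such queries is what lets the archive respect the observatory's data policy while still being VO-searchable.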

  2. Cognitive Affordances of the Cyberinfrastructure for Science and Math Learning

    ERIC Educational Resources Information Center

    Martinez, Michael E.; Peters Burton, Erin E.

    2011-01-01

The "cyberinfrastructure" is a broad informational network that entails connections to real-time data sensors as well as tools that permit visualization and other forms of analysis, and that facilitates access to vast scientific databases. This multifaceted network, already a major boon to scientific discovery, now shows exceptional promise in…

  3. Applications and Methods Utilizing the Simple Semantic Web Architecture and Protocol (SSWAP) for Bioinformatics Resource Discovery and Disparate Data and Service Integration

    USDA-ARS?s Scientific Manuscript database

    Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of scientific data between information resources difficu...

  4. Scientific Journal Publishing: Yearly Volume and Open Access Availability

    ERIC Educational Resources Information Center

    Bjork, Bo-Christer; Roos, Annikki; Lauri, Mari

    2009-01-01

    Introduction: We estimate the total yearly volume of peer-reviewed scientific journal articles published world-wide as well as the share of these articles available openly on the Web either directly or as copies in e-print repositories. Method: We rely on data from two commercial databases (ISI and Ulrich's Periodicals Directory) supplemented by…

  5. Referencing Science: Teaching Undergraduates to Identify, Validate, and Utilize Peer-Reviewed Online Literature

    ERIC Educational Resources Information Center

    Berzonsky, William A.; Richardson, Katherine D.

    2008-01-01

    Accessibility of online scientific literature continues to expand due to the advent of scholarly databases and search engines. Studies have shown that undergraduates favor using online scientific literature to address research questions, but they often do not have the skills to assess the validity of research articles. Undergraduates generally are…

6. Hirsch's index: a case study conducted at the Faculdade de Filosofia, Ciências e Letras de Ribeirão Preto, Universidade de São Paulo.

    PubMed

    Torro-Alves, N; Herculano, R D; Terçariol, C A S; Kinouchi Filho, O; Graeff, C F O

    2007-11-01

An analysis of scientific bibliographic productivity using the Hirsch h-index, information from the Institute for Scientific Information (ISI) database and the Curriculum Lattes (CNPq, Brazil) was performed at the Faculdade de Filosofia, Ciências e Letras de Ribeirão Preto, Universidade de São Paulo (FFCLRP-USP), which has four departments in the natural, biological and social sciences. Bibliometric evaluations of undergraduate programs showed better performance by the departments of Chemistry (P < 0.001) and Biology (P < 0.001) when compared to the departments of Physics and Mathematics and of Psychology and Education. We also analyzed the scientific output of the six graduate programs of FFCLRP: Psychology, Psychobiology, Chemistry, Physics Applied to Medicine and Biology, Comparative Biology, and Entomology. The graduate program in Psychology presented a lower h-index (P < 0.001) and had fewer papers indexed by the ISI Web of Science (P < 0.001) when compared to the other graduate programs. The poorer performance of the Psychology program may be associated with its limited coverage by the Thomson Institute for Scientific Information database.
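The Hirsch h-index used throughout this study has a simple definition that is easy to compute from a list of per-paper citation counts: it is the largest h such that at least h papers have at least h citations each. A minimal sketch (the citation counts below are invented for illustration):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cited, start=1):
        if c >= rank:      # the rank-th most-cited paper still clears the bar
            h = rank
        else:
            break
    return h

example = h_index([10, 8, 5, 4, 3])  # 4 papers have at least 4 citations each
```

Because the index depends only on counts reported by the citation database, limited database coverage (as noted for the Psychology program) directly depresses it.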

  7. Assessing the Scientific Research Productivity of a Brazilian Healthcare Institution: A Case Study at the Heart Institute of São Paulo, Brazil

    PubMed Central

    Tess, Beatriz Helena; Furuie, Sérgio Shiguemi; Castro, Regina Célia Figueiredo; do Carmo Cavarette Barreto, Maria; Nobre, Moacyr Roberto Cuce

    2009-01-01

    INTRODUCTION: The present study was motivated by the need to systematically assess the research productivity of the Heart Institute (InCor), Medical School of the University of São Paulo, Brazil. OBJECTIVE: To explore methodology for the assessment of institutional scientific research productivity. MATERIALS AND METHODS: Bibliometric indicators based on searches for author affiliation of original scientific articles or reviews published in journals indexed in the databases Web of Science, MEDLINE, EMBASE, LILACS and SciELO from January 2000 to December 2003 were used in this study. The retrieved records were analyzed according to the index parameters of the journals and modes of access. The number of citations was used to calculate the institutional impact factor. RESULTS: Out of 1253 records retrieved from the five databases, 604 original articles and reviews were analyzed; of these, 246 (41%) articles were published in national journals and 221 (90%) of those were in journals with free online access through SciELO or their own websites. Of the 358 articles published in international journals, 333 (93%) had controlled online access and 223 (67%) were available through the Capes Portal of Journals. The average impact of each article for InCor was 2.224 in the period studied. CONCLUSION: A simple and practical methodology to evaluate the scientific production of health research institutions includes searches in the LILACS database for national journals and in MEDLINE and the Web of Science for international journals. The institutional impact factor of articles indexed in the Web of Science may serve as a measure by which to assess and review the scientific productivity of a research institution. PMID:19578662

  8. Mining hidden knowledge for drug safety assessment: topic modeling of LiverTox as a case study

    PubMed Central

    2014-01-01

Background Given its significant impact on public health and drug development, drug safety has been a focal point and research emphasis across multiple disciplines beyond scientific investigation, including consumer advocates, drug developers and regulators. This concern and effort has led to numerous databases with drug safety information available in the public domain, and the majority of them contain substantial textual data. Text mining offers an opportunity to leverage the hidden knowledge within these textual data for an enhanced understanding of drug safety and thus improved public health. Methods In this proof-of-concept study, topic modeling, an unsupervised text mining approach, was performed on the LiverTox database developed by the National Institutes of Health (NIH). LiverTox structures one document per drug, containing multiple sections that summarize clinical information on drug-induced liver injury (DILI). We hypothesized that these documents might contain specific textual patterns that could be used to address key DILI issues. We focused the study on drug-induced acute liver failure (ALF), a severe form of DILI with limited treatment options. Results After topic modeling of the "Hepatotoxicity" sections of LiverTox across 478 drug documents, we identified a hidden topic relevant to Hy's law, a widely accepted rule incriminating drugs with a high risk of causing ALF in humans. Using this topic, a total of 127 drugs were further implicated, 77 of which had clear ALF-relevant terms in the "Outcome and management" sections of LiverTox. For the remaining 50 drugs, evidence supporting a risk of ALF was found for 42 drugs in other public databases. Conclusion In this case study, the knowledge buried in the textual data was extracted by applying topic modeling to the LiverTox database to identify drugs with the potential of causing ALF. This knowledge further guided identification of drugs with similar potential, and most of them could be verified and confirmed. This study highlights the utility of topic modeling to leverage information within textual drug safety databases, providing new opportunities in the big data era to assess drug safety. PMID:25559675

  9. Mining hidden knowledge for drug safety assessment: topic modeling of LiverTox as a case study.

    PubMed

    Yu, Ke; Zhang, Jie; Chen, Minjun; Xu, Xiaowei; Suzuki, Ayako; Ilic, Katarina; Tong, Weida

    2014-01-01

Given its significant impact on public health and drug development, drug safety has been a focal point and research emphasis across multiple disciplines beyond scientific investigation, including consumer advocates, drug developers and regulators. This concern and effort has led to numerous databases with drug safety information available in the public domain, and the majority of them contain substantial textual data. Text mining offers an opportunity to leverage the hidden knowledge within these textual data for an enhanced understanding of drug safety and thus improved public health. In this proof-of-concept study, topic modeling, an unsupervised text mining approach, was performed on the LiverTox database developed by the National Institutes of Health (NIH). LiverTox structures one document per drug, containing multiple sections that summarize clinical information on drug-induced liver injury (DILI). We hypothesized that these documents might contain specific textual patterns that could be used to address key DILI issues. We focused the study on drug-induced acute liver failure (ALF), a severe form of DILI with limited treatment options. After topic modeling of the "Hepatotoxicity" sections of LiverTox across 478 drug documents, we identified a hidden topic relevant to Hy's law, a widely accepted rule incriminating drugs with a high risk of causing ALF in humans. Using this topic, a total of 127 drugs were further implicated, 77 of which had clear ALF-relevant terms in the "Outcome and management" sections of LiverTox. For the remaining 50 drugs, evidence supporting a risk of ALF was found for 42 drugs in other public databases. In this case study, the knowledge buried in the textual data was extracted by applying topic modeling to the LiverTox database to identify drugs with the potential of causing ALF. This knowledge further guided identification of drugs with similar potential, and most of them could be verified and confirmed. This study highlights the utility of topic modeling to leverage information within textual drug safety databases, providing new opportunities in the big data era to assess drug safety.

  10. Spectral information (gas, liquid and solid phase from EUV-VUV-UV-Vis-NIR) and related data (e.g. information concerning publications on quantum yield studies or photolysis studies) from published papers

    NASA Astrophysics Data System (ADS)

    Noelle, A.; Hartmann, G. K.; Martin-Torres, F. J.

    2010-05-01

The science-softCon "UV/Vis+ Spectra Data Base" is a non-profit project established in August 2000 and operated in accordance with the "Open Access" definitions and regulations of the CSPR Assessment Panel on Scientific Data and Information (International Council for Science, 2004, ICSU Report of the CSPR Assessment Panel on Data and Information, http://www.science-softcon.de/spectra/cspr.pdf; ISBN 0-930357-60-4). The online database currently contains about 5600 spectra (from low to very high resolution, at different temperatures and pressures) and datasheets (metadata) for about 850 substances. Additional spectra and datasheets are added continuously. In addition, more than 250 links to freely available online original publications are provided. The interdisciplinary character of this photochemistry database fosters interaction between different research areas, making it an excellent tool for scientists working in fields such as atmospheric chemistry, astrophysics, agriculture, analytical chemistry, environmental chemistry, medicine, and remote sensing. To ensure the high quality standard of the fast-growing UV/Vis+ Spectra Data Base, an international "Scientific Advisory Group" (SAG) was established in 2004. Because maintenance of the database is so important, the support of the scientific community is crucial; we therefore encourage all scientists to support this data compilation project through the provision of new or missing spectral data and information.

  11. A scientific database for real-time Neutron Monitor measurements - taking Neutron Monitors into the 21st century

    NASA Astrophysics Data System (ADS)

    Steigies, Christian

    2012-07-01

The Neutron Monitor Database project, www.nmdb.eu, was funded in 2008 and 2009 by the European Commission's 7th Framework Programme (FP7). Neutron monitors (NMs) have been in use worldwide since the International Geophysical Year (IGY) in 1957, and cosmic ray data from the IGY and the improved NM64 NMs have been distributed since that time, but a common data format existed only for data with one-hour resolution. These data were first distributed in printed books, later via the World Data Center ftp server. In the 1990s the first NM stations started to record data at higher resolutions (typically 1 minute) and publish it on their web pages. However, every NM station chose its own format, making it cumbersome to work with this distributed data. In NMDB, all European and some neighboring NM stations came together to agree on a common format for high-resolution data and made it available via a centralized database. The goal of NMDB is to make all data from all NM stations available in real time. The original NMDB network has recently been joined by the Bartol Research Institute (Newark DE, USA), the National Autonomous University of Mexico and the North-West University (Potchefstroom, South Africa). The data are accessible to everyone via an easy-to-use web interface, but expert users can also access the database directly to build applications such as real-time space weather alerts. Even though SQL databases are used today by most web services (blogs, wikis, social media, e-commerce), the power of an SQL database has not yet been fully realized by the scientific community. In training courses, we teach how to make use of NMDB, how to join NMDB, and how to ensure data quality. The present status of the extended NMDB will be presented. The consortium welcomes further data providers to help increase the scientific contributions of the worldwide neutron monitor network to heliospheric physics and space weather.
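The kind of direct SQL access described for expert users can be sketched as follows. The table layout, station codes, and values below are hypothetical, chosen only to illustrate a typical query; the real NMDB schema at www.nmdb.eu differs, and SQLite stands in for the actual database server so the example is self-contained.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical 1-minute-resolution table; real NMDB tables are organized
# differently (e.g. per station) and carry more columns.
con.execute("""
    CREATE TABLE nm_counts (
        station        TEXT,
        ts_utc         TEXT,
        corrected_rate REAL   -- counts/s, pressure-corrected
    )""")
con.executemany(
    "INSERT INTO nm_counts VALUES (?, ?, ?)",
    [("KIEL", "2012-07-01 00:00", 165.2),
     ("KIEL", "2012-07-01 00:01", 164.8),
     ("OULU", "2012-07-01 00:00", 105.1)])

# Station-averaged rate over an interval: the sort of aggregate a
# real-time space-weather application would compute.
rows = con.execute(
    "SELECT station, AVG(corrected_rate) FROM nm_counts "
    "WHERE ts_utc BETWEEN '2012-07-01 00:00' AND '2012-07-01 00:59' "
    "GROUP BY station ORDER BY station").fetchall()
```

Agreeing on one common schema is what makes such a query meaningful across all participating stations, instead of one parser per station format.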

  12. Application description and policy model in collaborative environment for sharing of information on epidemiological and clinical research data sets.

    PubMed

    de Carvalho, Elias César Araujo; Batilana, Adelia Portero; Simkins, Julie; Martins, Henrique; Shah, Jatin; Rajgor, Dimple; Shah, Anand; Rockart, Scott; Pietrobon, Ricardo

    2010-02-19

Sharing of epidemiological and clinical data sets among researchers is poor at best, to the detriment of science and the community at large. The purpose of this paper is therefore to (1) describe a novel Web application designed to share information on study data sets, focusing on epidemiological clinical research in a collaborative environment, and (2) create a policy model placing this collaborative environment into the current scientific social context. The Database of Databases application was developed based on feedback from epidemiologists and clinical researchers requiring a Web-based platform that would allow sharing of information about epidemiological and clinical study data sets in a collaborative environment. This platform should ensure that researchers can modify the information. Model-based predictions of the number of publications and funding resulting from combinations of different policy implementation strategies (for metadata and data sharing) were generated using System Dynamics modeling. The application allows researchers to easily upload information about clinical study data sets, which is searchable and modifiable by other users in a wiki environment. All modifications are filtered by the database principal investigator in order to maintain quality control. The application has been extensively tested and currently contains 130 clinical study data sets from the United States, Australia, China and Singapore. Model results indicated that any policy implementation would be better than the current strategy, that metadata sharing is better than data sharing, and that combined policies achieve the best results in terms of publications.
Based on our empirical observations and the resulting model, the social network environment surrounding the application can help epidemiologists and clinical researchers contribute and search for metadata in a collaborative environment, thus potentially facilitating collaboration among research communities distributed around the globe.
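System Dynamics models of the kind used above compare policies by integrating stocks and flows over time. The two-stock sketch below (shared metadata feeding a publication rate) is entirely invented, with made-up rate constants, purely to show the mechanics of comparing a no-sharing policy against a sharing policy; it is not the authors' calibrated model.

```python
def simulate(policy_share_rate, steps=60):
    """Toy stock-and-flow run: shared metadata accumulates and drives
    publications. Both rates below are illustrative constants only."""
    shared_metadata, publications = 0.0, 0.0
    for _ in range(steps):
        shared_metadata += policy_share_rate     # inflow: newly shared records
        publications += 0.01 * shared_metadata   # flow into the publication stock
    return publications

no_policy = simulate(policy_share_rate=0.0)
with_policy = simulate(policy_share_rate=1.0)
```

Even this toy run reproduces the qualitative finding that any sharing policy outperforms the status quo, since the publication stock grows with the accumulated shared metadata.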

  13. Implementation of the CUAHSI information system for regional hydrological research and workflow

    NASA Astrophysics Data System (ADS)

    Bugaets, Andrey; Gartsman, Boris; Bugaets, Nadezhda; Krasnopeyev, Sergey; Krasnopeyeva, Tatyana; Sokolov, Oleg; Gonchukov, Leonid

    2013-04-01

Environmental research and education have become increasingly data-intensive as a result of the proliferation of digital technologies, instrumentation, and pervasive networks through which data are collected, generated, shared, and analyzed. Over the next decade, it is likely that science and engineering research will produce more scientific data than has been created over the whole of human history (Cox et al., 2006). Successfully using these data to achieve new scientific breakthroughs depends on the ability to access, organize, integrate, and analyze these large datasets. The new project of PGI FEB RAS (http://tig.dvo.ru), FERHRI (www.ferhri.org) and Primgidromet (www.primgidromet.ru) is focused on the creation of an open, unified hydrological information system according to international standards to support hydrological investigation, water management and forecast systems. Within the hydrologic science community, the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (http://his.cuahsi.org) has been developing a distributed network of data sources and functions that are integrated using web services and that provide access to data, tools, and models enabling synthesis, visualization, and evaluation of hydrologic system behavior. On top of the CUAHSI technologies, the first two template databases were developed for primary datasets of special observations on experimental basins in the Far East Region of Russia. The first database contains data from special observations performed at the former (1957-1994) Primorskaya Water-Balance Station (1500 km2). Measurements were carried out at 20 hydrological and 40 rain gauging stations and were published as special series, but only as hardcopy books. The database provides raw data from loggers with hourly and daily time support.
The second database, called «FarEastHydro», provides published standard daily measurements performed at the Roshydromet observation network (200 hydrological and meteorological stations) from 1930 through 1990. Both data resources are maintained in test mode at the project site http://gis.dvo.ru:81/, which is permanently updated. After this first success, the decision was made to use the CUAHSI technology as a basis for the development of a hydrological information system to support data publishing and the workflow of Primgidromet, the regional office of the Federal State Hydrometeorological Agency. At the moment, the Primgidromet observation network is equipped with 34 automatic SEBA hydrological pressure sensor pneumatic gauges PS-Light-2 and 36 automatic SEBA weather stations. Large datasets generated by sensor networks are organized and stored within a central ODM database, which makes it possible to unambiguously interpret the data with sufficient metadata and provides a traceable heritage from raw measurements to usable information. Organization of the data within a central CUAHSI ODM database was the most critical step, with several important implications. This technology is widespread and well documented, and it ensures that all datasets are publicly available and readily used by other investigators and developers to support additional analyses and hydrological modeling. Implementation of ODM within a Relational Database Management System eliminates potential data manipulation errors and intermediate data processing steps. Wrapping the CUAHSI WaterOneFlow web service into an OpenMI 2.0 linkable component (www.openmi.org) allows seamless integration with well-known hydrological modeling systems.
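The claim that an ODM database makes values "unambiguously interpretable with sufficient metadata" can be illustrated with a tiny fragment of an ODM-style schema. The tables below (Sites, Variables, DataValues) echo the CUAHSI ODM design but are drastically simplified, and the site, variable, and value are invented; the real ODM has many more tables, fields, and controlled vocabularies.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Simplified ODM-style fragment: a data value is never stored alone,
# it always references site and variable metadata.
con.executescript("""
    CREATE TABLE Sites (SiteID INTEGER PRIMARY KEY, SiteName TEXT);
    CREATE TABLE Variables (VariableID INTEGER PRIMARY KEY,
                            VariableName TEXT, Units TEXT);
    CREATE TABLE DataValues (ValueID INTEGER PRIMARY KEY,
                             SiteID INTEGER, VariableID INTEGER,
                             LocalDateTime TEXT, DataValue REAL);
    INSERT INTO Sites VALUES (1, 'Primorskaya station 12');
    INSERT INTO Variables VALUES (1, 'Gage height', 'm');
    INSERT INTO DataValues VALUES (1, 1, 1, '1987-06-01 12:00', 2.41);
""")

# A raw number only becomes information when joined with its metadata.
row = con.execute("""
    SELECT s.SiteName, v.VariableName, v.Units, d.DataValue
    FROM DataValues d
    JOIN Sites s ON s.SiteID = d.SiteID
    JOIN Variables v ON v.VariableID = d.VariableID
""").fetchone()
```

This join is the "traceable heritage" in miniature: every published value can be traced back to where, what, and in which units it was measured.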

  14. XML-based information system for planetary sciences

    NASA Astrophysics Data System (ADS)

    Carraro, F.; Fonte, S.; Turrini, D.

    2009-04-01

EuroPlaNet (EPN in the following) has been developed by the planetological community under the "Sixth Framework Programme" (FP6 in the following), the European programme devoted to improving European research efforts through the creation of an internal market for science and technology. The goal of the EPN programme is the creation of a European network aimed at the diffusion of data produced by space missions dedicated to the study of the Solar System. A special place within the EPN programme is held by I.D.I.S. (Integrated and Distributed Information Service). The main goal of IDIS is to offer the planetary science community user-friendly access to the data and information produced by the various types of research activities, i.e. Earth-based observations, space observations, modeling, theory and laboratory experiments. During the FP6 programme, IDIS development consisted of the creation of a series of thematic nodes, each specialized in a specific scientific domain, and a technical coordination node. The four thematic nodes are the Atmosphere node, the Plasma node, the Interiors & Surfaces node and the Small Bodies & Dust node. The main task of the nodes has been the building up of selected scientific cases related to the scientific domain of each node. The second task of the EPN nodes has been the creation of a catalogue of resources related to their main scientific theme. Both efforts have been used as the basis for the development of the main IDIS goal, i.e. the integrated distributed service. An XML-based data model has been developed to describe resources using metadata and to store the metadata within an XML-based database called eXist. A search engine has then been developed to allow users to search for resources within the database. Users can select the resource type and insert one or more values, or choose a value from a list, depending on the selected resource.
The system searches for all the resources containing the inserted values within the resource descriptions. An important facility of the IDIS search system is its multi-node search capability, made possible by the capacity of eXist to run queries on remote databases. This allows the system to show all resources which satisfy the search criteria on the local node and to show how many resources are found on remote nodes, also giving a link to open the results page on the remote nodes. During FP7, the development of the IDIS system will have the main goal of making the service Virtual Observatory compliant.
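Searching XML resource descriptions by type and value, as the IDIS engine does, can be sketched with a small XPath-style query. The element and attribute names below are invented placeholders for the IDIS metadata model, and Python's ElementTree stands in for eXist (which would use XQuery/XPath server-side, including on remote nodes).

```python
import xml.etree.ElementTree as ET

# Invented toy catalogue; the real IDIS resource descriptions differ.
catalogue = ET.fromstring("""
<resources>
  <resource type="dataset"><target>Mars atmosphere</target></resource>
  <resource type="model"><target>cometary dust</target></resource>
  <resource type="dataset"><target>Titan atmosphere</target></resource>
</resources>
""")

# Query: all resources of type "dataset" whose description mentions
# an atmosphere, combining an attribute predicate with a value match.
hits = [r.findtext("target")
        for r in catalogue.findall("resource[@type='dataset']")
        if "atmosphere" in r.findtext("target")]
```

Running the same query against several catalogues (one per thematic node) and merging the hit counts is, in essence, the multi-node search the abstract describes.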

  15. Computational Thermochemistry of Jet Fuels and Rocket Propellants

    NASA Technical Reports Server (NTRS)

    Crawford, T. Daniel

    2002-01-01

The design of new high-energy-density molecules as candidates for jet and rocket fuels is an important goal of modern chemical thermodynamics. The NASA Glenn Research Center is home to a database of thermodynamic data for over 2000 compounds related to this goal, in the form of least-squares fits of heat capacities, enthalpies, and entropies as functions of temperature over the range 300 - 6000 K. The Chemical Equilibrium with Applications (CEA) program, written and maintained by researchers at NASA Glenn over the last fifty years, makes use of this database for modeling the performance of potential rocket propellants. During its long history, the NASA Glenn database has been developed from experimental results and data published in the scientific literature, such as the standard JANAF tables. The recent development of efficient computational techniques based on quantum chemical methods provides an alternative source of information for the expansion of such databases. For example, it is now possible to model dissociation or combustion reactions of small molecules to high accuracy using techniques such as coupled cluster theory or density functional theory. Unfortunately, the current applicability of reliable computational models is limited to relatively small molecules containing only around a dozen (non-hydrogen) atoms. We propose to extend the applicability of coupled cluster theory, often referred to as the "gold standard" of quantum chemical methods, to molecules containing 30-50 non-hydrogen atoms. The centerpiece of this work is the concept of local correlation, in which the description of the electron interactions, known as electron correlation effects, is reduced to only its most important localized components. Such an advance has the potential to greatly expand the current reach of computational thermochemistry and thus to have a significant impact on the theoretical study of jet and rocket propellants.
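The "least-squares fits of heat capacities ... as functions of temperature" mentioned above are commonly stored as NASA polynomials. The sketch below evaluates the classic 7-coefficient form, Cp/R = a1 + a2*T + a3*T^2 + a4*T^3 + a5*T^4 (the current NASA Glenn files actually use a 9-coefficient variant with inverse-temperature terms); the coefficients used here are made up for illustration and are not taken from the database.

```python
R = 8.314462618  # universal gas constant, J/(mol*K)

def cp(T, a):
    """Heat capacity in J/(mol*K) from the first five coefficients of a
    classic 7-coefficient NASA polynomial (a[0]..a[4])."""
    return R * (a[0] + a[1]*T + a[2]*T**2 + a[3]*T**3 + a[4]*T**4)

a_demo = (3.5, 1.0e-4, 0.0, 0.0, 0.0)   # hypothetical species, not real data
value = cp(1000.0, a_demo)              # Cp at 1000 K for the demo species
```

Separate coefficient sets cover separate temperature sub-ranges (e.g. low and high within 300 - 6000 K), and companion polynomials give H(T) and S(T), which is what lets CEA evaluate equilibrium and propellant performance over the full range.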

  16. Assessing availability of scientific journals, databases, and health library services in Canadian health ministries: a cross-sectional study.

    PubMed

    Léon, Grégory; Ouimet, Mathieu; Lavis, John N; Grimshaw, Jeremy; Gagnon, Marie-Pierre

    2013-03-21

    Evidence-informed health policymaking logically depends on timely access to research evidence. To our knowledge, despite the substantial political and societal pressure to enhance the use of the best available research evidence in public health policy and program decision making, there is no study addressing availability of peer-reviewed research in Canadian health ministries. To assess availability of (1) a purposive sample of high-ranking scientific journals, (2) bibliographic databases, and (3) health library services in the fourteen Canadian health ministries. From May to October 2011, we conducted a cross-sectional survey among librarians employed by Canadian health ministries to collect information relative to availability of scientific journals, bibliographic databases, and health library services. Availability of scientific journals in each ministry was determined using a sample of 48 journals selected from the 2009 Journal Citation Reports (Sciences and Social Sciences Editions). Selection criteria were: relevance for health policy based on scope note information about subject categories and journal popularity based on impact factors. We found that the majority of Canadian health ministries did not have subscription access to key journals and relied heavily on interlibrary loans. Overall, based on a sample of high-ranking scientific journals, availability of journals through interlibrary loans, online and print-only subscriptions was estimated at 63%, 28% and 3%, respectively. Health Canada had a 2.3-fold higher number of journal subscriptions than that of the provincial ministries' average. Most of the organisations provided access to numerous discipline-specific and multidisciplinary databases. Many organisations provided access to the library resources described through library partnerships or consortia. No professionally led health library environment was found in four out of fourteen Canadian health ministries (i.e. 
Manitoba Health, Northwest Territories Department of Health and Social Services, Nunavut Department of Health and Social Services and Yukon Department of Health and Social Services). There is inequity in availability of peer-reviewed research in the fourteen Canadian health ministries. This inequity could present a problem, as each province and territory is responsible for formulating and implementing evidence-informed health policies and services for the benefit of its population.

  17. Assessing availability of scientific journals, databases, and health library services in Canadian health ministries: a cross-sectional study

    PubMed Central

    2013-01-01

    Background Evidence-informed health policymaking logically depends on timely access to research evidence. To our knowledge, despite the substantial political and societal pressure to enhance the use of the best available research evidence in public health policy and program decision making, there is no study addressing availability of peer-reviewed research in Canadian health ministries. Objectives To assess availability of (1) a purposive sample of high-ranking scientific journals, (2) bibliographic databases, and (3) health library services in the fourteen Canadian health ministries. Methods From May to October 2011, we conducted a cross-sectional survey among librarians employed by Canadian health ministries to collect information relative to availability of scientific journals, bibliographic databases, and health library services. Availability of scientific journals in each ministry was determined using a sample of 48 journals selected from the 2009 Journal Citation Reports (Sciences and Social Sciences Editions). Selection criteria were: relevance for health policy based on scope note information about subject categories and journal popularity based on impact factors. Results We found that the majority of Canadian health ministries did not have subscription access to key journals and relied heavily on interlibrary loans. Overall, based on a sample of high-ranking scientific journals, availability of journals through interlibrary loans, online and print-only subscriptions was estimated at 63%, 28% and 3%, respectively. Health Canada had a 2.3-fold higher number of journal subscriptions than that of the provincial ministries’ average. Most of the organisations provided access to numerous discipline-specific and multidisciplinary databases. Many organisations provided access to the library resources described through library partnerships or consortia. 
No professionally led health library environment was found in four out of fourteen Canadian health ministries (i.e. Manitoba Health, Northwest Territories Department of Health and Social Services, Nunavut Department of Health and Social Services and Yukon Department of Health and Social Services). Conclusions There is inequity in availability of peer-reviewed research in the fourteen Canadian health ministries. This inequity could present a problem, as each province and territory is responsible for formulating and implementing evidence-informed health policies and services for the benefit of its population. PMID:23514333

  18. Mental health services assessment in Brazil: systematic literature review.

    PubMed

    da Costa, Pedro Henrique Antunes; Colugnati, Fernando Antonio Basile; Ronzani, Telmo Mota

    2015-10-01

    Assessment in the mental health area is a mechanism able to generate information that supports decision-making. It is therefore necessary to engage with the existing discussions, reasoning through the challenges and possibilities linked to knowledge production within this scientific field. A systematic review of the Brazilian scientific production on mental health service assessment was performed, identifying and discussing methods, assessment perspectives and results. The search for articles was done in the IBECS, Lilacs and SciELO databases, considering publications since Federal Law 10.216. Thirty-five articles were selected based on the search terms and on the inclusion and exclusion criteria. Scientific production in this field is concentrated in the South and Southeast regions and holds different scopes and participants. Such a wide range of possibilities is adopted as a way to help improve services and decision-making processes in mental health care. Advances in humanized, participative and community care are highlighted, but they require more investment, professional qualification and organizational improvements. Greater integration among studies is called for, with evaluations going beyond structural aspects and comparison with hospitalocentric models.

  19. Database systems for knowledge-based discovery.

    PubMed

    Jagarlapudi, Sarma A R P; Kishan, K V Radha

    2009-01-01

    Several database systems have been developed to provide valuable information, in a structured format, to users ranging from the bench chemist to the biologist and from the medical practitioner to the pharmaceutical scientist. The advent of information technology and computational power has enhanced the ability to access large volumes of data in the form of a database, where one can do compilation, searching, archiving, analysis, and finally knowledge derivation. Although data are of variable types, the tools used for database creation, searching and retrieval are similar. GVK BIO has been developing databases from publicly available scientific literature in specific areas like medicinal chemistry, clinical research, and mechanism-based toxicity, so that structured databases containing vast data can be used in several areas of research. These databases are classified as reference-centric or compound-centric, depending on how the database systems were designed. Integration of these databases with knowledge-derivation tools would enhance their value toward better drug design and discovery.
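
    As a sketch of the distinction drawn above, the same curated facts can be indexed reference-centrically (keyed by publication) or compound-centrically (keyed by chemical entity); the records and field names below are hypothetical, not GVK BIO's actual schema:

```python
from collections import defaultdict

# The same curated facts, indexed two ways. Field names and records
# are invented for illustration.
facts = [
    {"ref": "PMID:111", "compound": "aspirin",   "activity": "COX inhibition"},
    {"ref": "PMID:111", "compound": "ibuprofen", "activity": "COX inhibition"},
    {"ref": "PMID:222", "compound": "aspirin",   "activity": "antiplatelet"},
]

def index_by(records, key):
    """Group records under the chosen key field."""
    idx = defaultdict(list)
    for rec in records:
        idx[rec[key]].append(rec)
    return idx

reference_centric = index_by(facts, "ref")       # one entry per paper
compound_centric = index_by(facts, "compound")   # one entry per compound

print(len(reference_centric["PMID:111"]))  # 2 facts from that paper
print(len(compound_centric["aspirin"]))    # 2 facts about aspirin
```

    The underlying data are identical; only the access path differs, which is why such databases can serve both literature-oriented and structure-oriented queries.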

  20. Atlas - a data warehouse for integrative bioinformatics.

    PubMed

    Shah, Sohrab P; Huang, Yong; Xu, Tao; Yuen, Macaire M S; Ling, John; Ouellette, B F Francis

    2005-02-21

    We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure for bioinformatics research and development. The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, Human Protein Reference Database (HPRD), Biomolecular Interaction Network Database (BIND), Database of Interacting Proteins (DIP), Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. 
First, Atlas stores data of similar types using common data models, enforcing the relationships between data types. Second, integration is achieved through a combination of APIs, ontology, and tools. The Atlas software is freely available under the GNU General Public License at: http://bioinformatics.ubc.ca/atlas/
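
    The design described above, relational models per data type managed through SQL calls wrapped in retrieval APIs, can be sketched in miniature with Python's sqlite3; the two-table schema and function names here are illustrative assumptions, not Atlas's actual schema or API:

```python
import sqlite3

def create_db(path=":memory:"):
    """Create a toy relational store in the spirit of Atlas:
    one table per data type, with enforced relationships."""
    conn = sqlite3.connect(path)
    conn.executescript("""
        CREATE TABLE gene (id INTEGER PRIMARY KEY, symbol TEXT, taxon TEXT);
        CREATE TABLE interaction (
            gene_a INTEGER REFERENCES gene(id),
            gene_b INTEGER REFERENCES gene(id),
            source TEXT  -- e.g. 'BIND', 'DIP', 'MINT'
        );
    """)
    return conn

def load_gene(conn, symbol, taxon):
    """Loader-style API call: insert one gene record, return its id."""
    cur = conn.execute("INSERT INTO gene (symbol, taxon) VALUES (?, ?)",
                       (symbol, taxon))
    return cur.lastrowid

def interactions_for(conn, symbol):
    """Retrieval-style API call: all partners of a gene, across sources."""
    return conn.execute("""
        SELECT g2.symbol, i.source
        FROM interaction i
        JOIN gene g1 ON g1.id = i.gene_a
        JOIN gene g2 ON g2.id = i.gene_b
        WHERE g1.symbol = ?
    """, (symbol,)).fetchall()

conn = create_db()
a = load_gene(conn, "TP53", "9606")
b = load_gene(conn, "MDM2", "9606")
conn.execute("INSERT INTO interaction VALUES (?, ?, ?)", (a, b, "BIND"))
print(interactions_for(conn, "TP53"))  # [('MDM2', 'BIND')]
```

    End users call the API functions and never write SQL themselves, which is the division of labour the abstract describes between the loader/toolbox applications and the underlying relational models.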

  1. Atlas – a data warehouse for integrative bioinformatics

    PubMed Central

    Shah, Sohrab P; Huang, Yong; Xu, Tao; Yuen, Macaire MS; Ling, John; Ouellette, BF Francis

    2005-01-01

    Background We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure for bioinformatics research and development. Description The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, Human Protein Reference Database (HPRD), Biomolecular Interaction Network Database (BIND), Database of Interacting Proteins (DIP), Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. Conclusion The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. 
First, Atlas stores data of similar types using common data models, enforcing the relationships between data types. Second, integration is achieved through a combination of APIs, ontology, and tools. The Atlas software is freely available under the GNU General Public License at: PMID:15723693

  2. United states national land cover data base development? 1992-2001 and beyond

    USGS Publications Warehouse

    Yang, L.

    2008-01-01

    An accurate, up-to-date and spatially-explicit national land cover database is required for monitoring the status and trends of the nation's terrestrial ecosystems, and for managing and conserving land resources at the national scale. Given the challenges and resources required to develop such a database, innovative and scientifically sound planning must be in place and a partnership formed among users from government agencies, research institutes and the private sector. In this paper, we summarize major scientific and technical issues regarding the development of the NLCD 1992 and 2001. Experiences and lessons learned from the project are documented with regard to project design, technical approaches, accuracy assessment strategy, and project implementation. Future improvements in developing the next-generation NLCD beyond 2001 are suggested, including: 1) enhanced satellite data preprocessing to correct atmospheric and adjacency effects and perform topographic normalization; 2) improved classification accuracy through comprehensive and consistent training data and new algorithm development; 3) a multi-resolution and multi-temporal database targeting major land cover changes and land cover database updates; 4) enriched database contents, including additional biophysical parameters and/or more detailed land cover classes, through synergizing multi-sensor, multi-temporal, and multi-spectral satellite data and ancillary data; and 5) transforming the NLCD project into a national land cover monitoring program. © 2008 IEEE.

  3. Energy science and technology database (on the internet). Online data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The Energy Science and Technology Database (EDB) is a multidisciplinary file containing worldwide references to basic and applied scientific and technical research literature. The information is collected for use by government managers, researchers at the national laboratories, and other research efforts sponsored by the U.S. Department of Energy, and the results of this research are transferred to the public. Abstracts are included for records from 1976 to the present. The EDB also contains the Nuclear Science Abstracts, a comprehensive abstract and index collection of the international nuclear science and technology literature for the period 1948 through 1976. Included are scientific and technical reports of the U.S. Atomic Energy Commission, U.S. Energy Research and Development Administration and its contractors, other agencies, universities, and industrial and research organizations. Approximately 25% of the records in the file contain abstracts. Nuclear Science Abstracts contains over 900,000 bibliographic records. The entire Energy Science and Technology Database contains over 3 million bibliographic records. This database is now available for searching through the GOV.Research-Center (GRC) service. GRC is a single online web-based search service to well-known Government databases. Featuring powerful search and retrieval software, GRC is an important research tool. The GRC web site is at http://grc.ntis.gov.

  4. The LSST Data Mining Research Agenda

    NASA Astrophysics Data System (ADS)

    Borne, K.; Becla, J.; Davidson, I.; Szalay, A.; Tyson, J. A.

    2008-12-01

    We describe features of the LSST science database that are amenable to scientific data mining, object classification, outlier identification, anomaly detection, image quality assurance, and survey science validation. The data mining research agenda includes: scalability (at petabyte scales) of existing machine learning and data mining algorithms; development of grid-enabled parallel data mining algorithms; design of a robust system for brokering classifications from the LSST event pipeline (which may produce 10,000 or more event alerts per night); multi-resolution methods for exploration of petascale databases; indexing of multi-attribute, multi-dimensional astronomical databases (beyond spatial indexing) for rapid querying of petabyte databases; and more.
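
    One of the tasks listed above, outlier identification over a high-volume event stream, can be sketched with an online single-pass method; this is a generic illustration using Welford's algorithm, not LSST pipeline code:

```python
import math

class StreamingOutlierFlagger:
    """Flag events more than `k` standard deviations from the running
    mean of a feature (e.g. a brightness change), using Welford's
    online algorithm so the stream never has to be held in memory."""
    def __init__(self, k=3.0):
        self.k, self.n, self.mean, self.m2 = k, 0, 0.0, 0.0

    def update(self, x):
        # Welford's update of the running mean and sum of squared deviations.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_outlier(self, x):
        if self.n < 2:
            return False  # not enough history to judge
        std = math.sqrt(self.m2 / (self.n - 1))
        return std > 0 and abs(x - self.mean) > self.k * std

flagger = StreamingOutlierFlagger(k=3.0)
for value in [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.3, 10.1]:
    flagger.update(value)
print(flagger.is_outlier(25.0))  # far from the baseline -> True
print(flagger.is_outlier(10.0))  # consistent with baseline -> False
```

    The constant-memory property is the point: at 10,000+ alerts per night, per-event decisions must not require revisiting the full history.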

  5. They See a Rat, We Seek a Cure for Diseases: The Current Status of Animal Experimentation in Medical Practice

    PubMed Central

    Kehinde, Elijah O.

    2013-01-01

    The objective of this review article was to examine current and prospective developments in the scientific use of laboratory animals, and to find out whether or not there are still valid scientific benefits of and justification for animal experimentation. The PubMed and Web of Science databases were searched using the following key words: animal models, basic research, pharmaceutical research, toxicity testing, experimental surgery, surgical simulation, ethics, animal welfare, benign, malignant diseases. Important relevant reviews, original articles and references from 1970 to 2012 were reviewed for data on the use of experimental animals in the study of diseases. The use of laboratory animals in scientific research continues to generate intense public debate. Their use can be justified today in the following areas of research: basic scientific research, use of animals as models for human diseases, pharmaceutical research and development, toxicity testing and teaching of new surgical techniques. This is because there are inherent limitations in the use of alternatives such as in vitro studies, human clinical trials or computer simulation. However, there are problems of transferability of results obtained from animal research to humans. Efforts are on-going to find suitable alternatives to animal experimentation like cell and tissue culture and computer simulation. For the foreseeable future, it would appear that to enable scientists to have a more precise understanding of human disease, including its diagnosis, prognosis and therapeutic intervention, there will still be enough grounds to advocate animal experimentation. However, efforts must continue to minimize or eliminate the need for animal testing in scientific research as soon as possible. PMID:24217224

  6. They see a rat, we seek a cure for diseases: the current status of animal experimentation in medical practice.

    PubMed

    Kehinde, Elijah O

    2013-01-01

    The objective of this review article was to examine current and prospective developments in the scientific use of laboratory animals, and to find out whether or not there are still valid scientific benefits of and justification for animal experimentation. The PubMed and Web of Science databases were searched using the following key words: animal models, basic research, pharmaceutical research, toxicity testing, experimental surgery, surgical simulation, ethics, animal welfare, benign, malignant diseases. Important relevant reviews, original articles and references from 1970 to 2012 were reviewed for data on the use of experimental animals in the study of diseases. The use of laboratory animals in scientific research continues to generate intense public debate. Their use can be justified today in the following areas of research: basic scientific research, use of animals as models for human diseases, pharmaceutical research and development, toxicity testing and teaching of new surgical techniques. This is because there are inherent limitations in the use of alternatives such as in vitro studies, human clinical trials or computer simulation. However, there are problems of transferability of results obtained from animal research to humans. Efforts are on-going to find suitable alternatives to animal experimentation like cell and tissue culture and computer simulation. For the foreseeable future, it would appear that to enable scientists to have a more precise understanding of human disease, including its diagnosis, prognosis and therapeutic intervention, there will still be enough grounds to advocate animal experimentation. However, efforts must continue to minimize or eliminate the need for animal testing in scientific research as soon as possible. © 2013 S. Karger AG, Basel.

  7. A pilot GIS database of active faults of Mt. Etna (Sicily): A tool for integrated hazard evaluation

    NASA Astrophysics Data System (ADS)

    Barreca, Giovanni; Bonforte, Alessandro; Neri, Marco

    2013-02-01

    A pilot GIS-based system has been implemented for the assessment and analysis of hazard related to active faults affecting the eastern and southern flanks of Mt. Etna. The system structure was developed in the ArcGis® environment and consists of different thematic datasets that include spatially-referenced arc-features and an associated database. Arc-type features, georeferenced into the WGS84 Ellipsoid UTM zone 33 Projection, represent the five main fault systems that develop in the analysed region. The backbone of the GIS-based system is constituted by the large amount of information which was collected from the literature and then stored and properly geocoded in a digital database. This consists of thirty-five alpha-numeric fields which include all fault parameters available from the literature, such as location, kinematics, landform, slip rate, etc. Although the system has been implemented according to the most common procedures used by GIS developers, the architecture and content of the database represent a pilot backbone for digital storage of fault parameters, providing a powerful tool in modelling hazard related to the active tectonics of Mt. Etna. The database collects, organises and shares all currently available scientific information about the active faults of the volcano. Furthermore, thanks to the strong effort spent on defining the fields of the database, the structure proposed in this paper is open to the collection of further data coming from future improvements in the knowledge of the fault systems. By layering additional user-specific geographic information and querying the proposed database (topological querying), a great diversity of hazard and vulnerability maps can be produced by the user. This is a proposal of a backbone for a comprehensive geographical database of fault systems, universally applicable to other sites.
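
    An attribute-table design of the kind described above, arc features carrying alpha-numeric fault parameters that can be queried to drive hazard maps, might look like the following sketch; the field names and fault records are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Fault:
    """One arc feature with a small subset of the kinds of attribute
    fields such a database stores (names are illustrative)."""
    name: str
    kinematics: str          # e.g. 'normal', 'strike-slip'
    slip_rate_mm_yr: float
    vertices: list = field(default_factory=list)  # (easting, northing), UTM 33N

def query(faults, kinematics=None, min_slip_rate=None):
    """Attribute query of the kind a GIS 'select by attributes' runs."""
    out = faults
    if kinematics is not None:
        out = [f for f in out if f.kinematics == kinematics]
    if min_slip_rate is not None:
        out = [f for f in out if f.slip_rate_mm_yr >= min_slip_rate]
    return out

catalogue = [
    Fault("Fault A", "normal", 2.1, [(500000, 4170000), (501200, 4171500)]),
    Fault("Fault B", "strike-slip", 0.8, [(498500, 4168000), (499000, 4169200)]),
    Fault("Fault C", "normal", 0.4, [(502000, 4172000), (502500, 4172900)]),
]
fast_normal = query(catalogue, kinematics="normal", min_slip_rate=1.0)
print([f.name for f in fast_normal])  # -> ['Fault A']
```

    Each query result is a subset of arc features, which a GIS layer can then symbolize into a hazard or vulnerability map.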

  8. On Establishing Big Data Wave Breakwaters with Analytics (Invited)

    NASA Astrophysics Data System (ADS)

    Riedel, M.

    2013-12-01

    The Research Data Alliance Big Data Analytics (RDA-BDA) Interest Group seeks to develop community-based recommendations on feasible data analytics approaches to address scientific community needs for utilizing large quantities of data. RDA-BDA seeks to analyze different scientific domain applications and their potential use of various big data analytics techniques. A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics and scientific queries will be covered in these recommendations. These combinations are complex, since a wide variety of different data analysis algorithms exist (e.g. specific algorithms using GPUs for analyzing brain images) that need to work together with multiple analytical tools, ranging from simple (iterative) map-reduce methods (e.g. with Apache Hadoop or Twister) to sophisticated higher-level frameworks that leverage machine learning algorithms (e.g. Apache Mahout). These computational analysis techniques are often augmented with visual analytics techniques (e.g. computational steering on large-scale high performance computing platforms) to put human judgement into the analysis loop, or with new approaches to databases that are designed to support new forms of unstructured or semi-structured data as opposed to the rather traditional structured databases (e.g. relational databases). More recently, data analysis and the underpinning analytics frameworks also have to consider the energy footprints of the underlying resources. To sum up, the aim of this talk is to provide pieces of information to understand big data analytics in the context of science and engineering, using the aforementioned classification as the lighthouse and as the frame of reference for a systematic approach. 
This talk will provide insights into big data analytics methods in the context of science within various communities, and offers different views of how approaches based on correlation and causality provide complementary methods to advance science and engineering today. The RDA Big Data Analytics Group seeks to understand which approaches are not only technically feasible, but also scientifically feasible. The lighthouse goal of the RDA Big Data Analytics Group is a classification of clever combinations of various technologies and scientific applications, in order to provide clear recommendations to the scientific community on which approaches are technically and scientifically feasible.
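
    The simple map-reduce pattern mentioned above, which tools such as Apache Hadoop distribute across a cluster, reduces to three steps that can be shown in-process; the word-count task here is the conventional toy example, not an RDA-BDA workload:

```python
from collections import defaultdict

# A minimal in-process map-reduce: map each record to (key, value)
# pairs, shuffle pairs into groups by key, then reduce each group.
def map_phase(doc):
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data analytics", "big data", "data"]
pairs = [p for doc in docs for p in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 2, 'data': 3, 'analytics': 1}
```

    Distributed engines run the same three phases, but spread the map and reduce calls across machines and perform the shuffle over the network, which is what makes the pattern scale to the data volumes the abstract discusses.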

  9. From Peer-Reviewed to Peer-Reproduced in Scholarly Publishing: The Complementary Roles of Data Models and Workflows in Bioinformatics

    PubMed Central

    Zhao, Jun; Avila-Garcia, Maria Susana; Roos, Marco; Thompson, Mark; van der Horst, Eelke; Kaliyaperumal, Rajaram; Luo, Ruibang; Lee, Tin-Lap; Lam, Tak-wah; Edmunds, Scott C.; Sansone, Susanna-Assunta

    2015-01-01

    Motivation Reproducing the results from a scientific paper can be challenging due to the absence of data and the computational tools required for their analysis. In addition, details relating to the procedures used to obtain the published results can be difficult to discern due to the use of natural language when reporting how experiments have been performed. The Investigation/Study/Assay (ISA), Nanopublications (NP), and Research Objects (RO) models are conceptual data modelling frameworks that can structure such information from scientific papers. Computational workflow platforms can also be used to reproduce analyses of data in a principled manner. We assessed the extent to which ISA, NP, and RO models, together with the Galaxy workflow system, can capture the experimental processes and reproduce the findings of a previously published paper reporting on the development of SOAPdenovo2, a de novo genome assembler. Results Executable workflows were developed using Galaxy, which reproduced results that were consistent with the published findings. A structured representation of the information in the SOAPdenovo2 paper was produced by combining the use of ISA, NP, and RO models. By structuring the information in the published paper using these data and scientific workflow modelling frameworks, it was possible to explicitly declare elements of experimental design, variables, and findings. The models served as guides in the curation of scientific information and this led to the identification of inconsistencies in the original published paper, thereby allowing its authors to publish corrections in the form of an erratum. 
Availability SOAPdenovo2 scripts, data, and results are available through the GigaScience Database: http://dx.doi.org/10.5524/100044; the workflows are available from GigaGalaxy: http://galaxy.cbiit.cuhk.edu.hk; and the representations using the ISA, NP, and RO models are available through the SOAPdenovo2 case study website http://isa-tools.github.io/soapdenovo2/. Contact: philippe.rocca-serra@oerc.ox.ac.uk and susanna-assunta.sansone@oerc.ox.ac.uk. PMID:26154165

  10. Resources | Office of Cancer Genomics

    Cancer.gov

    OCG provides a variety of scientific and educational resources for both cancer researchers and members of the general public. These resources are divided into the following types: OCG-Supported Resources: Tools, databases, and reagents generated by initiated and completed OCG programs for researchers, educators, and students. (Note: Databases for current OCG programs are available through program-specific data matrices)

  11. Bosnian and Herzegovinian medical scientists in PubMed database.

    PubMed

    Masic, Izet

    2013-01-01

    This paper briefly presents PubMed, one of the most important online databases of scientific biomedical literature. The author also analyzes the most cited authors among professors of the medical faculties in Bosnia and Herzegovina, based on their papers published in biomedical journals abstracted and indexed in PubMed.

  12. Elsevier's Vanishing Act: To the Dismay of Scholars, the Publishing Giant Quietly Purges Articles from Its Database.

    ERIC Educational Resources Information Center

    Foster, Andrea L.

    2003-01-01

    Elsevier, the largest publisher of scientific journals, has removed journal articles from its database, often without providing reasons. The usual reason for removing an article is fear of copyright litigation, but critics of the policy fear that information gaps or misleading lack of data will develop. (SLD)

  13. Discrete Lognormal Model as an Unbiased Quantitative Measure of Scientific Performance Based on Empirical Citation Data

    NASA Astrophysics Data System (ADS)

    Moreira, Joao; Zeng, Xiaohan; Amaral, Luis

    2013-03-01

    Assessing the career performance of scientists has become essential to modern science. Bibliometric indicators, like the h-index, are becoming more and more decisive in evaluating grants and approving publication of articles. However, many of the most widely used indicators can be manipulated or falsified, for instance by publishing with very prolific researchers or by self-citing papers with a certain number of citations. Accounting for these factors is possible, but it introduces unwanted complexity that drives us further from the purpose of the indicator: to represent in a clear way the prestige and importance of a given scientist. Here we try to overcome this challenge. We used the Thomson Reuters Web of Science database and analyzed all the papers published until 2000 by ~1500 researchers in the top 30 departments of seven scientific fields. We find that over 97% of them have a citation distribution that is consistent with a discrete lognormal model. This suggests that our model can be used to accurately predict the performance of a researcher. Furthermore, this predictor does not depend on the individual number of publications and is not easily ``gamed''. The authors acknowledge support from FCT Portugal, and NSF grants
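
    A moment-matching sketch of the discrete lognormal idea described above: treat log(c + 1) of citation counts c as approximately normal, estimate its mean and standard deviation, and read off model predictions from the normal CDF. This illustrates the model family only; it is not the authors' actual fitting procedure, and the citation counts are invented:

```python
import math
import statistics

def fit_discrete_lognormal(citations):
    """Estimate (mu, sigma) by moment matching on the log scale."""
    logs = [math.log(c + 1) for c in citations]
    return statistics.mean(logs), statistics.stdev(logs)

def frac_above(threshold, mu, sigma):
    """Model-predicted fraction of papers with more than `threshold`
    citations, via the normal CDF on the log scale."""
    z = (math.log(threshold + 1) - mu) / sigma
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))

# Invented citation record for one hypothetical researcher.
citations = [0, 1, 1, 2, 3, 5, 8, 13, 21, 40, 90]
mu, sigma = fit_discrete_lognormal(citations)
print(round(mu, 2), round(sigma, 2))
print(round(frac_above(10, mu, sigma), 2))
```

    Because the fitted (mu, sigma) summarize the whole distribution rather than a single tail statistic, a summary of this kind is harder to inflate by adding a few strategically cited papers, which is the robustness property the abstract claims.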

  14. The Role of Emotional Factors in Building Public Scientific Literacy and Engagement with Science

    ERIC Educational Resources Information Center

    Lin, Huann-shyang; Hong, Zuway-R.; Huang, Tai-Chu

    2012-01-01

    This study uses the database from an extensive international study on 15-year-old students (N = 8,815) to analyze the relationship between emotional factors and students' scientific literacy and explore the potential link between the emotions of the students and subsequent public engagement with science. The results revealed that students'…

  15. The DoD Gateway Information System: Bibliography, Directory of Resources, Prototype Experience, [and] User Interface Design.

    ERIC Educational Resources Information Center

    Cotter, Gladys A.; And Others

    The Defense Technical Information Center (DTIC), an organization charged with providing information services to the Department of Defense (DoD) scientific and technical community, actively seeks ways to promote access to and utilization of scientific and technical information (STI) databases, online services, and networks relevant to the conduct…

  16. An Idea for the Future of Dental Research: A Cloud-Based Clinical Network and Database

    ERIC Educational Resources Information Center

    Owtad, Payam; Taichman, Russell; Park, Jae Hyun; Yaibuathes, Sorn; Knapp, John

    2013-01-01

    Evidence-based dentistry (EBD) is an approach to oral healthcare requiring systematic assessment of scientific evidence relevant to clinical practice and patients' needs. EBD attempts to globally establish personalized dental care based upon the most recent and highest-order scientific evidence. However, sometimes EBD does not consider local…

  17. The Impact of the Programme for International Student Assessment on Academic Journals

    ERIC Educational Resources Information Center

    Dominguez, Maria; Vieira, Maria-Jose; Vidal, Javier

    2012-01-01

    The aim of this study is to assess the impact of PISA (Programme for International Student Assessment) on international scientific journals. A bibliometric analysis was conducted of publications included in three main scientific publication databases: ERIC, EBSCOhost and the ISI Web of Knowledge, from 2002 to 2010. The paper focused on four main…

  18. IRIS Toxicological Review of Dichloromethane (Methylene ...

    EPA Pesticide Factsheets

    EPA is conducting a peer review and public comment of the scientific basis supporting the human health hazard and dose-response assessment of Dichloromethane that when finalized will appear on the Integrated Risk Information System (IRIS) database. The draft Toxicological Review of Dichloromethane provides scientific support and rationale for the hazard and dose-response assessment pertaining to chronic exposure to dichloromethane.

  19. Virtual Observatory and Distributed Data Mining

    NASA Astrophysics Data System (ADS)

    Borne, Kirk D.

    2012-03-01

    New modes of discovery are enabled by the growth of data and computational resources (i.e., cyberinfrastructure) in the sciences. This cyberinfrastructure includes structured databases, virtual observatories (distributed data, as described in Section 20.2.1 of this chapter), high-performance computing (petascale machines), distributed computing (e.g., the Grid, the Cloud, and peer-to-peer networks), intelligent search and discovery tools, and innovative visualization environments. Data streams from experiments, sensors, and simulations are increasingly complex and growing in volume. This is true in most sciences, including astronomy, climate simulations, Earth observing systems, remote sensing data collections, and sensor networks. At the same time, we see an emerging confluence of new technologies and approaches to science, most clearly visible in the growing synergism of the four modes of scientific discovery: sensors-modeling-computing-data (Eastman et al. 2005). This has been driven by numerous developments, including the information explosion, development of large-array sensors, acceleration in high-performance computing (HPC) power, advances in algorithms, and efficient modeling techniques. Among these, the most extreme is the growth in new data. Specifically, the acquisition of data in all scientific disciplines is rapidly accelerating and causing a data glut (Bell et al. 2007). It has been estimated that data volumes double every year—for example, the NCSA (National Center for Supercomputing Applications) reported that their users cumulatively generated one petabyte of data over the first 19 years of NCSA operation, but they then generated their next one petabyte in the next year alone, and the data production has been growing by almost 100% each year after that (Butler 2008). The NCSA example is just one of many demonstrations of the exponential (annual data-doubling) growth in scientific data collections. 
In general, this putative data-doubling is an inevitable result of several compounding factors: the proliferation of data-generating devices, sensors, projects, and enterprises; the 18-month doubling of the digital capacity of these microprocessor-based sensors and devices (commonly referred to as "Moore’s law"); the move to digital for nearly all forms of information; the increase in human-generated data (both unstructured information on the web and structured data from experiments, models, and simulation); and the ever-expanding capability of higher density media to hold greater volumes of data (i.e., data production expands to fill the available storage space). These factors are consequently producing an exponential data growth rate, which will soon (if not already) become an insurmountable technical challenge even with the great advances in computation and algorithms. This technical challenge is compounded by the ever-increasing geographic dispersion of important data sources—the data collections are not stored uniformly at a single location, or with a single data model, or in uniform formats and modalities (e.g., images, databases, structured and unstructured files, and XML data sets)—the data are in fact large, distributed, heterogeneous, and complex. The greatest scientific research challenge with these massive distributed data collections is consequently extracting all of the rich information and knowledge content contained therein, thus requiring new approaches to scientific research. This emerging data-intensive and data-oriented approach to scientific research is sometimes called discovery informatics or X-informatics (where X can be any science, such as bio, geo, astro, chem, eco, or anything; Agresti 2003; Gray 2003; Borne 2010). This data-oriented approach to science is now recognized by some (e.g., Mahootian and Eastman 2009; Hey et al. 
2009) as the fourth paradigm of research, following (historically) experiment/observation, modeling/analysis, and computational science.
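
    The annual data-doubling claim above amounts to simple compound growth. As a hedged illustration (the starting volume and growth rate are taken loosely from the NCSA example cited, not from measured figures), the cumulative volume after a number of years can be sketched as:

```python
# Sketch of the annual data-doubling model described above. The figures
# are illustrative, loosely based on the cited NCSA example: roughly one
# petabyte accumulated over the first 19 years, then ~100% growth per year.
def cumulative_volume(v0_pb, years, annual_growth=1.0):
    """Cumulative data volume (petabytes) after compound annual growth."""
    return v0_pb * (1.0 + annual_growth) ** years

start = 1.0  # PB held at the end of the first 19 years of operation
for year in range(1, 6):
    print(f"year {year}: {cumulative_volume(start, year):.0f} PB")
```

    At a 100% annual rate the archive holds 32 PB after five more years, which is why the text describes this growth as an emerging insurmountable technical challenge.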

  20. E-Labs - Learning with Authentic Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bardeen, Marjorie G.; Wayne, Mitchell

    the success teachers have had providing an opportunity for students to: • Organize and conduct authentic research. • Experience the environment of scientific collaborations. • Possibly make real contributions to a burgeoning scientific field. We've created projects that are problem-based, student-driven and technology-dependent. Students reach beyond classroom walls to explore data with other students and experts and share results, publishing original work to a worldwide audience. Students can discover and extend the research of other students, modeling the processes of modern, large-scale research projects. From start to finish e-Labs are student-led, teacher-guided projects. Students need only a Web browser to access computing techniques employed by professional researchers. A Project Map with milestones allows students to set the research plan rather than follow a step-by-step process common in other online projects. Most importantly, e-Labs build the learning experience around the students' own questions and let them use the very tools that scientists use. Students contribute to and access shared data, most derived from professional research databases. They use common analysis tools, store their work and use metadata to discover, replicate and confirm the research of others. This is where real scientific collaboration begins. Using online tools, students correspond with other research groups, post comments and questions, prepare summary reports, and in general participate in the part of scientific research that is often left out of classroom experiments. Teaching tools such as student and teacher logbooks, pre- and post-tests and an assessment rubric aligned with learner outcomes help teachers guide student work. Constraints on interface designs and administrative tools such as registration databases give teachers the "one-stop-shopping" they seek for multiple e-Labs. Teaching and administrative tools also allow us to track usage and assess the impact on student learning.

  1. Scientometric and patentometric analyses to determine the knowledge landscape in innovative technologies: The case of 3D bioprinting

    PubMed Central

    2017-01-01

    This research proposes an innovative data model to determine the landscape of emerging technologies. It is based on a competitive technology intelligence methodology that incorporates the assessment of scientific publication and patent production, and is further supported by experts' feedback. It enables the definition of the growth rate of scientific and technological output in terms of the top countries, institutions and journals producing knowledge within the field, as well as the identification of the main areas of research and development by analyzing International Patent Classification codes, including keyword clustering and the co-occurrence of patent assignees and patent codes. This model was applied to the evolving domain of 3D bioprinting. Scientific documents from the Scopus and Web of Science databases, along with patents from 27 authorities and 140 countries, were retrieved. In total, 4782 scientific publications and 706 patents were identified from 2000 to mid-2016. The number of scientific documents published and patents granted in the last five years showed an annual average growth of 20% and 40%, respectively. Results indicate that the most prolific nations and institutions publishing on 3D bioprinting are the USA and China, including the Massachusetts Institute of Technology (USA), Nanyang Technological University (Singapore) and Tsinghua University (China). Biomaterials and Biofabrication are the predominant journals. The most prolific patenting countries are China and the USA, while Organovo Holdings Inc. (USA) and Tsinghua University (China) are the leading institutions. International Patent Classification codes reveal that most 3D bioprinting inventions intended for medical purposes apply porous or cellular materials or biologically active materials. Knowledge clusters and expert drivers indicate that there is a research focus on tissue engineering, including the fabrication of organs, bioinks and new 3D bioprinting systems. Our model offers a guide to researchers to understand the knowledge production of pioneering technologies, in this case 3D bioprinting. PMID:28662187
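
    The co-occurrence analysis of patent codes mentioned above can be sketched as a pairwise count over per-patent code lists. The IPC codes and toy patents below are invented for illustration; this is not the authors' pipeline:

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-patent lists of International Patent Classification codes.
patents = [
    ["A61L27/38", "B33Y80/00"],
    ["A61L27/38", "B33Y80/00", "C12N5/00"],
    ["B33Y80/00", "C12N5/00"],
]

# Count each unordered pair of codes that appears together in one patent;
# clustering methods then operate on this co-occurrence matrix.
cooccurrence = Counter()
for codes in patents:
    for pair in combinations(sorted(set(codes)), 2):
        cooccurrence[pair] += 1

for pair, count in cooccurrence.most_common():
    print(pair, count)
```

    The same counting applies unchanged to patent assignees or author keywords, which is how the model links codes, assignees and research themes.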

  3. User Guidelines for the Brassica Database: BRAD.

    PubMed

    Wang, Xiaobo; Cheng, Feng; Wang, Xiaowu

    2016-01-01

    The genome sequence of Brassica rapa was first released in 2011. Since then, further Brassica genomes have been sequenced or are undergoing sequencing. It is therefore necessary to develop tools that help users to mine information from genomic data efficiently. This will greatly aid scientific exploration and breeding application, especially for those with little bioinformatics training. Therefore, the Brassica database (BRAD) was built to collect, integrate, illustrate, and visualize Brassica genomic datasets. BRAD provides useful searching and data mining tools, and facilitates the search of gene annotation datasets, syntenic or non-syntenic orthologs, and flanking regions of functional genomic elements. It also includes genome-analysis tools such as BLAST and GBrowse. One of the important aims of BRAD is to build a bridge between Brassica crop genomes and the genome of the model species Arabidopsis thaliana, thus making the bulk of A. thaliana gene study information available for use with newly sequenced Brassica crops.

  4. Proteomics data repositories: Providing a safe haven for your data and acting as a springboard for further research

    PubMed Central

    Vizcaíno, Juan Antonio; Foster, Joseph M.; Martens, Lennart

    2010-01-01

    Although data deposition is not yet general practice in the field of proteomics, several mass spectrometry (MS) based proteomics repositories are publicly available to the scientific community. The main existing resources are: the Global Proteome Machine Database (GPMDB), PeptideAtlas, the PRoteomics IDEntifications database (PRIDE), Tranche, and NCBI Peptidome. In this review the capabilities of each of these will be described, paying special attention to four key properties: data types stored, applicable data submission strategies, supported formats, and available data mining and visualization tools. Additionally, the data contents from model organisms will be enumerated for each resource. There are other valuable smaller and/or more specialized repositories, but they will not be covered in this review. Finally, the concept behind the ProteomeXchange consortium, a collaborative effort among the main resources in the field, will be introduced. PMID:20615486

  5. Image Reference Database in Teleradiology: Migrating to WWW

    NASA Astrophysics Data System (ADS)

    Pasqui, Valdo

    The paper presents a multimedia Image Reference Data Base (IRDB) used in Teleradiology. The application was developed at the University of Florence in the framework of the European Community TELEMED Project. TELEMED overall goals and IRDB requirements are outlined and the resulting architecture is described. IRDB is a multisite database containing radiological images, selected for their scientific interest, together with their related information. The architecture consists of a set of IRDB Installations which are accessed from Viewing Stations (VS) located at different medical sites. The interaction between VS and IRDB Installations follows the client-server paradigm and uses an OSI level-7 protocol, named Telemed Communication Language. After reviewing the Florence prototype implementation and experimentation, IRDB migration to the World Wide Web (WWW) is discussed. A possible scenario for implementing IRDB on the basis of the WWW model is depicted, in order to exploit the capabilities of WWW servers and browsers. Finally, the advantages of this conversion are outlined.

  6. Uncovering Capgras delusion using a large-scale medical records database

    PubMed Central

    Marshall, Caryl; Kanji, Zara; Wilkinson, Sam; Halligan, Peter; Deeley, Quinton

    2017-01-01

    Background Capgras delusion is scientifically important but is most commonly reported in single case studies. Studies analysing large clinical records databases focus on common disorders, but none have investigated rare syndromes. Aims To identify cases of Capgras delusion and associated psychopathology, demographics, cognitive function and neuropathology in light of existing models. Method Combined computational data extraction and qualitative classification using 250 000 case records from the South London and Maudsley Clinical Record Interactive Search (CRIS) database. Results We identified 84 individuals and extracted diagnosis-matched comparison groups. Capgras was not ‘monothematic’ in the majority of cases. Most cases involved misidentified family members or close partners, but others were misidentified in 25% of cases, contrary to dual-route face recognition models. Neuroimaging provided no evidence for predominantly right hemisphere damage. Individuals were ethnically diverse with a range of psychosis spectrum diagnoses. Conclusions Capgras is more diverse than current models assume. Identification of rare syndromes complements existing ‘big data’ approaches in psychiatry. Declaration of interests V.B. is supported by a Wellcome Trust Seed Award in Science (200589/Z/16/Z) and the UCLH NIHR Biomedical Research Centre. S.W. is supported by a Wellcome Trust Strategic Award (WT098455MA). Q.D. has received a grant from King’s Health Partners. Copyright and usage © The Royal College of Psychiatrists 2017. This is an open access article distributed under the terms of the Creative Commons Non-Commercial, No Derivatives (CC BY-NC-ND) license. PMID:28794897

  7. "XANSONS for COD": a new small BOINC project in crystallography

    NASA Astrophysics Data System (ADS)

    Neverov, Vladislav S.; Khrapov, Nikolay P.

    2018-04-01

    "XANSONS for COD" (http://xansons4cod.com) is a new BOINC project aimed at creating the open-access database of simulated x-ray and neutron powder diffraction patterns for nanocrystalline phase of materials from the collection of the Crystallography Open Database (COD). The project uses original open-source software XaNSoNS to simulate diffraction patterns on CPU and GPU. This paper describes the scientific problem this project solves, the project's internal structure, its operation principles and organization of the final database.
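
    Simulating a powder pattern from atomic coordinates, as this project does, is commonly based on the Debye scattering equation, I(q) = sum over atom pairs i,j of f_i f_j sin(q r_ij)/(q r_ij). A minimal sketch for identical atoms follows (a toy illustration of the general approach, not the XaNSoNS code itself):

```python
import math

def debye_intensity(q, coords, f=1.0):
    """Debye scattering equation for identical atoms with form factor f:
    I(q) = sum over atom pairs of f^2 * sin(q*r_ij)/(q*r_ij),
    with the i == j terms contributing f^2 each (sinc(0) = 1)."""
    total = 0.0
    for a in coords:
        for b in coords:
            r = math.dist(a, b)
            total += f * f * (math.sin(q * r) / (q * r) if r > 0 else 1.0)
    return total

# Two identical atoms 2 units apart: I(q) = 2 + 2*sin(2q)/(2q)
atoms = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(debye_intensity(1.0, atoms))
```

    The double loop is O(N^2) in the number of atoms, which is why projects like this one distribute the computation over volunteer CPUs and GPUs.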

  8. CRISPR-Cas in Medicinal Chemistry: Applications and Regulatory Concerns.

    PubMed

    Duardo-Sanchez, Aliuska

    2017-01-01

    A rapid search of scientific publication databases shows how considerably the use of the CRISPR-Cas genome-editing technique has expanded, and its growing importance in modern molecular biology. In the PubMed platform alone, a search for the term gives more than 3000 results. In Drug Discovery, Medicinal Chemistry and Chemical Biology in general, the CRISPR method may have multiple applications. Some of these applications are: resistance-selection studies of antimalarial lead organic compounds; investigation of druggability; development of animal models for chemical compounds testing, etc. In this paper, we offer a review of the most relevant scientific literature, illustrated with specific examples of the application of the CRISPR technique to medicinal chemistry and chemical biology. We also present a general overview of the main legal and ethical trends regarding this method of genome editing. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  9. [The Open Access Initiative (OAI) in the scientific literature].

    PubMed

    Sánchez-Martín, Francisco M; Millán Rodríguez, Félix; Villavicencio Mavrich, Humberto

    2009-01-01

    According to the Budapest declaration, the Open Access Initiative (OAI) is defined as an editorial model in which access to the scientific journal literature, and its use, are free. The free flow of information allowed by the Internet has been the basis of this initiative. The Bethesda and Berlin declarations, supported by several international agencies, propose requiring researchers to deposit copies of all published articles in a self-archive or an Open Access repository, and encourage researchers to publish their research papers in Open Access journals. This paper reviews the keys to the OAI, with its strengths and controversial aspects, and discusses the position of databases, search engines and repositories of biomedical information, as well as the attitudes of scientists, publishers and journals. The journal Actas Urológicas Españolas (Act Urol Esp) already offers its contents in Open Access online, in Spanish and English.

  10. Four barriers to the global understanding of biodiversity conservation: wealth, language, geographical location and security.

    PubMed

    Amano, Tatsuya; Sutherland, William J

    2013-04-07

    Global biodiversity conservation is seriously challenged by gaps and heterogeneity in the geographical coverage of existing information. Nevertheless, the key barriers to the collection and compilation of biodiversity information at a global scale have yet to be identified. We show that wealth, language, geographical location and security each play an important role in explaining spatial variations in data availability in four different types of biodiversity databases. The number of records per square kilometre is high in countries with high per capita gross domestic product (GDP), high proportion of English speakers and high security levels, and those located close to the country hosting the database; but these are not necessarily countries with high biodiversity. These factors are considered to affect data availability by impeding either the activities of scientific research or active international communications. Our results demonstrate that efforts to solve environmental problems at a global scale will gain significantly by focusing scientific education, communication, research and collaboration in low-GDP countries with fewer English speakers and located far from Western countries that host the global databases; countries that have experienced conflict may also benefit. Findings of this study may be broadly applicable to other fields that require the compilation of scientific knowledge at a global level.

  11. Award for Distinguished Scientific Contributions: Terry E. Robinson.

    PubMed

    2016-11-01

    The APA Awards for Distinguished Scientific Contributions are presented to persons who, in the opinion of the Committee on Scientific Awards, have made distinguished theoretical or empirical contributions to basic research in psychology. One of the 2016 award winners is Terry E. Robinson, who received this award for "outstanding contributions to understanding the psychological and neural mechanisms underlying stimulant drug responses." Robinson's award citation, biography, and a selected bibliography are presented here. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  12. Agent-based computational models to explore diffusion of medical innovations among cardiologists.

    PubMed

    Borracci, Raul A; Giorgi, Mariano A

    2018-04-01

    Diffusion of medical innovations among physicians rests on a set of theoretical assumptions, including learning and decision-making under uncertainty, social-normative pressures, medical expert knowledge, competitive concerns, network performance effects, professional autonomy or individualism, and scientific evidence. The aim of this study was to develop and test four real-data-based, agent-based computational models (ABM) to qualitatively and quantitatively explore the factors associated with the diffusion and application of innovations among cardiologists. Four ABM were developed to study diffusion and application of medical innovations among cardiologists, considering physicians' network connections, leaders' opinions, "adopters' categories", physicians' autonomy, scientific evidence, patients' pressure, affordability for the end-user population, and promotion from companies. Simulations demonstrated that social imitation among local cardiologists was sufficient for innovation diffusion, as long as opinion leaders did not act as detractors of the innovation. Even in the absence of full scientific evidence to support an innovation, up to one-fifth of cardiologists could accept it when local leaders acted as promoters. Patients' pressure showed a large effect size (Cohen's d > 1.2) on the proportion of cardiologists applying an innovation. Two qualitative patterns (speckled and granular) appeared associated with traditional Gompertz and sigmoid cumulative distributions. These computational models provided semiquantitative insight into the emergent collective behavior of a physician population facing the acceptance or refusal of medical innovations. Inclusion in the models of factors related to patients' pressure and accessibility to medical coverage revealed the contrast between accepting and effectively adopting a new product or technology for population health care. Copyright © 2018 Elsevier B.V. All rights reserved.
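
    The social-imitation result described above can be illustrated with a deliberately minimal threshold model (a sketch under assumed parameters, not the authors' four ABM): each cardiologist adopts once the adopting fraction of randomly sampled peers reaches a threshold, and promoting opinion leaders seed the process.

```python
import random

def simulate_diffusion(n=200, neighbors=6, threshold=0.3,
                       leaders_promote=True, steps=50, seed=42):
    """Minimal social-imitation sketch: an agent adopts once the adopting
    fraction of `neighbors` randomly sampled peers reaches `threshold`.
    Promoting opinion leaders seed the process with a few early adopters.
    Returns the final fraction of adopters."""
    rng = random.Random(seed)
    adopted = [False] * n
    if leaders_promote:  # a handful of opinion leaders adopt at the outset
        for i in rng.sample(range(n), 5):
            adopted[i] = True
    for _ in range(steps):
        for i in range(n):
            if not adopted[i]:
                peers = rng.sample(range(n), neighbors)
                if sum(adopted[j] for j in peers) / neighbors >= threshold:
                    adopted[i] = True
    return sum(adopted) / n

print(simulate_diffusion())                        # uptake with promoting leaders
print(simulate_diffusion(leaders_promote=False))   # 0.0: no seed, no diffusion
```

    Even this toy version reproduces the qualitative point in the abstract: with no initial adopters the innovation never spreads, while a few promoting leaders can trigger cascading imitation.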

  13. Database of Novel and Emerging Adsorbent Materials

    National Institute of Standards and Technology Data Gateway

    SRD 205 NIST/ARPA-E Database of Novel and Emerging Adsorbent Materials (Web, free access)   The NIST/ARPA-E Database of Novel and Emerging Adsorbent Materials is a free, web-based catalog of adsorbent materials and measured adsorption properties of numerous materials obtained from article entries from the scientific literature. Search fields for the database include adsorbent material, adsorbate gas, experimental conditions (pressure, temperature), and bibliographic information (author, title, journal), and results from queries are provided as a list of articles matching the search parameters. The database also contains adsorption isotherms digitized from the cataloged articles, which can be compared visually online in the web application or exported for offline analysis.

  14. The DrugAge database of aging-related drugs.

    PubMed

    Barardo, Diogo; Thornton, Daniel; Thoppil, Harikrishnan; Walsh, Michael; Sharifi, Samim; Ferreira, Susana; Anžič, Andreja; Fernandes, Maria; Monteiro, Patrick; Grum, Tjaša; Cordeiro, Rui; De-Souza, Evandro Araújo; Budovsky, Arie; Araujo, Natali; Gruber, Jan; Petrascheck, Michael; Fraifeld, Vadim E; Zhavoronkov, Alexander; Moskalev, Alexey; de Magalhães, João Pedro

    2017-06-01

    Aging is a major worldwide medical challenge. Not surprisingly, identifying drugs and compounds that extend lifespan in model organisms is a growing research area. Here, we present DrugAge (http://genomics.senescence.info/drugs/), a curated database of lifespan-extending drugs and compounds. At the time of writing, DrugAge contains 1316 entries featuring 418 different compounds from studies across 27 model organisms, including worms, flies, yeast and mice. Data were manually curated from 324 publications. Using drug-gene interaction data, we also performed a functional enrichment analysis of targets of lifespan-extending drugs. Enriched terms include various functional categories related to glutathione and antioxidant activity, ion transport and metabolic processes. In addition, we found a modest but significant overlap between targets of lifespan-extending drugs and known aging-related genes, suggesting that some but not most aging-related pathways have been targeted pharmacologically in longevity studies. DrugAge is freely available online for the scientific community and will be an important resource for biogerontologists. © 2017 The Authors. Aging Cell published by the Anatomical Society and John Wiley & Sons Ltd.
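
    The "modest but significant overlap" between drug targets and aging-related genes reported above is the kind of result typically assessed with a hypergeometric tail probability. The sketch below uses only the standard library, and all counts are invented for illustration; they are not DrugAge figures:

```python
from math import comb

def hypergeom_overlap_p(universe, set_a, set_b, overlap):
    """P(overlap >= observed) when a gene set of size `set_a` and one of
    size `set_b` are drawn at random from a `universe` of genes
    (hypergeometric upper tail)."""
    max_k = min(set_a, set_b)
    total = comb(universe, set_b)
    return sum(comb(set_a, k) * comb(universe - set_a, set_b - k)
               for k in range(overlap, max_k + 1)) / total

# Invented example: 20000 genes, 300 drug targets, 500 aging genes, 15 shared
# (expected overlap by chance is 300 * 500 / 20000 = 7.5).
p = hypergeom_overlap_p(20000, 300, 500, 15)
print(f"p = {p:.3g}")
```

    A small p here would support the claim that the observed overlap exceeds chance, i.e., that some aging-related pathways are indeed being targeted pharmacologically.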

  15. A Web Server and Mobile App for Computing Hemolytic Potency of Peptides.

    PubMed

    Chaudhary, Kumardeep; Kumar, Ritesh; Singh, Sandeep; Tuknait, Abhishek; Gautam, Ankur; Mathur, Deepika; Anand, Priya; Varshney, Grish C; Raghava, Gajendra P S

    2016-03-08

    Numerous therapeutic peptides do not enter clinical trials simply because of their high hemolytic activity. Recently, we developed a database, Hemolytik, for maintaining experimentally validated hemolytic and non-hemolytic peptides. The present study describes a web server and mobile app developed for predicting and screening peptides with hemolytic potency. First, we generated a dataset, HemoPI-1, that contains 552 hemolytic peptides extracted from the Hemolytik database and 552 random non-hemolytic peptides (from Swiss-Prot). The sequence analysis of these peptides revealed that certain residues (e.g., L, K, F, W) and motifs (e.g., "FKK", "LKL", "KKLL", "KWK", "VLK", "CYCR", "CRR", "RFC", "RRR", "LKKL") are more abundant in hemolytic peptides. Therefore, we developed models for discriminating hemolytic and non-hemolytic peptides using various machine learning techniques and achieved more than 95% accuracy. We also developed models for discriminating peptides having high and low hemolytic potential on different datasets called HemoPI-2 and HemoPI-3. In order to serve the scientific community, we developed a web server, mobile app and JAVA-based standalone software (http://crdd.osdd.net/raghava/hemopi/).
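
    The residue- and motif-based signal described above can be turned into simple numeric features for a classifier. A sketch follows (the motif list is quoted from the abstract; the featurization itself is illustrative, not the published HemoPI method):

```python
from collections import Counter

# A few of the hemolytic-enriched motifs reported in the abstract.
HEMO_MOTIFS = ["FKK", "LKL", "KKLL", "KWK", "VLK"]

def peptide_features(seq):
    """Amino-acid composition fractions plus overlapping motif counts,
    the kind of sequence features the abstract describes."""
    comp = Counter(seq)
    feats = {f"frac_{aa}": comp[aa] / len(seq)
             for aa in "ACDEFGHIKLMNPQRSTVWY"}
    for motif in HEMO_MOTIFS:
        feats[f"count_{motif}"] = sum(
            seq[i:i + len(motif)] == motif
            for i in range(len(seq) - len(motif) + 1))
    return feats

f = peptide_features("FKKLKLKWK")  # toy lysine-rich peptide
print(f["frac_K"], f["count_FKK"], f["count_KWK"])
```

    Feature dictionaries like this can be fed to any standard machine-learning classifier to discriminate hemolytic from non-hemolytic sequences.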

  16. U.S. Army Research Laboratory (ARL) multimodal signatures database

    NASA Astrophysics Data System (ADS)

    Bennett, Kelly

    2008-04-01

    The U.S. Army Research Laboratory (ARL) Multimodal Signatures Database (MMSDB) is a centralized collection of sensor data of various modalities that are co-located and co-registered. The signatures include ground and air vehicles, personnel, mortar, artillery, small arms gunfire from potential sniper weapons, explosives, and many other high value targets. This data is made available to Department of Defense (DoD) and DoD contractors, Intel agencies, other government agencies (OGA), and academia for use in developing target detection, tracking, and classification algorithms and systems to protect our Soldiers. A platform independent Web interface disseminates the signatures to researchers and engineers within the scientific community. Hierarchical Data Format 5 (HDF5) signature models provide an excellent solution for the sharing of complex multimodal signature data for algorithmic development and database requirements. Many open source tools for viewing and plotting HDF5 signatures are available over the Web. Seamless integration of HDF5 signatures is possible in both proprietary computational environments, such as MATLAB, and Free and Open Source Software (FOSS) computational environments, such as Octave and Python, for performing signal processing, analysis, and algorithm development. Future developments include extending the Web interface into a portal system for accessing ARL algorithms and signatures, High Performance Computing (HPC) resources, and integrating existing database and signature architectures into sensor networking environments.

  17. Making proteomics data accessible and reusable: current state of proteomics databases and repositories.

    PubMed

    Perez-Riverol, Yasset; Alpi, Emanuele; Wang, Rui; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2015-03-01

    Compared to other data-intensive disciplines such as genomics, public deposition and storage of MS-based proteomics data are still less developed due to, among other reasons, the inherent complexity of the data and the variety of data types and experimental workflows. In order to address this need, several public repositories for MS proteomics experiments have been developed, each with different purposes in mind. The most established resources are the Global Proteome Machine Database (GPMDB), PeptideAtlas, and the PRIDE database. Additionally, there are other useful (in many cases recently developed) resources such as ProteomicsDB, Mass Spectrometry Interactive Virtual Environment (MassIVE), Chorus, MaxQB, PeptideAtlas SRM Experiment Library (PASSEL), Model Organism Protein Expression Database (MOPED), and the Human Proteinpedia. In addition, the ProteomeXchange consortium has been recently developed to enable better integration of public repositories and the coordinated sharing of proteomics information, maximizing its benefit to the scientific community. Here, we will review each of the major proteomics resources independently and some tools that enable the integration, mining and reuse of the data. We will also discuss some of the major challenges and current pitfalls in the integration and sharing of the data. © 2014 The Authors. PROTEOMICS published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Evaluation of improved land use and canopy representation in ...

    EPA Pesticide Factsheets

    Biogenic volatile organic compounds (BVOC) participate in reactions that can lead to secondarily formed ozone and particulate matter (PM), impacting air quality and climate. BVOC emissions are important inputs to chemical transport models applied on local to global scales, but considerable uncertainty remains in the representation of canopy parameterizations and emission algorithms for different vegetation species. The Biogenic Emission Inventory System (BEIS) has been used to support both scientific and regulatory model assessments for ozone and PM. Here we describe a new version of BEIS which includes updated input vegetation data and an updated canopy model formulation for estimating leaf temperature, and we assess the impact of these changes on estimated BVOC. The Biogenic Emission Landuse Database (BELD) was revised to incorporate land use data from the Moderate Resolution Imaging Spectroradiometer (MODIS) land product and 2006 National Land Cover Database (NLCD) land coverage. Vegetation species data are based on the US Forest Service (USFS) Forest Inventory and Analysis (FIA) version 5.1 for 2002–2013 and US Department of Agriculture (USDA) 2007 census of agriculture data. This update results in generally higher BVOC emissions throughout California compared with the previous version of BEIS. Baseline and updated BVOC emission estimates are used in Community Multiscale Air Quality (CMAQ) Model simulations with 4 km grid resolution and evaluated with measurements of isoprene and monoterp

  19. The current structure of key actors involved in research on land and soil degradation

    NASA Astrophysics Data System (ADS)

    Escadafal, Richard; Barbero, Celia; Exbrayat, Williams; Marques, Maria Jose; Ruiz, Manuel; El Haddadi, Anass; Akhtar-Schuster, Mariam

    2013-04-01

    Land and soil conservation topics, the final mandate of the United Nations Convention to Combat Desertification in drylands, have been diagnosed as still suffering from a lack of guidance. By contrast, climate change and biodiversity issues, the other two big subjects of the Rio Conventions, seem to progress and may benefit from the advice of international panels. Arguably, the weakness of policy measures, and hence of the application of scientific knowledge by land users and stakeholders, could be the expression of an inadequate research organization and a lack of ability to channel findings. In order to better understand the size, breadth and depth of the scientific communities involved in providing advice to this convention and to other bodies, this study explores the corpus of international publications dealing with land and/or with soils. A database of several thousand records, covering a significant part of the literature published so far, was compiled using the Web of Science and socio-economic databases such as FRANCIS and CAIRN. We extracted hidden information by applying bibliometric methods and data mining to these scientific publications in order to map the key actors (laboratories, teams, institutions) involved in research on land and on soils. Several filters were applied to the databases in combination with the word "desertification". Tetralogie software was then used to merge the databases, analyse similarities and differences between keywords, disciplines, authors and regions, and identify obvious clusters. By assessing their commonalities and differences, the visualisation of links and gaps between scientists, organisations, policymakers and other stakeholders becomes possible. The interpretation of the 'clouds' of disciplines, keywords and techniques will enhance the understanding of the interconnections between them; ultimately this will allow some of their strengths and weaknesses to be diagnosed. This may help explain why land and soil degradation remains a serious global problem that lacks sufficient attention. We hope that this study will contribute to clarifying the scientific landscape at stake and to remediating possible weaknesses in the future.

  20. Evaluation models and criteria of the quality of hospital websites: a systematic review study

    PubMed Central

    Jeddi, Fatemeh Rangraz; Gilasi, Hamidreza; Khademi, Sahar

    2017-01-01

    Introduction Hospital websites are important tools in establishing communication and exchanging information between patients and staff, and thus should enjoy an acceptable level of quality. The aim of this study was to identify proper models and criteria to evaluate the quality of hospital websites. Methods This research was a systematic review study. International databases such as Science Direct, Google Scholar, PubMed, ProQuest, Ovid, Elsevier, Springer, and EBSCO, together with regional databases such as Magiran, Scientific Information Database, Persian Journal Citation Report (PJCR) and IranMedex, were searched. Suitable keywords including website, evaluation, and quality of website were used. Full-text papers related to the research were included. The criteria and sub-criteria for the evaluation of website quality were extracted and classified. Results To evaluate the quality of the websites, various models and criteria were presented. The WEB-Q-IM, Mile, Minerva, Seruni Luci, and Web-Qual models were the designed models. The criteria of accessibility, content and apparent features of the websites, the design procedure, the graphics applied in the website, and the page’s attractiveness have been mentioned in the majority of studies. Conclusion The criteria of accessibility, content, design method, security, and confidentiality of personal information are the essential criteria in the evaluation of all websites. It is suggested that the ease of use, graphics, attractiveness and other apparent properties of websites be considered as user-friendliness sub-criteria. Further, the criteria of speed and accessibility of the website should be considered as sub-criteria of efficiency. When determining the evaluation criteria for the quality of websites, attention to major differences in the specific features of any website is essential. PMID:28465807

  1. Evaluation models and criteria of the quality of hospital websites: a systematic review study.

    PubMed

    Jeddi, Fatemeh Rangraz; Gilasi, Hamidreza; Khademi, Sahar

    2017-02-01

Hospital websites are important tools for establishing communication and exchanging information between patients and staff, and thus should meet an acceptable level of quality. The aim of this study was to identify suitable models and criteria for evaluating the quality of hospital websites. This research was a systematic review study. International databases including Science Direct, Google Scholar, PubMed, ProQuest, Ovid, Elsevier, Springer, and EBSCO, together with regional databases such as Magiran, Scientific Information Database, Persian Journal Citation Report (PJCR) and IranMedex, were searched using keywords including website, evaluation, and quality of website. Full-text papers relevant to the research question were included, and the criteria and sub-criteria for evaluating website quality were extracted and classified. Various models and criteria for evaluating website quality were identified. The WEB-Q-IM, Mile, Minerva, Seruni Luci, and Web-Qual models were among those designed for this purpose. The criteria of accessibility, content and apparent features of the website, the design procedure, the graphics applied in the website, and the attractiveness of pages were mentioned in the majority of studies. Accessibility, content, design method, security, and confidentiality of personal information are the essential criteria in the evaluation of all websites. It is suggested that ease of use, graphics, attractiveness and other apparent properties of websites be treated as sub-criteria of user-friendliness, and that speed and accessibility be treated as sub-criteria of efficiency. When determining evaluation criteria for website quality, attention to major differences in the specific features of each website is essential.

  2. CubeSat mission design software tool for risk estimating relationships

    NASA Astrophysics Data System (ADS)

    Gamble, Katharine Brumbaugh; Lightsey, E. Glenn

    2014-09-01

    In an effort to make the CubeSat risk estimation and management process more scientific, a software tool has been created that enables mission designers to estimate mission risks. CubeSat mission designers are able to input mission characteristics, such as form factor, mass, development cycle, and launch information, in order to determine the mission risk root causes which historically present the highest risk for their mission. Historical data was collected from the CubeSat community and analyzed to provide a statistical background to characterize these Risk Estimating Relationships (RERs). This paper develops and validates the mathematical model based on the same cost estimating relationship methodology used by the Unmanned Spacecraft Cost Model (USCM) and the Small Satellite Cost Model (SSCM). The RER development uses general error regression models to determine the best fit relationship between root cause consequence and likelihood values and the input factors of interest. These root causes are combined into seven overall CubeSat mission risks which are then graphed on the industry-standard 5×5 Likelihood-Consequence (L-C) chart to help mission designers quickly identify areas of concern within their mission. This paper is the first to document not only the creation of a historical database of CubeSat mission risks, but, more importantly, the scientific representation of Risk Estimating Relationships.
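The cost-estimating-relationship methodology the abstract references typically fits a power-law relationship between an outcome and its driver variables by least squares in log-log space. As a minimal sketch of that fitting step, the snippet below recovers the parameters of a hypothetical risk = a * mass^b relationship from made-up data; the data, variable names, and single-driver form are illustrative assumptions, not the paper's CubeSat database or RER equations:

```python
import math

# Hypothetical (mass_kg, risk_score) pairs -- NOT the paper's CubeSat data.
# Synthesized from risk = 0.5 * mass**1.3 so the fit can be checked.
data = [(m, 0.5 * m ** 1.3) for m in (1.0, 2.0, 3.0, 4.0, 6.0, 8.0)]

# Fit risk = a * mass**b by ordinary least squares in log-log space:
# log(risk) = log(a) + b * log(mass).
xs = [math.log(m) for m, _ in data]
ys = [math.log(r) for _, r in data]
n = len(data)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
b = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
    sum((x - x_bar) ** 2 for x in xs)
a = math.exp(y_bar - b * x_bar)

print(f"risk = {a:.3f} * mass^{b:.3f}")  # recovers the generating a=0.5, b=1.3
```

With noise-free synthetic data the regression recovers the generating parameters exactly; real RER development, as the abstract notes, uses general error regression models over historical mission data.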

  3. [Colciencias and disdain for Colombian scientists: from the Stone Age to the impact factor].

    PubMed

    Leon-Sarmiento, Fidias E; Bayona-Prieto, Jaime; Bayona, Edgardo; Leon, Martha E

    2005-01-01

Writing has evolved dramatically around the world; however, the qualification of scientific production in Colombia has not, including the improper use of decrees 1444/93 and 1279/02. The latter decree authorized Colciencias, the Colombian government institute created to support scientific research in Colombia, to establish rules for its implementation. Colciencias decided to evaluate scientific papers produced in Colombia based on the non-scientific method of the "impact factor" and considered papers cited in MEDLINE/PubMed and PsycINFO to be second-line publications, thus violating Colombian law. This affects not only the progress of scientific research in Colombia but also researchers' income, and puts Colombia's scientific journals and publications at a great disadvantage. Scientific papers indexed in qualified databases such as MEDLINE/PubMed must be judged according to the law in order to prevent further injury to the developing Colombian scientific production.

  4. Updated Palaeotsunami Database for Aotearoa/New Zealand

    NASA Astrophysics Data System (ADS)

    Gadsby, M. R.; Goff, J. R.; King, D. N.; Robbins, J.; Duesing, U.; Franz, T.; Borrero, J. C.; Watkins, A.

    2016-12-01

The updated configuration, design, and implementation of a national palaeotsunami (pre-historic tsunami) database for Aotearoa/New Zealand (A/NZ) is near completion. This tool enables correlation of events along different stretches of the NZ coastline, provides information on the frequency and extent of local, regional and distant-source tsunamis, and delivers detailed information on the science and proxies used to identify the deposits. In A/NZ a wealth of data, scientific research and experience surrounds palaeotsunami deposits, but much of this information has been difficult to locate, has variable reporting standards, and lacked quality assurance. The original database was created by Professor James Goff while working at the National Institute of Water & Atmospheric Research in A/NZ, but has subsequently been updated during his tenure at the University of New South Wales. The updating and establishment of the national database was funded by the Ministry of Civil Defence and Emergency Management (MCDEM), led by Environment Canterbury Regional Council, and supported by all 16 regions of A/NZ's local government. Creation of a single database has consolidated a wide range of published and unpublished research contributions from many science providers on palaeotsunamis in A/NZ. The information is now easily accessible and quality assured, and allows examination of the frequency, extent and correlation of events. This provides authoritative scientific support for coastal-marine planning and risk management. The database will complement the GNS New Zealand Historical Database, and contributes to heightened public awareness of tsunami by being a "one-stop-shop" for information on past tsunami impacts. There is scope for this to become an international database, enabling the Pacific-wide correlation of large events as well as the identification of smaller regional ones. The Australian research community has already expressed an interest, and the database is also compatible with a similar one currently under development in Japan. Expressions of interest in collaborating with the A/NZ team to expand the database are invited from other Pacific nations.

  5. Development of Human Face Literature Database Using Text Mining Approach: Phase I.

    PubMed

    Kaur, Paramjit; Krishan, Kewal; Sharma, Suresh K

    2018-06-01

The face is an important part of the human body through which an individual communicates in society. The number of experiments performed and research papers published in the domain of the human face has surged in the past few decades. Several scientific disciplines conduct research on the human face, including medical science, anthropology, information technology (biometrics, robotics, artificial intelligence, etc.), psychology, forensic science, and neuroscience. This highlights the need to collect and manage data concerning the human face so that public, free access to it can be provided to the scientific community, which can be attained by developing databases and tools on the human face using a bioinformatics approach. The current research emphasizes creating a database of the literature on the human face. The database can be accessed on the basis of specific keywords, journal name, date of publication, author's name, etc., and the collected research papers are stored in the database itself. Hence, the database will be beneficial to the research community, as comprehensive information dedicated to the human face can be found in one place. Information related to facial morphologic features, facial disorders, facial asymmetry, facial abnormalities, and many other parameters can be extracted from this database. The front end has been developed using Hypertext Markup Language (HTML) and Cascading Style Sheets (CSS); the back end has been developed using the hypertext preprocessor (PHP), with JavaScript as the scripting language. MySQL is used for database development, as it is the most widely used relational database management system, and the open-source XAMPP (cross-platform Apache, MySQL, PHP, Perl) web application stack has been used as the server. The database is still in the developmental phase; the current paper discusses the initial steps of its creation and the work done to date.
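The abstract describes a PHP/MySQL stack; as a minimal sketch of the kind of keyword- and author-searchable literature store it outlines, the following uses Python's built-in sqlite3 instead, with a hypothetical papers table and sample rows (all names here are illustrative assumptions, not the actual database schema):

```python
import sqlite3

# In-memory stand-in for the MySQL literature database described above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE papers (
        id INTEGER PRIMARY KEY,
        title TEXT, authors TEXT, journal TEXT,
        pub_date TEXT, keywords TEXT
    )
""")
rows = [
    ("Facial asymmetry in adults", "Kaur P; Krishan K",
     "J Forensic Sci", "2017-03-01", "face; asymmetry; forensic"),
    ("Morphology of the human nose", "Sharma S K",
     "Anat Rec", "2016-11-20", "face; morphology"),
]
conn.executemany(
    "INSERT INTO papers (title, authors, journal, pub_date, keywords) "
    "VALUES (?, ?, ?, ?, ?)", rows)

def search(keyword=None, author=None):
    """Retrieve paper titles matching a keyword and/or author substring."""
    clauses, params = [], []
    if keyword:
        clauses.append("keywords LIKE ?")
        params.append(f"%{keyword}%")
    if author:
        clauses.append("authors LIKE ?")
        params.append(f"%{author}%")
    where = " AND ".join(clauses) or "1=1"
    cur = conn.execute(f"SELECT title FROM papers WHERE {where}", params)
    return [title for (title,) in cur.fetchall()]

print(search(keyword="asymmetry"))  # titles tagged with 'asymmetry'
print(search(author="Sharma"))      # titles by a given author
```

The parameterized `LIKE` clauses mirror the multi-field access paths (keyword, author, journal, date) the abstract lists; a production system would add indexing and full-text search on top of this.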

  6. PHENOPSIS DB: an Information System for Arabidopsis thaliana phenotypic data in an environmental context

    PubMed Central

    2011-01-01

Background Renewed interest in plant × environment interactions has arisen in the post-genomic era. In this context, high-throughput phenotyping platforms have been developed to create reproducible environmental scenarios in which the phenotypic responses of multiple genotypes can be analysed in a reproducible way. These platforms benefit hugely from the development of suitable databases for the storage, sharing and analysis of the large amounts of data collected. In the model plant Arabidopsis thaliana, most databases available to the scientific community contain data related to genetics and molecular biology and are characterised by inadequate description of plant developmental stages and of experimental metadata such as environmental conditions. Our goal was to develop a comprehensive information system for sharing the data collected in PHENOPSIS, an automated platform for Arabidopsis thaliana phenotyping, with the scientific community. Description PHENOPSIS DB is a publicly available (URL: http://bioweb.supagro.inra.fr/phenopsis/) information system developed for the storage, browsing and sharing of online data generated by the PHENOPSIS platform, offline data collected by experimenters, and experimental metadata. It provides modules coupled to a Web interface for (i) the visualisation of the environmental data of an experiment, (ii) the visualisation and statistical analysis of phenotypic data, and (iii) the analysis of Arabidopsis thaliana plant images. Conclusions Firstly, the data stored in PHENOPSIS DB are of interest to the Arabidopsis thaliana community, particularly in allowing phenotypic meta-analyses directly linked to environmental conditions, on which publications are still scarce. Secondly, the data and image analysis modules can be downloaded from the Web interface for direct use or as the basis for modifications according to new requirements. Finally, the structure of PHENOPSIS DB provides a useful template for the development of other, similar databases related to genotype × environment interactions. PMID:21554668

  7. The AMMA information system

    NASA Astrophysics Data System (ADS)

    Fleury, Laurence; Brissebrat, Guillaume; Boichard, Jean-Luc; Cloché, Sophie; Eymard, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim; Favot, Florence; Roussot, Odile

    2014-05-01

In the framework of the African Monsoon Multidisciplinary Analyses (AMMA) programme, several tools have been developed to facilitate and speed up data and information exchange between researchers from different disciplines. The AMMA information system includes (i) a multidisciplinary user-friendly data management and dissemination system, (ii) report and chart archives associated with display websites, and (iii) a scientific paper exchange system. The AMMA information system has been enriched by several previous (IMPETUS...) and subsequent projects (FENNEC, ESCAPE, QweCI, DACCIWA…) and is becoming a reference information system for the West African monsoon. (i) The AMMA project includes airborne, ground-based and ocean measurements, satellite data use, modelling studies and value-added product development. The AMMA database user interface therefore provides access to a great amount and a large variety of data: - 250 local observation datasets covering many geophysical components (atmosphere, ocean, soil, vegetation) and human activities (agronomy, health), collected by operational networks from 1850 to the present, long-term monitoring research networks (CATCH, IDAF, PIRATA...) and scientific campaigns; - 1350 outputs of a socio-economic questionnaire; - 60 operational satellite products and several research products; - 10 output sets of operational meteorological and ocean models and 15 of research simulations. All the data are documented in compliance with international metadata standards and delivered in standard formats. The data request user interface takes full advantage of the relational structure of the data and metadata base and enables users to easily build multicriteria data requests (period, area, property, property value…). The AMMA data portal has around 800 registered users and processes about 50 data requests every month. The AMMA databases and data portal have been developed and are operated jointly by SEDOO and ESPRI in France: http://database.amma-international.org. The complete system is fully duplicated and operated by CRA in Niger: http://amma.agrhymet.ne/amma-data. (ii) A day-to-day chart and report display application has been designed and operated to monitor meteorological and environmental information and to meet the observational teams' needs during the 2006 AMMA SOP (http://aoc.amma-international.org) and the 2011 FENNEC campaigns (http://fenoc.sedoo.fr). At present the websites constitute a testimonial view of the campaigns and a preliminary investigation tool for researchers. Since 2011, the same application has enabled a group of French and Senegalese researchers and forecasters to share, in near real time, physical indices and diagnoses calculated from operational numerical weather forecasts, satellite products and in situ operational observations throughout the monsoon season, in order to better estimate, understand and anticipate the intraseasonal variability of the monsoon (http://misva.sedoo.fr). (iii) A collaborative WIKINDX tool has also been set up online to gather scientific publications, theses and communications of interest to AMMA: http://biblio.amma-international.org. The bibliographic database now holds about 1200 references and is the most exhaustive document collection about the West African monsoon available to all. Every scientist is invited to make use of the different AMMA online tools and data. Scientists or project leaders who have data management needs for existing or future datasets over West Africa are welcome to use the AMMA database framework and to contact ammaAdmin@sedoo.fr.
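The multicriteria data requests described in the abstract (period, area, property, property value…) amount to filtering a metadata catalogue on several axes at once. The sketch below illustrates that idea over hypothetical dataset records; the field names and values are invented for illustration and do not reflect the actual AMMA schema:

```python
from datetime import date

# Hypothetical dataset metadata records (not the actual AMMA catalogue).
catalogue = [
    {"name": "rain_gauge_niamey", "property": "precipitation",
     "lat": 13.5, "lon": 2.1,
     "start": date(1990, 1, 1), "end": date(2010, 12, 31)},
    {"name": "pirata_sst_buoy", "property": "sea_surface_temperature",
     "lat": 0.0, "lon": -10.0,
     "start": date(1997, 9, 1), "end": date(2012, 6, 30)},
]

def request(catalogue, prop=None, bbox=None, period=None):
    """Filter dataset records by property name, bounding box and time period.

    bbox is (lat_min, lat_max, lon_min, lon_max); period is (start, end).
    Criteria left as None are not applied.
    """
    hits = []
    for rec in catalogue:
        if prop and rec["property"] != prop:
            continue
        if bbox:
            lat_min, lat_max, lon_min, lon_max = bbox
            if not (lat_min <= rec["lat"] <= lat_max
                    and lon_min <= rec["lon"] <= lon_max):
                continue
        if period:
            start, end = period
            # keep datasets whose coverage overlaps the requested period
            if rec["end"] < start or rec["start"] > end:
                continue
        hits.append(rec["name"])
    return hits

print(request(catalogue, prop="precipitation",
              bbox=(10, 20, -5, 10),
              period=(date(2000, 1, 1), date(2005, 12, 31))))
```

In the real system these criteria would be translated into SQL against the relational metadata base rather than evaluated in application code.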

  8. Netlib services and resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Browne, S.V.; Green, S.C.; Moore, K.

    1994-04-01

The Netlib repository, maintained by the University of Tennessee and Oak Ridge National Laboratory, contains freely available software, documents, and databases of interest to the numerical, scientific computing, and other communities. This report includes both the Netlib User's Guide and the Netlib System Manager's Guide, and contains information about Netlib's databases, interfaces, and system implementation. The Netlib repository's databases include the Performance Database, the Conferences Database, and the NA-NET mail forwarding and Whitepages databases. A variety of user interfaces enable users to access the Netlib repository in the manner most convenient and compatible with their networking capabilities. These interfaces include the Netlib email interface, the Xnetlib X Windows client, the netlibget command-line TCP/IP client, anonymous FTP, anonymous RCP, and gopher.

  9. Education for All. INNOV Data Base. Making It Work. Innovative Basic Education Projects in Developing Countries Database 1994.

    ERIC Educational Resources Information Center

    United Nations Educational, Scientific, and Cultural Organization, Paris (France).

    The INNOV database was created as part of a United Nations Educational, Scientific and Cultural Organization (UNESCO) program to collect, analyze and promote successful basic education projects in the developing world, and this report lists innovations in the field. It is divided into sections of project reports in three major geographical…

  10. NASA STI program database: Journal coverage (1990-1992)

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Data are given in tabular form on the extent of recent journal accessions (1990-1992) to the NASA Scientific and Technical Information (STI) Database. Journals are presented by country in two ways: first by an alphabetical listing; and second, by the decreasing number of citations extracted from these journals during this period. An appendix containing a statistical summary is included.

  11. Unified Planetary Coordinates System: A Searchable Database of Geodetic Information

    NASA Technical Reports Server (NTRS)

Becker, K. J.; Gaddis, L. R.; Soderblom, L. A.; Kirk, R. L.; Archinal, B. A.; Johnson, J. R.; Anderson, J. A.; Bowman-Cisneros, E.; LaVoie, S.; McAuley, M.

    2005-01-01

Over the past 40 years, an enormous quantity of orbital remote sensing data has been collected for Mars from many missions and instruments. Unfortunately these datasets currently exist in a wide range of disparate coordinate systems, making it extremely difficult for the scientific community to easily correlate, combine, and compare data from different Mars missions and instruments. As part of our work for the PDS Imaging Node and on behalf of the USGS Astrogeology Team, we are working to solve this problem and to provide the NASA scientific research community with easy access to Mars orbital data in a unified, consistent coordinate system along with a wide variety of other key geometric variables. The Unified Planetary Coordinates (UPC) system comprises two main elements: (1) a database containing Mars orbital remote sensing data computed in a uniform coordinate system, and (2) a process by which continual maintenance and updates to the contents of the database are performed.

  12. Genetics and Forensics: Making the National DNA Database

    PubMed Central

    Johnson, Paul; Williams, Robin; Martin, Paul

    2005-01-01

    This paper is based on a current study of the growing police use of the epistemic authority of molecular biology for the identification of criminal suspects in support of crime investigation. It discusses the development of DNA profiling and the establishment and development of the UK National DNA Database (NDNAD) as an instance of the ‘scientification of police work’ (Ericson and Shearing 1986) in which the police uses of science and technology have a recursive effect on their future development. The NDNAD, owned by the Association of Chief Police Officers of England and Wales, is the first of its kind in the world and currently contains the genetic profiles of more than 2 million people. The paper provides a framework for the examination of this socio-technical innovation, begins to tease out the dense and compact history of the database and accounts for the way in which changes and developments across disparate scientific, governmental and policing contexts, have all contributed to the range of uses to which it is put. PMID:16467921

  13. Patient rights in Iran: a review article.

    PubMed

    Joolaee, Soodabeh; Hajibabaee, Fatemeh

    2012-01-01

Significant progress in research on patient rights has been made in Iran over the past decade. This study was conducted to review and analyze the studies made so far concerning patient rights in Iran. It is a comprehensive review conducted by searching the Iranian databases Scientific Information Database, Iranian Research Institute for Information Science and Technology, IranMedex and Google using the Persian equivalents of the keywords 'awareness', 'attitude', and 'patient rights'. For pertinent Iranian papers published in English, the scientific databases PubMed and Google Scholar were searched using the keywords 'patient rights' and 'Iran'. A total of 41 Persian and five English articles were found for these keywords, only 26 of which fulfilled the objective of our study. The increasing number of papers published indicates that, from 1999 onwards, this subject has drawn the attention of Iranian researchers in a progressive fashion, and Iranian papers in English have also been compiled and published in international sources.

  14. Extracting Databases from Dark Data with DeepDive

    PubMed Central

    Zhang, Ce; Shin, Jaeho; Ré, Christopher; Cafarella, Michael; Niu, Feng

    2016-01-01

DeepDive is a system for extracting relational databases from dark data: the mass of text, tables, and images that are widely collected and stored but which cannot be exploited by standard relational tools. If the information in dark data — scientific papers, Web classified ads, customer service notes, and so on — were instead in a relational database, it would give analysts a massive and valuable new set of "big data." DeepDive is distinctive when compared to previous information extraction systems in its ability to obtain very high precision and recall at reasonable engineering cost; in a number of applications, we have used DeepDive to create databases with accuracy that meets that of human annotators. To date we have successfully deployed DeepDive to create data-centric applications for insurance, materials science, genomics, paleontology, law enforcement, and other domains. The data unlocked by DeepDive represents a massive opportunity for industry, government, and scientific researchers. DeepDive is enabled by an unusual design that combines large-scale probabilistic inference with a novel developer interaction cycle. This design is enabled by several core innovations around probabilistic training and inference. PMID:28316365
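DeepDive's probabilistic inference pipeline is far beyond a snippet, but the goal it serves, turning free text into relational rows, can be illustrated with a deliberately simple rule-based extractor. The patterns, sentences, and (name, role) schema below are toy assumptions, not DeepDive's actual method or data:

```python
import re
import sqlite3

# Toy "dark data": unstructured sentences mentioning people and roles.
documents = [
    "Alice Smith was appointed director of the observatory in 2001.",
    "The report credits Bob Jones, a field geologist, with the discovery.",
]

# Hand-written patterns for (name, role) mentions. DeepDive instead learns
# to weigh many noisy extraction rules via large-scale probabilistic
# inference; this only sketches the text-to-relation goal.
patterns = [
    re.compile(r"(?P<name>[A-Z][a-z]+ [A-Z][a-z]+) was appointed (?P<role>[a-z ]+?) of"),
    re.compile(r"(?P<name>[A-Z][a-z]+ [A-Z][a-z]+), a (?P<role>[a-z ]+?),"),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mentions (name TEXT, role TEXT)")
for doc in documents:
    for pat in patterns:
        for m in pat.finditer(doc):
            conn.execute("INSERT INTO mentions VALUES (?, ?)",
                         (m.group("name"), m.group("role")))

# The extracted tuples are now queryable with standard relational tools.
rows = conn.execute("SELECT name, role FROM mentions ORDER BY name").fetchall()
print(rows)
```

Once the text is reduced to rows like these, ordinary SQL analytics apply, which is precisely the payoff the abstract describes for dark data.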

  15. Inspiring Collaboration: The Legacy of Theo Colborn's Transdisciplinary Research on Fracking.

    PubMed

    Wylie, Sara; Schultz, Kim; Thomas, Deborah; Kassotis, Chris; Nagel, Susan

    2016-09-13

    This article describes Dr Theo Colborn's legacy of inspiring complementary and synergistic environmental health research and advocacy. Colborn, a founder of endocrine disruption research, also stimulated study of hydraulic fracturing (fracking). In 2014, the United States led the world in oil and gas production, with fifteen million Americans living within one mile of an oil or gas well. Colborn pioneered efforts to understand and control the impacts of this sea change in energy production. In 2005, her research organization The Endocrine Disruption Exchange (TEDX) developed a database of chemicals used in natural gas extraction and their health effects. This database stimulated novel scientific and social scientific research and informed advocacy by (1) connecting communities' diverse health impacts to chemicals used in natural gas development, (2) inspiring social science research on open-source software and hardware for citizen science, and (3) posing new scientific questions about the endocrine-disrupting properties of fracking chemicals. © The Author(s) 2016.

  16. Systematic literature review of methodologies and data sources of existing economic models across the full spectrum of Alzheimer's disease and dementia from apparently healthy through disease progression to end of life care: a systematic review protocol.

    PubMed

    Karagiannidou, Maria; Wittenberg, Raphael; Landeiro, Filipa Isabel Trigo; Park, A-La; Fry, Andra; Knapp, Martin; Gray, Alastair M; Tockhorn-Heidenreich, Antje; Castro Sanchez, Amparo Yovanna; Ghinai, Isaac; Handels, Ron; Lecomte, Pascal; Wolstenholme, Jane

    2018-06-08

Dementia is one of the greatest health challenges the world will face in the coming decades, as it is one of the principal causes of disability and dependency among older people. Economic modelling is used widely across many health conditions to inform decisions on health and social care policy and practice. The aim of this literature review is to systematically identify, review and critically evaluate existing health economics models in dementia. We included the full spectrum of dementia, including Alzheimer's disease (AD), from preclinical stages through to severe dementia and end of life. This review forms part of the Real world Outcomes across the Alzheimer's Disease spectrum for better care: multimodal data Access Platform (ROADMAP) project. Electronic searches were conducted in Medical Literature Analysis and Retrieval System Online, Excerpta Medica dataBASE, Economic Literature Database, NHS Economic Evaluation Database, Cochrane Central Register of Controlled Trials, Cost-Effectiveness Analysis Registry, Research Papers in Economics, Database of Abstracts of Reviews of Effectiveness, Science Citation Index, Turning Research Into Practice and Open Grey for studies published between January 2000 and the end of June 2017. Two reviewers will independently assess each study against predefined eligibility criteria. A third reviewer will resolve any disagreement. Data will be extracted using a predefined data extraction form following best practice. Study quality will be assessed using the Phillips checklist for decision analytic modelling. A narrative synthesis will be used. The results will be made available in a scientific peer-reviewed journal paper, will be presented at relevant conferences and will also be made available through the ROADMAP project. PROSPERO registration number: CRD42017073874. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  17. Systematic literature review of methodologies and data sources of existing economic models across the full spectrum of Alzheimer’s disease and dementia from apparently healthy through disease progression to end of life care: a systematic review protocol

    PubMed Central

    Karagiannidou, Maria; Wittenberg, Raphael; Landeiro, Filipa Isabel Trigo; Park, A-La; Fry, Andra; Knapp, Martin; Tockhorn-Heidenreich, Antje; Castro Sanchez, Amparo Yovanna; Ghinai, Isaac; Handels, Ron; Lecomte, Pascal; Wolstenholme, Jane

    2018-01-01

    Introduction Dementia is one of the greatest health challenges the world will face in the coming decades, as it is one of the principal causes of disability and dependency among older people. Economic modelling is used widely across many health conditions to inform decisions on health and social care policy and practice. The aim of this literature review is to systematically identify, review and critically evaluate existing health economics models in dementia. We included the full spectrum of dementia, including Alzheimer’s disease (AD), from preclinical stages through to severe dementia and end of life. This review forms part of the Real world Outcomes across the Alzheimer’s Disease spectrum for better care: multimodal data Access Platform (ROADMAP) project. Methods and analysis Electronic searches were conducted in Medical Literature Analysis and Retrieval System Online, Excerpta Medica dataBASE, Economic Literature Database, NHS Economic Evaluation Database, Cochrane Central Register of Controlled Trials, Cost-Effectiveness Analysis Registry, Research Papers in Economics, Database of Abstracts of Reviews of Effectiveness, Science Citation Index, Turning Research Into Practice and Open Grey for studies published between January 2000 and the end of June 2017. Two reviewers will independently assess each study against predefined eligibility criteria. A third reviewer will resolve any disagreement. Data will be extracted using a predefined data extraction form following best practice. Study quality will be assessed using the Phillips checklist for decision analytic modelling. A narrative synthesis will be used. Ethics and dissemination The results will be made available in a scientific peer-reviewed journal paper, will be presented at relevant conferences and will also be made available through the ROADMAP project. PROSPERO registration number CRD42017073874. PMID:29884696

  18. Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface

    NASA Astrophysics Data System (ADS)

    Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.

    2016-12-01

Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters, and it is crucial to choose accurate input parameters that also preserve the physics being simulated. To simulate real-world processes effectively, the model's outputs must be close to the observed measurements. To achieve this, input parameters are tuned until the objective function, the error between the simulated outputs and the observed measurements, is minimized. We developed an auxiliary package that serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. Performing these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for the heat flow model commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity coefficients as input parameters. Results of the parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with a single minimum; otherwise, we employ more advanced Dakota methods such as genetic optimization and mesh-based convergence to find the optimal input parameters. We were able to recover six initially unknown thermal conductivity parameters to within 2% of their known values. Our initial tests indicate that the developed interface for the Dakota toolbox can be used to perform analysis and optimization on a 'black box' scientific model more efficiently than using Dakota alone.
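The calibration loop described above, tuning parameters until the model-observation misfit is minimized, can be sketched without Dakota. The one-layer steady-state conduction model and the grid-plus-refinement search below are toy stand-ins for the actual multi-layer permafrost model and Dakota's optimizers; all constants are invented for illustration:

```python
# Toy stand-in for the permafrost heat-flow calibration described above:
# steady-state temperature in one layer, T(z) = T_surface + q*z/k,
# where k is the unknown thermal conductivity to recover.
T_SURFACE = -2.0   # degC at the ground surface
Q_FLUX = 0.06      # W/m^2, geothermal heat flux (assumed known)
DEPTHS = [1.0, 5.0, 10.0, 20.0, 50.0]  # m, observation depths

def model(k):
    return [T_SURFACE + Q_FLUX * z / k for z in DEPTHS]

K_TRUE = 2.0  # "unknown" conductivity used to synthesize observations
observed = model(K_TRUE)

def objective(k):
    """Sum of squared errors between simulated and observed temperatures."""
    return sum((s - o) ** 2 for s, o in zip(model(k), observed))

# Coarse grid search followed by successive refinement around the best
# point (a crude substitute for Dakota's optimization methods).
lo, hi = 0.5, 5.0
for _ in range(6):
    step = (hi - lo) / 50
    candidates = [lo + i * step for i in range(51)]
    best = min(candidates, key=objective)
    lo, hi = best - step, best + step

print(f"recovered k = {best:.4f} (true {K_TRUE})")
```

Because this objective has a single minimum, the simple refinement converges; as the abstract notes, objectives with multiple minima call for methods such as genetic optimization instead.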

  19. Proposals to conserve Botryodiplodia theobromae (Lasiodiplodia theobromae) against Sphaeria glandicola, .....Ramularia brunnea against Sphaerella tussilaginis (Mycosphaerella tussilaginis) (Ascomycota: Dothideomycetes)

    USDA-ARS?s Scientific Manuscript database

In the course of updating the scientific names of plant-associated fungi in the U. S. National Fungus Collections Fungal Databases to conform with one scientific name for fungi as required by the International Code of Nomenclature for algae, fungi and plants (ICN, McNeill & al. in Regnum Vegetabile 1...

  20. From scientific discovery to cures: bright stars within a galaxy.

    PubMed

    Williams, R Sanders; Lotia, Samad; Holloway, Alisha K; Pico, Alexander R

    2015-09-24

    We propose that data mining and network analysis utilizing public databases can identify and quantify relationships between scientific discoveries and major advances in medicine (cures). Further development of such approaches could help to increase public understanding and governmental support for life science research and could enhance decision making in the quest for cures. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Keeping Research Data from the Continental Deep Drilling Programme (KTB) Accessible and Taking First Steps Towards Digital Preservation

    NASA Astrophysics Data System (ADS)

    Klump, J. F.; Ulbricht, D.; Conze, R.

    2014-12-01

    The Continental Deep Drilling Programme (KTB) was a scientific drilling project from 1987 to 1995 near Windischeschenbach, Bavaria. The main super-deep borehole reached a depth of 9,101 meters into the Earth's continental crust. The project used the most current equipment for data capture and processing. After the end of the project, key data were disseminated through the web portal of the International Continental Scientific Drilling Program (ICDP). The scientific reports were published as printed volumes. As similar projects have also experienced, it becomes increasingly difficult to maintain a data portal over a long time. Changes in software and underlying hardware make a migration of the entire system inevitable. Around 2009, the data presented on the ICDP web portal were migrated to the Scientific Drilling Database (SDDB) and published through DataCite using Digital Object Identifiers (DOI) as persistent identifiers. The SDDB portal used a relational database with a complex data model to store data and metadata. A PHP-based content management system with custom modifications made it possible to navigate and browse datasets using the metadata and then download them. The data repository software eSciDoc allows storing self-contained packages consistent with the OAIS reference model. Each package consists of binary data files and XML metadata. Using a REST API, the packages can be stored in the eSciDoc repository and searched using the XML metadata. During the last maintenance cycle of the SDDB, the data and metadata were migrated into the eSciDoc repository. Discovery metadata were generated following the GCMD-DIF, ISO 19115 and DataCite schemas. The eSciDoc repository allows storing an arbitrary number of XML metadata records with each data object.
    In addition to descriptive metadata, each data object may contain pointers to related materials, such as IGSN metadata to link datasets to physical specimens, or identifiers of literature interpreting the data. Datasets are presented by XSLT stylesheet transformation using the stored metadata. The presentation shows several migration cycles of data and metadata, which were driven by aging software systems. Currently the datasets reside as self-contained entities in a repository system that is ready for digital preservation.
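    The packaging model described here (self-contained objects bundling binary data with XML metadata, ingested into a store and searched via that metadata) can be illustrated with a minimal Python sketch. The class, method names, and identifier scheme below are hypothetical; this is not the real eSciDoc REST API.

```python
# Minimal sketch of the OAIS-style packaging idea: each archived item bundles
# binary data with XML metadata, and the store is searched via that metadata.
# Names are hypothetical, not the real eSciDoc API.
import xml.etree.ElementTree as ET

def make_package(data, title, schema):
    # Build a self-contained package: binary payload plus XML metadata.
    meta = ET.Element("metadata", attrib={"schema": schema})
    ET.SubElement(meta, "title").text = title
    return {"data": data, "metadata": ET.tostring(meta, encoding="unicode")}

class Repository:
    """Stand-in for a repository holding self-contained packages."""
    def __init__(self):
        self._items = []

    def ingest(self, package):
        self._items.append(package)
        return len(self._items) - 1   # crude stand-in for a persistent identifier

    def search(self, text):
        # Full-text match over the stored XML metadata records.
        return [i for i, p in enumerate(self._items) if text in p["metadata"]]

repo = Repository()
pid = repo.ingest(make_package(b"\x00\x01", "KTB borehole log", "DataCite"))
hits = repo.search("borehole")
```

    The point of the design is that each package carries everything needed to interpret it, so the store can be migrated wholesale when the surrounding software ages.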

  2. Refining animal models in fracture research: seeking consensus in optimising both animal welfare and scientific validity for appropriate biomedical use.

    PubMed

    Auer, Jorg A; Goodship, Allen; Arnoczky, Steven; Pearce, Simon; Price, Jill; Claes, Lutz; von Rechenberg, Brigitte; Hofmann-Amtenbrinck, Margarethe; Schneider, Erich; Müller-Terpitz, R; Thiele, F; Rippe, Klaus-Peter; Grainger, David W

    2007-08-01

    In an attempt to establish some consensus on the proper use and design of experimental animal models in musculoskeletal research, AOVET (the veterinary specialty group of the AO Foundation), in concert with the AO Research Institute (ARI) and the European Academy for the Study of Scientific and Technological Advance, convened a group of musculoskeletal researchers, veterinarians, legal experts, and ethicists to discuss, in a frank and open forum, the use of animals in musculoskeletal research. The group narrowed the field to fracture research. The consensus opinion resulting from this workshop can be summarized as follows: Anaesthesia and pain management protocols for research animals should follow standard protocols applied in clinical work for the species involved. This will improve morbidity and mortality outcomes. A database should be established to facilitate selection of anaesthesia and pain management protocols for specific experimental surgical procedures and adopted as an International Standard (IS) according to the animal species selected. A list of 10 golden rules and requirements for the conduct of animal experiments in musculoskeletal research was drawn up, comprising: 1) intelligent study designs to obtain appropriate answers; 2) minimal complication rates (5 to max. 10%); 3) defined end-points for both welfare and scientific outputs, analogous to quality assessment (QA) audit of protocols in GLP studies; 4) sufficient detail in the materials and methods applied; 5) control of potentially confounding variables (genetic background, seasonal, hormonal, size, histological, and biomechanical differences); 6) post-operative management with emphasis on analgesia and follow-up examinations; 7) study protocols that satisfy criteria established for a "justified animal study"; 8) surgical expertise to conduct surgery on animals; 9) pilot studies as a critical part of model validation and powering of the definitive study design; 10) criteria for funding agencies to include requirements related to animal experiments as part of the overall scientific proposal review protocols. Such agencies are also encouraged to seriously consider and adopt the recommendations described here when awarding funds for specific projects. Specific new requirements and mandates, related both to improving the welfare and to the scientific rigour of animal-based research models, are urgently needed as part of the international harmonization of standards.

  3. Continual improvement: A bibliography with indexes, 1992-1993

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This bibliography lists 606 references to reports and journal articles entered into the NASA Scientific and Technical Information Database during 1992 to 1993. Topics cover the philosophy and history of Continual Improvement (CI), basic approaches and strategies for implementation, and lessons learned from public and private sector models. Entries are arranged according to the following categories: Leadership for Quality, Information and Analysis, Strategic Planning for CI, Human Resources Utilization, Management of Process Quality, Supplier Quality, Assessing Results, Customer Focus and Satisfaction, TQM Tools and Philosophies, and Applications. Indexes include subject, personal author, corporate source, contract number, report number, and accession number.

  4. Through Kazan ASPERA to Modern Projects

    NASA Astrophysics Data System (ADS)

    Gusev, Alexander; Kitiashvili, Irina; Petrova, Natasha

    The European Union is now forming the Sixth Framework Programme. One of the objectives of the EU Programme is opening up national research and training programmes. Russian PhD students and young astronomers face administrative and financial difficulties in accessing modern databases and astronomical projects, and so they have not been included in the European overview of priorities. Modern requirements for organizing observing projects on powerful telescopes assume painstaking scientific computer preparation of the application. The rigid competition for observation time assumes preliminary computer modeling of the target object for the application to succeed. Kazan AstroGeoPhysics Partnership

  5. Evaluation of the Inhalation Carcinogenicity of Ethylene Oxide ...

    EPA Pesticide Factsheets

    EPA is initiating a public comment period prior to peer review of the scientific basis supporting the human health hazard and dose-response assessment of ethylene oxide (cancer) that will appear in the Integrated Risk Information System (IRIS) database. EPA seeks external peer review on how the Agency responded to the SAB panel recommendations, the exposure-response modeling of epidemiologic data, including new analyses since the 2007 external peer review, and on the adequacy, transparency, and clarity of the revised draft. The peer review will include an opportunity for the public to address the peer reviewers.

  6. IRIS Toxicological Review of Ammonia (External Review Draft ...

    EPA Pesticide Factsheets

    EPA is conducting a peer review of the scientific basis supporting the human health hazard and dose-response assessment of ammonia that will appear in the Integrated Risk Information System (IRIS) database. EPA is undertaking an Integrated Risk Information System (IRIS) health assessment for ammonia. IRIS is an EPA database containing Agency scientific positions on potential adverse human health effects that may result from chronic (or lifetime) exposure to chemicals in the environment. IRIS contains chemical-specific summaries of qualitative and quantitative health information in support of two steps of the risk assessment paradigm, i.e., hazard identification and dose-response evaluation. IRIS assessments are used in combination with specific situational exposure assessment information to evaluate potential public health risk associated with environmental contaminants.

  7. IRIS Toxicological Review of n-Butanol (External Review Draft ...

    EPA Pesticide Factsheets

    EPA is conducting a peer review of the scientific basis supporting the human health hazard and dose-response assessment of n-butanol that will appear in the Integrated Risk Information System (IRIS) database. EPA is undertaking an Integrated Risk Information System (IRIS) health assessment for n-butanol. IRIS is an EPA database containing Agency scientific positions on potential adverse human health effects that may result from chronic (or lifetime) exposure to chemicals in the environment. IRIS contains chemical-specific summaries of qualitative and quantitative health information in support of two steps of the risk assessment paradigm, i.e., hazard identification and dose-response evaluation. IRIS assessments are used in combination with specific situational exposure assessment information to evaluate potential public health risk associated with environmental contaminants.

  8. BrassiBase: introduction to a novel knowledge database on Brassicaceae evolution.

    PubMed

    Kiefer, Markus; Schmickl, Roswitha; German, Dmitry A; Mandáková, Terezie; Lysak, Martin A; Al-Shehbaz, Ihsan A; Franzke, Andreas; Mummenhoff, Klaus; Stamatakis, Alexandros; Koch, Marcus A

    2014-01-01

    The Brassicaceae family (mustards or crucifers) includes Arabidopsis thaliana as one of the most important model species in plant biology and a number of important crop plants such as the various Brassica species (e.g. cabbage, canola and mustard). Moreover, the family comprises an increasing number of species that serve as study systems in many fields of plant science and evolutionary research. However, the systematics and taxonomy of the family are very complex and access to scientifically valuable and reliable information linked to species and genus names and its interpretation are often difficult. BrassiBase is a continuously developing and growing knowledge database (http://brassibase.cos.uni-heidelberg.de) that aims at providing direct access to many different types of information ranging from taxonomy and systematics to phylo- and cytogenetics. Providing critically revised key information, the database intends to optimize comparative evolutionary research in this family and supports the introduction of the Brassicaceae as the model family for evolutionary biology and plant sciences. Some features that should help to accomplish these goals within a comprehensive taxonomic framework have now been implemented in the new version 1.1.9. A 'Phylogenetic Placement Tool' should help to identify critical accessions and germplasm and provide a first visualization of phylogenetic relationships. The 'Cytogenetics Tool' provides in-depth information on genome sizes, chromosome numbers and polyploidy, and sets this information into a Brassicaceae-wide context.

  9. Bibliometric trend and patent analysis in nano-alloys research for period 2000-2013.

    PubMed

    Živković, Dragana; Niculović, Milica; Manasijević, Dragan; Minić, Duško; Ćosović, Vladan; Sibinović, Maja

    2015-05-04

    This paper presents an overview of the current situation in nano-alloy investigations based on bibliometric and patent analysis. Bibliometric analysis data, for the period from 2000 to September 2013, were obtained using the Scopus database as the selected index database; the analyzed parameters were: number of scientific papers per year, authors, countries, affiliations, subject areas and document types. Analysis of nano-alloy patents was done with a specific database, using the International Patent Classification and Patent Scope for the period from 2003 to 2013. Information found in this database was the number of patents, patent classification by country, patent applicants, main inventors and publication date.

  10. Bibliometric trend and patent analysis in nano-alloys research for period 2000-2013.

    PubMed

    Živković, Dragana; Niculović, Milica; Manasijević, Dragan; Minić, Duško; Ćosović, Vladan; Sibinović, Maja

    2015-01-01

    This paper presents an overview of the current situation in nano-alloy investigations based on bibliometric and patent analysis. Bibliometric analysis data, for the period 2000 to 2013, were obtained using the Scopus database as the selected index database; the analyzed parameters were: number of scientific papers per year, authors, countries, affiliations, subject areas and document types. Analysis of nano-alloy patents was done with a specific database, using the International Patent Classification and Patent Scope for the period 2003 to 2013. Information found in this database was the number of patents, patent classification by country, patent applicants, main inventors and publication date.

  11. PYRN-Bib: The Permafrost Young Researchers Network Bibliography of Permafrost-Related Degree-Earning Theses

    NASA Astrophysics Data System (ADS)

    Grosse, Guido; Lantuit, Hugues; Gärtner-Roer, Isabelle

    2010-05-01

    PYRN-Bib is an international bibliographical database aimed at collecting and distributing information on all theses submitted for earning a scientific degree in permafrost-related research. PYRN-Bib is hosted by the Permafrost Young Researchers Network (PYRN, http://pyrn.ways.org), an international network of early-career students and young scientists in permafrost-related research with currently more than 750 members. The fully educational, non-profit project PYRN-Bib is published under the patronage of the International Permafrost Association (IPA). The bibliography covers all theses as long as they clearly treat aspects of permafrost research, from such diverse fields as: Geophysics, Geology, Cryolithology, Biology, Biogeochemistry, Microbiology, Astrobiology, Chemistry, Engineering, Geomorphology, Remote Sensing, Modeling, Mineral and Hydrocarbon Exploration, and Science History and Education. The specific goals of PYRN-Bib are (1) to generate a comprehensive database that includes all degree-earning theses (e.g. Diploma, Ph.D., Master's, etc.), coming from any country and any scientific field, under the single condition that the thesis is strongly related to research on permafrost and/or periglacial processes; (2) to reference unique but buried sources of information, including theses published in languages other than English; (3) to make the database widely available to the scientific community and the general public; (4) to solicit PYRN membership; and (5) to provide a means to map the evolution of permafrost research over the last decades, including regional trends, shifts in research direction, and/or the place of permafrost research in society. PYRN-Bib is available online and maintained by PYRN. The complete bibliography can be downloaded at no cost and is offered in different file formats: tagged EndNote library, XML, BibTeX, and PDF. New entries are continuously provided by PYRN members and the scientific community.
    PYRN-Bib currently contains more than 1000 references to theses covering the period 1947-2009 and includes degree-earning theses from bachelor's to doctoral and even professorial habilitation theses. The increasing number of thesis references is starting to reflect the diversity of, as well as the focus regions in, permafrost research. Theses currently originate from 22 countries and are written in 10 languages. All references in PYRN-Bib are translated into English to guarantee wider distribution. PYRN-Bib opens the door to accessing highly valuable scientific work previously hidden either by language barriers or archive dust. PYRN-Bib is a unique tool for finding information about previous student research on permafrost topics. Such theses, often the backbone of modern research, are otherwise spread over hundreds of university libraries and hard to find or even know about. We encourage students who do research on a permafrost-related topic to submit their thesis after graduation.

  12. A spatial database for landslides in northern Bavaria: A methodological approach

    NASA Astrophysics Data System (ADS)

    Jäger, Daniel; Kreuzer, Thomas; Wilde, Martina; Bemm, Stefan; Terhorst, Birgit

    2018-04-01

    Landslide databases provide essential information for hazard modeling, damage to buildings and infrastructure, mitigation, and research needs. This study presents the development of a landslide database system named WISL (Würzburg Information System on Landslides), currently storing detailed landslide data for northern Bavaria, Germany, in order to enable scientific queries as well as comparisons with other regional landslide inventories. WISL is based on free open-source software solutions (PostgreSQL, PostGIS), ensuring good interoperability of the various software components and enabling further extensions with specific adaptations of self-developed software. Beyond that, WISL was designed to be particularly compatible for easy communication with other databases. As a central prerequisite for standardized, homogeneous data acquisition in the field, a customized data sheet for landslide description was compiled. This sheet also serves as an input mask for all data registration procedures in WISL. A variety of "in-database" solutions for landslide analysis provides the necessary scalability for the database, enabling operations at the local server. In its current state, WISL already enables extensive analysis and queries. This paper presents an example analysis of landslides in Oxfordian limestones in the northeastern Franconian Alb, northern Bavaria. The results reveal widely differing landslides in terms of geometry and size. Further queries related to landslide activity classify the majority of the landslides as currently inactive; however, they clearly possess a certain potential for remobilization. Along with some active mass movements, a significant percentage of landslides potentially endangers residential areas or infrastructure. The main aspect of future enhancements of the WISL database is the extension of its data holdings in order to increase research possibilities, as well as the transfer of the system to other regions and countries.
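    The kind of inventory query described here, filtering by lithology and activity state, can be mimicked in plain Python. The records and field names below are invented for illustration; WISL itself runs such queries inside PostgreSQL/PostGIS rather than in application code.

```python
# Plain-Python stand-in for an in-database inventory query: filter landslide
# records by lithology and activity state. Records and field names are
# invented; the real system uses PostgreSQL/PostGIS.
landslides = [
    {"id": 1, "lithology": "Oxfordian Limestone", "activity": "inactive", "area_m2": 12000},
    {"id": 2, "lithology": "Oxfordian Limestone", "activity": "active", "area_m2": 3500},
    {"id": 3, "lithology": "Keuper Claystone", "activity": "inactive", "area_m2": 800},
]

def query(records, lithology=None, activity=None):
    # Apply only the filters the caller supplies, like optional WHERE clauses.
    hits = records
    if lithology is not None:
        hits = [r for r in hits if r["lithology"] == lithology]
    if activity is not None:
        hits = [r for r in hits if r["activity"] == activity]
    return hits

oxfordian_inactive = query(landslides, "Oxfordian Limestone", "inactive")
```

    Pushing such filters into the database ("in-database" solutions, in the paper's terms) is what gives the system its scalability: only matching rows ever leave the server.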

  13. Development and Uses of Offline and Web-Searchable Metabolism Databases - The Case of Benzo[a]pyrene.

    PubMed

    Rendic, Slobodan P; Guengerich, Frederick P

    2018-01-01

    The present work describes the development of offline and web-searchable metabolism databases for drugs, other chemicals, and physiological compounds in humans and model species, prompted by the large amount of data published after 1990. The intent was to provide rapid and accurate access to published data, to be applied both in science and to assist therapy. Searches for the data were done using the PubMed database, accessing the Medline database of references and abstracts. In addition, data presented at scientific conferences (e.g., ISSX conferences) are included, covering the publishing period beginning with the year 1976. Application of the data is illustrated by the properties of benzo[a]pyrene (B[a]P) and its metabolites. Analyses show higher activity of P450 1A1 for activation of the (-)-isomer of trans-B[a]P-7,8-diol, while P450 1B1 exerts higher activity for the (+)-isomer. P450 1A2 showed equally low activity in the metabolic activation of both isomers. The information collected in the databases is applicable to the prediction of metabolic drug-drug and/or drug-chemical interactions in clinical and environmental studies. The data on the metabolism of a searched compound (exemplified by benzo[a]pyrene and its metabolites) also indicate the toxicological properties of the products of specific reactions. The offline and web-searchable databases have a wide range of applications (e.g., computer-assisted drug design and development, optimization of clinical therapy, toxicological applications) and in adjustments of everyday lifestyles. Copyright © Bentham Science Publishers; for any queries, please email epub@benthamscience.org.

  14. Computational intelligence approaches for pattern discovery in biological systems.

    PubMed

    Fogel, Gary B

    2008-07-01

    Biology, chemistry and medicine are faced by tremendous challenges caused by an overwhelming amount of data and the need for rapid interpretation. Computational intelligence (CI) approaches such as artificial neural networks, fuzzy systems and evolutionary computation are being used with increasing frequency to contend with this problem, in light of noise, non-linearity and temporal dynamics in the data. Such methods can be used to develop robust models of processes either on their own or in combination with standard statistical approaches. This is especially true for database mining, where modeling is a key component of scientific understanding. This review provides an introduction to current CI methods, their application to biological problems, and concludes with a commentary about the anticipated impact of these approaches in bioinformatics.

  15. BAO Plate Archive digitization, creation of electronic database and its scientific usage

    NASA Astrophysics Data System (ADS)

    Mickaelian, Areg M.

    2015-08-01

    Astronomical plate archives, created on the basis of numerous observations at many observatories, are an important part of the astronomical heritage. The Byurakan Astrophysical Observatory (BAO) plate archive consists of 37,500 photographic plates and films obtained at the 2.6m telescope, the 1m and 0.5m Schmidt telescopes, and other smaller ones during 1947-1991. In 2002-2005, the 2000 plates of the famous Markarian Survey (First Byurakan Survey, FBS) were digitized and the Digitized FBS (DFBS, http://www.aras.am/Dfbs/dfbs.html) was created. New science projects have been conducted based on this low-dispersion spectroscopic material. In 2015, we started a project on the digitization of the whole BAO Plate Archive, the creation of an electronic database, and its scientific usage. A Science Program Board has been created to evaluate the observing material, investigate new possibilities, and propose new projects based on the combined usage of these observations together with other world databases. The Executing Team consists of 9 astronomers and 3 computer scientists and will use 2 EPSON Perfection V750 Pro scanners for the digitization, as well as the Armenian Virtual Observatory (ArVO) database to accommodate all new data. The project will run for 3 years, 2015-2017, and the final result will be an electronic database and an online interactive sky map to be used for further research projects.

  16. Online-data Bases On Natural-hazard Research, Early-warning Systems and Operative Disaster Prevention Programs

    NASA Astrophysics Data System (ADS)

    Hermanns, R. L.; Zentel, K.-O.; Wenzel, F.; Hövel, M.; Hesse, A.

    In order to benefit from synergies and to avoid replication in the field of disaster reduction programs and related scientific projects, it is important to create an overview of the state of the art, the fields of activity, and their key aspects. Therefore, the German Committee for Disaster Reduction intends to document projects and institutions related to natural disaster prevention in three databases. One database is designed to document scientific programs and projects related to natural hazards. In a first step, data acquisition concentrated on projects carried out by German institutions. In a second step, projects from all other European countries will be archived. The second database focuses on projects on early-warning systems and has no regional limit. Data mining started in November 2001 and will be finished soon. The third database documents operational projects dealing with disaster prevention and concentrates on international projects or internationally funded projects. These databases will be available on the internet by the end of spring 2002 (http://www.dkkv.org) and will be updated continuously. They will allow rapid and concise information on various international projects, provide up-to-date descriptions, and facilitate exchange, as all relevant information including contact addresses is available to the public. The aim of this contribution is to present concepts and the work done so far, to invite participation, and to contact other organizations with similar objectives.

  17. Application of cloud database in the management of clinical data of patients with skin diseases.

    PubMed

    Mao, Xiao-fei; Liu, Rui; DU, Wei; Fan, Xue; Chen, Dian; Zuo, Ya-gang; Sun, Qiu-ning

    2015-04-01

    To evaluate the needs and applications of a cloud database in the daily practice of a dermatology department, a cloud database was established for systemic scleroderma and localized scleroderma. Paper forms were used to record the original data, including personal information, pictures, specimens, blood biochemical indicators, skin lesions, and scores on self-rating scales. The results were input into the cloud database. The applications of the cloud database in the dermatology department were summarized and analyzed. The personal and clinical information of 215 systemic scleroderma patients and 522 localized scleroderma patients was included and analyzed using the cloud database. The disease status, quality of life, and prognosis were obtained by statistical calculations. The cloud database can efficiently and rapidly store and manage the data of patients with skin diseases. As a simple, prompt, safe, and convenient tool, it can be used in patient information management, clinical decision-making, and scientific research.

  18. U.S. Air Force Scientific and Technical Information Program - The STINFO Program

    NASA Technical Reports Server (NTRS)

    Blados, Walter R.

    1991-01-01

    The U.S. Air Force STINFO (Scientific and Technical Information) program has as its main goal the proper use of all available scientific and technical information in the development of programs. The organization of STINFO databases, the use of STINFO in the development and advancement of aerospace science and technology and the acquisition of superior systems at lowest cost, and the application to public and private sectors of technologies developed for military uses are examined. STINFO user training is addressed. A project for aerospace knowledge diffusion is discussed.

  19. The National Deep-Sea Coral and Sponge Database: A Comprehensive Resource for United States Deep-Sea Coral and Sponge Records

    NASA Astrophysics Data System (ADS)

    Dornback, M.; Hourigan, T.; Etnoyer, P.; McGuinn, R.; Cross, S. L.

    2014-12-01

    Research on deep-sea corals has expanded rapidly over the last two decades, as scientists began to realize their value as long-lived structural components of high-biodiversity habitats and archives of environmental information. The NOAA Deep Sea Coral Research and Technology Program's National Database for Deep-Sea Corals and Sponges is a comprehensive resource for georeferenced data on these organisms in U.S. waters. The National Database currently includes more than 220,000 deep-sea coral records representing approximately 880 unique species. Database records from museum archives, commercial and scientific bycatch, and journal publications provide baseline information with relatively coarse spatial resolution dating back as far as 1842. These data are complemented by modern in-situ submersible observations with high spatial resolution, from surveys conducted by NOAA and NOAA partners. Management of high volumes of modern high-resolution observational data can be challenging. NOAA is working with our data partners to incorporate this occurrence data into the National Database, along with images and associated information related to geoposition, time, biology, taxonomy, environment, provenance, and accuracy. NOAA is also working to link associated datasets collected by our program's research, to properly archive them at the NOAA National Data Centers, to build a robust metadata record, and to establish a standard protocol to simplify the process. Access to the National Database is provided through an online mapping portal. The map displays point-based records from the database. Records can be refined by taxon, region, time, and depth. The queries and extent used to view the map can also be used to download subsets of the database. The database, map, and website are already in use by NOAA, regional fishery management councils, and regional ocean planning bodies, but we envision it as a model that can expand to accommodate data on a global scale.
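    The portal-style refinement described here combines equality filters (taxon, region) with range filters (time, depth). The sketch below illustrates that combination in Python; the records and field names are illustrative stand-ins, not the National Database's real schema or contents.

```python
# Sketch of portal-style record refinement: equality filters for taxon and
# region, range filters for depth and observation date. Records and field
# names are invented stand-ins for the real database schema.
from datetime import date

records = [
    {"taxon": "Lophelia pertusa", "region": "Gulf of Mexico", "depth_m": 450,
     "observed": date(2009, 6, 1)},
    {"taxon": "Lophelia pertusa", "region": "West Coast", "depth_m": 1200,
     "observed": date(1842, 1, 1)},
    {"taxon": "Hexactinellida", "region": "Gulf of Mexico", "depth_m": 300,
     "observed": date(2014, 3, 5)},
]

def refine(recs, taxon=None, region=None, depth=(0, float("inf")), since=date.min):
    # Keep records matching every supplied filter; omitted filters match all.
    return [r for r in recs
            if (taxon is None or r["taxon"] == taxon)
            and (region is None or r["region"] == region)
            and depth[0] <= r["depth_m"] <= depth[1]
            and r["observed"] >= since]

modern_gulf = refine(records, region="Gulf of Mexico", since=date(2000, 1, 1))
```

    The same filter set that drives the map view can then drive the subset download, which is the design the abstract describes.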

  20. Academic Impact of a Public Electronic Health Database: Bibliometric Analysis of Studies Using the General Practice Research Database

    PubMed Central

    Chen, Yu-Chun; Wu, Jau-Ching; Haschler, Ingo; Majeed, Azeem; Chen, Tzeng-Ji; Wetter, Thomas

    2011-01-01

    Background: Studies that use electronic health databases as research material are becoming popular, but the influence of a single electronic health database has not been well investigated. The United Kingdom's General Practice Research Database (GPRD) is one of the few electronic health databases publicly available to academic researchers. This study analyzed studies that used the GPRD to demonstrate the scientific production and academic impact of a single public health database. Methodology and Findings: A total of 749 studies published between 1995 and 2009 with ‘General Practice Research Database’ as their topic, defined as GPRD studies, were extracted from the Web of Science. By the end of 2009, the GPRD had attracted 1251 authors from 22 countries and been used extensively in 749 studies published in 193 journals across 58 study fields. Each GPRD study was cited 2.7 times by successive studies. Moreover, the total number of GPRD studies increased rapidly, and it is expected to reach 1500 by 2015, twice the number accumulated by the end of 2009. Since 17 of the most prolific authors (1.4% of all authors) contributed nearly half (47.9%) of GPRD studies, success in conducting GPRD studies may accumulate. The GPRD was used mainly in, but not limited to, the three study fields of “Pharmacology and Pharmacy”, “General and Internal Medicine”, and “Public, Environmental and Occupational Health”. The UK and United States were the two most active regions of GPRD studies. One-third of GPRD studies were internationally co-authored. Conclusions: A public electronic health database such as the GPRD can promote scientific production in many ways. Data owners of electronic health databases at a national level should consider how to reduce access barriers and make data more available for research. PMID:21731733
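    The headline quantities in this abstract (mean citations per study, and the share of studies involving the most prolific authors) reduce to simple aggregations over a study list. The toy data below is invented for illustration, not the actual GPRD corpus.

```python
# Toy computation of two bibliometric quantities: mean citations per study,
# and the fraction of studies involving the top-N most prolific authors
# (cf. 17 authors contributing 47.9% of GPRD studies). Data is invented.
studies = [
    {"authors": ["A", "B"], "citations": 3},
    {"authors": ["A"], "citations": 2},
    {"authors": ["C"], "citations": 1},
]

def mean_citations(studies):
    # Average number of citations received per study.
    return sum(s["citations"] for s in studies) / len(studies)

def author_share(studies, top_n):
    # Fraction of studies involving at least one of the top_n most
    # prolific authors (ranked by number of studies authored).
    counts = {}
    for s in studies:
        for a in set(s["authors"]):
            counts[a] = counts.get(a, 0) + 1
    top = set(sorted(counts, key=counts.get, reverse=True)[:top_n])
    hit = sum(1 for s in studies if set(s["authors"]) & top)
    return hit / len(studies)
```

    On the toy data, the single most prolific author ("A") appears in two of the three studies, so `author_share(studies, 1)` is two-thirds; the paper's 1.4%-of-authors-for-47.9%-of-studies figure is the same statistic at scale.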

  1. Predictive models in urology.

    PubMed

    Cestari, Andrea

    2013-01-01

Predictive modeling is emerging as an important knowledge-based technology in healthcare. Interest in its use reflects advances on several fronts: the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal and statistical predictors of health, disease processes, and multifactorial models of ill-health, and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. Understanding of how this so-called 'machine intelligence' will evolve is still quite young, and therefore so is understanding of how current, relatively sophisticated predictive models will evolve in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are gaining popularity not only for academic and scientific purposes but also in clinical practice, with the introduction of several nomograms covering the main fields of onco-urology.

  2. Comprehensive Case Analysis on Participatory Approaches, from Nexus Perspectives

    NASA Astrophysics Data System (ADS)

    Masuhara, N.; Baba, K.

    2014-12-01

According to the messages from the Bonn2011 Conference, local communities must be involved fully and effectively in the planning and implementation processes related to the water, energy and food nexus if local ownership and commitment are to be achieved. Participatory approaches such as deliberative polling and joint fact-finding have been applied to resolve various environmental disputes; however, the drivers of and barriers to such processes have not been analyzed comprehensively, especially in Japan. Our research aims to explore solutions for conflicts in the context of the water-energy-food nexus in local communities. To achieve this, we clarify the drivers and barriers of each approach applied so far in water, energy and food policy, focusing on how scientific facts are handled. Our primary hypothesis is that multi-issue solutions achieved through policy integration will be more effective for nexus conflicts than single-issue solutions within each policy domain. One of the key factors in formulating effective solutions is integrating "scientific fact (expert knowledge)" and "local knowledge". Given this primary hypothesis, we assume more specifically that consensus building benefits from opportunities to resolve disagreements over "framing": stakeholders can indicate to experts which points need scientific facts, and experts can reach a common understanding of those facts early in the process. To test these hypotheses, we are developing a database of cases in which such participatory approaches have been applied to resolve environmental disputes, based on a literature survey of journal articles and public documents on Japanese cases. The database is still under construction, but preliminary indications are that the conditions of framing and of providing scientific information are important driving factors for problem solving and consensus building. It is also important to refine these driving factors by evaluating whether the database components adequately represent each process.

  3. Award for Distinguished Scientific Early Career Contributions to Psychology: Tania Lombrozo.

    PubMed

    2016-11-01

    APA's Awards for Distinguished Scientific Early Career Contributions to Psychology recognize psychologists who have demonstrated excellence early in their careers. One of the 2016 award winners is Tania Lombrozo, whose "groundbreaking studies have shown just how, and why, explanations are so important to people." Lombrozo's award citation, biography, and bibliography are presented here. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  4. Proposals to conserve the names Chaetomium piluliferum (Botryotrichum piluliferum) against ……and Gnomonia intermedia (Ophiognomonia intermedia) against Gloeosporium betulae (Discula betulae) (Ascomycota: Sordariomycetes)

    USDA-ARS?s Scientific Manuscript database

    In the course of updating the scientific names of plant-associated fungi in the USDA-ARS U.S. National Fungus Collections Fungal Databases to conform with one scientific name for fungi as required by the International Code of Nomenclature for algae, fungi and plants (ICN, McNeill & al. in Regnum Veg...

  5. NASA scientific and technical publications: A catalog of special publications, reference publications, conference publications, and technical papers, 1987-1990

    NASA Technical Reports Server (NTRS)

    1991-01-01

This catalog lists 783 citations of all NASA Special Publications, NASA Reference Publications, NASA Conference Publications, and NASA Technical Papers that were entered into the NASA Scientific and Technical Information Database during the years 1987 through 1990. The entries are grouped by subject category. Indexes of subject terms, personal authors, and NASA report numbers are provided.

  6. NASA scientific and technical publications: A catalog of special publications, reference publications, conference publications, and technical papers, 1989

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This catalog lists 190 citations of all NASA Special Publications, NASA Reference Publications, NASA Conference Publications, and NASA Technical Papers that were entered into the NASA scientific and technical information database during accession year 1989. The entries are grouped by subject category. Indexes of subject terms, personal authors, and NASA report numbers are provided.

  7. NASA scientific and technical publications: A catalog of special publications, reference publications, conference publications, and technical papers, 1991-1992

    NASA Technical Reports Server (NTRS)

    1993-01-01

This catalog lists 458 citations of all NASA Special Publications, NASA Reference Publications, NASA Conference Publications, and NASA Technical Papers that were entered into the NASA Scientific and Technical Information database during accession years 1991 through 1992. The entries are grouped by subject category. Indexes of subject terms, personal authors, and NASA report numbers are provided.

  8. NASA scientific and technical publications: A catalog of Special Publications, Reference Publications, Conference Publications, and Technical Papers, 1987

    NASA Technical Reports Server (NTRS)

    1988-01-01

    This catalog lists 239 citations of all NASA Special Publications, NASA Reference Publications, NASA Conference Publications, and NASA Technical Papers that were entered in the NASA scientific and technical information database during accession year 1987. The entries are grouped by subject category. Indexes of subject terms, personal authors, and NASA report numbers are provided.

  9. IRIS Toxicological Review of Urea (External Review Draft) ...

    EPA Pesticide Factsheets

EPA conducted a peer review and public comment period on the scientific basis of a draft report supporting the human health hazard and dose-response assessment of urea that, when finalized, will appear in the Integrated Risk Information System (IRIS) database. The draft Toxicological Review of Urea provides scientific support and rationale for the hazard and dose-response assessment pertaining to chronic exposure to urea.

  10. IRIS Toxicological Review of Trichloroacetic Acid (TCA) ...

    EPA Pesticide Factsheets

EPA is conducting a peer review and public comment period on the scientific basis supporting the human health hazard and dose-response assessment of trichloroacetic acid (TCA) that, when finalized, will appear in the Integrated Risk Information System (IRIS) database. The draft Toxicological Review of Trichloroacetic Acid provides scientific support and rationale for the hazard and dose-response assessment pertaining to chronic exposure to trichloroacetic acid.

  11. A survey of scientific production and collaboration rate among medical library and information sciences in ISI, Scopus and PubMed databases during 2001-2010.

    PubMed

    Yousefy, Alireza; Malekahmadi, Parisa

    2013-01-01

    Research is essential for development; the scientific development of a country can be gauged by its researchers' scientific production, so understanding and assessing researchers' activities is essential for planning and policy making. The significance of collaboration in producing scientific publications is very apparent in today's complex, technology-driven world: scientists have realized that for their work to be widely used and cited by experts, they must collaborate. Collaboration among researchers fosters the development of scientific knowledge and hence access to wider information. The main objective of this research was to survey scientific production and collaboration rates in the philosophy and theoretical bases of medical library and information sciences in the ISI, Scopus, and PubMed databases during 2001-2010. This is a descriptive survey using scientometric methods; data were gathered via a checklist and analyzed with SPSS software, and the collaboration rate was calculated according to the formula. Among the 294 related abstracts on the philosophy and theoretical bases of medical library and information science in the ISI, Scopus, and PubMed databases during 2001-2010, the year 2007, with 45 articles, had the most related collaborative articles in this scope, and 2003, with 16 articles, the fewest. "B. Hjorland", with eight collaborative articles, was the most collaborative among Library and Information Sciences (LIS) professionals in ISI, Scopus, and PubMed. The Journal of Documentation, with 29 articles (12 of them collaborative), had the most related articles, and the topic of medical library and information science challenges, with 150 articles, ranked first by number of articles. The results also show that the most collaborative country, in both collaboration and number of articles, was the US. The "University of Washington" and the "University of Western Ontario" were the most collaborative affiliations. The average collaboration rate between researchers in this field during the years studied was 0.25. Most of the reviewed articles were single-authored (60.54% of the whole); only 30.46% of articles had two or more authors.

  12. Profile and scientific output analysis of physical therapist researchers with research productivity fellowships from the Brazilian National Council for Scientific and Technological Development.

    PubMed

    Sturmer, Giovani; Viero, Carolina C M; Silveira, Matheus N; Lukrafka, Janice L; Plentz, Rodrigo D M

    2013-01-01

    To describe the profile and the scientific output of physical therapist researchers holding a research productivity fellowship (PQ) from the Brazilian National Council of Scientific and Technological Development (Conselho Nacional de Desenvolvimento Científico e Tecnológico-CNPq). This is a cross-sectional study, which evaluated the Lattes Curriculum of all PQ physiotherapy researchers registered at CNPq holding a research productivity fellowship in 2010. The variables analyzed were: gender, geographic and institutional distribution, time since doctorate defense, research productivity fellowship level, scientific output until 2010, and the H index in the Scopus(®) and ISI databases. A total of 55 PQ researchers were identified at CNPq in the knowledge area of Physical Therapy and Occupational Therapy, 81.8% of them from the Southeast region of Brazil. They were predominantly female (61.8%), with fellowship level PQ2 (74.5%) and an average time since doctorate defense of 10.1 (±4.1) years. A total of 2,381 articles were published, with an average of 42.5 (±18.9) articles per researcher. The average number of articles published after doctorate defense was 39.4 (±18.9) per researcher, with a mean output of 4.2 (±2.0) articles per year. We found 304 articles indexed in the Scopus(®) database with 2,463 citations, and 222 articles indexed in the Web of Science with 1,805 citations. The articles were published in 481 journals, 244 (50.7%) of which were listed on JCR-web. The researchers had a median H index of 5 in the Scopus(®) database and 3 in ISI. The scientific output of researchers holding a research productivity fellowship in the field of physical therapy stands out: the figures are very promising for a relatively young field, as can be seen from the number of published articles and the citations received from the national and international research community.

  13. ExplorEnz: a MySQL database of the IUBMB enzyme nomenclature.

    PubMed

    McDonald, Andrew G; Boyce, Sinéad; Moss, Gerard P; Dixon, Henry B F; Tipton, Keith F

    2007-07-27

    We describe the database ExplorEnz, which is the primary repository for EC numbers and enzyme data that are being curated on behalf of the IUBMB. The enzyme nomenclature is incorporated into many other resources, including the ExPASy-ENZYME, BRENDA and KEGG bioinformatics databases. The data, which are stored in a MySQL database, preserve the formatting of chemical and enzyme names. A simple, easy to use, web-based query interface is provided, along with an advanced search engine for more complex queries. The database is publicly available at http://www.enzyme-database.org. The data are available for download as SQL and XML files via FTP. ExplorEnz has powerful and flexible search capabilities and provides the scientific community with the most up-to-date version of the IUBMB Enzyme List.
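ExplorEnz itself is a MySQL application; purely as an illustration of the kind of keyword query such a search interface might issue, here is a minimal sketch using Python's built-in sqlite3 with a hypothetical one-table schema (the table and column names are assumptions, not the real ExplorEnz schema):

```python
import sqlite3

# Hypothetical miniature of an enzyme table; not the actual ExplorEnz schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE enzyme (ec_number TEXT PRIMARY KEY, accepted_name TEXT)")
conn.executemany(
    "INSERT INTO enzyme VALUES (?, ?)",
    [("1.1.1.1", "alcohol dehydrogenase"),
     ("2.7.1.1", "hexokinase"),
     ("3.2.1.1", "alpha-amylase")],
)

def search_enzymes(keyword: str):
    """Case-insensitive substring search on the accepted enzyme name."""
    cur = conn.execute(
        "SELECT ec_number, accepted_name FROM enzyme "
        "WHERE accepted_name LIKE ? ORDER BY ec_number",
        (f"%{keyword}%",),  # parameterized query; never interpolate user input
    )
    return cur.fetchall()

print(search_enzymes("kinase"))
```

Parameterized placeholders, as used here, are what keep a public web query interface safe from SQL injection; the same pattern applies unchanged to MySQL drivers.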

  14. [The biomedical periodicals of Hungarian editions--historical overview].

    PubMed

    Berhidi, Anna; Geges, József; Vasas, Lívia

    2006-03-12

    The majority of Hungarian scientific results are published in international periodicals in foreign languages, yet publications in Hungarian scientific periodicals should not be ignored. This study analyzes biomedical periodicals of Hungarian edition from different points of view. A list of 119 titles was compiled from different databases, containing both the core and the peripheral journals of the biomedical field. These periodicals were analyzed empirically, title by title. Thirteen of the titles have ceased publication; of the remaining 106 Hungarian scientific journals, 10 are published in English. Of the Hungarian-language majority, only a few appear in international databases. Although a quarter of Hungarian biomedical journals meet the requirements for representation in international databases, these periodicals are not indexed. Forty-two biomedical periodicals are available online, although a quarter of these come with restricted access. Two-thirds of Hungarian biomedical journals have detailed instructions to authors, which inform publishing doctors and researchers of the requirements of a biomedical periodical. The increasing number of Hungarian biomedical journals is welcome news, but it would be important for quality, well-cited publications to appear in Hungarian journals: the more publications are cited, the more journals and authors gain in prestige at home and internationally.

  15. ArrayBridge: Interweaving declarative array processing with high-performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xing, Haoyuan; Floratos, Sofoklis; Blanas, Spyros

    Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation in NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits statistically indistinguishable performance and I/O scalability compared to the native SciDB storage engine.
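The deduplication between array versions mentioned above is described only at a high level; the following stdlib-only sketch illustrates the general idea (content-addressed chunk storage, so chunks unchanged between versions are stored only once). All names are illustrative; this is not ArrayBridge's actual implementation:

```python
import hashlib

class ChunkStore:
    """Content-addressed store: identical chunks across array versions share storage."""

    def __init__(self):
        self.blobs = {}      # sha256 hex digest -> chunk bytes (stored once)
        self.versions = {}   # version name -> ordered list of chunk digests

    def put_version(self, name: str, chunks: list) -> None:
        digests = []
        for chunk in chunks:
            h = hashlib.sha256(chunk).hexdigest()
            self.blobs.setdefault(h, chunk)  # no-op if this chunk already exists
            digests.append(h)
        self.versions[name] = digests

    def get_version(self, name: str) -> bytes:
        """Reassemble a version by concatenating its chunks in order."""
        return b"".join(self.blobs[h] for h in self.versions[name])

store = ChunkStore()
store.put_version("v1", [b"AAAA", b"BBBB", b"CCCC"])
store.put_version("v2", [b"AAAA", b"BBBB", b"XXXX"])  # only the last chunk changed
print(len(store.blobs), store.get_version("v2"))
```

Six chunks are referenced across the two versions, but only four unique chunks are stored; this is the space saving that version-aware deduplication buys.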

  16. Exploring Antarctic Land Surface Temperature Extremes Using Condensed Anomaly Databases

    NASA Astrophysics Data System (ADS)

    Grant, Glenn Edwin

    Satellite observations have revolutionized the Earth Sciences and climate studies. However, data and imagery continue to accumulate at an accelerating rate, and efficient tools for data discovery, analysis, and quality checking lag behind. In particular, studies of long-term, continental-scale processes at high spatiotemporal resolutions are especially problematic. The traditional technique of downloading an entire dataset and using customized analysis code is often impractical or consumes too many resources. The Condensate Database Project was envisioned as an alternative method for data exploration and quality checking. The project's premise was that much of the data in any satellite dataset is unneeded and can be eliminated, compacting massive datasets into more manageable sizes. Dataset sizes are further reduced by retaining only anomalous data of high interest. Hosting the resulting "condensed" datasets in high-speed databases enables immediate availability for queries and exploration. Proof of the project's success relied on demonstrating that the anomaly database methods can enhance and accelerate scientific investigations. The hypothesis of this dissertation is that the condensed datasets are effective tools for exploring many scientific questions, spurring further investigations and revealing important information that might otherwise remain undetected. This dissertation uses condensed databases containing 17 years of Antarctic land surface temperature anomalies as its primary data. The study demonstrates the utility of the condensate database methods by discovering new information. In particular, the process revealed critical quality problems in the source satellite data. The results are used as the starting point for four case studies, investigating Antarctic temperature extremes, cloud detection errors, and the teleconnections between Antarctic temperature anomalies and climate indices. 
The results confirm the hypothesis that the condensate databases are a highly useful tool for Earth Science analyses. Moreover, the quality checking capabilities provide an important method for independent evaluation of dataset veracity.
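The condensation idea described above (retain only anomalous observations and discard the unremarkable bulk) can be sketched in a few lines; the z-score threshold and record layout below are assumptions for illustration, not the dissertation's actual pipeline:

```python
from statistics import mean, stdev

def condense(series, z_threshold=2.0):
    """Keep only (index, value, z-score) for points beyond z_threshold sigma."""
    mu, sigma = mean(series), stdev(series)
    kept = []
    for i, v in enumerate(series):
        z = (v - mu) / sigma
        if abs(z) >= z_threshold:
            kept.append((i, v, round(z, 2)))
    return kept

# Mostly unremarkable land surface temperatures (deg C) with two extremes;
# only the extremes survive condensation.
temps = [-30, -31, -29, -30, -32, -30, -29, -65, -31, -30, 5, -30]
print(condense(temps))
```

On real satellite records the retained fraction is small, which is what makes hosting the condensed result in a high-speed database practical.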

  17. The use and misuse of biomedical data: is bigger really better?

    PubMed

    Hoffman, Sharona; Podgurski, Andy

    2013-01-01

    Very large biomedical research databases, containing electronic health records (EHR) and genomic data from millions of patients, have been heralded recently for their potential to accelerate scientific discovery and produce dramatic improvements in medical treatments. Research enabled by these databases may also lead to profound changes in law, regulation, social policy, and even litigation strategies. Yet, is "big data" necessarily better data? This paper makes an original contribution to the legal literature by focusing on what can go wrong in the process of biomedical database research and what precautions are necessary to avoid critical mistakes. We address three main reasons for approaching such research with care and being cautious in relying on its outcomes for purposes of public policy or litigation. First, the data contained in biomedical databases are surprisingly likely to be incorrect or incomplete. Second, systematic biases, arising from both the nature of the data and the preconceptions of investigators, are serious threats to the validity of research results, especially in answering causal questions. Third, data mining of biomedical databases makes it easier for individuals with political, social, or economic agendas to generate ostensibly scientific but misleading research findings for the purpose of manipulating public opinion and swaying policymakers. In short, this paper sheds much-needed light on the problems of credulous and uninformed acceptance of research results derived from biomedical databases. An understanding of the pitfalls of big data analysis is of critical importance to anyone who will rely on or dispute its outcomes, including lawyers, policymakers, and the public at large. The article also recommends technical, methodological, and educational interventions to combat the dangers of database errors and abuses.

  18. M4SF-17LL010301071: Thermodynamic Database Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zavarin, M.; Wolery, T. J.

    2017-09-05

    This progress report (Level 4 Milestone Number M4SF-17LL010301071) summarizes research conducted at Lawrence Livermore National Laboratory (LLNL) within the Argillite Disposal R&D Work Package Number M4SF-17LL01030107. The DR Argillite Disposal R&D control account is focused on the evaluation of important processes in the analysis of disposal design concepts and related materials for nuclear fuel disposal in clay-bearing repository media. The objectives of this work package are to develop model tools for evaluating the impacts of THMC processes on long-term disposal of spent fuel in argillite rocks, and to establish the scientific basis for high thermal limits. This work contributes to the GDSA model activities to identify gaps, develop process models, provide parameter feeds, and support requirements, providing the capability for a robust repository performance assessment model by 2020.

  19. Probing concept of critical thinking in nursing education in Iran: a concept analysis.

    PubMed

    Tajvidi, Mansooreh; Ghiyasvandian, Shahrzad; Salsali, Mahvash

    2014-06-01

    Given the wide disagreement over the definition of critical thinking in different disciplines, defining and standardizing the concept for the discipline of nursing is essential. Moreover, there is limited scientific evidence regarding critical thinking in the context of nursing in Iran. The aim of this study was to analyze and clarify the concept of critical thinking in nursing education in Iran. We employed the hybrid model to define the concept of critical thinking. The hybrid model has three interconnected phases: the theoretical phase, the fieldwork phase, and the final analytic phase. In the theoretical phase, we searched online scientific databases (such as Elsevier, Wiley, CINAHL, ProQuest, Ovid, and Springer, as well as Iranian databases such as SID, Magiran, and Iranmedex). In the fieldwork phase, a purposive sample of 17 nursing faculty members, PhD students, clinical instructors, and clinical nurses was recruited; participants were interviewed using an interview guide. In the analytic phase we compared the data from the theoretical and the fieldwork phases. The concept of critical thinking had many different antecedents, attributes, and consequences. The antecedents, attributes, and consequences identified in the theoretical phase were in some ways different from and in some ways similar to those identified in the fieldwork phase. Finally, the concept of critical thinking in nursing education in Iran was clarified. Critical thinking is a logical, situational, purposive, and outcome-oriented thinking process. It is an acquired and evolving ability which develops individually. Such a thinking process could lead to professional accountability, personal development, God's consent, conscience appeasement, and personality development. Copyright © 2014. Published by Elsevier B.V.

  20. Management and assimilation of diverse, distributed watershed datasets

    NASA Astrophysics Data System (ADS)

    Varadharajan, C.; Faybishenko, B.; Versteeg, R.; Agarwal, D.; Hubbard, S. S.; Hendrix, V.

    2016-12-01

    The U.S. Department of Energy's (DOE) Watershed Function Scientific Focus Area (SFA) seeks to determine how perturbations to mountainous watersheds (e.g., floods, drought, early snowmelt) impact the downstream delivery of water, nutrients, carbon, and metals over seasonal to decadal timescales. We are building a software platform that enables integration of diverse and disparate field, laboratory, and simulation datasets of various types, including hydrological, geological, meteorological, geophysical, geochemical, ecological, and genomic datasets, across a range of spatial and temporal scales within the Rifle floodplain and the East River watershed, Colorado. We are using agile data management and assimilation approaches to enable web-based integration of heterogeneous, multi-scale data. Sensor-based observations of water level, vadose zone and groundwater temperature, water quality, and meteorology, as well as biogeochemical analyses of soil and groundwater samples, have been curated and archived in federated databases. Quality Assurance and Quality Control (QA/QC) are performed on priority datasets needed for ongoing scientific analyses and for hydrological and geochemical modeling, with automated QA/QC methods used to identify and flag issues in the datasets. Data integration is achieved via a brokering service that dynamically integrates data from distributed databases via web services, based on user queries. The integrated results are presented to users in a portal that enables intuitive search, interactive visualization, and download of integrated datasets. 
The concepts, approaches, and codes in use are shared across the data science components of several large DOE-funded projects, including the Watershed Function SFA, Next Generation Ecosystem Experiment (NGEE) Tropics, AmeriFlux/FLUXNET, and Advanced Simulation Capability for Environmental Management (ASCEM), and together contribute to DOE's cyberinfrastructure for data management and model-data integration.
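Automated QA/QC flagging of the kind described can be sketched with simple range and step checks; the limits, units, and function name below are assumptions for illustration, not the SFA's actual code:

```python
def qaqc_flags(readings, lo=-5.0, hi=45.0, max_step=10.0):
    """Flag out-of-range values and abrupt jumps in a sensor time series.

    Returns one flag per reading: 'ok', 'range' (outside [lo, hi]),
    or 'spike' (too large a jump from the last accepted value).
    """
    flags = []
    last_good = None
    for value in readings:
        if not (lo <= value <= hi):
            flags.append("range")
        elif last_good is not None and abs(value - last_good) > max_step:
            flags.append("spike")
        else:
            flags.append("ok")
            last_good = value  # compare future points to the last accepted value
    return flags

# Hypothetical water-temperature series (degrees C) with one bad sensor
# reading and one physically implausible jump:
print(qaqc_flags([12.1, 12.3, 99.9, 12.4, 25.0, 12.6]))
```

Comparing against the last accepted value, rather than the immediately preceding raw value, prevents one bad reading from causing its well-behaved successor to be flagged as a spike.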

  1. Scientific Production of Research Fellows at the Zagreb University School of Medicine, Croatia

    PubMed Central

    Polašek, Ozren; Kolčić, Ivana; Buneta, Zoran; Čikeš, Nada; Pećina, Marko

    2006-01-01

    Aim To evaluate scientific production among research fellows employed at the Zagreb University School of Medicine and identify factors associated with their scientific output. Method We conducted a survey among research fellows and their mentors during June 2005. The main outcome measure was publication success, defined for each fellow as publishing at least 0.5 articles per employment year in journals indexed in the Current Contents bibliographic database. Bivariate methods and binary logistic regression were used in data analysis. Results A total of 117 fellows (response rate 95%) and 83 mentors (100%) were surveyed. The highest scientific production was recorded among research fellows employed in public health departments (median 3.0 articles, interquartile range 4.0), compared with those from pre-clinical (median 0.0, interquartile range 2.0) and clinical departments (median 1.0, interquartile range 2.0) (Kruskal-Wallis, P = 0.003). A total of 36 (29%) research fellows published at least 0.5 articles per employment year and were considered successful. Three variables were associated with fellows’ publication success: mentor’s scientific production (odds ratio [OR], 3.14; 95% confidence interval [CI], 1.31-7.53), positive mentor’s assessment (OR, 3.15; 95% CI, 1.10-9.05), and fellows’ undergraduate publication in journals indexed in the Current Contents bibliographic database (OR, 4.05; 95% CI, 1.07-15.34). Conclusion Undergraduate publication could be used as one of the main criteria in selecting research fellows. One of the crucial factors in a fellow’s scientific production and career advancement is mentor’s input, which is why research fellows would benefit most from working with scientifically productive mentors. PMID:17042070
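The study's outcome measure, at least 0.5 Current Contents-indexed articles per employment year, is a simple computable criterion; here is a sketch with hypothetical fellows (the names and counts are invented for illustration):

```python
def is_successful(cc_articles: int, employment_years: float, threshold: float = 0.5) -> bool:
    """Publication success as defined in the study: >= 0.5 articles per employment year."""
    if employment_years <= 0:
        raise ValueError("employment years must be positive")
    return cc_articles / employment_years >= threshold

# Hypothetical fellows: (name, CC-indexed articles, years employed)
fellows = [("A", 3, 4.0), ("B", 1, 4.0), ("C", 2, 3.0)]
successful = [name for name, n, yrs in fellows if is_successful(n, yrs)]
print(successful)
```

Fellow A (0.75 articles/year) and fellow C (0.67) meet the criterion, while fellow B (0.25) does not, mirroring the 29% success rate the study reports across its cohort.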

  2. Profile and scientific production of the Brazilian Council for Scientific and Technological Development (CNPq) researchers in the field of Hematology/Oncology.

    PubMed

    Oliveira, Maria Christina Lopes Araujo; Martelli, Daniella Reis; Quirino, Isabel Gomes; Colosimo, Enrico Antônio; Silva, Ana Cristina Simões e; Martelli Júnior, Hercílio; Oliveira, Eduardo Araujo de

    2014-01-01

    Several studies have examined the academic production of CNPq researchers in various areas of knowledge. The aim of this study was to evaluate the scientific production of researchers in Hematology/Oncology who hold scientific productivity grants from the Brazilian Council for Scientific and Technological Development. The academic CVs of 28 researchers in Hematology/Oncology with grants active in the three-year period from 2006 to 2008 were included in the analysis. The variables of interest were: institution, time since doctorate, tutoring of undergraduate, master's, and PhD students, and scientific production and its impact. Of a total of 411 researchers in Medicine, 28 (7%) were identified as being in the area of Hematology/Oncology. There was a slight predominance of males (53.6%) and of grant holders in category 1. Three Brazilian states account for approximately 90% of the researchers: São Paulo (21; 75%), Rio de Janeiro (3; 11%), and Minas Gerais (2; 7%). During their academic careers, the researchers published 2,655 articles, with a median of 87 articles per researcher (IQR = 52 to 122); 65% and 78% of this total were indexed in the Web of Science and Scopus databases, respectively. The researchers received 14,247 citations in the WoS database, with a median of 385 citations per researcher and an average of 8.2 citations per article. This investigation showed that researchers in the field of Hematology/Oncology have a scientific output that is relevant in both quantity and quality compared with other medical specialties.

  3. Research trends in Carrion's disease in the last 60 years. A bibliometric assessment of Latin American scientific production.

    PubMed

    Culquichicón, Carlos; Ramos-Cedano, Emanuel; Helguero-Santin, Luis; Niño-Garcia, Roberto; Rodriguez-Morales, Alfonso J

    2018-03-01

    Carrion's disease is a major re-emerging and occupational health disease. This bibliometric study aimed to evaluate scientific production on this disease both globally and in Latin America. SCI-E, MEDLINE/GoPubMed, SCOPUS, ScIELO, and LILACS databases were searched for Carrion's disease-related articles. They were classified according to publication year, type, city and institution of origin, international cooperation, scientific journal, impact factor, publication language, author(s), and H-index. There were 170 articles in SCI-E. The USA was the largest contributor (42.9%), followed by Peru (24.1%) and Spain (12.4%). Latin American publications were cited 811 times (regional H-index=18). There were 335 articles in SCOPUS: 25.9%, 11.6%, and 8.3% were published by the USA, Peru, and Spain, respectively. Latin American publications were cited 613 times (H-index=12): Peru, Colombia, and Brazil received the most citations (n=395, H-index=10; n=61, H-index=1; and n=54, H-index=4, respectively). The most scientifically productive American institution was the University of Montana (2.9% of American production). In Peru, it was the Institute of Tropical Medicine Alexander von Humboldt of Peruvian University Cayetano Heredia (6.5% of Peruvian scientific production). There were 3,802 articles in Medline (1.2% were Peruvian), 35 in SciELO (94.3% were from Peru), and 168 in LILACS (11% were published in 2010-2014; only one article was published in 2015). Scientific production worldwide is led by the USA, and, in Latin America, by Peru and Brazil. However, Latin American scientific production in bibliographic databases is much lower than in other regions, despite being an endemic area for Carrion's disease.

  4. The ChArMEx database

    NASA Astrophysics Data System (ADS)

    Ferré, Hélène; Belmahfoud, Nizar; Boichard, Jean-Luc; Brissebrat, Guillaume; Cloché, Sophie; Descloitres, Jacques; Fleury, Laurence; Focsa, Loredana; Henriot, Nicolas; Mière, Arnaud; Ramage, Karim; Vermeulen, Anne; Boulanger, Damien

    2015-04-01

    The Chemistry-Aerosol Mediterranean Experiment (ChArMEx, http://charmex.lsce.ipsl.fr/) aims at a scientific assessment of the present and future state of the atmospheric environment in the Mediterranean Basin, and of its impacts on the regional climate, air quality, and marine biogeochemistry. The project includes long-term monitoring of environmental parameters, intensive field campaigns, use of satellite data, and modelling studies. ChArMEx scientists therefore produce, and need access to, a wide diversity of data. In this context, the objective of the database task is to organize data management, the distribution system, and services, facilitating the exchange of information and stimulating collaboration between researchers within the ChArMEx community and beyond. The database relies on a strong collaboration between the ICARE, IPSL and OMP data centers and has been set up in the framework of the Mediterranean Integrated Studies at Regional And Local Scales (MISTRALS) program data portal. ChArMEx data, whether produced or used by the project, are documented and accessible through the database website: http://mistrals.sedoo.fr/ChArMEx. The website offers the usual, user-friendly functionalities: a data catalog, a user registration procedure, and a search tool to select and access data. The metadata (data descriptions) are standardized and comply with international standards (ISO 19115-19139; the INSPIRE European Directive; the Global Change Master Directory Thesaurus). A Digital Object Identifier (DOI) assignment procedure automatically registers the datasets, making them easier to access, cite, reuse, and verify. At present, the ChArMEx database contains about 120 datasets, including more than 80 in situ datasets (the 2012, 2013 and 2014 summer campaigns, the background monitoring station of Ersa...), 25 model output sets (dust model intercomparison, MEDCORDEX scenarios...), and a high-resolution emission inventory over the Mediterranean.
    Many in situ datasets have been inserted into a relational database to enable more precise selection and download of different datasets in a shared format. Many dedicated satellite products (SEVIRI, TRMM, PARASOL...) are being processed and will soon be accessible through the database website. To meet the operational needs of the airborne and ground-based observational teams during the ChArMEx campaigns, a day-to-day chart display website has been developed and operated: http://choc.sedoo.org. It offers a convenient way to browse weather conditions and chemical composition during the campaign periods. Every scientist is invited to visit the ChArMEx websites, to register, and to request data. For any questions, contact charmex-database@sedoo.fr.

  5. Skill complementarity enhances heterophily in collaboration networks

    PubMed Central

    Xie, Wen-Jie; Li, Ming-Xia; Jiang, Zhi-Qiang; Tan, Qun-Zhao; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H. Eugene

    2016-01-01

    Much empirical evidence shows that individuals usually exhibit significant homophily in social networks. We demonstrate, however, that skill complementarity enhances heterophily in the formation of collaboration networks, where people prefer to forge social ties with people whose professions differ from their own. We construct a model to quantify the heterophily by assuming that individuals choose collaborators to maximize utility. Using a huge database of online societies, we find evidence of heterophily in collaboration networks. The results of model calibration confirm the presence of heterophily. Both the empirical analysis and the model calibration show that the heterophilous feature persists throughout the evolution of online societies. Furthermore, the degree of skill complementarity among collaborators is positively correlated with their production output. Our work sheds new light on the scientific research utility of virtual worlds for studying human behaviors in complex socioeconomic systems. PMID:26743687

  6. The MIPAS2D: 2-D analysis of MIPAS observations of ESA target molecules and minor species

    NASA Astrophysics Data System (ADS)

    Arnone, E.; Brizzi, G.; Carlotti, M.; Dinelli, B. M.; Magnani, L.; Papandrea, E.; Ridolfi, M.

    2008-12-01

    Measurements from the MIPAS instrument onboard the ENVISAT satellite were analyzed with the Geofit Multi-Target Retrieval (GMTR) system to obtain 2-dimensional fields of pressure, temperature and volume mixing ratios of H2O, O3, HNO3, CH4, N2O, and NO2. Secondary target species relevant to stratospheric chemistry were also analysed, and robust mixing ratios of N2O5, ClONO2, F11, F12, F14 and F22 were obtained. Other minor species with high uncertainties were not included in the database and will be the object of further studies. The analysis covers the original nominal observation mode from July 2002 to March 2004 and is currently being extended to the ongoing reduced-resolution mission. The GMTR algorithm was operated on a fixed 5-degree latitudinal grid in order to ease comparison with model calculations and climatological datasets. The generated database of atmospheric fields can be used directly for analyses based on averaging processes, without further interpolation. Samples of the obtained products are presented and discussed. The database of retrieved quantities is made available to the scientific community.

  7. The Global Genome Biodiversity Network (GGBN) Data Standard specification

    PubMed Central

    Droege, G.; Barker, K.; Seberg, O.; Coddington, J.; Benson, E.; Berendsohn, W. G.; Bunk, B.; Butler, C.; Cawsey, E. M.; Deck, J.; Döring, M.; Flemons, P.; Gemeinholzer, B.; Güntsch, A.; Hollowell, T.; Kelbert, P.; Kostadinov, I.; Kottmann, R.; Lawlor, R. T.; Lyal, C.; Mackenzie-Dodds, J.; Meyer, C.; Mulcahy, D.; Nussbeck, S. Y.; O'Tuama, É.; Orrell, T.; Petersen, G.; Robertson, T.; Söhngen, C.; Whitacre, J.; Wieczorek, J.; Yilmaz, P.; Zetzsche, H.; Zhang, Y.; Zhou, X.

    2016-01-01

    Genomic samples of non-model organisms are becoming increasingly important in a broad range of studies, from developmental biology and biodiversity analyses to conservation. Genomic sample definition, description, quality, voucher information and metadata all need to be digitized and disseminated across scientific communities. This information needs to be concise and consistent in today's ever-expanding bioinformatics era, so that complementary data aggregators can easily map databases to one another. In order to facilitate the exchange of information on genomic samples and their derived data, the Global Genome Biodiversity Network (GGBN) Data Standard is intended to provide a platform based on a documented agreement to promote the efficient sharing and usage of genomic sample material and associated specimen information in a consistent way. The new data standard presented here builds upon existing standards commonly used within the community, extending them with the capability to exchange data on tissue, environmental and DNA samples as well as sequences. The GGBN Data Standard will reveal and democratize the hidden contents of biodiversity biobanks, for the convenience of everyone in the wider biobanking community. Technical tools exist for data providers to easily map their databases to the standard. Database URL: http://terms.tdwg.org/wiki/GGBN_Data_Standard PMID:27694206

  8. TISSUES 2.0: an integrative web resource on mammalian tissue expression

    PubMed Central

    Palasca, Oana; Santos, Alberto; Stolte, Christian; Gorodkin, Jan; Jensen, Lars Juhl

    2018-01-01

    Physiological and molecular similarities between organisms make it possible to translate findings from simpler experimental systems—model organisms—into more complex ones, such as humans. This translation facilitates the understanding of biological processes under normal or disease conditions. Researchers aiming to identify the similarities and differences between organisms at the molecular level need resources collecting multi-organism tissue expression data. We have developed a database of gene–tissue associations in human, mouse, rat and pig by integrating multiple sources of evidence: transcriptomics covering all four species and proteomics (human only), manually curated and mined from the scientific literature. Through a scoring scheme, these associations are made comparable across all sources of evidence and across organisms. Furthermore, the scoring produces a confidence score assigned to each of the associations. The TISSUES database (version 2.0) is publicly accessible through a user-friendly web interface and as part of the STRING app for Cytoscape. In addition, we analyzed the agreement between datasets, across and within organisms, and found that agreement is mainly affected by the quality of the datasets rather than by the technologies used or the organisms compared. Database URL: http://tissues.jensenlab.org/ PMID:29617745

  9. The BioMart community portal: an innovative alternative to large, centralized data repositories

    PubMed Central

    Smedley, Damian; Haider, Syed; Durinck, Steffen; Pandini, Luca; Provero, Paolo; Allen, James; Arnaiz, Olivier; Awedh, Mohammad Hamza; Baldock, Richard; Barbiera, Giulia; Bardou, Philippe; Beck, Tim; Blake, Andrew; Bonierbale, Merideth; Brookes, Anthony J.; Bucci, Gabriele; Buetti, Iwan; Burge, Sarah; Cabau, Cédric; Carlson, Joseph W.; Chelala, Claude; Chrysostomou, Charalambos; Cittaro, Davide; Collin, Olivier; Cordova, Raul; Cutts, Rosalind J.; Dassi, Erik; Genova, Alex Di; Djari, Anis; Esposito, Anthony; Estrella, Heather; Eyras, Eduardo; Fernandez-Banet, Julio; Forbes, Simon; Free, Robert C.; Fujisawa, Takatomo; Gadaleta, Emanuela; Garcia-Manteiga, Jose M.; Goodstein, David; Gray, Kristian; Guerra-Assunção, José Afonso; Haggarty, Bernard; Han, Dong-Jin; Han, Byung Woo; Harris, Todd; Harshbarger, Jayson; Hastings, Robert K.; Hayes, Richard D.; Hoede, Claire; Hu, Shen; Hu, Zhi-Liang; Hutchins, Lucie; Kan, Zhengyan; Kawaji, Hideya; Keliet, Aminah; Kerhornou, Arnaud; Kim, Sunghoon; Kinsella, Rhoda; Klopp, Christophe; Kong, Lei; Lawson, Daniel; Lazarevic, Dejan; Lee, Ji-Hyun; Letellier, Thomas; Li, Chuan-Yun; Lio, Pietro; Liu, Chu-Jun; Luo, Jie; Maass, Alejandro; Mariette, Jerome; Maurel, Thomas; Merella, Stefania; Mohamed, Azza Mostafa; Moreews, Francois; Nabihoudine, Ibounyamine; Ndegwa, Nelson; Noirot, Céline; Perez-Llamas, Cristian; Primig, Michael; Quattrone, Alessandro; Quesneville, Hadi; Rambaldi, Davide; Reecy, James; Riba, Michela; Rosanoff, Steven; Saddiq, Amna Ali; Salas, Elisa; Sallou, Olivier; Shepherd, Rebecca; Simon, Reinhard; Sperling, Linda; Spooner, William; Staines, Daniel M.; Steinbach, Delphine; Stone, Kevin; Stupka, Elia; Teague, Jon W.; Dayem Ullah, Abu Z.; Wang, Jun; Ware, Doreen; Wong-Erasmus, Marie; Youens-Clark, Ken; Zadissa, Amonida; Zhang, Shi-Jian; Kasprzyk, Arek

    2015-01-01

    The BioMart Community Portal (www.biomart.org) is a community-driven effort to provide a unified interface to biomedical databases that are distributed worldwide. The portal provides access to numerous database projects supported by 30 scientific organizations. It includes over 800 different biological datasets spanning genomics, proteomics, model organisms, cancer data, ontology information and more. All resources available through the portal are independently administered and funded by their host organizations. The BioMart data federation technology provides a unified interface to all the available data. The latest version of the portal comes with many new databases that have been created by our ever-growing community. It also comes with better support and extensibility for data analysis and visualization tools. A new addition to our toolbox, the enrichment analysis tool, is now accessible through graphical and web-service interfaces. The BioMart Community Portal averages over one million requests per day. Building on this level of service and the wealth of information that has become available, the BioMart Community Portal has introduced a new, more scalable, and cheaper alternative to the large data stores maintained by specialized organizations. PMID:25897122

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, P.Y.; Wassom, J.S.

    Scientific and technological developments bring unprecedented stress to our environment. Society has to predict the results of potential health risks from technologically based actions that may have serious, far-reaching consequences. The potential for error in making such predictions or assessments is great and multiplies with the increasing size and complexity of the problem being studied. Because of this, the availability and use of reliable data is the key to any successful forecasting effort. Scientific research and development generate new data and information. Much of the scientific data being produced daily is stored in computers for subsequent analysis. This situation provides both an invaluable resource and an enormous challenge. With large amounts of government funds being devoted to health and environmental research programs, and with the maintenance of our living environment at stake, we must make maximum use of the resulting data to forecast and avert catastrophic effects. The most efficient means of obtaining the data necessary for assessing the health effects of chemicals is to utilize existing resources, such as the toxicology databases and information files developed at ORNL. To make the most efficient use of the data and information that have already been prepared, attention and resources should be directed toward projects that meticulously evaluate the available data and create specialized, peer-reviewed, value-added databases. Such projects include the National Library of Medicine's Hazardous Substances Data Bank and the U.S. Air Force Installation Restoration Toxicology Guide. These and similar value-added toxicology databases were developed at ORNL and are being maintained and updated. These databases and supporting information files, as well as some data evaluation techniques, are discussed in this paper, with special focus on how they are used to assess the potential health effects of environmental agents. 19 refs., 5 tabs.

  11. usSEABED: Gulf of Mexico and Caribbean (Puerto Rico and U.S. Virgin Islands) offshore surficial sediment data release

    USGS Publications Warehouse

    Buczkowski, Brian J.; Reid, Jane A.; Jenkins, Chris J.; Reid, Jamey M.; Williams, S. Jeffress; Flocks, James G.

    2006-01-01

    Over the past 50 years there has been an explosion in scientific interest, research effort and information gathered on the geologic sedimentary character of the United States continental margin. Data and information from thousands of publications have greatly increased our scientific understanding of the geologic origins of the shelf surface but rarely have those data been combined and integrated. This publication is the first release of the Gulf of Mexico and Caribbean (Puerto Rico and U.S. Virgin Islands) coastal and offshore data from the usSEABED database. The report contains a compilation of published and previously unpublished sediment texture and other geologic data about the sea floor from diverse sources. usSEABED is an innovative database system developed to bring assorted data together in a unified database. The dbSEABED system is used to process the data. Examples of maps displaying attributes such as grain size and sediment color are included. This database contains information that is a scientific foundation for the USGS Marine Aggregate Resources and Processes Assessment and Benthic Habitats projects, and will be useful to the marine science community for other studies of the Gulf of Mexico and Caribbean continental margins. This publication is divided into ten sections: Home, Introduction, Content, usSEABED (data), dbSEABED (processing), Data Catalog, References, Contacts, Acknowledgments and Frequently Asked Questions. Use the navigation bar on the left to navigate to specific sections of this report. Underlined topics throughout the publication are links to more information. Links to specific and detailed information on processing and those to pages outside this report will open in a new browser window.

  12. Unified Access Architecture for Large-Scale Scientific Datasets

    NASA Astrophysics Data System (ADS)

    Karna, Risav

    2014-05-01

    Data-intensive sciences have to deploy diverse large-scale database technologies for data analytics, as scientists now deal with much larger data volumes than ever before. While array databases have bridged many gaps between the needs of data-intensive research fields and DBMS technologies (Zhang 2011), invoking the other big-data tools that accompany these databases remains manual and separate from the database management interface. We identify this as an architectural challenge that will increasingly complicate the user's workflow, owing to the growing number of useful but isolated and niche database tools. Such use of data analysis tools in effect leaves the burden on the user to synchronize the results from other data manipulation and analysis tools with the database management system. To this end, we propose a unified access interface for using big-data tools within a large-scale scientific array database, using the database queries themselves to embed foreign routines belonging to the big-data tools. Such invocation of foreign data manipulation routines inside a database query can be made possible through a user-defined function (UDF). UDFs that can call modules from another language and interface back and forth between the query body and the side-loaded functions would be needed for this purpose. For the purpose of this research, we attempt to couple four widely used tools, Hadoop (hadoop1), Matlab (matlab1), R (r1) and ScaLAPACK (scalapack1), with the UDF feature of rasdaman (Baumann 98), an array-based data manager, to investigate this concept. The native array data model used by an array-based data manager provides compact data storage and high-performance operations on ordered data such as spatial data, temporal data, and matrix-based data for linear algebra operations (scidbusr1).
    Performance issues arising from the coupling of tools with different paradigms, niche functionalities, separate processes, and output data formats were anticipated and considered during the design of the unified architecture. The research focuses on the feasibility of the designed coupling mechanism and on evaluating the efficiency and benefits of our proposed unified access architecture. Zhang 2011: Zhang, Ying and Kersten, Martin and Ivanova, Milena and Nes, Niels, SciQL: Bridging the Gap Between Science and Relational DBMS, Proceedings of the 15th Symposium on International Database Engineering Applications, 2011. Baumann 98: Baumann, P., Dehmel, A., Furtado, P., Ritsch, R., Widmann, N., "The Multidimensional Database System RasDaMan", SIGMOD 1998, Proceedings ACM SIGMOD International Conference on Management of Data, June 2-4, 1998, Seattle, Washington, 1998. hadoop1: hadoop.apache.org, "Hadoop", http://hadoop.apache.org/, [Online; accessed 12-Jan-2014]. scalapack1: netlib.org/scalapack, "ScaLAPACK", http://www.netlib.org/scalapack, [Online; accessed 12-Jan-2014]. r1: r-project.org, "R", http://www.r-project.org/, [Online; accessed 12-Jan-2014]. matlab1: mathworks.com, "Matlab Documentation", http://www.mathworks.de/de/help/matlab/, [Online; accessed 12-Jan-2014]. scidbusr1: scidb.org, "SciDB User's Guide", http://scidb.org/HTMLmanual/13.6/scidb_ug, [Online; accessed 01-Dec-2013].
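    The core mechanism in this record, embedding foreign routines in a database query through a user-defined function, can be sketched in miniature. The sketch below is a hypothetical illustration, not rasdaman's actual UDF API: a registry maps UDF names to side-loaded routines, and a toy query evaluator applies a registered routine to an array operand (the names `register_udf`, `run_query`, and the `zscore` routine are all assumptions).

```python
import numpy as np

# Hypothetical UDF registry mapping names to side-loaded routines.
UDFS = {}

def register_udf(name, fn):
    """Register a foreign routine so queries can call it by name."""
    UDFS[name] = fn

def run_query(udf_name, array):
    """Toy query evaluator: apply a registered UDF to an array operand."""
    if udf_name not in UDFS:
        raise KeyError(f"unknown UDF: {udf_name}")
    return UDFS[udf_name](array)

# A side-loaded routine standing in for an external tool (e.g. an R or
# ScaLAPACK call) that the array database itself would not provide.
register_udf("zscore", lambda a: (a - a.mean()) / a.std())

result = run_query("zscore", np.array([1.0, 2.0, 3.0, 4.0]))
```

In a real array DBMS the registry and dispatch would live inside the query engine, so the foreign routine runs where the data resides instead of the user shuttling results between tools by hand.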

  13. Featured Article: Genotation: Actionable knowledge for the scientific reader

    PubMed Central

    Willis, Ethan; Sakauye, Mark; Jose, Rony; Chen, Hao; Davis, Robert L

    2016-01-01

    We present an article viewer application that allows a scientific reader to easily discover and share knowledge by linking genomics-related concepts to knowledge in disparate biomedical databases. High-throughput data streams generated by technical advancements have contributed to scientific knowledge discovery at an unprecedented rate. Biomedical informaticists have created a diverse set of databases to store and retrieve the discovered knowledge. The diversity and abundance of such resources present biomedical researchers with a knowledge-discovery challenge. These challenges highlight the need for a better informatics solution. We use a text mining algorithm, Genomine, to identify gene symbols from the text of a journal article. The identified symbols are supplemented with information from the GenoDB knowledgebase. The self-updating GenoDB contains information from the NCBI Gene, ClinVar, MedGen, dbSNP, KEGG, PharmGKB, UniProt, and HUGO Gene databases. The journal viewer is a web application accessible via a web browser. The features described herein are accessible on www.genotation.org. The Genomine algorithm identifies gene symbols with an F-score of 0.65. GenoDB currently contains information regarding 59,905 gene symbols, 5,633 drug–gene relationships, 5,981 gene–disease relationships, and 713 pathways. This application provides scientific readers with actionable knowledge related to the concepts of a manuscript. The reader is able to save and share supplements to be visualized in a graphical manner. This provides convenient access to details of complex biological phenomena, enabling biomedical researchers to generate novel hypotheses to further our knowledge of human health. This manuscript presents a novel application that integrates genomic, proteomic, and pharmacogenomic information to supplement the content of a biomedical manuscript and enable readers to automatically discover actionable knowledge. PMID:26900164
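    The gene-symbol spotting and the F-score metric mentioned above can be illustrated with a toy sketch. This is a stand-in, not the actual Genomine algorithm: a naive regex-plus-lexicon lookup, followed by the standard F1 computation that measures such extractors (the record reports 0.65 for the real system).

```python
import re

def extract_gene_symbols(text, lexicon):
    """Naive symbol spotter: uppercase alphanumeric tokens checked against
    a gene-symbol lexicon. Illustrative only, not Genomine itself."""
    candidates = set(re.findall(r"\b[A-Z][A-Z0-9]{1,9}\b", text))
    return candidates & lexicon

def f_score(predicted, actual):
    """F1: harmonic mean of precision and recall over symbol sets."""
    if not predicted or not actual:
        return 0.0
    tp = len(predicted & actual)
    precision = tp / len(predicted)
    recall = tp / len(actual)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

lexicon = {"BRCA1", "TP53", "EGFR"}
text = "Mutations in BRCA1 and TP53 were assayed; DNA was extracted."
found = extract_gene_symbols(text, lexicon)
```

Note how the lexicon filters out uppercase tokens like "DNA" that match the pattern but are not gene symbols; ambiguity of this kind is exactly what keeps real-world F-scores well below 1.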

  14. Tracing the scientific outputs in the field of Ebola research based on publications in the Web of Science.

    PubMed

    Yi, Fengyun; Yang, Pin; Sheng, Huifeng

    2016-04-15

    Ebola virus disease (hereafter EVD or Ebola) has a high fatality rate. The devastating effects of the current epidemic of Ebola in West Africa have put the global health response in acute focus. In response, the World Health Organization (WHO) has declared the Ebola outbreak in West Africa as a "Public Health Emergency of International Concern". A small proportion of scientific literature is dedicated to Ebola research. To identify global research trends in Ebola research, the Institute for Scientific Information (ISI) Web of Science™ database was used to search for data, which encompassed original articles published from 1900 to 2013. The keyword "Ebola" was used to identify articles for the purposes of this review. In order to include all published items, the database was searched using the Basic Search method. The earliest record of literature about Ebola indexed in the Web of Science is from 1977. A total of 2477 publications on Ebola, published between 1977 and 2014 (with the number of publications increasing annually), were retrieved from the database. Original research articles (n = 1623, 65.5%) were the most common type of publication. Almost all (96.5%) of the literature in this field was in English. The USA had the highest scientific output and greatest number of funding agencies. Journal of Virology published 239 papers on Ebola, followed by Journal of Infectious Diseases and Virology, which published 113 and 99 papers, respectively. A total of 1911 papers on Ebola were cited 61,477 times. This analysis identified the current state of research and trends in studies about Ebola between 1977 and 2014. Our bibliometric analysis provides a historical perspective on the progress in Ebola research.

  15. Development of a Data Citations Database for an Interdisciplinary Data Center

    NASA Astrophysics Data System (ADS)

    Chen, R. S.; Downs, R. R.; Schumacher, J.; Gerard, A.

    2017-12-01

    The scientific community has long depended on consistent citation of the scientific literature to enable traceability, support replication, and facilitate analysis and debate about scientific hypotheses, theories, assumptions, and conclusions. However, only in the past few years has the community focused on consistent citation of scientific data, e.g., through the application of Digital Object Identifiers (DOIs) to data, the development of peer-reviewed data publications, community principles and guidelines, and other mechanisms. This means that, moving ahead, it should be easier to identify and track data citations and conduct systematic bibliometric studies. However, this still leaves the problem that many legacy datasets and past citations lack DOIs, making it difficult to develop a historical baseline or assess trends. With this in mind, the NASA Socioeconomic Data and Applications Center (SEDAC) has developed a searchable citations database, containing more than 3,400 citations of SEDAC data and information products over the past 20 years. These citations were collected through various indices and search tools and in some cases through direct contacts with authors. The citations come from a range of natural, social, health, and engineering science journals, books, reports, and other media. The database can be used to find and extract citations filtered by a range of criteria, enabling quantitative analysis of trends, intercomparisons between data collections, and categorization of citations by type. We present a preliminary analysis of citations for selected SEDAC data collections, in order to establish a baseline and assess options for ongoing metrics to track the impact of SEDAC data on interdisciplinary science. We also present an analysis of the uptake of DOIs within data citations reported in published studies that used SEDAC data.
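    A searchable citations database supporting filtered trend analysis, as described above, can be sketched with a small relational schema. The table and column names below are hypothetical illustrations, not SEDAC's actual design; the grouped query shows the kind of per-collection, per-year counts and DOI-uptake figures the abstract mentions.

```python
import sqlite3

# In-memory database with a minimal, hypothetical citations schema.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE citations (
    id INTEGER PRIMARY KEY,
    collection TEXT,   -- data collection cited
    year INTEGER,      -- publication year of the citing work
    has_doi INTEGER    -- 1 if the citation used a dataset DOI
)""")
rows = [
    ("gpw", 2015, 0), ("gpw", 2016, 1), ("gpw", 2016, 1),
    ("epi", 2016, 0), ("epi", 2017, 1),
]
conn.executemany(
    "INSERT INTO citations (collection, year, has_doi) VALUES (?, ?, ?)",
    rows)

# Trend query: citations per collection per year, plus DOI uptake.
trend = conn.execute("""
    SELECT collection, year, COUNT(*) AS n, SUM(has_doi) AS with_doi
    FROM citations
    GROUP BY collection, year
    ORDER BY collection, year
""").fetchall()
```

The same grouped-query pattern extends naturally to the other filters the record describes, such as citation type or source discipline, by adding columns and `WHERE` clauses.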

  16. Suggestions for better data presentation in papers: an experience from a comprehensive study on national and sub-national trends of overweight and obesity.

    PubMed

    Djalalinia, Shirin; Kelishadi, Roya; Qorbani, Mostafa; Peykari, Niloofar; Kasaeian, Amir; Saeedi Moghaddam, Sahar; Gohari, Kimiya; Larijani, Bagher; Farzadfar, Farshad

    2014-12-01

    The importance of data quality, whether at the collection, analysis, or presentation stage, is a tangible and undeniable scientific fact and a central concern of research implementation. This paper aims to explain the main problems of Iranian scientific papers in providing better data on national and sub-national prevalence, incidence estimates, and trends of obesity and overweight. To assess and evaluate papers, we systematically followed an approved standard protocol. Retrieval of studies was performed through Thomson Reuters Web of Science, PubMed, and Scopus, as well as Iranian databases including Irandoc, the Scientific Information Database (SID), and IranMedex. Using GBD (Global Burden of Diseases) validated quality assessment forms to assess the quality and availability of data in papers, we considered the following four main domains: a) quality of studies, b) quality of reporting of results, c) responsiveness of corresponding authors, and d) diversity in study settings. We retrieved 3,253 records; of these, 1,875 were from international and 1,378 from national databases. After refining steps, 129 (3.97%) papers remained related to our study domain. More than 51% of relevant papers were excluded because of poor study quality. The numbers of reported total population and data points were 22,972 and 29 for boys, and 38,985 and 47 for girls, respectively. For all measures, missing values and diversity in study settings limited our ability to compare and analyze the results. Moreover, we had serious problems contacting the corresponding authors for necessary complementary information (receptiveness: 17.9%). As the present paper focuses on the main problems of Iranian scientific papers and proposes suggestions, the results will have implications for better policy making.

  17. Featured Article: Genotation: Actionable knowledge for the scientific reader.

    PubMed

    Nagahawatte, Panduka; Willis, Ethan; Sakauye, Mark; Jose, Rony; Chen, Hao; Davis, Robert L

    2016-06-01

    We present an article viewer application that allows a scientific reader to easily discover and share knowledge by linking genomics-related concepts to knowledge in disparate biomedical databases. High-throughput data streams generated by technical advancements have contributed to scientific knowledge discovery at an unprecedented rate. Biomedical informaticists have created a diverse set of databases to store and retrieve the discovered knowledge. The diversity and abundance of such resources present biomedical researchers with a knowledge-discovery challenge. These challenges highlight the need for a better informatics solution. We use a text mining algorithm, Genomine, to identify gene symbols from the text of a journal article. The identified symbols are supplemented with information from the GenoDB knowledgebase. The self-updating GenoDB contains information from the NCBI Gene, ClinVar, MedGen, dbSNP, KEGG, PharmGKB, UniProt, and HUGO Gene databases. The journal viewer is a web application accessible via a web browser. The features described herein are accessible on www.genotation.org. The Genomine algorithm identifies gene symbols with an F-score of 0.65. GenoDB currently contains information regarding 59,905 gene symbols, 5,633 drug-gene relationships, 5,981 gene-disease relationships, and 713 pathways. This application provides scientific readers with actionable knowledge related to the concepts of a manuscript. The reader is able to save and share supplements to be visualized in a graphical manner. This provides convenient access to details of complex biological phenomena, enabling biomedical researchers to generate novel hypotheses to further our knowledge of human health. This manuscript presents a novel application that integrates genomic, proteomic, and pharmacogenomic information to supplement the content of a biomedical manuscript and enable readers to automatically discover actionable knowledge. 
© 2016 by the Society for Experimental Biology and Medicine.

  18. [Bibliographic survey of the Orvosi Hetilap of Hungary: looking back and moving forward].

    PubMed

    Berhidi, Anna; Margittai, Zsuzsa; Vasas, Lívia

    2012-12-02

    The first step in the process of acquiring an impact factor for a scientific journal is registration in the Thomson Reuters Web of Science database. The aim of this article is to evaluate the content and structure of Orvosi Hetilap with regard to the selection criteria of Thomson Reuters, in particular the objectives of citation analysis. The authors evaluated issues of Orvosi Hetilap published in 2011 and calculated the unofficial impact factor of the journal based on systematic searches in various citation index databases. The number of citations, the quality of citing journals, and the scientific output of the editorial board members were evaluated. Adherence to the guidelines of international publishers was assessed as well. The unofficial impact factor of Orvosi Hetilap has risen continuously every year in the past decade (except for 2004 and 2010). The articles of Orvosi Hetilap are widely cited by international authors and by high-impact-factor journals. Further, more than half of the cited articles are open access. The most frequently cited categories are original and review articles as well as clinical studies. Orvosi Hetilap is a weekly journal, which is covered by many international databases such as PubMed/Medline, Scopus, Embase, and BIOSIS Previews. As regards the scientific output of the editorial board members, the truncated mean of the number of their publications was 497, of citations 2,446, of independent citations 2,014, and of h-index 21. While Orvosi Hetilap fulfils many criteria for coverage by Thomson Reuters, it is worthwhile to implement an online citation system in order to increase the number of citations. In addition, the scientific publications of all editorial board members should be made easily accessible. Finally, publication of comparative studies by multiple authors is encouraged, as are papers containing epidemiological data analyses.

  19. Medicinal Plants from Mexico, Central America, and the Caribbean Used as Immunostimulants

    PubMed Central

    Juárez-Vázquez, María del Carmen; Campos-Xolalpa, Nimsi

    2016-01-01

    A literature review was undertaken by analyzing distinguished books, undergraduate and postgraduate theses, and peer-reviewed scientific articles and by consulting worldwide accepted scientific databases, such as SCOPUS, Web of Science, SCIELO, Medline, and Google Scholar. Medicinal plants used as immunostimulants were classified into two categories: (1) plants with pharmacological studies and (2) plants without pharmacological research. Medicinal plants with pharmacological studies of their immunostimulatory properties were subclassified into four groups as follows: (a) plant extracts evaluated for in vitro effects, (b) plant extracts with documented in vivo effects, (c) active compounds tested in in vitro studies, and (d) active compounds assayed in animal models. Pharmacological studies have been conducted on 29 of the plants, including extracts and compounds, whereas 75 plants lack pharmacological studies regarding their immunostimulatory activity. Medicinal plants were experimentally studied in vitro (19 plants) and in vivo (8 plants). A total of 12 compounds isolated from medicinal plants used as immunostimulants have been tested using in vitro (11 compounds) and in vivo (2 compounds) assays. This review clearly indicates the need to perform scientific studies with medicinal flora from Mexico, Central America, and the Caribbean, to obtain new immunostimulatory agents. PMID:27042188

  20. [The health food product Noni--does marketing harmonize with the current status of research?].

    PubMed

    Johansen, Rolf

    2008-03-13

    Norwegian cancer patients frequently use noni. The objective of this study was to find out whether the way noni is marketed in Norway and the health claims made about the product harmonize with current scientific knowledge of its benefits and adverse effects. An overview of medical research on noni was obtained from three databases. Web sites of private persons and of companies that sell noni in Norway were examined. Books, pamphlets and other materials from a company specializing in selling information material about noni were also examined. 48 scientific articles were included in the study, but none of these were clinical studies in humans. Several pharmacological effects of noni have been shown in vitro and in animal models (e.g., increased survival of animals with cancer). The information material describes noni as a health-promoting product that patients with most diseases will benefit from. Noni is largely sold by multi-level marketing, but is also commonly sold by health food stores. There is no scientific basis for claiming that patients will benefit from using noni for any disease. The way this product is sold has several worrying aspects.

  1. Geodesy, a Bibliometric Approach for 2000-2006

    NASA Astrophysics Data System (ADS)

    Vazquez, G.; Landeros, C. F.

    2007-12-01

    In recent years, bibliometric methods have frequently been applied in the development and evaluation of scientific research. This work presents a bibliometric analysis of the research performed in the field of geodesy, the "science of the measurement and mapping of the earth's surface, including its external gravity field". The objective of this work is to present a complete overview of the research generated in this field, assembling and studying the most important publications of the past seven years. The analysis covered the SCOPUS and WEB OF SCIENCE databases for all geodetic scientific articles published between 2000 and 2006. The search profile was designed to seek titles and article descriptors using the terms geodesy and geodetic, as well as other terms associated with the topics geodetic surfaces, vertical measurements, reference systems and frames, modern space-geodetic techniques, and satellite missions. Some preliminary results have been achieved, specifically Bradford's law of distribution for journals and educational institutions, and Lotka's law for authors, which also covers cooperation between countries in terms of co-authoring scientific articles. For the distributions, the model suggested by Egghe (2002) was adopted for determining the cores.
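
    Lotka's law, applied above to the author analysis, states that the number of authors with n publications falls off roughly as 1/n^a, with a close to 2. A small illustration of fitting the exponent on log-log axes, using synthetic counts rather than the study's data:

```python
import math

def fit_lotka_exponent(author_counts):
    """Fit a in f(n) = C / n**a by least squares on log-log axes,
    where author_counts[n] is the number of authors with n papers."""
    xs = [math.log(n) for n in author_counts]
    ys = [math.log(c) for c in author_counts.values()]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return -slope  # slope of log f versus log n is -a

# Synthetic counts following an exact inverse-square law (C = 1000):
counts = {n: 1000 / n**2 for n in range(1, 9)}
print(fit_lotka_exponent(counts))
```

On this exact power law the fitted exponent recovers 2; real bibliometric counts scatter around the line, which is why a least-squares fit is used.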

  2. Medicinal Plants from Mexico, Central America, and the Caribbean Used as Immunostimulants.

    PubMed

    Alonso-Castro, Angel Josabad; Juárez-Vázquez, María Del Carmen; Campos-Xolalpa, Nimsi

    2016-01-01

    A literature review was undertaken by analyzing distinguished books, undergraduate and postgraduate theses, and peer-reviewed scientific articles and by consulting worldwide accepted scientific databases, such as SCOPUS, Web of Science, SCIELO, Medline, and Google Scholar. Medicinal plants used as immunostimulants were classified into two categories: (1) plants with pharmacological studies and (2) plants without pharmacological research. Medicinal plants with pharmacological studies of their immunostimulatory properties were subclassified into four groups as follows: (a) plant extracts evaluated for in vitro effects, (b) plant extracts with documented in vivo effects, (c) active compounds tested in in vitro studies, and (d) active compounds assayed in animal models. Pharmacological studies have been conducted on 29 of the plants, including extracts and compounds, whereas 75 plants lack pharmacological studies regarding their immunostimulatory activity. Medicinal plants were experimentally studied in vitro (19 plants) and in vivo (8 plants). A total of 12 compounds isolated from medicinal plants used as immunostimulants have been tested using in vitro (11 compounds) and in vivo (2 compounds) assays. This review clearly indicates the need to perform scientific studies with medicinal flora from Mexico, Central America, and the Caribbean, to obtain new immunostimulatory agents.

  3. Electronic structure of atoms: atomic spectroscopy information system

    NASA Astrophysics Data System (ADS)

    Kazakov, V. V.; Kazakov, V. G.; Kovalev, V. S.; Meshkov, O. I.; Yatsenko, A. S.

    2017-10-01

    The article presents a Russian atomic spectroscopy information system, Electronic Structure of Atoms (IS ESA) (http://grotrian.nsu.ru), and describes its main features and the options it offers to support research and training. The database contains over 234 000 records; great attention has been paid to experimental data and to uniform filling of the database for all atomic numbers Z, including classified levels and transitions of rare earth and transuranic elements and their ions. Original means of visualizing scientific data in the form of spectrograms and Grotrian diagrams are proposed. Presentation of spectral data in the form of interactive color charts facilitates the understanding and analysis of the properties of atomic systems. The spectral data of the IS ESA, together with its functionality, are effective for solving various scientific problems and for the training of specialists.

  4. [Changes in nursing administration in supporting transplantation in Brazil].

    PubMed

    Cintra, Vivian; Sanna, Maria Cristina

    2005-01-01

    This historical and bibliographic study aimed to understand how Nursing was organized to support care in transplantation. The HISA, LILACS, BDENF, PERIENF and DEDALUS databases were consulted, and thirteen references were found, ten of which were scientific articles, two were master's dissertations and one was a doctoral thesis. The span of time chosen for study ranges from the date of the first kidney transplant in Brazil (1965), to the date of publication of the last scientific article found in the databases mentioned above (2003). After reading these articles, the ones that were similar in topic were grouped together, thus creating the thematic axis for the presentation of the results. The results showed that the Nursing profession has played an important and active role in transplants ever since the first procedure in 1965.

  5. IRIS Toxicological Review of Tetrahydrofuran (THF) (External ...

    EPA Pesticide Factsheets

    EPA is conducting a peer review and public comment of the scientific basis supporting the human health hazard and dose-response assessment of tetrahydrofuran (THF) that, when finalized, will appear in the Integrated Risk Information System (IRIS) database. EPA is undertaking an Integrated Risk Information System (IRIS) health assessment for tetrahydrofuran. IRIS is an EPA database containing Agency scientific positions on potential adverse human health effects that may result from chronic (or lifetime) exposure to chemicals in the environment. IRIS contains chemical-specific summaries of qualitative and quantitative health information in support of two steps of the risk assessment paradigm, i.e., hazard identification and dose-response evaluation. IRIS assessments are used in combination with situation-specific exposure assessment information to evaluate potential public health risk associated with environmental contaminants.

  6. Development of a conceptual model evaluating the humanistic and economic burden of Crohn's disease: implications for patient-reported outcomes measurement and economic evaluation.

    PubMed

    Gater, Adam; Kitchen, Helen; Heron, Louise; Pollard, Catherine; Håkan-Bloch, Jonas; Højbjerre, Lise; Hansen, Brian Bekker; Strandberg-Larsen, Martin

    2015-01-01

    The primary objective of this review is to develop a conceptual model for Crohn's disease (CD) outlining the disease burden for patients, healthcare systems and wider society, as reported in the scientific literature. A search was conducted using MEDLINE, PsycINFO, EconLit, Health Economic Evaluation Database and Centre for Reviews and Dissemination databases. Patient-reported outcome (PRO) measures widely used in CD were reviewed according to the US FDA PRO Guidance for Industry. The resulting conceptual model highlights the characterization of CD by gastrointestinal disturbances, extra-intestinal and systemic symptoms. These symptoms impact physical functioning, ability to complete daily activities, emotional wellbeing, social functioning, sexual functioning and ability to work. Gaps in conceptual coverage and evidence of reliability and validity for some PRO measures were noted. Review findings also highlight the substantial direct and indirect costs associated with CD. Evidence from the literature confirms the substantial burden of CD to patients and wider society; however, future research is still needed to further understand burden from the perspective of patients and to accurately understand the economic burden of disease. Challenges with existing PRO measures also suggest the need for future research to refine or develop new measures.

  7. BioModels.net Web Services, a free and integrated toolkit for computational modelling software.

    PubMed

    Li, Chen; Courtot, Mélanie; Le Novère, Nicolas; Laibe, Camille

    2010-05-01

    Exchanging and sharing scientific results are essential for researchers in the field of computational modelling. BioModels.net defines agreed-upon standards for model curation. A fundamental one, MIRIAM (Minimum Information Requested in the Annotation of Models), standardises the annotation and curation process of quantitative models in biology. To support this standard, MIRIAM Resources maintains a set of standard data types for annotating models, and provides services for manipulating these annotations. Furthermore, BioModels.net creates controlled vocabularies, such as SBO (Systems Biology Ontology), which strictly indexes, defines and links terms used in Systems Biology. Finally, BioModels Database provides a free, centralised, publicly accessible database for storing, searching and retrieving curated and annotated computational models. Each resource provides a web interface to submit, search, retrieve and display its data. In addition, the BioModels.net team provides a set of Web Services which allows the community to programmatically access the resources. A user is then able to perform remote queries, such as retrieving a model and resolving all its MIRIAM annotations, as well as getting the details of the associated SBO terms. These web services use established standards. Communications rely on SOAP (Simple Object Access Protocol) messages and the available queries are described in a WSDL (Web Services Description Language) file. Several libraries are provided in order to simplify the development of client software. BioModels.net Web Services take researchers one step further towards simulating and understanding a biological system in its entirety, by allowing them to retrieve biological models in their own tools, combine queries in workflows and analyse models efficiently.
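
    As an illustration of the annotation scheme described above, MIRIAM annotations identify entities with URNs of the form urn:miriam:collection:identifier, with reserved characters percent-encoded. The following is a minimal parser assuming only that layout; it is a hypothetical helper, not part of the BioModels.net client libraries:

```python
from urllib.parse import unquote

def parse_miriam_urn(urn):
    """Split a MIRIAM URN such as 'urn:miriam:obo.go:GO%3A0008150'
    into its data collection and (percent-decoded) identifier."""
    parts = urn.split(":", 3)
    if len(parts) != 4 or parts[0] != "urn" or parts[1] != "miriam":
        raise ValueError("not a MIRIAM URN: " + urn)
    _, _, collection, identifier = parts
    return collection, unquote(identifier)

print(parse_miriam_urn("urn:miriam:obo.go:GO%3A0008150"))
```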

  8. The CHARA Array Database

    NASA Astrophysics Data System (ADS)

    Jones, Jeremy; Schaefer, Gail; ten Brummelaar, Theo; Gies, Douglas; Farrington, Christopher

    2018-01-01

    We are building a searchable database for the CHARA Array data archive. The Array consists of six telescopes linked together as an interferometer, providing sub-milliarcsecond resolution in the optical and near-infrared. The Array enables a variety of scientific studies, including measuring stellar angular diameters, imaging stellar shapes and surface features, mapping the orbits of close binary companions, and resolving circumstellar environments. This database is one component of an NSF/MSIP funded program to provide open access to the CHARA Array to the broader astronomical community. This archive goes back to 2004 and covers all the beam combiners on the Array. We discuss the current status of and future plans for the public database, and give directions on how to access it.

  9. A Unique Digital Electrocardiographic Repository for the Development of Quantitative Electrocardiography and Cardiac Safety: The Telemetric and Holter ECG Warehouse (THEW)

    PubMed Central

    Couderc, Jean-Philippe

    2010-01-01

    The sharing of scientific data reinforces open scientific inquiry; it encourages diversity of analysis and opinion while promoting new research and facilitating the education of the next generations of scientists. In this article, we present an initiative for the development of a repository containing continuous electrocardiographic recordings and their associated clinical information. This information is shared with the worldwide scientific community in order to improve quantitative electrocardiology and cardiac safety. First, we present the objectives of the initiative and its mission. Then, we describe the resources available in the initiative in terms of three components: data, expertise and tools. The data available in the Telemetric and Holter ECG Warehouse (THEW) include continuous ECG signals and associated clinical information. The initiative attracted various academic and private partners whose expertise covers a wide range of research areas related to quantitative electrocardiography; their contribution to the THEW promotes the cross-fertilization of scientific knowledge, resources, and ideas that will advance the field of quantitative electrocardiography. Finally, the tools of the THEW include software and servers to access and review the data available in the repository. To conclude, the THEW is an initiative developed to benefit the scientific community and to advance the field of quantitative electrocardiography and cardiac safety. It is a new repository designed to complement existing ones such as Physionet, the AHA-BIH Arrhythmia Database, and the CSE database. The THEW hosts unique datasets from clinical trials and drug safety studies that, so far, were not available to the worldwide scientific community. PMID:20863512

  10. Digital Rebirth of the Greatest Church of Cluny Maior Ecclesia: from Optronic Surveys to Real Time Use of the Digital Model

    NASA Astrophysics Data System (ADS)

    Landrieu, J.; Père, C.; Rollier, J.; Castandet, S.; Schotte, G.

    2011-09-01

    Our multidisciplinary team has virtually reconstructed the greatest church of the Romanesque period in Europe. The third church of the Abbey of Cluny (12th c.) was destroyed after the French Revolution, leaving only 8% of the building standing. Many documents have been studied in order to include the latest archaeological knowledge in the virtual model. Most remains have been scanned for CAD restitution. The mock-up of the church required 1600 different numerical files, including the scanned pieces and the anastylosis of a Romanesque portal, a Gothic façade and a mosaic pavement. We faced various difficulties in assembling the different elements of the huge building and in including the digitized parts. Our workflow consisted of generating geometrical shapes of the church, enriched with metadata such as texture, material... The whole mock-up was finally exported to dedicated software to run the rendering step. Our work consisted of creating a whole database of 3D models as well as 2D sources (plans, engravings, pictures...) accessible to the scientific community. The scientific perspectives focus on a representation of the grand church in virtual immersion at scale 1 and on access to the digital mock-up through Augmented Reality.

  11. Synthesis meets theory: Past, present and future of rational chemistry

    NASA Astrophysics Data System (ADS)

    Fianchini, Mauro

    2017-11-01

    Chemical synthesis has its roots in the empirical approach of alchemy. Nonetheless, the birth of the scientific method, technical and technological advances (exploiting revolutionary discoveries in physics) and the improved management and sharing of growing databases greatly contributed to the evolution of chemistry from an esoteric practice into a mature scientific discipline over the last 400 years. Furthermore, thanks to the evolution of computational resources, platforms and media in the last 40 years, theoretical chemistry has added the final missing tile to the puzzle in the process of "rationalizing" chemistry. The use of mathematical models of chemical properties, behaviors and reactivities is nowadays ubiquitous in the literature. Theoretical chemistry has been successful in the difficult task of complementing and explaining synthetic results and providing rigorous insights when these are otherwise unattainable by experiment. The first part of this review walks the reader through a concise historical overview of the evolution of the "model" in chemistry. Salient milestones have been highlighted and briefly discussed. The second part focuses on a general description of recent state-of-the-art computational techniques currently used by chemists worldwide to produce synergistic models between theory and experiment. Each section is complemented by key examples taken from the literature that illustrate the application of the technique discussed therein.

  12. Sensitivity test and ensemble hazard assessment for tephra fallout at Campi Flegrei, Italy

    NASA Astrophysics Data System (ADS)

    Selva, J.; Costa, A.; De Natale, G.; Di Vito, M. A.; Isaia, R.; Macedonio, G.

    2018-02-01

    We present the results of a statistical study on tephra dispersal in the case of a reactivation of the Campi Flegrei volcano. To represent the spectrum of possible eruptive sizes, four classes of eruptions were considered. Excluding lava emission, three classes are explosive (Small, Medium, and Large) and can produce a significant quantity of volcanic ash. Hazard assessments were made through simulations of the atmospheric dispersion of ash and lapilli, considering the full variability of winds and eruptive vents. The results are presented in the form of conditional hazard curves given the occurrence of specific eruptive sizes, representative members of each size class, which are then combined to quantify the conditional hazard given an eruption of any size. The main focus of this analysis was to constrain the epistemic uncertainty (i.e. that associated with the level of scientific knowledge of the phenomena), in order to provide unbiased hazard estimates. The epistemic uncertainty in the estimation of the hazard curves was quantified by making use of scientifically acceptable alternatives aggregated into the final results. The choice of such alternative models was made after a comprehensive sensitivity analysis which considered different weather databases, alternative modelling of submarine eruptive vents, tephra total grain-size distributions (TGSD) with different relative mass fractions of fine ash, and the effect of ash aggregation. The results showed that the dominant uncertainty is related to the combined effect of the uncertainty regarding the fraction of fine particles with respect to the total mass and of how ash aggregation is modelled. The latter is particularly relevant in the case of magma-water interactions during explosive eruptive phases, when a large fraction of fine ash can form accretionary lapilli that may contribute significantly to increasing the tephra load in proximal areas. The variability induced by the use of different meteorological databases and by the selected approach to modelling offshore eruptions was relatively insignificant. The uncertainty arising from the alternative implementations, which would have been neglected in standard (Bayesian) quantifications, was finally quantified by ensemble modelling and represented by hazard and probability maps produced at different confidence levels.
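
    Combining conditional hazard curves across size classes, as the abstract describes, is an application of the law of total probability: the exceedance probability given any eruption is the size-probability-weighted sum of the per-size curves. A sketch with invented probabilities (not the study's values):

```python
def combine_hazard_curves(size_probs, curves):
    """Total-probability combination: size_probs[k] is P(size k | eruption),
    curves[k][i] is P(load >= z_i | size k); returns P(load >= z_i | eruption)."""
    n = len(next(iter(curves.values())))
    return [sum(size_probs[k] * curves[k][i] for k in size_probs)
            for i in range(n)]

# Illustrative exceedance probabilities at three load thresholds z_0 < z_1 < z_2:
p_size = {"small": 0.6, "medium": 0.3, "large": 0.1}
curves = {
    "small":  [0.50, 0.10, 0.01],
    "medium": [0.80, 0.40, 0.05],
    "large":  [0.95, 0.70, 0.30],
}
combined = combine_hazard_curves(p_size, curves)
print(combined)
```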

  13. Tuning hERG out: Antitarget QSAR Models for Drug Development

    PubMed Central

    Braga, Rodolpho C.; Alves, Vinícius M.; Silva, Meryck F. B.; Muratov, Eugene; Fourches, Denis; Tropsha, Alexander; Andrade, Carolina H.

    2015-01-01

    Several non-cardiovascular drugs have been withdrawn from the market due to their inhibition of hERG K+ channels, which can potentially lead to severe heart arrhythmia and death. As hERG safety testing is a mandatory FDA-required procedure, there is considerable interest in developing predictive computational tools to identify and filter out potential hERG blockers early in the drug discovery process. In this study, we aimed to generate predictive and well-characterized quantitative structure–activity relationship (QSAR) models for hERG blockage using the largest publicly available dataset of 11,958 compounds from the ChEMBL database. The models have been developed and validated according to OECD guidelines using four types of descriptors and four different machine-learning techniques. The classification accuracies discriminating blockers from non-blockers were as high as 0.83–0.93 on the external set. Model interpretation revealed several SAR rules, which can guide the structural optimization of some hERG blockers into non-blockers. We have also applied the generated models to screen the World Drug Index (WDI) database and identify putative hERG blockers and non-blockers among currently marketed drugs. The developed models can reliably identify blockers and non-blockers, which could be useful for the scientific community. A freely accessible web server has been developed allowing users to identify putative hERG blockers and non-blockers in chemical libraries of their interest (http://labmol.farmacia.ufg.br/predherg). PMID:24805060
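
    The paper's models use specific descriptor sets and machine-learning methods not reproduced here. As a generic illustration of the classification step only, the following sketch applies a simple k-nearest-neighbours vote to synthetic two-dimensional "descriptors" (toy values, not real hERG data and not the authors' method):

```python
import math

def knn_predict(train, query, k=3):
    """Classify `query` (a descriptor vector) by majority vote among its
    k nearest training examples; train is a list of (vector, label) pairs,
    label 1 = blocker, 0 = non-blocker."""
    nearest = sorted(train, key=lambda ex: math.dist(ex[0], query))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 > k else 0

# Toy descriptor vectors (two invented, scaled descriptors per compound):
train = [
    ((4.2, 5.1), 1), ((3.9, 4.8), 1), ((4.5, 5.5), 1),   # "blockers"
    ((1.0, 2.0), 0), ((0.8, 2.4), 0), ((1.3, 1.8), 0),   # "non-blockers"
]
print(knn_predict(train, (4.0, 5.0)))  # query near the blocker cluster
```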

  14. Continental Scientific Drilling Program Data Base

    NASA Astrophysics Data System (ADS)

    Pawloski, Gayle

    The Continental Scientific Drilling Program (CSDP) database at Lawrence Livermore National Laboratory is a central repository cataloguing information from United States drill holes. Most holes have been drilled or proposed by various federal agencies. Some holes have been commercially funded. This database is funded by the Office of Basic Energy Sciences of the Department of Energy (OBES/DOE) to serve the entire scientific community. Through the unrestricted use of the database, it is possible to reduce drilling costs and maximize the scientific value of current and planned efforts of federal agencies and industry by offering the opportunity for add-on experiments and supplementing knowledge with additional information from existing drill holes.

  15. Documenting Models for Interoperability and Reusability ...

    EPA Pesticide Factsheets

    Many modeling frameworks compartmentalize science via individual models that link sets of small components to create larger modeling workflows. Developing integrated watershed models increasingly requires coupling multidisciplinary, independent models, as well as collaboration between scientific communities, since component-based modeling can integrate models from different disciplines. Integrated Environmental Modeling (IEM) systems focus on transferring information between components by capturing a conceptual site model; establishing local metadata standards for input/output of models and databases; managing data flow between models and throughout the system; facilitating quality control of data exchanges (e.g., checking units, unit conversions, transfers between software languages); warning and error handling; and coordinating sensitivity/uncertainty analyses. Although many computational software systems facilitate communication between, and execution of, components, there are no common approaches, protocols, or standards for turn-key linkages between software systems and models, especially if modifying components is not the intent. Using a standard ontology, this paper reviews how models can be described for discovery, understanding, evaluation, access, and implementation to facilitate interoperability and reusability. In the proceedings of the International Environmental Modelling and Software Society (iEMSs), 8th International Congress on Environmental Mod

  16. Chiropractic: An Introduction

    MedlinePlus

    ... the sciences. Chiropractic training is a 4-year academic program that includes both classroom work and direct ... health approaches, including publications and searches of Federal databases of scientific and medical literature. The Clearinghouse does ...

  17. rAvis: an R-package for downloading information stored in Proyecto AVIS, a citizen science bird project.

    PubMed

    Varela, Sara; González-Hernández, Javier; Casabella, Eduardo; Barrientos, Rafael

    2014-01-01

    Citizen science projects store an enormous amount of information about species distribution, diversity and characteristics. Researchers are now beginning to make use of this rich collection of data. However, access to these databases is not always straightforward. Apart from the largest and international projects, citizen science repositories often lack specific Application Programming Interfaces (APIs) to connect them to the scientific environments. Thus, it is necessary to develop simple routines to allow researchers to take advantage of the information collected by smaller citizen science projects, for instance, programming specific packages to connect them to popular scientific environments (like R). Here, we present rAvis, an R-package to connect R-users with Proyecto AVIS (http://proyectoavis.com), a Spanish citizen science project with more than 82,000 bird observation records. We develop several functions to explore the database, to plot the geographic distribution of the species occurrences, and to generate personal queries to the database about species occurrences (number of individuals, distribution, etc.) and birdwatcher observations (number of species recorded by each collaborator, UTMs visited, etc.). This new R-package will allow scientists to access this database and to exploit the information generated by Spanish birdwatchers over the last 40 years.
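
    The kinds of queries the package supports (individuals per species, species recorded per collaborator) reduce to simple aggregations over the observation table. A Python sketch over a hypothetical records list (field names invented for illustration, not rAvis's actual schema):

```python
from collections import Counter

# Hypothetical observation records mimicking a citizen-science table:
records = [
    {"species": "Passer domesticus", "observer": "ana",  "count": 12},
    {"species": "Passer domesticus", "observer": "luis", "count": 3},
    {"species": "Hirundo rustica",   "observer": "ana",  "count": 5},
]

def individuals_per_species(rows):
    """Total individuals reported for each species."""
    totals = Counter()
    for row in rows:
        totals[row["species"]] += row["count"]
    return dict(totals)

def species_per_observer(rows):
    """Number of distinct species each collaborator has recorded."""
    seen = {}
    for row in rows:
        seen.setdefault(row["observer"], set()).add(row["species"])
    return {obs: len(sp) for obs, sp in seen.items()}

print(individuals_per_species(records))
print(species_per_observer(records))
```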

  18. A novel database of bio-effects from non-ionizing radiation.

    PubMed

    Leach, Victor; Weller, Steven; Redmayne, Mary

    2018-06-06

    A significant amount of electromagnetic field/electromagnetic radiation (EMF/EMR) research is available that examines biological and disease-associated endpoints. The quantity, variety and changing parameters in the available research can be challenging when undertaking a literature review or meta-analysis, preparing a study design, building reference lists or comparing findings between relevant scientific papers. The Oceania Radiofrequency Scientific Advisory Association (ORSAA) has created a comprehensive, non-biased, multi-categorized, searchable database of papers on non-ionizing EMF/EMR to help address these challenges. It is regularly added to, freely accessible online and designed to allow data to be easily retrieved, sorted and analyzed. This paper demonstrates the content and search flexibility of the ORSAA database. Demonstration searches are presented by Effect/No Effect; frequency band(s); in vitro; in vivo; biological effects; study type; and funding source. As of 15 September 2017, the clear majority of the 2653 papers captured in the database examine outcomes in the 300 MHz-3 GHz range. There are three times more biological "Effect" than "No Effect" papers; nearly a third of papers provide no funding statement; industry-funded studies more often than not find "No Effect", while institutionally funded studies commonly reveal "Effects". The country in which a study is conducted/funded also appears to have a dramatic influence on the likely outcome.

  19. rAvis: An R-Package for Downloading Information Stored in Proyecto AVIS, a Citizen Science Bird Project

    PubMed Central

    Varela, Sara; González-Hernández, Javier; Casabella, Eduardo; Barrientos, Rafael

    2014-01-01

    Citizen science projects store an enormous amount of information about species distribution, diversity and characteristics. Researchers are now beginning to make use of this rich collection of data. However, access to these databases is not always straightforward. Apart from the largest and international projects, citizen science repositories often lack specific Application Programming Interfaces (APIs) to connect them to the scientific environments. Thus, it is necessary to develop simple routines to allow researchers to take advantage of the information collected by smaller citizen science projects, for instance, programming specific packages to connect them to popular scientific environments (like R). Here, we present rAvis, an R-package to connect R-users with Proyecto AVIS (http://proyectoavis.com), a Spanish citizen science project with more than 82,000 bird observation records. We develop several functions to explore the database, to plot the geographic distribution of the species occurrences, and to generate personal queries to the database about species occurrences (number of individuals, distribution, etc.) and birdwatcher observations (number of species recorded by each collaborator, UTMs visited, etc.). This new R-package will allow scientists to access this database and to exploit the information generated by Spanish birdwatchers over the last 40 years. PMID:24626233

  20. Databases on biotechnology and biosafety of GMOs.

    PubMed

    Degrassi, Giuliano; Alexandrova, Nevena; Ripandelli, Decio

    2003-01-01

    Due to the involvement of the scientific, industrial, commercial and public sectors of society, the complexity of the issues concerning the safety of genetically modified organisms (GMOs) for the environment, agriculture, and human and animal health calls for wide coverage of information. Accordingly, the development of the field of biotechnology, along with concerns about the fate of released GMOs, has led to a rapid development of tools for disseminating such information. As a result, there is a growing number of databases aimed at collecting and storing information related to GMOs. Most of the sites deal with information on environmental releases, field trials, transgenes and related sequences, regulations and legislation, risk assessment documents, and literature. Databases are mainly established and managed by scientific, national or international authorities, and are addressed to scientists, government officials, policy makers, consumers, farmers, environmental groups and civil society representatives. This complexity can lead to overlapping information. The purpose of the present review is to analyse the relevant databases currently available on the web, commenting on the widely varying information they contain and on the structure of sites aimed at different users. A preliminary overview of the development of these sites during the last decade, at both the national and international level, is also provided.

  1. The Evaluation of Hospital Performance in Iran: A Systematic Review Article

    PubMed Central

    BAHADORI, Mohammadkarim; IZADI, Ahmad Reza; GHARDASHI, Fatemeh; RAVANGARD, Ramin; HOSSEINI, Seyed Mojtaba

    2016-01-01

    Background: This research aimed to systematically study and outline the methods of hospital performance evaluation used in Iran. Methods: In this systematic review, all Persian- and English-language articles published in Iranian and non-Iranian scientific journals indexed from Sep 2004 to Sep 2014 were studied. To find the related articles, the researchers searched the Iranian electronic databases, including SID, IranMedex, IranDoc, and Magiran, as well as the non-Iranian electronic databases, including Medline, Embase, Scopus, and Google Scholar. A data extraction form developed by the researchers was used to review the selected articles. Results: The review process led to the selection of 51 articles. The publication of articles on hospital performance evaluation in Iran has increased considerably in recent years. Among these 51 articles, 38 (74.51%) had been published in Persian and 13 (25.49%) in English. Eight models were recognized as evaluation models for Iranian hospitals. In 15 studies, the data envelopment analysis model had been used to evaluate hospital performance. Conclusion: Using a combination of models to integrate indicators in the hospital evaluation process is inevitable. Therefore, the Ministry of Health and Medical Education should use a set of indicators, such as the balanced scorecard, in the process of hospital evaluation and accreditation, and encourage hospital managers to use them. PMID:27516991

  2. The Evaluation of Hospital Performance in Iran: A Systematic Review Article.

    PubMed

    Bahadori, Mohammadkarim; Izadi, Ahmad Reza; Ghardashi, Fatemeh; Ravangard, Ramin; Hosseini, Seyed Mojtaba

    2016-07-01

    This research aimed to systematically study and outline the methods of hospital performance evaluation used in Iran. In this systematic review, all Persian- and English-language articles published in Iranian and non-Iranian scientific journals indexed from Sep 2004 to Sep 2014 were studied. To find the related articles, the researchers searched the Iranian electronic databases, including SID, IranMedex, IranDoc, and Magiran, as well as the non-Iranian electronic databases, including Medline, Embase, Scopus, and Google Scholar. A data extraction form developed by the researchers was used to review the selected articles. The review process led to the selection of 51 articles. The publication of articles on hospital performance evaluation in Iran has increased considerably in recent years. Among these 51 articles, 38 (74.51%) had been published in Persian and 13 (25.49%) in English. Eight models were recognized as evaluation models for Iranian hospitals. In 15 studies, the data envelopment analysis model had been used to evaluate hospital performance. Using a combination of models to integrate indicators in the hospital evaluation process is inevitable. Therefore, the Ministry of Health and Medical Education should use a set of indicators, such as the balanced scorecard, in the process of hospital evaluation and accreditation, and encourage hospital managers to use them.

  3. Scientific Data Collection/Analysis: 1994-2004

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This custom bibliography from the NASA Scientific and Technical Information Program lists a sampling of records found in the NASA Aeronautics and Space Database. The scope of this topic includes technologies for lightweight, temperature-tolerant, radiation-hard sensors. This area of focus is one of the enabling technologies as defined by NASA's Report of the President's Commission on Implementation of United States Space Exploration Policy, published in June 2004.

  4. How Do You Like Your Science, Wet or Dry? How Two Lab Experiences Influence Student Understanding of Science Concepts and Perceptions of Authentic Scientific Practice

    ERIC Educational Resources Information Center

    Munn, Maureen; Knuth, Randy; Van Horne, Katie; Shouse, Andrew W.; Levias, Sheldon

    2017-01-01

    This study examines how two kinds of authentic research experiences related to smoking behavior--genotyping human DNA (wet lab) and using a database to test hypotheses about factors that affect smoking behavior (dry lab)--influence students' perceptions and understanding of scientific research and related science concepts. The study used pre and…

  5. mHealth: A Strategic Field without a Solid Scientific Soul. A Systematic Review of Pain-Related Apps

    PubMed Central

    de la Vega, Rocío; Miró, Jordi

    2014-01-01

    Background Mobile health (mHealth) has undergone exponential growth in recent years. Patients and healthcare professionals are increasingly using health-related applications, at the same time as concerns about ethical issues, bias, conflicts of interest and privacy are emerging. The general aim of this paper is to provide an overview of the current state of development of mHealth. Methods and Findings To exemplify the issues, we made a systematic review of the pain-related apps available in scientific databases (Medline, Web of Science, Gale, Psycinfo, etc.) and the main application shops (App Store, Blackberry App World, Google Play, Nokia Store and Windows Phone Store). Only applications (designed for both patients and clinicians) focused on pain education, assessment and treatment were included. Of the 47 papers published on 34 apps in scientific databases, none were available in the app shops. A total of 283 pain-related apps were found in the five shops searched, but no articles have been published on these apps. The main limitation of this review is that we did not look at all stores in all countries. Conclusions There is a huge gap between the scientific and commercial faces of mHealth. Specific efforts are needed to facilitate knowledge translation and regulate commercial health-related apps. PMID:24999983

  6. Assembling a Cellular User Manual for the Brain.

    PubMed

    Sloan, Steven A; Barres, Ben A

    2018-03-28

    For many years, efforts to decipher the various cellular components that comprise the CNS were stymied by a lack of technical strategies for isolating and profiling the brain's resident cell types. The advent of transcriptional profiling, combined with powerful new purification schemes, changed this reality and transformed our understanding of the macroglial populations within the brain. Here, we chronicle the historical context and scientific setting for our efforts to transcriptionally profile neurons, astrocytes, and oligodendrocytes, and highlight some of the profound discoveries that were cultivated by these data. Following a lengthy battle with pancreatic cancer, Ben Barres passed away during the writing of this Progression piece. Among Ben's innumerable contributions to the greater scientific community, his addition of publicly available transcriptome databases of CNS cell types will forever remain a relic of his generous spirit and boundless scientific curiosity. Although he had impressively committed a majority of these enormous gene lists to memory, Ben could oftentimes be spotted at meetings buried in his cell phone on the Barres RNAseq database. Perhaps the only thing he enjoyed more than exploring these data himself, was knowing how useful these contributions had been (and will hopefully continue to be) to his scientific peers. Copyright © 2018 the authors.

  7. mHealth: a strategic field without a solid scientific soul. a systematic review of pain-related apps.

    PubMed

    de la Vega, Rocío; Miró, Jordi

    2014-01-01

    Mobile health (mHealth) has undergone exponential growth in recent years. Patients and healthcare professionals are increasingly using health-related applications, at the same time as concerns about ethical issues, bias, conflicts of interest and privacy are emerging. The general aim of this paper is to provide an overview of the current state of development of mHealth. To exemplify the issues, we made a systematic review of the pain-related apps available in scientific databases (Medline, Web of Science, Gale, Psycinfo, etc.) and the main application shops (App Store, Blackberry App World, Google Play, Nokia Store and Windows Phone Store). Only applications (designed for both patients and clinicians) focused on pain education, assessment and treatment were included. Of the 47 papers published on 34 apps in scientific databases, none were available in the app shops. A total of 283 pain-related apps were found in the five shops searched, but no articles have been published on these apps. The main limitation of this review is that we did not look at all stores in all countries. There is a huge gap between the scientific and commercial faces of mHealth. Specific efforts are needed to facilitate knowledge translation and regulate commercial health-related apps.

  8. Need for more research on and health interventions for transgender people.

    PubMed

    Ortiz-Martínez, Yeimer; Ríos-González, Carlos Miguel

    2017-04-01

    Background Recently, lesbian, gay, bisexual, and transgender (LGBT) scientific production has been growing, but transgender (TG) people are less represented in LGBT-related research, highlighting the lack of representative data on this neglected population. To assess the current status of scientific production on the TG population, a bibliometric study was performed using the articles on TG people deposited in five databases, including PubMed/Medline, Scopus, Science Citation Index (SCI), Scientific Electronic Library Online (SciELO) and Latin American and Caribbean Health Sciences Literature (LILACS). The PubMed/Medline search retrieved 2370 documents, which represented 0.008% of all articles recorded in Medline. The Scopus search identified 4974 articles. At SCI, 2863 articles were identified. A search of the SciELO database identified 39 articles, whereas the LILACS search identified 44 articles. Most papers were from the US (57.59%), followed by Canada (5.15%), the UK (4.42%), Australia (3.19%), The Netherlands (2.46%) and Peru (1.83%). These six countries accounted for 74.6% of all scientific output. The findings indicate that TG-related research output is low, especially in low-income developing countries, where stigma and discrimination are common. More awareness, knowledge, and sensitivity in healthcare communities are needed to eliminate barriers to health care and research for this population.

  9. AERONET's Development and Contributions through Two Decades of Aerosol Research

    NASA Astrophysics Data System (ADS)

    Holben, B. N.

    2016-12-01

    The name Brent Holben has been synonymous with AERONET since its inception nearly two and a half decades ago. Like most scientific endeavors, progress relies on collaboration, persistence and the occasional good idea at the right time. And so it is with AERONET. I will use this opportunity to trace the history of AERONET's development and the scientific achievements that we, as a community, have developed and profited from in our research and understanding of aerosols; describe measurements from this simple instrument, applied on a grand scale, that created new research opportunities; and, most importantly, acknowledge those who have been and continue to be key to AERONET's contributions to aerosol science. Born from a need to remove atmospheric effects from remotely sensed data in the 1980s, molded at a confluence of ideas and shaped as a public domain database, the program has grown from a prototype instrument in 1992, designed to routinely monitor biomass burning aerosol optical depth, to over 600 globally distributed sites providing near real-time aerosol properties for satellite validation, assimilation in models and access for numerous research projects. Although standardization and calibration are fundamental elements for scientific success, the scientific needs of the community drive new approaches for reprocessing archival data and making new measurements. I'll discuss these and glimpse into the future of AERONET.

  10. Cosmetic gynecology in the view of evidence-based medicine and ACOG recommendations: a review.

    PubMed

    Ostrzenski, Adam

    2011-09-01

    To conduct a methodological review of the existing scientific literature in the field of cosmetic gynecology in the view of evidence-based medicine, and to establish its relevance to the ACOG Committee Opinion No. 378. The appropriate medical subject heading terms were selected and applied in searches of multiple Internet databases covering 1900 through January 2010. Articles focusing on cosmetic gynecology were reviewed; anecdotal and advertising literature was also analyzed. A methodological review of the literature was conducted. In peer-reviewed journals, 72 relevant articles related to cosmetic gynecology were identified. Anecdotal information was identified in 3 sources, and over 1,100 pieces of marketing literature were identified on the Internet but none in scientific journals. Among the reviewed articles on cosmetic gynecology, only two met level II-2 in evidence-based medicine. The absence of documentation on the safety and effectiveness of cosmetic vaginal procedures in the scientific literature was ACOG's main concern. Practicing cosmetic gynecology within ACOG recommendations is desirable and possible. Currently, the standard of practice of cosmetic gynecology cannot be determined due to the absence of documentation on safety and effectiveness. Traditional gynecologic surgical procedures cannot be called cosmetic procedures, since that is a deceptive form of practice and marketing. Creating medical terminology trademarks and establishing a business model that tries to control clinical-scientific knowledge dissemination is unethical.

  11. A Web Server and Mobile App for Computing Hemolytic Potency of Peptides

    NASA Astrophysics Data System (ADS)

    Chaudhary, Kumardeep; Kumar, Ritesh; Singh, Sandeep; Tuknait, Abhishek; Gautam, Ankur; Mathur, Deepika; Anand, Priya; Varshney, Grish C.; Raghava, Gajendra P. S.

    2016-03-01

    Numerous therapeutic peptides do not enter the clinical trials just because of their high hemolytic activity. Recently, we developed a database, Hemolytik, for maintaining experimentally validated hemolytic and non-hemolytic peptides. The present study describes a web server and mobile app developed for predicting, and screening of peptides having hemolytic potency. Firstly, we generated a dataset HemoPI-1 that contains 552 hemolytic peptides extracted from Hemolytik database and 552 random non-hemolytic peptides (from Swiss-Prot). The sequence analysis of these peptides revealed that certain residues (e.g., L, K, F, W) and motifs (e.g., “FKK”, “LKL”, “KKLL”, “KWK”, “VLK”, “CYCR”, “CRR”, “RFC”, “RRR”, “LKKL”) are more abundant in hemolytic peptides. Therefore, we developed models for discriminating hemolytic and non-hemolytic peptides using various machine learning techniques and achieved more than 95% accuracy. We also developed models for discriminating peptides having high and low hemolytic potential on different datasets called HemoPI-2 and HemoPI-3. In order to serve the scientific community, we developed a web server, mobile app and JAVA-based standalone software (http://crdd.osdd.net/raghava/hemopi/).
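The motif statistics above suggest a simple baseline feature set. The following sketch (illustrative only, not the actual HemoPI implementation) counts, with overlaps allowed, the motifs the abstract reports as enriched in hemolytic peptides:

```python
# Motifs reported as enriched in hemolytic peptides (taken from the abstract);
# the counting scheme itself is an illustrative baseline, not the HemoPI model.
HEMOLYTIC_MOTIFS = ("FKK", "LKL", "KKLL", "KWK", "VLK",
                    "CYCR", "CRR", "RFC", "RRR", "LKKL")

def motif_features(peptide):
    """Count possibly overlapping occurrences of each motif in a sequence."""
    feats = {}
    for motif in HEMOLYTIC_MOTIFS:
        n, start = 0, 0
        while True:
            idx = peptide.find(motif, start)
            if idx == -1:
                break
            n += 1
            start = idx + 1  # advance by one so overlapping hits are counted
        feats[motif] = n
    return feats

print(motif_features("KLKLLKKLLK")["LKKL"])  # 1
```

Feature vectors like this, typically combined with residue-composition features, are the kind of input one would feed to the machine learning classifiers the study describes.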

  12. A national framework for monitoring and reporting on environmental sustainability in Canada.

    PubMed

    Marshall, I B; Scott Smith, C A; Selby, C J

    1996-01-01

    In 1991, a collaborative project to revise the terrestrial component of a national ecological framework was undertaken with a wide range of stakeholders. This spatial framework consists of multiple, nested levels of ecological generalization with linkages to existing federal and provincial scientific databases. The broadest level of generalization is the ecozone. Macroclimate, major vegetation types and subcontinental scale physiographic formations constitute the definitive components of these major ecosystems. Ecozones are subdivided into approximately 200 ecoregions which are based on properties like regional physiography, surficial geology, climate, vegetation, soil, water and fauna. The ecozone and ecoregion levels of the framework have been depicted on a national map coverage at 1:7 500 000 scale. Ecoregions have been subdivided into ecodistricts based primarily on landform, parent material, topography, soils, waterbodies and vegetation at a scale (1:2 000 000) useful for environmental resource management, monitoring and modelling activities. Nested within the ecodistricts are the polygons that make up the Soil Landscapes of Canada series of 1:1 000 000 scale soil maps. The framework is supported by an ARC-INFO GIS at Agriculture Canada. The data model allows linkage to associated databases on climate, land use and socio-economic attributes.

  13. Dietary isoflavones and gastric cancer: A brief review of current studies.

    PubMed

    Golpour, Sahar; Rafie, Nahid; Safavi, Seyyed Morteza; Miraghajani, Maryam

    2015-09-01

    Although several in vitro and animal studies have suggested that isoflavones might exert inhibitory effects on gastric carcinogenesis, epidemiologic studies have reported inconclusive results in this field. The aim of this brief review was to investigate whether such an association exists between dietary isoflavones and gastric cancer incidence, prevention, and mortality in epidemiologic studies. We conducted a search of PubMed, Google Scholar, Cochrane, ScienceDirect, and Iranian scientific databases, including the Scientific Information Database and IranMedex (up to November 2014), using common keywords for studies that focused on dietary isoflavones and gastric cancer risk. A total of nine epidemiologic studies consisting of five case-control studies, three prospective cohorts, and one ecologic study were included in this review. An inverse association between dietary isoflavones and gastric cancer was shown in only one case-control and one ecologic study. In summary, even though anticarcinogenic properties of isoflavones have been suggested, research has found no substantial correlation in this field. There are insufficient studies to draw any firm conclusions about the relationship between isoflavone intake and the risk of gastric cancer. Hence, further evidence from cohort and trial studies is needed.

  14. Improving imbalanced scientific text classification using sampling strategies and dictionaries.

    PubMed

    Borrajo, L; Romero, R; Iglesias, E L; Redondo Marey, C M

    2011-09-15

    Many real applications have the imbalanced class distribution problem, where one of the classes is represented by a very small number of cases compared to the other classes. Among the systems affected are those related to the retrieval and classification of scientific documentation. Sampling strategies such as Oversampling and Subsampling are popular in tackling the problem of class imbalance. In this work, we study their effects on three types of classifiers (kNN, SVM and Naive Bayes) when they are applied to searches on the PubMed scientific database. Another purpose of this paper is to study the use of dictionaries in the classification of biomedical texts. Experiments are conducted with three different dictionaries (BioCreative, NLPBA, and an ad-hoc subset of the UniProt database named Protein) using the mentioned classifiers and sampling strategies. The best results were obtained with the NLPBA and Protein dictionaries and the SVM classifier using the Subsampling balancing technique. These results were compared with those obtained by other authors using the TREC Genomics 2005 public corpus. Copyright 2011 The Author(s). Published by Journal of Integrative Bioinformatics.
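As a rough, self-contained illustration of the Subsampling strategy referred to above (not the authors' code), each class can be randomly down-sampled to the size of the smallest class before training:

```python
import random

def subsample(docs, labels, seed=0):
    """Balance a labeled corpus by randomly down-sampling every class to the
    size of the smallest class (an illustrative sketch of Subsampling)."""
    rng = random.Random(seed)
    by_class = {}
    for doc, label in zip(docs, labels):
        by_class.setdefault(label, []).append(doc)
    n_min = min(len(items) for items in by_class.values())
    balanced = []
    for label, items in by_class.items():
        for doc in rng.sample(items, n_min):
            balanced.append((doc, label))
    return balanced

docs = ["d%d" % i for i in range(10)]
labels = ["relevant"] * 2 + ["nonrelevant"] * 8  # imbalanced: 2 vs. 8
print(len(subsample(docs, labels)))  # 4 (2 per class)
```

Oversampling is the mirror-image strategy: instead of discarding majority-class cases, minority-class cases are duplicated (or synthesized) until the classes are of equal size.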

  15. Public health and epidemiology journals published in Brazil and other Portuguese speaking countries

    PubMed Central

    Barreto, Mauricio L; Barata, Rita Barradas

    2008-01-01

    It is well known that papers written in languages other than English run a great risk of being ignored simply because these languages are not accessible to the international scientific community. The objective of this paper is to facilitate access to the public health and epidemiology literature available in Portuguese-speaking countries. This literature was found to be concentrated in Brazil, with a few examples in Portugal and none in other Portuguese-speaking countries. It is predominantly written in Portuguese, but also in other languages such as English or Spanish. The paper describes the several journals, as well as the bibliographic databases that index these journals, and how to access them. Most journals provide open access with direct links in the indexing databases. The importance of this scientific production for the development of epidemiology as a scientific discipline and as a basic discipline for public health practice is discussed. Marginalizing these publications has implications for a more balanced knowledge and understanding of health problems and their determinants at a world-wide level. PMID:18826592

  16. NASA aerospace database subject scope: An overview

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Outlined here is the subject scope of the NASA Aerospace Database, a publicly available subset of the NASA Scientific and Technical (STI) Database. Topics of interest to NASA are outlined and placed within the framework of the following broad aerospace subject categories: aeronautics, astronautics, chemistry and materials, engineering, geosciences, life sciences, mathematical and computer sciences, physics, social sciences, space sciences, and general. A brief discussion of the subject scope is given for each broad area, followed by a similar explanation of each of the narrower subject fields that follow. The subject category code is listed for each entry.

  17. VO Access to BASECOL Database

    NASA Astrophysics Data System (ADS)

    Moreau, N.; Dubernet, M. L.

    2006-07-01

    Basecol is a combination of a website (using PHP and HTML) and a MySQL database concerning molecular ro-vibrational transitions induced by collisions with atoms or molecules. This database has been created in view of the scientific preparation of the Heterodyne Instrument for the Far-Infrared on board the Herschel Space Observatory (HSO). Basecol offers an access to numerical and bibliographic data through various output methods such as ASCII, HTML or VOTable (which is a first step towards a VO compliant system). A web service using Apache Axis has been developed in order to provide a direct access to data for external applications.

  18. [Current status of DNA databases in the forensic field: new progress, new legal needs].

    PubMed

    Baeta, Miriam; Martínez-Jarreta, Begoña

    2009-01-01

    One of the most polemic issues regarding the use of deoxyribonucleic acid (DNA) in the legal sphere, refers to the creation of DNA databases. Until relatively recently, Spain did not have a law to support the establishment of a national DNA profile bank for forensic purposes, and preserve the fundamental rights of subjects whose data are archived therein. The regulatory law of police databases regarding identifiers obtained from DNA approved in 2007, covers this void in the Spanish legislation and responds to the incessant need to adapt the laws to continuous scientific and technological progress.

  19. An intermediary's perspective of online databases for local governments

    NASA Technical Reports Server (NTRS)

    Jack, R. F.

    1984-01-01

    Numerous public administration studies have indicated that local government agencies for a variety of reasons lack access to comprehensive information resources; furthermore, such entities are often unwilling or unable to share information regarding their own problem-solving innovations. The NASA/University of Kentucky Technology Applications Program devotes a considerable effort to providing scientific and technical information and assistance to local agencies, relying on its access to over 500 distinct online databases offered by 20 hosts. The author presents a subjective assessment, based on his own experiences, of several databases which may prove useful in obtaining information for this particular end-user community.

  20. The ChArMEx database

    NASA Astrophysics Data System (ADS)

    Ferré, Helene; Belmahfoud, Nizar; Boichard, Jean-Luc; Brissebrat, Guillaume; Descloitres, Jacques; Fleury, Laurence; Focsa, Loredana; Henriot, Nicolas; Mastrorillo, Laurence; Mière, Arnaud; Vermeulen, Anne

    2014-05-01

    The Chemistry-Aerosol Mediterranean Experiment (ChArMEx, http://charmex.lsce.ipsl.fr/) aims at a scientific assessment of the present and future state of the atmospheric environment in the Mediterranean Basin, and of its impacts on the regional climate, air quality, and marine biogeochemistry. The project includes long term monitoring of environmental parameters, intensive field campaigns, use of satellite data and modelling studies. Therefore ChArMEx scientists produce and need to access a wide diversity of data. In this context, the objective of the database task is to organize data management, the distribution system and services, such as facilitating the exchange of information and stimulating collaboration between researchers within the ChArMEx community, and beyond. The database relies on a strong collaboration between the OMP and ICARE data centres and has been set up in the framework of the Mediterranean Integrated Studies at Regional and Local Scales (MISTRALS) program data portal. All the data produced by or of interest for the ChArMEx community will be documented in the data catalogue and accessible through the database website: http://mistrals.sedoo.fr/ChArMEx. At present, the ChArMEx database contains about 75 datasets, including 50 in situ datasets (2012 and 2013 campaigns, Ersa background monitoring station), 25 model outputs (dust model intercomparison, MEDCORDEX scenarios), and a high resolution emission inventory over the Mediterranean. Many in situ datasets have been inserted in a relational database, in order to enable more accurate data selection and download of different datasets in a shared format. The database website offers different tools:
    - A registration procedure which enables any scientist to accept the data policy and apply for a user database account.
    - A data catalogue that complies with international metadata standards (ISO 19115-19139; INSPIRE European Directive; Global Change Master Directory Thesaurus).
    - Metadata forms to document observations or products that will be provided to the database.
    - A search tool to browse the catalogue using thematic, geographic and/or temporal criteria.
    - A shopping-cart web interface to order in situ data files.
    - A web interface to select and access homogenized datasets.
    Interoperability between the two data centres is being set up using the OPeNDAP protocol. The data portal will soon offer user-friendly access to satellite products managed by the ICARE data centre (SEVIRI, TRMM, PARASOL...). In order to meet the operational needs of the airborne and ground-based observational teams during the ChArMEx 2012 and 2013 campaigns, a day-to-day chart and report display website has also been developed: http://choc.sedoo.org. It offers a convenient way to browse weather conditions and chemical composition during the campaign periods.

  1. SBDN: an information portal on small bodies and interplanetary dust inside the Europlanet Research Infrastructure

    NASA Astrophysics Data System (ADS)

    Turrini, Diego; de Sanctis, Maria Cristina; Carraro, Francesco; Fonte, Sergio; Giacomini, Livia; Politi, Romolo

    In the framework of the Sixth Framework Programme (FP6) for Research and Technological Development of the European Community, the Europlanet project started the Integrated and Distributed Information Service (IDIS) initiative. The goal of this initiative was to "...offer to the planetary science community a common and user-friendly access to the data and information produced by the various types of research activities: earth-based observations, space observations, modelling and theory, laboratory experiments...". Four scientific nodes, representative of a significant fraction of the scientific themes covered by planetary sciences, were created: the Interiors and Surfaces node, the Atmospheres node, the Plasma node and the Small Bodies and Dust node. The original Europlanet program evolved into the Europlanet Research Infrastructure project, funded by the Seventh Framework Programme (FP7) for Research and Technological Development, and the IDIS initiative has been renewed with the addition of a new scientific node, the Planetary Dynamics node. Here we present the Small Bodies and Dust node (SBDN) and the services it already provides to the scientific community, i.e. a searchable database of resources related to its thematic domains, an online and searchable catalogue of emission lines observed in the visible spectrum of comet 153P/2002 C1 Ikeya-Zhang supplemented by a visualization facility, a set of models of the simulated evolution of comet 67P/Churyumov-Gerasimenko with a particular focus on the effects of the distribution of dust, and an information system on meteors through the Virtual Meteor Observatory. We will also introduce the new services that will be implemented and made available in the course of the Europlanet Research Infrastructure project.

  2. NONATObase: a database for Polychaeta (Annelida) from the Southwestern Atlantic Ocean.

    PubMed

    Pagliosa, Paulo R; Doria, João G; Misturini, Dairana; Otegui, Mariana B P; Oortman, Mariana S; Weis, Wilson A; Faroni-Perez, Larisse; Alves, Alexandre P; Camargo, Maurício G; Amaral, A Cecília Z; Marques, Antonio C; Lana, Paulo C

    2014-01-01

    Networks can greatly advance data sharing attitudes by providing organized and useful data sets on marine biodiversity in a friendly and shared scientific environment. NONATObase, the interactive database on polychaetes presented herein, will provide new macroecological and taxonomic insights into the Southwestern Atlantic region. The database was developed by the NONATO network, a team of South American researchers, who integrated available information on polychaetes from between 5°N and 80°S in the Atlantic Ocean and near the Antarctic. The guiding principle of the database is to keep free and open access to data based on partnerships. Its architecture consists of a relational database integrated in the MySQL and PHP framework. Its web application allows access to the data from three different directions: species (qualitative data), abundance (quantitative data) and data set (reference data). The database has built-in functionality, such as the filter of data on user-defined taxonomic levels, characteristics of site, sample, sampler, and mesh size used. Considering that there are still many taxonomic issues related to poorly known regional fauna, a scientific committee was created to work out consistent solutions to current misidentifications and equivocal taxonomy status of some species. Expertise from this committee will be incorporated by NONATObase continually. The use of quantitative data was possible by standardization of a sample unit. All data, maps of distribution and references from a data set or a specified query can be visualized and exported to a commonly used data format in statistical analysis or reference manager software. The NONATO network has been initiated with NONATObase, a valuable resource for marine ecologists and taxonomists. The database is expected to grow in functionality as it proves useful, particularly regarding the challenges of dealing with molecular genetic data and tools to assess the effects of global environment change. 
Database URL: http://nonatobase.ufsc.br/.

  3. NONATObase: a database for Polychaeta (Annelida) from the Southwestern Atlantic Ocean

    PubMed Central

    Pagliosa, Paulo R.; Doria, João G.; Misturini, Dairana; Otegui, Mariana B. P.; Oortman, Mariana S.; Weis, Wilson A.; Faroni-Perez, Larisse; Alves, Alexandre P.; Camargo, Maurício G.; Amaral, A. Cecília Z.; Marques, Antonio C.; Lana, Paulo C.

    2014-01-01

    Networks can greatly advance data sharing by providing organized and useful data sets on marine biodiversity in a friendly, shared scientific environment. NONATObase, the interactive database on polychaetes presented herein, will provide new macroecological and taxonomic insights into the Southwestern Atlantic region. The database was developed by the NONATO network, a team of South American researchers who integrated available information on polychaetes from between 5°N and 80°S in the Atlantic Ocean and near the Antarctic. The guiding principle of the database is to keep data freely and openly accessible through partnerships. Its architecture consists of a relational database implemented with MySQL and PHP. Its web application allows access to the data from three directions: species (qualitative data), abundance (quantitative data) and data set (reference data). Built-in functionality includes filtering data by user-defined taxonomic level and by characteristics of the site, sample, sampler, and mesh size used. Because many taxonomic issues related to the poorly known regional fauna remain, a scientific committee was created to work out consistent solutions to current misidentifications and the equivocal taxonomic status of some species. Expertise from this committee will be incorporated into NONATObase continually. The use of quantitative data was made possible by standardization of a sample unit. All data, distribution maps and references from a data set or a specified query can be visualized and exported to formats commonly used by statistical analysis or reference manager software. The NONATO network has launched, with NONATObase, a valuable resource for marine ecologists and taxonomists. The database is expected to grow in functionality as it proves useful, particularly regarding the challenges of dealing with molecular genetic data and tools to assess the effects of global environmental change.
Database URL: http://nonatobase.ufsc.br/ PMID:24573879
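As a rough illustration of the relational layout and the filter-style queries the NONATObase abstract describes (species, abundance and sample metadata, filtered by taxonomic level and mesh size), here is a miniature sketch in Python. SQLite stands in for the MySQL/PHP stack, and every table name, column and row is invented for the example; this is not NONATObase's actual schema.

```python
import sqlite3

# Hypothetical miniature of a polychaete biodiversity database:
# species (qualitative), samples (site metadata), abundance (quantitative).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE species  (id INTEGER PRIMARY KEY, family TEXT, name TEXT);
CREATE TABLE samples  (id INTEGER PRIMARY KEY, site TEXT, mesh_mm REAL);
CREATE TABLE abundance(species_id INTEGER REFERENCES species(id),
                       sample_id  INTEGER REFERENCES samples(id),
                       count      INTEGER);
""")
conn.executemany("INSERT INTO species VALUES (?,?,?)",
                 [(1, "Nereididae", "Alitta succinea"),
                  (2, "Spionidae",  "Polydora cornuta")])
conn.executemany("INSERT INTO samples VALUES (?,?,?)",
                 [(1, "Estuary A", 0.5), (2, "Shelf B", 1.0)])
conn.executemany("INSERT INTO abundance VALUES (?,?,?)",
                 [(1, 1, 12), (2, 1, 3), (1, 2, 7)])

# Filter quantitative data on a user-defined taxonomic level (family)
# and a sampling characteristic (mesh size), as the abstract describes.
rows = conn.execute("""
    SELECT sp.name, SUM(a.count)
    FROM abundance a
    JOIN species sp ON sp.id = a.species_id
    JOIN samples sm ON sm.id = a.sample_id
    WHERE sp.family = 'Nereididae' AND sm.mesh_mm <= 0.5
    GROUP BY sp.name
""").fetchall()
```

The same join-and-filter pattern underlies any of the three "directions" of access: only the selected columns and WHERE clauses change.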

  4. How to run a successful Journal

    PubMed Central

    Jawaid, Shaukat Ali; Jawaid, Masood

    2017-01-01

    Publishing and successfully running a good-quality, peer-reviewed biomedical scientific journal is not an easy task. Some of the prerequisites include a competent, experienced editor supported by a team. Long-term sustainability of a journal depends on good-quality manuscripts, an active editorial board, good-quality reviewers, a workable business model to ensure financial support, increased visibility (which will ensure increased submissions), indexation in various important databases, online availability, and an easy-to-use website. This manuscript outlines the logistic and technical issues which need to be resolved before starting a new journal and for ensuring the sustainability of a good-quality, peer-reviewed journal. PMID:29492089

  5. Medium- and long-term electric power demand forecasting based on the big data of smart city

    NASA Astrophysics Data System (ADS)

    Wei, Zhanmeng; Li, Xiyuan; Li, Xizhong; Hu, Qinghe; Zhang, Haiyang; Cui, Pengjie

    2017-08-01

    Based on the smart city, this paper proposes a new electric power demand forecasting model that integrates external data, such as meteorological, geographic, population, enterprise and economic information, into a big database and uses an improved algorithm to analyse electric power demand and provide decision support for decision makers. Data mining technology is used to synthesize these kinds of information, and information on electric power customers is analysed optimally. A scientific forecast is made based on the trend of electricity demand, and a smart city in north-eastern China is taken as a sample.
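As a minimal stand-in for the kind of medium-term trend forecast described above, the sketch below fits a least-squares line to annual consumption and extrapolates it. The demand figures are invented, and the real model also folds in the meteorological, geographic, population, enterprise and economic data the abstract mentions; this only illustrates the trend component.

```python
# Ordinary least squares in closed form: demand ~ slope * year + intercept.
def fit_trend(years, demand):
    """Return (slope, intercept) of the least-squares line through the data."""
    n = len(years)
    mx = sum(years) / n
    my = sum(demand) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(years, demand))
             / sum((x - mx) ** 2 for x in years))
    return slope, my - slope * mx

def forecast(years, demand, target_year):
    """Extrapolate the fitted trend to a future year."""
    slope, intercept = fit_trend(years, demand)
    return slope * target_year + intercept
```

A production system would replace this with a richer multivariate model, but the fit-then-extrapolate structure is the same.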

  6. High rate information systems - Architectural trends in support of the interdisciplinary investigator

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Preheim, Larry E.

    1990-01-01

    Data system requirements in the Earth Observing System (EOS) and Space Station Freedom (SSF) eras indicate increasing data volume, increased discipline interplay, higher complexity, and broader data integration and interpretation. A response to the needs of the interdisciplinary investigator is proposed, considering the increasing complexity and rising costs of scientific investigation. The EOS Data Information System, conceived as a widely distributed system with reliable communication links between central processing and the science user community, is described. Details are provided on information architecture, system models, intelligent data management of large complex databases, and standards for archiving ancillary data, using a research library, a laboratory and collaboration services.

  7. A geo-spatial data management system for potentially active volcanoes—GEOWARN project

    NASA Astrophysics Data System (ADS)

    Gogu, Radu C.; Dietrich, Volker J.; Jenny, Bernhard; Schwandner, Florian M.; Hurni, Lorenz

    2006-02-01

    Integrated studies of active volcanic systems for the purpose of long-term monitoring and forecast and short-term eruption prediction require large numbers of data-sets from various disciplines. A modern database concept has been developed for managing and analyzing multi-disciplinary volcanological data-sets. The GEOWARN project (choosing the "Kos-Yali-Nisyros-Tilos volcanic field, Greece" and the "Campi Flegrei, Italy" as test sites) is oriented toward potentially active volcanoes situated in regions of high geodynamic unrest. This article describes the volcanological database of the spatial and temporal data acquired within the GEOWARN project. As a first step, a spatial database embedded in a Geographic Information System (GIS) environment was created. Digital data of different spatial resolution, and time-series data collected at different intervals or periods, were unified in a common, four-dimensional representation of space and time. The database scheme comprises various information layers containing geographic data (e.g. seafloor and land digital elevation model, satellite imagery, anthropogenic structures, land-use), geophysical data (e.g. from active and passive seismicity, gravity, tomography, SAR interferometry, thermal imagery, differential GPS), geological data (e.g. lithology, structural geology, oceanography), and geochemical data (e.g. from hydrothermal fluid chemistry and diffuse degassing features). As a second step based on the presented database, spatial data analysis has been performed using custom-programmed interfaces that execute query scripts resulting in a graphical visualization of data. These query tools were designed and compiled following scenarios of known "behavior" patterns of dormant volcanoes and first candidate signs of potential unrest. The spatial database and query approach is intended to facilitate scientific research on volcanic processes and phenomena, and volcanic surveillance.
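The core operation the GEOWARN query tools perform, selecting multi-disciplinary measurements inside a spatial window and a time interval, can be sketched as a toy four-dimensional (x, y, z, t) filter. The records, coordinates and field layout below are invented for illustration; the actual system runs query scripts against a GIS-embedded spatial database.

```python
from datetime import datetime

# Invented measurement records: (kind, lon, lat, depth_km, time, value).
records = [
    ("seismic", 27.16, 36.58, -2.0, datetime(2001, 5, 1), 3.2),
    ("thermal", 27.17, 36.60,  0.1, datetime(2001, 6, 3), 96.0),
    ("gps",     27.10, 36.55,  0.0, datetime(2000, 1, 9), 0.004),
]

def query_4d(recs, lon, lat, radius_deg, t0, t1):
    """Return the kinds of records inside a lon/lat box and time window."""
    return [r[0] for r in recs
            if abs(r[1] - lon) <= radius_deg
            and abs(r[2] - lat) <= radius_deg
            and t0 <= r[4] <= t1]
```

Scenario-driven surveillance queries like those described in the abstract are essentially chains of such filters, each tuned to a known pattern of volcanic unrest.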

  8. Physical Science Informatics: Providing Open Science Access to Microheater Array Boiling Experiment Data

    NASA Technical Reports Server (NTRS)

    McQuillen, John; Green, Robert D.; Henrie, Ben; Miller, Teresa; Chiaramonte, Fran

    2014-01-01

    The Physical Science Informatics (PSI) system is the next step in an effort to make NASA-sponsored flight data available to the scientific and engineering community, along with the general public. The experimental data, from six overall disciplines including Combustion Science, Fluid Physics, Complex Fluids, Fundamental Physics, and Materials Science, will present some unique challenges. Besides data in textual or numerical format, large portions of both the raw and analyzed data for many of these experiments are digital images and video, which impose large data storage requirements. In addition, the accessible data will include experiment design and engineering data (including applicable drawings), any analytical or numerical models, publications, reports, and patents, and any commercial products developed as a result of the research. The objectives of this paper are to present the preliminary layout (Figure 2) of MABE data within the PSI database, obtain feedback on the layout, and present the procedure for obtaining access to this database.

  9. Word aligned bitmap compression method, data structure, and apparatus

    DOEpatents

    Wu, Kesheng; Shoshani, Arie; Otoo, Ekow

    2004-12-14

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient means of searching and performing logical, counting, and pattern location operations upon large datasets. The technique comprises a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which takes advantage of the target computing system's native word length. WAH is particularly well suited to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH-compressed bitmap index. Some commercial database products already include a version of a bitmap index, which could be replaced by WAH bitmap compression techniques for potentially increased operation speed, as well as greater efficiency in constructing compressed bitmaps. Together, these properties may make the technique particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
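The word-aligned idea can be sketched in a few lines. This is a simplified illustration, not the patented implementation: it assumes 32-bit words, where a literal word carries 31 bitmap bits and a fill word (most significant bit set) encodes a run of identical all-zero or all-one 31-bit groups.

```python
def wah_compress(bits):
    """Compress a list of 0/1 bits into simplified WAH-style 32-bit words."""
    # Split the bitmap into 31-bit groups, zero-padding the last group.
    groups = []
    for i in range(0, len(bits), 31):
        chunk = bits[i:i + 31]
        chunk = chunk + [0] * (31 - len(chunk))
        groups.append(int("".join(map(str, chunk)), 2))

    words = []
    i = 0
    while i < len(groups):
        g = groups[i]
        if g in (0, (1 << 31) - 1):            # all-zero or all-one group
            fill_bit = 1 if g else 0
            run = 1
            while i + run < len(groups) and groups[i + run] == g:
                run += 1
            # Fill word: MSB set, next bit = fill value, low 30 bits = run length.
            words.append((1 << 31) | (fill_bit << 30) | run)
            i += run
        else:                                   # mixed group -> literal word
            words.append(g)
            i += 1
    return words
```

Because fill words collapse long runs into a single word, logical operations (AND, OR, counting) can skip entire runs at once, which is where the speedup on sparse bitmap indexes comes from.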

  10. R.E.DD.B.: A database for RESP and ESP atomic charges, and force field libraries

    PubMed Central

    Dupradeau, François-Yves; Cézard, Christine; Lelong, Rodolphe; Stanislawiak, Élodie; Pêcher, Julien; Delepine, Jean Charles; Cieplak, Piotr

    2008-01-01

    The web-based RESP ESP charge DataBase (R.E.DD.B., http://q4md-forcefieldtools.org/REDDB) is a new, free source of RESP and ESP atomic charge values and force field libraries for model systems and/or small molecules. R.E.DD.B. stores highly effective and reproducible charge values and molecular structures in the Tripos mol2 file format, information about the charge derivation procedure, and scripts to integrate the charges and molecular topology into the most common molecular dynamics packages. Moreover, R.E.DD.B. allows users to freely store and distribute RESP or ESP charges and force field libraries to the scientific community via a web interface. The first version of R.E.DD.B., released in January 2006, contains force field libraries for molecules as well as molecular fragments of standard residues and their analogs (amino acids, monosaccharides, nucleotides and ligands), hence covering a vast area of relevant biological applications. PMID:17962302
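Since R.E.DD.B. distributes charges inside Tripos mol2 files, a small parser illustrates how they are recovered. This sketch assumes the standard ATOM record layout, in which the partial charge is the ninth whitespace-separated field; the molecule text below is invented for the example.

```python
# A two-atom mol2 fragment, invented for illustration.
MOL2 = """\
@<TRIPOS>MOLECULE
example
2 1
@<TRIPOS>ATOM
1 C1 0.000 0.000 0.000 C.3 1 LIG -0.1234
2 H1 1.090 0.000 0.000 H   1 LIG  0.1234
@<TRIPOS>BOND
1 1 2 1
"""

def read_charges(mol2_text):
    """Return {atom_name: charge} from the @<TRIPOS>ATOM section."""
    charges, in_atoms = {}, False
    for line in mol2_text.splitlines():
        if line.startswith("@<TRIPOS>"):
            # Track whether we are inside the ATOM record block.
            in_atoms = line.strip() == "@<TRIPOS>ATOM"
            continue
        if in_atoms and line.strip():
            fields = line.split()
            charges[fields[1]] = float(fields[8])
    return charges
```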

  11. Aurorasaurus Database of Real-Time, Soft-Sensor Sourced Aurora Data for Space Weather Research

    NASA Astrophysics Data System (ADS)

    Kosar, B.; MacDonald, E.; Heavner, M.

    2017-12-01

    Aurorasaurus is an innovative citizen science project with two fundamental objectives: collecting real-time, ground-based reports of auroral visibility from citizen scientists (soft sensors) and incorporating this new type of data into scientific investigations of the aurora. The project has been live since the fall of 2014, and as of summer 2017 the database comprised approximately 12,000 observations (5,295 direct reports and 6,413 verified tweets). In this presentation, we will focus on demonstrating the utility of these robust, science-quality data for space weather research. These data scale with the size of the event and are well suited to capturing the largest, rarest events. Emerging state-of-the-art computational methods based on statistical inference, such as machine learning frameworks and data-model integration methods, can offer new insights that could lead to better real-time assessment and space weather prediction when citizen science data are combined with traditional sources.

  12. Scopus: A system for the evaluation of scientific journals

    NASA Astrophysics Data System (ADS)

    Guz, A. N.; Rushchitsky, J. J.

    2009-04-01

    The paper discusses the evaluation of scientific journals based on the Scopus database, its information tools, and its criteria. The SJR (SCImago Journal Rank), the main criterion used by Scopus to evaluate scientific journals, is considered. The Scopus and ISI systems are compared using information on the journal Prikladnaya Mekhanika (International Applied Mechanics), a number of world-known journals on mechanics, and some journals on natural sciences issued by the National Academy of Sciences of Ukraine. Some comments and proposals are formulated. This paper may be considered a follow-up to papers published in Prikladnaya Mekhanika (International Applied Mechanics) in 2005-2009.

  13. Use of traditional herbal medicine as an alternative in dental treatment in Mexican dentistry: a review.

    PubMed

    Cruz Martínez, Cindy; Diaz Gómez, Martha; Oh, Myung Sook

    2017-12-01

    Herbal therapies are used worldwide to treat health conditions. In Mexico, generations have used them to treat gingivitis, periodontitis, mouth infections, and discoloured teeth. However, few studies have collected scientific evidence on their effects. This study aimed to search for and compile scientific evidence on alternative oral and dental treatments using medicinal herbs from Mexico. We collected various Mexican medicinal plants used in dental treatment from the database of the Institute of Biology at the National Autonomous University of Mexico. To correlate them with existing scientific evidence, we searched the PubMed database with the key term '(scientific name) and (oral or dental)'. Mexico has various medicinal herbs with antibacterial and antimicrobial properties, according to ancestral medicinal books and healers. Despite a paucity of experimental research demonstrating the antibacterial, antimicrobial, and antiplaque effects of these Mexican plants, they could still be useful as an alternative treatment for several periodontal diseases or as anticariogenic agents. However, the number of studies supporting their uses and effects remains insufficient. It is important for the health of consumers to scientifically demonstrate the real effects of natural medicine, as well as to clarify and establish its possible therapeutic applications. Through this bibliographical review, we found papers that support or refute the ancestral uses of these plants, and we conclude that the use of plants to treat oral conditions or to augment the dental pharmacological arsenal should be based on experimental studies verifying their suitability for dental treatments.

  14. [The long pilgrimage of Spanish biomedical journals toward excellence. Who helps? Quality, impact and research merit].

    PubMed

    Alfonso, Fernando

    2010-03-01

    Biomedical journals must adhere to strict standards of editorial quality. In a globalized academic scenario, biomedical journals must compete firstly to publish the most relevant original research and secondly to obtain the broadest possible visibility and the widest dissemination of their scientific contents. The cornerstone of the scientific process is still the peer-review system but additional quality criteria should be met. Recently access to medical information has been revolutionized by electronic editions. Bibliometric databases such as MEDLINE, the ISI Web of Science and Scopus offer comprehensive online information on medical literature. Classically, the prestige of biomedical journals has been measured by their impact factor but, recently, other indicators such as SCImago SJR or the Eigenfactor are emerging as alternative indices of a journal's quality. Assessing the scholarly impact of research and the merits of individual scientists remains a major challenge. Allocation of authorship credit also remains controversial. Furthermore, in our Kafkaesque world, we prefer to count rather than read the articles we judge. Quantitative publication metrics (research output) and citations analyses (scientific influence) are key determinants of the scientific success of individual investigators. However, academia is embracing new objective indicators (such as the "h" index) to evaluate scholarly merit. The present review discusses some editorial issues affecting biomedical journals, currently available bibliometric databases, bibliometric indices of journal quality and, finally, indicators of research performance and scientific success. Copyright 2010 SEEN. Published by Elsevier Espana. All rights reserved.
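The "h" index mentioned above is simple to state and compute: it is the largest h such that the author has at least h papers with at least h citations each. The citation counts in the test are invented for the example.

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    # Sort descending; h is the last rank at which citations >= rank.
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h
```

For instance, an author with papers cited 10, 8, 5, 4 and 3 times has an h-index of 4: four papers have at least four citations each, but not five papers with at least five.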

  15. The AMMA information system

    NASA Astrophysics Data System (ADS)

    Fleury, Laurence; Brissebrat, Guillaume; Boichard, Jean-Luc; Cloché, Sophie; Mière, Arnaud; Moulaye, Oumarou; Ramage, Karim; Favot, Florence; Boulanger, Damien

    2015-04-01

    In the framework of the African Monsoon Multidisciplinary Analyses (AMMA) programme, several tools have been developed in order to boost data and information exchange between researchers from different disciplines. The AMMA information system includes (i) a user-friendly data management and dissemination system, (ii) quasi-real-time display websites and (iii) a collaborative tool for exchanging scientific papers. The AMMA information system is enriched by past and ongoing projects (IMPETUS, FENNEC, ESCAPE, QweCI, ACASIS, DACCIWA...) addressing meteorology, atmospheric chemistry, extreme events, health, adaptation of human societies... It is becoming a reference information system on environmental issues in West Africa. (i) The projects include airborne, ground-based and ocean measurements, social science surveys, satellite data use, modelling studies and value-added product development. Therefore, the AMMA data portal provides access to a large amount and a wide variety of data:
- 250 local observation datasets, covering many geophysical components (atmosphere, ocean, soil, vegetation) and human activities (agronomy, health), collected by operational networks since 1850, long-term monitoring research networks (CATCH, IDAF, PIRATA...) and intensive scientific campaigns;
- 1350 outputs of a socio-economic questionnaire;
- 60 operational satellite products and several research products;
- 10 output sets of operational meteorological and ocean models and 15 of research simulations.
Data documentation complies with international metadata standards, and data are delivered in standard formats. The data request interface takes full advantage of the database's relational structure and enables users to build multicriteria requests (period, area, property, property value…). The AMMA data portal has about 900 registered users and receives about 50 data requests every month.
The AMMA databases and data portal have been developed and are operated jointly by SEDOO and ESPRI in France: http://database.amma-international.org. The complete system is fully duplicated and operated by CRA in Niger: http://amma.agrhymet.ne/amma-data. (ii) Day-to-day chart display software has been designed and operated to monitor meteorological and environmental information and to meet the observational teams' needs during the AMMA 2006 SOP (http://aoc.amma-international.org) and the FENNEC 2011 campaign (http://fenoc.sedoo.fr). At present the websites constitute a synthetic view of the campaigns and a preliminary investigation tool for researchers. Since 2011, the same application has enabled a group of French and Senegalese researchers and forecasters to exchange, in near real time, physical indices and diagnoses calculated from operational numerical weather forecasts, satellite products and in situ operational observations throughout the monsoon season, in order to better assess, understand and anticipate the intraseasonal variability of the monsoon (http://misva.sedoo.fr). A similar website is dedicated to the diagnosis and forecasting of heat waves in West Africa (http://acasis.sedoo.fr); it aims to become an operational component of national early warning systems. (iii) A collaborative WIKINDX tool has been set up online to gather scientific publications, theses and communications of interest: http://biblio.amma-international.org. At present the bibliographic database contains about 1200 references. It is the most exhaustive document collection about the West African monsoon freely available to all. Every scientist is invited to make use of the AMMA online tools and data. Scientists or project leaders who have management needs for existing or future datasets concerning West Africa are welcome to use the AMMA database framework and to contact ammaAdmin@sedoo.fr.
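The multicriteria requests the AMMA data portal supports (period, area, property) can be sketched as a filter over a dataset catalogue. The catalogue entries and field names below are invented; the real portal queries a relational database behind a web interface.

```python
# Invented catalogue entries for illustration only.
CATALOGUE = [
    {"name": "rain_gauge_niamey", "property": "precipitation",
     "lat": 13.5, "lon": 2.1, "start": 1990, "end": 2010},
    {"name": "pirata_sst_buoy", "property": "sea_surface_temperature",
     "lat": 0.0, "lon": -10.0, "start": 1997, "end": 2015},
]

def query(catalogue, prop=None, bbox=None, period=None):
    """Filter by property, (lat_min, lat_max, lon_min, lon_max) box,
    and (start, end) period; each criterion is optional."""
    out = []
    for d in catalogue:
        if prop and d["property"] != prop:
            continue
        if bbox and not (bbox[0] <= d["lat"] <= bbox[1]
                         and bbox[2] <= d["lon"] <= bbox[3]):
            continue
        # Keep datasets whose coverage overlaps the requested period.
        if period and not (d["start"] <= period[1] and d["end"] >= period[0]):
            continue
        out.append(d["name"])
    return out
```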

  16. Atmospheric Ionizing Radiation (AIR) ER-2 Preflight Analysis

    NASA Technical Reports Server (NTRS)

    Tai, Hsiang; Wilson, John W.; Maiden, D. L.

    1998-01-01

    Atmospheric ionizing radiation (AIR) produces chemically active radicals in biological tissues that alter the cell function or result in cell death. The AIR ER-2 flight measurements will enable scientists to study the radiation risk associated with the high-altitude operation of a commercial supersonic transport. The ER-2 radiation measurement flights will follow predetermined, carefully chosen courses to provide an appropriate database matrix which will enable the evaluation of predictive modeling techniques. Explicit scientific results such as dose rate, dose equivalent rate, magnetic cutoff, neutron flux, and air ionization rate associated with those flights are predicted by using the AIR model. Through these flight experiments, we will further increase our knowledge and understanding of the AIR environment and our ability to assess the risk from the associated hazard.

  17. A personality trait model for the Diagnostic and Statistical Manual of Mental Disorders (DSM): the challenges ahead.

    PubMed

    Krueger, Robert F; Eaton, Nicholas R

    2010-04-01

    We were sincerely flattered to discover that John Gunderson, Michael First, Paul Costa, Robert McCrae, Michael Hallquist, and Paul Pilkonis provided commentaries on our target article. In this brief response, we cannot hope to discuss the myriad points raised by this august group. Such a task would be particularly daunting given the diversity of the commentaries. Indeed, the diversity of the commentaries provides a kind of "metacommentary" on the state of personality and psychopathology research. That is, the intellectual diversity contained in the commentaries underlines the substantial challenges that lie ahead of us, in terms of articulating a model of personality and psychopathology with both scientific validity and clinical applicability. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  18. [Establishment of a regional pelvic trauma database in Hunan Province].

    PubMed

    Cheng, Liang; Zhu, Yong; Long, Haitao; Yang, Junxiao; Sun, Buhua; Li, Kanghua

    2017-04-28

    To establish a database for pelvic trauma in Hunan Province and to start the work of a multicenter pelvic trauma registry.
 Methods: To establish the database, the literature relevant to pelvic trauma was screened, experience from established trauma databases in China and abroad was drawn upon, and the actual conditions of pelvic trauma rescue in Hunan Province were considered. The pelvic trauma database was built on PostgreSQL and the Java 1.6 programming language.
 Results: The complex procedure of pelvic trauma rescue was described structurally. The database contents include general patient information, injury condition, prehospital rescue, condition on admission, in-hospital treatment, status on discharge, diagnosis, classification, complications, trauma scoring and therapeutic effect. The database can be accessed through the internet via a browser/server architecture. Its functions include patient information management, data export, history query, progress reporting, video and image management, and personal information management.
 Conclusion: A pelvic trauma database covering the whole cycle of care has been successfully established for the first time in China. It is scientific, functional, practical, and user-friendly.

  19. Data Interpretation in the Digital Age

    PubMed Central

    Leonelli, Sabina

    2014-01-01

    The consultation of internet databases and the related use of computer software to retrieve, visualise and model data have become key components of many areas of scientific research. This paper focuses on the relation of these developments to understanding the biology of organisms, and examines the conditions under which the evidential value of data posted online is assessed and interpreted by the researchers who access them, in ways that underpin and guide the use of those data to foster discovery. I consider the types of knowledge required to interpret data as evidence for claims about organisms, and in particular the relevance of knowledge acquired through physical interaction with actual organisms to assessing the evidential value of data found online. I conclude that familiarity with research in vivo is crucial to assessing the quality and significance of data visualised in silico; and that studying how biological data are disseminated, visualised, assessed and interpreted in the digital age provides a strong rationale for viewing scientific understanding as a social and distributed, rather than individual and localised, achievement. PMID:25729262

  20. Technical note: The Linked Paleo Data framework - a common tongue for paleoclimatology

    NASA Astrophysics Data System (ADS)

    McKay, Nicholas P.; Emile-Geay, Julien

    2016-04-01

    Paleoclimatology is a highly collaborative scientific endeavor, increasingly reliant on online databases for data sharing. Yet there is currently no universal way to describe, store and share paleoclimate data: in other words, no standard. Data standards are often regarded by scientists as mere technicalities, though they underlie much scientific and technological innovation, as well as facilitating collaborations between research groups. In this article, we propose a preliminary data standard for paleoclimate data, general enough to accommodate all the archive and measurement types encountered in a large international collaboration (PAGES 2k). We also introduce a vehicle for such structured data (Linked Paleo Data, or LiPD), leveraging recent advances in knowledge representation (Linked Open Data). The LiPD framework enables quick querying and extraction, and we expect that it will facilitate the writing of open-source community codes to access, analyze, model and visualize paleoclimate observations. We welcome community feedback on this standard, and encourage paleoclimatologists to experiment with the format for their own purposes.
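To make the "structured, queryable container" idea concrete, here is a toy record in the spirit of LiPD, with self-describing variables and units, and a helper that extracts one column. The record is simplified and invented; real LiPD files bundle richer metadata together with CSV data tables, so this is only a sketch of the access pattern.

```python
import json

# A simplified, invented LiPD-style record: named, unit-annotated columns.
record = json.loads("""
{
  "dataSetName": "ExampleLake.Smith.2015",
  "archiveType": "lake sediment",
  "paleoData": [{
    "measurementTable": {
      "columns": [
        {"variableName": "year", "units": "AD",
         "values": [1850, 1900, 1950]},
        {"variableName": "temperature", "units": "degC",
         "values": [12.1, 12.4, 12.9]}
      ]
    }
  }]
}
""")

def get_variable(rec, name):
    """Return (units, values) for a named column in the first table."""
    for col in rec["paleoData"][0]["measurementTable"]["columns"]:
        if col["variableName"] == name:
            return col["units"], col["values"]
    raise KeyError(name)
```

Because every variable carries its own name and units, community codes can locate and plot a quantity without dataset-specific parsing, which is exactly the interoperability the standard targets.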

Top