Sample records for repository integration program

  1. Collaborative Learning Utilizing a Domain-Based Shared Data Repository to Enhance Learning Outcomes

    ERIC Educational Resources Information Center

    Lubliner, David; Widmeyer, George; Deek, Fadi P.

    2009-01-01

    The objective of this study was to determine whether there was a quantifiable improvement in learning outcomes by integrating course materials in a 4-year baccalaureate program, utilizing a knowledge repository with a conceptual map that spans a discipline. Two new models were developed to provide the framework for this knowledge repository. A…

  2. The Center for HIV/AIDS Vaccine Immunology (CHAVI) Multi-site Quality Assurance Program for Cryopreserved Human Peripheral Blood Mononuclear Cells

    PubMed Central

    Sarzotti-Kelsoe, Marcella; Needham, Leila K.; Rountree, Wes; Bainbridge, John; Gray, Clive M.; Fiscus, Susan A.; Ferrari, Guido; Stevens, Wendy S.; Stager, Susan L.; Binz, Whitney; Louzao, Raul; Long, Kristy O.; Mokgotho, Pauline; Moodley, Niranjini; Mackay, Melanie; Kerkau, Melissa; McMillion, Takesha; Kirchherr, Jennifer; Soderberg, Kelly A.; Haynes, Barton F.; Denny, Thomas N.

    2014-01-01

    The Center for HIV/AIDS Vaccine Immunology (CHAVI) consortium was established to determine the host and virus factors associated with HIV transmission, infection and containment of virus replication, with the goal of advancing the development of an HIV protective vaccine. Studies to meet this goal required the use of cryopreserved Peripheral Blood Mononuclear Cell (PBMC) specimens, and therefore it was imperative that a quality assurance (QA) oversight program be developed to monitor PBMC samples obtained from study participants at multiple international sites. Nine site-affiliated laboratories in Africa and the USA collected and processed PBMCs, and cryopreserved PBMC were shipped to CHAVI repositories in Africa and the USA for long-term storage. A three-stage program was designed, based on Good Clinical Laboratory Practices (GCLP), to monitor PBMC integrity at each step of this process. The first stage evaluated the integrity of fresh PBMCs for initial viability, overall yield, and processing time at the site-affiliated laboratories (Stage 1); for the second stage, the repositories determined post-thaw viability and cell recovery of cryopreserved PBMC, received from the site-affiliated laboratories (Stage 2); the third stage assessed the long-term specimen storage at each repository (Stage 3). Overall, the CHAVI PBMC QA oversight program results highlight the relative importance of each of these stages to the ultimate goal of preserving specimen integrity from peripheral blood collection to long-term repository storage. PMID:24910414

  3. High Integrity Can Design Interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaber, E.L.

    1998-08-01

    The National Spent Nuclear Fuel Program is chartered with facilitating the disposition of DOE-owned spent nuclear fuel to allow disposal at a geologic repository. This is done through coordination with the repository program and by assisting DOE Site owners of SNF with needed information, standardized requirements, packaging approaches, etc. The High Integrity Can (HIC) will be manufactured to provide a substitute or barrier enhancement for normal fuel geometry and cladding. The can would be nested inside the DOE standardized canister which is designed to interface with the repository waste package. The HIC approach may provide the following benefits over typical canning approaches for DOE SNF. (a) It allows ready calculation and management of criticality issues for miscellaneous. (b) It segments and further isolates damaged or otherwise problem materials from normal SNF in the repository package. (c) It provides a very long term corrosion barrier. (d) It provides an extra internal pressure barrier for particulates, gaseous fission products, hydrogen, and water vapor. (e) It delays any potential release of fission products to the repository environment. (f) It maintains an additional level of fuel geometry control during design basis accidents, rock-fall, and seismic events. (g) When seal welded, it could provide the additional containment required for shipments involving plutonium content in excess of 20 Ci. (10 CFR 71.63.b) if integrated with an appropriate cask design. Long term corrosion protection is central to the HIC concept. The material selected for the HIC (Hastelloy C-22) has undergone extensive testing for repository service. The most severe theoretical interactions between iron, repository water containing chlorides and other repository construction materials have been tested. These expected chemical species have not been shown capable of corroding the selected HIC material. Therefore, the HIC should provide a significant barrier to DOE SNF dispersal long after most commercial SNF has degraded and begun moving into the repository environment.

  4. The SeaView EarthCube project: Lessons Learned from Integrating Across Repositories

    NASA Astrophysics Data System (ADS)

    Diggs, S. C.; Stocks, K. I.; Arko, R. A.; Kinkade, D.; Shepherd, A.; Olson, C. J.; Pham, A.

    2017-12-01

    SeaView is an NSF-funded EarthCube Integrative Activity Project working with 5 existing data repositories* to provide oceanographers with highly integrated thematic data collections in user-requested formats. The project has three complementary goals: Supporting Scientists: SeaView targets scientists' need for easy access to data of interest that are ready to import into their preferred tool. Strengthening Repositories: By integrating data from multiple repositories for science use, SeaView is helping the ocean data repositories align their data and processes and make ocean data more accessible and easily integrated. Informing EarthCube (earthcube.org): SeaView's experience as an integration demonstration can inform the larger NSF EarthCube architecture and design effort. The challenges faced in this small-scale effort are informative for geoscience cyberinfrastructure more generally. Here we focus on the lessons learned that may inform other data facilities and integrative architecture projects. (The SeaView data collections will be presented at the Ocean Sciences 2018 meeting.) One example is the importance of shared semantics, with persistent identifiers, for key integration elements across the data sets (e.g., cruise, parameter, and project/program). These must allow for revision through time and should have an agreed authority or process for resolving conflicts: aligning identifiers and correcting errors were time-consuming and often required both deep domain knowledge and "back end" knowledge of the data facilities. Another example is the need for robust provenance, and tools that support automated or semi-automated data transform pipelines that capture provenance. Multiple copies and versions of data are now flowing into repositories, and onward to long-term archives such as NOAA NCEI and umbrella portals such as DataONE. Exact copies can be identified with hashes (for those who have the skills), but it can be painfully difficult to understand the processing or format changes that differentiate versions. As more sensors are deployed, and data re-use increases, this will only become more challenging. We will discuss these, and additional lessons learned, as well as invite discussion and solutions from others doing similar work. * BCO-DMO, CCHDO, OBIS, OOI, R2R
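
    The hashing step mentioned above can be illustrated with a short sketch; the file names below are hypothetical and SHA-256 is simply one reasonable choice of digest, not necessarily what SeaView uses:

      import hashlib
      from pathlib import Path

      def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
          """Return the SHA-256 hex digest of a file, read in chunks."""
          digest = hashlib.sha256()
          with path.open("rb") as fh:
              for chunk in iter(lambda: fh.read(chunk_size), b""):
                  digest.update(chunk)
          return digest.hexdigest()

      # Two repository copies are byte-identical exactly when their digests match.
      copy_a = Path("cruise_ctd_profile.nc")           # hypothetical file names
      copy_b = Path("mirror/cruise_ctd_profile.nc")
      if copy_a.exists() and copy_b.exists():
          print("exact duplicate:", sha256_of(copy_a) == sha256_of(copy_b))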

  5. Development of DKB ETL module in case of data conversion

    NASA Astrophysics Data System (ADS)

    Kaida, A. Y.; Golosova, M. V.; Grigorieva, M. A.; Gubin, M. Y.

    2018-05-01

    Modern scientific experiments produce huge volumes of data, which requires new approaches to data processing and storage. These data, as well as their processing and storage, are accompanied by a considerable amount of additional information, called metadata, distributed over multiple information systems and repositories and having a complicated, heterogeneous structure. Gathering these metadata for experiments in the field of high energy nuclear physics (HENP) is a complex issue that requires unconventional solutions. One of the tasks is to integrate metadata from different repositories into a central storage. During the integration process, metadata taken from the original source repositories go through several processing steps: aggregation, transformation according to the current data model, and loading into the general storage in a standardized form. The Data Knowledge Base (DKB), an R&D project of the ATLAS experiment at the LHC, aims to provide fast and easy access to significant information about LHC experiments for the scientific community. The data integration subsystem being developed for the DKB project can be represented as a number of pipelines arranging data flow from data sources to the main DKB storage. The data transformation process represented by a single pipeline can be considered as a number of successive data transformation steps, where each step is implemented as an individual program module. This article outlines the specifics of the program modules used in the dataflow and describes one of the modules developed and integrated into the data integration subsystem of DKB.
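
    The modular pipeline idea described above can be sketched as follows; the stage names, record fields, and sample values are illustrative assumptions, not the actual DKB interfaces:

      from typing import Callable, Dict, Iterable, List

      Record = Dict[str, object]
      Stage = Callable[[Record], Record]

      def aggregate(record: Record) -> Record:
          """Illustrative stage: merge metadata pulled from several source systems."""
          record["sources"] = sorted(set(record.get("sources", [])))
          return record

      def transform(record: Record) -> Record:
          """Illustrative stage: map source fields onto the target data model."""
          record["dataset_id"] = str(record.pop("dsid", "")).strip()
          return record

      def run_pipeline(records: Iterable[Record], stages: List[Stage]) -> List[Record]:
          """Apply each stage module in order, as in a linear ETL pipeline."""
          out = []
          for rec in records:
              for stage in stages:
                  rec = stage(rec)
              out.append(rec)
          return out

      print(run_pipeline([{"dsid": " data17_example ", "sources": ["AMI", "AMI"]}],
                         [aggregate, transform]))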

  6. Cross-Cutting Risk Framework: Mining Data for Common Risks Across the Portfolio

    NASA Technical Reports Server (NTRS)

    Klein, Gerald A., Jr.; Ruark, Valerie

    2017-01-01

    The National Aeronautics and Space Administration (NASA) defines risk management as an integrated framework, combining risk-informed decision making and continuous risk management to foster forward-thinking and decision making from an integrated risk perspective. Therefore, decision makers must have access to risks outside of their own project to gain the knowledge that provides the integrated risk perspective. Through the Goddard Space Flight Center (GSFC) Flight Projects Directorate (FPD) Business Change Initiative (BCI), risks were integrated into one repository to facilitate access to risk data between projects. With the centralized repository, communications between the FPD, project managers, and risk managers improved and GSFC created the cross-cutting risk framework (CCRF) team. The creation of the consolidated risk repository, in parallel with the initiation of monthly FPD risk manager and risk governance board meetings, is now providing a complete risk management picture spanning the entire directorate. This paper will describe the challenges, methodologies, tools, and techniques used to develop the CCRF, and the lessons learned as the team collectively worked to identify risks that FPD programs and projects had in common, both past and present.

  7. Testimony of Dr. Raul A. Deju, Basalt Waste Isolation Project, before the Subcommittee on Energy Research and Production, Committee on Science and Technology, United States House of Representatives, March 2, 1983

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1983-01-01

    Status of the Basalt Waste Isolation Project is given. Three key concerns have been identified that need to be resolved to either confirm or eliminate the basalts as a potential nuclear waste repository host medium. They are: A thorough understanding of the groundwater hydrology beneath the Hanford Site is needed to assure that a repository in basalt will not contribute unacceptable amounts of contaminants to the accessible environment. Our ability to construct a repository shaft and a network of underground tunnels needs to be fully demonstrated through an exploratory shaft program. Our ability to ultimately seal a repository, such that its integrity and the isolation of the waste are guaranteed, needs to be demonstrated.

  8. Desiderata for Healthcare Integrated Data Repositories Based on Architectural Comparison of Three Public Repositories

    PubMed Central

    Huser, Vojtech; Cimino, James J.

    2013-01-01

    Integrated data repositories (IDRs) are indispensable tools for numerous biomedical research studies. We compare three large IDRs (Informatics for Integrating Biology and the Bedside (i2b2), HMO Research Network’s Virtual Data Warehouse (VDW) and Observational Medical Outcomes Partnership (OMOP) repository) in order to identify common architectural features that enable efficient storage and organization of large amounts of clinical data. We define three high-level classes of underlying data storage models and we analyze each repository using this classification. We look at how a set of sample facts is represented in each repository and conclude with a list of desiderata for IDRs that deal with the information storage model, terminology model, data integration and value-sets management. PMID:24551366

  9. Desiderata for healthcare integrated data repositories based on architectural comparison of three public repositories.

    PubMed

    Huser, Vojtech; Cimino, James J

    2013-01-01

    Integrated data repositories (IDRs) are indispensable tools for numerous biomedical research studies. We compare three large IDRs (Informatics for Integrating Biology and the Bedside (i2b2), HMO Research Network's Virtual Data Warehouse (VDW) and Observational Medical Outcomes Partnership (OMOP) repository) in order to identify common architectural features that enable efficient storage and organization of large amounts of clinical data. We define three high-level classes of underlying data storage models and we analyze each repository using this classification. We look at how a set of sample facts is represented in each repository and conclude with a list of desiderata for IDRs that deal with the information storage model, terminology model, data integration and value-sets management.

  10. Organizing Diverse, Distributed Project Information

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.

    2003-01-01

    SemanticOrganizer is a software application designed to organize and integrate information generated within a distributed organization or as part of a project that involves multiple, geographically dispersed collaborators. SemanticOrganizer incorporates the capabilities of database storage, document sharing, hypermedia navigation, and semantic interlinking into a system that can be customized to satisfy the specific information-management needs of different user communities. The program provides a centralized repository of information that is both secure and accessible to project collaborators via the World Wide Web. SemanticOrganizer's repository can be used to collect diverse information (including forms, documents, notes, data, spreadsheets, images, and sounds) from computers at collaborators' work sites. The program organizes the information using a unique network-structured conceptual framework, wherein each node represents a data record that contains not only the original information but also metadata (in effect, standardized data that characterize the information). Links among nodes express semantic relationships among the data records. The program features a Web interface through which users enter, interlink, and/or search for information in the repository. By use of this repository, the collaborators have immediate access to the most recent project information, as well as to archived information. A key advantage of SemanticOrganizer is its ability to interlink information in a natural fashion using customized terminology and concepts that are familiar to a user community.
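
    The node-and-link model described above might look roughly like the following sketch; the field names and the relation label are illustrative assumptions rather than the actual SemanticOrganizer schema:

      from dataclasses import dataclass, field
      from typing import Dict, List

      @dataclass
      class Node:
          """A data record plus metadata, as in a network-structured repository."""
          node_id: str
          content: str
          metadata: Dict[str, str] = field(default_factory=dict)

      @dataclass
      class Link:
          """A typed, semantic relationship between two nodes."""
          source: str
          target: str
          relation: str

      reading = Node("n1", "field spectrometer reading", {"creator": "A. Researcher"})
      notes = Node("n2", "calibration notes", {"format": "text/plain"})
      links: List[Link] = [Link("n2", "n1", "describes")]  # hypothetical relation name
      print(links[0].source, links[0].relation, links[0].target)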

  11. Implementing and Sustaining Data Lifecycle best Practices: a Framework for Researchers and Repositories

    NASA Astrophysics Data System (ADS)

    Stall, S.

    2016-02-01

    Emerging data management mandates, in conjunction with cross-domain international interoperability, are posing new challenges for researchers and repositories. Domain repositories are serving a critical, growing role, monitoring and leading data management standards and capabilities within their own repositories and working on mappings between repositories internationally. Leading research institutions and companies will also be important as they develop and expand data curation efforts. This landscape poses a number of challenges for developing and ensuring the use of best practices in curating research data, enabling discovery, elevating quality across diverse repositories, and helping researchers collect and organize their data through the full data lifecycle. This multidimensional challenge will continue to grow in complexity. The American Geophysical Union (AGU) is developing two programs to help researchers and data repositories develop and elevate best practices and address these challenges. The goal is to provide tools for researchers and repositories, whether domain, institutional, or other, that improve performance throughout the data lifecycle across the Earth and space science community. For scientists and researchers, AGU is developing courses around handling data that can lead toward a certification in geoscience data management. Course materials will cover metadata management and collection, data analysis, integration of data, and data presentation. The course topics are being finalized by the advisory board, with the first course planned to be available later this year. AGU is also developing a program aimed at helping data repositories, large and small, domain-specific to general, assess and improve data management practices. AGU has partnered with the CMMI® Institute to adapt their Data Management Maturity (DMM)℠ framework within the Earth and space sciences. A data management assessment using the DMM℠ involves identifying accomplishments and weaknesses compared to leading practices for data management. Recommendations can help improve quality and consistency across the community, facilitating reuse throughout the data lifecycle. Through governance, quality, and architecture process areas, the assessment can measure the ability of data to be discoverable and interoperable.

  12. Integrating XQuery-Enabled SCORM XML Metadata Repositories into an RDF-Based E-Learning P2P Network

    ERIC Educational Resources Information Center

    Qu, Changtao; Nejdl, Wolfgang

    2004-01-01

    Edutella is an RDF-based E-Learning P2P network that aims to accommodate heterogeneous learning resource metadata repositories in a P2P manner and to further facilitate the exchange of metadata between these repositories based on RDF. Whereas Edutella provides RDF metadata repositories with a quite natural integration approach, XML metadata…

  13. Integrating computer programs for engineering analysis and design

    NASA Technical Reports Server (NTRS)

    Wilhite, A. W.; Crisp, V. K.; Johnson, S. C.

    1983-01-01

    A third-generation system for integrating computer programs for engineering analysis and design has been developed for the Aerospace Vehicle Interactive Design (AVID) system. This system consists of an engineering data management system, program interface software, a user interface, and a geometry system. A relational information system (ARIS) was developed specifically for this computer-aided engineering system. It serves as a repository for the design data communicated between analysis programs, as a dictionary that describes these design data, as a directory that describes the analysis programs, and for other system functions. A method is described for interfacing independent analysis programs into a loosely coupled design system; this method emphasizes an interactive extension of analysis techniques and manipulation of design data. Integrity mechanisms also exist to maintain database correctness for multidisciplinary design tasks performed by an individual or a team of specialists. Finally, a prototype user interface program has been developed to aid in system utilization.

  14. Semantic Repositories for eGovernment Initiatives: Integrating Knowledge and Services

    NASA Astrophysics Data System (ADS)

    Palmonari, Matteo; Viscusi, Gianluigi

    In recent years, public sector investments in eGovernment initiatives have depended on making existing governmental ICT systems and infrastructures more reliable. Furthermore, we are witnessing a change in the focus of public sector management, from the disaggregation, competition and performance measurements typical of the New Public Management (NPM) to new models of governance aiming for the reintegration of services under a new perspective on bureaucracy, namely a holistic approach to policy making that exploits the extensive digitalization of administrative operations. In this scenario, major challenges relate to supporting effective access to information both at the front-end level, by means of highly modular and customizable content provision, and at the back-end level, by means of information integration initiatives. Repositories of information about data and services that exploit semantic models and technologies can support these goals by bridging the gap between data-level representations and the human-level knowledge involved in accessing information and in searching for services. Moreover, semantic repository technologies can reach a new level of automation for different tasks involved in interoperability programs, related both to data integration techniques and to service-oriented computing approaches. In this chapter, we discuss the above topics by referring to techniques and experiences where repositories based on conceptual models and ontologies are used at different levels in eGovernment initiatives: at the back-end level to produce a comprehensive view of the information managed in the public administrations' (PA) information systems, and at the front-end level to support effective service delivery.

  15. USAF Hearing Conservation Program, DOEHRS-HC Data Repository Annual Report: CY15

    DTIC Science & Technology

    2017-05-31

    AFRL-SA-WP-SR-2017-0014, USAF Hearing Conservation Program, DOEHRS-HC Data Repository Annual Report: CY15. Daniel A. Williams… …Health Readiness System-Hearing Conservation Data Repository (DOEHRS-HC DR). Major command- and installation-level reports are available quarterly…

  16. Developing an Integrated Institutional Repository at Imperial College London

    ERIC Educational Resources Information Center

    Afshari, Fereshteh; Jones, Richard

    2007-01-01

    Purpose: This paper aims to demonstrate how a highly integrated approach to repository development and deployment can be beneficial in producing a successful archive. Design/methodology/approach: Imperial College London undertook a significant specifications process to gather and formalise requirements for its repository system. This was done…

  17. Coupling Effects of Heat and Moisture on the Saturation Processes of Buffer Material in a Deep Geological Repository

    NASA Astrophysics Data System (ADS)

    Huang, Wei-Hsing

    2017-04-01

    The clay barrier plays a major role in the isolation of radioactive wastes in an underground repository. This paper investigates the resaturation behavior of the clay barrier, with emphasis on the coupling effects of heat and moisture in the buffer material in the near-field of a repository during groundwater intrusion processes. A locally available clay named "Zhisin clay" and a standard bentonite material were adopted in the laboratory program. Water uptake tests were conducted on clay specimens compacted at various densities to simulate the intrusion of groundwater into the buffer material. Soil suction of the clay specimens was measured by psychrometers embedded in the specimens and by the vapor equilibrium technique conducted at varying temperatures. Using the soil water characteristic curve, an integration scheme was introduced to estimate the hydraulic conductivity of unsaturated clay. The finite element program ABAQUS was then employed to carry out the numerical simulation of the saturation process in the near field of a repository. Results of the numerical simulation were validated using the degree-of-saturation profile obtained from the water uptake tests on Zhisin clay. The numerical scheme was then extended to establish a model simulating the resaturation process after the closure of a repository. It was found that, because the suction and thermal conductivity of the clay barrier material vary with temperature, the calculated temperature field shows a reduction when the hydro-properties are incorporated in the calculations.
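
    The abstract does not name the integration scheme used; one common scheme in the literature (an assumption for illustration only) combines Mualem's capillary model with the van Genuchten form of the soil water characteristic curve, giving the closed-form unsaturated hydraulic conductivity

      K(S_e) = K_s \, S_e^{1/2} \left[ 1 - \left( 1 - S_e^{1/m} \right)^{m} \right]^{2}, \qquad S_e = \frac{\theta - \theta_r}{\theta_s - \theta_r},

    where K_s is the saturated hydraulic conductivity, \theta_r and \theta_s are the residual and saturated water contents, and m is the van Genuchten fitting parameter.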

  18. New Features of the re3data Registry of Research Data Repositories

    NASA Astrophysics Data System (ADS)

    Elger, K.; Pampel, H.; Vierkant, P.; Witt, M.

    2016-12-01

    re3data is a registry of research data repositories that lists over 1,600 repositories from around the world, making it the largest and most comprehensive online catalog of data repositories on the web. The registry offers researchers, funding agencies, libraries and publishers a comprehensive overview of the heterogeneous landscape of data repositories. The repositories are described following the "Metadata Schema for the Description of Research Data Repositories". re3data summarises the properties of a repository into a user-friendly icon system, helping users to easily identify an adequate repository for the storage of their datasets. The re3data entries are curated by an international, multi-disciplinary editorial board. An application programming interface (API) enables other information systems to list and fetch metadata for integration and interoperability. Funders like the European Commission (2015) and publishers like Springer Nature (2016) recommend the use of re3data.org in their policies. The original re3data project partners are the GFZ German Research Centre for Geosciences, the Humboldt-Universität zu Berlin, the Purdue University Libraries and the Karlsruhe Institute of Technology (KIT). Since 2015, re3data has been operated as a service of DataCite, a global non-profit organisation that provides persistent identifiers (DOIs) for research data. At the 2016 AGU Fall Meeting we will describe the current status of re3data, give an overview of the major developments and new features, and present our plans to increase the quality of the re3data entries.
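
    As a sketch of how another information system might list repositories through the API mentioned above (the endpoint path and XML layout are assumptions based on this description, not verified details of the service):

      import requests
      import xml.etree.ElementTree as ET

      # Assumed base path of the public re3data API; adjust if the service differs.
      BASE = "https://www.re3data.org/api/v1"

      resp = requests.get(f"{BASE}/repositories", timeout=30)
      resp.raise_for_status()

      root = ET.fromstring(resp.content)
      # Print the first few repository identifiers found in the XML listing.
      for repo in list(root)[:5]:
          ident = repo.findtext("id") or repo.attrib.get("id", "?")
          print(ident)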

  19. A Discussion of Issues in Integrity Constraint Monitoring

    NASA Technical Reports Server (NTRS)

    Fernandez, Francisco G.; Gates, Ann Q.; Cooke, Daniel E.

    1998-01-01

    In the development of large-scale software systems, analysts, designers, and programmers identify properties of data objects in the system. The ability to check assertions about those properties at runtime is desirable as a means of verifying the integrity of the program. Typically, programmers ensure the satisfaction of such properties through some form of manually embedded assertion check. The disadvantage of this approach is that these assertions become entangled within the program code. The goal of the research is to develop an integrity constraint monitoring mechanism whereby software system properties (called integrity constraints) held in a repository are automatically inserted into the program by the mechanism to check for incorrect program behaviors. Such a mechanism would overcome many of the deficiencies of manually embedded assertion checks. This paper gives an overview of the preliminary work performed toward this goal. The manual instrumentation of constraint checking on a series of test programs is discussed. This review is then used as the basis for a discussion of issues to be considered in developing an automated integrity constraint monitor.
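
    A minimal sketch of the contrast drawn above, with a manually embedded assertion next to a constraint kept in a separate repository and applied by an instrumentation wrapper; the constraint expressions and names are illustrative assumptions:

      from functools import wraps

      # Manually embedded assertion: the property is tangled with the program code.
      def withdraw_manual(balance: float, amount: float) -> float:
          assert 0 <= amount <= balance, "invalid withdrawal"
          return balance - amount

      # Repository of integrity constraints, kept separate from the program logic.
      CONSTRAINTS = {
          "withdraw": lambda balance, amount: 0 <= amount <= balance,
      }

      def monitored(name: str):
          """Instrument a function with the constraint stored under `name`."""
          def decorate(func):
              @wraps(func)
              def wrapper(*args, **kwargs):
                  if not CONSTRAINTS[name](*args, **kwargs):
                      raise ValueError(f"integrity constraint '{name}' violated")
                  return func(*args, **kwargs)
              return wrapper
          return decorate

      @monitored("withdraw")
      def withdraw(balance: float, amount: float) -> float:
          return balance - amount

      print(withdraw(100.0, 30.0))  # passes the constraint check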

  20. SeaView: bringing EarthCube to the Oceanographer

    NASA Astrophysics Data System (ADS)

    Stocks, K. I.; Diggs, S. C.; Arko, R. A.; Kinkade, D.; Shepherd, A.

    2016-12-01

    As new instrument types are developed and new observational programs start, supporting a growing community of "dry" oceanographers, the ability to find, access, and visualize existing data of interest becomes increasingly critical. Yet ocean data, when available, are held in multiple data facilities, in different formats, and accessible through different pathways. This creates practical problems with integrating and working across different data sets. The SeaView project is building connections between the rich data resources in five major oceanographic data facilities - BCO-DMO, CCHDO, OBIS, OOI, and R2R* - creating a federated set of thematic data collections that are organized around common characteristics (geographic location, time, expedition, program, data type, etc.) and published online in Web Accessible Folders using standard file formats such as ODV and NetCDF. The work includes not simply reformatting data, but identifying and, where possible, addressing interoperability challenges: which common identifiers for core concepts can connect data across repositories; which terms a scientist may want to search that, if added to the data repositories, would increase discoverability; the presence of duplicate data across repositories; etc. We will present the data collections available to date, including data from the OOI Pioneer Array region, and seek scientists' input on the data types and formats they prefer, the tools they use to analyze and visualize data, and their specific recommendations for future data collections to support oceanographic science. * Biological and Chemical Oceanography Data Management Office (BCO-DMO), CLIVAR and Carbon Hydrographic Data Office (CCHDO), International Ocean Biogeographic Information System (iOBIS), Ocean Observatories Initiative (OOI), and Rolling Deck to Repository (R2R) Program.

  1. Building a genome database using an object-oriented approach.

    PubMed

    Barbasiewicz, Anna; Liu, Lin; Lang, B Franz; Burger, Gertraud

    2002-01-01

    GOBASE is a relational database that integrates data associated with mitochondria and chloroplasts. The most important data in GOBASE, i.e., molecular sequences and taxonomic information, are obtained from the public sequence data repository at the National Center for Biotechnology Information (NCBI), and are validated by our experts. Maintaining a curated genomic database comes with a towering labor cost, due to the sheer volume of available genomic sequences and the plethora of annotation errors and omissions in records retrieved from public repositories. Here we describe our approach to increasing automation of the database population process, thereby reducing manual intervention. As a first step, we used the Unified Modeling Language (UML) to construct a list of potential errors. Each case was evaluated independently, an expert solution was devised, and the solution was represented as a diagram. Subsequently, the UML diagrams were used as templates for writing object-oriented automation programs in the Java programming language.

  2. Computational knowledge integration in biopharmaceutical research.

    PubMed

    Ficenec, David; Osborne, Mark; Pradines, Joel; Richards, Dan; Felciano, Ramon; Cho, Raymond J; Chen, Richard O; Liefeld, Ted; Owen, James; Ruttenberg, Alan; Reich, Christian; Horvath, Joseph; Clark, Tim

    2003-09-01

    An initiative to increase biopharmaceutical research productivity by capturing, sharing and computationally integrating proprietary scientific discoveries with public knowledge is described. This initiative involves both organisational process change and multiple interoperating software systems. The software components rely on mutually supporting integration techniques. These include a richly structured ontology, statistical analysis of experimental data against stored conclusions, natural language processing of public literature, secure document repositories with lightweight metadata, web services integration, enterprise web portals and relational databases. This approach has already begun to increase scientific productivity in our enterprise by creating an organisational memory (OM) of internal research findings, accessible on the web. Through bringing together these components it has also been possible to construct a very large and expanding repository of biological pathway information linked to this repository of findings which is extremely useful in analysis of DNA microarray data. This repository, in turn, enables our research paradigm to be shifted towards more comprehensive systems-based understandings of drug action.

  3. The Geodetic Seamless Archive Centers Service Layer: A System Architecture for Federating Geodesy Data Repositories

    NASA Astrophysics Data System (ADS)

    McWhirter, J.; Boler, F. M.; Bock, Y.; Jamason, P.; Squibb, M. B.; Noll, C. E.; Blewitt, G.; Kreemer, C. W.

    2010-12-01

    Three geodesy Archive Centers, Scripps Orbit and Permanent Array Center (SOPAC), NASA's Crustal Dynamics Data Information System (CDDIS) and UNAVCO are engaged in a joint effort to define and develop a common Web Service Application Programming Interface (API) for accessing geodetic data holdings. This effort is funded by the NASA ROSES ACCESS Program to modernize the original GPS Seamless Archive Centers (GSAC) technology which was developed in the 1990s. A new web service interface, the GSAC-WS, is being developed to provide uniform and expanded mechanisms through which users can access our data repositories. In total, our respective archives hold tens of millions of files and contain a rich collection of site/station metadata. Though we serve similar user communities, we currently provide a range of different access methods, query services and metadata formats. This leads to a lack of consistency in the user's experience and a duplication of engineering efforts. The GSAC-WS API and its reference implementation in an underlying Java-based GSAC Service Layer (GSL) supports metadata and data queries into site/station oriented data archives. The general nature of this API makes it applicable to a broad range of data systems. The overall goals of this project include providing consistent and rich query interfaces for end users and client programs, developing enabling technology to help third-party repositories build these web service capabilities, and enabling data queries across a collection of federated GSAC-WS enabled repositories. A fundamental challenge faced in this project is to provide a common suite of query services across a heterogeneous collection of data while enabling each repository to expose its specific metadata holdings. To address this challenge we are developing a "capabilities" based service where a repository can describe its specific query and metadata capabilities. Furthermore, the architecture of the GSL is based on a model-view paradigm that decouples the underlying data model semantics from particular representations of the data model. This will allow the GSAC-WS enabled repositories to evolve their service offerings to incorporate new metadata definition formats (e.g., ISO-19115, FGDC, JSON, etc.) and new techniques for accessing their holdings. Building on the core GSAC-WS implementations, the project is also developing a federated/distributed query service. This service will seamlessly integrate with the GSAC Service Layer and will support data and metadata queries across a collection of federated GSAC repositories.
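
    A client-side sketch of the kind of site/station query such an API supports; the host, path, and parameter names are hypothetical placeholders, since the abstract does not spell out the concrete GSAC-WS URL scheme:

      import requests

      # Placeholder endpoint and parameter names; nothing here points at a real service.
      ENDPOINT = "https://example.org/gsacws/gsacapi/site/search"
      params = {
          "output": "site.json",   # assumed machine-readable output selector
          "site.code": "P12*",     # assumed wildcard query on station code
          "limit": 10,
      }

      try:
          resp = requests.get(ENDPOINT, params=params, timeout=30)
          resp.raise_for_status()
          for site in resp.json().get("sites", []):
              print(site.get("code"), site.get("name"))
      except requests.RequestException as err:
          print("placeholder endpoint not reachable:", err)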

  4. Site characterization progress report: Yucca Mountain, Nevada. Number 15, April 1--September 30, 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-04-01

    During the second half of fiscal year 1996, activities at the Yucca Mountain Site Characterization Project (Project) supported the objectives of the revised Program Plan released this period by the Office of Civilian Radioactive Waste Management of the US Department of Energy (Department). Outlined in the revised plan is a focused, integrated program of site characterization, design, engineering, environmental, and performance assessment activities that will achieve key Program and statutory objectives. The plan will result in the development of a license application for repository construction at Yucca Mountain, if the site is found suitable. Activities this period focused on two of the three near-term objectives of the revised plan: updating in 1997 the regulatory framework for determining the suitability of the site for the proposed repository concept and providing information for a 1998 viability assessment of continuing toward the licensing of a repository. The Project has also developed a new design approach that uses the advanced conceptual design published during the last reporting period as a base for developing a design that will support the viability assessment. The initial construction phase of the Thermal Testing Facility was completed and the first phase of the in situ heater tests began on schedule. In addition, phase-one construction was completed for the first of two alcoves that will provide access to the Ghost Dance fault.

  5. Using Linked Open Data and Semantic Integration to Search Across Geoscience Repositories

    NASA Astrophysics Data System (ADS)

    Mickle, A.; Raymond, L. M.; Shepherd, A.; Arko, R. A.; Carbotte, S. M.; Chandler, C. L.; Cheatham, M.; Fils, D.; Hitzler, P.; Janowicz, K.; Jones, M.; Krisnadhi, A.; Lehnert, K. A.; Narock, T.; Schildhauer, M.; Wiebe, P. H.

    2014-12-01

    The MBLWHOI Library is a partner in the OceanLink project, an NSF EarthCube Building Block, applying semantic technologies to enable knowledge discovery, sharing and integration. OceanLink is testing ontology design patterns that link together: two data repositories, Rolling Deck to Repository (R2R), Biological and Chemical Oceanography Data Management Office (BCO-DMO); the MBLWHOI Library Institutional Repository (IR) Woods Hole Open Access Server (WHOAS); National Science Foundation (NSF) funded awards; and American Geophysical Union (AGU) conference presentations. The Library is collaborating with scientific users, data managers, DSpace engineers, experts in ontology design patterns, and user interface developers to make WHOAS, a DSpace repository, linked open data enabled. The goal is to allow searching across repositories without any of the information providers having to change how they manage their collections. The tools developed for DSpace will be made available to the community of users. There are 257 registered DSpace repositories in the United States and over 1700 worldwide. Outcomes include: integration of DSpace with the OpenRDF Sesame triple store to provide a SPARQL endpoint for the storage and query of RDF representations of DSpace resources; mapping of DSpace resources to the OceanLink ontology; and a DSpace "data" add-on to provide a resolvable linked open data representation of DSpace resources.

  6. The Nevada initiative: A risk communication Fiasco

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flynn, J.; Slovic, P.; Mertz, C.K.

    The U.S. Congress has designated Yucca Mountain, Nevada as the only potential site to be studied for the nation's first high-level nuclear waste repository. People in Nevada strongly oppose the program, managed by the U.S. Department of Energy. Survey research shows that the public believes there are great risks from a repository program, in contrast to a majority of scientists who feel the risks are acceptably small. Delays in the repository program resulting in part from public opposition in Nevada have concerned the nuclear power industry, which collects the fees for the federal repository program and believes it needs the repository as a final disposal facility for its high-level nuclear wastes. To assist the repository program, the American Nuclear Energy Council (ANEC), an industry group, sponsored a massive advertising campaign in Nevada. The campaign attempted to assure people that the risks of a repository were small and that the repository studies should proceed. The campaign failed because its managers misunderstood the issues underlying the controversy, attempted a covert manipulation of public opinion that was revealed, and most importantly, lacked the public trust that was necessary to communicate credibly about the risks of a nuclear waste facility. This article describes the advertising campaign and its effects. The manner in which the ANEC campaign itself became a controversial public issue is reviewed. The advertising campaign is discussed as it relates to risk assessment and communication. 29 refs., 2 tabs.

  7. Charting a Path to Location Intelligence for STD Control.

    PubMed

    Gerber, Todd M; Du, Ping; Armstrong-Brown, Janelle; McNutt, Louise-Anne; Coles, F Bruce

    2009-01-01

    This article describes the New York State Department of Health's GeoDatabase project, which developed new methods and techniques for designing and building a geocoding and mapping data repository for sexually transmitted disease (STD) control. The GeoDatabase development was supported through the Centers for Disease Control and Prevention's Outcome Assessment through Systems of Integrated Surveillance workgroup. The design and operation of the GeoDatabase relied upon commercial-off-the-shelf tools that other public health programs may also use for disease-control systems. This article provides a blueprint of the structure and software used to build the GeoDatabase and integrate location data from multiple data sources into the everyday activities of STD control programs.

  8. Information warehouse - a comprehensive informatics platform for business, clinical, and research applications.

    PubMed

    Kamal, Jyoti; Liu, Jianhua; Ostrander, Michael; Santangelo, Jennifer; Dyta, Ravi; Rogers, Patrick; Mekhjian, Hagop S

    2010-11-13

    Since its inception in 1997, the IW (Information Warehouse) at the Ohio State University Medical Center (OSUMC) has gradually transformed itself from a single-purpose business decision support system to a comprehensive informatics platform supporting basic, clinical, and translational research. The IW today is the combination of four integrated components: a clinical data repository containing over a million patients; a research data repository housing various research-specific data; an application development platform for building business and research enabling applications; and a business intelligence environment assisting with reporting in all functional areas. The IW is structured and encoded using standard terminologies such as SNOMED-CT, ICD, and CPT. The IW is an important component of OSUMC's Clinical and Translational Science Award (CTSA) informatics program.

  9. 78 FR 23938 - Privacy Act of 1974; Report of a New Routine Use for Selected CMS Systems of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-23

    ...-0558; Medicare Integrated Data Repository (IDR), System No. 09-70-0571; Common Working Files (CWF... Integrated Data Repository (IDR), System No. 09-70- 0571, published at 71 Fed. Reg., 74915 (December 13, 2006...

  10. Rolling Deck to Repository (R2R): Linking and Integrating Data for Oceanographic Research

    NASA Astrophysics Data System (ADS)

    Arko, R. A.; Chandler, C. L.; Clark, P. D.; Shepherd, A.; Moore, C.

    2012-12-01

    The Rolling Deck to Repository (R2R) program is developing infrastructure to ensure the underway sensor data from NSF-supported oceanographic research vessels are routinely and consistently documented, preserved in long-term archives, and disseminated to the science community. We have published the entire R2R Catalog as a Linked Data collection, making it easily accessible to encourage linking and integration with data at other repositories. We are developing the R2R Linked Data collection with specific goals in mind: 1.) We facilitate data access and reuse by providing the richest possible collection of resources to describe vessels, cruises, instruments, and datasets from the U.S. academic fleet, including data quality assessment results and clean trackline navigation. We are leveraging or adopting existing community-standard concepts and vocabularies, particularly concepts from the Biological and Chemical Oceanography Data Management Office (BCO-DMO) ontology and terms from the pan-European SeaDataNet vocabularies, and continually re-publish resources as new concepts and terms are mapped. 2.) We facilitate data citation through the entire data lifecycle from field acquisition to shoreside archiving to (ultimately) global syntheses and journal articles. We are implementing globally unique and persistent identifiers at the collection, dataset, and granule levels, and encoding these citable identifiers directly into the Linked Data resources. 3.) We facilitate linking and integration with other repositories that publish Linked Data collections for the U.S. academic fleet, such as BCO-DMO and the Index to Marine and Lacustrine Geological Samples (IMLGS). We are initially mapping datasets at the resource level, and plan to eventually implement rule-based mapping at the concept level. We work collaboratively with partner repositories to develop best practices for URI patterns and consensus on shared vocabularies. The R2R Linked Data collection is implemented as a lightweight "virtual RDF graph" generated on-the-fly from our SQL database using the D2RQ (http://d2rq.org) package. In addition to the default SPARQL endpoint for programmatic access, we are developing a Web-based interface from open-source software components that offers user-friendly browse and search.
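
    Programmatic access of the kind described above would typically go through the SPARQL endpoint; in this sketch the endpoint URL, vocabulary namespace, and property names are assumptions for illustration, not the published R2R terms:

      from SPARQLWrapper import SPARQLWrapper, JSON

      # Hypothetical endpoint URL and vocabulary, for illustration only.
      sparql = SPARQLWrapper("https://data.example.org/r2r/sparql")
      sparql.setQuery("""
          PREFIX r2r: <http://example.org/r2r/vocab#>
          SELECT ?cruise ?vessel WHERE {
              ?cruise a r2r:Cruise ;
                      r2r:hasVessel ?vessel .
          } LIMIT 10
      """)
      sparql.setReturnFormat(JSON)

      results = sparql.query().convert()
      for row in results["results"]["bindings"]:
          print(row["cruise"]["value"], row["vessel"]["value"])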

  11. 75 FR 26788 - Public Land Order No. 7742; Withdrawal of Public Land for the Manning Canyon Tailings Repository; UT

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-12

    ... 79765] Public Land Order No. 7742; Withdrawal of Public Land for the Manning Canyon Tailings Repository... period of 5 years to protect the integrity of the Manning Canyon Tailings Repository and surrounding... Repository. The Bureau of Land Management intends to evaluate the need for a lengthier withdrawal through the...

  12. AdaNET phase 0 support for the AdaNET Dynamic Software Inventory (DSI) management system prototype. Catalog of available reusable software components

    NASA Technical Reports Server (NTRS)

    Hanley, Lionel

    1989-01-01

    The Ada Software Repository is a public-domain collection of Ada software and information. The Ada Software Repository is one of several repositories located on the SIMTEL20 Defense Data Network host computer at White Sands Missile Range, and it has been available to any host computer on the network since 26 November 1984. This repository provides a free source for Ada programs and information. The Ada Software Repository is divided into several subdirectories. These directories are organized by topic, and their names and a brief overview of their topics are provided. The Ada Software Repository on SIMTEL20 serves two basic roles: to promote the exchange and use (reusability) of Ada programs and tools (including components) and to promote Ada education.

  13. An overview of platforms for cloud based development.

    PubMed

    Fylaktopoulos, G; Goumas, G; Skolarikis, M; Sotiropoulos, A; Maglogiannis, I

    2016-01-01

    This paper provides an overview of state-of-the-art technologies for software development in cloud environments. The surveyed systems cover the whole spectrum of cloud-based development, including integrated programming environments, code repositories, software modeling, composition and documentation tools, and application management and orchestration. In this work we evaluate the existing cloud development ecosystem based on a wide range of characteristics, such as applicability (e.g. programming and database technologies supported), productivity enhancement (e.g. editor capabilities, debugging tools), support for collaboration (e.g. repository functionality, version control) and post-development application hosting, and we compare the surveyed systems. The survey shows that software engineering in the cloud era has made its initial steps, showing potential to provide concrete implementation and execution environments for cloud-based applications. However, a number of important challenges need to be addressed for this approach to be viable. These challenges are discussed in the article, and we conclude that although several steps have been made, a compact and reliable solution does not yet exist.

  14. Collaboration Nation: The Building of the Welsh Repository Network

    ERIC Educational Resources Information Center

    Knowles, Jacqueline

    2010-01-01

    Purpose: The purpose of this paper is to disseminate information about the Welsh Repository Network (WRN), innovative work being undertaken to build an integrated network of institutional digital repositories. A collaborative approach, in particular through the provision of centralised technical and organisational support, has demonstrated…

  15. The United States Antarctic Program Data Center (USAP-DC): Recent Developments

    NASA Astrophysics Data System (ADS)

    Nitsche, F. O.; Bauer, R.; Arko, R. A.; Shane, N.; Carbotte, S. M.; Scambos, T.

    2017-12-01

    Antarctic earth and environmental science data are highly valuable, often unique research assets. They are acquired with substantial and expensive logistical effort, frequently in areas that will not be re-visited for many years. The data acquired in support of Antarctic research span a wide range of disciplines. Historically, data management for the US Antarctic Program (USAP) has made use of existing disciplinary data centers, and the international Antarctic Master Directory (AMD) has served as a central metadata catalog linking to data files hosted in these external repositories. However, disciplinary repositories do not exist for all USAP-generated data types and often it is unclear what repositories are appropriate, leading to many datasets being served locally from scientist's websites or not available at all. The USAP Data Center (USAP-DC; www.usap-dc.org), operated as part of the Interdisciplinary Earth Data Alliance (IEDA), contributes to the broader preservation of research data acquired with funding from NSF's Office of Polar Programs by providing a repository for diverse data from the Antarctic region. USAP-DC hosts data that spans the range of Antarctic research from snow radar to volcano observatory imagery to penguin counts to meteorological model outputs. Data services include data documentation, long-term preservation, and web publication, as well as scientist support for registration of data descriptions into the AMD in fulfillment of US obligations under the International Antarctic Treaty. In Spring 2016, USAP-DC and the NSIDC began a new collaboration to consolidate data services for Antarctic investigators and to integrate the NSF-funded glaciology collection at NSIDC with the collection hosted by USAP-DC. Investigator submissions for NSF's Glaciology program now make use of USAP-DC's web submission tools, providing a uniform interface for Antarctic investigators. The tools have been redesigned to collect a broader range of metadata. Each data submission is reviewed and verified by a specialist from the USAP-DC/NSIDC team depending on disciplinary focus of the submission. A recently updated web search interface is available to search data by title, NSF program, award, dataset contributor, large scale project (e.g. WAIS Divide Ice Core) or by specifying an area in map view.

  16. Ontology for Transforming Geo-Spatial Data for Discovery and Integration of Scientific Data

    NASA Astrophysics Data System (ADS)

    Nguyen, L.; Chee, T.; Minnis, P.

    2013-12-01

    Discovery of and access to geo-spatial scientific data across heterogeneous repositories and multi-discipline datasets can present challenges for scientists. We propose to build a workflow for transforming geo-spatial datasets into a semantic environment by using relationships to describe resources with the OWL Web Ontology Language, RDF, and a proposed geo-spatial vocabulary. We will present methods for transforming a traditional scientific dataset, the use of a semantic repository, and querying using SPARQL to integrate and access datasets. This unique repository will enable discovery of scientific data by geospatial bounds or other criteria.
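
    The transformation step described above, mapping one tabular geo-spatial record onto RDF triples, might be sketched with rdflib as follows; the namespace and property names are hypothetical stand-ins for the proposed geo-spatial vocabulary:

      from rdflib import Graph, Literal, Namespace, RDF, URIRef
      from rdflib.namespace import XSD

      GEO = Namespace("http://example.org/geo-vocab#")   # hypothetical vocabulary

      g = Graph()
      g.bind("geo", GEO)

      # One illustrative tabular record to be lifted into RDF.
      record = {"id": "granule-001", "lat": 36.6, "lon": -97.5, "parameter": "cloud_fraction"}

      subject = URIRef(f"http://example.org/dataset/{record['id']}")
      g.add((subject, RDF.type, GEO.Granule))
      g.add((subject, GEO.latitude, Literal(record["lat"], datatype=XSD.double)))
      g.add((subject, GEO.longitude, Literal(record["lon"], datatype=XSD.double)))
      g.add((subject, GEO.parameter, Literal(record["parameter"])))

      print(g.serialize(format="turtle"))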

  17. The preliminary design and feasibility study of the spent fuel and high level waste repository in the Czech Republic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valvoda, Z.; Holub, J.; Kucerka, M.

    1996-12-31

    The Program of Development of the Spent Fuel and High Level Waste Repository in the Conditions of the Czech Republic began in 1993. During the first phase, the basic concept and structure of the Program were developed, and the basic design criteria and requirements were prepared. In the conditions of the Czech Republic, only an underground repository in a deep geological formation is acceptable. The expected depth is between 500 and 1000 meters, and the host rock will be granite. A preliminary variant design study was carried out in 1994 that analyzed the radioactive waste and spent fuel flow from NPPs to the repository and various possibilities of transportation in accordance with the various concepts of spent fuel conditioning and transportation to the underground structures. Conditioning and encapsulation of spent fuel and/or radioactive waste is proposed on the repository site. Underground disposal structures are proposed on one underground level. The repository will have reserve capacity for radioactive waste from NPP decommissioning and for waste not acceptable to other repositories. Vertical disposal of unshielded canisters in boreholes and/or horizontal disposal of shielded canisters is studied. The year 2035 has been established as the baseline date for the start of repository operation, and a preliminary time schedule of the Project has been developed from this date. A method of calculating levelized and discounted costs over the repository lifetime was used for economic calculations for each of the five selected variants. Preliminary expected parametric costs of the repository are about 0.1 Kc ($0.004) per MWh produced in the Czech NPPs. In 1995, the design and feasibility study went into more detail on the technical concept of repository construction and the proposed technologies, as well as on the operational phase of the repository. The paper describes the results of the 1995 design work and presents the program of repository development for the next period.

  18. Information Warehouse – A Comprehensive Informatics Platform for Business, Clinical, and Research Applications

    PubMed Central

    Kamal, Jyoti; Liu, Jianhua; Ostrander, Michael; Santangelo, Jennifer; Dyta, Ravi; Rogers, Patrick; Mekhjian, Hagop S.

    2010-01-01

    Since its inception in 1997, the IW (Information Warehouse) at the Ohio State University Medical Center (OSUMC) has gradually transformed itself from a single-purpose business decision support system to a comprehensive informatics platform supporting basic, clinical, and translational research. The IW today is the combination of four integrated components: a clinical data repository containing over a million patients; a research data repository housing various research-specific data; an application development platform for building business and research enabling applications; and a business intelligence environment assisting with reporting in all functional areas. The IW is structured and encoded using standard terminologies such as SNOMED-CT, ICD, and CPT. The IW is an important component of OSUMC’s Clinical and Translational Science Award (CTSA) informatics program. PMID:21347019

  19. NeuroVault.org: A repository for sharing unthresholded statistical maps, parcellations, and atlases of the human brain.

    PubMed

    Gorgolewski, Krzysztof J; Varoquaux, Gael; Rivera, Gabriel; Schwartz, Yannick; Sochat, Vanessa V; Ghosh, Satrajit S; Maumet, Camille; Nichols, Thomas E; Poline, Jean-Baptiste; Yarkoni, Tal; Margulies, Daniel S; Poldrack, Russell A

    2016-01-01

    NeuroVault.org is dedicated to storing outputs of analyses in the form of statistical maps, parcellations and atlases, a unique strategy that contrasts with most neuroimaging repositories, which store raw acquisition data or stereotaxic coordinates. Such maps are indispensable for performing meta-analyses, validating novel methodology, and deciding on precise outlines for regions of interest (ROIs). NeuroVault is open to maps derived from both healthy and clinical populations, as well as from various imaging modalities (sMRI, fMRI, EEG, MEG, PET, etc.). The repository uses modern web technologies such as interactive web-based visualization, cognitive decoding, and comparison with other maps to provide researchers with efficient, intuitive tools to improve the understanding of their results. Each dataset and map is assigned a permanent Universal Resource Locator (URL), and all of the data are accessible through a REST Application Programming Interface (API). Additionally, the repository supports the NIDM-Results standard and has the ability to parse outputs from the popular FSL and SPM software packages to automatically extract relevant metadata. This ease of use, modern web integration, and pioneering functionality hold promise to improve the workflow for making inferences about and sharing whole-brain statistical maps. Copyright © 2015 Elsevier Inc. All rights reserved.
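
    A sketch of retrieving map metadata through the REST API mentioned above; the endpoint path, query parameter, and field names are assumptions based on this description and may differ from the live service:

      import requests

      # Assumed NeuroVault REST endpoint returning paginated JSON.
      resp = requests.get("https://neurovault.org/api/images/",
                          params={"limit": 5}, timeout=30)
      resp.raise_for_status()

      for image in resp.json().get("results", []):
          # Each map is described by metadata and a permanent URL.
          print(image.get("id"), image.get("name"), image.get("url"))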

  20. Evolving the Living With a Star Data System Definition

    NASA Astrophysics Data System (ADS)

    Otranto, J.; Dijoseph, M.; Worrall, W.

    2003-04-01

    NASA’s Living With a Star (LWS) Program is a space weather-focused and applications-driven research program. The LWS Program is soliciting input from the solar, space physics, space weather, and climate science communities to develop a system that enables access to science data associated with these disciplines, and advances the development of discipline and interdisciplinary findings. The LWS Program will implement a data system that builds upon the existing and planned data capture, processing, and storage components put in place by individual spacecraft missions and also inter-project data management systems, such as active archives, deep archives, and multi-mission repositories. It is technically feasible for the LWS Program to integrate data from a broad set of resources, assuming they are either publicly accessible or access is permitted by the system’s administrators. The LWS Program data system will work in coordination with spacecraft mission data systems and science data repositories, integrating them into a common data representation. This common representation relies on a robust metadata definition that provides journalistic and technical data descriptions, plus linkages to supporting data products and tools. The LWS Program intends to become an enabling resource to PIs, interdisciplinary scientists, researchers, and students, facilitating access both to a broad collection of science data and to the necessary supporting components to understand and make productive use of the data. For the LWS Program to represent science data that is physically distributed across various ground system elements, information about the data products stored on each system is collected through a series of LWS-created active agents. These active agents are customized to interface or interact with each one of these data systems, collect information, and forward updates to a single LWS-developed metadata broker. This broker, in turn, updates a centralized repository of LWS-specific metadata. A populated LWS metadata database is a single point of contact that can serve all users (the science community) with a “one-stop shop” for data access. While data may not be physically stored in an LWS-specific repository, the LWS system enables data access from wherever the data are stored. Moreover, LWS provides the user access to information for understanding the data source, format, and calibration; enables access to ancillary and correlative data products; and provides links to processing tools and models associated with the data and any corresponding findings. The LWS may also support an active archive for solar, space physics, space weather, and climate data when these data would otherwise be discarded or archived off-line. This archive could potentially serve as a backup facility for LWS missions. This plan was developed based upon input already received from the science community; the architecture is based on systems developed to date that have worked well on a smaller scale. The LWS Program continues to seek constructive input from the science community, examples of both successes and failures in dealing with science data systems, and insights regarding the obstacles between the current state of the practice and this vision for the LWS Program data system.
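
    The abstract describes active agents that harvest product information from distributed data systems and forward updates to a single metadata broker maintaining a central LWS metadata repository. The sketch below illustrates that agent/broker pattern in the abstract's own terms; every class, method, and field name here is hypothetical and is not part of the LWS system.

```python
from dataclasses import dataclass
from typing import Dict, List

# Conceptual sketch of the agent/broker pattern described in the abstract.
# All names are hypothetical illustrations, not LWS interfaces.

@dataclass
class MetadataRecord:
    product_id: str
    source_system: str
    description: str
    access_url: str

class MetadataBroker:
    """Receives updates from agents and maintains a central metadata catalog."""
    def __init__(self):
        self.catalog: Dict[str, MetadataRecord] = {}

    def update(self, records: List[MetadataRecord]) -> None:
        for rec in records:
            self.catalog[rec.product_id] = rec   # last update wins

class ActiveAgent:
    """Collects product metadata from one remote data system and forwards it."""
    def __init__(self, source_system: str, broker: MetadataBroker):
        self.source_system = source_system
        self.broker = broker

    def harvest(self, raw_products: List[dict]) -> None:
        records = [MetadataRecord(p["id"], self.source_system,
                                  p.get("description", ""), p["url"])
                   for p in raw_products]
        self.broker.update(records)

broker = MetadataBroker()
agent = ActiveAgent("mission-archive-A", broker)
agent.harvest([{"id": "A-001", "url": "https://example.org/A-001",
                "description": "Solar wind plasma moments"}])
print(len(broker.catalog), "records in the central catalog")
```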

  1. CMR Catalog Service for the Web

    NASA Technical Reports Server (NTRS)

    Newman, Doug; Mitchell, Andrew

    2016-01-01

    With the impending retirement of Global Change Master Directory (GCMD) Application Programming Interfaces (APIs) the Common Metadata Repository (CMR) was charged with providing a collection-level Catalog Service for the Web (CSW) that provided the same level of functionality as GCMD. This talk describes the capabilities of the CMR CSW API with particular reference to the support of the Committee on Earth Observation Satellites (CEOS) Working Group on Information Systems and Services (WGISS) Integrated Catalog (CWIC).
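
    A collection-level CSW such as the one described above is typically queried with an OGC CSW 2.0.2 GetRecords request. The sketch below shows such a request issued over HTTP; the endpoint URL is an assumption for illustration, and the request body follows generic CSW conventions rather than any CMR-specific extension.

```python
import requests

# Illustrative OGC CSW 2.0.2 GetRecords request to a collection-level catalog.
# The endpoint URL is assumed; adjust it to the actual service address.
CSW_ENDPOINT = "https://cmr.earthdata.nasa.gov/csw/collections"  # assumed

GETRECORDS = """<?xml version="1.0" encoding="UTF-8"?>
<csw:GetRecords xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
                service="CSW" version="2.0.2"
                resultType="results" maxRecords="5">
  <csw:Query typeNames="csw:Record">
    <csw:ElementSetName>brief</csw:ElementSetName>
  </csw:Query>
</csw:GetRecords>"""

resp = requests.post(CSW_ENDPOINT, data=GETRECORDS.encode("utf-8"),
                     headers={"Content-Type": "application/xml"}, timeout=60)
print(resp.status_code)
print(resp.text[:500])  # brief records for the first few collections
```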

  2. US/German Collaboration in Salt Repository Research, Design and Operation - 13243

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steininger, Walter; Hansen, Frank; Biurrun, Enrique

    2013-07-01

    Recent developments in the US and Germany [1-3] have precipitated renewed efforts in salt repository investigations and related studies. Both the German rock salt repository activities and the US waste management programs currently face challenges that may adversely affect their respective current and future state-of-the-art core capabilities in rock salt repository science and technology. The research agenda being pursued by our respective countries leverages collective efforts for the benefit of both programs. The topics addressed by the US/German salt repository collaborations align well with the findings and recommendations summarized in the January 2012 US Blue Ribbon Commission on America's Nuclear Future (BRC) report [4] and are consistent with the aspirations of the key topics of the Strategic Research Agenda of the Implementing Geological Disposal of Radioactive Waste Technology Platform (IGD-TP) [5]. Against this background, a revival of joint efforts in salt repository investigations after some years of hibernation has been undertaken to leverage collective efforts in salt repository research, design, operations, and related issues for the benefit of the respective programs and to form a basis for providing an attractive, cost-effective insurance against the premature loss of virtually irreplaceable scientific expertise and institutional memory. (authors)

  3. An Integrative Approach to Archival Outreach: A Case Study of Becoming Part of the Constituents' Community

    ERIC Educational Resources Information Center

    Rettig, Patricia J.

    2007-01-01

    Archival outreach, an essential activity for any repository, should focus on what constituents are already doing and capitalize on existing venues related to the repository's subject area. The Water Resources Archive at Colorado State University successfully undertook this integrative approach to outreach. Detailed in the article are outreach…

  4. A Framework for Integrating Oceanographic Data Repositories

    NASA Astrophysics Data System (ADS)

    Rozell, E.; Maffei, A. R.; Beaulieu, S. E.; Fox, P. A.

    2010-12-01

    Oceanographic research covers a broad range of science domains and requires a tremendous amount of cross-disciplinary collaboration. Advances in cyberinfrastructure are making it easier to share data across disciplines through the use of web services and community vocabularies. Best practices in the design of web services and vocabularies to support interoperability amongst science data repositories are only starting to emerge. Strategic design decisions in these areas are crucial to the creation of end-user data and application integration tools. We present S2S, a novel framework for deploying customizable user interfaces to support the search and analysis of data from multiple repositories. Our research methods follow the Semantic Web methodology and technology development process developed by Fox et al. This methodology stresses the importance of close scientist-technologist interactions when developing scientific use cases, keeping the project well scoped and ensuring the result meets a real scientific need. The S2S framework motivates the development of standardized web services with well-described parameters, as well as the integration of existing web services and applications in the search and analysis of data. S2S also encourages the use and development of community vocabularies and ontologies to support federated search and reduce the amount of domain expertise required in the data discovery process. S2S utilizes the Web Ontology Language (OWL) to describe the components of the framework, including web service parameters, and OpenSearch as a standard description for web services, particularly search services for oceanographic data repositories. We have created search services for an oceanographic metadata database, a large set of quality-controlled ocean profile measurements, and a biogeographic search service. S2S provides an application programming interface (API) that can be used to generate custom user interfaces, supporting data and application integration across these repositories and other web resources. Although initially targeted towards a general oceanographic audience, the S2S framework shows promise in many science domains, inspired in part by the broad disciplinary coverage of oceanography. This presentation will cover the challenges addressed by the S2S framework, the research methods used in its development, and the resulting architecture for the system. It will demonstrate how S2S is remarkably extensible, and can be generalized to many science domains. Given these characteristics, the framework can simplify the process of data discovery and analysis for the end user, and can help to shift the responsibility of search interface development away from data managers.
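
    The abstract describes S2S consuming OpenSearch descriptions of repository search services and OWL descriptions of their parameters. As a small sketch of the OpenSearch side of that design, the snippet below fetches a description document and fills in its URL template; the description URL is hypothetical and the placeholder handling is simplified for illustration.

```python
import urllib.parse
import xml.etree.ElementTree as ET
import requests

# Sketch of consuming an OpenSearch description document, as a framework like
# S2S does for repository search services. The service URL is hypothetical.
OSDD_URL = "https://example.org/ocean-repo/opensearch.xml"  # hypothetical service
NS = {"os": "http://a9.com/-/spec/opensearch/1.1/"}

def get_search_template(osdd_url):
    """Return the Atom search URL template declared in the description document."""
    doc = ET.fromstring(requests.get(osdd_url, timeout=30).content)
    for url_el in doc.findall("os:Url", NS):
        if url_el.get("type") == "application/atom+xml":
            return url_el.get("template")
    raise ValueError("no Atom search template found")

def search(template, **params):
    """Fill named placeholders such as {searchTerms} and issue the query."""
    for key, value in params.items():
        template = template.replace("{" + key + "}", urllib.parse.quote(str(value)))
    template = template.replace("{startIndex?}", "1")  # simplistic optional handling
    return requests.get(template, timeout=30).text

# Example: search(get_search_template(OSDD_URL), searchTerms="chlorophyll")
```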

  5. SATORI: a system for ontology-guided visual exploration of biomedical data repositories.

    PubMed

    Lekschas, Fritz; Gehlenborg, Nils

    2018-04-01

    The ever-increasing number of biomedical datasets provides tremendous opportunities for re-use but current data repositories provide limited means of exploration apart from text-based search. Ontological metadata annotations provide context by semantically relating datasets. Visualizing this rich network of relationships can improve the explorability of large data repositories and help researchers find datasets of interest. We developed SATORI, an integrative search and visual exploration interface for the exploration of biomedical data repositories. The design is informed by a requirements analysis through a series of semi-structured interviews. We evaluated the implementation of SATORI in a field study on a real-world data collection. SATORI enables researchers to seamlessly search, browse and semantically query data repositories via two visualizations that are highly interconnected with a powerful search interface. SATORI is an open-source web application, which is freely available at http://satori.refinery-platform.org and integrated into the Refinery Platform. nils@hms.harvard.edu. Supplementary data are available at Bioinformatics online.

  6. Geologic and geophysical characterization studies of Yucca Mountain, Nevada, a potential high-level radioactive-waste repository

    USGS Publications Warehouse

    Whitney, J.W.; Keefer, W.R.

    2000-01-01

    In recognition of a critical national need for permanent radioactive-waste storage, Yucca Mountain in southwestern Nevada has been investigated by Federal agencies since the 1970s as a potential geologic disposal site. In 1987, Congress selected Yucca Mountain for an expanded and more detailed site characterization effort. As an integral part of this program, the U.S. Geological Survey began a series of detailed geologic, geophysical, and related investigations designed to characterize the tectonic setting, fault behavior, and seismicity of the Yucca Mountain area. This document presents the results of 13 studies of the tectonic environment of Yucca Mountain, in support of a broad goal to assess the effects of future seismic and fault activity in the area on design, long-term performance, and safe operation of the potential surface and subsurface repository facilities.

  7. Office of Science and Technology & International Year End Report - 2005

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bodvarsson, G.S.

    2005-10-27

    This report covers the program areas of Source Term, Materials Performance, Radionuclide Getters, Natural Barriers, and Advanced Technologies; a brief introduction in each section describes the overall organization and goals of each program area. All of these areas have great potential for improving our understanding of the safety performance of the proposed Yucca Mountain repository, as processes within these areas are generally very conservatively represented in the Total System Performance Assessment. In addition, some of the technology thrust areas in particular may enhance system efficiency and reduce risk to workers. Thus, rather modest effort in the S&T Program could lead to large savings in the lifetime repository total cost and significantly enhanced understanding of the behavior of the proposed Yucca Mountain repository, without safety being compromised, and in some instances being enhanced. An overall strength of the S&T Program is the significant amount of integration that has already been achieved after two years of research. As an example (illustrated in Figure 1 of the report), our understanding of the behavior of the total waste isolation system has been enhanced through integration of the Source Term, Materials Performance, and Natural Barriers Thrust areas. All three thrust areas contribute to the integration of different processes in the in-drift environment. These processes include seepage into the drift, dust accumulation on the waste package, brine formation and precipitation on the waste package, mass transfer through the fuel cladding, changes in the seepage-water chemical composition, and transport of released radionuclides through the invert and natural barriers. During FY2005, each of our program areas assembled a team of external experts to conduct an independent review of their respective projects, research directions, and emphasis. In addition, the S&T Program as a whole was independently reviewed by the S&T Programmatic Evaluation Panel. As a result of these reviews, adjustments to the S&T Program will be implemented in FY2006 to ensure that the Program is properly aligned with OCRWM's priorities. Also during FY2005, several programmatic documents were published, including the Science and Technology Program Strategic Plan, the Science and Technology Program Management Plan, and the Science and Technology Program Plan. These and other communication products are available on the OCRWM web site under the Science and Technology section (http://www.ocrwm.doe.gov/osti/index.shtml).

  8. Extreme ground motions and Yucca Mountain

    USGS Publications Warehouse

    Hanks, Thomas C.; Abrahamson, Norman A.; Baker, Jack W.; Boore, David M.; Board, Mark; Brune, James N.; Cornell, C. Allin; Whitney, John W.

    2013-01-01

    Yucca Mountain is the designated site of the underground repository for the United States' high-level radioactive waste (HLW), consisting of commercial and military spent nuclear fuel, HLW derived from reprocessing of uranium and plutonium, surplus plutonium, and other nuclear-weapons materials. Yucca Mountain straddles the western boundary of the Nevada Test Site, where the United States has tested nuclear devices since the 1950s, and is situated in an arid, remote, and thinly populated region of Nevada, ~100 miles northwest of Las Vegas. Yucca Mountain was originally considered as a potential underground repository of HLW because of its thick units of unsaturated rocks, with the repository horizon being not only ~300 m above the water table but also ~300 m below the Yucca Mountain crest. The fundamental rationale for a geologic (underground) repository for HLW is to securely isolate these materials from the environment and its inhabitants to the greatest extent possible and for very long periods of time. Given the present climate conditions and what is known about the current hydrologic system and conditions around and in the mountain itself, one would anticipate that the rates of infiltration, corrosion, and transport would be very low, except for the possibility that repository integrity might be compromised by low-probability disruptive events, which include earthquakes, strong ground motion, and (or) a repository-piercing volcanic intrusion/eruption. Extreme ground motions (ExGM), as we use the phrase in this report, refer to the extremely large amplitudes of earthquake ground motion that arise at extremely low probabilities of exceedance (hazard). They first came to our attention when the 1998 probabilistic seismic hazard analysis for Yucca Mountain was extended to a hazard level of 10^-8/yr (a 10^-4/yr probability for a 10^4-year repository “lifetime”). The primary purpose of this report is to summarize the principal results of the ExGM research program as they have developed over the past 5 years; what follows will be focused on Yucca Mountain, but not restricted to it.

  9. Interoperability Across the Stewardship Spectrum in the DataONE Repository Federation

    NASA Astrophysics Data System (ADS)

    Jones, M. B.; Vieglais, D.; Wilson, B. E.

    2016-12-01

    Thousands of earth and environmental science repositories serve many researchers and communities, each with their own community and legal mandates, sustainability models, and historical infrastructure. These repositories span the stewardship spectrum from highly curated collections that employ large numbers of staff members to review and improve data, to small, minimal budget repositories that accept data caveat emptor and where all responsibility for quality lies with the submitter. Each repository fills a niche, providing services that meet the stewardship tradeoffs of one or more communities. We have reviewed these stewardship tradeoffs for several DataONE member repositories ranging from minimally (KNB) to highly curated (Arctic Data Center), as well as general purpose (Dryad) to highly discipline or project specific (NEON). The rationale behind different levels of stewardship reflect resolution of these tradeoffs. Some repositories aim to encourage extensive uptake by keeping processes simple and minimizing the amount of information collected, but this limits the long-term utility of the data and the search, discovery, and integration systems that are possible. Other repositories require extensive metadata input, review, and assessment, allowing for excellent preservation, discovery, and integration but at the cost of significant time for submitters and expense for curatorial staff. DataONE recognizes these different levels of curation, and attempts to embrace them to create a federation that is useful across the stewardship spectrum. DataONE provides a tiered model for repositories with growing utility of DataONE services at higher tiers of curation. The lowest tier supports read-only access to data and requires little more than title and contact metadata. Repositories can gradually phase in support for higher levels of metadata and services as needed. These tiered capabilities are possible through flexible support for multiple metadata standards and services, where repositories can incrementally increase their requirements as they want to satisfy more use cases. Within DataONE, metadata search services support minimal metadata models, but significantly expanded precision and recall become possible when repositories provide more extensively curated metadata.

  10. Development and Implementation of a Learning Object Repository for French Teaching and Learning: Issues and Promises

    ERIC Educational Resources Information Center

    Caws, Catherine

    2008-01-01

    This paper discusses issues surrounding the development of a learning object repository (FLORE) for teaching and learning French at the postsecondary level. An evaluation based on qualitative and quantitative data was set up in order to better assess how second-language (L2) students in French perceived the integration of this new repository into…

  11. The visualization and availability of experimental research data at Elsevier

    NASA Astrophysics Data System (ADS)

    Keall, Bethan

    2014-05-01

    In the digital age, the visualization and availability of experimental research data is an increasingly prominent aspect of the research process and of the scientific output that researchers generate. We expect that the importance of data will continue to grow, driven by technological advancements, requirements from funding bodies to make research data available, and a developing research data infrastructure that is supported by data repositories, science publishers, and other stakeholders. Elsevier is actively contributing to these efforts, for example by setting up bidirectional links between online articles on ScienceDirect and relevant data sets on trusted data repositories. A key aspect of Elsevier's "Article of the Future" program, these links enrich the online article and make it easier for researchers to find relevant data and articles and help place data in the right context for re-use. Recently, we have set up such links with some of the leading data repositories in Earth Sciences, including the British Geological Survey, Integrated Earth Data Applications, the UK Natural Environment Research Council, and the Oak Ridge National Laboratory DAAC. Building on these links, Elsevier has also developed a number of data integration and visualization tools, such as an interactive map viewer that displays the locations of relevant data from PANGAEA next to articles on ScienceDirect. In this presentation we will give an overview of these and other capabilities of the Article of the Future, focusing on how they help advance communication of research in the digital age.

  12. 76 FR 81950 - Privacy Act; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-29

    ... ``Consolidated Data Repository'' (09-90-1000). This system of records is being amended to include records... Repository'' (SORN 09-90-1000). OIG is adding record sources to the system. This system fulfills our..., and investigations of the Medicare and Medicaid programs. SYSTEM NAME: Consolidated Data Repository...

  13. Microsoft Repository Version 2 and the Open Information Model.

    ERIC Educational Resources Information Center

    Bernstein, Philip A.; Bergstraesser, Thomas; Carlson, Jason; Pal, Shankar; Sanders, Paul; Shutt, David

    1999-01-01

    Describes the programming interface and implementation of the repository engine and the Open Information Model for Microsoft Repository, an object-oriented meta-data management facility that ships in Microsoft Visual Studio and Microsoft SQL Server. Discusses Microsoft's component object model, object manipulation, queries, and information…

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harmon, K.M.; Lakey, L.T.; Leigh, I.W.

    Worldwide activities related to nuclear fuel cycle and radioactive waste management programs are summarized. Several trends have developed in waste management strategy: All countries having to dispose of reprocessing wastes plan on conversion of the high-level waste (HLW) stream to a borosilicate glass and eventual emplacement of the glass logs, suitably packaged, in a deep geologic repository. Countries that must deal with plutonium-contaminated waste emphasize plutonium recovery, volume reduction and fixation in cement or bitumen in their treatment plans and expect to use deep geologic repositories for final disposal. Commercially available, classical engineering processes are being used worldwide to treat and immobilize low- and intermediate-level wastes (LLW, ILW); disposal to surface structures, shallow-land burial and deep-underground repositories, such as played-out mines, is being done widely with no obvious technical problems. Many countries have established extensive programs to prepare for construction and operation of geologic repositories. Geologic media being studied fall into three main classes: argillites (clay or shale); crystalline rock (granite, basalt, gneiss or gabbro); and evaporites (salt formations). Most nations plan to allow 30 years or longer between discharge of fuel from the reactor and emplacement of HLW or spent fuel in a repository to permit thermal and radioactive decay. Most repository designs are based on the mined-gallery concept, placing waste or spent fuel packages into shallow holes in the floor of the gallery. Many countries have established extensive and costly programs of site evaluation, repository development and safety assessment. Two other waste management problems are the subject of major R and D programs in several countries: stabilization of uranium mill tailings piles; and immobilization or disposal of contaminated nuclear facilities, namely reactors, fuel cycle plants and R and D laboratories.

  15. Repository-Based Software Engineering Program: Working Program Management Plan

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Repository-Based Software Engineering Program (RBSE) is a National Aeronautics and Space Administration (NASA) sponsored program dedicated to introducing and supporting common, effective approaches to software engineering practices. The process of conceiving, designing, building, and maintaining software systems by using existing software assets that are stored in a specialized operational reuse library or repository, accessible to system designers, is the foundation of the program. In addition to operating a software repository, RBSE promotes (1) software engineering technology transfer, (2) academic and instructional support of reuse programs, (3) the use of common software engineering standards and practices, (4) software reuse technology research, and (5) interoperability between reuse libraries. This Program Management Plan (PMP) is intended to communicate program goals and objectives, describe major work areas, and define a management report and control process. This process will assist the Program Manager, University of Houston at Clear Lake (UHCL) in tracking work progress and describing major program activities to NASA management. The goal of this PMP is to make managing the RBSE program a relatively easy process that improves the work of all team members. The PMP describes work areas addressed and work efforts being accomplished by the program; however, it is not intended as a complete description of the program. Its focus is on providing management tools and management processes for monitoring, evaluating, and administering the program; and it includes schedules for charting milestones and deliveries of program products. The PMP was developed by soliciting and obtaining guidance from appropriate program participants, analyzing program management guidance, and reviewing related program management documents.

  16. Evolution of a Digital Repository: One Institution's Experience

    ERIC Educational Resources Information Center

    Owen, Terry M.

    2011-01-01

    In this article, the development of a digital repository is examined, specifically how the focus on acquiring content for the repository has transitioned from faculty-published research to include the gray literature produced by the research centers on campus, including unpublished technical reports and undergraduate research from honors programs.…

  17. Adapting a Clinical Data Repository to ICD-10-CM through the use of a Terminology Repository

    PubMed Central

    Cimino, James J.; Remennick, Lyubov

    2014-01-01

    Clinical data repositories frequently contain patient diagnoses coded with the International Classification of Diseases, Ninth Revision (ICD-9-CM). These repositories now need to accommodate data coded with the Tenth Revision (ICD-10-CM). Database users wish to retrieve relevant data regardless of the system by which they are coded. We demonstrate how a terminology repository (the Research Entities Dictionary or RED) serves as an ontology relating terms of both ICD versions to each other to support seamless version-independent retrieval from the Biomedical Translational Research Information System (BTRIS) at the National Institutes of Health. We make use of the Center for Medicare and Medicaid Services’ General Equivalence Mappings (GEMs) to reduce the modeling effort required to determine whether ICD-10-CM terms should be added to the RED as new concepts or as synonyms of existing concepts. A divide-and-conquer approach is used to develop integration heuristics that offer a satisfactory interim solution and facilitate additional refinement of the integration as time and resources allow. PMID:25954344
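
    The abstract describes using the CMS General Equivalence Mappings (GEMs) as a first pass before a terminologist decides whether an ICD-10-CM target should become a synonym or a new concept. The snippet below is a sketch of that first pass; the file name, whitespace-delimited column layout (source code, target code, flag string), and the interpretation of the first flag digit as an "approximate" marker are assumptions about the distributed GEM text files.

```python
from collections import defaultdict

# Sketch of proposing candidate ICD-10-CM codes for an ICD-9-CM code from a
# GEM file. File name, column layout, and flag semantics are assumptions.

def load_gem(path="I9gem.txt"):
    mapping = defaultdict(list)
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if len(parts) >= 3:
                source, target, flags = parts[0], parts[1], parts[2]
                mapping[source].append((target, flags))
    return mapping

gem = load_gem()
for target, flags in gem.get("25000", []):      # hypothetical ICD-9-CM code
    approximate = flags[:1] == "1"              # assumed: first digit marks approximate
    print(target, "(approximate)" if approximate else "(exact)")
```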

  18. iT2DMS: a Standard-Based Diabetic Disease Data Repository and its Pilot Experiment on Diabetic Retinopathy Phenotyping and Examination Results Integration.

    PubMed

    Wu, Huiqun; Wei, Yufang; Shang, Yujuan; Shi, Wei; Wang, Lei; Li, Jingjing; Sang, Aimin; Shi, Lili; Jiang, Kui; Dong, Jiancheng

    2018-06-06

    Type 2 diabetes mellitus (T2DM) is a common chronic disease, and the fragmented data collected through separate vendors make continuous management of DM patients difficult. The lack of standardization of these fragmented data also makes further phenotyping based on the diabetic data difficult. Traditional T2DM data repositories only support data collection from T2DM patients, lack phenotyping ability, and rely on standalone database designs, limiting the secondary usage of these valuable data. To solve these issues, we proposed a novel, standards-based T2DM data repository framework. This repository can integrate data from various sources and can be used as a standardized record for further data transfer as well as integration. Phenotyping was conducted based on clinical guidelines with a KNIME workflow. To evaluate the phenotyping performance of the proposed system, data were collected from the local community by healthcare providers and then tested using the algorithms. The results indicated that the proposed system could detect DR cases with an average accuracy of about 82.8%. Furthermore, these results show promise for addressing fragmented data. The proposed system has integration and phenotyping abilities, which could be used for diabetes research in future studies.
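
    The abstract says phenotyping was conducted as guideline-based rules in a KNIME workflow. The sketch below illustrates what a single rule-based phenotyping step of that kind can look like on integrated records; the field names, value sets, and the rule itself are hypothetical and are not the iT2DMS algorithms.

```python
# Conceptual sketch of a rule-based diabetic retinopathy (DR) phenotyping step.
# Field names and the rule are illustrative only, not the iT2DMS rules.

def flag_possible_dr(record: dict) -> bool:
    """Return True if a T2DM record matches a simple screening rule for DR."""
    has_t2dm = record.get("diagnosis") == "T2DM"
    retinal_findings = {"microaneurysms", "hemorrhages", "exudates"}
    has_retinal_finding = record.get("fundus_exam") in retinal_findings
    return has_t2dm and has_retinal_finding

patients = [
    {"id": 1, "diagnosis": "T2DM", "fundus_exam": "microaneurysms"},
    {"id": 2, "diagnosis": "T2DM", "fundus_exam": "normal"},
]
flagged = [p["id"] for p in patients if flag_possible_dr(p)]
print("Patients flagged for DR review:", flagged)
```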

  19. Shared Medical Imaging Repositories.

    PubMed

    Lebre, Rui; Bastião, Luís; Costa, Carlos

    2018-01-01

    This article describes the implementation of a solution for the integration of an ownership concept and access control over medical imaging resources, making possible the centralization of multiple instances of repositories. The proposed architecture allows the association of permissions with repository resources and the delegation of rights to third entities. It includes a programmatic interface for management of the proposed services, made available through web services, with the ability to create, read, update and remove all components resulting from the architecture. The resulting work is a role-based access control mechanism that was integrated with the Dicoogle Open-Source Project. The solution has several application scenarios, such as collaborative platforms for research and tele-radiology services deployed in the cloud.

  20. A Predictive Approach to Eliminating Errors in Software Code

    NASA Technical Reports Server (NTRS)

    2006-01-01

    NASA's Metrics Data Program Data Repository is a database that stores problem, product, and metrics data. The primary goal of this data repository is to provide project data to the software community. In doing so, the Metrics Data Program collects artifacts from a large NASA dataset, generates metrics on the artifacts, and then generates reports that are made available to the public at no cost. The data that are made available to general users have been sanitized and authorized for publication through the Metrics Data Program Web site by officials representing the projects from which the data originated. The data repository is operated by NASA's Independent Verification and Validation (IV&V) Facility, which is located in Fairmont, West Virginia, a high-tech hub for emerging innovation in the Mountain State. The IV&V Facility was founded in 1993, under the NASA Office of Safety and Mission Assurance, as a direct result of recommendations made by the National Research Council and the Report of the Presidential Commission on the Space Shuttle Challenger Accident. Today, under the direction of Goddard Space Flight Center, the IV&V Facility continues its mission to provide the highest achievable levels of safety and cost-effectiveness for mission-critical software. By extending its data to public users, the facility has helped improve the safety, reliability, and quality of complex software systems throughout private industry and other government agencies. Integrated Software Metrics, Inc., is one of the organizations that has benefited from studying the metrics data. As a result, the company has evolved into a leading developer of innovative software-error prediction tools that help organizations deliver better software, on time and on budget.

  1. Enhancing Ocean Research Data Access

    NASA Astrophysics Data System (ADS)

    Chandler, Cynthia; Groman, Robert; Shepherd, Adam; Allison, Molly; Arko, Robert; Chen, Yu; Fox, Peter; Glover, David; Hitzler, Pascal; Leadbetter, Adam; Narock, Thomas; West, Patrick; Wiebe, Peter

    2014-05-01

    The Biological and Chemical Oceanography Data Management Office (BCO-DMO) works in partnership with ocean science investigators to publish data from research projects funded by the Biological and Chemical Oceanography Sections and the Office of Polar Programs Antarctic Organisms & Ecosystems Program at the U.S. National Science Foundation. Since 2006, researchers have been contributing data to the BCO-DMO data system, and it has developed into a rich repository of data from ocean, coastal and Great Lakes research programs. While the ultimate goal of the BCO-DMO is to ensure preservation of NSF funded project data and to provide open access to those data, achievement of those goals is attained through a series of related phases that benefits from active collaboration and cooperation with a large community of research scientists as well as curators of data and information at complementary data repositories. The BCO-DMO is just one of many intermediate data management centers created to facilitate long-term preservation of data and improve access to ocean research data. Through partnerships with other data management professionals and active involvement in local and global initiatives, BCO-DMO staff members are working to enhance access to ocean research data available from the online BCO-DMO data system. Continuing efforts in use of controlled vocabulary terms, development of ontology design patterns and publication of content as Linked Open Data are contributing to improved discovery and availability of BCO-DMO curated data and increased interoperability of related content available from distributed repositories. We will demonstrate how Semantic Web technologies (e.g. RDF/XML, SKOS, OWL and SPARQL) have been integrated into BCO-DMO data access and delivery systems to better serve the ocean research community and to contribute to an expanding global knowledge network.
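
    The abstract mentions publishing content as Linked Open Data and integrating SPARQL into data access. The snippet below is a sketch of how such a Linked Open Data endpoint can be queried for datasets by title; the endpoint URL and the class and predicate IRIs are assumptions for illustration, not the published BCO-DMO vocabulary.

```python
import requests

# Sketch of querying a Linked Open Data SPARQL endpoint of the kind described.
# Endpoint URL and IRIs are assumptions, not the BCO-DMO vocabulary.
ENDPOINT = "https://lod.bco-dmo.org/sparql"   # assumed endpoint

QUERY = """
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?dataset ?title WHERE {
  ?dataset a <http://www.w3.org/ns/dcat#Dataset> ;
           dcterms:title ?title .
  FILTER(CONTAINS(LCASE(?title), "zooplankton"))
} LIMIT 10
"""

resp = requests.get(ENDPOINT, params={"query": QUERY},
                    headers={"Accept": "application/sparql-results+json"},
                    timeout=60)
for row in resp.json()["results"]["bindings"]:
    print(row["dataset"]["value"], "-", row["title"]["value"])
```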

  2. Semantic framework for mapping object-oriented model to semantic web languages

    PubMed Central

    Ježek, Petr; Mouček, Roman

    2015-01-01

    The article deals with and discusses two main approaches in building semantic structures for electrophysiological metadata. It is the use of conventional data structures, repositories, and programming languages on one hand and the use of formal representations of ontologies, known from knowledge representation, such as description logics or semantic web languages on the other hand. Although knowledge engineering offers languages supporting richer semantic means of expression and technological advanced approaches, conventional data structures and repositories are still popular among developers, administrators and users because of their simplicity, overall intelligibility, and lower demands on technical equipment. The choice of conventional data resources and repositories, however, raises the question of how and where to add semantics that cannot be naturally expressed using them. As one of the possible solutions, this semantics can be added into the structures of the programming language that accesses and processes the underlying data. To support this idea we introduced a software prototype that enables its users to add semantically richer expressions into a Java object-oriented code. This approach does not burden users with additional demands on programming environment since reflective Java annotations were used as an entry for these expressions. Moreover, additional semantics need not to be written by the programmer directly to the code, but it can be collected from non-programmers using a graphic user interface. The mapping that allows the transformation of the semantically enriched Java code into the Semantic Web language OWL was proposed and implemented in a library named the Semantic Framework. This approach was validated by the integration of the Semantic Framework in the EEG/ERP Portal and by the subsequent registration of the EEG/ERP Portal in the Neuroscience Information Framework. PMID:25762923

  3. Semantic framework for mapping object-oriented model to semantic web languages.

    PubMed

    Ježek, Petr; Mouček, Roman

    2015-01-01

    The article deals with and discusses two main approaches in building semantic structures for electrophysiological metadata. It is the use of conventional data structures, repositories, and programming languages on one hand and the use of formal representations of ontologies, known from knowledge representation, such as description logics or semantic web languages on the other hand. Although knowledge engineering offers languages supporting richer semantic means of expression and technological advanced approaches, conventional data structures and repositories are still popular among developers, administrators and users because of their simplicity, overall intelligibility, and lower demands on technical equipment. The choice of conventional data resources and repositories, however, raises the question of how and where to add semantics that cannot be naturally expressed using them. As one of the possible solutions, this semantics can be added into the structures of the programming language that accesses and processes the underlying data. To support this idea we introduced a software prototype that enables its users to add semantically richer expressions into a Java object-oriented code. This approach does not burden users with additional demands on programming environment since reflective Java annotations were used as an entry for these expressions. Moreover, additional semantics need not to be written by the programmer directly to the code, but it can be collected from non-programmers using a graphic user interface. The mapping that allows the transformation of the semantically enriched Java code into the Semantic Web language OWL was proposed and implemented in a library named the Semantic Framework. This approach was validated by the integration of the Semantic Framework in the EEG/ERP Portal and by the subsequent registration of the EEG/ERP Portal in the Neuroscience Information Framework.

  4. Monte Carlo simulations for generic granite repository studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chu, Shaoping; Lee, Joon H; Wang, Yifeng

    In a collaborative study between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL) for the DOE-NE Office of Fuel Cycle Technologies Used Fuel Disposition (UFD) Campaign project, we have conducted preliminary system-level analyses to support the development of a long-term strategy for geologic disposal of high-level radioactive waste. A general modeling framework consisting of a near- and a far-field submodel for a granite GDSE was developed. A representative far-field transport model for a generic granite repository was merged with an integrated systems (GoldSim) near-field model. Integrated Monte Carlo model runs with the combined near- and far-field transport models were performed, and the parameter sensitivities were evaluated for the combined system. In addition, a subset of radionuclides that are potentially important to repository performance were identified and evaluated for a series of model runs. The analyses were conducted with different waste inventory scenarios. Analyses were also conducted for different repository radionuclide release scenarios. While the results to date are for a generic granite repository, the work establishes the method to be used in the future to provide guidance on the development of strategy for long-term disposal of high-level radioactive waste in a granite repository.
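
    As a toy illustration of the Monte Carlo sampling approach the abstract describes, the snippet below draws uncertain transport parameters and summarizes a far-field travel-time metric. The parameter ranges and the simple travel-time expression are purely illustrative assumptions; they do not reproduce the GoldSim near-field or far-field models.

```python
import random
import statistics

# Toy Monte Carlo sketch in the spirit of the integrated runs described above.
# Distributions and the travel-time expression are illustrative assumptions.
random.seed(42)

def sample_travel_time_years():
    path_length_m = random.uniform(200.0, 1000.0)       # far-field path length
    velocity_m_per_yr = 10 ** random.uniform(-3, -1)    # groundwater velocity
    retardation = random.uniform(1.0, 50.0)             # sorption retardation factor
    return retardation * path_length_m / velocity_m_per_yr

samples = [sample_travel_time_years() for _ in range(10_000)]
print("median travel time (yr):", round(statistics.median(samples)))
print("5th percentile (yr):   ", round(sorted(samples)[len(samples) // 20]))
```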

  5. 75 FR 66110 - Guidelines for Use of Stored Specimens and Access to Ancillary Data and Proposed Cost Schedule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-27

    ... repository of datasets from completed studies, biospecimens, and ancillary data. The Division intends to make... Sharing Policy. The Division has established an internal committee, the Biospecimen Repository Access and Data Sharing Committee (BRADSC), to oversee the repository access and data sharing program. The purpose...

  6. 75 FR 73095 - Privacy Act of 1974; Report of New System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-29

    ... Repository'' System No. 09-70-0587. The final rule for the Medicare and Medicaid EHR Incentive Program... primary purpose of this system, called the National Level Repository or NLR, is to collect, maintain, and... Maintenance of Data in the System The National Level Repository (NLR) contains information on eligible...

  7. NA-42 TI Shared Software Component Library FY2011 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knudson, Christa K.; Rutz, Frederick C.; Dorow, Kevin E.

    The NA-42 TI program initiated an effort in FY2010 to standardize its software development efforts with the long term goal of migrating toward a software management approach that will allow for the sharing and reuse of code developed within the TI program, improve integration, ensure a level of software documentation, and reduce development costs. The Pacific Northwest National Laboratory (PNNL) has been tasked with two activities that support this mission. PNNL has been tasked with the identification, selection, and implementation of a Shared Software Component Library. The intent of the library is to provide a common repository that is accessible by all authorized NA-42 software development teams. The repository facilitates software reuse through a searchable and easy to use web based interface. As software is submitted to the repository, the component registration process captures meta-data and provides version control for compiled libraries, documentation, and source code. This meta-data is then available for retrieval and review as part of library search results. In FY2010, PNNL and staff from the Remote Sensing Laboratory (RSL) teamed up to develop a software application with the goal of replacing the aging Aerial Measuring System (AMS). The application under development includes an Advanced Visualization and Integration of Data (AVID) framework and associated AMS modules. Throughout development, PNNL and RSL have utilized a common AMS code repository for collaborative code development. The AMS repository is hosted by PNNL, is restricted to the project development team, is accessed via two different geographic locations and continues to be used. The knowledge gained from the collaboration and hosting of this repository, in conjunction with PNNL software development and systems engineering capabilities, was used in the selection of a package to be used in the implementation of the software component library on behalf of NA-42 TI. The second task managed by PNNL is the development and continued maintenance of the NA-42 TI Software Development Questionnaire. This questionnaire is intended to help software development teams working under NA-42 TI in documenting their development activities. When sufficiently completed, the questionnaire illustrates that the software development activities recorded incorporate significant aspects of the software engineering lifecycle. The questionnaire template is updated as comments are received from NA-42 and/or its development teams and revised versions distributed to those using the questionnaire. PNNL also maintains a list of questionnaire recipients. The blank questionnaire template, the AVID and AMS software being developed, and the completed AVID AMS specific questionnaire are being used as the initial content to be established in the TI Component Library. This report summarizes the approach taken to identify requirements, search for and evaluate technologies, and the approach taken for installation of the software needed to host the component library. Additionally, it defines the process by which users request access for the contribution and retrieval of library content.

  8. Java-based browsing, visualization and processing of heterogeneous medical data from remote repositories.

    PubMed

    Masseroli, M; Bonacina, S; Pinciroli, F

    2004-01-01

    Current developments in distributed information technologies and Java programming enable their use in the medical arena as well, to support the retrieval, integration and evaluation of heterogeneous data and multimodal images in a web browser environment. With this aim, we used them to implement a client-server architecture based on software agents. The client side is a Java applet running in a web browser and providing a friendly medical user interface to browse and visualize different patient and medical test data, integrating them properly. The server side manages secure connections and queries to heterogeneous remote databases and file systems containing patient personal and clinical data. Based on the Java Advanced Imaging API, processing and analysis tools were developed to support the evaluation of remotely retrieved bioimages through the quantification of their features in different regions of interest. The Java platform-independence allows the centralized management of the implemented prototype and its deployment to each site where an intranet or internet connection is available. Giving healthcare providers effective support for comprehensively browsing, visualizing and evaluating medical images and records located in different remote repositories, the developed prototype can represent an important aid in providing more efficient diagnoses and medical treatments.

  9. The Use of Underground Research Laboratories to Support Repository Development Programs. A Roadmap for the Underground Research Facilities Network.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacKinnon, Robert J.

    2015-10-26

    Under the auspices of the International Atomic Energy Agency (IAEA), nationally developed underground research laboratories (URLs) and associated research institutions are being offered for use by other nations. These facilities form an Underground Research Facilities (URF) Network for training in and demonstration of waste disposal technologies and the sharing of knowledge and experience related to geologic repository development, research, and engineering. In order to achieve its objectives, the URF Network regularly sponsors workshops and training events related to the knowledge base that is transferable between existing URL programs and to nations with an interest in developing a new URL. This report describes the role of URLs in the context of a general timeline for repository development. This description includes identification of key phases and activities that contribute to repository development as a repository program evolves from an early research and development phase to later phases such as construction, operations, and closure. This information is cast in the form of a matrix with the entries in this matrix forming the basis of the URF Network roadmap that will be used to identify and plan future workshops and training events.

  10. OWLing Clinical Data Repositories With the Ontology Web Language

    PubMed Central

    Pastor, Xavier; Lozano, Esther

    2014-01-01

    Background The health sciences are based upon information. Clinical information is usually stored and managed by physicians with precarious tools, such as spreadsheets. The biomedical domain is more complex than other domains that have adopted information and communication technologies as pervasive business tools. Moreover, medicine continuously changes its corpus of knowledge because of new discoveries and the rearrangements in the relationships among concepts. This scenario makes it especially difficult to offer good tools to answer the professional needs of researchers and constitutes a barrier that needs innovation to discover useful solutions. Objective The objective was to design and implement a framework for the development of clinical data repositories, capable of facing the continuous change in the biomedicine domain and minimizing the technical knowledge required from final users. Methods We combined knowledge management tools and methodologies with relational technology. We present an ontology-based approach that is flexible and efficient for dealing with complexity and change, integrated with a solid relational storage and a Web graphical user interface. Results Onto Clinical Research Forms (OntoCRF) is a framework for the definition, modeling, and instantiation of data repositories. It does not need any database design or programming. All required information to define a new project is explicitly stated in ontologies. Moreover, the user interface is built automatically on the fly as Web pages, whereas data are stored in a generic repository. This allows for immediate deployment and population of the database as well as instant online availability of any modification. Conclusions OntoCRF is a complete framework to build data repositories with a solid relational storage. Driven by ontologies, OntoCRF is more flexible and efficient to deal with complexity and change than traditional systems and does not require very skilled technical people facilitating the engineering of clinical software systems. PMID:25599697

  11. OWLing Clinical Data Repositories With the Ontology Web Language.

    PubMed

    Lozano-Rubí, Raimundo; Pastor, Xavier; Lozano, Esther

    2014-08-01

    The health sciences are based upon information. Clinical information is usually stored and managed by physicians with precarious tools, such as spreadsheets. The biomedical domain is more complex than other domains that have adopted information and communication technologies as pervasive business tools. Moreover, medicine continuously changes its corpus of knowledge because of new discoveries and the rearrangements in the relationships among concepts. This scenario makes it especially difficult to offer good tools to answer the professional needs of researchers and constitutes a barrier that needs innovation to discover useful solutions. The objective was to design and implement a framework for the development of clinical data repositories, capable of facing the continuous change in the biomedicine domain and minimizing the technical knowledge required from final users. We combined knowledge management tools and methodologies with relational technology. We present an ontology-based approach that is flexible and efficient for dealing with complexity and change, integrated with a solid relational storage and a Web graphical user interface. Onto Clinical Research Forms (OntoCRF) is a framework for the definition, modeling, and instantiation of data repositories. It does not need any database design or programming. All required information to define a new project is explicitly stated in ontologies. Moreover, the user interface is built automatically on the fly as Web pages, whereas data are stored in a generic repository. This allows for immediate deployment and population of the database as well as instant online availability of any modification. OntoCRF is a complete framework to build data repositories with a solid relational storage. Driven by ontologies, OntoCRF is more flexible and efficient to deal with complexity and change than traditional systems and does not require very skilled technical people facilitating the engineering of clinical software systems.

  12. Concept document of the repository-based software engineering program: A constructive appraisal

    NASA Technical Reports Server (NTRS)

    1992-01-01

    A constructive appraisal of the Concept Document of the Repository-Based Software Engineering Program is provided. The Concept Document is designed to provide an overview of the Repository-Based Software Engineering (RBSE) Program. The Document should be brief and provide the context for reading subsequent requirements and product specifications. That is, all requirements to be developed should be traceable to the Concept Document. Applied Expertise's analysis of the Document was directed toward assuring that: (1) the Executive Summary provides a clear, concise, and comprehensive overview of the Concept (rewrite as necessary); (2) the sections of the Document make best use of the NASA 'Data Item Description' for concept documents; (3) the information contained in the Document provides a foundation for subsequent requirements; and (4) the document adequately: identifies the problem being addressed; articulates RBSE's specific role; specifies the unique aspects of the program; and identifies the nature and extent of the program's users.

  13. Oceanotron, Scalable Server for Marine Observations

    NASA Astrophysics Data System (ADS)

    Loubrieu, T.; Bregent, S.; Blower, J. D.; Griffiths, G.

    2013-12-01

    Ifremer, the French marine institute, is deeply involved in data management for different ocean in-situ observation programs (ARGO, OceanSites, GOSUD, ...) and other European programs aiming at networking ocean in-situ observation data repositories (myOcean, seaDataNet, Emodnet). To capitalize on the effort of implementing advanced data dissemination services (visualization, download with subsetting) for these programs and, generally speaking, for water-column observation repositories, Ifremer decided to develop the oceanotron server (2010). Given the diversity of data repository formats (RDBMS, netCDF, ODV, ...) and the temperamental nature of the standard interoperability interface profiles (OGC/WMS, OGC/WFS, OGC/SOS, OpeNDAP, ...), the server is designed to manage plugins: StorageUnits, which read specific data repository formats (netCDF/OceanSites, RDBMS schema, ODV binary format), and FrontDesks, which receive external requests and return results for interoperable protocols (OGC/WMS, OGC/SOS, OpenDAP). In between, a third type of plugin may be inserted: TransformationUnits, which perform ocean-business-related transformations of the features (for example, conversion of vertical coordinates from pressure in dB to meters under the sea surface). The server is released under an open-source license so that partners can develop their own plugins. Within the MyOcean project, the University of Reading has plugged in a WMS implementation as an oceanotron frontdesk. The modules are connected together by sharing the same information model for marine observations (or sampling features: vertical profiles, point series and trajectories), dataset metadata and queries. The shared information model is based on the OGC/Observation & Measurement and Unidata/Common Data Model initiatives. The model is implemented in Java (http://www.ifremer.fr/isi/oceanotron/javadoc/). This inner interoperability level makes it possible to capitalize on ocean business expertise in software development without being tied to specific data formats or protocols. Oceanotron is deployed at seven European data centres for marine in-situ observations within myOcean. While additional extensions are still being developed, to promote new collaborative initiatives work is now being done on continuous and distributed integration (Jenkins, Maven), shared reference documentation (on Alfresco), and code and release dissemination (SourceForge, GitHub).
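
    The plugin separation the abstract describes, with StorageUnits reading repository-specific formats and FrontDesks exposing interoperable protocols around one shared observation model, can be sketched as below. Oceanotron itself is a Java server, so this is only a language-agnostic illustration; all class and method names are hypothetical.

```python
from dataclasses import dataclass
from typing import Iterable, List, Protocol

# Conceptual sketch of the StorageUnit/FrontDesk plugin split around a shared
# observation model. Names are illustrative, not the oceanotron Java API.

@dataclass
class VerticalProfile:                 # shared sampling-feature model
    platform: str
    latitude: float
    longitude: float
    depths_m: List[float]
    temperatures_c: List[float]

class StorageUnit(Protocol):
    def read_profiles(self) -> Iterable[VerticalProfile]: ...

class InMemoryStorageUnit:
    """Stand-in for a netCDF/OceanSites or RDBMS storage plugin."""
    def __init__(self, profiles: List[VerticalProfile]):
        self._profiles = profiles
    def read_profiles(self) -> Iterable[VerticalProfile]:
        return iter(self._profiles)

class CsvFrontDesk:
    """Stand-in for a protocol front desk; serializes the shared model."""
    def __init__(self, storage: StorageUnit):
        self.storage = storage
    def handle_request(self) -> str:
        lines = ["platform,lat,lon,depth_m,temp_c"]
        for p in self.storage.read_profiles():
            for d, t in zip(p.depths_m, p.temperatures_c):
                lines.append(f"{p.platform},{p.latitude},{p.longitude},{d},{t}")
        return "\n".join(lines)

desk = CsvFrontDesk(InMemoryStorageUnit(
    [VerticalProfile("argo-1900042", 47.5, -8.2, [0.0, 10.0], [14.8, 14.1])]))
print(desk.handle_request())
```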

  14. End of FY10 report - used fuel disposition technical bases and lessons learned : legal and regulatory framework for high-level waste disposition in the United States.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weiner, Ruth F.; Blink, James A.; Rechard, Robert Paul

    This report examines the current policy, legal, and regulatory framework pertaining to used nuclear fuel and high level waste management in the United States. The goal is to identify potential changes that, if made, could add flexibility and possibly improve the chances of successfully implementing technical aspects of a nuclear waste policy. Experience suggests that the regulatory framework should be established prior to initiating future repository development. Concerning specifics of the regulatory framework, reasonable expectation as the standard of proof was successfully implemented and could be retained in the future; yet, the current classification system for radioactive waste, including hazardous constituents, warrants reexamination. Whether or not multiple sites are considered simultaneously in the future, inclusion of mechanisms such as deliberate use of performance assessment to manage site characterization would be wise. Because of experience gained here and abroad, diversity of geologic media is not particularly necessary as a criterion in site selection guidelines for multiple sites. Stepwise development of the repository program that includes flexibility also warrants serious consideration. Furthermore, integration of the waste management system across storage, transportation, and disposition should be examined and would be facilitated by integration of the legal and regulatory framework. Finally, in order to enhance acceptability of future repository development, the national policy should be cognizant of those policy and technical attributes that enhance initial acceptance, and those policy and technical attributes that maintain and broaden credibility.

  15. A proposed application programming interface for a physical volume repository

    NASA Technical Reports Server (NTRS)

    Jones, Merritt; Williams, Joel; Wrenn, Richard

    1996-01-01

    The IEEE Storage System Standards Working Group (SSSWG) has developed the Reference Model for Open Storage Systems Interconnection, Mass Storage System Reference Model Version 5. This document provides the framework for a series of standards for application and user interfaces to open storage systems. More recently, the SSSWG has been developing Application Programming Interfaces (APIs) for the individual components defined by the model. The API for the Physical Volume Repository is the most fully developed, but work is also being done on APIs for the Physical Volume Library and for the Mover. The SSSWG meets every other month, and meetings are open to all interested parties. The Physical Volume Repository (PVR) is responsible for managing the storage of removable media cartridges and for mounting and dismounting these cartridges onto drives. This document describes a model which defines a Physical Volume Repository, and gives a brief summary of the Application Programming Interface (API) which the IEEE Storage Systems Standards Working Group (SSSWG) is proposing as the standard interface for the PVR.
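
    To make the division of responsibility concrete, the sketch below outlines the kind of operations such a PVR interface exposes. It is not the SSSWG API itself; the class and method names are invented for illustration and only mirror the responsibilities stated in the abstract (storing cartridges and mounting/dismounting them onto drives).

        from abc import ABC, abstractmethod


        class PhysicalVolumeRepository(ABC):
            """Illustrative (non-standard) interface for a removable-media repository."""

            @abstractmethod
            def import_cartridge(self, cartridge_id: str, slot: str) -> None:
                """Register a cartridge and the storage slot it occupies."""

            @abstractmethod
            def mount(self, cartridge_id: str, drive_id: str) -> None:
                """Move a cartridge from its slot onto a drive so data can be accessed."""

            @abstractmethod
            def dismount(self, cartridge_id: str, drive_id: str) -> None:
                """Return a mounted cartridge from the drive to its storage slot."""

            @abstractmethod
            def locate(self, cartridge_id: str) -> str:
                """Report whether the cartridge is in a slot or mounted on a drive."""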

  16. A repository based on a dynamically extensible data model supporting multidisciplinary research in neuroscience.

    PubMed

    Corradi, Luca; Porro, Ivan; Schenone, Andrea; Momeni, Parastoo; Ferrari, Raffaele; Nobili, Flavio; Ferrara, Michela; Arnulfo, Gabriele; Fato, Marco M

    2012-10-08

    Robust, extensible and distributed databases integrating clinical, imaging and molecular data represent a substantial challenge for modern neuroscience. It is even more difficult to provide extensible software environments able to effectively target the rapidly changing data requirements and structures of research experiments. There is an increasing demand from the neuroscience community for software tools addressing technical challenges such as: (i) supporting researchers in the medical field to carry out data analysis using integrated bioinformatics services and tools; (ii) handling multimodal/multiscale data and metadata, enabling the injection of several different data types according to structured schemas; (iii) providing high extensibility, in order to address different requirements deriving from a large variety of applications simply through a user runtime configuration. A dynamically extensible data structure supporting collaborative multidisciplinary research projects in neuroscience has been defined and implemented. We have considered extensibility issues from two different points of view. First, the improvement of data flexibility has been taken into account. This has been done through the development of a methodology for the dynamic creation and use of data types and related metadata, based on the definition of a "meta" data model. In this way, users are not constrained to a set of predefined data types, and the model is easily extensible and applicable to different contexts. Second, users have been enabled to easily customize and extend the experimental procedures in order to track each step of acquisition or analysis. This has been achieved through a process-event data structure, a multipurpose taxonomic schema composed of two generic main objects: events and processes. Then, a repository has been built based on such a data model and structure, and deployed on distributed resources thanks to a Grid-based approach. Finally, data integration aspects have been addressed by providing the repository application with an efficient dynamic interface designed to enable the user both to easily query the data depending on defined datatypes and to view all the data of every patient in an integrated and simple way. The results of our work have been twofold. First, a dynamically extensible data model has been implemented and tested based on a "meta" data model enabling users to define their own data types independently from the application context. This data model has allowed users to dynamically include additional data types without the need to rebuild the underlying database. Then a complex process-event data structure has been built, based on this data model, describing patient-centered diagnostic processes and merging information from data and metadata. Second, a repository implementing such a data structure has been deployed on a distributed Data Grid in order to provide scalability both in terms of data input and data storage and to exploit distributed data and computational approaches in order to share resources more efficiently. Moreover, data management has been made possible through a friendly web interface. The driving principle of not forcing users onto preconfigured data types has been satisfied. It is up to users to dynamically configure the data model for the given experiment or data acquisition program, thus making it potentially suitable for customized applications.
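
    As a rough illustration of the "meta" data-model and process-event ideas described above (user-defined data types registered at runtime, with processes grouping the events that track each acquisition or analysis step), consider the following sketch. All names are invented for the example and it does not reproduce the authors' schema or implementation.

        from dataclasses import dataclass, field
        from datetime import datetime
        from typing import Any, Dict, List


        @dataclass
        class DataType:
            """A user-defined type: a name plus the metadata fields it requires."""
            name: str
            fields: Dict[str, type]

            def validate(self, record: Dict[str, Any]) -> None:
                for key, expected in self.fields.items():
                    if not isinstance(record.get(key), expected):
                        raise ValueError(f"{self.name}: field '{key}' must be {expected.__name__}")


        @dataclass
        class Event:
            """A single acquisition or analysis step, tied to a registered data type."""
            datatype: DataType
            payload: Dict[str, Any]
            timestamp: datetime = field(default_factory=datetime.utcnow)


        @dataclass
        class Process:
            """A patient-centered diagnostic process grouping its events."""
            patient_id: str
            events: List[Event] = field(default_factory=list)

            def add_event(self, datatype: DataType, payload: Dict[str, Any]) -> None:
                datatype.validate(payload)
                self.events.append(Event(datatype, payload))


        # New data types can be registered at runtime, without rebuilding the database.
        eeg = DataType("EEG_acquisition", {"channels": int, "sampling_rate_hz": float})
        visit = Process(patient_id="P-001")
        visit.add_event(eeg, {"channels": 64, "sampling_rate_hz": 512.0})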

  17. A repository based on a dynamically extensible data model supporting multidisciplinary research in neuroscience

    PubMed Central

    2012-01-01

    Background Robust, extensible and distributed databases integrating clinical, imaging and molecular data represent a substantial challenge for modern neuroscience. It is even more difficult to provide extensible software environments able to effectively target the rapidly changing data requirements and structures of research experiments. There is an increasing demand from the neuroscience community for software tools addressing technical challenges such as: (i) supporting researchers in the medical field to carry out data analysis using integrated bioinformatics services and tools; (ii) handling multimodal/multiscale data and metadata, enabling the injection of several different data types according to structured schemas; (iii) providing high extensibility, in order to address different requirements deriving from a large variety of applications simply through a user runtime configuration. Methods A dynamically extensible data structure supporting collaborative multidisciplinary research projects in neuroscience has been defined and implemented. We have considered extensibility issues from two different points of view. First, the improvement of data flexibility has been taken into account. This has been done through the development of a methodology for the dynamic creation and use of data types and related metadata, based on the definition of a “meta” data model. In this way, users are not constrained to a set of predefined data types, and the model is easily extensible and applicable to different contexts. Second, users have been enabled to easily customize and extend the experimental procedures in order to track each step of acquisition or analysis. This has been achieved through a process-event data structure, a multipurpose taxonomic schema composed of two generic main objects: events and processes. Then, a repository has been built based on such a data model and structure, and deployed on distributed resources thanks to a Grid-based approach. Finally, data integration aspects have been addressed by providing the repository application with an efficient dynamic interface designed to enable the user both to easily query the data depending on defined datatypes and to view all the data of every patient in an integrated and simple way. Results The results of our work have been twofold. First, a dynamically extensible data model has been implemented and tested based on a “meta” data model enabling users to define their own data types independently from the application context. This data model has allowed users to dynamically include additional data types without the need to rebuild the underlying database. Then a complex process-event data structure has been built, based on this data model, describing patient-centered diagnostic processes and merging information from data and metadata. Second, a repository implementing such a data structure has been deployed on a distributed Data Grid in order to provide scalability both in terms of data input and data storage and to exploit distributed data and computational approaches in order to share resources more efficiently. Moreover, data management has been made possible through a friendly web interface. The driving principle of not forcing users onto preconfigured data types has been satisfied. It is up to users to dynamically configure the data model for the given experiment or data acquisition program, thus making it potentially suitable for customized applications.
    Conclusions Based on such a repository, data management has been made possible through a friendly web interface. The driving principle of not forcing users onto preconfigured data types has been satisfied. It is up to users to dynamically configure the data model for the given experiment or data acquisition program, thus making it potentially suitable for customized applications. PMID:23043673

  18. The Listening and Spoken Language Data Repository: Design and Project Overview

    ERIC Educational Resources Information Center

    Bradham, Tamala S.; Fonnesbeck, Christopher; Toll, Alice; Hecht, Barbara F.

    2018-01-01

    Purpose: The purpose of the Listening and Spoken Language Data Repository (LSL-DR) was to address a critical need for a systemwide outcome data-monitoring program for the development of listening and spoken language skills in highly specialized educational programs for children with hearing loss highlighted in Goal 3b of the 2007 Joint Committee…

  19. A metadata-driven approach to data repository design.

    PubMed

    Harvey, Matthew J; McLean, Andrew; Rzepa, Henry S

    2017-01-01

    The design and use of a metadata-driven data repository for research data management is described. Metadata is collected automatically during the submission process whenever possible and is registered with DataCite in accordance with their current metadata schema, in exchange for a persistent digital object identifier. Two examples of data preview are illustrated, including the demonstration of a method for integration with commercial software that confers rich domain-specific data analytics without introducing customisation into the repository itself.
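
    A minimal sketch of the kind of record that could be assembled automatically at submission time and exchanged for a persistent identifier. The field names below only follow the general shape of the DataCite metadata schema (titles, creators, publisher, publication year, resource type); the helper function and values are illustrative, not the repository's actual implementation or the full schema.

        import json


        def build_datacite_style_record(title, creators, publisher, year, resource_type):
            """Assemble a DataCite-style metadata record from values captured at submission."""
            return {
                "titles": [{"title": title}],
                "creators": [{"name": name} for name in creators],
                "publisher": publisher,
                "publicationYear": year,
                "types": {"resourceTypeGeneral": resource_type},
            }


        record = build_datacite_style_record(
            title="Crystal structure dataset (example)",
            creators=["Doe, Jane"],
            publisher="Example Institutional Repository",
            year=2017,
            resource_type="Dataset",
        )
        print(json.dumps(record, indent=2))  # payload that would accompany a DOI request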

  20. Academic Research Library as Broker in Addressing Interoperability Challenges for the Geosciences

    NASA Astrophysics Data System (ADS)

    Smith, P., II

    2015-12-01

    Data capture is an important process in the research lifecycle. Complete descriptive and representative information about the data or database is necessary during data collection, whether in the field or in the research lab. The National Science Foundation's (NSF) Public Access Plan (2015) mandates that federally funded projects make their research data more openly available. Developing, implementing, and integrating metadata workflows into the research process of the data lifecycle facilitates improved data access while also addressing interoperability challenges for the geosciences such as data description and representation. Lack of metadata or data curation can contribute to (1) semantic, (2) ontology, and (3) data integration issues within and across disciplinary domains and projects. Some researchers of EarthCube-funded projects have identified these issues as gaps. These gaps can contribute to interoperability issues in data access, discovery, and integration between domain-specific and general data repositories. Academic research libraries have expertise in providing long-term discovery and access through the use of metadata standards and provision of access to research data, datasets, and publications via institutional repositories. Metadata crosswalks, open archival information systems (OAIS), trusted repositories, the Data Seal of Approval, persistent URLs, and the linking of data, objects, resources, and publications in institutional repositories and digital content management systems are common components in the library discipline. These components contribute to a library perspective on data access and discovery that can benefit the geosciences. The USGS Community for Data Integration (CDI) has developed the Science Support Framework (SSF) for data management and integration within its community of practice for contribution to improved understanding of the Earth's physical and biological systems. The USGS CDI SSF can be used as a reference model to map to EarthCube-funded projects, with academic research libraries facilitating the data and information assets components of the USGS CDI SSF via institutional repositories and/or digital content management. This session will explore the USGS CDI SSF for cross-discipline collaboration considerations from a library perspective.
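
    As a small illustration of the metadata crosswalk component mentioned above, the mapping below sketches how a handful of domain-specific field names might be aligned with Dublin Core terms. The domain field names on the left are hypothetical; a real crosswalk would be driven by the repository's actual schemas.

        # Hypothetical domain fields (left) mapped onto Dublin Core terms (right).
        GEOSCIENCE_TO_DUBLIN_CORE = {
            "dataset_title": "dc:title",
            "principal_investigator": "dc:creator",
            "acquisition_date": "dc:date",
            "instrument_description": "dc:description",
            "spatial_coverage": "dc:coverage",
        }


        def crosswalk(record: dict, mapping: dict = GEOSCIENCE_TO_DUBLIN_CORE) -> dict:
            """Rename the fields of a domain record into the target schema's terms."""
            return {mapping[key]: value for key, value in record.items() if key in mapping}


        print(crosswalk({"dataset_title": "Basin gravity survey", "spatial_coverage": "42N 71W"}))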

  1. Biological Web Service Repositories Review

    PubMed Central

    Urdidiales‐Nieto, David; Navas‐Delgado, Ismael

    2016-01-01

    Abstract Web services play a key role in bioinformatics, enabling the integration of database access and analysis algorithms. However, Web service repositories do not usually publish information on the changes made to their registered Web services. Dynamism is directly related to the changes in the repositories (services registered or unregistered) and at service level (annotation changes). Thus, users, software clients or workflow-based approaches lack enough relevant information to decide when they should review or re-execute a Web service or workflow to get updated or improved results. The dynamism of the repository could be a measure for workflow developers to re-check service availability and annotation changes in the services of interest to them. This paper presents a review of the most well-known Web service repositories in the life sciences, including an analysis of their dynamism. Freshness is introduced in this paper, and has been used as the measure for the dynamism of these repositories. PMID:27783459

  2. Evolving the Living With a Star Data System Definition

    NASA Astrophysics Data System (ADS)

    Otranto, J. F.; Dijoseph, M.

    2003-12-01

    NASA's Living With a Star (LWS) Program is a space weather-focused and applications-driven research program. The LWS Program is soliciting input from the solar, space physics, space weather, and climate science communities to develop a system that enables access to science data associated with these disciplines, and advances the development of discipline and interdisciplinary findings. The LWS Program will implement a data system that builds upon the existing and planned data capture, processing, and storage components put in place by individual spacecraft missions and also inter-project data management systems, including active and deep archives, and multi-mission data repositories. It is technically feasible for the LWS Program to integrate data from a broad set of resources, assuming they are either publicly accessible or allow access by permission. The LWS Program data system will work in coordination with spacecraft mission data systems and science data repositories, integrating their holdings using a common metadata representation. This common representation relies on a robust metadata definition that provides journalistic and technical data descriptions, plus linkages to supporting data products and tools. The LWS Program intends to become an enabling resource to PIs, interdisciplinary scientists, researchers, and students, facilitating both access to a broad collection of science data and access to the necessary supporting components needed to understand and make productive use of these data. For the LWS Program to represent science data that are physically distributed across various ground system elements, information will be collected about these distributed data products through a series of LWS Program-created agents. These agents will be customized to interface or interact with each one of these data systems, collect information, and forward any new metadata records to an LWS Program-developed metadata library. A populated LWS metadata library will function as a single point of contact that serves the entire science community as a first stop for data availability, whether or not science data are physically stored in an LWS-operated repository. Further, this metadata library will provide the user access to information for understanding these data, including descriptions of the associated spacecraft and instrument, data format, calibration and operations issues, links to ancillary and correlative data products, links to processing tools and models associated with these data, and any corresponding findings produced using these data. The LWS Program may also support an active archive for solar, space physics, space weather, and climate data when these data would otherwise be discarded or archived off-line. This archive could potentially also serve as a data storage backup facility for LWS missions. The plan for the LWS Program metadata library is developed based upon input received from the solar and geospace science communities; the library's architecture is based on existing systems developed for serving science metadata. The LWS Program continues to seek constructive input from the science community, examples of both successes and failures in dealing with science data systems, and insights regarding the obstacles between the current state-of-the-practice and this vision for the LWS Program metadata library.
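
    The agent-and-library flow described above could look roughly like the following. The class names, record fields, and registration calls are all invented here to illustrate how customized agents might forward metadata records from distributed data systems into a central metadata library; they are not part of any actual LWS design.

        from dataclasses import dataclass
        from typing import Dict, List


        @dataclass
        class MetadataRecord:
            """Journalistic and technical description of one distributed data product."""
            product_id: str
            description: str
            links: Dict[str, str]  # e.g. {"calibration": "...", "processing_tool": "..."}


        class MetadataLibrary:
            """Single point of contact: holds records regardless of where the data live."""
            def __init__(self):
                self._records: Dict[str, MetadataRecord] = {}

            def register(self, record: MetadataRecord) -> None:
                self._records[record.product_id] = record

            def search(self, term: str) -> List[MetadataRecord]:
                return [r for r in self._records.values() if term.lower() in r.description.lower()]


        class Agent:
            """Customized per ground system; collects new product descriptions and forwards them."""
            def __init__(self, source_name: str, library: MetadataLibrary):
                self.source_name, self.library = source_name, library

            def harvest(self, new_products: List[MetadataRecord]) -> None:
                for record in new_products:
                    self.library.register(record)


        library = MetadataLibrary()
        agent = Agent("mission-X archive", library)
        agent.harvest([MetadataRecord("X-001", "Solar wind plasma moments", {"format": "CDF"})])
        print([r.product_id for r in library.search("solar wind")])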

  3. Developing the Tools for Geologic Repository Monitoring - Andra's Monitoring R and D Program - 12045

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buschaert, S.; Lesoille, S.; Bertrand, J.

    2012-07-01

    The French Safety Guide recommends that Andra develop a monitoring program to be implemented during repository construction and conducted until (and possibly after) closure, in order to confirm expected behavior and enhance knowledge of relevant processes. To achieve this, Andra has developed an overall monitoring strategy and identified specific technical objectives to inform disposal process management on evolutions relevant to both the long term safety and reversible, pre-closure management of the repository. Andra has launched an ambitious R and D program to ensure that reliable, durable, metrologically qualified and tested monitoring systems will be available at the time of repository construction in order to respond to monitoring objectives. After four years of a specific R and D program, first observations are described and recommendations are proposed. The results derived from 4 years of Andra's R and D program allow three main observations to be shared. First, while other industries also invest in monitoring equipment, their obvious emphasis will always be on their specific requirements and needs, thus often only providing a partial match with repository requirements. Examples can be found for all available sensors, which are generally not resistant to radiation. Second, the very close scrutiny anticipated for the geologic disposal process is likely to place an unprecedented emphasis on the quality of monitoring results. It therefore seems important to emphasize specific developments with an aim at providing metrologically qualified systems. Third, adapting existing technology to specific repository needs, and providing adequate proof of their worth, is a lengthy process. In conclusion, it therefore seems prudent to plan ahead and to invest wisely in the adequate development of those monitoring tools that will likely be needed in the repository to respond to the implementers' and regulators' requirements, including those agreed and developed to respond to potential stakeholder expectations. (authors)

  4. Transuranic inventory reduction in repository by partitioning and transmutation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, C.H.; Kazimi, M.S.

    1992-01-01

    The promise of a new reprocessing technology and the issuance of Environmental Protection Agency (EPA) and U.S. Nuclear Regulatory Commission regulations concerning a geologic repository rekindle the interest in partitioning and transmutation of transuranic (TRU) elements from discharged reactor fuel as a high level waste management option. This paper investigates the TRU repository inventory reduction capability of the proposed advanced liquid metal reactors (ALMRs) and integral fast reactors (IFRs) as well as the plutonium recycled light water reactors (LWRs).

  5. Integrated web-based viewing and secure remote access to a clinical data repository and diverse clinical systems.

    PubMed

    Duncan, R G; Saperia, D; Dulbandzhyan, R; Shabot, M M; Polaschek, J X; Jones, D T

    2001-01-01

    The advent of the World-Wide-Web protocols and client-server technology has made it easy to build low-cost, user-friendly, platform-independent graphical user interfaces to health information systems and to integrate the presentation of data from multiple systems. The authors describe a Web interface for a clinical data repository (CDR) that was moved from concept to production status in less than six months using a rapid prototyping approach, multi-disciplinary development team, and off-the-shelf hardware and software. The system has since been expanded to provide an integrated display of clinical data from nearly 20 disparate information systems.

  6. A Python library for FAIRer access and deposition to the Metabolomics Workbench Data Repository.

    PubMed

    Smelter, Andrey; Moseley, Hunter N B

    2018-01-01

    The Metabolomics Workbench Data Repository is a public repository of mass spectrometry and nuclear magnetic resonance data and metadata derived from a wide variety of metabolomics studies. The data and metadata for each study are deposited, stored, and accessed via files in the domain-specific 'mwTab' flat file format. In order to improve the accessibility, reusability, and interoperability of the data and metadata stored in 'mwTab' formatted files, we implemented a Python library and package. This Python package, named 'mwtab', is a parser for the domain-specific 'mwTab' flat file format, which provides facilities for reading, accessing, and writing 'mwTab' formatted files. Furthermore, the package provides facilities to validate both the format and required metadata elements of a given 'mwTab' formatted file. In order to develop the 'mwtab' package we used the official 'mwTab' format specification. We used Git version control along with the Python unit-testing framework, as well as a continuous integration service to run those tests on multiple versions of Python. Package documentation was developed using the Sphinx documentation generator. The 'mwtab' package provides both Python programmatic library interfaces and command-line interfaces for reading, writing, and validating 'mwTab' formatted files. Data and associated metadata are stored within Python dictionary- and list-based data structures, enabling straightforward, 'pythonic' access and manipulation of data and metadata. Also, the package provides facilities to convert 'mwTab' files into a JSON formatted equivalent, enabling easy reusability of the data by all modern programming languages that implement JSON parsers. The 'mwtab' package implements its metadata validation functionality based on a pre-defined JSON schema that can be easily specialized for specific types of metabolomics studies. The library also provides a command-line interface for interconversion between 'mwTab' and JSONized formats in raw text and a variety of compressed binary file formats. The 'mwtab' package is an easy-to-use Python package that provides FAIRer utilization of the Metabolomics Workbench Data Repository. The source code is freely available on GitHub and via the Python Package Index. Documentation includes a 'User Guide', 'Tutorial', and 'API Reference'. The GitHub repository also provides 'mwtab' package unit-tests via a continuous integration service.
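
    The sketch below shows the general pattern described above: a sectioned flat file parsed into dictionary- and list-based structures and then serialized as JSON. It deliberately uses a simplified, mwTab-like toy format and does not reproduce the real 'mwtab' API or the full 'mwTab' specification.

        import json


        def parse_flat_file(text: str) -> dict:
            """Toy reader for a simplified, mwTab-like flat file: '#SECTION' headers
            followed by tab-separated key/value lines. Not the real 'mwtab' parser."""
            data, section = {}, None
            for line in text.splitlines():
                line = line.rstrip()
                if not line:
                    continue
                if line.startswith("#"):
                    section = line.lstrip("#").strip()
                    data[section] = {}
                elif section is not None and "\t" in line:
                    key, value = line.split("\t", 1)
                    data[section][key] = value
            return data


        sample = "#METABOLOMICS WORKBENCH\nSTUDY_ID\tST000001\n#PROJECT\nPROJECT_TITLE\tExample study\n"
        print(json.dumps(parse_flat_file(sample), indent=2))  # JSONized equivalent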

  7. Repository-based software engineering program

    NASA Technical Reports Server (NTRS)

    Wilson, James

    1992-01-01

    The activities performed during September 1992 in support of Tasks 01 and 02 of the Repository-Based Software Engineering Program are outlined. The recommendations and implementation strategy defined at the September 9-10 meeting of the Reuse Acquisition Action Team (RAAT) are attached along with the viewgraphs and reference information presented at the Institute for Defense Analyses brief on legal and patent issues related to software reuse.

  8. Coupled Biological-Geomechanical-Geochemical Effects of the Disturbed Rock Zone on the Performance of the Waste Isolation Pilot Plant

    NASA Astrophysics Data System (ADS)

    Dunagan, S. C.; Herrick, C. G.; Lee, M. Y.

    2008-12-01

    The Waste Isolation Pilot Plant (WIPP) is located at a depth of 655 m in bedded salt in southeastern New Mexico and is operated by the U.S. Department of Energy as a deep underground disposal facility for transuranic (TRU) waste. The WIPP must comply with the EPA's environmental regulations that require a probabilistic risk analysis of releases of radionuclides due to inadvertent human intrusion into the repository at some time during the 10,000-year regulatory period. Sandia National Laboratories conducts performance assessments (PAs) of the WIPP using a system of computer codes representing the evolution of the underground repository and emplaced TRU waste in order to demonstrate compliance. One of the important features modeled in a PA is the disturbed rock zone (DRZ) surrounding the emplacement rooms in the repository. The extent and permeability of the DRZ play a significant role in the potential radionuclide release scenarios. We evaluated the phenomena occurring in the repository that affect the DRZ and their potential effects on the extent and permeability of the DRZ. Furthermore, we examined the DRZ's role in determining the performance of the repository. Pressure in the completely sealed repository will be increased by creep closure of the salt and degradation of TRU waste contents by microbial activity in the repository. An increased pressure in the repository will reduce the extent and permeability of the DRZ. The reduced DRZ extent and permeability will decrease the amount of brine that is available to interact with the waste. Furthermore, the potential for radionuclide release from the repository is dependent on the amount of brine that enters the repository. As a result of these coupled biological-geomechanical-geochemical phenomena, the extent and permeability of the DRZ have a significant impact on the potential radionuclide releases from the repository and, in turn, the repository performance. Sandia is a multi-program laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000. This research is funded by WIPP programs administered by the Office of Environmental Management (EM) of the U.S. Department of Energy.

  9. Iron-nickel alloys as canister material for radioactive waste disposal in underground repositories

    NASA Astrophysics Data System (ADS)

    Apps, J. A.

    1982-09-01

    Canisters containing high-level radioactive waste must retain their integrity in an underground waste repository for at least one thousand years after burial (Nuclear Regulatory Commission, 1981). Since no direct means of verifying canister integrity is plausible over such a long period, indirect methods must be chosen. A persuasive approach is to examine the natural environment and find a suitable material which is thermodynamically compatible with the host rock under the environmental conditions expected in a waste repository. Several candidates have been proposed, among them iron-nickel alloys that are known to occur naturally in altered ultramafic rocks. The following review of stability relations among iron-nickel alloys below 350 °C is the initial phase of a more detailed evaluation of these alloys as suitable canister materials.

  10. Cancer Epidemiology Data Repository (CEDR)

    Cancer.gov

    In an effort to broaden access and facilitate efficient data sharing, the Epidemiology and Genomics Research Program (EGRP) has created the Cancer Epidemiology Data Repository (CEDR), a centralized, controlled-access database, where Investigators can deposit individual-level de-identified observational cancer datasets.

  11. Biological Web Service Repositories Review.

    PubMed

    Urdidiales-Nieto, David; Navas-Delgado, Ismael; Aldana-Montes, José F

    2017-05-01

    Web services play a key role in bioinformatics, enabling the integration of database access and analysis algorithms. However, Web service repositories do not usually publish information on the changes made to their registered Web services. Dynamism is directly related to the changes in the repositories (services registered or unregistered) and at service level (annotation changes). Thus, users, software clients or workflow-based approaches lack enough relevant information to decide when they should review or re-execute a Web service or workflow to get updated or improved results. The dynamism of the repository could be a measure for workflow developers to re-check service availability and annotation changes in the services of interest to them. This paper presents a review of the most well-known Web service repositories in the life sciences, including an analysis of their dynamism. Freshness is introduced in this paper, and has been used as the measure for the dynamism of these repositories. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  12. Ontology Design Patterns: Bridging the Gap Between Local Semantic Use Cases and Large-Scale, Long-Term Data Integration

    NASA Astrophysics Data System (ADS)

    Shepherd, Adam; Arko, Robert; Krisnadhi, Adila; Hitzler, Pascal; Janowicz, Krzysztof; Chandler, Cyndy; Narock, Tom; Cheatham, Michelle; Schildhauer, Mark; Jones, Matt; Raymond, Lisa; Mickle, Audrey; Finin, Tim; Fils, Doug; Carbotte, Suzanne; Lehnert, Kerstin

    2015-04-01

    Integrating datasets for new use cases is one of the common drivers for adopting semantic web technologies. Even though linked data principles enable this type of activity over time, the task of reconciling new ontological commitments for newer use cases can be daunting. This situation was faced by the Biological and Chemical Oceanography Data Management Office (BCO-DMO) as it sought to integrate its existing linked data with other data repositories to address newer scientific use cases as a partner in the GeoLink Project. To achieve a successful integration with other GeoLink partners, BCO-DMO's metadata would need to be described using the new ontologies developed by the GeoLink partners - a situation that could impact semantic inferencing, pre-existing software, and external users of BCO-DMO's linked data. This presentation describes how GeoLink is bridging the gap between local, pre-existing ontologies and the new shared ontologies to achieve scientific metadata integration for all its partners through the use of ontology design patterns. GeoLink, an NSF EarthCube Building Block, brings together experts from the geosciences, computer science, and library science in an effort to improve discovery and reuse of data and knowledge. Its participating repositories include content from field expeditions, laboratory analyses, journal publications, conference presentations, theses/reports, and funding awards that span scientific studies from marine geology to marine ecology and biogeochemistry to paleoclimatology. GeoLink's outcomes include a set of reusable ontology design patterns (ODPs) that describe core geoscience concepts, a network of Linked Data published by participating repositories using those ODPs, and tools to facilitate discovery of related content in multiple repositories.
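
    To give a flavour of what publishing repository metadata against a shared pattern looks like in practice, here is a small example using the rdflib Python library. The 'gl' namespace IRI and the Cruise/hasParticipant terms are placeholders standing in for pattern terms; they are not the actual GeoLink ontology design patterns.

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import RDF, RDFS

        # Placeholder namespace standing in for a shared ontology design pattern.
        GL = Namespace("http://example.org/geolink-pattern#")

        g = Graph()
        g.bind("gl", GL)

        cruise = URIRef("http://example.org/repo/cruise/AT26-10")
        person = URIRef("http://example.org/repo/person/jane-doe")

        g.add((cruise, RDF.type, GL.Cruise))
        g.add((cruise, RDFS.label, Literal("Example research cruise")))
        g.add((cruise, GL.hasParticipant, person))
        g.add((person, RDFS.label, Literal("Jane Doe")))

        print(g.serialize(format="turtle"))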

  13. Electrical Resistance Tomography to Monitor Mitigation of Metal-Toxic Acid-Leachates Ruby Gulch Waste Rock Repository Gilt Edge Mine Superfund Site, South Dakota USA

    NASA Astrophysics Data System (ADS)

    Versteeg, R.; Heath, G.; Richardson, A.; Paul, D.; Wangerud, K.

    2003-12-01

    At a cyanide heap-leach open-pit mine, 15 million cubic yards of acid-generating sulfides were dumped at the head of a steep-walled mountain valley, with 30 inches/year of precipitation generating 60 gallons/minute of acid rock drainage (ARD) leachate. Remediation has reshaped the dump to a 70-acre, 3.5:1-sloped geometry, installed drainage benches and runoff diversions, and capped the repository and lined diversions with a polyethylene geomembrane and cover system. Monitoring was needed to evaluate (a) long-term geomembrane integrity, (b) diversion liner integrity and long-term effectiveness, (c) ARD geochemistry, kinetics and pore-gas dynamics within the repository mass, and (d) groundwater interactions. Observation wells were paired with a 600-electrode resistivity survey system. Using near-surface and down-hole electrodes and automated data collection and post-processing, periodic two- and three-dimensional resistivity images are developed to reflect current and changed conditions in moisture, temperature, geochemical components, and flow-direction analysis. Examination of total resistivity values and time variances between images allows direct observation of liner and cap integrity with precise identification and location of leaks; likewise, if runoff migrates from degraded diversion ditches into the repository zone, there is an accompanying and noticeable change in resistivity values. Used in combination with monitoring wells containing borehole resistivity electrodes (calibrated with direct sampling of dump water/moisture, temperature and pore-gas composition), the resistivity arrays allow at-depth imaging of geochemical conditions within the repository mass. The information provides early indications of progress or deficiencies in de-watering and ARD mitigation, which is the intent of the remedy. If emerging technologies present opportunities for secondary treatment, deep resistivity images may assist in developing application methods and evaluating the effectiveness of any reagents introduced into the repository mass to further effect changes in oxidation/reduction reactions.

  14. Working paper : the ITS cost data repository at Mitretek Systems

    DOT National Transportation Integrated Search

    1998-11-30

    Mitretek Systems has been tasked by the Intelligent Transportation Systems (ITS) Joint Program Office (JPO) to collect available information on ITS costs and maintain the information in a cost database, which serves as the ITS Cost Data Repository. T...

  15. Army Hearing Program Talking Points Calendar Year 2015

    DTIC Science & Technology

    2016-12-14

    Hearing thresholds outside the range of normal hearing sensitivity (greater than 25 dB), CY15 data. Data: DOEHRS-HC Data Repository, Soldiers who had a DD2215 or... Data: Defense Occupational and Environmental Health Readiness System-Hearing Conservation (DOEHRS-HC) Data Repository, CY15—Army Profile... Soldiers have a hearing loss that required a fit-for-duty (Readiness) evaluation: an H-3 Hearing Profile. Data: DOEHRS-HC Data Repository.

  16. Smoothing Data Friction through building Service Oriented Data Platforms

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.; Richards, C. J.; Evans, B. J. K.; Wang, J.; Druken, K. A.

    2017-12-01

    Data friction has been commonly defined as the costs in time, energy and attention required to simply collect, check, store, move, receive, and access data. On average, researchers spend a significant fraction of their time finding the data for their research project and then reformatting it so that it can be used by the software application of their choice. There is an increasing role for both data repositories and software to be modernised to help reduce data friction in ways that support the better use of the data. Many generic data repositories simply accept data in the format as supplied: the key check is that the data have sufficient metadata to enable discovery and download. Few generic repositories have both the expertise and infrastructure to support the multiple domain-specific requirements that facilitate the increasing need for integration and reusability. In contrast, major science domain-focused repositories are increasingly able to implement and enforce community-endorsed best practices and guidelines that ensure reusability and harmonization of data for use within the community by offering semi-automated QC workflows to improve the quality of submitted data. The most advanced of these science repositories now operate as service-oriented data platforms that extend the use of data across domain silos and increasingly provide server-side, programmatically enabled access to data via network protocols and community-standard APIs. To provide this, more rigorous QA/QC procedures are needed to validate data against standards and community software and tools. This ensures that the data can be accessed in expected ways and also demonstrates that the data works across different (non-domain-specific) packages, tools and programming languages deployed by the various user communities. In Australia, the National Computational Infrastructure (NCI) has created such a service-oriented data platform which demonstrates how this approach can reduce data friction, serving individual domains as well as facilitating cross-domain collaboration. The approach has required increased effort from the repository to provide the additional expertise, but it enables a more capable and efficient system which ultimately saves time for the individual researcher.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huff, Kathryn D.

    Component level and system level abstraction of detailed computational geologic repository models have resulted in four rapid computational models of hydrologic radionuclide transport at varying levels of detail. Those models are described, as is their implementation in Cyder, a software library of interchangeable radionuclide transport models appropriate for representing natural and engineered barrier components of generic geologic repository concepts. A proof of principle demonstration was also conducted in which these models were used to represent the natural and engineered barrier components of a repository concept in a reducing, homogeneous, generic geology. This base case demonstrates integration of the Cyder open source library with the Cyclus computational fuel cycle systems analysis platform to facilitate calculation of repository performance metrics with respect to fuel cycle choices. (authors)
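
    A rough sketch of the interchangeable-component idea follows, in Python rather than the C++ of the Cyclus ecosystem, with every name invented for the example: each barrier component exposes the same transport interface, so models of differing fidelity can be swapped without changing how the repository is composed. It is not the Cyder API.

        from typing import List


        class BarrierComponent:
            """Common interface: given an incoming contaminant flux, return the flux
            that escapes this barrier over one time step (reduced here to a scalar)."""
            def transport(self, influx: float) -> float:
                raise NotImplementedError


        class WasteForm(BarrierComponent):
            """Congruent-release style model: a fixed fraction is released per step."""
            def __init__(self, release_fraction: float):
                self.release_fraction = release_fraction

            def transport(self, influx: float) -> float:
                return influx * self.release_fraction


        class FarFieldRock(BarrierComponent):
            """Crude attenuation model standing in for hydrologic transport."""
            def __init__(self, attenuation: float):
                self.attenuation = attenuation

            def transport(self, influx: float) -> float:
                return influx * (1.0 - self.attenuation)


        def release_to_biosphere(inventory: float, barriers: List[BarrierComponent]) -> float:
            """Chain the barriers; swapping a component changes fidelity, not the wiring."""
            flux = inventory
            for barrier in barriers:
                flux = barrier.transport(flux)
            return flux


        print(release_to_biosphere(1.0, [WasteForm(0.01), FarFieldRock(0.95)]))  # 0.0005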

  18. Data Collection, Collaboration, Analysis, and Publication Using the Open Data Repository's (ODR) Data Publisher

    NASA Astrophysics Data System (ADS)

    Lafuente, B.; Stone, N.; Bristow, T.; Keller, R. M.; Blake, D. F.; Downs, R. T.; Pires, A.; Dateo, C. E.; Fonda, M.

    2017-12-01

    In development for nearly four years, the Open Data Repository's (ODR) Data Publisher software has become a useful tool for researchers' data needs. Data Publisher facilitates the creation of customized databases with flexible permission sets that allow researchers to share data collaboratively while improving data discovery and maintaining ownership rights. The open source software provides an end-to-end solution from collection to final repository publication. A web-based interface allows researchers to enter data, view data, and conduct analysis using any programming language supported by JupyterHub (http://www.jupyterhub.org). This toolset makes it possible for a researcher to store and manipulate their data in the cloud from any internet-capable device. Data can be embargoed in the system until a date selected by the researcher. For instance, open publication can be set to a date that coincides with publication of data analysis in a third-party journal. A number of pilot studies are being conducted in conjunction with teams at NASA Ames and the University of Arizona to guide the software development so that it allows them to publish and share their data. These pilots include (1) the Astrobiology Habitable Environments Database (AHED), a central searchable repository designed to promote and facilitate the integration and sharing of all the data generated by the diverse disciplines in astrobiology; (2) a database containing the raw and derived data products from the CheMin instrument on the MSL rover Curiosity (http://odr.io/CheMin), featuring a versatile graphing system, instructions and analytical tools to process the data, and a capability to download data in different formats; and (3) the Mineral Evolution project, which, by correlating the diversity of mineral species with their ages, localities, and other measurable properties, aims to understand how the episodes of planetary accretion and differentiation, plate tectonics, and the origin of life led to a selective evolution of mineral species through changes in temperature, pressure, and composition. Ongoing development will complete integration of third-party metadata standards and publication of data to the semantic web. This project is supported by the Science-Enabling Research Activity (SERA) and NASA NNX11AP82A, MSL.

  19. Good Data Can Be Better Data - How Data Management Maturity Can Help Repositories Improve Operations, Data Quality, And Usability, Helping Researchers

    NASA Astrophysics Data System (ADS)

    Stall, S.

    2015-12-01

    Much earth and space science data and metadata are managed and supported by an infrastructure of repositories, ranging from large agency or instrument facilities, to institutions, to smaller repositories including labs. Scientists face many challenges in this ecosystem, both in storing their data and in accessing data from others for new research. Critical for all uses is ensuring the credibility and integrity of the data and conveying that information, along with provenance, now and in the future. Accurate information is essential for future researchers to find (or discover) the data, evaluate the data for use (content, temporal, geolocation, precision) and finally select (or discard) that data as meeting a "fit-for-purpose" criterion. We also need to optimize the effort it takes to describe the data for these determinations, which means making it efficient for the researchers who collect the data. At AGU we are developing a program aimed at helping repositories, and thereby researchers, improve data quality and data usability toward these goals. AGU has partnered with the CMMI Institute to develop their Data Management Maturity (DMM) framework within the Earth and space sciences. The CMMI DMM framework guides best practices in a range of data operations, and the application of the DMM, through an assessment, reveals how repositories and institutions can best optimize efforts to improve operations and functionality throughout the data lifecycle and elevate best practices across a variety of data management operations. Supporting processes like data operations, data governance, and data architecture are included. An assessment involves identifying accomplishments and weaknesses compared to leading practices for data management. Broad application of the DMM can help improve quality in data and operations, and consistency across the community that will facilitate interoperability, discovery, preservation, and reuse. Good data can be better data. Consistency results in sustainability.

  20. Integration and Cooperation in the Next Golden Age of Human Space Flight Data Repositories: Tools for Retrospective Analysis and Future Planning

    NASA Technical Reports Server (NTRS)

    Thomas, D.; Fitts, M.; Wear, M.; VanBaalen, M.

    2011-01-01

    As NASA transitions from the Space Shuttle era into the next phase of space exploration, the need to ensure the capture, analysis, and application of its research and medical data is of greater urgency than at any other previous time. In this era of limited resources and challenging schedules, the Human Research Program (HRP) based at NASA's Johnson Space Center (JSC) recognizes the need to extract the greatest possible amount of information from the data already captured, as well as focus current and future research funding on addressing the HRP goal to provide human health and performance countermeasures, knowledge, technologies, and tools to enable safe, reliable, and productive human space exploration. To this end, the Science Management Office and the Medical Informatics and Health Care Systems Branch within the HRP and the Space Medicine Division have been working to make both research data and clinical data more accessible to the user community. The Life Sciences Data Archive (LSDA), the research repository housing data and information regarding the physiologic effects of microgravity, and the Lifetime Surveillance of Astronaut Health Repository (LSAH-R), the clinical repository housing astronaut data, have joined forces to achieve this goal. The task of both repositories is to acquire, preserve, and distribute data and information both within the NASA community and to the science community at large. This is accomplished via the LSDA's public website (http://lsda.jsc.nasa.gov), which allows access to experiment descriptions including hardware, datasets, key personnel, and mission descriptions, and provides a mechanism for researchers to request additional data, research and clinical, that is not accessible from the public website. This will result in making the work of NASA and its partners available to the wider sciences community, both domestic and international. The desired outcome is the use of these data for knowledge discovery, retrospective analysis, and planning of future research studies.

  1. Metrology for decommissioning nuclear facilities: Partial outcomes of joint research project within the European Metrology Research Program.

    PubMed

    Suran, Jiri; Kovar, Petr; Smoldasova, Jana; Solc, Jaroslav; Van Ammel, Raf; Garcia Miranda, Maria; Russell, Ben; Arnold, Dirk; Zapata-García, Daniel; Boden, Sven; Rogiers, Bart; Sand, Johan; Peräjärvi, Kari; Holm, Philip; Hay, Bruno; Failleau, Guillaume; Plumeri, Stephane; Laurent Beck, Yves; Grisa, Tomas

    2018-04-01

    Decommissioning of nuclear facilities incurs high costs regarding the accurate characterisation and correct disposal of the decommissioned materials. Therefore, there is a need for the implementation of new and traceable measurement technologies to select the appropriate release or disposal route of radioactive wastes. This paper addresses some of the innovative outcomes of the project "Metrology for Decommissioning Nuclear Facilities" related to mapping of contamination inside nuclear facilities, waste clearance measurement, Raman distributed temperature sensing for long term repository integrity monitoring and validation of radiochemical procedures. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Scoping review and evaluation of SMS/text messaging platforms for mHealth projects or clinical interventions.

    PubMed

    Iribarren, Sarah J; Brown, William; Giguere, Rebecca; Stone, Patricia; Schnall, Rebecca; Staggers, Nancy; Carballo-Diéguez, Alex

    2017-05-01

    Mobile technology supporting text messaging interventions (TMIs) continues to evolve, presenting challenges for researchers and healthcare professionals who need to choose software solutions to best meet their program needs. The objective of this review was to systematically identify and compare text messaging platforms and to summarize their advantages and disadvantages as described in peer-reviewed literature. A scoping review was conducted using four steps: 1) identify currently available platforms through online searches and in mHealth repositories; 2) expand evaluation criteria of an mHealth mobile messaging toolkit and integrate prior user experiences as researchers; 3) evaluate each platform's functions and features based on the expanded criteria and a vendor survey; and 4) assess the documentation of platform use in the peer-review literature. Platforms meeting inclusion criteria were assessed independently by three reviewers and discussed until consensus was reached. The PRISMA guidelines were followed to report findings. Of the 1041 potentially relevant search results, 27 platforms met inclusion criteria. Most were excluded because they were not platforms (e.g., guides, toolkits, reports, or SMS gateways). Of the 27 platforms, only 12 were identified in existing mHealth repositories, 10 from Google searches, while five were found in both. The expanded evaluation criteria included 22 items. Results indicate no uniform presentation of platform features and functions, often making these difficult to discern. Fourteen of the platforms were reported as open source, 10 focused on health care and 16 were tailored to meet needs of low resource settings (not mutually exclusive). Fifteen platforms had do-it-yourself setup (programming not required) while the remainder required coding/programming skills or setups could be built to specification by the vendor. Frequently described features included data security and access to the platform via cloud-based systems. Pay structures and reported targeted end-users varied. Peer-reviewed publications listed only 6 of the 27 platforms across 21 publications. The majority of these articles reported the name of the platform used but did not describe advantages or disadvantages. Searching for and comparing mHealth platforms for TMIs remains a challenge. The results of this review can serve as a resource for researchers and healthcare professionals wanting to integrate TMIs into health interventions. Steps to identify, compare and assess advantages and disadvantages are outlined for consideration. Expanded evaluation criteria can be used by future researchers. Continued and more comprehensive platform tools should be integrated into mHealth repositories. Detailed descriptions of platform advantages and disadvantages are needed when mHealth researchers publish findings to expand the body of research on TMI tools for healthcare. Standardized descriptions and features are recommended for vendor sites. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Entrez Neuron RDFa: a pragmatic semantic web application for data integration in neuroscience research.

    PubMed

    Samwald, Matthias; Lim, Ernest; Masiar, Peter; Marenco, Luis; Chen, Huajun; Morse, Thomas; Mutalik, Pradeep; Shepherd, Gordon; Miller, Perry; Cheung, Kei-Hoi

    2009-01-01

    The amount of biomedical data available in Semantic Web formats has been rapidly growing in recent years. While these formats are machine-friendly, user-friendly web interfaces allowing easy querying of these data are typically lacking. We present "Entrez Neuron", a pilot neuron-centric interface that allows for keyword-based queries against a coherent repository of OWL ontologies. These ontologies describe neuronal structures, physiology, mathematical models and microscopy images. The returned query results are organized hierarchically according to brain architecture. Where possible, the application makes use of entities from the Open Biomedical Ontologies (OBO) and the 'HCLS knowledgebase' developed by the W3C Interest Group for Health Care and Life Science. It makes use of the emerging RDFa standard to embed ontology fragments and semantic annotations within its HTML-based user interface. The application and underlying ontologies demonstrate how Semantic Web technologies can be used for information integration within a curated information repository and between curated information repositories. It also demonstrates how information integration can be accomplished on the client side, through simple copying and pasting of portions of documents that contain RDFa markup.

  4. The Apache OODT Project: An Introduction

    NASA Astrophysics Data System (ADS)

    Mattmann, C. A.; Crichton, D. J.; Hughes, J. S.; Ramirez, P.; Goodale, C. E.; Hart, A. F.

    2012-12-01

    Apache OODT is a science data system framework, developed over the past decade with hundreds of FTEs of investment, tens of sponsoring agencies (NASA, NIH/NCI, DoD, NSF, universities, etc.), and hundreds of projects and science missions that it powers every day to their success. At its core, Apache OODT carries with it two fundamental classes of software services and components: those that deal with information integration from existing science data repositories and archives, which themselves have already-in-use business processes and models for populating those archives. Information integration allows search, retrieval, and dissemination across these heterogeneous systems, and ultimately rapid, interactive data access and retrieval. The other suite of services and components within Apache OODT handles population and processing of those data repositories and archives. Workflows, resource management, crawling, remote data retrieval, curation and ingestion, along with science data algorithm integration, are all part of these Apache OODT software elements. In this talk, I will provide an overview of the use of Apache OODT to unlock and populate information from science data repositories and archives. We'll cover the basics, along with some advanced use cases and success stories.

  5. Integrating repositories with fuel cycles: The airport authority model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forsberg, C.

    2012-07-01

    The organization of the fuel cycle is a legacy of World War II and the cold war. Fuel cycle facilities were developed and deployed without consideration of the waste management implications. This led to the fuel cycle model of a geological repository site with a single owner, a single function (disposal), and no other facilities on site. Recent studies indicate large economic, safety, repository performance, nonproliferation, and institutional incentives to collocate and integrate all back-end facilities. Site functions could include geological disposal of spent nuclear fuel (SNF) with the option for future retrievability, disposal of other wastes, reprocessing with fuel fabrication, radioisotope production, other facilities that generate significant radioactive wastes, SNF inspection (navy and commercial), and related services such as SNF safeguards equipment testing and training. This implies a site with multiple facilities with different owners sharing some facilities and using common facilities - the repository and SNF receiving. This requires a different repository site institutional structure. We propose development of repository site authorities modeled after airport authorities. Airport authorities manage airports with government-owned runways, collocated or shared public and private airline terminals, commercial and federal military facilities, aircraft maintenance bases, and related operations - all enabled by and benefiting from the high-value runway asset and access to it via taxiways. With a repository site authority, the high-value asset is the repository. The SNF and HLW receiving and storage facilities (equivalent to the airport terminal) serve the repository, any future reprocessing plants, and others with needs for access to SNF and other wastes. Non-public, special-built roadways and on-site rail lines (equivalent to taxiways) connect facilities. Airport authorities are typically chartered by state governments and managed by commissions with members appointed by the state governor, county governments, and city governments. This structure (1) enables state and local governments to work together to maximize job and tax benefits to local communities and the state, (2) provides a mechanism to address local concerns such as airport noise, and (3) creates an institutional structure with large incentives to maximize the value of the common asset, the runway. A repository site authority would have a similar structure and be the local interface to any national waste management authority. (authors)

  6. Core Certification of Data Repositories: Trustworthiness and Long-Term Stewardship

    NASA Astrophysics Data System (ADS)

    de Sherbinin, A. M.; Mokrane, M.; Hugo, W.; Sorvari, S.; Harrison, S.

    2017-12-01

    Scientific integrity and norms dictate that data created and used by scientists should be managed, curated, and archived in trustworthy data repositories, thus ensuring that science is verifiable and reproducible while preserving the initial investment in collecting data. Research stakeholders, including researchers, science funders, librarians, and publishers, must also be able to establish the trustworthiness of the data repositories they use, to confirm that the data they submit and use remain useful and meaningful in the long term. Data repositories are increasingly recognized as a key element of the global research infrastructure, and establishing their trustworthiness is recognized as a prerequisite for efficient scientific research and data sharing. The Core Trustworthy Data Repository Requirements are a set of universal requirements for certification of data repositories at the core level (see: https://goo.gl/PYsygW). They were developed by the ICSU World Data System (WDS: www.icsu-wds.org) and the Data Seal of Approval (DSA: www.datasealofapproval.org), the two authoritative organizations responsible for developing and implementing this standard, which will be further developed under the CoreTrustSeal branding. CoreTrustSeal certification of data repositories involves a minimally intensive process whereby repositories supply evidence that they are sustainable and trustworthy. Repositories conduct a self-assessment, which is then reviewed by community peers. Based on this review, CoreTrustSeal certification is granted by the CoreTrustSeal Standards and Certification Board. Certification helps data communities (producers, repositories, and consumers) to improve the quality and transparency of their processes, and to increase awareness of and compliance with established standards. This presentation will introduce the CoreTrustSeal certification requirements for repositories and offer an opportunity to discuss ways to improve the contribution of certified data repositories to sustaining open data for open scientific research.

  7. Assessment of Effectiveness of Geologic Isolation Systems: REFERENCE SITE INITIAL ASSESSMENT FOR A SALT DOME REPOSITORY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harwell, M. A.; Brandstetter, A.; Benson, G. L.

    1982-06-01

    As a methodology demonstration for the Office of Nuclear Waste Isolation (ONWI), the Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Program conducted an initial reference site analysis of the long-term effectiveness of a salt dome repository. The Hainesville Salt Dome in Texas was chosen to be representative of the Gulf Coast interior salt domes; however, the Hainesville Site has been eliminated as a possible nuclear waste repository site. The data used for this exercise are not adequate for an actual assessment, nor have all the parametric analyses been made that would adequately characterize the response of the geosystem surrounding the repository. Additionally, because this was the first exercise of the complete AEGIS and Waste Rock Interaction Technology (WRIT) methodology, this report provides the initial opportunity for the methodology, specifically applied to a site, to be reviewed by the community outside the AEGIS. The scenario evaluation, as a part of the methodology demonstration, involved consideration of a large variety of potentially disruptive phenomena, which alone or in concert could lead to a breach in a salt dome repository and to a subsequent transport of the radionuclides to the environment. Without waste- and repository-induced effects, no plausible natural geologic events or processes could be envisioned that would compromise the repository integrity over the one-million-year time frame after closure. Near-field (waste- and repository-induced) effects were excluded from consideration in this analysis, but they can be added in future analyses when that methodology development is more complete. The potential for consequential human intrusion into salt domes within a million-year time frame led to the consideration of a solution mining intrusion scenario. The AEGIS staff developed a specific human intrusion scenario at 100 years and 1000 years post-closure, which is one of a whole suite of possible scenarios. This scenario resulted in the delivery of radionuclide-contaminated brine to the surface, where a portion was diverted to culinary salt for direct ingestion by the existing population. Consequence analyses indicated calculated human doses that would be highly deleterious. Additional analyses indicated that doses well above background would occur from such a scenario even if it occurred a million years into the future. The way to preclude such an intrusion is continued control over the repository site, either through direct institutional control or through the effective passive transfer of information. A secondary aspect of the specific human intrusion scenario involved a breach through the side of the salt dome, through which radionuclides migrated via the ground-water system to the accessible environment. This provided a demonstration of the geotransport methodology that AEGIS can use in actual site evaluations, as well as the WRIT program's capabilities with respect to defining the source term and retardation rates of the radionuclides in the repository. This reference site analysis was initially published as a Working Document in December 1979. That version was distributed for a formal peer review by individuals and organizations not involved in its development. The present report represents a revision, based in part on the responses received from the external reviewers. Summaries of the comments from the reviewers and responses to these comments by the AEGIS staff are presented. The exercise of the AEGIS methodology was successful in demonstrating the methodology and, thus, in providing a basis for substantive peer review, in terms of further development of the AEGIS site-applications capability and in terms of providing insight into the potential for consequential human intrusion into a salt dome repository.

  9. Scientific information repository assisting reflectance spectrometry in legal medicine.

    PubMed

    Belenki, Liudmila; Sterzik, Vera; Bohnert, Michael; Zimmermann, Klaus; Liehr, Andreas W

    2012-06-01

    Reflectance spectrometry is a fast and reliable method for the characterization of human skin if the spectra are analyzed with respect to a physical model describing the optical properties of human skin. For a field study performed at the Institute of Legal Medicine and the Freiburg Materials Research Center of the University of Freiburg, a scientific information repository has been developed, which is a variant of an electronic laboratory notebook and assists in the acquisition, management, and high-throughput analysis of reflectance spectra in heterogeneous research environments. At the core of the repository is a database management system hosting the master data. It is filled with primary data via a graphical user interface (GUI) programmed in Java, which also enables the user to browse the database and access the results of data analysis. The latter is carried out via Matlab, Python, and C programs, which retrieve the primary data from the scientific information repository, perform the analysis, and store the results in the database for further usage.
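
    The sketch below illustrates the analysis loop described above (analysis programs retrieve primary data from the repository database, compute derived results, and write them back), assuming a hypothetical SQLite schema with `spectra` and `results` tables; the actual repository uses its own database design, Java GUI, and a physical skin model rather than this toy statistic.

```python
# Hedged sketch of a retrieve-analyze-store loop; schema and analysis are hypothetical.
import json
import sqlite3


def analyze_spectrum(wavelengths_nm, reflectance):
    """Toy 'analysis': mean reflectance and the wavelength of the reflectance maximum."""
    peak_idx = max(range(len(reflectance)), key=reflectance.__getitem__)
    return {
        "mean_reflectance": sum(reflectance) / len(reflectance),
        "peak_wavelength_nm": wavelengths_nm[peak_idx],
    }


def process_all(db_path):
    """Fetch primary spectra, analyze them, and store results back in the repository DB."""
    con = sqlite3.connect(db_path)
    cur = con.execute("SELECT id, wavelengths, reflectance FROM spectra")
    for spec_id, wl_json, refl_json in cur.fetchall():
        result = analyze_spectrum(json.loads(wl_json), json.loads(refl_json))
        con.execute(
            "INSERT INTO results (spectrum_id, payload) VALUES (?, ?)",
            (spec_id, json.dumps(result)),
        )
    con.commit()
    con.close()


if __name__ == "__main__":
    process_all("repository.db")   # assumes the hypothetical tables exist
```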

  10. Credentialing Data Scientists: A Domain Repository Perspective

    NASA Astrophysics Data System (ADS)

    Lehnert, K. A.; Furukawa, H.

    2015-12-01

    A career in data science can have many paths: data curation, data analysis, metadata modeling - all of these in different commercial or scientific applications. Can a certification as 'data scientist' guarantee that an applicant or candidate for a data science position has just the right skills? How valuable is a 'generic' certification as data scientist for an employer looking to fill a data science position? Credentials that are more specific and discipline-oriented may be more valuable to both the employer and the job candidate. One employment sector for data scientists is the data repositories that provide discipline-specific data services for science communities. Data science positions within domain repositories include a wide range of responsibilities in support of the full data life cycle - from data preservation and curation, to development of data models, ontologies, and user interfaces, to development of data analysis and visualization tools, to community education and outreach - and require a substantial degree of discipline-specific knowledge of scientific data acquisition and analysis workflows, data quality measures, and data cultures. Can there be certification programs for domain-specific data scientists that help build the urgently needed workforce for the repositories? The American Geophysical Union has recently started an initiative to develop a program for data science continuing education and data science professional certification for the Earth and space sciences. An Editorial Board has been charged to identify and develop curricula and content for these programs and to provide input and feedback in the implementation of the program. This presentation will report on the progress of this initiative and evaluate its utility for the needs of domain repositories in the Earth and space sciences.

  11. Coupled Heat and Moisture Transport Simulation on the Re-saturation of Engineered Clay Barrier

    NASA Astrophysics Data System (ADS)

    Huang, W. H.; Chuang, Y. F.

    2014-12-01

    An engineered clay barrier plays a major role in the isolation of radioactive wastes in an underground repository. This paper investigates the resaturation processes of the clay barrier, with emphasis on the coupling effects of heat and moisture during the intrusion of groundwater into the repository. A reference bentonite and a locally available clay were adopted in the laboratory program. Soil suction of clay specimens was measured by psychrometers embedded in the specimens and by the vapor equilibrium technique at varying temperatures, so as to determine the soil water characteristic curves of the two clays at different temperatures. Water uptake tests were then conducted on clay specimens compacted at various densities to simulate the intrusion of groundwater into the clay barrier. Using the soil water characteristic curve, an integration scheme was introduced to estimate the hydraulic conductivity of unsaturated clay. It was found that soil suction decreases as temperature increases, resulting in a reduction in water retention capability. The finite element method was then employed to carry out the numerical simulation of the saturation process in the near field of a repository. Results of the numerical simulation were validated using the degree-of-saturation profiles obtained from the water uptake tests on the clays. The numerical scheme was then extended to establish a model simulating the resaturation process after the closure of a repository. Finally, the model was used to evaluate the effect of clay barrier thickness on the time required for groundwater to penetrate the clay barrier and approach saturation. Because the suction and thermal conductivity of the clay barrier material vary with temperature, the calculated temperature field is reduced when these hydro-properties are incorporated in the calculations.
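
    The abstract does not give the integration scheme itself; as a hedged illustration, the sketch below shows one common way to estimate relative unsaturated hydraulic conductivity from a soil water characteristic curve, namely a van Genuchten SWCC combined with a numerically integrated Mualem model. The parameter values are placeholders, not the bentonite data of the study.

```python
# Hedged sketch: van Genuchten SWCC + numerically integrated Mualem model.
import numpy as np

ALPHA, N = 0.08, 1.6          # van Genuchten parameters (1/kPa, -); assumed values
M = 1.0 - 1.0 / N


def suction_from_se(se):
    """Invert the van Genuchten SWCC to get suction psi(Se) in kPa."""
    return (se ** (-1.0 / M) - 1.0) ** (1.0 / N) / ALPHA


def relative_conductivity(se_target, n_steps=2000):
    """Mualem model: k_r = sqrt(Se) * [ int_0^Se dS/psi / int_0^1 dS/psi ]^2."""
    s = np.linspace(1e-6, 1.0 - 1e-6, n_steps)
    integrand = 1.0 / suction_from_se(s)
    ds = s[1] - s[0]
    full = integrand.sum() * ds
    part = integrand[s <= se_target].sum() * ds
    return np.sqrt(se_target) * (part / full) ** 2


print(relative_conductivity(0.5))   # relative conductivity at 50% effective saturation
```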

  12. Rolling Deck to Repository (R2R): Collaborative Development of Linked Data for Oceanographic Research

    NASA Astrophysics Data System (ADS)

    Arko, Robert; Chandler, Cynthia; Stocks, Karen; Smith, Shawn; Clark, Paul; Shepherd, Adam; Moore, Carla; Beaulieu, Stace

    2013-04-01

    The Rolling Deck to Repository (R2R) program is developing infrastructure to ensure the underway sensor data from U.S. academic oceanographic research vessels are routinely and consistently documented, preserved in long-term archives, and disseminated to the science community. The entire R2R Catalog is published online as a Linked Data collection, making it easily accessible to encourage discovery and integration with data at other repositories. We are developing the R2R Linked Data collection with specific goals in mind: 1.) We facilitate data access and reuse by publishing the richest possible collection of resources to describe vessels, cruises, instruments, and datasets from the U.S. academic fleet, including data quality assessment results and clean trackline navigation; 2.) We facilitate data citation through the entire lifecycle from field acquisition to shoreside archiving to journal articles and global syntheses, by publishing Digital Object Identifiers (DOIs) for datasets and encoding them directly into our Linked Data resources; and 3.) We facilitate federation with other repositories such as the Biological and Chemical Oceanography Data Management Office (BCO-DMO), InterRidge Vents Database, and Index to Marine and Lacustrine Geological Samples (IMLGS), by reciprocal linking between RDF resources and supporting the RDF Query Language. R2R participates in the Ocean Data Interoperability Platform (ODIP), a joint European-U.S.-Australian partnership to facilitate the sharing of data and documentation across international borders. We publish our controlled vocabularies as a Simple Knowledge Organization System (SKOS) concept collection, and are working toward alignment with SeaDataNet and other community-standard terms using the NERC Vocabulary Server (NVS). http://rvdata.us/
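
    As an illustration of how such Linked Data can be consumed, the sketch below loads a tiny RDF graph and runs a SPARQL query with rdflib. The Turtle snippet and property names are invented for the example; they are not the actual R2R vocabulary or endpoint.

```python
# Hedged sketch: SPARQL over a toy RDF graph with rdflib; vocabulary is invented.
from rdflib import Graph

TTL = """
@prefix ex: <http://example.org/r2r/> .
ex:cruise_AB1234 ex:vessel "R/V Example" ;
                 ex:hasDataset ex:dataset_001 .
ex:dataset_001   ex:instrument "multibeam" ;
                 ex:doi "10.0000/example-doi" .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

QUERY = """
PREFIX ex: <http://example.org/r2r/>
SELECT ?cruise ?doi WHERE {
    ?cruise ex:hasDataset ?d .
    ?d ex:instrument "multibeam" ;
       ex:doi ?doi .
}
"""

# Find cruises with a multibeam dataset and print the dataset DOI.
for cruise, doi in g.query(QUERY):
    print(cruise, doi)
```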

  13. Iterative performance assessments as a regulatory tool for evaluating repository safety: How experiences from SKI Project-90 were used in formulating the new performance assessment project SITE-94

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersson, J.

    1993-12-31

    The Swedish Nuclear Power Inspectorate (SKI) regulatory research program has to prepare for the process of licensing a repository for spent nuclear fuel by building up the necessary knowledge and review capacity. SKI's main strategy for meeting this demand is to develop an independent performance assessment capability. SKI's own first performance assessment project, Project-90, was completed in 1991 and is now followed by a new project, SITE-94. SITE-94 is based on conclusions reached within Project-90. An independent review of Project-90, carried out by an NEA team of experts, has also contributed to the formation of the project. Another important reason for the project is that the implementing organization in Sweden, SKB, has proposed to submit an application to start detailed investigation of a repository candidate site around 1997. SITE-94 is a performance assessment of a hypothetical repository at a real site. The main objective of the project is to determine how site-specific data should be assimilated into the performance assessment process, and to evaluate how uncertainties inherent in site characterization will influence performance assessment results. This will be addressed by exploring multiple interpretations, conceptual models, and parameters consistent with the site data. The site evaluation will strive for consistency between geological, hydrological, rock mechanical, and geochemical descriptions. Other important elements of SITE-94 are the development of a practical and defensible methodology for defining, constructing, and analyzing scenarios, the development of approaches for the treatment of uncertainties, evaluation of canister integrity, and the development and application of an appropriate quality assurance plan for performance assessments.

  14. 10 CFR 60.51 - License amendment for permanent closure.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... description of the program for post-permanent closure monitoring of the geologic repository. (2) A detailed... postclosure controlled area and geologic repository operations area by monuments that have been designed... tests, experiments, and any other analyses relating to backfill of excavated areas, shaft sealing, waste...

  15. 10 CFR 60.51 - License amendment for permanent closure.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... description of the program for post-permanent closure monitoring of the geologic repository. (2) A detailed... postclosure controlled area and geologic repository operations area by monuments that have been designed... tests, experiments, and any other analyses relating to backfill of excavated areas, shaft sealing, waste...

  16. 10 CFR 60.51 - License amendment for permanent closure.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... description of the program for post-permanent closure monitoring of the geologic repository. (2) A detailed... postclosure controlled area and geologic repository operations area by monuments that have been designed... tests, experiments, and any other analyses relating to backfill of excavated areas, shaft sealing, waste...

  17. 10 CFR 60.51 - License amendment for permanent closure.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... description of the program for post-permanent closure monitoring of the geologic repository. (2) A detailed... postclosure controlled area and geologic repository operations area by monuments that have been designed... tests, experiments, and any other analyses relating to backfill of excavated areas, shaft sealing, waste...

  18. 10 CFR 60.51 - License amendment for permanent closure.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... description of the program for post-permanent closure monitoring of the geologic repository. (2) A detailed... postclosure controlled area and geologic repository operations area by monuments that have been designed... tests, experiments, and any other analyses relating to backfill of excavated areas, shaft sealing, waste...

  19. Establishment of Peripheral Nerve Injury Data Repository to Monitor and Support Population Health Decisions

    DTIC Science & Technology

    2017-07-01

    AWARD NUMBER: W81XWH-16-0-DM167033. TITLE: Establishment of Peripheral Nerve Injury Data Repository to Monitor and Support Population Health Decisions. The indexed excerpt notes patient enrollment and states that the collected data will be utilized to 1) describe the outcomes of various PNI and 2) suggest outcomes that support population health decisions.

  20. A Safety Case Approach for Deep Geologic Disposal of DOE HLW and DOE SNF in Bedded Salt - 13350

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sevougian, S. David; MacKinnon, Robert J.; Leigh, Christi D.

    2013-07-01

    The primary objective of this study is to investigate the feasibility and utility of developing a defensible safety case for disposal of United States Department of Energy (U.S. DOE) high-level waste (HLW) and DOE spent nuclear fuel (SNF) in a conceptual deep geologic repository that is assumed to be located in a bedded salt formation of the Delaware Basin [1]. A safety case is a formal compilation of evidence, analyses, and arguments that substantiate and demonstrate the safety of a proposed or conceptual repository. We conclude that a strong initial safety case for potential licensing can be readily compiled by capitalizing on the extensive technical basis that exists from prior work on the Waste Isolation Pilot Plant (WIPP), other U.S. repository development programs, and the work published through international efforts in salt repository programs such as in Germany. The potential benefits of developing a safety case include leveraging previous investments in WIPP to reduce future new repository costs, enhancing the ability to effectively plan for a repository and its licensing, and possibly expediting a schedule for a repository. A safety case will provide the necessary structure for organizing and synthesizing existing salt repository science and identifying any issues and gaps pertaining to safe disposal of DOE HLW and DOE SNF in bedded salt. The safety case synthesis will help DOE to plan its future R&D activities for investigating salt disposal using a risk-informed approach that prioritizes test activities that include laboratory, field, and underground investigations. It should be emphasized that the DOE has not made any decisions regarding the disposition of DOE HLW and DOE SNF. Furthermore, the safety case discussed herein is not intended to either site a repository in the Delaware Basin or preclude siting in other media at other locations. Rather, this study simply presents an approach for accelerated development of a safety case for a potential DOE HLW and DOE SNF repository using the currently available technical basis for bedded salt. This approach includes a summary of the regulatory environment relevant to disposal of DOE HLW and DOE SNF in a deep geologic repository, the key elements of a safety case, the evolution of the safety case through the successive phases of repository development and licensing, and the existing technical basis that could be used to substantiate the safety of a geologic repository if it were to be sited in the Delaware Basin. We also discuss the potential role of an underground research laboratory (URL). (authors)

  1. Education and Outreach Plans for the U.S. Drillship in IODP

    NASA Astrophysics Data System (ADS)

    White, K. S.; Reagan, M.; Klaus, A. D.

    2003-12-01

    The Integrated Ocean Drilling Program (IODP) began on October 1, 2003, following the end of operations of the 20-year Ocean Drilling Program (ODP). Education and outreach is a key component of IODP both nationally and internationally. The JOI Alliance (Joint Oceanographic Institutions, Inc., Texas A&M University, and Lamont Doherty Earth Observatory of Columbia University) will lead activities related to the U.S. drillship, coordinating these education and outreach efforts with those undertaken by the Central Management Organization, other IODP platform operators, and a U.S. Science Support Program successor. The Alliance will serve the national and assist the international scientific drilling communities by providing the results from the U.S. vessel to the public, government representatives, and scientists. The Alliance will expand upon media outreach strategies that were successful in ODP, such as issuing press releases at the conclusion of each leg and for major scientific breakthroughs; conducting tours, press conferences, and events during port calls; working with the press at major scientific meetings, and encouraging journalists to sail on expeditions. The Alliance will increase its education role by developing, coordinating, and disseminating educational materials and programs for teachers and students on the scientific themes and discoveries of IODP science. An important component of the outreach plan is using the vessel and associated laboratories and repositories as classrooms. IODP plans include multiple ship berths each year for teachers, based on the success of a pilot program conducted by ODP in 2001. This program, featuring a teacher onboard for a cruise, was accompanied by a distance-learning program and on-line curriculum models. Teachers can tour, both virtually and directly, laboratories and core repositories and participate in scheduled activities and courses. Using science conducted onboard the ship, the Alliance will develop online curriculum materials, as well as publications and fact sheets geared toward nonscientists. The Alliance will partner with existing scientific and education organizations, including programs at their universities, to widely disseminate IODP results and materials.

  2. Integration of Multi-Modal Biomedical Data to Predict Cancer Grade and Patient Survival.

    PubMed

    Phan, John H; Hoffman, Ryan; Kothari, Sonal; Wu, Po-Yen; Wang, May D

    2016-02-01

    The Big Data era in biomedical research has resulted in large-cohort data repositories such as The Cancer Genome Atlas (TCGA). These repositories routinely contain hundreds of matched patient samples for genomic, proteomic, imaging, and clinical data modalities, enabling holistic and multi-modal integrative analysis of human disease. Using TCGA renal and ovarian cancer data, we conducted a novel investigation of multi-modal data integration by combining histopathological image and RNA-seq data. We compared the performances of two integrative prediction methods: majority vote and stacked generalization. Results indicate that integration of multiple data modalities improves prediction of cancer grade and outcome. Specifically, stacked generalization, a method that integrates multiple data modalities to produce a single prediction result, outperforms both single-data-modality prediction and majority vote. Moreover, stacked generalization reveals the contribution of each data modality (and specific features within each data modality) to the final prediction result and may provide biological insights to explain prediction performance.
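
    A minimal sketch of the two integration strategies compared in the study, implemented with scikit-learn on synthetic data; the two base learners merely stand in for the per-modality (image and RNA-seq) models, and nothing here reproduces the TCGA features or results.

```python
# Hedged sketch: majority vote vs. stacked generalization on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a matched multi-modal feature matrix (grade as the label).
X, y = make_classification(n_samples=300, n_features=60, random_state=0)

# Two base learners stand in for the per-modality (image, RNA-seq) models.
base = [
    ("img", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("rna", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
]

majority = VotingClassifier(estimators=base, voting="hard")        # majority vote
stacked = StackingClassifier(estimators=base,                      # stacked generalization
                             final_estimator=LogisticRegression(max_iter=1000))

for name, clf in [("majority vote", majority), ("stacked generalization", stacked)]:
    print(name, round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```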

  3. Repository-Based Software Engineering (RBSE) program

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Support of a software engineering program was provided in the following areas: client/customer liaison; research representation/outreach; and program support management. Additionally, a list of deliverables is presented.

  4. New directions in medical e-curricula and the use of digital repositories.

    PubMed

    Fleiszer, David M; Posel, Nancy H; Steacy, Sean P

    2004-03-01

    Medical educators involved in the growth of multimedia-enhanced e-curricula are increasingly aware of the need for digital repositories to catalogue, store, and ensure access to the learning objects integrated within their online material. The experience at the Faculty of Medicine at McGill University during the initial development of a mainstream electronic curriculum reflects this growing recognition that repositories can facilitate the development of a more comprehensive and effective electronic curriculum. Digital repositories can also help to ensure efficient utilization of resources through the use, re-use, and reprocessing of multimedia learning objects, addressing the potential for collaboration among repositories and increasing available material exponentially. The authors review different approaches to the development of a digital repository application, as well as global and specific issues that should be examined in the initial requirements definition and development phase, to ensure current initiatives meet long-term requirements. Often, decisions regarding the creation of e-curricula and associated digital repositories are left to interested faculty and their individual development teams. However, the development of an e-curriculum and digital repository is not predominantly a technical exercise, but rather one that affects global pedagogical strategies and curricular content and involves a commitment of large-scale resources. The outcomes of these decisions can have long-term consequences and, as such, should involve faculty at the highest levels, including the dean.

  5. Preliminary Concept of Operations for the Spent Fuel Management System--WM2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cumberland, Riley M; Adeniyi, Abiodun Idowu; Howard, Rob L

    The Nuclear Fuels Storage and Transportation Planning Project (NFST) within the U.S. Department of Energy's Office of Nuclear Energy is tasked with identifying, planning, and conducting activities to lay the groundwork for developing interim storage and transportation capabilities in support of an integrated waste management system. The system will provide interim storage for commercial spent nuclear fuel (SNF) from reactor sites and deliver it to a repository. The system will also include multiple subsystems, potentially including: one or more interim storage facilities (ISF); one or more repositories; facilities to package and/or repackage SNF; and transportation systems. The project team is analyzing options for an integrated waste management system. To support this analysis, the project team has developed a Concept of Operations document that describes both the potential integrated system and the inter-dependencies between system components. The goal of this work is to aid systems analysts in the development of consistent models across the project, which involves multiple investigators. The Concept of Operations document will be updated periodically as new developments emerge. At a high level, SNF is expected to travel from reactors to a repository. SNF is first unloaded from reactors and placed in spent fuel pools for wet storage at utility sites. After the SNF has cooled enough to satisfy loading limits, it is placed in a container at reactor sites for storage and/or transportation. After transportation requirements are met, the SNF is transported to an ISF, which stores the SNF until a repository is developed, or directly to a repository if one is available. While the high-level operation of the system is straightforward, analysts must evaluate numerous alternative options. Alternative options include the number of ISFs (if any), ISF design, the stage at which SNF repackaging occurs (if any), repackaging technology, the types of containers used, repository design, component sizing, and the timing of events. These alternative options arise due to technological, economic, or policy considerations. As new developments regularly emerge, the operational concepts will be periodically updated. This paper gives an overview of the different potential alternatives identified in the Concept of Operations document at a conceptual level.

  6. EPA Facility Registry Service (FRS): ICIS

    EPA Pesticide Factsheets

    This web feature service contains location and facility identification information from EPA's Facility Registry Service (FRS) for the subset of facilities that link to the Integrated Compliance Information System (ICIS). When complete, ICIS will provide a database that will contain integrated enforcement and compliance information across most of EPA's programs. The vision for ICIS is to replace EPA's independent databases that contain enforcement data with a single repository for that information. Currently, ICIS contains all Federal Administrative and Judicial enforcement actions and a subset of the Permit Compliance System (PCS), which supports the National Pollutant Discharge Elimination System (NPDES). ICIS exchanges non-sensitive enforcement/compliance activities, non-sensitive formal enforcement actions, and NPDES information with FRS. This web feature service contains the enforcement/compliance activities and formal enforcement action related facilities; the NPDES facilities are contained in the PCS_NPDES web feature service. FRS identifies and geospatially locates facilities, sites, or places subject to environmental regulations or of environmental interest. Using vigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records, and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities.
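
    Consuming such a web feature service typically means issuing standard OGC WFS requests. The sketch below shows a generic GetFeature call from Python; the endpoint URL and layer name are placeholders, not the actual EPA service address.

```python
# Hedged sketch of a standard OGC WFS GetFeature request; endpoint and layer are placeholders.
import requests

WFS_ENDPOINT = "https://example.gov/geoserver/wfs"   # placeholder, not the EPA endpoint

params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "FRS_ICIS_Facilities",   # placeholder layer name
    "count": 10,
    "outputFormat": "GML3",
}

resp = requests.get(WFS_ENDPOINT, params=params, timeout=30)
resp.raise_for_status()
print(resp.text[:500])   # first part of the returned feature collection
```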

  7. Connecting the pieces: Using ORCIDs to improve research impact and repositories.

    PubMed

    Baessa, Mohamed; Lery, Thibaut; Grenz, Daryl; Vijayakumar, J K

    2015-01-01

    Quantitative data are crucial in the assessment of research impact in the academic world. However, as a young university created in 2009, King Abdullah University of Science and Technology (KAUST) needs to aggregate bibliometrics from researchers coming from diverse origins, not necessarily with the proper affiliations. In this context, the University launched an institutional repository in September 2012 with the objective of creating a home for the intellectual outputs of KAUST researchers. Later, the university adopted the first mandated institutional open access policy in the Arab region, effective June 2014. Several projects were then initiated in order to accurately identify the research being done by KAUST authors and bring it into the repository in accordance with the open access policy. Integration with ORCID has been a key element in this process and the best way to ensure data quality for researchers' scientific contributions. It included the systematic inclusion and creation, if necessary, of ORCID identifiers in the existing repository system, an institutional membership in ORCID, and the creation of dedicated integration tools. In addition, and in cooperation with the Office of Research Evaluation, the Library worked on implementing a Current Research Information System (CRIS) as a standardized common resource to monitor KAUST research outputs. We will present our findings about the CRIS implementation, the ORCID API, and the repository statistics, as well as our approach to conducting the assessment of research impact in terms of usage by the global research community.
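
    One piece of such an integration is pulling a researcher's works from the public ORCID API so they can be matched against repository records. The sketch below assumes the v3.0 public endpoint and uses ORCID's well-known example iD; it is not KAUST's integration code.

```python
# Hedged sketch: list work titles for an ORCID iD via the public v3.0 API (assumed endpoint).
import requests


def list_work_titles(orcid_id):
    url = f"https://pub.orcid.org/v3.0/{orcid_id}/works"
    resp = requests.get(url, headers={"Accept": "application/json"}, timeout=30)
    resp.raise_for_status()
    titles = []
    # The works summary is grouped; each group holds one or more work summaries.
    for group in resp.json().get("group", []):
        for summary in group.get("work-summary", []):
            title = summary.get("title", {}).get("title", {}).get("value")
            if title:
                titles.append(title)
    return titles


print(list_work_titles("0000-0002-1825-0097"))   # ORCID documentation example iD
```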

  8. Evaluation of Five Sedimentary Rocks Other Than Salt for Geologic Repository Siting Purposes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croff, A.G.; Lomenick, T.F.; Lowrie, R.S.

    The US Department of Energy (DOE), in order to increase the diversity of rock types under consideration by the geologic disposal program, initiated the Sedimentary Rock Program (SERP), whose immediate objective is to evaluate five types of sedimentary rock - sandstone, chalk, carbonate rocks (limestone and dolostone), anhydrock, and shale - to determine their potential for siting a geologic repository. The evaluation of these five rock types, together with the ongoing salt studies, effectively results in the consideration of all types of relatively impermeable sedimentary rock for repository purposes. The results of this evaluation are expressed in terms of a ranking of the five rock types with respect to their potential to serve as a geologic repository host rock. This comparative evaluation was conducted on a non-site-specific basis, by use of generic information together with rock evaluation criteria (RECs) derived from the DOE siting guidelines for geologic repositories (CFR 1984). An information base relevant to rock evaluation using these RECs was developed in hydrology, geochemistry, rock characteristics (rock occurrences, thermal response, rock mechanics), natural resources, and rock dissolution. Evaluation against postclosure and preclosure RECs yielded a ranking of the five subject rocks with respect to their potential as repository host rocks. Shale was determined to be the most preferred of the five rock types, with sandstone a distant second, the carbonate rocks and anhydrock a more distant third, and chalk a relatively close fourth.

  9. Integrative Lifecourse and Genetic Analysis of Military Working Dogs

    DTIC Science & Technology

    2015-12-01

    GRANT NUMBER: W81XWH-11-2-0225. TITLE: Integrative Lifecourse and Genetic Analysis of Military Working Dogs. The indexed excerpt references "developments for realizing the potential of canine models", with the subsection "Epidemiology, longitudinal cohorts, tissue repositories and integrative ..."

  10. Getting Beyond Yucca Mountain - 12305

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halstead, Robert J.; Williams, James M.

    2012-07-01

    The U.S. Department of Energy has terminated the Yucca Mountain repository project. The U.S. Nuclear Regulatory Commission has indefinitely suspended the Yucca Mountain licensing proceeding. The presidentially appointed Blue Ribbon Commission (BRC) on America's Nuclear Future is preparing a report, due in January 2012, to the Secretary of Energy on recommendations for a new national nuclear waste management and disposal program. The BRC Draft Report published in July 2011 provides a compelling critique of the past three decades' failed efforts in the United States to site storage and disposal facilities for spent nuclear fuel (SNF) and high-level radioactive waste (HLW). However, the BRC Draft Report fails to provide detailed guidance on how to implement an alternative, successful approach to facility site selection. The comments submitted to the BRC by the State of Nevada Agency for Nuclear Projects provide useful details on how the US national nuclear waste program can get beyond the failed Yucca Mountain repository project. A detailed siting process, consisting of legislative elements, procedural elements, and 'rules' for volunteer sites, could meet the objectives of the BRC and the Western Governors Association (WGA), while promoting and protecting the interests of potential host states. The recent termination of the proposed Yucca Mountain repository provides both an opportunity and a need to re-examine the United States' nuclear waste management program. The BRC Draft Report published in July 2011 provides a compelling critique of the past three decades' failed efforts in the United States to site storage and disposal facilities for SNF and HLW. It is anticipated that the BRC Final Report in January 2012 will recommend a new general course of action, but there will likely continue to be a need for detailed guidance on how to implement an alternative, successful approach to facility site selection. Getting the nation's nuclear waste program back on track requires, among other things, new principles for siting: principles based on partnership between the federal implementing agency and prospective host states. These principles apply to the task of developing an integrated waste management strategy, to interactions between the federal government and prospective host states for consolidated storage and disposal facilities, and to the logistically and politically complicated task of transportation system design. Lessons from the past 25 years, in combination with fundamental parameters of the nuclear waste management task in the US, suggest the new principles for partnership outlined in this paper. These principles will work better if well-grounded and firm guidelines are set out beforehand and if the challenge of maintaining competence, transparency, and integrity in the new organization is treated as a problem to be addressed rather than a result to be expected. (authors)

  11. Entrez Neuron RDFa: a pragmatic Semantic Web application for data integration in neuroscience research

    PubMed Central

    Samwald, Matthias; Lim, Ernest; Masiar, Peter; Marenco, Luis; Chen, Huajun; Morse, Thomas; Mutalik, Pradeep; Shepherd, Gordon; Miller, Perry; Cheung, Kei-Hoi

    2013-01-01

    The amount of biomedical data available in Semantic Web formats has been rapidly growing in recent years. While these formats are machine-friendly, user-friendly web interfaces allowing easy querying of these data are typically lacking. We present “Entrez Neuron”, a pilot neuron-centric interface that allows for keyword-based queries against a coherent repository of OWL ontologies. These ontologies describe neuronal structures, physiology, mathematical models and microscopy images. The returned query results are organized hierarchically according to brain architecture. Where possible, the application makes use of entities from the Open Biomedical Ontologies (OBO) and the ‘HCLS knowledgebase’ developed by the W3C Interest Group for Health Care and Life Science. It makes use of the emerging RDFa standard to embed ontology fragments and semantic annotations within its HTML-based user interface. The application and underlying ontologies demonstrate how Semantic Web technologies can be used for information integration within a curated information repository and between curated information repositories. They also demonstrate how information integration can be accomplished on the client side, through simple copying and pasting of portions of documents that contain RDFa markup. PMID:19745321

  12. Making proteomics data accessible and reusable: Current state of proteomics databases and repositories

    PubMed Central

    Perez-Riverol, Yasset; Alpi, Emanuele; Wang, Rui; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2015-01-01

    Compared to other data-intensive disciplines such as genomics, public deposition and storage of MS-based proteomics data are still less developed due to, among other reasons, the inherent complexity of the data and the variety of data types and experimental workflows. In order to address this need, several public repositories for MS proteomics experiments have been developed, each with different purposes in mind. The most established resources are the Global Proteome Machine Database (GPMDB), PeptideAtlas, and the PRIDE database. Additionally, there are other useful (in many cases recently developed) resources such as ProteomicsDB, Mass Spectrometry Interactive Virtual Environment (MassIVE), Chorus, MaxQB, PeptideAtlas SRM Experiment Library (PASSEL), Model Organism Protein Expression Database (MOPED), and the Human Proteinpedia. In addition, the ProteomeXchange consortium has been recently developed to enable better integration of public repositories and the coordinated sharing of proteomics information, maximizing its benefit to the scientific community. Here, we will review each of the major proteomics resources independently and some tools that enable the integration, mining and reuse of the data. We will also discuss some of the major challenges and current pitfalls in the integration and sharing of the data. PMID:25158685

  13. Development and implementation of an integrated EHR for Homecare Service: a South American experience.

    PubMed

    Aguilera Díaz, Jerónimo; Arias, Antonio Eduardo; Budalich, Cintia Mabel; Benítez, Sonia Elizabeth; López, Gastón; Borbolla, Damián; Plazzotta, Fernando; Luna, Daniel; de Quirós, Fernán González Bernaldo

    2010-01-01

    This paper describes the development and implementation of a web-based electronic health record for the Homecare Service program at the Hospital Italiano de Buenos Aires. It reviews the process of integrating the new electronic health record into the hospital information system, allowing physicians to access the clinical data repository from their PCs at home, with the capability of consulting the patient's past and present health care history, orders, tests, and referrals with other professionals through the new Electronic Health Record. We also discuss how workflow processes were changed and improved for the physicians, nurses, and administrative personnel of the Homecare Service, and the educational methods used to improve acceptance and adoption of these new technologies. We also briefly describe the validation of physicians and their field work with electronic signatures.

  14. Using RxNorm for cross-institutional formulary data normalization within a distributed grid-computing environment.

    PubMed

    Wynden, Rob; Anderson, Nick; Casale, Marco; Lakshminarayanan, Prakash; Anderson, Kent; Prosser, Justin; Errecart, Larry; Livshits, Alice; Thimman, Tim; Weiner, Mark

    2011-01-01

    Within the CTSA (Clinical Translational Sciences Awards) program, academic medical centers are tasked with the storage of clinical formulary data within an Integrated Data Repository (IDR) and the subsequent exposure of that data over grid computing environments for hypothesis generation and cohort selection. Formulary data collected over long periods of time across multiple institutions requires normalization of terms before those data sets can be aggregated and compared. This paper sets forth a solution to the challenge of generating derived aggregated normalized views from large, distributed data sets of clinical formulary data intended for re-use within clinical translational research.
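
    As a hedged illustration of the normalization step, the sketch below maps free-text formulary drug names to RxNorm concept identifiers (RxCUIs) using the public RxNav REST service; the endpoint behavior is assumed from its public documentation, and the drug names are arbitrary examples rather than institutional formulary data.

```python
# Hedged sketch: normalize free-text drug names to RxCUIs via the public RxNav REST API.
import requests


def to_rxcui(drug_name):
    """Return the first RxCUI matched for a drug name, or None if no match is found."""
    resp = requests.get(
        "https://rxnav.nlm.nih.gov/REST/rxcui.json",
        params={"name": drug_name, "search": 1},   # search=1 enables normalized matching
        timeout=30,
    )
    resp.raise_for_status()
    ids = resp.json().get("idGroup", {}).get("rxnormId", [])
    return ids[0] if ids else None


# Example local formulary entries (invented); in practice these come from each site's IDR.
local_formulary = ["aspirin 81 mg oral tablet", "metformin", "lisinopril"]
normalized = {name: to_rxcui(name) for name in local_formulary}
print(normalized)   # site-specific names mapped to shared RxCUIs (or None)
```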

  15. Integrating SAP to Information Systems Curriculum: Design and Delivery

    ERIC Educational Resources Information Center

    Wang, Ming

    2011-01-01

    Information Systems (IS) education is being transformed from the segmented applications toward the integrated enterprise-wide system software Enterprise Resource Planning (ERP). ERP is a platform that integrates all business functions with its centralized data repository shared by all the business operations in the enterprise. This tremendous…

  16. A "Simple Query Interface" Adapter for the Discovery and Exchange of Learning Resources

    ERIC Educational Resources Information Center

    Massart, David

    2006-01-01

    Developed as part of CEN/ISSS Workshop on Learning Technology efforts to improve interoperability between learning resource repositories, the Simple Query Interface (SQI) is an Application Program Interface (API) for querying heterogeneous repositories of learning resource metadata. In the context of the ProLearn Network of Excellence, SQI is used…

  17. Thermoelastic analysis of spent fuel and high level radioactive waste repositories in salt. A semi-analytical solution. [JUDITH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    St. John, C.M.

    1977-04-01

    An underground repository containing heat-generating High Level Waste or Spent Unreprocessed Fuel may be approximated as a finite number of heat sources distributed across the plane of the repository. The resulting temperature, displacement, and stress changes may be calculated using analytical solutions, provided linear thermoelasticity is assumed. This report documents a computer program based on this approach and gives results that form the basis for a comparison between the effects of disposing of High Level Waste and Spent Unreprocessed Fuel.
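
    The semi-analytical approach rests on superposing analytical heat-source solutions. As an illustrative stand-in (not the documented program itself), the sketch below superposes the classical continuous point-source solution, delta_T = Q / (4*pi*K*r) * erfc(r / (2*sqrt(alpha*t))), over a grid of constant-power canisters; real waste power decays with time, and the displacement and stress fields are not shown. Material properties and powers are assumed values.

```python
# Hedged sketch: superposition of continuous point heat sources in an infinite medium.
import numpy as np
from scipy.special import erfc

K = 5.0        # thermal conductivity of salt, W/(m K); assumed value
ALPHA = 3e-6   # thermal diffusivity, m^2/s; assumed value


def delta_t(point, sources, powers_w, t_seconds):
    """Temperature rise at 'point' from constant-power point sources after time t."""
    rise = 0.0
    for src, q in zip(sources, powers_w):
        r = np.linalg.norm(np.asarray(point) - np.asarray(src))
        rise += q / (4.0 * np.pi * K * r) * erfc(r / (2.0 * np.sqrt(ALPHA * t_seconds)))
    return rise


# Canisters on a 10 m grid across the repository plane; observation point 25 m above it.
sources = [(x, y, 0.0) for x in range(0, 100, 10) for y in range(0, 100, 10)]
powers = [500.0] * len(sources)   # 500 W per canister; assumed value

print(delta_t((50.0, 50.0, 25.0), sources, powers, t_seconds=30 * 3.15e7))  # after ~30 years
```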

  18. Managing the nation's nuclear waste. Site descriptions: Cypress Creek, Davis Canyon, Deaf Smith, Hanford Reference, Lavender Canyon, Richton Dome, Swisher, Vacherie Dome, and Yucca Mountain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1985-12-31

    In 1982, the Congress enacted the Nuclear Waste Policy Act (Public Law 97-425), which established a comprehensive national program directed toward siting, constructing, and operating geologic repositories for the permanent disposal of high-level radioactive waste. In February 1983, the United States Department of Energy (DOE) identified the nine referenced repository locations as potentially acceptable sites for a mined geologic repository. These sites have been evaluated in accordance with the DOE's General Guidelines for the Recommendation of Sites for Nuclear Waste Repositories. The DOE findings and determinations are based on the evaluations contained in the draft Environmental Assessments (EA). A final EA will be prepared after considering the comments received on the draft EA. The purpose of this document is to provide the public with specific site information on each potential repository location.

  19. Git Replacement for the

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, P.

    2014-09-23

    GRAPE is a tool for managing software project workflows for the Git version control system. It provides a suite of tools to simplify and configure branch based development, integration with a project's testing suite, and integration with the Atlassian Stash repository hosting tool.

  20. KiMoSys: a web-based repository of experimental data for KInetic MOdels of biological SYStems

    PubMed Central

    2014-01-01

    Background The kinetic modeling of biological systems is mainly composed of three steps that proceed iteratively: model building, simulation and analysis. In the first step, it is usually required to set initial metabolite concentrations and to assign kinetic rate laws, along with estimating parameter values from kinetic data through optimization when these are not known. Although the rapid development of high-throughput methods has generated much omics data, experimentalists present only a summary of the obtained results for publication; the experimental data files are usually not submitted to any public repository, or are simply not available at all. In order to automate the steps of building kinetic models as much as possible, there is a growing requirement in the systems biology community for easily exchanging data in combination with models, which represents the main motivation for the development of KiMoSys. Description KiMoSys is a user-friendly platform that includes a public data repository of published experimental data, containing concentration data of metabolites and enzymes and flux data. It was designed to ensure data management, storage and sharing for a wider systems biology community. This community repository offers a web-based interface and upload facility to turn available data into publicly accessible, centralized and structured-format data files. Moreover, it compiles and integrates available kinetic models associated with the data. KiMoSys also integrates some tools to facilitate the kinetic model construction process for large-scale metabolic networks, especially when systems biologists perform computational research. Conclusions KiMoSys is a web-based system that integrates a public data and associated model(s) repository with computational tools, providing the systems biology community with a novel application facilitating data storage and sharing, thus supporting construction of ODE-based kinetic models and collaborative research projects. The web application, implemented using the Ruby on Rails framework, is freely available for web access at http://kimosys.org, along with its full documentation. PMID:25115331
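
    To indicate the kind of model such repository data are meant to feed, the sketch below integrates a small ODE-based kinetic model (a two-step pathway with Michaelis-Menten kinetics) with SciPy; the rate parameters and sampling times are invented placeholders, not KiMoSys records.

```python
# Hedged sketch: a toy ODE-based kinetic model of a two-step Michaelis-Menten pathway.
import numpy as np
from scipy.integrate import solve_ivp

VMAX1, KM1 = 1.2, 0.5   # enzyme 1 parameters; assumed values
VMAX2, KM2 = 0.8, 0.3   # enzyme 2 parameters; assumed values


def pathway(t, y):
    """ODE right-hand side for substrate -> intermediate -> product."""
    s, i, p = y
    v1 = VMAX1 * s / (KM1 + s)
    v2 = VMAX2 * i / (KM2 + i)
    return [-v1, v1 - v2, v2]


# Integrate from an initial substrate concentration of 2.0 (arbitrary units).
sol = solve_ivp(pathway, (0.0, 20.0), y0=[2.0, 0.0, 0.0], dense_output=True)

# Compare predictions against (hypothetical) repository measurements at sampled times.
t_meas = np.array([0.0, 5.0, 10.0, 20.0])
print(sol.sol(t_meas)[2])   # predicted product concentration at those times
```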

  1. VisRseq: R-based visual framework for analysis of sequencing data

    PubMed Central

    2015-01-01

    Background Several tools have been developed to enable biologists to perform initial browsing and exploration of sequencing data. However, the computational tool set for further analyses often requires significant computational expertise to use, and many of the biologists with the knowledge needed to interpret these data must rely on programming experts. Results We present VisRseq, a framework for analysis of sequencing datasets that provides a computationally rich and accessible framework for integrative and interactive analyses without requiring programming expertise. We achieve this aim by providing R apps, which offer a semi-auto-generated and unified graphical user interface for computational packages in R and repositories such as Bioconductor. To address the interactivity limitation inherent in R libraries, our framework includes several native apps that provide exploration and brushing operations as well as an integrated genome browser. The apps can be chained together to create more powerful analysis workflows. Conclusions To validate the usability of VisRseq for analysis of sequencing data, we present two case studies performed by our collaborators and report their workflow and insights. PMID:26328469

  2. VisRseq: R-based visual framework for analysis of sequencing data.

    PubMed

    Younesy, Hamid; Möller, Torsten; Lorincz, Matthew C; Karimi, Mohammad M; Jones, Steven J M

    2015-01-01

    Several tools have been developed to enable biologists to perform initial browsing and exploration of sequencing data. However, the computational tool set for further analyses often requires significant computational expertise to use, and many of the biologists with the knowledge needed to interpret these data must rely on programming experts. We present VisRseq, a framework for analysis of sequencing datasets that provides a computationally rich and accessible framework for integrative and interactive analyses without requiring programming expertise. We achieve this aim by providing R apps, which offer a semi-auto-generated and unified graphical user interface for computational packages in R and repositories such as Bioconductor. To address the interactivity limitation inherent in R libraries, our framework includes several native apps that provide exploration and brushing operations as well as an integrated genome browser. The apps can be chained together to create more powerful analysis workflows. To validate the usability of VisRseq for analysis of sequencing data, we present two case studies performed by our collaborators and report their workflow and insights.

  3. Toward Multi-Model Frameworks Addressing Multi-Sector Dynamics, Risks, and Resiliency

    NASA Astrophysics Data System (ADS)

    Moss, R. H.; Fisher-Vanden, K.; Barrett, C.; Kraucunas, I.; Rice, J.; Sue Wing, I.; Bhaduri, B. L.; Reed, P. M.

    2016-12-01

    This presentation will report on the findings of recent modeling studies and a series of workshops and other efforts convened under the auspices of the US Global Change Research Program (USGCRP) to improve integration of critical infrastructure, natural resources, integrated assessment, and human systems modeling. The focus is on issues related to drought and increased variability of water supply at the energy-water-land nexus. One motivation for the effort is the potential for impact cascades across coupled built, natural, and socioeconomic systems stressed by social and environmental change. The design is for an adaptable modeling framework that will include a repository of independently developed modeling tools of varying complexity - from coarser-grid, longer-time-horizon models to higher-resolution, shorter-term models of socioeconomic systems, infrastructure, and natural resources. The models draw from three interlocking research communities: Earth system, impacts/adaptation/vulnerability, and integrated assessment. A key lesson will be explored, namely the importance of defining a clear use perspective to limit dimensionality, focus modeling, and facilitate uncertainty characterization and communication.

  4. Repository Profiles for Atmospheric and Climate Sciences: Capabilities and Trends in Data Services

    NASA Astrophysics Data System (ADS)

    Hou, C. Y.; Thompson, C. A.; Palmer, C. L.

    2014-12-01

    As digital research data proliferate and expectations for open access escalate, the landscape of data repositories is becoming more complex. For example, DataBib currently identifies 980 data repositories across the disciplines, with 117 categorized under Geosciences. In atmospheric and climate sciences, there are great expectations for the integration and reuse of data for advancing science. To realize this potential, resources are needed that explicate the range of repository options available for locating and depositing open data, their conditions of access and use, and the services and tools they provide. This study profiled 38 open digital repositories in the atmospheric and climate sciences, analyzing each on 55 criteria through content analysis of their websites. The results provide a systematic way to assess and compare capabilities, services, and institutional characteristics and identify trends across repositories. Selected results from the more detailed outcomes to be presented include the following: Most repositories offer guidance on data format(s) for submission and dissemination. 42% offer authorization-free access. More than half use some type of data identification system such as DOIs. Nearly half offer some data processing, with a similar number providing software or tools. 78.9% request that users cite or acknowledge datasets used and the data center. Only 21.1% recommend specific metadata standards, such as ISO 19115 or Dublin Core, with more than half utilizing a customized metadata scheme. Information on repository certification and accreditation was rarely provided, and information on transfer of rights and data security was uneven. Few provided policy information on preservation, migration, reappraisal, disposal, or long-term sustainability. As repository use increases, it will be important for institutions to make their procedures and policies explicit, to build trust with user communities and improve efficiencies in data sharing. Resources such as repository profiles will be essential for scientists to weigh options and understand trends in data services across the evolving network of repositories.

  5. Integrating a Learning Management System with a Student Assignments Digital Repository. A Case Study

    ERIC Educational Resources Information Center

    Díaz, Javier; Schiavoni, Alejandra; Osorio, María Alejandra; Amadeo, Ana Paola; Charnelli, María Emilia

    2013-01-01

    The integration of different platforms and information Systems in the academic environment is highly important and quite a challenge within the field of Information Technology. This integration allows for higher resource availability and improved interaction among intervening actors. In the field of e-Learning, where Learning Management Systems…

  6. Tracking Research Data Footprints via Integration with Research Graph

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.; Wang, J.; Aryani, A.; Conlon, M.; Wyborn, L. A.; Choudhury, S. A.

    2017-12-01

    The researcher of today is likely to be part of a team that will use subsets of data from at least one, if not more, external repositories, and that same data could be used by multiple researchers for many different purposes. At best, the repositories that host this data will know who is accessing their data, but rarely what they are using it for, so the funders of data-collection programs and the data repositories that store the data are unlikely to know: 1) which research funding contributed to the collection and preservation of a dataset, and 2) which data contributed to high-impact research and publications. In times of funding shortages there is a growing need to be able to trace the footprint of a data set from the originator that collected the data to the repository that stores the data and ultimately to any derived publications. The Research Data Alliance's Data Description Registry Interoperability Working Group (DDRIWG) has addressed this problem through the development of a distributed graph, called Research Graph, that can map each piece of the research interaction puzzle by building aggregated graphs. It can connect datasets on the basis of co-authorship or other collaboration models such as joint funding and grants, and can connect research datasets, publications, grants and researcher profiles across research repositories and infrastructures such as DataCite and ORCID. National Computational Infrastructure (NCI) in Australia is one of the early adopters of Research Graph. The graphic view and quantitative analysis help NCI track the usage of their national reference data collections, thus quantifying the role that these NCI-hosted data assets play within the funding-researcher-data-publication cycle. The graph can unlock the complex interactions of the research projects by tracking the contribution of datasets, the various funding bodies and the downstream data users. The RMap Project is a similar initiative which aims to capture the complex relationships among scholarly publications and their underlying data, including IEEE publications. It is hoped that RMap and Research Graph will be combined in the near future and that physical samples will also be added to Research Graph.
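
    A minimal sketch of the kind of aggregated graph described above, using the networkx Python library; the grant, ORCID and DOI identifiers are hypothetical placeholders, and the relation names are illustrative rather than the actual Research Graph schema.

        # Sketch: linking a grant, a researcher, a dataset and a publication
        # so that a dataset's "footprint" can be traced in either direction.
        import networkx as nx

        g = nx.DiGraph()
        g.add_node("grant:AGENCY-001", kind="grant")
        g.add_node("orcid:0000-0002-0000-0000", kind="researcher")
        g.add_node("doi:10.1234/dataset.5678", kind="dataset")
        g.add_node("doi:10.1234/paper.9012", kind="publication")

        g.add_edge("grant:AGENCY-001", "doi:10.1234/dataset.5678", relation="funded")
        g.add_edge("orcid:0000-0002-0000-0000", "doi:10.1234/dataset.5678", relation="collected")
        g.add_edge("doi:10.1234/paper.9012", "doi:10.1234/dataset.5678", relation="cites")

        # Which grants, researchers and papers touch this dataset?
        dataset = "doi:10.1234/dataset.5678"
        print(list(g.predecessors(dataset)))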

  7. Integrating diverse databases into an unified analysis framework: a Galaxy approach

    PubMed Central

    Blankenberg, Daniel; Coraor, Nathan; Von Kuster, Gregory; Taylor, James; Nekrutenko, Anton

    2011-01-01

    Recent technological advances have led to the ability to generate large amounts of data for model and non-model organisms. Whereas, in the past, there were a relatively small number of central repositories serving genomic data, an increasing number of distinct specialized data repositories and resources have since been established. Here, we describe a generic approach that provides for the integration of a diverse spectrum of data resources into a unified analysis framework, Galaxy (http://usegalaxy.org). This approach allows the simplified coupling of external data resources with the data analysis tools available to Galaxy users, while leveraging the native data mining facilities of the external data resources. Database URL: http://usegalaxy.org PMID:21531983

  8. Consolidated Storage Facilities: Camel's Nose or Shared Burden? - 13112

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, James M.

    2013-07-01

    The Blue Ribbon Commission (BRC) made a strong argument for why the reformulated nuclear waste program should make prompt efforts to develop one or more consolidated storage facilities (CSFs), and recommended the amendment of NWPA Section 145(b) (linking 'monitored retrievable storage' to repository development) as an essential means to that end. However, other than recommending that the siting of CSFs should be 'consent-based' and that spent nuclear fuel (SNF) at stranded sites should be first-in-line for removal, the Commission made few recommendations regarding how CSF development should proceed. Working with three other key Senators, Jeff Bingaman attempted in the 112th Congress to craft legislation (S. 3469) to put the BRC recommendations into legislative language. The key reason why the Nuclear Waste Administration Act of 2012 did not proceed was the inability of the four senators to agree on whether and how to amend NWPA Section 145(b). A brief review of efforts to site consolidated storage since the Nuclear Waste Policy Amendments Act of 1987 suggests a strong and consistent motivation to shift the burden to someone (anyone) else. This paper argues that modification of NWPA Section 145(b) should be accompanied by guidelines for regional development and operation of CSFs. After review of the BRC recommendations regarding CSFs, and the 'camel's nose' prospects if implementation is not accompanied by further guidelines, the paper outlines a proposal for implementation of CSFs on a regional basis, including priorities for removal from reactor sites and subsequently from CSFs to repositories. Rather than allowing repository siting to be prejudiced by the location of a single remote CSF, the regional approach limits transport for off-site acceptance and storage, increases the efficiency of removal operations, provides a useful basis for compensation to states and communities that accept CSFs, and gives states with shared circumstances a shared stake in storage and disposal in an integrated national program. (authors)

  9. Standards-based metadata procedures for retrieving data for display or mining utilizing persistent (data-DOI) identifiers.

    PubMed

    Harvey, Matthew J; Mason, Nicholas J; McLean, Andrew; Rzepa, Henry S

    2015-01-01

    We describe three different procedures based on metadata standards for enabling automated retrieval of scientific data from digital repositories utilising the persistent identifier of the dataset with optional specification of the attributes of the data document such as filename or media type. The procedures are demonstrated using the JSmol molecular visualizer as a component of a web page and Avogadro as a stand-alone modelling program. We compare our methods for automated retrieval of data from a standards-compliant data repository with those currently in operation for a selection of existing molecular databases and repositories. Our methods illustrate the importance of adopting a standards-based approach of using metadata declarations to increase access to and discoverability of repository-based data.
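
    One widely used standards-based retrieval route of the kind compared here is DOI content negotiation: resolving the dataset's persistent identifier while asking for a machine-readable metadata record. The sketch below assumes a hypothetical DOI and assumes the registrar serves DataCite-style JSON; it illustrates the general pattern, not the authors' specific procedures.

        # Sketch: resolve a dataset DOI via content negotiation and read back
        # its metadata record, including the landing-page URL.
        import requests

        doi = "10.1234/example-dataset"   # hypothetical DOI
        resp = requests.get(
            f"https://doi.org/{doi}",
            headers={"Accept": "application/vnd.datacite.datacite+json"},
            timeout=30,
        )
        resp.raise_for_status()
        record = resp.json()
        print(record.get("titles"), record.get("url"))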

  10. Making proteomics data accessible and reusable: current state of proteomics databases and repositories.

    PubMed

    Perez-Riverol, Yasset; Alpi, Emanuele; Wang, Rui; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2015-03-01

    Compared to other data-intensive disciplines such as genomics, public deposition and storage of MS-based proteomics data are still less developed due to, among other reasons, the inherent complexity of the data and the variety of data types and experimental workflows. In order to address this need, several public repositories for MS proteomics experiments have been developed, each with different purposes in mind. The most established resources are the Global Proteome Machine Database (GPMDB), PeptideAtlas, and the PRIDE database. Additionally, there are other useful (in many cases recently developed) resources such as ProteomicsDB, Mass Spectrometry Interactive Virtual Environment (MassIVE), Chorus, MaxQB, PeptideAtlas SRM Experiment Library (PASSEL), Model Organism Protein Expression Database (MOPED), and the Human Proteinpedia. In addition, the ProteomeXchange consortium has been recently developed to enable better integration of public repositories and the coordinated sharing of proteomics information, maximizing its benefit to the scientific community. Here, we will review each of the major proteomics resources independently and some tools that enable the integration, mining and reuse of the data. We will also discuss some of the major challenges and current pitfalls in the integration and sharing of the data. © 2014 The Authors. PROTEOMICS published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Repository-based software engineering program: Concept document

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This document provides the context for Repository-Based Software Engineering's (RBSE's) evolving functional and operational product requirements, and it is the parent document for development of detailed technical and management plans. When furnished, requirements documents will serve as the governing RBSE product specification. The RBSE Program Management Plan will define resources, schedules, and technical and organizational approaches to fulfilling the goals and objectives of this concept. The purpose of this document is to provide a concise overview of RBSE, describe the rationale for the RBSE Program, and define a clear, common vision for RBSE team members and customers. The document also provides the foundation for developing RBSE user and system requirements and a corresponding Program Management Plan. The concept is used to express the program mission to RBSE users and managers and to provide an exhibit for community review.

  12. Ocean Drilling Program: Completed Legs

    Science.gov Websites

    [Fragment of the Ocean Drilling Program completed-legs listing; the source table of leg numbers, dates, ports of call, co-chief scientists, drill sites (e.g., Bermuda Rise, New Jersey Sea-Level Transect), and core repositories did not survive extraction.]

  13. Design and Implementation of an International Training Program on Repository Development and Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vugrin, K.W.; Twitchell, Ch.A.

    2008-07-01

    Korea Hydro and Nuclear Power Co., Ltd. (KHNP) is an electric company in the Republic of Korea with twenty operational nuclear power plants and eight additional units that are either planned or currently under construction. Regulations require that KHNP manage the radioactive waste generated by their nuclear power plants. In the course of planning low-, intermediate-, and high-level waste storage facilities, KHNP sought interaction with an acknowledged expert in the field of radioactive waste management and, consequently, contacted Sandia National Laboratories (SNL). KHNP has contracted with SNL to provide a year-long training program on repository science. This paper discusses the design of the curriculum, specific plans for execution of the training program, and recommendations for smooth implementation of international training programs. (authors)

  14. Ridge 2000 Data Management System

    NASA Astrophysics Data System (ADS)

    Goodwillie, A. M.; Carbotte, S. M.; Arko, R. A.; Haxby, W. F.; Ryan, W. B.; Chayes, D. N.; Lehnert, K. A.; Shank, T. M.

    2005-12-01

    Hosted at Lamont by the marine geoscience Data Management group, mgDMS, the NSF-funded Ridge 2000 electronic database, http://www.marine-geo.org/ridge2000/, is a key component of the Ridge 2000 multi-disciplinary program. The database covers each of the three Ridge 2000 Integrated Study Sites: Endeavour Segment, Lau Basin, and 8-11N Segment. It promotes the sharing of information with the broader community, facilitates integration of the suite of information collected at each study site, and enables comparisons between sites. The Ridge 2000 data system provides easy web access to a relational database that is built around a catalogue of cruise metadata. Any web browser can be used to perform a versatile text-based search which returns basic cruise and submersible dive information, sample and data inventories, navigation, and other relevant metadata such as shipboard personnel and links to NSF program awards. In addition, non-proprietary data files, images, and derived products which are hosted locally or in national repositories, as well as science and technical reports, can be freely downloaded. On the Ridge 2000 database page, our Data Link allows users to search the database using a broad range of parameters including data type, cruise ID, chief scientist, and geographical location. The first Ridge 2000 field programs sailed in 2004 and, in addition to numerous data sets collected prior to the Ridge 2000 program, the database currently contains information on fifteen Ridge 2000-funded cruises and almost sixty Alvin dives. Track lines can be viewed using a recently implemented Web Map Service button labelled Map View. The Ridge 2000 database is fully integrated with databases hosted by the mgDMS group for MARGINS and the Antarctic multibeam and seismic reflection data initiatives. Links are provided to partner databases including PetDB, SIOExplorer, and the ODP Janus system. Inter-operability with existing and new partner repositories continues to be strengthened. One major effort involves the gradual unification of the metadata across these partner databases. Standardised electronic metadata forms that can be filled in at sea are available from our web site. Interactive map-based exploration and visualisation of the Ridge 2000 database is provided by GeoMapApp, a freely-available Java(tm) application being developed within the mgDMS group. GeoMapApp includes high-resolution bathymetric grids for the 8-11N EPR segment and allows customised maps and grids for any of the Ridge 2000 ISS to be created. Vent and instrument locations can be plotted and saved as images, and Alvin dive photos are also available.

  15. Construction of a nasopharyngeal carcinoma 2D/MS repository with Open Source XML database--Xindice.

    PubMed

    Li, Feng; Li, Maoyu; Xiao, Zhiqiang; Zhang, Pengfei; Li, Jianling; Chen, Zhuchu

    2006-01-11

    Many proteomics initiatives require integration of all information with uniform criteria, from collection of samples and data display to publication of experimental results. The integration and exchange of these data, which differ in format and structure, poses a great challenge. XML technology shows promise for handling this task due to its simplicity and flexibility. Nasopharyngeal carcinoma (NPC) is one of the most common cancers in southern China and Southeast Asia, which has marked geographic and racial differences in incidence. Although there are some cancer proteome databases now, there is still no NPC proteome database. The raw NPC proteome experiment data were captured into one XML document with the Human Proteome Markup Language (HUP-ML) editor and imported into the native XML database Xindice. The 2D/MS repository of the NPC proteome was constructed with Apache, PHP and Xindice to provide access to the database via the Internet. On our website, two methods, keyword query and click query, are provided to access the entries of the NPC proteome database. Our 2D/MS repository can be used to share the raw NPC proteomics data that are generated from gel-based proteomics experiments. The database, as well as the PHP source codes for constructing users' own proteome repository, can be accessed at http://www.xyproteomics.org/.
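
    As a rough illustration of how entries in such an XML repository can be queried, the sketch below runs an XPath-style search over a small in-memory document. The element and attribute names are hypothetical stand-ins, not the actual HUP-ML schema, and a deployed Xindice instance would be queried through its own interface rather than from a string.

        # Sketch: keyword-style lookup of a gel spot by protein accession
        # in a HUP-ML-like XML fragment (element names are invented).
        import xml.etree.ElementTree as ET

        doc = ET.fromstring("""
        <experiment>
          <spot id="S42"><protein accession="P12345" mw="52000" pi="5.8"/></spot>
          <spot id="S43"><protein accession="Q67890" mw="31000" pi="6.2"/></spot>
        </experiment>
        """)

        for spot in doc.findall(".//spot"):
            protein = spot.find("protein")
            if protein is not None and protein.get("accession") == "P12345":
                print(spot.get("id"), protein.get("mw"), protein.get("pi"))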

  16. Digital Authenticity and Integrity: Digital Cultural Heritage Documents as Research Resources

    ERIC Educational Resources Information Center

    Bradley, Rachael

    2005-01-01

    This article presents the results of a survey addressing methods of securing digital content and ensuring the content's authenticity and integrity, as well as the perceived importance of authenticity and integrity. The survey was sent to 40 digital repositories in the United States and Canada between June 30 and July 19, 2003. Twenty-two…

  17. OntoVIP: an ontology for the annotation of object models used for medical image simulation.

    PubMed

    Gibaud, Bernard; Forestier, Germain; Benoit-Cattin, Hugues; Cervenansky, Frédéric; Clarysse, Patrick; Friboulet, Denis; Gaignard, Alban; Hugonnard, Patrick; Lartizien, Carole; Liebgott, Hervé; Montagnat, Johan; Tabary, Joachim; Glatard, Tristan

    2014-12-01

    This paper describes the creation of a comprehensive conceptualization of object models used in medical image simulation, suitable for major imaging modalities and simulators. The goal is to create an application ontology that can be used to annotate the models in a repository integrated in the Virtual Imaging Platform (VIP), to facilitate their sharing and reuse. Annotations make the anatomical, physiological and pathophysiological content of the object models explicit. In such an interdisciplinary context we chose to rely on a common integration framework provided by a foundational ontology, that facilitates the consistent integration of the various modules extracted from several existing ontologies, i.e. FMA, PATO, MPATH, RadLex and ChEBI. Emphasis is put on methodology for achieving this extraction and integration. The most salient aspects of the ontology are presented, especially the organization in model layers, as well as its use to browse and query the model repository. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. ODMedit: uniform semantic annotation for data integration in medicine based on a public metadata repository.

    PubMed

    Dugas, Martin; Meidt, Alexandra; Neuhaus, Philipp; Storck, Michael; Varghese, Julian

    2016-06-01

    The volume and complexity of patient data - especially in personalised medicine - are steadily increasing, both regarding clinical data and genomic profiles: Typically more than 1,000 items (e.g., laboratory values, vital signs, diagnostic tests etc.) are collected per patient in clinical trials. In oncology hundreds of mutations can potentially be detected for each patient by genomic profiling. Therefore data integration from multiple sources constitutes a key challenge for medical research and healthcare. Semantic annotation of data elements can facilitate the identification of matching data elements in different sources and thereby support data integration. Millions of different annotations are required due to the semantic richness of patient data. These annotations should be uniform, i.e., two matching data elements shall contain the same annotations. However, large terminologies like SNOMED CT or UMLS do not provide uniform coding. It is proposed to develop semantic annotations of medical data elements based on a large-scale public metadata repository. To achieve uniform codes, semantic annotations shall be re-used if a matching data element is available in the metadata repository. A web-based tool called ODMedit ( https://odmeditor.uni-muenster.de/ ) was developed to create data models with uniform semantic annotations. It contains ~800,000 terms with semantic annotations which were derived from ~5,800 models from the portal of medical data models (MDM). The tool was successfully applied to manually annotate 22 forms with 292 data items from CDISC and to update 1,495 data models of the MDM portal. Uniform manual semantic annotation of data models is feasible in principle, but requires a large-scale collaborative effort due to the semantic richness of patient data. A web-based tool for these annotations is available, which is linked to a public metadata repository.
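
    A toy sketch of the uniform-annotation idea: each data element carries a set of concept codes, and two elements with different labels are treated as matching exactly when their code sets agree. The element names and the code value below are illustrative placeholders, not verified UMLS identifiers.

        # Sketch: matching data elements by shared semantic annotation
        # rather than by label.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class DataElement:
            name: str
            codes: frozenset   # concept codes, e.g. UMLS CUIs (placeholders here)

            def matches(self, other: "DataElement") -> bool:
                return self.codes == other.codes

        a = DataElement("systolic_bp", frozenset({"C0000001"}))
        b = DataElement("sys_blood_pressure", frozenset({"C0000001"}))
        print(a.matches(b))   # True: same annotation despite different labels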

  19. Information and image integration: project spectrum

    NASA Astrophysics Data System (ADS)

    Blaine, G. James; Jost, R. Gilbert; Martin, Lori; Weiss, David A.; Lehmann, Ron; Fritz, Kevin

    1998-07-01

    The BJC Health System (BJC) and the Washington University School of Medicine (WUSM) formed a technology alliance with industry collaborators to develop and implement an integrated, advanced clinical information system. The industry collaborators include IBM, Kodak, SBC and Motorola. The activity, called Project Spectrum, provides an integrated clinical repository for the multiple hospital facilities of the BJC. The BJC System consists of 12 acute care hospitals serving over one million patients in Missouri and Illinois. An interface engine manages transactions from each of the hospital information systems, lab systems and radiology information systems. Data is normalized to provide a consistent view for the primary care physician. Access to the clinical repository is supported by web-based server/browser technology which delivers patient data to the physician's desktop. An HL7 based messaging system coordinates the acquisition and management of radiological image data and sends image keys to the clinical data repository. Access to the clinical chart browser currently provides radiology reports, laboratory data, vital signs and transcribed medical reports. A chart metaphor provides tabs for the selection of the clinical record for review. Activation of the radiology tab facilitates a standardized view of radiology reports and provides an icon used to initiate retrieval of available radiology images. The selection of the image icon spawns an image browser plug-in and utilizes the image key from the clinical repository to access the image server for the requested image data. The Spectrum system is collecting clinical data from five hospital systems and imaging data from two hospitals. Domain specific radiology imaging systems support the acquisition and primary interpretation of radiology exams. The spectrum clinical workstations are deployed to over 200 sites utilizing local area networks and ISDN connectivity.

  20. Practical management of heterogeneous neuroimaging metadata by global neuroimaging data repositories

    PubMed Central

    Neu, Scott C.; Crawford, Karen L.; Toga, Arthur W.

    2012-01-01

    Rapidly evolving neuroimaging techniques are producing unprecedented quantities of digital data at the same time that many research studies are evolving into global, multi-disciplinary collaborations between geographically distributed scientists. While networked computers have made it almost trivial to transmit data across long distances, collecting and analyzing this data requires extensive metadata if the data is to be maximally shared. Though it is typically straightforward to encode text and numerical values into files and send content between different locations, it is often difficult to attach context and implicit assumptions to the content. As the number of and geographic separation between data contributors grows to national and global scales, the heterogeneity of the collected metadata increases and conformance to a single standardization becomes implausible. Neuroimaging data repositories must then not only accumulate data but must also consolidate disparate metadata into an integrated view. In this article, using specific examples from our experiences, we demonstrate how standardization alone cannot achieve full integration of neuroimaging data from multiple heterogeneous sources and why a fundamental change in the architecture of neuroimaging data repositories is needed instead. PMID:22470336

  1. Practical management of heterogeneous neuroimaging metadata by global neuroimaging data repositories.

    PubMed

    Neu, Scott C; Crawford, Karen L; Toga, Arthur W

    2012-01-01

    Rapidly evolving neuroimaging techniques are producing unprecedented quantities of digital data at the same time that many research studies are evolving into global, multi-disciplinary collaborations between geographically distributed scientists. While networked computers have made it almost trivial to transmit data across long distances, collecting and analyzing this data requires extensive metadata if the data is to be maximally shared. Though it is typically straightforward to encode text and numerical values into files and send content between different locations, it is often difficult to attach context and implicit assumptions to the content. As the number of and geographic separation between data contributors grows to national and global scales, the heterogeneity of the collected metadata increases and conformance to a single standardization becomes implausible. Neuroimaging data repositories must then not only accumulate data but must also consolidate disparate metadata into an integrated view. In this article, using specific examples from our experiences, we demonstrate how standardization alone cannot achieve full integration of neuroimaging data from multiple heterogeneous sources and why a fundamental change in the architecture of neuroimaging data repositories is needed instead.

  2. Software support for Huntington's disease research.

    PubMed

    Conneally, P M; Gersting, J M; Gray, J M; Beidleman, K; Wexler, N S; Smith, C L

    1991-01-01

    Huntington's disease (HD) is a hereditary disorder involving the central nervous system. Its effects are devastating to the affected person as well as to his or her family. The Department of Medical and Molecular Genetics at Indiana University (IU) plays an integral part in Huntington's research by providing computerized repositories of HD family information for researchers and families. The National Huntington's Disease Research Roster, founded in 1979 at IU, and the Huntington's Disease in Venezuela Project database contain information that has proven to be invaluable in the worldwide field of HD research. This paper addresses the types of information stored in each database, the pedigree database program (MEGADATS) used to manage the data, and significant findings that have resulted from access to the data.

  3. Lotus Base: An integrated information portal for the model legume Lotus japonicus

    PubMed Central

    Mun, Terry; Bachmann, Asger; Gupta, Vikas; Stougaard, Jens; Andersen, Stig U.

    2016-01-01

    Lotus japonicus is a well-characterized model legume widely used in the study of plant-microbe interactions. However, datasets from various Lotus studies are poorly integrated and lack interoperability. We recognize the need for a comprehensive repository that allows dynamic exploration of Lotus genomic and transcriptomic data. Equally important are user-friendly in-browser tools designed for data visualization and interpretation. Here, we present Lotus Base, which opens to the research community a large, established LORE1 insertion mutant population containing in excess of 120,000 lines, and serves the end-user tightly integrated data from Lotus, such as the reference genome, annotated proteins, and expression profiling data. We report the integration of expression data from the L. japonicus gene expression atlas project, and the development of tools to cluster and export such data, allowing users to construct, visualize, and annotate co-expression gene networks. Lotus Base takes advantage of modern advances in browser technology to deliver powerful data interpretation for biologists. Its modular construction and publicly available application programming interface enable developers to tap into the wealth of integrated Lotus data. Lotus Base is freely accessible at: https://lotus.au.dk. PMID:28008948

  4. caCORE version 3: Implementation of a model driven, service-oriented architecture for semantic interoperability.

    PubMed

    Komatsoulis, George A; Warzel, Denise B; Hartel, Francis W; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; Coronado, Sherri de; Reeves, Dianne M; Hadfield, Jillaine B; Ludet, Christophe; Covitz, Peter A

    2008-02-01

    One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service-Oriented Architecture (SSOA) for cancer research by the National Cancer Institute's cancer Biomedical Informatics Grid (caBIG).

  5. caCORE version 3: Implementation of a model driven, service-oriented architecture for semantic interoperability

    PubMed Central

    Komatsoulis, George A.; Warzel, Denise B.; Hartel, Frank W.; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; de Coronado, Sherri; Reeves, Dianne M.; Hadfield, Jillaine B.; Ludet, Christophe; Covitz, Peter A.

    2008-01-01

    One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service Oriented Architecture (SSOA) for cancer research by the National Cancer Institute’s cancer Biomedical Informatics Grid (caBIG™). PMID:17512259

  6. Software for Sharing and Management of Information

    NASA Technical Reports Server (NTRS)

    Chen, James R.; Wolfe, Shawn R.; Wragg, Stephen D.

    2003-01-01

    DIAMS is a set of computer programs that implements a system of collaborative agents that serve multiple, geographically distributed users communicating via the Internet. DIAMS provides a user interface as a Java applet that runs on each user's computer and that works within the context of the user's Internet-browser software. DIAMS helps all its users to manage, gain access to, share, and exchange information in databases that they maintain on their computers. One of the DIAMS agents is a personal agent that helps its owner find information most relevant to current needs. It provides software tools and utilities for users to manage their information repositories with dynamic organization and virtual views. Capabilities for generating flexible hierarchical displays are integrated with capabilities for indexed-query searching to support effective access to information. Automatic indexing methods are employed to support users' queries and communication between agents. The catalog of a repository is kept in object-oriented storage to facilitate sharing of information. Collaboration between users is aided by matchmaker agents and by automated exchange of information. The matchmaker agents are designed to establish connections between users who have similar interests and expertise.

  7. Characterizing the proposed geologic repository for high-level radioactive waste at Yucca Mountain, Nevada--hydrology and geochemistry

    USGS Publications Warehouse

    Stuckless, John S.; Levich, Robert A.

    2012-01-01

    This hydrology and geochemistry volume is a companion volume to the 2007 Geological Society of America Memoir 199, The Geology and Climatology of Yucca Mountain and Vicinity, Southern Nevada and California, edited by Stuckless and Levich. The work in both volumes was originally reported in the U.S. Department of Energy regulatory document Yucca Mountain Site Description, for the site characterization study of Yucca Mountain, Nevada, as the proposed U.S. geologic repository for high-level radioactive waste. The selection of Yucca Mountain resulted from a nationwide search and numerous committee studies during a period of more than 40 yr. The waste, largely from commercial nuclear power reactors and the government's nuclear weapons programs, is characterized by intense penetrating radiation and high heat production, and, therefore, it must be isolated from the biosphere for tens of thousands of years. The extensive, unique, and often innovative geoscience investigations conducted at Yucca Mountain for more than 20 yr make it one of the most thoroughly studied geologic features on Earth. The results of these investigations contribute extensive knowledge to the hydrologic and geochemical aspects of radioactive waste disposal in the unsaturated zone. The science, analyses, and interpretations are important not only to Yucca Mountain, but also to the assessment of other sites or alternative processes that may be considered for waste disposal in the future. Groundwater conditions, processes, and geochemistry, especially in combination with the heat from radionuclide decay, are integral to the ability of a repository to isolate waste. Hydrology and geochemistry are discussed here in chapters on unsaturated zone hydrology, saturated zone hydrology, paleohydrology, hydrochemistry, radionuclide transport, and thermally driven coupled processes affecting long-term waste isolation. This introductory chapter reviews some of the reasons for choosing to study Yucca Mountain as a repository site.

  8. Characterizing the proposed geologic repository for high-level radioactive waste at Yucca Mountain, Nevada: hydrology and geochemistry

    USGS Publications Warehouse

    Stuckless, John S.; Levich, Robert A.

    2012-01-01

    This hydrology and geochemistry volume is a companion volume to the 2007 Geological Society of America Memoir 199, The Geology and Climatology of Yucca Mountain and Vicinity, Southern Nevada and California, edited by Stuckless and Levich. The work in both volumes was originally reported in the U.S. Department of Energy regulatory document Yucca Mountain Site Description, for the site characterization study of Yucca Mountain, Nevada, as the proposed U.S. geologic repository for high-level radioactive waste. The selection of Yucca Mountain resulted from a nationwide search and numerous committee studies during a period of more than 40 yr. The waste, largely from commercial nuclear power reactors and the government's nuclear weapons programs, is characterized by intense penetrating radiation and high heat production, and, therefore, it must be isolated from the biosphere for tens of thousands of years. The extensive, unique, and often innovative geoscience investigations conducted at Yucca Mountain for more than 20 yr make it one of the most thoroughly studied geologic features on Earth. The results of these investigations contribute extensive knowledge to the hydrologic and geochemical aspects of radioactive waste disposal in the unsaturated zone. The science, analyses, and interpretations are important not only to Yucca Mountain, but also to the assessment of other sites or alternative processes that may be considered for waste disposal in the future. Groundwater conditions, processes, and geochemistry, especially in combination with the heat from radionuclide decay, are integral to the ability of a repository to isolate waste. Hydrology and geochemistry are discussed here in chapters on unsaturated zone hydrology, saturated zone hydrology, paleohydrology, hydrochemistry, radionuclide transport, and thermally driven coupled processes affecting long-term waste isolation. This introductory chapter reviews some of the reasons for choosing to study Yucca Mountain as a repository site.

  9. DASMiner: discovering and integrating data from DAS sources

    PubMed Central

    2009-01-01

    Background DAS is a widely adopted protocol for providing syntactic interoperability among biological databases. The popularity of DAS is due to a simplified and elegant mechanism for data exchange that consists of sources exposing their RESTful interfaces for data access. As a growing number of DAS services become available for molecular biology resources, there is an incentive to explore this protocol in order to advance data discovery and integration among these resources. Results We developed DASMiner, a Matlab toolkit for querying DAS data sources that enables creation of integrated biological models using the information available in DAS-compliant repositories. DASMiner is composed of a browser application and an API that work together to facilitate gathering of data from different DAS sources, which can be used for creating enriched datasets from multiple sources. The browser is used to formulate queries and navigate data contained in DAS sources. Users can execute queries against these sources in an intuitive fashion, without the need to know the specific DAS syntax for the particular source. Using the source's metadata provided by the DAS Registry, the browser's layout adapts to expose only the set of commands and coordinate systems supported by the specific source. For this reason, the browser can interrogate any DAS source, independently of the type of data being served. The API component of DASMiner may be used for programmatic access of DAS sources by programs in Matlab. Once the desired data is found during navigation, the query is exported in the format of an API call to be used within any Matlab application. We illustrate the use of DASMiner by creating integrative models of histone modification maps and protein-protein interaction networks. These enriched datasets were built by retrieving and integrating distributed genomic and proteomic DAS sources using the API. Conclusion Support for the DAS protocol allows hundreds of molecular biology databases to be treated as a federated, online collection of resources. DASMiner enables full exploration of these resources, and can be used to deploy applications and create integrated views of biological systems using the information deposited in DAS repositories. PMID:19919683
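
    For orientation, the raw request that a DAS client such as DASMiner issues follows the DAS features command: a GET against a source URL with a segment parameter, returning DASGFF XML. In the sketch below the server URL and source name are placeholders; a real entry from the DAS Registry would supply working values.

        # Sketch: fetch features for a genomic segment from a DAS source
        # and list their identifiers (server/source names are hypothetical).
        import requests
        import xml.etree.ElementTree as ET

        base = "http://example.org/das/hg19"          # hypothetical DAS source
        params = {"segment": "chr1:1000000,1005000"}  # region of interest
        resp = requests.get(f"{base}/features", params=params, timeout=30)
        resp.raise_for_status()

        root = ET.fromstring(resp.content)
        for feature in root.iter("FEATURE"):
            print(feature.get("id"), feature.get("label"))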

  10. Digital Rocks Portal: a Sustainable Platform for Data Management, Analysis and Remote Visualization of Volumetric Images of Porous Media

    NASA Astrophysics Data System (ADS)

    Prodanovic, M.; Esteva, M.; Ketcham, R. A.

    2017-12-01

    Nanometer to centimeter-scale imaging such as (focused ion beam) scattered electron microscopy, magnetic resonance imaging and X-ray (micro)tomography has since the 1990s introduced 2D and 3D datasets of rock microstructure that allow investigation of nonlinear flow and mechanical phenomena on length scales that are otherwise inaccessible to laboratory measurements. The numerical approaches that use such images produce various upscaled parameters required by subsurface flow and deformation simulators. All of this has revolutionized our knowledge about grain-scale phenomena. However, a lack of data-sharing infrastructure among research groups makes it difficult to integrate different length scales. We have developed a sustainable, open and easy-to-use repository called the Digital Rocks Portal (https://www.digitalrocksportal.org) that (1) organizes images and related experimental measurements of different porous materials, and (2) improves access to them for a wider community of engineering or geosciences researchers not necessarily trained in computer science or data analysis. Digital Rocks Portal (NSF EarthCube Grant 1541008) is the first repository for imaged porous microstructure data. It is implemented within the reliable, 24/7 maintained High Performance Computing Infrastructure supported by the Texas Advanced Computing Center (University of Texas at Austin). Long-term storage is provided through the University of Texas System Research Cyber-infrastructure initiative. We show how the data can be documented, referenced in publications via digital object identifiers, visualized, searched for and linked to other repositories. We show recently implemented integration of remote parallel visualization, bulk upload for large datasets, as well as a preliminary flow simulation workflow with the pore structures currently stored in the repository. We discuss the issues of collecting correct metadata, data discoverability and repository sustainability.

  11. Putting the School Interoperability Framework to the Test

    ERIC Educational Resources Information Center

    Mercurius, Neil; Burton, Glenn; Hopkins, Bill; Larsen, Hans

    2004-01-01

    The Jurupa Unified School District in Southern California recently partnered with Microsoft, Dell and the Zone Integration Group for the implementation of a School Interoperability Framework (SIF) database repository model throughout the district (Magner 2002). A two-week project--the Integrated District Education Applications System, better known…

  12. High-Performance Computing in Neuroscience for Data-Driven Discovery, Integration, and Dissemination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouchard, Kristofer E.; Aimone, James B.; Chun, Miyoung

    A lack of coherent plans to analyze, manage, and understand data threatens the various opportunities offered by new neuro-technologies. High-performance computing will allow exploratory analysis of massive datasets stored in standardized formats, hosted in open repositories, and integrated with simulations.

  13. High-Performance Computing in Neuroscience for Data-Driven Discovery, Integration, and Dissemination

    DOE PAGES

    Bouchard, Kristofer E.; Aimone, James B.; Chun, Miyoung; ...

    2016-11-01

    A lack of coherent plans to analyze, manage, and understand data threatens the various opportunities offered by new neuro-technologies. High-performance computing will allow exploratory analysis of massive datasets stored in standardized formats, hosted in open repositories, and integrated with simulations.

  14. Donor human milk bank data collection in north america: an assessment of current status and future needs.

    PubMed

    Brownell, Elizabeth A; Lussier, Mary M; Herson, Victor C; Hagadorn, James I; Marinelli, Kathleen A

    2014-02-01

    The Human Milk Banking Association of North America (HMBANA) is a nonprofit association that standardizes and facilitates the establishment and operation of donor human milk (DHM) banks in North America. Each HMBANA milk bank in the network collects data on the DHM it receives and distributes, but a centralized data repository does not yet exist. In 2010, the Food and Drug Administration recognized the need to collect and disseminate systematic, standardized DHM bank data and suggested that HMBANA develop a DHM data repository. This study aimed to describe data currently collected by HMBANA DHM banks and evaluate feasibility and interest in participating in a centralized data repository. We conducted phone interviews with individuals in different HMBANA milk banks and summarized descriptive statistics. Eight of 13 (61.5%) sites consented to participate. All respondents collected donor demographics, and half (50%; n = 4) rescreened donors after 6 months of continued donation. The definition of preterm milk varied between DHM banks (≤ 32 to ≤ 40 weeks). The specific computer program used to house the data also differed. Half (50%; n = 4) indicated that they would consider participation in a centralized repository. Without standardized data across all HMBANA sites, the creation of a centralized data repository is not yet feasible. Lack of standardization and transparency may deter implementation of donor milk programs in the neonatal intensive care unit setting and hinder benchmarking, research, and quality improvement initiatives.

  15. Yucca Mountain Biological Resources Monitoring Program; Annual report, FY91

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1992-01-01

    The US Department of Energy (DOE) is required by the Nuclear Waste Policy Act of 1982 (as amended in 1987) to study and characterize Yucca Mountain as a possible site for a geologic repository for high-level nuclear waste. During site characterization, the DOE will conduct a variety of geotechnical, geochemical, geological, and hydrological studies to determine the suitability of Yucca Mountain as a repository. To ensure that site characterization activities (SCA) do not adversely affect the Yucca Mountain area, an environmental program has been implemented to monitor and mitigate potential impacts and to ensure that activities comply with applicable environmental regulations. This report describes the activities and accomplishments during fiscal year 1991 (FY91) for six program areas within the Terrestrial Ecosystem component of the YMP environmental program. The six program areas are Site Characterization Activities Effects, Desert Tortoises, Habitat Reclamation, Monitoring and Mitigation, Radiological Monitoring, and Biological Support.

  16. Multi-institutional tumor banking: lessons learned from a pancreatic cancer biospecimen repository.

    PubMed

    Demeure, Michael J; Sielaff, Timothy; Koep, Larry; Prinz, Richard; Moser, A James; Zeh, Herb; Hostetter, Galen; Black, Jodi; Decker, Ardis; Rosewell, Sandra; Bussey, Kimberly J; Von Hoff, Daniel

    2010-10-01

    Clinically annotated pancreatic cancer samples are needed for progress to be made toward developing more effective treatments for this deadly cancer. As part of a National Cancer Institute-funded program project, we established a biospecimen core to support the research efforts. This article summarizes the key hurdles encountered and solutions we found in the process of developing a successful multi-institution biospecimen repository.

  17. Scoping Review and Evaluation of SMS/text Messaging Platforms for mHealth Projects or Clinical Interventions

    PubMed Central

    Iribarren, Sarah; Brown, William; Giguere, Rebecca; Stone, Patricia; Schnall, Rebecca; Staggers, Nancy; Carballo-Diéguez, Alex

    2017-01-01

    Objectives Mobile technology supporting text messaging interventions (TMIs) continues to evolve, presenting challenges for researchers and healthcare professionals who need to choose software solutions to best meet their program needs. The objective of this review was to systematically identify and compare text messaging platforms and to summarize their advantages and disadvantages as described in peer-reviewed literature. Methods A scoping review was conducted using four steps: 1) identify currently available platforms through online searches and in mHealth repositories; 2) expand evaluation criteria of an mHealth mobile messaging toolkit and prior user experiences as researchers; 3) evaluate each platform’s functions and features based on the expanded criteria and a vendor survey; and 4) assess the documentation of platform use in the peer-review literature. Platforms meeting inclusion criteria were assessed independently by three reviewers and discussed until consensus was reached. The PRISMA guidelines were followed to report findings. Results Of the 1041 potentially relevant search results, 27 platforms met inclusion criteria. Most were excluded because they were not platforms (e.g., guides, toolkits, reports, or SMS gateways). Of the 27 platforms, only 12 were identified in existing mHealth repositories, 10 from Google searches, while five were found in both. The expanded evaluation criteria included 22 items. Results indicate no uniform presentation of platform features and functions, often making these difficult to discern. Fourteen of the platforms were reported as open source, 10 focused on health care and 16 were tailored to meet needs of low resource settings (not mutually exclusive). Fifteen platforms had do-it-yourself setup (programming not required) while the remainder required coding/programming skills or setups could be built to specification by the vendor. Frequently described features included data security and access to the platform via cloud-based systems. Pay structures and reported targeted end-users varied. Peer-reviewed publications listed only 6 of the 27 platforms across 21 publications. The majority of these articles reported the name of the platform used but did not describe advantages or disadvantages. Conclusions Searching for and comparing mHealth platforms for TMIs remains a challenge. The results of this review can serve as a resource for researchers and healthcare professionals wanting to integrate TMIs into health interventions. Steps to identify, compare and assess advantages and disadvantages are outlined for consideration. Expanded evaluation criteria can be used by future researchers. Continued and more comprehensive platform tools should be integrated into mHealth repositories. Detailed descriptions of platform advantages and disadvantages are needed when mHealth researchers publish findings to expand the body of research on texting-based tools for healthcare. Standardized descriptions and features are recommended for vendor sites. PMID:28347445

  18. Colloid formation during waste form reaction: Implications for nuclear waste disposal

    USGS Publications Warehouse

    Bates, J. K.; Bradley, J.; Teetsov, A.; Bradley, C. R.; Buchholtz ten Brink, Marilyn R.

    1992-01-01

    Insoluble plutonium- and americium-bearing colloidal particles formed during simulated weathering of a high-level nuclear waste glass. Nearly 100 percent of the total plutonium and americium in test ground water was concentrated in these submicrometer particles. These results indicate that models of actinide mobility and repository integrity, which assume complete solubility of actinides in ground water, underestimate the potential for radionuclide release into the environment. A colloid-trapping mechanism may be necessary for a waste repository to meet long-term performance specifications.

  19. The National Geological and Geophysical Data Preservation Program

    NASA Astrophysics Data System (ADS)

    Dickinson, T. L.; Steinmetz, J. C.; Gundersen, L. C.; Pierce, B. S.

    2006-12-01

    The ability to preserve and maintain geoscience data and collections has not kept pace with the growing need for accessible digital information and the technology to make it so. The Nation has lost valuable and unique geologic records and is in danger of losing much more. Many federal and state geological repositories are currently at their capacity for maintaining and storing data or samples. Some repositories are gaining additional, but temporary and substandard space, using transport containers or offsite warehouses where access is limited and storage conditions are poor. Over the past several years, there has been an increasing focus on the state of scientific collections in the United States. For example, the National Geological and Geophysical Data Preservation Program (NGGDPP) Act was passed as part of the Energy Policy Act of 2005, authorizing $30 million in funding for each of five years. The Act directs the U.S. Geological Survey to administer this program that includes a National Digital Catalog and Federal assistance to support our nation's repositories. Implementation of the Program awaits federal appropriations. The NGGDPP is envisioned as a national network of cooperating geoscience materials and data repositories that are operated independently yet guided by unified standards, procedures, and protocols for metadata. The holdings will be widely accessible through a common and mirrored Internet-based catalog (National Digital Catalog). The National Digital Catalog will tie the observations and analyses to the physical materials they come from. Our Nation's geological and geophysical data are invaluable and in some instances irreplaceable due to the destruction of outcrops, urbanization and restricted access. These data will enable the next generation of scientific research and education, enable more effective and efficient research, and may have future economic benefits through the discovery of new oil and gas accumulations, and mineral deposits.

  20. Integrity Constraint Monitoring in Software Development: Proposed Architectures

    NASA Technical Reports Server (NTRS)

    Fernandez, Francisco G.

    1997-01-01

    In the development of complex software systems, designers are required to obtain from many sources and manage vast amounts of knowledge of the system being built and communicate this information to personnel with a variety of backgrounds. Knowledge concerning the properties of the system, including the structure of, relationships between and limitations of the data objects in the system, becomes increasingly more vital as the complexity of the system and the number of knowledge sources increases. Ensuring that violations of these properties do not occur becomes steadily more challenging. One approach toward managing the enforcement of system properties, called context monitoring, uses a centralized repository of integrity constraints and a constraint satisfiability mechanism for dynamic verification of property enforcement during program execution. The focus of this paper is to describe possible software architectures that define a mechanism for dynamically checking the satisfiability of a set of constraints on a program. The next section describes the context monitoring approach in general. Section 3 gives an overview of the work currently being done toward the addition of an integrity constraint satisfiability mechanism to a high-level program language, SequenceL, and demonstrates how this model is being examined to develop a general software architecture. Section 4 describes possible architectures for a general constraint satisfiability mechanism, as well as an alternative approach that uses embedded database queries in lieu of an external monitor. The paper concludes with a brief summary outlining the current state of the research and future work.
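
    To make the context-monitoring idea above concrete, the sketch below shows a toy constraint monitor in Python: a centralized registry of named integrity constraints that the running program asks to re-check whenever it mutates its data objects. The class, constraint names, and sample state are invented for illustration and do not represent the SequenceL mechanism or the architectures proposed in the paper.

```python
# Minimal sketch of context monitoring: a central repository of integrity
# constraints checked dynamically against program state. Illustrative only.
from typing import Any, Callable, Dict, List, Tuple


class ConstraintMonitor:
    def __init__(self) -> None:
        # Centralized repository of named integrity constraints.
        self._constraints: List[Tuple[str, Callable[[Dict[str, Any]], bool]]] = []

    def register(self, name: str, predicate: Callable[[Dict[str, Any]], bool]) -> None:
        self._constraints.append((name, predicate))

    def check(self, state: Dict[str, Any]) -> None:
        # Raise on the first violated constraint; a real monitor might log instead.
        for name, predicate in self._constraints:
            if not predicate(state):
                raise AssertionError(f"integrity constraint violated: {name}")


monitor = ConstraintMonitor()
monitor.register("non_negative_balance", lambda s: s["balance"] >= 0)
monitor.register("ids_unique", lambda s: len(s["ids"]) == len(set(s["ids"])))

state = {"balance": 42, "ids": [1, 2, 3]}
monitor.check(state)          # passes
state["balance"] = -1
# monitor.check(state)        # would now raise AssertionError
```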

  1. Web Based Autonomous Geophysical/Hydrological Monitoring of the Gilt Edge Mine Site: Implementation and Results

    NASA Astrophysics Data System (ADS)

    Versteeg, R. J.; Wangerud, K.; Mattson, E.; Ankeny, M.; Richardson, A.; Heath, G.

    2005-05-01

    The Ruby Gulch repository at the Gilt Edge Mine Superfund site is a capped waste rock repository. Early in the system design, EPA and its subcontractor, the Bureau of Reclamation, recognized the need for a long-term monitoring system to provide information on the repository behavior with the following objectives: (1) provide information on the integrity of the newly constructed surface cover and diversion system; (2) continually assess the waste's hydrological and geochemical behavior, so that rational decisions can be made for the operation of this cover and liner system; (3) give stakeholders easy access to information pertaining to system performance; and (4) integrate a variety of data sources to produce information that could be used to enhance future cover designs. Through discussions between EPA, the Bureau of Reclamation, and Idaho National Laboratory, a long-term monitoring system was designed and implemented allowing EPA to meet these objectives. This system was designed to provide a cost-effective way to deal with massive amounts of data and information, subject to the following specifications: (1) data acquisition should occur autonomously and automatically; (2) data management, processing, and presentation should be automated as much as possible; (3) users should be able to access all data and information remotely through a web browser. The INL long-term monitoring system integrates the data from a set of 522 resistivity electrodes, consisting of 462 surface electrodes and 60 borehole electrodes (in 4 wells with 15 electrodes each), an outflow meter at the toe of the repository, an autonomous, remotely accessible weather station, and four wells (average depth of 250 feet) with thermocouples, pressure transducers, and sampling ports for water and air. The monitoring system has now been in operation for over a year and has collected data continuously over this period. Results from this system have shown both the diurnal variation in rockmass behavior and the movement of water through the waste (allowing residence time to be estimated), and are leading to a comprehensive model of the repository behavior. Due to the sheer volume of data, a user-driven interface allows users to create their own views of the different datasets.

  2. Health care transformation through collaboration on open-source informatics projects: integrating a medical applications platform, research data repository, and patient summarization.

    PubMed

    Klann, Jeffrey G; McCoy, Allison B; Wright, Adam; Wattanasin, Nich; Sittig, Dean F; Murphy, Shawn N

    2013-05-30

    The Strategic Health IT Advanced Research Projects (SHARP) program seeks to conquer well-understood challenges in medical informatics through breakthrough research. Two SHARP centers have found alignment in their methodological needs: (1) members of the National Center for Cognitive Informatics and Decision-making (NCCD) have developed knowledge bases to support problem-oriented summarizations of patient data, and (2) the Substitutable Medical Apps, Reusable Technologies (SMART) center has developed a platform for reusable medical apps that can run on participating systems connected to various electronic health records (EHRs). Combining the work of these two centers will ensure wide dissemination of new methods for synthesized views of patient data. Informatics for Integrating Biology and the Bedside (i2b2) is an NIH-funded clinical research data repository platform in use at over 100 sites worldwide. By also working with a co-occurring initiative to SMART-enable i2b2, we can confidently write one app that can be used extremely broadly. Our goal was to facilitate development of intuitive, problem-oriented views of the patient record using NCCD knowledge bases that would run in any EHR. To do this, we developed a collaboration between the two SHARPs and an NIH center, i2b2. First, we implemented collaborative tools to connect researchers at three institutions. Next, we developed a patient summarization app using the SMART platform and a previously validated NCCD problem-medication linkage knowledge base derived from the National Drug File-Reference Terminology (NDF-RT). Finally, to SMART-enable i2b2, we implemented two new Web service "cells" that expose the SMART application programming interface (API), and we made changes to the Web interface of i2b2 to host a "carousel" of SMART apps. We deployed our SMART-based, NDF-RT-derived patient summarization app in this SMART-i2b2 container. It displays a problem-oriented view of medications and presents a line-graph display of laboratory results. This summarization app can be run in any EHR environment that either supports SMART or runs SMART-enabled i2b2. This i2b2 "clinical bridge" demonstrates a pathway for reusable app development that does not require EHR vendors to immediately adopt the SMART API. Apps can be developed in SMART and run by clinicians in the i2b2 repository, reusing clinical data extracted from EHRs. This may encourage the adoption of SMART by supporting SMART app development until EHRs adopt the platform. It also allows a new variety of clinical SMART apps, fueled by the broad aggregation of data types available in research repositories. The app (including its knowledge base) and SMART-i2b2 are open-source and freely available for download.
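
    The following sketch illustrates, in miniature, the kind of problem-oriented grouping such a summarization app performs: medications are clustered under the problems they are linked to in a problem-medication knowledge base. The linkage table, drug names, and function below are invented for the example; the actual app draws on an NDF-RT-derived knowledge base and retrieves data through the SMART API rather than from in-memory dictionaries.

```python
# Illustrative sketch of problem-oriented medication grouping. The linkage
# table and drug names are made up; they stand in for an NDF-RT-derived
# knowledge base queried through a clinical data platform.
from collections import defaultdict

# Hypothetical problem -> medication linkage (stand-in for the knowledge base).
PROBLEM_MED_LINKS = {
    "hypertension": {"lisinopril", "hydrochlorothiazide"},
    "type 2 diabetes": {"metformin"},
}

def summarize(problems, medications):
    """Group a patient's medication list under the problems they likely treat."""
    summary = defaultdict(list)
    unlinked = []
    for med in medications:
        linked = [p for p in problems if med in PROBLEM_MED_LINKS.get(p, set())]
        if linked:
            for p in linked:
                summary[p].append(med)
        else:
            unlinked.append(med)
    return dict(summary), unlinked

summary, unlinked = summarize(
    problems=["hypertension", "type 2 diabetes"],
    medications=["lisinopril", "metformin", "aspirin"],
)
print(summary)    # {'hypertension': ['lisinopril'], 'type 2 diabetes': ['metformin']}
print(unlinked)   # ['aspirin']
```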

  3. A method and software framework for enriching private biomedical sources with data from public online repositories.

    PubMed

    Anguita, Alberto; García-Remesal, Miguel; Graf, Norbert; Maojo, Victor

    2016-04-01

    Modern biomedical research relies on the semantic integration of heterogeneous data sources to find data correlations. Researchers access multiple datasets of disparate origin and identify elements (e.g., genes, compounds, pathways) that lead to interesting correlations. Normally, they must refer to additional public databases (e.g., scientific literature, published clinical trial results) to enrich the information about the identified entities. While semantic integration techniques have traditionally focused on providing homogeneous access to private datasets (thus helping automate the first part of the research), and different solutions exist for browsing public data, there is still a need for tools that facilitate merging public repositories with private datasets. This paper presents a framework that automatically locates public data of interest to the researcher and semantically integrates it with existing private datasets. The framework has been designed as an extension of traditional data integration systems, and has been validated with an existing data integration platform from a European research project by integrating a private biological dataset with data from the National Center for Biotechnology Information (NCBI). Copyright © 2016 Elsevier Inc. All rights reserved.
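
    As a rough illustration of the enrichment step described above, the sketch below pulls public records for a gene symbol found in a private dataset using NCBI's E-utilities REST interface. It is a generic example, not the paper's framework, which mediates such lookups through its semantic integration layer; the gene symbol and query parameters are placeholders.

```python
# Sketch of enriching a locally identified entity (a gene symbol) with public
# NCBI records via the E-utilities REST interface. Generic illustration only.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def fetch_gene_summaries(symbol: str, organism: str = "Homo sapiens", retmax: int = 3):
    # Step 1: search the NCBI Gene database for the symbol.
    search = requests.get(
        f"{EUTILS}/esearch.fcgi",
        params={"db": "gene", "term": f"{symbol}[sym] AND {organism}[orgn]",
                "retmode": "json", "retmax": retmax},
        timeout=30,
    ).json()
    ids = search["esearchresult"]["idlist"]
    if not ids:
        return {}
    # Step 2: fetch summaries for the matching gene records.
    summary = requests.get(
        f"{EUTILS}/esummary.fcgi",
        params={"db": "gene", "id": ",".join(ids), "retmode": "json"},
        timeout=30,
    ).json()
    return summary["result"]

# Example usage: enrich a private dataset entity with public NCBI information.
# print(fetch_gene_summaries("TP53"))
```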

  4. Metadata management and semantics in microarray repositories.

    PubMed

    Kocabaş, F; Can, T; Baykal, N

    2011-12-01

    The number of microarray and other high-throughput experiments on primary repositories keeps increasing, as do the size and complexity of the results in response to biomedical investigations. Initiatives have been started on standardization of content, object model, exchange format and ontology. However, there are backlogs and an inability to exchange data between microarray repositories, which indicate that there is a great need for a standard format and data management. We have introduced a metadata framework that includes a metadata card and semantic nets that make experimental results visible, understandable and usable. These are encoded in syntax encoding schemes and represented in RDF (Resource Description Framework), can be integrated with other metadata cards and semantic nets, and can be exchanged, shared and queried. We demonstrated the performance and potential benefits through a case study on a selected microarray repository. We concluded that the backlogs can be reduced and that exchanging information and posing knowledge-discovery questions become possible with the use of this metadata framework.
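
    A minimal sketch of how a "metadata card" might be encoded as RDF triples with rdflib is shown below. The namespace and property names are invented for illustration; the paper defines its own card fields, syntax encoding schemes, and semantic-net structure.

```python
# Minimal sketch of encoding a metadata card for a microarray experiment as
# RDF triples. The vocabulary (namespace and properties) is illustrative only.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/metadata-card#")

g = Graph()
g.bind("ex", EX)

card = EX["experiment/E-0001"]
g.add((card, RDF.type, EX.MetadataCard))
g.add((card, EX.platform, Literal("Affymetrix HG-U133A")))
g.add((card, EX.organism, Literal("Homo sapiens")))
g.add((card, EX.tissue, Literal("liver")))
g.add((card, EX.sampleCount, Literal(24)))

# Serialized cards can be exchanged, merged with other cards, and queried
# with SPARQL, which is the kind of sharing the abstract describes.
print(g.serialize(format="turtle"))
```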

  5. Development of a sorption data base for the cementitious near-field of a repository for radioactive waste

    NASA Astrophysics Data System (ADS)

    Wieland, E.; Bradbury, M. H.; van Loon, L.

    2003-01-01

    The migration of radionuclides within a repository for radioactive waste is retarded due to interaction with the engineered barrier system. Sorption processes play a decisive role in the retardation of radionuclides in the repository environment, and thus, the development of sorption data bases (SDBs) is an important task and an integral part of performance assessment. The methodology applied in the development of a SDB for the cementitious near-field of a repository for long-lived intermediate-level waste is presented in this study. The development of such a SDB requires knowledge of the chemical conditions of the near-field and information on the uptake process of radionuclides by hardened cement paste. The principles upon which the selection of the “best available” laboratory sorption values is based are outlined. The influence of cellulose degradation products, cement additives and cement-derived colloids on the sorption behaviour of radionuclides is addressed in conjunction with the development of the SDB.

  6. Convection and thermal radiation analytical models applicable to a nuclear waste repository room

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, B.W.

    1979-01-17

    Time-dependent temperature distributions in a deep geologic nuclear waste repository have a direct impact on the physical integrity of the emplaced canisters and on the design of retrievability options. This report (1) identifies the thermodynamic properties and physical parameters of three convection regimes - forced, natural, and mixed; (2) defines the convection correlations applicable to calculating heat flow in a ventilated (forced-air) and in a nonventilated nuclear waste repository room; and (3) delineates a computer code that (a) computes and compares the floor-to-ceiling heat flow by convection and radiation, and (b) determines the nonlinear equivalent conductivity table for a repository room. (The tables permit the use of the ADINAT code to model surface-to-surface radiation and the TRUMP code to employ two different emissivity properties when modeling radiation exchange between the surface of two different materials.) The analysis shows that thermal radiation dominates heat flow modes in a nuclear waste repository room.
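
    For orientation, the sketch below compares radiative and convective floor-to-ceiling heat flux in the simplified way such a code might, using the Stefan-Boltzmann law for radiation and a film coefficient for natural convection. The emissivity, convection coefficient, and temperatures are placeholder values, not data from the report.

```python
# Back-of-the-envelope comparison of radiative and convective floor-to-ceiling
# heat flux. All numerical values below are placeholders for illustration.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/m^2/K^4

def radiative_flux(t_floor_k, t_ceiling_k, emissivity=0.9):
    # Grey-body exchange between two large parallel surfaces (simplified form).
    return emissivity * SIGMA * (t_floor_k**4 - t_ceiling_k**4)

def convective_flux(t_floor_k, t_ceiling_k, h=2.0):
    # Natural-convection film coefficient h in W/m^2/K (placeholder value).
    return h * (t_floor_k - t_ceiling_k)

t_floor, t_ceiling = 330.0, 310.0   # K, hypothetical room temperatures
print("radiation :", round(radiative_flux(t_floor, t_ceiling), 1), "W/m^2")
print("convection:", round(convective_flux(t_floor, t_ceiling), 1), "W/m^2")
# With these placeholder numbers radiation dominates, consistent with the
# report's conclusion about heat flow modes in a repository room.
```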

  7. Generation and validation of a universal perinatal database and biospecimen repository: PeriBank.

    PubMed

    Antony, K M; Hemarajata, P; Chen, J; Morris, J; Cook, C; Masalas, D; Gedminas, M; Brown, A; Versalovic, J; Aagaard, K

    2016-11-01

    There is a dearth of biospecimen repositories available to perinatal researchers. In order to address this need, here we describe the methodology used to establish such a resource. With the collaboration of MedSci.net, we generated an online perinatal database with 847 fields of clinical information. Simultaneously, we established a biospecimen repository of the same clinical participants. The demographic and clinical outcomes data are described for the first 10 000 participants enrolled. The demographic characteristics are consistent with the demographics of the delivery hospitals. Quality analysis of the biospecimens reveals variation in very few analytes. Furthermore, since the creation of PeriBank, we have demonstrated validity of the database and tissue integrity of the biospecimen repository. Here we establish that the creation of a universal perinatal database and biospecimen collection is not only possible, but allows for the performance of state-of-the-science translational perinatal research and is a potentially valuable resource to academic perinatal researchers.

  8. 3D numerical modelling of the thermal state of deep geological nuclear waste repositories

    NASA Astrophysics Data System (ADS)

    Butov, R. A.; Drobyshevsky, N. I.; Moiseenko, E. V.; Tokarev, Yu. N.

    2017-09-01

    One of the important aspects of high-level radioactive waste (HLW) disposal in deep geological repositories is ensuring the integrity of the engineered barriers, which is, among other phenomena, considerably influenced by thermal loads. As the HLW produce a significant amount of heat, the design of the repository should maintain the balance between the cost-effectiveness of the construction and the sufficiency of the safety margins, including those imposed on the thermal conditions of the barriers. The 3D finite-element computer code FENIA was developed as a tool for simulation of thermal processes in deep geological repositories. Models for mechanical phenomena and groundwater hydraulics will be added later, resulting in a fully coupled thermo-hydro-mechanical (THM) solution. Long-term simulations of the thermal state were performed for two possible layouts of the repository. One was based on the proposed Russian repository project, and the other accommodates a larger HLW amount within the same space. The obtained results describe the spatial and temporal evolution of the temperature field inside the repository and in the surrounding rock for 3500 years. These results show that practically all generated heat was ultimately absorbed by the host rock without any significant temperature increase. Still, in the short term the temperature maximum exceeds 100 °C even for the smaller HLW amount, and for the larger amount the local temperature remains above 100 °C for a considerable time. Thus, the substantiation of the long-term stability of the repository would require an extensive study of the material properties and behaviour in order to remove excessive conservatism from the simulations and to reduce the uncertainty of the input data.
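
    The toy calculation below illustrates the general type of thermal problem such a code solves: one-dimensional transient heat conduction in rock with a decaying heat source, advanced by an explicit finite-difference scheme. All material properties, geometry, and source parameters are placeholders, not FENIA input data, and the scheme is far simpler than a 3D finite-element model.

```python
# Toy 1D transient heat-conduction sketch with a decaying heat source,
# illustrative of repository thermal calculations. Placeholder values only.
import numpy as np

alpha = 1.0e-6                      # rock thermal diffusivity, m^2/s (placeholder)
k = 2.5                             # rock thermal conductivity, W/m/K (placeholder)
length, nx = 50.0, 101              # 1D domain across the emplacement, m
dx = length / (nx - 1)
dt = 0.4 * dx**2 / alpha            # explicit-scheme stable time step, s
q0 = 50.0                           # initial heat generation at the source node, W/m^3
year = 3.15e7                       # seconds per year (approximate)
half_life = 30.0 * year             # source decay half-life (placeholder)

T = np.full(nx, 15.0)               # initial rock temperature, deg C
peak = T.max()
steps = int(200 * year / dt)        # simulate 200 years
for n in range(steps):
    q = np.zeros(nx)
    q[nx // 2] = q0 * 0.5 ** (n * dt / half_life)       # decaying heat source
    lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2      # 1D Laplacian (interior nodes)
    T[1:-1] += dt * (alpha * lap + (alpha / k) * q[1:-1])
    T[0], T[-1] = 15.0, 15.0                            # fixed far-field boundaries
    peak = max(peak, T.max())

print("peak temperature over 200 years:", round(peak, 1), "deg C")
```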

  9. Development of the performance confirmation program at YUCCA mountain, nevada

    USGS Publications Warehouse

    LeCain, G.D.; Barr, D.; Weaver, D.; Snell, R.; Goodin, S.W.; Hansen, F.D.

    2006-01-01

    The Yucca Mountain Performance Confirmation program consists of tests, monitoring activities, experiments, and analyses to evaluate the adequacy of assumptions, data, and analyses that form the basis of the conceptual and numerical models of flow and transport associated with a proposed radioactive waste repository at Yucca Mountain, Nevada. The Performance Confirmation program uses an eight-stage risk-informed, performance-based approach. Selection of the Performance Confirmation activities for inclusion in the Performance Confirmation program was done using a risk-informed, performance-based decision analysis. The result of this analysis was a Performance Confirmation base portfolio that consists of 20 activities. The 20 Performance Confirmation activities include geologic, hydrologic, and construction/engineering testing. Some of the activities began during site characterization, and others will begin during construction or post-emplacement and continue until repository closure.

  10. Summary of International Waste Management Programs (LLNL Input to SNL L3 MS: System-Wide Integration and Site Selection Concepts for Future Disposition Options for HLW)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenberg, Harris R.; Blink, James A.; Halsey, William G.

    2011-08-11

    The Used Fuel Disposition Campaign (UFDC) within the Department of Energy’s Office of Nuclear Energy (DOE-NE) Fuel Cycle Technology (FCT) program has been tasked with investigating the disposal of the nation’s spent nuclear fuel (SNF) and high-level nuclear waste (HLW) for a range of potential waste forms and geologic environments. This Lessons Learned task is part of a multi-laboratory effort, with this LLNL report providing input to a Level 3 SNL milestone (System-Wide Integration and Site Selection Concepts for Future Disposition Options for HLW). The work package number is: FTLL11UF0328; the work package title is: Technical Bases / Lessons Learned; the milestone number is: M41UF032802; and the milestone title is: “LLNL Input to SNL L3 MS: System-Wide Integration and Site Selection Concepts for Future Disposition Options for HLW”. The system-wide integration effort will integrate all aspects of waste management and disposal, integrating the waste generators, interim storage, transportation, and ultimate disposal at a repository site. The review of international experience in these areas is required to support future studies that address all of these components in an integrated manner. Note that this report is a snapshot of nuclear power infrastructure and international waste management programs that is current as of August 2011, with one notable exception. No attempt has been made to discuss the currently evolving world-wide response to the tragic consequences of the earthquake and tsunami that devastated Japan on March 11, 2011, leaving more than 15,000 people dead and more than 8,000 people missing, and severely damaging the Fukushima Daiichi nuclear power complex. Continuing efforts in FY 2012 will update the data, and summarize it in an Excel spreadsheet for easy comparison and assist in the knowledge management of the study cases.

  11. 10 CFR 63.144 - Quality assurance program change.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... assurance program information that duplicates language in quality assurance regulatory guides and quality... 10 Energy 2 2013-01-01 2013-01-01 false Quality assurance program change. 63.144 Section 63.144... REPOSITORY AT YUCCA MOUNTAIN, NEVADA Quality Assurance § 63.144 Quality assurance program change. Changes to...

  12. 10 CFR 63.144 - Quality assurance program change.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... assurance program information that duplicates language in quality assurance regulatory guides and quality... 10 Energy 2 2014-01-01 2014-01-01 false Quality assurance program change. 63.144 Section 63.144... REPOSITORY AT YUCCA MOUNTAIN, NEVADA Quality Assurance § 63.144 Quality assurance program change. Changes to...

  13. 10 CFR 63.144 - Quality assurance program change.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... assurance program information that duplicates language in quality assurance regulatory guides and quality... 10 Energy 2 2012-01-01 2012-01-01 false Quality assurance program change. 63.144 Section 63.144... REPOSITORY AT YUCCA MOUNTAIN, NEVADA Quality Assurance § 63.144 Quality assurance program change. Changes to...

  14. LTPP InfoPave

    DOT National Transportation Integrated Search

    2015-12-29

    The LTPP program was initiated in 1987 to satisfy a wide range of pavement information needs. Over the years, the program has accumulated a vast repository of research quality data, extensive documentation, and related tools, which compose LTPPs c...

  15. Building and Using Digital Repository Certifications across Science

    NASA Astrophysics Data System (ADS)

    McIntosh, L.

    2017-12-01

    When scientific recommendations are made based upon research, the quality and integrity of the underlying data should be rigorous enough to verify the claims, and the data should reside in a trusted location. Key to ensuring the transparency and verifiability of research, reproducibility hinges not only on the availability of the documentation, analyses, and data, but also on the ongoing accessibility and viability of the files and documents, enhanced through a process of curation. The Research Data Alliance (RDA) is an international, community-driven, action-oriented, virtual organization committed to enabling the open sharing of data by building social and technical bridges. Within the RDA, multiple groups are working on consensus-building around the certification of digital repositories across scientific domains. For this section of the panel, we will discuss the work to date on repository certification from the RDA perspective.

  16. Multimedia data repository for the World Wide Web

    NASA Astrophysics Data System (ADS)

    Chen, Ken; Lu, Dajin; Xu, Duanyi

    1998-08-01

    This paper introduces the design and implementation of a Multimedia Data Repository serving as a multimedia information system, which provides users a Web-accessible, platform-independent interface to query, browse, and retrieve multimedia data such as images, graphics, audio, and video from a large multimedia data repository. By integrating the multimedia DBMS, in which the textual information and samples of the multimedia data are organized and stored, and the Web server into the Microsoft ActiveX Server Framework, users can access the DBMS and query the information simply by using a Web browser on the client side. The original multimedia data can then be located and transmitted through the Internet from the tertiary storage device, a 400 CD-ROM optical jukebox on the server side, to the client side for further use.

  17. Warehousing re-annotated cancer genes for biomarker meta-analysis.

    PubMed

    Orsini, M; Travaglione, A; Capobianco, E

    2013-07-01

    Translational research in cancer genomics assigns a fundamental role to bioinformatics in support of candidate gene prioritization with regard to both biomarker discovery and target identification for drug development. Efforts in both such directions rely on the existence and constant update of large repositories of gene expression data and omics records obtained from a variety of experiments. Users who interactively interrogate such repositories may have problems in retrieving sample fields that present limited associated information, due for instance to incomplete entries or sometimes unusable files. Cancer-specific data sources present similar problems. Given that source integration usually improves data quality, one of the objectives is keeping the computational complexity sufficiently low to allow an optimal assimilation and mining of all the information. In particular, the scope of integrating intraomics data can be to improve the exploration of gene co-expression landscapes, while the scope of integrating interomics sources can be that of establishing genotype-phenotype associations. Both integrations are relevant to cancer biomarker meta-analysis, as the proposed study demonstrates. Our approach is based on re-annotating cancer-specific data available at the EBI's ArrayExpress repository and building a data warehouse aimed at biomarker discovery and validation studies. Cancer genes are organized by tissue, with biomedical and clinical evidence combined to increase reproducibility and consistency of results. For better comparative evaluation, multiple queries have been designed to efficiently address all types of experiments and platforms, and allow for retrieval of sample-related information, such as cell line, disease state and clinical aspects. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  18. RSAT: regulatory sequence analysis tools.

    PubMed

    Thomas-Chollier, Morgane; Sand, Olivier; Turatsinze, Jean-Valéry; Janky, Rekin's; Defrance, Matthieu; Vervisch, Eric; Brohée, Sylvain; van Helden, Jacques

    2008-07-01

    The regulatory sequence analysis tools (RSAT, http://rsat.ulb.ac.be/rsat/) is a software suite that integrates a wide collection of modular tools for the detection of cis-regulatory elements in genome sequences. The suite includes programs for sequence retrieval, pattern discovery, phylogenetic footprint detection, pattern matching, genome scanning and feature map drawing. Random controls can be performed with random gene selections or by generating random sequences according to a variety of background models (Bernoulli, Markov). Beyond the original word-based pattern-discovery tools (oligo-analysis and dyad-analysis), we recently added a battery of tools for matrix-based detection of cis-acting elements, with some original features (adaptive background models, Markov-chain estimation of P-values) that do not exist in other matrix-based scanning tools. The web server offers an intuitive interface, where each program can be accessed either separately or connected to the other tools. In addition, the tools are now available as web services, enabling their integration in programmatic workflows. Genomes are regularly updated from various genome repositories (NCBI and EnsEMBL) and 682 organisms are currently supported. Since 1998, the tools have been used by several hundreds of researchers from all over the world. Several predictions made with RSAT were validated experimentally and published.

  19. Case-oriented computer-based-training in radiology: concept, implementation and evaluation

    PubMed Central

    Dugas, Martin; Trumm, Christoph; Stäbler, Axel; Pander, Ernst; Hundt, Walter; Scheidler, Jurgen; Brüning, Roland; Helmberger, Thomas; Waggershauser, Tobias; Matzko, Matthias; Reiser, Maximillian

    2001-01-01

    Background Providing high-quality clinical cases is important for teaching radiology. We developed, implemented and evaluated a program for a university hospital to support this task. Methods The system was built with Intranet technology and connected to the Picture Archiving and Communications System (PACS). It contains cases for every user group from students to attendants and is structured according to the ACR-code (American College of Radiology) [2]. Each department member was given an individual account, could gather his teaching cases and put the completed cases into the common database. Results During 18 months 583 cases containing 4136 images involving all radiological techniques were compiled and 350 cases put into the common case repository. Workflow integration as well as individual interest influenced the personal efforts to participate but an increasing number of cases and minor modifications of the program improved user acceptance continuously. 101 students went through an evaluation which showed a high level of acceptance and a special interest in elaborate documentation. Conclusion Electronic access to reference cases for all department members anytime anywhere is feasible. Critical success factors are workflow integration, reliability, efficient retrieval strategies and incentives for case authoring. PMID:11686856

  20. Enabling long-term oceanographic research: Changing data practices, information management strategies and informatics

    NASA Astrophysics Data System (ADS)

    Baker, Karen S.; Chandler, Cynthia L.

    2008-09-01

    Interdisciplinary global ocean science requires new ways of thinking about data and data management. With new data policies and growing technological capabilities, datasets of increasing variety and complexity are being made available digitally and data management is coming to be recognized as an integral part of scientific research. To meet the changing expectations of scientists collecting data and of data reuse by others, collaborative strategies involving diverse teams of information professionals are developing. These changes are stimulating the growth of information infrastructures that support multi-scale sampling, data repositories, and data integration. Two examples of oceanographic projects incorporating data management in partnership with science programs are discussed: the Palmer Station Long-Term Ecological Research program (Palmer LTER) and the United States Joint Global Ocean Flux Study (US JGOFS). Lessons learned from a decade of data management within these communities provide an experience base from which to develop information management strategies—short-term and long-term. Ocean Informatics provides one example of a conceptual framework for managing the complexities inherent to sharing oceanographic data. Elements are introduced that address the economies-of-scale and the complexities-of-scale pertinent to a broader vision of information management and scientific research.

  1. BioSPICE: access to the most current computational tools for biologists.

    PubMed

    Garvey, Thomas D; Lincoln, Patrick; Pedersen, Charles John; Martin, David; Johnson, Mark

    2003-01-01

    The goal of the BioSPICE program is to create a framework that provides biologists access to the most current computational tools. At the program midpoint, the BioSPICE member community has produced a software system that comprises contributions from approximately 20 participating laboratories integrated under the BioSPICE Dashboard, and a methodology for continued software integration. These contributions are integrated by the BioSPICE Dashboard, a graphical environment that combines Open Agent Architecture and NetBeans software technologies in a coherent, biologist-friendly user interface. The current Dashboard permits data sources, models, simulation engines, and output displays provided by different investigators and running on different machines to work together across a distributed, heterogeneous network. Among several other features, the Dashboard enables users to create graphical workflows by configuring and connecting available BioSPICE components. Anticipated future enhancements to BioSPICE include a notebook capability that will permit researchers to browse and compile data to support model building, a biological model repository, and tools to support the development, control, and data reduction of wet-lab experiments. In addition to the BioSPICE software products, a project website supports information exchange and community building.

  2. A perspective on the proliferation risks of plutonium mines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyman, E.S.

    1996-05-01

    The program of geologic disposal of spent fuel and other plutonium-containing materials is increasingly becoming the target of criticism by individuals who argue that, in the future, repositories may become low-cost sources of fissile material for nuclear weapons. This paper attempts to outline a consistent framework for analyzing the proliferation risks of these so-called "plutonium mines" and putting them into perspective. First, it is emphasized that the attractiveness of plutonium in a repository as a source of weapons material depends on its accessibility relative to other sources of fissile material. Then, the notion of a "material production standard" (MPS) is proposed: namely, that the proliferation risks posed by geologic disposal will be acceptable if one can demonstrate, under a number of reasonable scenarios, that the recovery of plutonium from a repository is likely to be as difficult as new production of fissile material. A preliminary analysis suggests that the range of circumstances under which current mined repository concepts would fail to meet this standard is fairly narrow. Nevertheless, a broad application of the MPS may impose severe restrictions on repository design. In this context, the relationship of repository design parameters to ease of recovery is discussed.

  3. 10 CFR 60.161 - Training and certification program.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Training and certification program. 60.161 Section 60.161 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES Training and Certification of Personnel § 60.161 Training and certification program. DOE shall...

  4. CellBase, a comprehensive collection of RESTful web services for retrieving relevant biological information from heterogeneous sources.

    PubMed

    Bleda, Marta; Tarraga, Joaquin; de Maria, Alejandro; Salavert, Francisco; Garcia-Alonso, Luz; Celma, Matilde; Martin, Ainoha; Dopazo, Joaquin; Medina, Ignacio

    2012-07-01

    During the past years, advances in high-throughput technologies have produced an unprecedented growth in the number and size of repositories and databases storing relevant biological data. Today, there is more biological information than ever but, unfortunately, the current status of many of these repositories is far from optimal. Some of the most common problems are that the information is spread out in many small databases; frequently there are different standards among repositories and some databases are no longer supported or they contain too specific and unconnected information. In addition, data size is increasingly becoming an obstacle when accessing or storing biological data. All these issues make it very difficult to extract and integrate information from different sources, to analyze experiments or to access and query this information in a programmatic way. CellBase provides a solution to the growing need for integration by easing access to biological data. CellBase implements a set of RESTful web services that query a centralized database containing the most relevant biological data sources. The database is hosted on our servers and is regularly updated. CellBase documentation can be found at http://docs.bioinfo.cipf.es/projects/cellbase.
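
    The snippet below sketches how a client script might consume a RESTful annotation service of this kind. The host and endpoint path are placeholders written in the general style of such services; the actual URL scheme and parameters should be taken from the CellBase documentation.

```python
# Sketch of consuming a RESTful annotation service from a script. The host
# and endpoint layout are placeholders, not the documented CellBase API.
import requests

BASE = "https://example.org/cellbase/webservices/rest/v4"   # placeholder host

def fetch_gene(species: str, gene: str) -> dict:
    url = f"{BASE}/{species}/feature/gene/{gene}/info"
    response = requests.get(url, params={"assembly": "grch38"}, timeout=30)
    response.raise_for_status()
    return response.json()

# Hypothetical usage: retrieve centrally curated annotation for a gene and
# merge it with locally stored experimental results.
# info = fetch_gene("hsapiens", "BRCA2")
# print(info)
```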

  5. Metadata Repository for Improved Data Sharing and Reuse Based on HL7 FHIR.

    PubMed

    Ulrich, Hannes; Kock, Ann-Kristin; Duhm-Harbeck, Petra; Habermann, Jens K; Ingenerf, Josef

    2016-01-01

    Unreconciled data structures and formats are a common obstacle to the urgently required sharing and reuse of data within healthcare and medical research. Within the North German Tumor Bank of Colorectal Cancer, clinical and sample data, based on a harmonized data set, are collected and can be pooled by using a hospital-integrated Research Data Management System supporting biobank and study management. Adding further partners who are not using the core data set requires manual adaptations and mapping of data elements. To reduce this manual intervention, and focusing on the reuse of heterogeneous healthcare instance data (value level) and data elements (metadata level), a metadata repository has been developed. The metadata repository is an ISO 11179-3 conformant server application built for annotating and mediating data elements. The implemented architecture includes the translation of metadata information about data elements into the FHIR standard using the FHIR Data Element resource with the ISO 11179 Data Element Extensions. The FHIR-based processing allows exchange of data elements with clinical and research IT systems as well as with other metadata systems. With increasingly annotated and harmonized data elements, data quality and integration can be improved for successfully enabling data analytics and decision support.
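
    As a rough illustration, the snippet below assembles an annotated data element in a FHIR-style JSON shape of the kind the abstract describes. Field names follow the general structure of a FHIR DataElement resource and should be checked against the targeted FHIR release; the extension URL and terminology binding are placeholders.

```python
# Sketch of a FHIR-style DataElement export for an annotated data element.
# Field names are indicative; the extension URL and code are placeholders.
import json

data_element = {
    "resourceType": "DataElement",
    "id": "tumor-grade",
    "status": "active",
    "name": "TumorGrade",
    "element": [{
        "path": "TumorGrade",
        "definition": "Histologic grade of the colorectal tumour specimen.",
        "type": [{"code": "CodeableConcept"}],
        "code": [{
            "system": "http://snomed.info/sct",          # binding target
            "code": "SCT-CODE-PLACEHOLDER",              # placeholder code
            "display": "Histologic grade (placeholder)",
        }],
    }],
    "extension": [{
        "url": "http://example.org/fhir/StructureDefinition/iso11179-registration",
        "valueString": "recorded",                        # placeholder ISO 11179 metadata
    }],
}

print(json.dumps(data_element, indent=2))
```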

  6. NCBI2RDF: enabling full RDF-based access to NCBI databases.

    PubMed

    Anguita, Alberto; García-Remesal, Miguel; de la Iglesia, Diana; Maojo, Victor

    2013-01-01

    RDF has become the standard technology for enabling interoperability among heterogeneous biomedical databases. The NCBI provides access to a large set of life sciences databases through a common interface called Entrez. However, the latter does not provide RDF-based access to such databases, and, therefore, they cannot be integrated with other RDF-compliant databases and accessed via SPARQL query interfaces. This paper presents the NCBI2RDF system, aimed at providing RDF-based access to the complete NCBI data repository. This API creates a virtual endpoint for servicing SPARQL queries over different NCBI repositories and presenting to users the query results in SPARQL results format, thus enabling this data to be integrated and/or stored with other RDF-compliant repositories. SPARQL queries are dynamically resolved, decomposed, and forwarded to the NCBI-provided E-utilities programmatic interface to access the NCBI data. Furthermore, we show how our approach increases the expressiveness of the native NCBI querying system, allowing several databases to be accessed simultaneously. This feature significantly boosts productivity when working with complex queries and saves time and effort to biomedical researchers. Our approach has been validated with a large number of SPARQL queries, thus proving its reliability and enhanced capabilities in biomedical environments.
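
    The sketch below shows how a client might pose a SPARQL query to an endpoint of the kind NCBI2RDF exposes, using the standard SPARQL protocol over HTTP. The endpoint URL and the predicate names in the query are placeholders; the actual graph vocabulary is defined by the NCBI2RDF system.

```python
# Sketch of querying a SPARQL endpoint over HTTP. Endpoint and vocabulary
# are placeholders, not the actual NCBI2RDF graph model.
import requests

ENDPOINT = "https://example.org/ncbi2rdf/sparql"   # placeholder endpoint

QUERY = """
PREFIX ex: <http://example.org/ncbi#>
SELECT ?gene ?pubmedId WHERE {
  ?gene ex:symbol "BRCA1" .
  ?gene ex:citedIn ?article .
  ?article ex:pubmedId ?pubmedId .
} LIMIT 10
"""

response = requests.post(
    ENDPOINT,
    data={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=60,
)
response.raise_for_status()
for binding in response.json()["results"]["bindings"]:
    print(binding["gene"]["value"], binding["pubmedId"]["value"])
```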

  7. Industrial Program of Waste Management - Cigeo Project - 13033

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butez, Marc; Bartagnon, Olivier; Gagner, Laurent

    2013-07-01

    The French Planning Act of 28 June 2006 prescribed that a reversible repository in a deep geological formation be chosen as the reference solution for the long-term management of high-level and intermediate-level long-lived radioactive waste. It also entrusted the responsibility for further studies and design of the repository (named Cigeo) to the French Radioactive Waste Management Agency (Andra), in order for the review of the creation-license application to start in 2015 and, subject to its approval, the commissioning of the repository to take place in 2025. Andra is responsible for siting, designing, implementing, and operating the future geological repository, including operational and long-term safety and waste acceptance. Nuclear operators (Electricite de France (EDF), AREVA NC, and the French Commission in charge of Atomic Energy and Alternative Energies (CEA)) are technically and financially responsible for the waste they generate, with no limit in time. They provide Andra, on one hand, with waste-package-related input data, and on the other hand with their long-term industrial experience of high- and intermediate-level long-lived radwaste management and nuclear operation. Andra, EDF, AREVA and CEA established a cooperation agreement for strengthening their collaborations in these fields. Within this agreement Andra and the nuclear operators have defined an industrial program for waste management. This program includes the waste inventory to be taken into account for the design of the Cigeo project and the structural hypothesis underlying its phased development. It schedules the delivery of the different categories of waste and defines associated flows. (authors)

  8. Software support for Huntington's disease research.

    PubMed Central

    Conneally, P. M.; Gersting, J. M.; Gray, J. M.; Beidleman, K.; Wexler, N. S.; Smith, C. L.

    1991-01-01

    Huntington's disease (HD) is a hereditary disorder involving the central nervous system. Its effects are devastating to the affected person as well as to his or her family. The Department of Medical and Molecular Genetics at Indiana University (IU) plays an integral part in Huntington's disease research by providing computerized repositories of HD family information for researchers and families. The National Huntington's Disease Research Roster, founded in 1979 at IU, and the Huntington's Disease in Venezuela Project database contain information that has proven to be invaluable in the worldwide field of HD research. This paper addresses the types of information stored in each database, the pedigree database program (MEGADATS) used to manage the data, and significant findings that have resulted from access to the data. PMID:1839672

  9. A digital repository with an extensible data model for biobanking and genomic analysis management.

    PubMed

    Izzo, Massimiliano; Mortola, Francesco; Arnulfo, Gabriele; Fato, Marco M; Varesio, Luigi

    2014-01-01

    Molecular biology laboratories require extensive metadata to improve data collection and analysis. The heterogeneity of the collected metadata grows as research evolves into international multi-disciplinary collaborations with increasing data sharing among institutions. Single standardization is not feasible, and it becomes crucial to develop digital repositories with flexible and extensible data models, as in the case of modern integrated biobank management. We developed a novel data model in JSON format to describe heterogeneous data in a generic biomedical science scenario. The model is built on two hierarchical entities: processes and events, roughly corresponding to research studies and analysis steps within a single study. A number of sequential events can be grouped in a process, building up a hierarchical structure to track patient and sample history. Each event can produce new data. Data is described by a set of user-defined metadata, and may have one or more associated files. We integrated the model in a web-based digital repository with a data grid storage to manage large data sets located in geographically distinct areas. We built a graphical interface that allows authorized users to define new data types dynamically, according to their requirements. Operators compose queries on metadata fields using a flexible search interface and run them on the database and on the grid. We applied the digital repository to the integrated management of samples, patients and medical history in the BIT-Gaslini biobank. The platform currently manages 1800 samples of over 900 patients. Microarray data from 150 analyses are stored on the grid storage and replicated on two physical resources for preservation. The system is equipped with data integration capabilities with other biobanks for worldwide information sharing. Our data model enables users to continuously define flexible, ad hoc, and loosely structured metadata, for information sharing in specific research projects and purposes. This approach can sensibly improve interdisciplinary research collaboration and allows tracking of patients' clinical records, sample management information, and genomic data. The web interface allows the operators to easily manage, query, and annotate the files, without dealing with the technicalities of the data grid.
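
    A minimal sketch of the two-level process/event structure described above, expressed as plain JSON, is shown below. The field names, metadata keys, and file URI are invented for illustration; the paper defines its own schema.

```python
# Sketch of a process containing sequential events, each with user-defined
# metadata and optional files. Field names and values are illustrative only.
import json

process = {
    "type": "process",
    "id": "study-2014-001",
    "subject": {"patientId": "P-0931", "diagnosis": "neuroblastoma"},
    "events": [
        {
            "type": "event",
            "id": "evt-01",
            "operation": "sample collection",
            "metadata": {"tissue": "bone marrow", "volumeMl": 2.5},
            "files": [],
        },
        {
            "type": "event",
            "id": "evt-02",
            "operation": "microarray analysis",
            "metadata": {"platform": "example-array-44k", "normalization": "quantile"},
            "files": ["grid://storage.example.org/arrays/evt-02.CEL"],  # placeholder URI
        },
    ],
}

print(json.dumps(process, indent=2))
```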

  10. A digital repository with an extensible data model for biobanking and genomic analysis management

    PubMed Central

    2014-01-01

    Motivation Molecular biology laboratories require extensive metadata to improve data collection and analysis. The heterogeneity of the collected metadata grows as research evolves into international multi-disciplinary collaborations with increasing data sharing among institutions. Single standardization is not feasible, and it becomes crucial to develop digital repositories with flexible and extensible data models, as in the case of modern integrated biobank management. Results We developed a novel data model in JSON format to describe heterogeneous data in a generic biomedical science scenario. The model is built on two hierarchical entities: processes and events, roughly corresponding to research studies and analysis steps within a single study. A number of sequential events can be grouped in a process, building up a hierarchical structure to track patient and sample history. Each event can produce new data. Data is described by a set of user-defined metadata, and may have one or more associated files. We integrated the model in a web-based digital repository with a data grid storage to manage large data sets located in geographically distinct areas. We built a graphical interface that allows authorized users to define new data types dynamically, according to their requirements. Operators compose queries on metadata fields using a flexible search interface and run them on the database and on the grid. We applied the digital repository to the integrated management of samples, patients and medical history in the BIT-Gaslini biobank. The platform currently manages 1800 samples of over 900 patients. Microarray data from 150 analyses are stored on the grid storage and replicated on two physical resources for preservation. The system is equipped with data integration capabilities with other biobanks for worldwide information sharing. Conclusions Our data model enables users to continuously define flexible, ad hoc, and loosely structured metadata, for information sharing in specific research projects and purposes. This approach can sensibly improve interdisciplinary research collaboration and allows tracking of patients' clinical records, sample management information, and genomic data. The web interface allows the operators to easily manage, query, and annotate the files, without dealing with the technicalities of the data grid. PMID:25077808

  11. Importance of Data Management in a Long-term Biological Monitoring Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, Sigurd W; Brandt, Craig C; McCracken, Kitty

    2011-01-01

    The long-term Biological Monitoring and Abatement Program (BMAP) has always needed to collect and retain high-quality data on which to base its assessments of ecological status of streams and their recovery after remediation. Its formal quality assurance, data processing, and data management components all contribute to this need. The Quality Assurance Program comprehensively addresses requirements from various institutions, funders, and regulators, and includes a data management component. Centralized data management began a few years into the program. An existing relational database was adapted and extended to handle biological data. Data modeling enabled the program's database to process, store, and retrieve its data. The database's main data tables and several key reference tables are described. One of the most important related activities supporting long-term analyses was establishing standards for sampling site names, taxonomic identification, flagging, and other components. There are limitations. Some types of program data were not easily accommodated in the central systems, and many possible data-sharing and integration options are not easily accessible to investigators. The implemented relational database supports the transmittal of data to the Oak Ridge Environmental Information System (OREIS) as the permanent repository. From our experience we offer data management advice to other biologically oriented long-term environmental sampling and analysis programs.
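
    The snippet below sketches, with SQLite, the general shape of such a design: a main results table keyed to reference tables that standardize site names and taxonomic identifications, with a flag column for data quality. Table and column names and the sample rows are invented for the example and are not BMAP's actual schema.

```python
# Sketch of a main results table plus reference tables enforcing standardized
# site names and taxonomy. Schema and rows are hypothetical examples.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE site   (site_id INTEGER PRIMARY KEY, site_name TEXT UNIQUE, stream TEXT);
CREATE TABLE taxon  (taxon_id INTEGER PRIMARY KEY, scientific_name TEXT UNIQUE);
CREATE TABLE sample (
    sample_id   INTEGER PRIMARY KEY,
    site_id     INTEGER REFERENCES site(site_id),
    taxon_id    INTEGER REFERENCES taxon(taxon_id),
    sample_date TEXT,
    count_n     INTEGER,
    qa_flag     TEXT              -- standardized data-quality flag
);
""")
con.execute("INSERT INTO site VALUES (1, 'SITE-23.4', 'Example Creek')")
con.execute("INSERT INTO taxon VALUES (1, 'Etheostoma rufilineatum')")
con.execute("INSERT INTO sample VALUES (1, 1, 1, '1992-06-15', 14, 'OK')")

for row in con.execute("""
    SELECT s.sample_date, si.site_name, t.scientific_name, s.count_n
    FROM sample s JOIN site si USING (site_id) JOIN taxon t USING (taxon_id)
"""):
    print(row)
```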

  12. LTPP InfoPave Release 2017: What's New

    DOT National Transportation Integrated Search

    2017-01-01

    The LTPP program was initiated in 1987 to satisfy a wide range of pavement information needs. Over the years, the program has accumulated a vast repository of research quality data, extensive documentation, and related tools, which compose LTPPs c...

  13. The repository-based software engineering program: Redefining AdaNET as a mainstream NASA source

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The Repository-based Software Engineering Program (RBSE) is described to inform and update senior NASA managers about the program. Background and historical perspective on software reuse and RBSE for NASA managers who may not be familiar with these topics are provided. The paper draws upon and updates information from the RBSE Concept Document, baselined by NASA Headquarters, Johnson Space Center, and the University of Houston - Clear Lake in April 1992. Several of NASA's software problems and what RBSE is now doing to address those problems are described. Also, next steps to be taken to derive greater benefit from this Congressionally-mandated program are provided. The section on next steps describes the need to work closely with other NASA software quality, technology transfer, and reuse activities and focuses on goals and objectives relative to this need. RBSE's role within NASA is addressed; however, there is also the potential for systematic transfer of technology outside of NASA in later stages of the RBSE program. This technology transfer is discussed briefly.

  14. Yucca Mountain biological resources monitoring program; Annual report FY92

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1993-02-01

    The US Department of Energy (DOE) is required by the Nuclear Waste Policy Act of 1982 (as amended in 1987) to study and characterize Yucca Mountain as a potential site for a geologic repository for high-level nuclear waste. During site characterization, the DOE will conduct a variety of geotechnical, geochemical, geological, and hydrological studies to determine the suitability of Yucca Mountain as a potential repository. To ensure that site characterization activities (SCA) do not adversely affect the environment at Yucca Mountain, an environmental program has been implemented to monitor and mitigate potential impacts and ensure activities comply with applicable environmental regulations. This report describes the activities and accomplishments of EG&G Energy Measurements, Inc. (EG&G/EM) during fiscal year 1992 (FY92) for six program areas within the Terrestrial Ecosystem component of the YMP environmental program. The six program areas are Site Characterization Effects, Desert Tortoises, Habitat Reclamation, Monitoring and Mitigation, Radiological Monitoring, and Biological Support.

  15. OWL-based reasoning methods for validating archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: reference model and archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role for the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been an increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This permits combining the two levels of the dual model-based architecture in one modeling framework, which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, which are the two largest publicly available ones, have been analyzed with our validation method. For such purpose, we have implemented a software tool called Archeck. Our results show that around 1/5 of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in both repositories. This result reinforces the need for making serious efforts in improving archetype design processes. Copyright © 2012 Elsevier Inc. All rights reserved.
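
    The sketch below illustrates the underlying validation idea with owlready2: load an OWL representation of an archetype together with its reference model and run a description-logic reasoner, treating inconsistencies and unsatisfiable classes as modeling errors. This is a stand-in for, not a reproduction of, the paper's Archeck tool; the file path is a placeholder.

```python
# Sketch of archetype validation by DL reasoning over an OWL ontology.
# owlready2 is used as a generic stand-in; the ontology path is a placeholder.
from owlready2 import (get_ontology, sync_reasoner, default_world,
                       OwlReadyInconsistentOntologyError)

# Placeholder path: an OWL file combining the archetype and its reference model.
onto = get_ontology("file://./archetype_blood_pressure.owl").load()

try:
    sync_reasoner()                              # runs the bundled HermiT reasoner
except OwlReadyInconsistentOntologyError:
    print("ontology inconsistent: the archetype violates its reference model")
else:
    bad = list(default_world.inconsistent_classes())
    if bad:
        print("unsatisfiable classes (likely modeling errors):")
        for cls in bad:
            print(" -", cls)
    else:
        print("no modeling errors detected by the reasoner")
```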

  16. Continuous Improvement and the Safety Case for the Waste Isolation Pilot Plant Geologic Repository - 13467

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Luik, Abraham; Patterson, Russell; Nelson, Roger

    2013-07-01

    The Waste Isolation Pilot Plant (WIPP) is a geologic repository 2150 feet (650 m) below the surface of the Chihuahuan desert near Carlsbad, New Mexico. WIPP permanently disposes of transuranic waste from national defense programs. Every five years, the U.S. Department of Energy (DOE) submits an application to the U.S. Environmental Protection Agency (EPA) to request regulatory-compliance re-certification of the facility for another five years. Every ten years, DOE submits an application to the New Mexico Environment Department (NMED) for the renewal of its hazardous waste disposal permit. The content of the applications made by DOE to the EPA for re-certification, and to the NMED for permit-renewal, reflect any optimization changes made to the facility, with regulatory concurrence if warranted by the nature of the change. DOE points to such changes as evidence for its having taken seriously its 'continuous improvement' operations and management philosophy. Another opportunity for continuous improvement is to look at any delta that may exist between the re-certification and re-permitting cases for system safety and the consensus advice on the nature and content of a safety case as being developed and published by the Nuclear Energy Agency's Integration Group for the Safety Case (IGSC) expert group. DOE at WIPP, with the aid of its Science Advisor and teammate, Sandia National Laboratories, is in the process of discerning what can be done, in a reasonably paced and cost-conscious manner, to continually improve the case for repository safety that is being made to the two primary regulators on a recurring basis. This paper will discuss some aspects of that delta and potential paths forward to addressing them. (authors)

  17. Testing of candidate waste-package backfill and canister materials for basalt

    NASA Astrophysics Data System (ADS)

    Wood, M. I.; Anderson, W. J.; Aden, G. D.

    1982-09-01

    The Basalt Waste Isolation Project (BWIP) is developing a multiple-barrier waste package to contain high-level nuclear waste as part of an overall system (e.g., waste package, repository sealing system, and host rock) designed to isolate the waste in a repository located in basalt beneath the Hanford Site, Richland, Washington. The three basic components of the waste package are the waste form, the canister, and the backfill. An extensive testing program is under way to determine the chemical, physical, and mechanical properties of potential canister and backfill materials. The data derived from this testing program will be used to recommend those materials that most adequately perform the functions assigned to the canister and backfill.

  18. OntoCR: A CEN/ISO-13606 clinical repository based on ontologies.

    PubMed

    Lozano-Rubí, Raimundo; Muñoz Carrero, Adolfo; Serrano Balazote, Pablo; Pastor, Xavier

    2016-04-01

    To design a new semantically interoperable clinical repository, based on ontologies, conforming to CEN/ISO 13606 standard. The approach followed is to extend OntoCRF, a framework for the development of clinical repositories based on ontologies. The meta-model of OntoCRF has been extended by incorporating an OWL model integrating CEN/ISO 13606, ISO 21090 and SNOMED CT structure. This approach has demonstrated a complete evaluation cycle involving the creation of the meta-model in OWL format, the creation of a simple test application, and the communication of standardized extracts to another organization. Using a CEN/ISO 13606 based system, an indefinite number of archetypes can be merged (and reused) to build new applications. Our approach, based on the use of ontologies, maintains data storage independent of content specification. With this approach, relational technology can be used for storage, maintaining extensibility capabilities. The present work demonstrates that it is possible to build a native CEN/ISO 13606 repository for the storage of clinical data. We have demonstrated semantic interoperability of clinical information using CEN/ISO 13606 extracts. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Metabolomics Workbench: An international repository for metabolomics data and metadata, metabolite standards, protocols, tutorials and training, and analysis tools

    PubMed Central

    Sud, Manish; Fahy, Eoin; Cotter, Dawn; Azam, Kenan; Vadivelu, Ilango; Burant, Charles; Edison, Arthur; Fiehn, Oliver; Higashi, Richard; Nair, K. Sreekumaran; Sumner, Susan; Subramaniam, Shankar

    2016-01-01

    The Metabolomics Workbench, available at www.metabolomicsworkbench.org, is a public repository for metabolomics metadata and experimental data spanning various species and experimental platforms, metabolite standards, metabolite structures, protocols, tutorials, and training material and other educational resources. It provides a computational platform to integrate, analyze, track, deposit and disseminate large volumes of heterogeneous data from a wide variety of metabolomics studies, including mass spectrometry (MS) and nuclear magnetic resonance (NMR) spectroscopy data spanning over 20 different species covering all the major taxonomic categories including humans and other mammals, plants, insects, invertebrates and microorganisms. Additionally, a number of protocols are provided for a range of metabolite classes, sample types, and both MS- and NMR-based studies, along with a metabolite structure database. The metabolites characterized in the studies available on the Metabolomics Workbench are linked to chemical structures in the metabolite structure database to facilitate comparative analysis across studies. The Metabolomics Workbench, part of the data coordinating effort of the National Institutes of Health (NIH) Common Fund's Metabolomics Program, provides data from the Common Fund's Metabolomics Resource Cores, metabolite standards, and analysis tools to the wider metabolomics community and seeks data depositions from metabolomics researchers across the world. PMID:26467476

  20. iAnn: an event sharing platform for the life sciences.

    PubMed

    Jimenez, Rafael C; Albar, Juan P; Bhak, Jong; Blatter, Marie-Claude; Blicher, Thomas; Brazas, Michelle D; Brooksbank, Cath; Budd, Aidan; De Las Rivas, Javier; Dreyer, Jacqueline; van Driel, Marc A; Dunn, Michael J; Fernandes, Pedro L; van Gelder, Celia W G; Hermjakob, Henning; Ioannidis, Vassilios; Judge, David P; Kahlem, Pascal; Korpelainen, Eija; Kraus, Hans-Joachim; Loveland, Jane; Mayer, Christine; McDowall, Jennifer; Moran, Federico; Mulder, Nicola; Nyronen, Tommi; Rother, Kristian; Salazar, Gustavo A; Schneider, Reinhard; Via, Allegra; Villaveces, Jose M; Yu, Ping; Schneider, Maria V; Attwood, Teresa K; Corpas, Manuel

    2013-08-01

    We present iAnn, an open source community-driven platform for dissemination of life science events, such as courses, conferences and workshops. iAnn allows automatic visualisation and integration of customised event reports. A central repository lies at the core of the platform: curators add submitted events, and these are subsequently accessed via web services. Thus, once an iAnn widget is incorporated into a website, it permanently shows timely relevant information as if it were native to the remote site. At the same time, announcements submitted to the repository are automatically disseminated to all portals that query the system. To facilitate the visualization of announcements, iAnn provides powerful filtering options and views, integrated in Google Maps and Google Calendar. All iAnn widgets are freely available. Availability: http://iann.pro/iannviewer. Contact: manuel.corpas@tgac.ac.uk.

  1. Method and system of integrating information from multiple sources

    DOEpatents

    Alford, Francine A [Livermore, CA; Brinkerhoff, David L [Antioch, CA

    2006-08-15

    A system and method of integrating information from multiple sources in a document centric application system. A plurality of application systems are connected through an object request broker to a central repository. The information may then be posted on a webpage. An example of an implementation of the method and system is an online procurement system.

  2. Digital Rocks Portal: Preservation, Sharing, Remote Visualization and Automated Analysis of Imaged Datasets

    NASA Astrophysics Data System (ADS)

    Prodanovic, M.; Esteva, M.; Ketcham, R. A.; Hanlon, M.; Pettengill, M.; Ranganath, A.; Venkatesh, A.

    2016-12-01

    Due to advances in imaging modalities such as X-ray microtomography and scattered electron microscopy, 2D and 3D imaged datasets of rock microstructure on nanometer to centimeter length scales allow investigation of nonlinear flow and mechanical phenomena using numerical approaches. This in turn produces various upscaled parameters required by subsurface flow and deformation simulators. However, a single research group typically specializes in an imaging modality and/or related modeling on a single length scale, and the lack of data-sharing infrastructure makes it difficult to integrate different length scales. We developed a sustainable, open and easy-to-use repository called the Digital Rocks Portal (http://www.digitalrocksportal.org) that (1) organizes images and related experimental measurements of different porous materials, and (2) improves access to them for a wider community of geosciences or engineering researchers not necessarily trained in computer science or data analysis. Our objective is to enable scientific inquiry and engineering decisions founded on a data-driven basis. We show how the data loaded into the portal can be documented, referenced in publications via digital object identifiers, visualized, and linked to other repositories. We then show preliminary results on integrating a remote parallel visualization and flow simulation workflow with the pore structures currently stored in the repository. We finally discuss the issues of collecting correct metadata, data discoverability and repository sustainability. This is the first repository for this particular kind of data, but it is part of the wider ecosystem of geoscience data and model cyber-infrastructure called "EarthCube" (http://earthcube.org/) sponsored by the National Science Foundation. For data sustainability and continuous access, the portal is implemented within the reliable, 24/7-maintained High Performance Computing Infrastructure supported by the Texas Advanced Computing Center (TACC) at the University of Texas at Austin. Long-term storage is provided through the University of Texas System Research Cyber-infrastructure initiative.

  3. eXframe: reusable framework for storage, analysis and visualization of genomics experiments

    PubMed Central

    2011-01-01

    Background Genome-wide experiments are routinely conducted to measure gene expression, DNA-protein interactions and epigenetic status. Structured metadata for these experiments is imperative for a complete understanding of experimental conditions, to enable consistent data processing and to allow retrieval, comparison, and integration of experimental results. Even though several repositories have been developed for genomics data, only a few provide annotation of samples and assays using controlled vocabularies. Moreover, many of them are tailored for a single type of technology or measurement and do not support the integration of multiple data types. Results We have developed eXframe - a reusable web-based framework for genomics experiments that provides 1) the ability to publish structured data compliant with accepted standards, 2) support for multiple data types including microarrays and next generation sequencing, and 3) query, analysis and visualization integration tools (enabled by consistent processing of the raw data and annotation of samples) - and is available as open-source software. We present two case studies where this software is currently being used to build repositories of genomics experiments - one contains data from hematopoietic stem cells and another from Parkinson's disease patients. Conclusion The web-based framework eXframe offers structured annotation of experiments as well as uniform processing and storage of molecular data from microarray and next generation sequencing platforms. The framework allows users to query and integrate information across species, technologies, measurement types and experimental conditions. Our framework is reusable and freely modifiable - other groups or institutions can deploy their own custom web-based repositories based on this software. It is interoperable with the most important data formats in this domain. We hope that other groups will not only use eXframe, but also contribute their own useful modifications. PMID:22103807

  4. International Approaches for Nuclear Waste Disposal in Geological Formations: Geological Challenges in Radioactive Waste Isolation—Fifth Worldwide Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faybishenko, Boris; Birkholzer, Jens; Sassani, David

    The overall objective of the Fifth Worldwide Review (WWR-5) is to document the current state-of-the-art of major developments in a number of nations throughout the world pursuing geological disposal programs, and to summarize challenging problems and experience that have been obtained in siting, preparing and reviewing cases for the operational and long-term safety of proposed and operating nuclear waste repositories. The scope of the Review is to address current specific technical issues and challenges in safety case development along with the interplay of technical feasibility, siting, engineering design issues, and operational and post-closure safety. In particular, the chapters included in the report present the following types of information: the current status of the deep geological repository programs for high level nuclear waste and low- and intermediate level nuclear waste in each country, concepts of siting and radioactive waste and spent nuclear fuel management in different countries (with the emphasis of nuclear waste disposal under different climatic conditions and different geological formations), progress in repository site selection and site characterization, technology development, buffer/backfill materials studies and testing, support activities, programs, and projects, international cooperation, and future plans, as well as regulatory issues and transboundary problems.

  5. National Aeronautics and Space Administration Biological Specimen Repository

    NASA Technical Reports Server (NTRS)

    McMonigal, Kathleen A.; Pietrzyk, Robert a.; Johnson, Mary Anne

    2008-01-01

    The National Aeronautics and Space Administration Biological Specimen Repository (Repository) is a storage bank that is used to maintain biological specimens over extended periods of time and under well-controlled conditions. Samples from the International Space Station (ISS), including blood and urine, will be collected, processed and archived during the preflight, inflight and postflight phases of ISS missions. This investigation has been developed to archive biosamples for use as a resource for future space flight related research. The International Space Station (ISS) provides a platform to investigate the effects of microgravity on human physiology prior to lunar and exploration class missions. The storage of crewmember samples from many different ISS flights in a single repository will be a valuable resource with which researchers can study space flight related changes and investigate physiological markers. The development of the National Aeronautics and Space Administration Biological Specimen Repository will allow for the collection, processing, storage, maintenance, and ethical distribution of biosamples to meet goals of scientific and programmatic relevance to the space program. Archiving of the biosamples will provide future research opportunities including investigating patterns of physiological changes, analysis of components unknown at this time or analyses performed by new methodologies.

  6. Importance of Data Management in a Long-Term Biological Monitoring Program

    NASA Astrophysics Data System (ADS)

    Christensen, Sigurd W.; Brandt, Craig C.; McCracken, Mary K.

    2011-06-01

    The long-term Biological Monitoring and Abatement Program (BMAP) has always needed to collect and retain high-quality data on which to base its assessments of ecological status of streams and their recovery after remediation. Its formal quality assurance, data processing, and data management components all contribute to meeting this need. The Quality Assurance Program comprehensively addresses requirements from various institutions, funders, and regulators, and includes a data management component. Centralized data management began a few years into the program when an existing relational database was adapted and extended to handle biological data. The database's main data tables and several key reference tables are described. One of the most important related activities supporting long-term analyses was the establishing of standards for sampling site names, taxonomic identification, flagging, and other components. The implemented relational database supports the transmittal of data to the Oak Ridge Environmental Information System (OREIS) as the permanent repository. We also discuss some limitations to our implementation. Some types of program data were not easily accommodated in the central systems, and many possible data-sharing and integration options are not easily accessible to investigators. From our experience we offer data management advice to other biologically oriented long-term environmental sampling and analysis programs.

  7. Robotics Scoping Study to Evaluate Advances in Robotics Technologies that Support Enhanced Efficiencies for Yucca Mountain Repository Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    T. Burgess; M. Noakes; P. Spampinato

    This paper presents an evaluation of robotics and remote handling technologies that have the potential to increase the efficiency of handling waste packages at the proposed Yucca Mountain High-Level Nuclear Waste Repository. It is expected that increased efficiency will reduce the cost of operations. The goal of this work was to identify technologies for consideration as potential projects that the U.S. Department of Energy Office of Civilian Radioactive Waste Management, Office of Science and Technology International Programs, could support in the near future, and to assess their "payback" value. The evaluation took into account the robotics and remote handling capabilities planned for incorporation into the current baseline design for the repository, for both surface and subsurface operations. The evaluation, completed at the end of fiscal year 2004, identified where significant advantages in operating efficiencies could accrue by implementing any given robotics technology or approach, and included a road map for a multiyear R&D program for improvements to remote handling technology that support operating enhancements.

  8. Statistical sensitivity analysis of a simple nuclear waste repository model

    NASA Astrophysics Data System (ADS)

    Ronen, Y.; Lucius, J. L.; Blow, E. M.

    1980-06-01

    This work is a preliminary step in a comprehensive sensitivity analysis of the modeling of a nuclear waste repository. The purpose of the complete analysis is to determine which modeling parameters and physical data are most important in determining key design performance criteria and then to obtain the uncertainty in the design for safety considerations. The theory for a statistical screening design methodology is developed for later use in the overall program. The theory was applied to the test case of determining the relative importance of the sensitivity of the near-field temperature distribution in a single-level salt repository to modeling parameters. The exact values of the sensitivities to these physical and modeling parameters were then obtained using direct methods of recalculation. The parameters found to be important for the sample problem were the thermal loading, the distance between the spent fuel canisters, and the canister radius. Other important parameters were those related to salt properties at a point of interest in the repository.
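
    The abstract does not state the paper's exact formulation; a common way to express the relative (normalized) sensitivity of a response such as the near-field temperature to a model parameter, consistent with the kind of screening described here, is the following assumed form, given for illustration only.

```latex
% Assumed, generic form of a normalized sensitivity coefficient; the paper's exact
% definition is not given in the abstract. R is a response of interest (e.g., the
% near-field temperature at a point) and p_i is a model parameter such as the
% thermal loading, canister spacing, or canister radius.
S_i = \frac{\partial R}{\partial p_i}\,\frac{p_i}{R}
```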

  9. Partnerships against Violence: Promising Programs. Volume 1: Resource Guide.

    ERIC Educational Resources Information Center

    Department of Housing and Urban Development, Washington, DC.

    This volume represents the first step in an effort to build a central repository of promising anti-violence programs. Part of a cooperative venture in the federal government, this resource guide draws on information stored in more than 30 Federal clearinghouses and resource centers. Included here are programs developed by government agencies,…

  10. Doing Your Science While You're in Orbit

    NASA Astrophysics Data System (ADS)

    Green, Mark L.; Miller, Stephen D.; Vazhkudai, Sudharshan S.; Trater, James R.

    2010-11-01

    Large-scale neutron facilities such as the Spallation Neutron Source (SNS) located at Oak Ridge National Laboratory need easy-to-use access to Department of Energy Leadership Computing Facilities and experiment repository data. The Orbiter thick- and thin-client and its supporting Service Oriented Architecture (SOA) based services (available at https://orbiter.sns.gov) consist of standards-based components that are reusable and extensible for accessing high performance computing, data and computational grid infrastructure, and cluster-based resources easily from a user configurable interface. The primary Orbiter system goals consist of (1) developing infrastructure for the creation and automation of virtual instrumentation experiment optimization, (2) developing user interfaces for thin- and thick-client access, (3) providing a prototype incorporating major instrument simulation packages, and (4) facilitating neutron science community access and collaboration. The secure Orbiter SOA authentication and authorization are achieved through the developed Virtual File System (VFS) services, which use Role-Based Access Control (RBAC) for data repository file access, thin- and thick-client functionality and application access, and computational job workflow management. The VFS Relational Database Management System (RDBMS) consists of approximately 45 database tables describing 498 user accounts with 495 groups over 432,000 directories with 904,077 repository files. Over 59 million NeXus file metadata records are associated with the 12,800 unique NeXus file field/class names generated from the 52,824 repository NeXus files. Services that enable (a) summary dashboards of data repository status with Quality of Service (QoS) metrics, (b) data repository NeXus file field/class name full-text search capabilities within a Google-like interface, (c) a fully functional RBAC browser for the read-only data repository and shared areas, (d) user/group defined and shared metadata for data repository files, and (e) user, group, repository, and web 2.0 based global positioning with additional service capabilities are currently available. The SNS-based Orbiter SOA integration progress with the Distributed Data Analysis for Neutron Scattering Experiments (DANSE) software development project is summarized with an emphasis on DANSE Central Services and the Virtual Neutron Facility (VNF). Additionally, the DANSE utilization of the Orbiter SOA authentication, authorization, and data transfer services best practice implementations is presented.
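
    The abstract describes indexing NeXus file field/class names into a metadata store. NeXus files are HDF5 containers whose groups carry an NX_class attribute; a generic sketch of harvesting those names with h5py (not the Orbiter/VFS implementation, and with a hypothetical file name) might look like the following.

```python
# Generic sketch: walk a NeXus (HDF5) file and collect group/field names together
# with their NX_class attributes, as one might do when building a metadata index.
# This is NOT the Orbiter/VFS implementation; the file name below is hypothetical.
import h5py

def harvest_nexus_metadata(path):
    records = []
    with h5py.File(path, "r") as f:
        def visit(name, obj):
            if isinstance(obj, h5py.Group):
                nx_class = obj.attrs.get("NX_class", b"")
                if isinstance(nx_class, bytes):
                    nx_class = nx_class.decode("utf-8", "ignore")
                records.append(("group", name, nx_class))
            elif isinstance(obj, h5py.Dataset):
                records.append(("field", name, str(obj.dtype)))
        f.visititems(visit)
    return records

if __name__ == "__main__":
    # "SNS_run_0001.nxs" is a placeholder path, not a real repository file.
    for kind, name, info in harvest_nexus_metadata("SNS_run_0001.nxs"):
        print(kind, name, info)
```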

  11. An Open-Source Sandbox for Increasing the Accessibility of Functional Programming to the Bioinformatics and Scientific Communities

    PubMed Central

    Fenwick, Matthew; Sesanker, Colbert; Schiller, Martin R.; Ellis, Heidi JC; Hinman, M. Lee; Vyas, Jay; Gryk, Michael R.

    2012-01-01

    Scientists are continually faced with the need to express complex mathematical notions in code. The renaissance of functional languages such as LISP and Haskell is often credited to their ability to implement complex data operations and mathematical constructs in an expressive and natural idiom. The slow adoption of functional computing in the scientific community does not, however, reflect the congeniality of these fields. Unfortunately, the learning curve for adoption of functional programming techniques is steeper than that for more traditional languages in the scientific community, such as Python and Java, and this is partially due to the relative sparseness of available learning resources. To fill this gap, we demonstrate and provide applied, scientifically substantial examples of functional programming, We present a multi-language source-code repository for software integration and algorithm development, which generally focuses on the fields of machine learning, data processing, bioinformatics. We encourage scientists who are interested in learning the basics of functional programming to adopt, reuse, and learn from these examples. The source code is available at: https://github.com/CONNJUR/CONNJUR-Sandbox (see also http://www.connjur.org). PMID:25328913
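
    Since the abstract argues for functional idioms in scientific code, a tiny illustration of the style in Python is given below (pure functions, higher-order functions, and composition applied to a toy bioinformatics task). It is not code from the CONNJUR-Sandbox repository itself.

```python
# A small functional-style example (pure functions, higher-order functions,
# composition) applied to a toy bioinformatics task: GC content of sequences.
# Illustrative only; not taken from the CONNJUR-Sandbox repository.
from functools import reduce

def compose(*fns):
    """Right-to-left function composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

def clean(seq: str) -> str:
    return seq.strip().upper()

def gc_count(seq: str) -> int:
    return sum(1 for base in seq if base in "GC")

def gc_fraction(seq: str) -> float:
    return gc_count(seq) / len(seq) if seq else 0.0

# Compose the cleaning and measuring steps into a single pure pipeline.
gc_of = compose(gc_fraction, clean)

sequences = ["atgcgc", "AATT", "ggccgg"]
print([round(gc_of(s), 2) for s in sequences])   # [0.67, 0.0, 1.0]
```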

  12. An Open-Source Sandbox for Increasing the Accessibility of Functional Programming to the Bioinformatics and Scientific Communities.

    PubMed

    Fenwick, Matthew; Sesanker, Colbert; Schiller, Martin R; Ellis, Heidi Jc; Hinman, M Lee; Vyas, Jay; Gryk, Michael R

    2012-01-01

    Scientists are continually faced with the need to express complex mathematical notions in code. The renaissance of functional languages such as LISP and Haskell is often credited to their ability to implement complex data operations and mathematical constructs in an expressive and natural idiom. The slow adoption of functional computing in the scientific community does not, however, reflect the congeniality of these fields. Unfortunately, the learning curve for adoption of functional programming techniques is steeper than that for more traditional languages in the scientific community, such as Python and Java, and this is partially due to the relative sparseness of available learning resources. To fill this gap, we demonstrate and provide applied, scientifically substantial examples of functional programming. We present a multi-language source-code repository for software integration and algorithm development, which generally focuses on the fields of machine learning, data processing, and bioinformatics. We encourage scientists who are interested in learning the basics of functional programming to adopt, reuse, and learn from these examples. The source code is available at: https://github.com/CONNJUR/CONNJUR-Sandbox (see also http://www.connjur.org).

  13. Integrated Data Repository Toolkit (IDRT). A Suite of Programs to Facilitate Health Analytics on Heterogeneous Medical Data.

    PubMed

    Bauer, C R K D; Ganslandt, T; Baum, B; Christoph, J; Engel, I; Löbe, M; Mate, S; Stäubert, S; Drepper, J; Prokosch, H-U; Winter, A; Sax, U

    2016-01-01

    In recent years, research data warehouses moved increasingly into the focus of interest of medical research. Nevertheless, there are only a few center-independent infrastructure solutions available. They aim to provide a consolidated view on medical data from various sources such as clinical trials, electronic health records, epidemiological registries or longitudinal cohorts. The i2b2 framework is a well-established solution for such repositories, but it lacks support for importing and integrating clinical data and metadata. The goal of this project was to develop a platform for easy integration and administration of data from heterogeneous sources, to provide capabilities for linking them to medical terminologies and to allow for transforming and mapping of data streams for user-specific views. A suite of three tools has been developed: the i2b2 Wizard for simplifying administration of i2b2, the IDRT Import and Mapping Tool for loading clinical data from various formats like CSV, SQL, CDISC ODM or biobanks, and the IDRT i2b2 Web Client Plugin for advanced export options. The Import and Mapping Tool also includes an ontology editor for rearranging and mapping patient data and structures as well as annotating clinical data with medical terminologies, primarily those used in Germany (ICD-10-GM, OPS, ICD-O, etc.). With the three tools functional, new i2b2-based research projects can be created, populated and customized to researchers' needs in a few hours. Amalgamating data and metadata from different databases can be managed easily. With regards to data privacy, a pseudonymization service can be plugged in. Using common ontologies and reference terminologies rather than project-specific ones leads to a consistent understanding of the data semantics. i2b2's promise is to enable clinical researchers to devise and test new hypotheses even without a deep knowledge of statistical programming. The approach presented here has been tested in a number of scenarios with millions of observations and tens of thousands of patients. Initially mostly observant, trained researchers were able to construct new analyses on their own. Early feedback indicates that timely and extensive access to their "own" data is appreciated most, but it is also lowering the barrier for other tasks, for instance checking data quality and completeness (missing data, wrong coding).
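
    The Import and Mapping Tool loads flat files such as CSV into an i2b2 repository. As a highly simplified sketch of that kind of mapping (a real IDRT/i2b2 load involves the full star schema, the metadata/ontology cells and terminology mapping), the following stand-alone Python example writes CSV observations into a minimal observation_fact-like table; the column subset, CSV layout and file name are illustrative assumptions.

```python
# Simplified sketch of a CSV-to-i2b2-style load: map flat observations into a
# minimal observation_fact-like table. A real IDRT/i2b2 import handles the full
# star schema, ontology cells and terminology mapping; this is only an
# illustration. File name and CSV column names are assumptions.
import csv
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE observation_fact (
        patient_num  INTEGER,
        concept_cd   TEXT,     -- e.g. an ICD-10-GM or LOINC code
        start_date   TEXT,
        nval_num     REAL
    )
""")

# Hypothetical input file with columns: patient_id,code,date,value
with open("observations.csv", newline="") as fh:
    rows = csv.DictReader(fh)
    conn.executemany(
        "INSERT INTO observation_fact VALUES (?, ?, ?, ?)",
        ((int(r["patient_id"]), r["code"], r["date"], float(r["value"])) for r in rows),
    )
conn.commit()

for row in conn.execute(
        "SELECT concept_cd, COUNT(*) FROM observation_fact GROUP BY concept_cd"):
    print(row)
```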

  14. Basic repository source term and data sheet report: Lavender Canyon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1988-01-01

    This report is one of a series describing studies undertaken in support of the US Department of Energy Civilian Radioactive Waste Management (CRWM) Program. This study contains the derivation of values for environmental source terms and resources consumed for a CRWM repository. Estimates include heavy construction equipment; support equipment; shaft-sinking equipment; transportation equipment; and consumption of fuel, water, electricity, and natural gas. Data are presented for construction and operation at an assumed site in Lavender Canyon, Utah. 3 refs; 6 tabs.

  15. Configuration management plan. System definition and project development. Repository Based Software Engineering (RBSE) program

    NASA Technical Reports Server (NTRS)

    Mckay, Charles

    1991-01-01

    This is the configuration management Plan for the AdaNet Repository Based Software Engineering (RBSE) contract. This document establishes the requirements and activities needed to ensure that the products developed for the AdaNet RBSE contract are accurately identified, that proposed changes to the product are systematically evaluated and controlled, that the status of all change activity is known at all times, and that the product achieves its functional performance requirements and is accurately documented.

  16. Evaluation of Used Fuel Disposition in Clay-Bearing Rock

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jové Colón, Carlos F.; Weck, Philippe F.; Sassani, David H.

    2014-08-01

    Radioactive waste disposal in shale/argillite rock formations has been widely considered given its desirable isolation properties (low permeability), geochemically reduced conditions, anomalous groundwater pressures, and widespread geologic occurrence. Clay/shale rock formations are characterized by their high content of clay minerals such as smectites and illites where diffusive transport and chemisorption phenomena predominate. These, in addition to low permeability, are key attributes of shale to impede radionuclide mobility. Shale host-media has been comprehensively studied in international nuclear waste repository programs as part of underground research laboratories (URLs) programs in Switzerland, France, Belgium, and Japan. These investigations, in some cases a decade or more long, have produced a large but fundamental body of information spanning from site characterization data (geological, hydrogeological, geochemical, geomechanical) to controlled experiments on the engineered barrier system (EBS) (barrier clay and seals materials). Evaluation of nuclear waste disposal in shale formations in the USA was conducted in the late 1970s and mid-1980s. Most of these studies evaluated the potential for shale to host a nuclear waste repository but not at the programmatic level of URLs in international repository programs. This report covers various R&D work and capabilities relevant to disposal of heat-generating nuclear waste in shale/argillite media. Integration and cross-fertilization of these capabilities will be utilized in the development and implementation of the shale/argillite reference case planned for FY15. Disposal R&D activities under the UFDC in the past few years have produced state-of-the-art modeling capabilities for coupled Thermal-Hydrological-Mechanical-Chemical (THMC), used fuel degradation (source term), and thermodynamic modeling and database development to evaluate generic disposal concepts. The THMC models have been developed for shale repository leveraging in large part on the information garnered in URLs and laboratory data to test and demonstrate model prediction capability and to accurately represent behavior of the EBS and the natural (barrier) system (NS). In addition, experimental work to improve our understanding of clay barrier interactions and TM couplings at high temperatures are key to evaluate thermal effects as a result of relatively high heat loads from waste and the extent of sacrificial zones in the EBS. To assess the latter, experiments and modeling approaches have provided important information on the stability and fate of barrier materials under high heat loads. This information is central to the assessment of thermal limits and the implementation of the reference case when constraining EBS properties and the repository layout (e.g., waste package and drift spacing). This report is comprised of various parts, each one describing various R&D activities applicable to shale/argillite media. For example, progress made on modeling and experimental approaches to analyze physical and chemical interactions affecting clay in the EBS, NS, and used nuclear fuel (source term) in support of R&D objectives. It also describes the development of a reference case for shale/argillite media.
The accomplishments of these activities are summarized as follows: Development of a reference case for shale/argillite; Investigation of Reactive Transport and Coupled THM Processes in EBS: FY14; Update on Experimental Activities on Buffer/Backfill Interactions at Elevated Pressure and Temperature; Thermodynamic Database Development: Evaluation Strategy, Modeling Tools, First-Principles Modeling of Clay, and Sorption Database Assessment; and ANL Mixed Potential Model for Used Fuel Degradation: Application to Argillite and Crystalline Rock Environments.

  17. An XML-based system for the flexible classification and retrieval of clinical practice guidelines.

    PubMed Central

    Ganslandt, T.; Mueller, M. L.; Krieglstein, C. F.; Senninger, N.; Prokosch, H. U.

    2002-01-01

    Beneficial effects of clinical practice guidelines (CPGs) have not yet reached expectations due to limited routine adoption. Electronic distribution and reminder systems have the potential to overcome implementation barriers. Existing electronic CPG repositories like the National Guideline Clearinghouse (NGC) provide individual access but lack standardized computer-readable interfaces necessary for automated guideline retrieval. The aim of this paper was to facilitate automated context-based selection and presentation of CPGs. Using attributes from the NGC classification scheme, an XML-based metadata repository was successfully implemented, providing document storage, classification and retrieval functionality. Semi-automated extraction of attributes was implemented for the import of XML guideline documents using XPath. A hospital information system interface was exemplarily implemented for diagnosis-based guideline invocation. Limitations of the implemented system are discussed and possible future work is outlined. Integration of standardized computer-readable search interfaces into existing CPG repositories is proposed. PMID:12463831
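
    The abstract mentions semi-automated extraction of classification attributes from XML guideline documents via XPath. A minimal, self-contained illustration in Python follows; the element and attribute names are invented and do not reproduce the NGC classification scheme.

```python
# Minimal illustration of XPath-style extraction of classification attributes
# from an XML guideline document. Element and attribute names are invented and
# do not reproduce the NGC classification scheme.
import xml.etree.ElementTree as ET

xml_doc = """
<guideline id="cpg-001">
  <title>Example management guideline</title>
  <classification>
    <disease code="K80">Cholelithiasis</disease>
    <intended-users>Physicians</intended-users>
  </classification>
</guideline>
"""

root = ET.fromstring(xml_doc)

# ElementTree supports a useful subset of XPath for this kind of lookup.
title = root.findtext("title")
disease_codes = [d.get("code") for d in root.findall("./classification/disease")]
users = [u.text for u in root.findall(".//intended-users")]

print(title, disease_codes, users)
```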

  18. Opinion: Why we need a centralized repository for isotopic data

    USGS Publications Warehouse

    Pauli, Jonathan N.; Newsome, Seth D.; Cook, Joseph A.; Harrod, Chris; Steffan, Shawn A.; Baker, Christopher J. O.; Ben-David, Merav; Bloom, David; Bowen, Gabriel J.; Cerling, Thure E.; Cicero, Carla; Cook, Craig; Dohm, Michelle; Dharampal, Prarthana S.; Graves, Gary; Gropp, Robert; Hobson, Keith A.; Jordan, Chris; MacFadden, Bruce; Pilaar Birch, Suzanne; Poelen, Jorrit; Ratnasingham, Sujeevan; Russell, Laura; Stricker, Craig A.; Uhen, Mark D.; Yarnes, Christopher T.; Hayden, Brian

    2017-01-01

    Stable isotopes encode and integrate the origin of matter; thus, their analysis offers tremendous potential to address questions across diverse scientific disciplines (1, 2). Indeed, the broad applicability of stable isotopes, coupled with advancements in high-throughput analysis, has created a scientific field that is growing exponentially and generating data at a rate paralleling the explosive rise of DNA sequencing and genomics (3). Centralized data repositories, such as GenBank, have become increasingly important as a means for archiving information, and "Big Data" analytics of these resources are revolutionizing science and everyday life.

  19. NCBI2RDF: Enabling Full RDF-Based Access to NCBI Databases

    PubMed Central

    Anguita, Alberto; García-Remesal, Miguel; de la Iglesia, Diana; Maojo, Victor

    2013-01-01

    RDF has become the standard technology for enabling interoperability among heterogeneous biomedical databases. The NCBI provides access to a large set of life sciences databases through a common interface called Entrez. However, the latter does not provide RDF-based access to such databases, and, therefore, they cannot be integrated with other RDF-compliant databases and accessed via SPARQL query interfaces. This paper presents the NCBI2RDF system, aimed at providing RDF-based access to the complete NCBI data repository. This API creates a virtual endpoint for servicing SPARQL queries over different NCBI repositories and presenting to users the query results in SPARQL results format, thus enabling this data to be integrated and/or stored with other RDF-compliant repositories. SPARQL queries are dynamically resolved, decomposed, and forwarded to the NCBI-provided E-utilities programmatic interface to access the NCBI data. Furthermore, we show how our approach increases the expressiveness of the native NCBI querying system, allowing several databases to be accessed simultaneously. This feature significantly boosts productivity when working with complex queries and saves time and effort to biomedical researchers. Our approach has been validated with a large number of SPARQL queries, thus proving its reliability and enhanced capabilities in biomedical environments. PMID:23984425
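
    NCBI2RDF forwards decomposed SPARQL queries to the NCBI E-utilities programmatic interface. The E-utilities themselves can be called directly over HTTP; a minimal sketch of the kind of back-end call being wrapped is shown below (this is a plain esearch request, not the NCBI2RDF query-decomposition layer, and the search term is arbitrary).

```python
# Minimal E-utilities call of the kind NCBI2RDF forwards its decomposed queries to.
# This is a direct esearch request, not the SPARQL decomposition layer itself.
import json
import urllib.parse
import urllib.request

base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {
    "db": "pubmed",
    "term": "semantic web AND biomedical databases",  # arbitrary example query
    "retmax": "5",
    "retmode": "json",
}
with urllib.request.urlopen(base + "?" + urllib.parse.urlencode(params)) as resp:
    result = json.load(resp)

print(result["esearchresult"]["count"])
print(result["esearchresult"]["idlist"])
```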

  20. Semantic Web repositories for genomics data using the eXframe platform.

    PubMed

    Merrill, Emily; Corlosquet, Stéphane; Ciccarese, Paolo; Clark, Tim; Das, Sudeshna

    2014-01-01

    With the advent of inexpensive assay technologies, there has been an unprecedented growth in genomics data as well as the number of databases in which it is stored. In these databases, sample annotation using ontologies and controlled vocabularies is becoming more common. However, the annotation is rarely available as Linked Data, in a machine-readable format, or for standardized queries using SPARQL. This makes large-scale reuse, or integration with other knowledge bases, very difficult. To address this challenge, we have developed the second generation of our eXframe platform, a reusable framework for creating online repositories of genomics experiments. This second-generation model now publishes Semantic Web data. To accomplish this, we created an experiment model that covers provenance, citations, external links, assays, biomaterials used in the experiment, and the data collected during the process. The elements of our model are mapped to classes and properties from various established biomedical ontologies. Resource Description Framework (RDF) data is automatically produced using these mappings and indexed in an RDF store with a built-in SPARQL Protocol and RDF Query Language (SPARQL) endpoint. Using the open-source eXframe software, institutions and laboratories can create Semantic Web repositories of their experiments, integrate them with heterogeneous resources and make them interoperable with the vast Semantic Web of biomedical knowledge.
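
    The second-generation eXframe maps experiment elements to ontology classes and exposes them through a SPARQL endpoint. A toy, self-contained illustration of querying experiment-style RDF with rdflib is given below; the vocabulary is invented for the example and is not the actual eXframe model.

```python
# Toy illustration of SPARQL over experiment-style RDF, in the spirit of an
# eXframe-like repository. The vocabulary below is invented for the example and
# is not the actual eXframe/ontology mapping.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/exp#")   # hypothetical vocabulary
g = Graph()
g.add((EX.exp1, RDF.type, EX.Experiment))
g.add((EX.exp1, EX.hasAssay, EX.assay1))
g.add((EX.assay1, EX.usesBiomaterial, EX.sampleA))
g.add((EX.sampleA, EX.fromOrganism, Literal("Homo sapiens")))

query = """
PREFIX ex: <http://example.org/exp#>
SELECT ?exp ?organism WHERE {
  ?exp a ex:Experiment ;
       ex:hasAssay ?assay .
  ?assay ex:usesBiomaterial ?sample .
  ?sample ex:fromOrganism ?organism .
}
"""
for exp, organism in g.query(query):
    print(exp, organism)
```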

  1. Software Tools Streamline Project Management

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Three innovative software inventions from Ames Research Center (NETMARK, Program Management Tool, and Query-Based Document Management) are finding their way into NASA missions as well as industry applications. The first, NETMARK, is a program that enables integrated searching of data stored in a variety of databases and documents, meaning that users no longer have to look in several places for related information. NETMARK allows users to search and query information across all of these sources in one step. This cross-cutting capability in information analysis has exponentially reduced the amount of time needed to mine data from days or weeks to mere seconds. NETMARK has been used widely throughout NASA, enabling this automatic integration of information across many documents and databases. NASA projects that use NETMARK include the internal reporting system and project performance dashboard, Erasmus, NASA's enterprise management tool, which enhances organizational collaboration and information sharing through document routing and review; the Integrated Financial Management Program; International Space Station Knowledge Management; Mishap and Anomaly Information Reporting System; and management of the Mars Exploration Rovers. Approximately $1 billion worth of NASA's projects are currently managed using Program Management Tool (PMT), which is based on NETMARK. PMT is a comprehensive, Web-enabled application tool used to assist program and project managers within NASA enterprises in monitoring, disseminating, and tracking the progress of program and project milestones and other relevant resources. The PMT consists of an integrated knowledge repository built upon advanced enterprise-wide database integration techniques and the latest Web-enabled technologies. The current system is in a pilot operational mode allowing users to automatically manage, track, define, update, and view customizable milestone objectives and goals. The third software invention, Query-Based Document Management (QBDM), is a tool that enables content or context searches, either simple or hierarchical, across a variety of databases. The system enables users to specify notification subscriptions where they associate "contexts of interest" and "events of interest" to one or more documents or collection(s) of documents. Based on these subscriptions, users receive notification when the events of interest occur within the contexts of interest for associated document or collection(s) of documents. Users can also associate at least one notification time as part of the notification subscription, with at least one option for the time period of notifications.
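
    QBDM's notification subscriptions pair "contexts of interest" with "events of interest" for documents or collections. A toy sketch of such a subscription check is given below; the data model is invented for illustration and is not NASA's implementation.

```python
# Toy data model for notification subscriptions that associate contexts of
# interest and events of interest with document collections. Purely illustrative;
# not the NASA QBDM implementation.
from dataclasses import dataclass

@dataclass
class Subscription:
    user: str
    collection: str
    contexts: set        # e.g. document sections or topics of interest
    events: set          # e.g. "updated", "approved"

def should_notify(sub: Subscription, collection: str, context: str, event: str) -> bool:
    """Notify when an event of interest occurs within a context of interest."""
    return (collection == sub.collection
            and context in sub.contexts
            and event in sub.events)

sub = Subscription("alice", "mars-rover-docs", {"thermal", "power"}, {"updated"})
print(should_notify(sub, "mars-rover-docs", "thermal", "updated"))   # True
print(should_notify(sub, "mars-rover-docs", "budget", "updated"))    # False
```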

  2. MODEL-Based Methodology for System of Systems Architecture Development with Application to the Recapitalization of the Future Towing and Salvage Platform

    DTIC Science & Technology

    2008-09-01

    SEP) is a comprehensive, iterative and recursive problem-solving process, applied sequentially top-down by integrated teams. It transforms needs... central integrated design repository. It includes a comprehensive behavior modeling notation to understand the dynamics of a design. CORE is a MBSE...

  3. Unlocking the potential of publicly available microarray data using inSilicoDb and inSilicoMerging R/Bioconductor packages.

    PubMed

    Taminau, Jonatan; Meganck, Stijn; Lazar, Cosmin; Steenhoff, David; Coletta, Alain; Molter, Colin; Duque, Robin; de Schaetzen, Virginie; Weiss Solís, David Y; Bersini, Hugues; Nowé, Ann

    2012-12-24

    With an abundant amount of microarray gene expression data sets available through public repositories, new possibilities lie in combining multiple existing data sets. In this new context, analysis itself is no longer the problem, but retrieving and consistently integrating all this data before delivering it to the wide variety of existing analysis tools becomes the new bottleneck. We present the newly released inSilicoMerging R/Bioconductor package which, together with the earlier released inSilicoDb R/Bioconductor package, allows consistent retrieval, integration and analysis of publicly available microarray gene expression data sets. Inside the inSilicoMerging package a set of five visual and six quantitative validation measures are available as well. By providing (i) access to uniformly curated and preprocessed data, (ii) a collection of techniques to remove the batch effects between data sets from different sources, and (iii) several validation tools enabling the inspection of the integration process, these packages enable researchers to fully explore the potential of combining gene expression data for downstream analysis. The power of using both packages is demonstrated by programmatically retrieving and integrating gene expression studies from the InSilico DB repository [https://insilicodb.org/app/].

  4. Corrosion Management of the Hanford High-Level Nuclear Waste Tanks

    NASA Astrophysics Data System (ADS)

    Beavers, John A.; Sridhar, Narasi; Boomer, Kayle D.

    2014-03-01

    The Hanford site is located in southeastern Washington State and stores more than 200,000 m3 (55 million gallons) of high-level radioactive waste resulting from the production and processing of plutonium. The waste is stored in large carbon steel tanks that were constructed between 1943 and 1986. The leak and structural integrity of the more recently constructed double-shell tanks must be maintained until the waste can be removed from the tanks and encapsulated in glass logs for final disposal in a repository. There are a number of corrosion-related threats to the waste tanks, including stress-corrosion cracking, pitting corrosion, and corrosion at the liquid-air interface and in the vapor space. This article summarizes the corrosion management program at Hanford to mitigate these threats.

  5. Configuration Management File Manager Developed for Numerical Propulsion System Simulation

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.

    1997-01-01

    One of the objectives of the High Performance Computing and Communication Project's (HPCCP) Numerical Propulsion System Simulation (NPSS) is to provide a common and consistent way to manage applications, data, and engine simulations. The NPSS Configuration Management (CM) File Manager integrated with the Common Desktop Environment (CDE) window management system provides a common look and feel for the configuration management of data, applications, and engine simulations for U.S. engine companies. In addition, CM File Manager provides tools to manage a simulation. Features include managing input files, output files, textual notes, and any other material normally associated with simulation. The CM File Manager includes a generic configuration management Application Program Interface (API) that can be adapted for the configuration management repositories of any U.S. engine company.

  6. Yucca Mountain Biological Resources Monitoring Program. Progress report, January 1994--December 1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-07-01

    The US Department of Energy (DOE) is required by the Nuclear Waste Policy Act of 1982 (as amended in 1987) to study and characterize the suitability of Yucca Mountain as a potential geological repository for high-level nuclear waste. During site characterization, the DOE will conduct a variety of geotechnical, geochemical, geological, and hydrological studies to determine the suitability of Yucca Mountain as a potential repository. To ensure that site characterization activities do not adversely affect the environment at Yucca Mountain, a program has been implemented to monitor and mitigate potential impacts and ensure activities comply with applicable environmental regulations. This report describes the activities and accomplishments of EG and G Energy Measurements, Inc. (EG and G/EM) from January 1994 through December 1994 for six program areas within the Terrestrial Ecosystem component of the environmental program for the Yucca Mountain Site Characterization Project (YMP): Site Characterization Effects, Desert Tortoises (Gopherus agassizii), Habitat Reclamation, Monitoring and Mitigation, Radiological Monitoring, and Biological Support.

  7. Data Stewardship throughout the Ocean Research Data Life Cycle

    NASA Astrophysics Data System (ADS)

    Chandler, Cynthia; Groman, Robert; Allison, Molly; Wiebe, Peter; Glover, David

    2013-04-01

    The Biological and Chemical Oceanography Data Management Office (BCO-DMO) works in partnership with ocean science investigators to publish data from research projects funded by the Biological and Chemical Oceanography Sections and the Office of Polar Programs Antarctic Organisms & Ecosystems Program (OPP ANT) at the U.S. National Science Foundation. Since 2006, researchers have been contributing data to the BCO-DMO data system, and it has developed into a rich repository of data from ocean, coastal and Great Lakes research programs. The end goals of the BCO-DMO are to ensure preservation of NSF funded project data and to provide open access to those data; achievement of those goals is attained through successful completion of a series of related phases. BCO-DMO has developed an end-to-end data stewardship process that includes all phases of the data life cycle: (1) providing data management advice to investigators during the proposal writing stage; (2) registering their funded project at BCO-DMO; (3) adding data and supporting documentation to the BCO-DMO data repository; (4) providing geospatial and text-based data access systems that support data discovery, access, display, assessment, integration, and export of data resources; (5) exploring mechanisms for exchange of data with complementary repositories; (6) publication of data sets to provide publishers of the peer-reviewed literature with citable references (Digital Object Identifiers) and to encourage proper citation and attribution of data sets in the future and (7) submission of final data sets for preservation in the appropriate long-term data archive. Strategic development of collaborative partnerships with complementary data management organizations is essential to sustainable coverage of the full data life cycle from research proposal through preservation of the final data products. Development and incorporation of controlled vocabularies, domain-specific ontologies and globally unique, persistent identifiers to unambiguously identify resources of interest curated by and available from BCO-DMO have significantly enabled progress toward interoperability with partner systems. Several important components have emerged from early collaborative relationships: (1) identifying a trusted authoritative source of complementary content and the appropriate contact; (2) determining the globally unique, persistent identifier for resources of interest and (3) negotiating the requisite syntactic and semantic exchange systems. An added benefit is the ability to use globally unique, persistent resource identifiers to identify and compare related content in other repositories, thus enabling us to improve the accuracy of content in the BCO-DMO data collection. Results from a recent community discussion at the January 2013 Federation of Earth Science Information Partners (ESIP) meeting will be presented. Mindful of the NSF EarthCube initiative in the United States, the ESIP discussion was an effort to identify commonalities and differences in the way different communities meet the challenges of data stewardship throughout the full data life cycle and to determine any gaps that currently exist. BCO-DMO: http://bco-dmo.org ESIP: http://esipfed.org/

  8. The MMI Semantic Framework: Rosetta Stones for Earth Sciences

    NASA Astrophysics Data System (ADS)

    Rueda, C.; Bermudez, L. E.; Graybeal, J.; Alexander, P.

    2009-12-01

    Semantic interoperability—the exchange of meaning among computer systems—is needed to successfully share data in Ocean Science and across all Earth sciences. The best approach toward semantic interoperability requires a designed framework, and operationally tested tools and infrastructure within that framework. Currently available technologies make a scientific semantic framework feasible, but its development requires sustainable architectural vision and development processes. This presentation outlines the MMI Semantic Framework, including recent progress on it and its client applications. The MMI Semantic Framework consists of tools, infrastructure, and operational and community procedures and best practices, to meet short-term and long-term semantic interoperability goals. The design and prioritization of the semantic framework capabilities are based on real-world scenarios in Earth observation systems. We describe some key use cases, as well as the associated requirements for building the overall infrastructure, which is realized through the MMI Ontology Registry and Repository. This system includes support for community creation and sharing of semantic content, ontology registration, version management, and seamless integration of user-friendly tools and application programming interfaces. The presentation describes the architectural components for semantic mediation, registry and repository for vocabularies, ontology, and term mappings. We show how the technologies and approaches in the framework can address community needs for managing and exchanging semantic information. We will demonstrate how different types of users and client applications exploit the tools and services for data aggregation, visualization, archiving, and integration. Specific examples from OOSTethys (http://www.oostethys.org) and the Ocean Observatories Initiative Cyberinfrastructure (http://www.oceanobservatories.org) will be cited. Finally, we show how semantic augmentation of web services standards could be performed using framework tools.

  9. SeaBIRD: A Flexible and Intuitive Planetary Datamining Infrastructure

    NASA Astrophysics Data System (ADS)

    Politi, R.; Capaccioni, F.; Giardino, M.; Fonte, S.; Capria, M. T.; Turrini, D.; De Sanctis, M. C.; Piccioni, G.

    2018-04-01

    Description of SeaBIRD (Searchable and Browsable Infrastructure for Repository of Data), a software and hardware infrastructure for multi-mission planetary datamining, with a web-based GUI and an API set for integration into users' software.

  10. Structured vs. Unstructured: Factors Affecting Adverse Drug Reaction Documentation in an EMR Repository

    PubMed Central

    Skentzos, Stephen; Shubina, Maria; Plutzky, Jorge; Turchin, Alexander

    2011-01-01

    Adverse reactions to medications to which the patient was known to be intolerant are common. Electronic decision support can prevent them but only if history of adverse reactions to medications is recorded in structured format. We have conducted a retrospective study of 31,531 patients with adverse reactions to statins documented in the notes, as identified with natural language processing. The software identified statin adverse reactions with sensitivity of 86.5% and precision of 91.9%. Only 9020 of these patients had an adverse reaction to a statin recorded in structured format. In multivariable analysis the strongest predictor of structured documentation was utilization of EMR functionality that integrated the medication list with the structured medication adverse reaction repository (odds ratio 48.6, p < 0.0001). Integration of information flow between EMR modules can help improve documentation and potentially prevent adverse drug events. PMID:22195188

  11. Introducing the Brassica Information Portal: Towards integrating genotypic and phenotypic Brassica crop data

    PubMed Central

    Eckes, Annemarie H.; Gubała, Tomasz; Nowakowski, Piotr; Szymczyszyn, Tomasz; Wells, Rachel; Irwin, Judith A.; Horro, Carlos; Hancock, John M.; King, Graham; Dyer, Sarah C.; Jurkowski, Wiktor

    2017-01-01

    The Brassica Information Portal (BIP) is a centralised repository for brassica phenotypic data. The site hosts trait data associated with brassica research and breeding experiments conducted on brassica crops, that are used as oilseeds, vegetables, livestock forage and fodder and for biofuels. A key feature is the explicit management of meta-data describing the provenance and relationships between experimental plant materials, as well as trial design and trait descriptors. BIP is an open access and open source project, built on the schema of CropStoreDB, and as such can provide trait data management strategies for any crop data. A new user interface and programmatic submission/retrieval system helps to simplify data access for researchers, breeders and other end-users. BIP opens up the opportunity to apply integrative, cross-project analyses to data generated by the Brassica Research Community. Here, we present a short description of the current status of the repository. PMID:28529710

  12. Public (Q)SAR Services, Integrated Modeling Environments, and Model Repositories on the Web: State of the Art and Perspectives for Future Development.

    PubMed

    Tetko, Igor V; Maran, Uko; Tropsha, Alexander

    2017-03-01

    Thousands of (Quantitative) Structure-Activity Relationships (Q)SAR models have been described in peer-reviewed publications; however, this way of sharing seldom makes models available for the use by the research community outside of the developer's laboratory. Conversely, on-line models allow broad dissemination and application representing the most effective way of sharing the scientific knowledge. Approaches for sharing and providing on-line access to models range from web services created by individual users and laboratories to integrated modeling environments and model repositories. This emerging transition from the descriptive and informative, but "static", and for the most part, non-executable print format to interactive, transparent and functional delivery of "living" models is expected to have a transformative effect on modern experimental research in areas of scientific and regulatory use of (Q)SAR models. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Piloting a Deceased Subject Integrated Data Repository and Protecting Privacy of Relatives

    PubMed Central

    Huser, Vojtech; Kayaalp, Mehmet; Dodd, Zeyno A.; Cimino, James J.

    2014-01-01

    Use of deceased subject Electronic Health Records can be an important piloting platform for informatics or biomedical research. The existing legal framework allows such research under less strict de-identification criteria; however, the privacy of non-decedents must be protected. We report on the creation of the deceased subject Integrated Data Repository (dsIDR) at the National Institutes of Health Clinical Center and a pilot methodology to remove secondary protected health information or identifiable information (secondary PxI; information about persons other than the primary patient). We characterize available structured coded data in dsIDR and report the estimated frequencies of secondary PxI, ranging from 12.9% (sensitive token presence) to 1.1% (using stricter criteria). Federating decedent EHR data from multiple institutions can address sample size limitations, and our pilot study provides lessons learned and a methodology that can be adopted by other institutions. PMID:25954378

  14. Piloting a deceased subject integrated data repository and protecting privacy of relatives.

    PubMed

    Huser, Vojtech; Kayaalp, Mehmet; Dodd, Zeyno A; Cimino, James J

    2014-01-01

    Use of deceased subject Electronic Health Records can be an important piloting platform for informatics or biomedical research. The existing legal framework allows such research under less strict de-identification criteria; however, the privacy of non-decedents must be protected. We report on the creation of the deceased subject Integrated Data Repository (dsIDR) at the National Institutes of Health Clinical Center and a pilot methodology to remove secondary protected health information or identifiable information (secondary PxI; information about persons other than the primary patient). We characterize available structured coded data in dsIDR and report the estimated frequencies of secondary PxI, ranging from 12.9% (sensitive token presence) to 1.1% (using stricter criteria). Federating decedent EHR data from multiple institutions can address sample size limitations, and our pilot study provides lessons learned and a methodology that can be adopted by other institutions.

  15. Reconstructing a hydrogen-driven microbial metabolic network in Opalinus Clay rock.

    PubMed

    Bagnoud, Alexandre; Chourey, Karuna; Hettich, Robert L; de Bruijn, Ino; Andersson, Anders F; Leupin, Olivier X; Schwyn, Bernhard; Bernier-Latmani, Rizlan

    2016-10-14

    The Opalinus Clay formation will host geological nuclear waste repositories in Switzerland. It is expected that gas pressure will build up due to hydrogen production from steel corrosion, jeopardizing the integrity of the engineered barriers. In an in situ experiment located in the Mont Terri Underground Rock Laboratory, we demonstrate that hydrogen is consumed by microorganisms, fuelling a microbial community. Metagenomic binning and metaproteomic analysis of this deep subsurface community reveals a carbon cycle driven by autotrophic hydrogen oxidizers belonging to novel genera. Necromass is then processed by fermenters, followed by complete oxidation to carbon dioxide by heterotrophic sulfate-reducing bacteria, which closes the cycle. This microbial metabolic web can be integrated in the design of geological repositories to reduce pressure build-up. This study shows that Opalinus Clay harbours the potential for a chemolithoautotrophy-based system, and provides a model of the microbial carbon cycle in deep subsurface environments where hydrogen and sulfate are present.

  16. On implementing clinical decision support: achieving scalability and maintainability by combining business rules and ontologies.

    PubMed

    Kashyap, Vipul; Morales, Alfredo; Hongsermeier, Tonya

    2006-01-01

    We present an approach and architecture for implementing scalable and maintainable clinical decision support at the Partners HealthCare System. The architecture integrates a business rules engine that executes declarative if-then rules stored in a rule-base referencing objects and methods in a business object model. The rules engine executes object methods by invoking services implemented on the clinical data repository. Specialized inferences that support classification of data and instances into classes are identified, and an approach to implementing these inferences using an OWL-based ontology engine is presented. Alternative representations of these specialized inferences as if-then rules or OWL axioms are explored, and their impact on the scalability and maintenance of the system is presented. Architectural alternatives for integrating clinical decision support functionality with the invoking application and the underlying clinical data repository, and their associated trade-offs, are also discussed.
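
    To make the rule-plus-classification pattern described above concrete, the following minimal sketch pairs a declarative if-then rule with a classification step standing in for the OWL inference; the patient fields, class names, and rule contents are hypothetical illustrations, not the Partners HealthCare object model or rule-base.

```python
# Hypothetical sketch of a rules engine operating over a small object model.
from dataclasses import dataclass

@dataclass
class Patient:
    creatinine_mg_dl: float
    active_meds: list

def classify_renal_function(p: Patient) -> str:
    # Stands in for the ontology-based classification of instances into classes.
    return "impaired_renal_function" if p.creatinine_mg_dl > 1.5 else "normal_renal_function"

# Declarative if-then rules: a condition over the object model plus an advisory action.
RULES = [
    (lambda p, cls: cls == "impaired_renal_function" and "metformin" in p.active_meds,
     "Review metformin dose: impaired renal function."),
]

def run_rules(p: Patient) -> list:
    cls = classify_renal_function(p)
    return [advice for condition, advice in RULES if condition(p, cls)]

print(run_rules(Patient(creatinine_mg_dl=2.1, active_meds=["metformin"])))
```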

  17. Reconstructing a hydrogen-driven microbial metabolic network in Opalinus Clay rock

    PubMed Central

    Bagnoud, Alexandre; Chourey, Karuna; Hettich, Robert L.; de Bruijn, Ino; Andersson, Anders F.; Leupin, Olivier X.; Schwyn, Bernhard; Bernier-Latmani, Rizlan

    2016-01-01

    The Opalinus Clay formation will host geological nuclear waste repositories in Switzerland. It is expected that gas pressure will build up due to hydrogen production from steel corrosion, jeopardizing the integrity of the engineered barriers. In an in situ experiment located in the Mont Terri Underground Rock Laboratory, we demonstrate that hydrogen is consumed by microorganisms, fuelling a microbial community. Metagenomic binning and metaproteomic analysis of this deep subsurface community reveals a carbon cycle driven by autotrophic hydrogen oxidizers belonging to novel genera. Necromass is then processed by fermenters, followed by complete oxidation to carbon dioxide by heterotrophic sulfate-reducing bacteria, which closes the cycle. This microbial metabolic web can be integrated in the design of geological repositories to reduce pressure build-up. This study shows that Opalinus Clay harbours the potential for a chemolithoautotrophy-based system, and provides a model of the microbial carbon cycle in deep subsurface environments where hydrogen and sulfate are present. PMID:27739431

  18. Clinical results of HIS, RIS, PACS integration using data integration CASE tools

    NASA Astrophysics Data System (ADS)

    Taira, Ricky K.; Chan, Hing-Ming; Breant, Claudine M.; Huang, Lu J.; Valentino, Daniel J.

    1995-05-01

    Current infrastructure research in PACS is dominated by the development of communication networks (local area networks, teleradiology, ATM networks, etc.), multimedia display workstations, and hierarchical image storage architectures. However, limited work has been performed on developing flexible, expansible, and intelligent information processing architectures for the vast decentralized image and text data repositories prevalent in healthcare environments. Patient information is often distributed among multiple data management systems. Current large-scale efforts to integrate medical information and knowledge sources have been costly with limited retrieval functionality. Software integration strategies to unify distributed data and knowledge sources are still lacking commercially. Systems heterogeneity (i.e., differences in hardware platforms, communication protocols, database management software, nomenclature, etc.) is at the heart of the problem and is unlikely to be standardized in the near future. In this paper, we demonstrate the use of newly available CASE (computer-aided software engineering) tools to rapidly integrate HIS, RIS, and PACS information systems. The advantages of these tools include fast development time (low-level code is generated from graphical specifications) and easy system maintenance (excellent documentation, easy to perform changes, and a centralized code repository in an object-oriented database). The CASE tools are used to develop and manage the `middle-ware' in our client-mediator-server architecture for systems integration. Our architecture is scalable and can accommodate heterogeneous databases and communication protocols.
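
    The client-mediator-server pattern mentioned above can be pictured with a small sketch in which a mediator merges per-source query results into one logical patient record; the source classes and returned fields are hypothetical, not the actual HIS/RIS/PACS interfaces or the generated middle-ware.

```python
class HISSource:
    """Hypothetical hospital information system adapter."""
    def query(self, patient_id):
        return {"name": "DOE^JANE", "admissions": ["1995-03-02"]}

class RISSource:
    """Hypothetical radiology information system adapter."""
    def query(self, patient_id):
        return {"reports": ["CHEST PA/LAT: no acute disease"]}

class Mediator:
    """Presents a single logical patient view over heterogeneous back-end systems."""
    def __init__(self, sources):
        self.sources = sources

    def patient_record(self, patient_id):
        record = {"patient_id": patient_id}
        for source in self.sources:
            record.update(source.query(patient_id))  # merge per-source results
        return record

mediator = Mediator([HISSource(), RISSource()])
print(mediator.patient_record("123456"))
```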

  19. Using Controlled Vocabularies and Semantics to Improve Ocean Data Discovery (Invited)

    NASA Astrophysics Data System (ADS)

    Chandler, C. L.; Groman, R. C.; Shepherd, A.; Allison, M. D.; Kinkade, D.; Rauch, S.; Wiebe, P. H.; Glover, D. M.

    2013-12-01

    The Biological and Chemical Oceanography Data Management Office (BCO-DMO) was created in late 2006, by combining the formerly independent data management offices for the U.S. GLOBal Ocean ECosystems Dynamics (GLOBEC) and U.S. Joint Global Ocean flux Study (JGOFS) programs. BCO-DMO staff members work with investigators to publish data from research projects funded by the NSF Geosciences Directorate (GEO) Division of Ocean Sciences (OCE) Biological and Chemical Oceanography Sections and Polar Programs (PLR) Antarctic Sciences Organisms & Ecosystems Program (ANT). Since 2006, researchers have been contributing new data to the BCO-DMO data system. As the data from new research efforts have been added to the data previously shared by U.S. GLOBEC and U.S. JGOFS researchers, the BCO-DMO system has developed into a rich repository of data from ocean, coastal, and Great Lakes research programs. The metadata records for the original research program data (prior to 2006) were stored in human-readable flat files of text, translated on-demand to Web-retrievable files. Beginning in 2006, the metadata records from multiple data systems managed by BCO-DMO were ingested into a relational database (MySQL). Since that time, efforts have been made to incorporate lists of controlled vocabulary terms for key information concepts stored in the MySQL database (e.g. names of research programs, deployments, instruments and measurements). This presents a challenge for a data system that includes legacy data and is continually expanding with the addition of new contributions. Over the years, BCO-DMO has developed a series of data delivery systems driven by the supporting metadata. Improved access to research data, a primary goal of the BCO-DMO project, is achieved through geospatial and text-based data access systems that support data discovery, access, display, assessment, integration, and export of data resources. The addition of a semantically-enabled search capability improves data discovery options particularly for those investigators whose research interests are cross-domain and multi-disciplinary. Current efforts by BCO-DMO staff members are focused on identifying globally unique, persistent identifiers to unambiguously identify resources of interest curated by and available from BCO-DMO. The process involves several essential components: (1) identifying a trusted authoritative source of complementary content and the appropriate contact; (2) determining the globally unique, persistent identifier system for resources of interest and (3) negotiating the requisite syntactic and semantic exchange systems. A variety of technologies have been deployed including: (1) controlled vocabulary term lists for some of the essential concepts/classes; (2) the Ocean Data Ontology; (3) publishing content as Linked Open Data and (4) SPARQL queries and inference. The final results are emerging as a semantic layer comprising domain-specific controlled vocabularies typed to community standard definitions, an ontology with the concepts and relationships needed to describe ocean data, a semantically-enabled faceted search, and inferencing services. We are exploring use of these technologies to improve the accuracy of the BCO-DMO data collection and to facilitate exchange of information with complementary ocean data repositories. Integrating a semantic layer into the BCO-DMO data system architecture improves data and information resource discovery, access and integration.
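
    As a rough illustration of the kind of semantically-enabled search the abstract describes, the sketch below runs a SPARQL query over a tiny RDF graph with rdflib; the odo: vocabulary and the triples are hypothetical stand-ins for the Ocean Data Ontology and BCO-DMO's Linked Open Data.

```python
from rdflib import Graph

# Tiny inline Turtle snippet standing in for a Linked Open Data export;
# the odo: terms are illustrative, not the actual Ocean Data Ontology.
TTL = """
@prefix odo: <http://example.org/ocean-data-ontology#> .
odo:dataset42 odo:collectedWith odo:fluorometer7 .
odo:fluorometer7 odo:measures odo:Chlorophyll_a .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

QUERY = """
PREFIX odo: <http://example.org/ocean-data-ontology#>
SELECT ?dataset ?instrument WHERE {
    ?dataset odo:collectedWith ?instrument .
    ?instrument odo:measures odo:Chlorophyll_a .
}
"""
for dataset, instrument in g.query(QUERY):
    print(dataset, instrument)
```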

  20. Current Status of The Romanian National Deep Geological Repository Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radu, M.; Nicolae, R.; Nicolae, D.

    2008-07-01

    Construction of a deep geological repository is a very demanding and costly task. To date, countries operating Candu reactors have not reprocessed their spent fuel, instead placing it in interim storage as a preliminary step toward final disposal at the back end of the nuclear fuel cycle. Romania, in comparison to other nations, has a rather small territory with high population density, in which geological formations with radioactive waste storage potential are limited and restricted not only by the selection criteria tied to the rocks' natural characteristics, but also by their involvement in social and economic activities. In the framework of the national R and D Programs, a series of 'map investigations' has been carried out on the selection and preliminary characterization of the host geological formation for the nation's spent fuel deep geological repository. The fact that Romania has many deposits of natural gas, oil, ore and geothermal water, utilizes its soil intensively, and is heavily forested causes some of the apparently acceptable sites to be rejected in the subsequent analysis. Currently, according to the Law on spent fuel and radioactive waste management, including disposal, the National Agency of Radioactive Waste is responsible for and coordinates the national strategy in the field, and further actions will be decided subsequently. The Romanian National Strategy, approved in 2004, projects the operation of a deep geological repository to begin in 2055. (authors)

  1. Research on Geo-information Data Model for Preselected Areas of Geological Disposal of High-level Radioactive Waste

    NASA Astrophysics Data System (ADS)

    Gao, M.; Huang, S. T.; Wang, P.; Zhao, Y. A.; Wang, H. B.

    2016-11-01

    The geological disposal of high-level radioactive waste (hereinafter "geological disposal") is a long-term, complex, and systematic scientific project. The data and information resources produced during its research and development (hereinafter "R&D") provide significant support for the R&D of the geological disposal system and lay a foundation for the long-term stability and safety assessment of the repository site. However, the data related to research and engineering in the siting of geological disposal repositories are complicated (multi-source, multi-dimensional, and changeable), and the requirements for data accuracy and comprehensive application have become much higher than before, so the design of a geo-information data model for the disposal repository faces serious challenges. In this paper, the data resources of the pre-selected areas of the repository are comprehensively surveyed and systematically analyzed. Based on a thorough understanding of the application requirements, the research addresses the key technical problems of a reasonable classification system for multi-source data entities, complex logical relations, and effective physical storage structures. The solution moves beyond the data classification and conventional spatial data organization models applied in traditional practice, and organizes and integrates the data around data entities and their spatial relationships, which are independent, complete, and of significant applied value in HLW geological disposal. Reasonable, feasible and flexible conceptual, logical and physical data models have been established to ensure the effective integration, and to facilitate the application development, of multi-source data in the pre-selected areas for geological disposal.

  2. A collection of open source applications for mass spectrometry data mining.

    PubMed

    Gallardo, Óscar; Ovelleiro, David; Gay, Marina; Carrascal, Montserrat; Abian, Joaquin

    2014-10-01

    We present several bioinformatics applications for the identification and quantification of phosphoproteome components by MS. These applications include a front-end graphical user interface that combines several Thermo RAW to MASCOT™ Generic Format extractors (EasierMgf), two graphical user interfaces for the search engines OMSSA and SEQUEST (OmssaGui and SequestGui), and three applications: one for the management of databases in FASTA format (FastaTools), another for the integration of search results from up to three search engines (Integrator), and a third for the visualization of mass spectra and their corresponding database search results (JsonVisor). These applications were developed to solve some of the common problems found in proteomic and phosphoproteomic data analysis and were integrated in the workflow for processing data and feeding our LymPHOS database. The applications were designed modularly and can be used standalone. These tools are written in the Perl and Python programming languages and are supported on Windows platforms. They are all released under an Open Source Software license and can be freely downloaded from our software repository hosted at GoogleCode. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Researcher-library collaborations: Data repositories as a service for researchers.

    PubMed

    Gordon, Andrew S; Millman, David S; Steiger, Lisa; Adolph, Karen E; Gilmore, Rick O

    New interest has arisen in organizing, preserving, and sharing the raw materials (the data and metadata) that undergird the published products of research. Library and information scientists have valuable expertise to bring to bear in the effort to create larger, more diverse, and more widely used data repositories. However, for libraries to be maximally successful in providing the research data management and preservation services required of a successful data repository, librarians must work closely with researchers and learn about their data management workflows. Databrary is a data repository that is closely linked to the needs of a specific scholarly community: researchers who use video as a main source of data to study child development and learning. The project's success to date is a result of its focus on community outreach and providing services for scholarly communication, engaging institutional partners, offering services for data curation with the guidance of closely involved information professionals, and the creation of a strong technical infrastructure. Databrary plans to improve its curation tools that allow researchers to deposit their own data, enhance the user-facing feature set, increase integration with library systems, and implement strategies for long-term sustainability.

  4. Revision history aware repositories of computational models of biological systems.

    PubMed

    Miller, Andrew K; Yu, Tommy; Britten, Randall; Cooling, Mike T; Lawson, James; Cowan, Dougal; Garny, Alan; Halstead, Matt D B; Hunter, Peter J; Nickerson, David P; Nunns, Geo; Wimalaratne, Sarala M; Nielsen, Poul M F

    2011-01-14

    Building repositories of computational models of biological systems ensures that published models are available for both education and further research, and can provide a source of smaller, previously verified models to integrate into a larger model. One problem with earlier repositories has been the limitations in facilities to record the revision history of models. Often, these facilities are limited to a linear series of versions which were deposited in the repository. This is problematic for several reasons. Firstly, there are many instances in the history of biological systems modelling where an 'ancestral' model is modified by different groups to create many different models. With a linear series of versions, if the changes made to one model are merged into another model, the merge appears as a single item in the history. This hides useful revision history information, and also makes further merges much more difficult, as there is no record of which changes have or have not already been merged. In addition, a long series of individual changes made outside of the repository are also all merged into a single revision when they are put back into the repository, making it difficult to separate out individual changes. Furthermore, many earlier repositories only retain the revision history of individual files, rather than of a group of files. This is an important limitation to overcome, because some types of models, such as CellML 1.1 models, can be developed as a collection of modules, each in a separate file. The need for revision history is widely recognised for computer software, and a lot of work has gone into developing version control systems and distributed version control systems (DVCSs) for tracking the revision history. However, to date, there has been no published research on how DVCSs can be applied to repositories of computational models of biological systems. We have extended the Physiome Model Repository software to be fully revision history aware, by building it on top of Mercurial, an existing DVCS. We have demonstrated the utility of this approach, when used in conjunction with the model composition facilities in CellML, to build and understand more complex models. We have also demonstrated the ability of the repository software to present version history to casual users over the web, and to highlight specific versions which are likely to be useful to users. Providing facilities for maintaining and using revision history information is an important part of building a useful repository of computational models, as this information is useful both for understanding the source of and justification for parts of a model, and to facilitate automated processes such as merges. The availability of fully revision history aware repositories, and associated tools, will therefore be of significant benefit to the community.
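
    A minimal sketch of the underlying idea, keeping a group of model files under Mercurial so that the full revision graph is preserved, might look as follows; the file names and commit message are illustrative, and this is not the Physiome Model Repository code itself.

```python
import pathlib
import subprocess

repo = pathlib.Path("physiome_models")
repo.mkdir(exist_ok=True)
subprocess.run(["hg", "init"], cwd=repo, check=True)

# A modular model (CellML 1.1 style) split across files, all versioned as one group.
(repo / "membrane.cellml").write_text("<model name='membrane'/>")
(repo / "calcium.cellml").write_text("<model name='calcium'/>")

subprocess.run(["hg", "add"], cwd=repo, check=True)
subprocess.run(["hg", "commit", "-m", "Initial import of modular model", "-u", "curator"],
               cwd=repo, check=True)

# The full revision graph (not just a linear list of versions) is then available:
subprocess.run(["hg", "log", "--graph"], cwd=repo, check=True)
```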

  5. Implementation of the Brazilian National Repository - RBMN Project - 13008

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cassia Oliveira de Tello, Cledola

    2013-07-01

    Ionizing radiation in Brazil is used in electricity generation, medicine, industry, agriculture and for research and development purposes. All these activities can generate radioactive waste. At this point, the use of nuclear energy and radioisotopes in Brazil justifies the construction of a national repository for radioactive wastes of low and intermediate level. According to Federal Law No. 10308, the Brazilian National Commission for Nuclear Energy (CNEN) is responsible for designing and constructing the intermediate and final storage facilities for radioactive wastes. Additionally, a condition on the construction of Angra 3 is that the repository be under construction before that plant starts operation, meeting requirements of the Brazilian Environmental Regulator (IBAMA). Besides this NPP, the National Energy Program foresees the installation of four more plants by 2030. In November 2008, CNEN launched the RBMN Project (Repository for Low and Intermediate-Level Radioactive Wastes), which aims at the implementation of a National Repository for the disposal of low and intermediate-level radioactive wastes. This Project has some aspects that are unique in the Brazilian context, especially the time between its construction and the end of its institutional period. This time is about 360 years, after which the area will be released for unrestricted use. It means that the Repository must be safe and secure for more than three hundred years, which is longer than half of the whole of Brazilian history. This aspect is very new for the Brazilian people, bringing a new dimension to public acceptance. Another point is that this will be the first repository in South America, a real challenge for the continent. The current status of the Project is summarized. (authors)

  6. MetNetAPI: A flexible method to access and manipulate biological network data from MetNet

    PubMed Central

    2010-01-01

    Background: Convenient programmatic access to different biological databases allows automated integration of scientific knowledge. Many databases support a function to download files or data snapshots, or a webservice that offers "live" data. However, the functionality that a database offers cannot be represented in a static data download file, and webservices may consume considerable computational resources from the host server. Results: MetNetAPI is a versatile Application Programming Interface (API) to the MetNetDB database. It abstracts, captures and retains operations away from a biological network repository and website. A range of database functions, previously only available online, can be immediately (and independently from the website) applied to a dataset of interest. Data is available in four layers: molecular entities, localized entities (linked to a specific organelle), interactions, and pathways. Navigation between these layers is intuitive (e.g. one can request the molecular entities in a pathway, as well as request in what pathways a specific entity participates). Data retrieval can be customized: Network objects allow the construction of new and integration of existing pathways and interactions, which can be uploaded back to our server. In contrast to webservices, the computational demand on the host server is limited to processing data-related queries only. Conclusions: An API provides several advantages to a systems biology software platform. MetNetAPI illustrates an interface with a central repository of data that represents the complex interrelationships of a metabolic and regulatory network. As an alternative to data-dumps and webservices, it allows access to a current and "live" database and exposes analytical functions to application developers. Yet it only requires limited resources on the server-side (thin server/fat client setup). The API is available for Java, Microsoft.NET and R programming environments and offers flexible query and broad data-retrieval methods. Data retrieval can be customized to client needs and the API offers a framework to construct and manipulate user-defined networks. The design principles can be used as a template to build programmable interfaces for other biological databases. The API software and tutorials are available at http://www.metnetonline.org/api. PMID:21083943
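
    The four-layer navigation described above (entities, localized entities, interactions, pathways) can be pictured with a toy client; the class and method names below are hypothetical illustrations only and do not reproduce the actual MetNetAPI signatures for Java, Microsoft.NET, or R.

```python
# Hypothetical client illustrating layered navigation between pathways and entities;
# these names are illustrative and are not the real MetNetAPI.
class NetworkClient:
    def __init__(self, pathways):
        self._pathways = pathways  # {pathway: [(entity, organelle), ...]}

    def entities_in_pathway(self, pathway):
        return [entity for entity, _ in self._pathways.get(pathway, [])]

    def pathways_for_entity(self, entity):
        return [p for p, members in self._pathways.items()
                if any(e == entity for e, _ in members)]

client = NetworkClient({"TCA cycle": [("citrate", "mitochondrion"),
                                      ("malate", "mitochondrion")]})
print(client.entities_in_pathway("TCA cycle"))
print(client.pathways_for_entity("citrate"))
```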

  7. Toward the Development of a Sustainable Scientific Research Culture in Azerbaijan (2011-2015).

    PubMed

    Aliyeva, Saida; Flanagan, Peter; Johnson, April; Strelow, Lisa

    2016-01-01

    This review describes the especially dangerous pathogens research program in Azerbaijan (AJ) funded by the US Defense Threat Reduction Agency under the Cooperative Biological Engagement Program (CBEP) from 2011 through 2015. The objectives of the CBEP are to prevent the proliferation of biological weapons; to consolidate and secure collections of dangerous pathogens in central repositories; to strengthen biosafety and biosecurity of laboratory facilities; and to improve partner nations' ability to detect, diagnose, report, and respond to outbreaks of disease caused by especially dangerous pathogens. One of the missions of the CBEP is therefore to increase the research skills and proficiency of partner country scientists. The program aims to fulfill this mission by sponsoring scientific research projects that exercise the modern diagnostic techniques available in the CBEP-engaged laboratories and the enhanced disease surveillance/control programs. To strengthen the local scientists' ability to develop research ideas, write grant proposals, and conduct research independently, in-country CBEP integrating contractor personnel have mentored scientists across AJ and conducted workshops to address technical gaps. As a result of CBEP engagement, seven research projects developed and led by AJ scientists have been funded, and five projects are currently in various stages of implementation. The Defense Threat Reduction Agency has also sponsored AJ scientist participation at international scientific conferences to introduce and integrate them into the global scientific community. The efforts summarized in this review represent the first steps in an ongoing process that will ultimately provide AJ scientists with the skills and resources to plan and implement research projects of local and regional relevance.

  8. Characterization of Heat-treated Clay Minerals in the Context of Nuclear Waste Disposal

    NASA Astrophysics Data System (ADS)

    Matteo, E. N.; Wang, Y.; Kruichak, J. N.; Mills, M. M.

    2015-12-01

    Clay minerals are likely candidates to aid in nuclear waste isolation due to their low permeability, favorable swelling properties, and high cation sorption capacities. Establishing the thermal limit for clay minerals in a nuclear waste repository is a potentially important component of repository design, as flexibility of the heat load within the repository can have a major impact on the design that is selected. For example, the thermal limit plays a critical role in the time that waste packages would need to cool before being transferred to the repository. Understanding the chemical and physical changes, if any, that occur in clay minerals at various temperatures above the current thermal limit (of 100 °C) can provide decision-makers with information critical to evaluating the potential trade-offs of increasing the thermal limit within the repository. Most critical is gaining understanding of how varying thermal conditions in the repository will impact radionuclide sorption and transport in clay materials, either as engineered barriers or as disposal media. A variety of repository-relevant clay minerals (illite, mixed layer illite/smectite, and montmorillonite) were heated over a range of temperatures between 100 and 1000 °C. These samples were characterized to determine surface area, mineralogical alteration, and cation exchange capacity (CEC). Our results show that for conditions up to 500 °C, no significant change occurs, so long as the clay mineral remains mineralogically intact. At temperatures above 500 °C, transformation of the layered silicates into silica phases leads to alteration that impacts important clay characteristics. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND Number: SAND2015-6524 A

  9. DEVELOPMENT OF THE U.S. EPA HEALTH EFFECTS RESEARCH LABORATORY FROZEN BLOOD CELL REPOSITORY PROGRAM

    EPA Science Inventory

    In previous efforts, we suggested that proper blood cell freezing and storage is necessary in longitudinal studies with reduced between-test error, for specimen sharing between laboratories, and for convenient scheduling of assays. We continue to develop and upgrade programs for o...

  10. An Optimal Centralized Carbon Dioxide Repository for Florida, USA

    PubMed Central

    Poiencot, Brandon; Brown, Christopher

    2011-01-01

    For over a decade, the United States Department of Energy, and engineers, geologists, and scientists from all over the world have investigated the potential for reducing atmospheric carbon emissions through carbon sequestration. Numerous reports exist analyzing the potential for sequestering carbon dioxide at various sites around the globe, but none have identified the potential for a statewide system in Florida, USA. In 2005, 83% of Florida’s electrical energy was produced by natural gas, coal, or oil (e.g., fossil fuels), from power plants spread across the state. In addition, only limited research has been completed on evaluating optimal pipeline transportation networks to centralized carbon dioxide repositories. This paper describes the feasibility and preliminary locations for an optimal centralized Florida-wide carbon sequestration repository. Linear programming optimization modeling is used to plan and route an idealized pipeline network to existing Florida power plants. Further analysis of the subsurface geology in these general locations will provide insight into the suitability of the subsurface conditions and the available capacity for carbon sequestration at selected possible repository sites. The identification of the most favorable site(s) is also presented. PMID:21695024

  11. An optimal centralized carbon dioxide repository for Florida, USA.

    PubMed

    Poiencot, Brandon; Brown, Christopher

    2011-04-01

    For over a decade, the United States Department of Energy, and engineers, geologists, and scientists from all over the world have investigated the potential for reducing atmospheric carbon emissions through carbon sequestration. Numerous reports exist analyzing the potential for sequestering carbon dioxide at various sites around the globe, but none have identified the potential for a statewide system in Florida, USA. In 2005, 83% of Florida's electrical energy was produced by natural gas, coal, or oil (e.g., fossil fuels), from power plants spread across the state. In addition, only limited research has been completed on evaluating optimal pipeline transportation networks to centralized carbon dioxide repositories. This paper describes the feasibility and preliminary locations for an optimal centralized Florida-wide carbon sequestration repository. Linear programming optimization modeling is used to plan and route an idealized pipeline network to existing Florida power plants. Further analysis of the subsurface geology in these general locations will provide insight into the suitability of the subsurface conditions and the available capacity for carbon sequestration at selected possible repository sites. The identification of the most favorable site(s) is also presented.
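
    A toy transportation-style linear program in the spirit of the pipeline-routing optimization described above can be set up with scipy; the plants, costs, and capacities below are made-up numbers, not values from the study.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical annual CO2 captured at two plants and capacities of two candidate sites (Mt/yr).
supply = np.array([4.0, 6.0])          # Plant A, Plant B
capacity = np.array([7.0, 5.0])        # Site 1, Site 2
cost = np.array([[12.0, 20.0],         # transport cost per tonne, plant -> site
                 [18.0, 9.0]])

c = cost.flatten()                     # decision variables x[plant, site], flattened row-wise
A_eq = np.array([[1, 1, 0, 0],         # each plant ships all of its captured CO2
                 [0, 0, 1, 1]])
A_ub = np.array([[1, 0, 1, 0],         # each candidate repository respects its capacity
                 [0, 1, 0, 1]])

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=supply, bounds=(0, None))
print(res.x.reshape(2, 2))             # optimal plant-to-site shipments
print(res.fun)                         # minimum total transport cost
```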

  12. Nuclear waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1991-09-01

    Radioactive waste is mounting at U.S. nuclear power plants at a rate of more than 2,000 metric tons a year. Pursuant to statute, and anticipating that a geologic repository would be available in 1998, the Department of Energy (DOE) entered into disposal contracts with nuclear utilities. Now, however, DOE does not expect the repository to be ready before 2010. For this reason, DOE wants to develop a facility for monitored retrievable storage (MRS) by 1998. This book considers how best to store the waste until a repository is available. Congressional requesters asked GAO to review the alternatives of continued storage at utilities' reactor sites or transferring waste to an MRS facility. GAO assessed the likelihood of an MRS facility operating by 1998; the legal implications if DOE is not able to take delivery of wastes in 1998; the propriety of using the Nuclear Waste Fund (from which DOE's waste program costs are paid) to pay utilities for on-site storage capacity added after 1998; the ability of utilities to store their waste on-site until a repository is operating; and the relative costs and safety of the two storage alternatives.

  13. Review of DOE Waste Package Program. Semiannual report, October 1984-March 1985. Volume 8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, M.S.

    1985-12-01

    A large number of technical reports on waste package component performance were reviewed over the last year in support of the NRC's review of the Department of Energy's (DOE's) Environmental Assessment reports. The intent was to assess in some detail the quantity and quality of the DOE data and their relevance to the high-level waste repository site selection process. A representative selection of the reviews is presented for the salt, basalt, and tuff repository projects. Areas for future research have been outlined. 141 refs.

  14. Why we need a centralized repository for isotopic data

    USDA-ARS?s Scientific Manuscript database

    Stable isotopes encode the origin and integrate the history of matter; thus, their analysis offers tremendous potential to address questions across diverse scientific disciplines. Indeed, the broad applicability of stable isotopes, coupled with advancements in high-throughput analysis, have created ...

  15. Metabolomics Workbench: An international repository for metabolomics data and metadata, metabolite standards, protocols, tutorials and training, and analysis tools.

    PubMed

    Sud, Manish; Fahy, Eoin; Cotter, Dawn; Azam, Kenan; Vadivelu, Ilango; Burant, Charles; Edison, Arthur; Fiehn, Oliver; Higashi, Richard; Nair, K Sreekumaran; Sumner, Susan; Subramaniam, Shankar

    2016-01-04

    The Metabolomics Workbench, available at www.metabolomicsworkbench.org, is a public repository for metabolomics metadata and experimental data spanning various species and experimental platforms, metabolite standards, metabolite structures, protocols, tutorials, and training material and other educational resources. It provides a computational platform to integrate, analyze, track, deposit and disseminate large volumes of heterogeneous data from a wide variety of metabolomics studies including mass spectrometry (MS) and nuclear magnetic resonance spectrometry (NMR) data spanning over 20 different species covering all the major taxonomic categories including humans and other mammals, plants, insects, invertebrates and microorganisms. Additionally, a number of protocols are provided for a range of metabolite classes, sample types, and both MS and NMR-based studies, along with a metabolite structure database. The metabolites characterized in the studies available on the Metabolomics Workbench are linked to chemical structures in the metabolite structure database to facilitate comparative analysis across studies. The Metabolomics Workbench, part of the data coordinating effort of the National Institutes of Health (NIH) Common Fund's Metabolomics Program, provides data from the Common Fund's Metabolomics Resource Cores, metabolite standards, and analysis tools to the wider metabolomics community and seeks data depositions from metabolomics researchers across the world. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. Sequencing Data Discovery and Integration for Earth System Science with MetaSeek

    NASA Astrophysics Data System (ADS)

    Hoarfrost, A.; Brown, N.; Arnosti, C.

    2017-12-01

    Microbial communities play a central role in biogeochemical cycles. Sequencing data resources from environmental sources have grown exponentially in recent years, and represent a singular opportunity to investigate microbial interactions with Earth system processes. Carrying out such meta-analyses depends on our ability to discover and curate sequencing data into large-scale integrated datasets. However, such integration efforts are currently challenging and time-consuming, with sequencing data scattered across multiple repositories and metadata that is not easily or comprehensively searchable. MetaSeek is a sequencing data discovery tool that integrates sequencing metadata from all the major data repositories, allowing the user to search and filter on datasets in a lightweight application with an intuitive, easy-to-use web-based interface. Users can save and share curated datasets, while other users can browse these data integrations or use them as a jumping off point for their own curation. Missing and/or erroneous metadata are inferred automatically where possible, and where not possible, users are prompted to contribute to the improvement of the sequencing metadata pool by correcting and amending metadata errors. Once an integrated dataset has been curated, users can follow simple instructions to download their raw data and quickly begin their investigations. In addition to the online interface, the MetaSeek database is easily queryable via an open API, further enabling users and facilitating integrations of MetaSeek with other data curation tools. This tool lowers the barriers to curation and integration of environmental sequencing data, clearing the path forward to illuminating the ecosystem-scale interactions between biological and abiotic processes.

  17. Rolling Deck to Repository I: Designing a Database Infrastructure

    NASA Astrophysics Data System (ADS)

    Arko, R. A.; Miller, S. P.; Chandler, C. L.; Ferrini, V. L.; O'Hara, S. H.

    2008-12-01

    The NSF-supported academic research fleet collectively produces a large and diverse volume of scientific data, which are increasingly being shared across disciplines and contributed to regional and global syntheses. As both Internet connectivity and storage technology improve, it becomes practical for ships to routinely deliver data and documentation for a standard suite of underway instruments to a central shoreside repository. Routine delivery will facilitate data discovery and integration, quality assessment, cruise planning, compliance with funding agency and clearance requirements, and long-term data preservation. We are working collaboratively with ship operators and data managers to develop a prototype "data discovery system" for NSF-supported research vessels. Our goal is to establish infrastructure for a central shoreside repository, and to develop and test procedures for the routine delivery of standard data products and documentation to the repository. Related efforts are underway to identify tools and criteria for quality control of standard data products, and to develop standard interfaces and procedures for maintaining an underway event log. Development of a shoreside repository infrastructure will include: 1. Deployment and testing of a central catalog that holds cruise summaries and vessel profiles. A cruise summary will capture the essential details of a research expedition (operating institution, ports/dates, personnel, data inventory, etc.), as well as related documentation such as event logs and technical reports. A vessel profile will capture the essential details of a ship's installed instruments (manufacturer, model, serial number, reference location, etc.), with version control as the profile changes through time. The catalog's relational database schema will be based on the UNOLS Data Best Practices Committee's recommendations, and published as a formal XML specification. 2. Deployment and testing of a central repository that holds navigation and routine underway data. Based on discussion with ship operators and data managers at a workgroup meeting in September 2008, we anticipate that a subset of underway data could be delivered from ships to the central repository in near- realtime - enabling the integrated display of ship tracks at a public Web portal, for example - and a full data package could be delivered post-cruise by network transfer or disk shipment. Once ashore, data sets could be distributed to assembly centers such as the Shipboard Automated Meteorological and Oceanographic System (SAMOS) for routine processing, quality assessment, and synthesis efforts - as well as transmitted to national data centers such as NODC and NGDC for permanent archival. 3. Deployment and testing of a basic suite of Web services to make cruise summaries, vessel profiles, event logs, and navigation data easily available. A standard set of catalog records, maps, and navigation features will be published via the Open Archives Initiative (OAI) and Open Geospatial Consortium (OGC) protocols, which can then be harvested by partner data centers and/or embedded in client applications.
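
    One way to picture the proposed catalog is a small relational sketch linking cruise summaries to versioned vessel profiles; the table and column names below are illustrative and do not reproduce the UNOLS-recommended schema or the published XML specification.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE vessel_profile (
    profile_id   INTEGER PRIMARY KEY,
    vessel_name  TEXT NOT NULL,
    valid_from   TEXT NOT NULL,          -- version control as instrumentation changes
    instruments  TEXT NOT NULL           -- e.g. a JSON list of make/model/serial
);
CREATE TABLE cruise_summary (
    cruise_id    TEXT PRIMARY KEY,
    operator     TEXT NOT NULL,
    port_start   TEXT, port_end TEXT,
    date_start   TEXT, date_end TEXT,
    profile_id   INTEGER REFERENCES vessel_profile(profile_id)
);
""")
con.execute("INSERT INTO vessel_profile VALUES (1, 'R/V Example', '2008-01-01', '[\"GPS\",\"ADCP\"]')")
con.execute("INSERT INTO cruise_summary VALUES ('EX0801', 'Example Inst.', 'Woods Hole', 'Bermuda', '2008-06-01', '2008-06-20', 1)")
for row in con.execute("SELECT cruise_id, vessel_name FROM cruise_summary JOIN vessel_profile USING (profile_id)"):
    print(row)
```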

  18. Semantic Web repositories for genomics data using the eXframe platform

    PubMed Central

    2014-01-01

    Background: With the advent of inexpensive assay technologies, there has been an unprecedented growth in genomics data as well as the number of databases in which it is stored. In these databases, sample annotation using ontologies and controlled vocabularies is becoming more common. However, the annotation is rarely available as Linked Data, in a machine-readable format, or for standardized queries using SPARQL. This makes large-scale reuse, or integration with other knowledge bases, very difficult. Methods: To address this challenge, we have developed the second generation of our eXframe platform, a reusable framework for creating online repositories of genomics experiments. This second generation model now publishes Semantic Web data. To accomplish this, we created an experiment model that covers provenance, citations, external links, assays, biomaterials used in the experiment, and the data collected during the process. The elements of our model are mapped to classes and properties from various established biomedical ontologies. Resource Description Framework (RDF) data is automatically produced using these mappings and indexed in an RDF store with a built-in SPARQL Protocol and RDF Query Language (SPARQL) endpoint. Conclusions: Using the open-source eXframe software, institutions and laboratories can create Semantic Web repositories of their experiments, integrate them with heterogeneous resources and make them interoperable with the vast Semantic Web of biomedical knowledge. PMID:25093072
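
    As a rough sketch of the mapping step described above, the snippet below builds RDF triples for a hypothetical experiment with rdflib; the namespace, classes, and properties are illustrative stand-ins for the biomedical ontology mappings used by eXframe.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

# Hypothetical namespace standing in for the ontology classes/properties the model maps to.
EX = Namespace("http://example.org/exframe#")

g = Graph()
exp = URIRef("http://example.org/experiment/GSE0001")
g.add((exp, RDF.type, EX.GenomicsExperiment))
g.add((exp, EX.usesAssay, EX.RNASeq))
g.add((exp, EX.hasBiomaterial, Literal("CD34+ hematopoietic cells")))

# The serialized RDF is what an RDF store with a SPARQL endpoint would index.
print(g.serialize(format="turtle"))
```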

  19. Linking Big and Small Data Across the Social, Engineering, and Earth Sciences

    NASA Astrophysics Data System (ADS)

    Chen, R. S.; de Sherbinin, A. M.; Levy, M. A.; Downs, R. R.

    2014-12-01

    The challenges of sustainable development cut across the social, health, ecological, engineering, and Earth sciences, across a wide range of spatial and temporal scales, and across the spectrum from basic to applied research and decision making. The rapidly increasing availability of data and information in digital form from a variety of data repositories, networks, and other sources provides new opportunities to link and integrate both traditional data holdings as well as emerging "big data" resources in ways that enable interdisciplinary research and facilitate the use of objective scientific data and information in society. Taking advantage of these opportunities not only requires improved technical and scientific data interoperability across disciplines, scales, and data types, but also concerted efforts to bridge gaps and barriers between key communities, institutions, and networks. Given the long time perspectives required in planning sustainable approaches to development, it is also imperative to address user requirements for long-term data continuity and stewardship by trustworthy repositories. We report here on lessons learned by CIESIN working on a range of sustainable development issues to integrate data across multiple repositories and networks. This includes CIESIN's roles in developing policy-relevant climate and environmental indicators, soil data for African agriculture, and exposure and risk measures for hazards, disease, and conflict, as well as CIESIN's participation in a range of national and international initiatives related both to sustainable development and to open data access, interoperability, and stewardship.

  20. Accessing and integrating data and knowledge for biomedical research.

    PubMed

    Burgun, A; Bodenreider, O

    2008-01-01

    To review the issues that have arisen with the advent of translational research in terms of integration of data and knowledge, and to survey current efforts to address these issues. Using examples from the biomedical literature, we identified new trends in biomedical research and their impact on bioinformatics. We analyzed the requirements for effective knowledge repositories and studied issues in the integration of biomedical knowledge. New diagnostic and therapeutic approaches based on gene expression patterns have brought about new issues in the statistical analysis of data, and new workflows are needed to support translational research. Interoperable data repositories based on standard annotations, infrastructures and services are needed to support the pooling and meta-analysis of data, as well as their comparison to earlier experiments. High-quality, integrated ontologies and knowledge bases serve as a source of prior knowledge used in combination with traditional data mining techniques and contribute to the development of more effective data analysis strategies. As biomedical research evolves from traditional clinical and biological investigations towards omics sciences and translational research, specific needs have emerged, including integrating data collected in research studies with patient clinical data, linking omics knowledge with medical knowledge, modeling the molecular basis of diseases, and developing tools that support in-depth analysis of research data. As such, translational research illustrates the need to bridge the gap between bioinformatics and medical informatics, and opens new avenues for biomedical informatics research.

  1. International Approaches for Nuclear Waste Disposal in Geological Formations: Report on Fifth Worldwide Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faybishenko, Boris; Birkholzer, Jens; Persoff, Peter

    2016-09-01

    The goal of the Fifth Worldwide Review is to document evolution in the state-of-the-art of approaches for nuclear waste disposal in geological formations since the Fourth Worldwide Review, which was released in 2006. The ten years since the previous Worldwide Review have seen major developments in a number of nations throughout the world pursuing geological disposal programs, both in preparing and reviewing safety cases for the operational and long-term safety of proposed and operating repositories. The countries that are approaching implementation of geological disposal will increasingly focus on the feasibility of safely constructing and operating their repositories in the short and long term on the basis of existing regulations. The WWR-5 will also address a number of specific technical issues in safety case development along with the interplay among stakeholder concerns, technical feasibility, engineering design issues, and operational and post-closure safety. Preparation and publication of the Fifth Worldwide Review on nuclear waste disposal facilitates assessing the lessons learned and developing future cooperation between the countries. The Report provides scientific and technical experience on preparing for and developing scientific and technical bases for nuclear waste disposal in deep geologic repositories in terms of requirements, societal expectations and the adequacy of cases for long-term repository safety. The Chapters include potential issues that may arise as repository programs mature, and identify techniques that demonstrate the safety cases and aid in promoting and gaining societal confidence. The report will also be used to exchange experience with other fields of industry and technology, in which concepts similar to the design and safety cases are applied, as well as to facilitate public perception and understanding of the safety of the disposal approaches relative to risks that may increase over long time frames in the absence of a successful implementation of final dispositioning.

  2. PACOM: A Versatile Tool for Integrating, Filtering, Visualizing, and Comparing Multiple Large Mass Spectrometry Proteomics Data Sets.

    PubMed

    Martínez-Bartolomé, Salvador; Medina-Aunon, J Alberto; López-García, Miguel Ángel; González-Tejedo, Carmen; Prieto, Gorka; Navajas, Rosana; Salazar-Donate, Emilio; Fernández-Costa, Carolina; Yates, John R; Albar, Juan Pablo

    2018-04-06

    Mass-spectrometry-based proteomics has evolved into a high-throughput technology in which numerous large-scale data sets are generated from diverse analytical platforms. Furthermore, several scientific journals and funding agencies have emphasized the storage of proteomics data in public repositories to facilitate its evaluation, inspection, and reanalysis. (1) As a consequence, public proteomics data repositories are growing rapidly. However, tools are needed to integrate multiple proteomics data sets to compare different experimental features or to perform quality control analysis. Here, we present a new Java stand-alone tool, Proteomics Assay COMparator (PACOM), that is able to import, combine, and simultaneously compare numerous proteomics experiments to check the integrity of the proteomic data as well as verify data quality. With PACOM, the user can detect sources of error that may have been introduced in any step of a proteomics workflow and that influence the final results. Data sets can be easily compared and integrated, and data quality and reproducibility can be visually assessed through a rich set of graphical representations of proteomics data features as well as a wide variety of data filters. Its flexibility and easy-to-use interface make PACOM a unique tool for daily use in a proteomics laboratory. PACOM is available at https://github.com/smdb21/pacom.

  3. The FaceBase Consortium: A comprehensive program to facilitate craniofacial research

    PubMed Central

    Hochheiser, Harry; Aronow, Bruce J.; Artinger, Kristin; Beaty, Terri H.; Brinkley, James F.; Chai, Yang; Clouthier, David; Cunningham, Michael L.; Dixon, Michael; Donahue, Leah Rae; Fraser, Scott E.; Hallgrimsson, Benedikt; Iwata, Junichi; Klein, Ophir; Marazita, Mary L.; Murray, Jeffrey C.; Murray, Stephen; de Villena, Fernando Pardo-Manuel; Postlethwait, John; Potter, Steven; Shapiro, Linda; Spritz, Richard; Visel, Axel; Weinberg, Seth M.; Trainor, Paul A.

    2012-01-01

    The FaceBase Consortium consists of ten interlinked research and technology projects whose goal is to generate craniofacial research data and technology for use by the research community through a central data management and integrated bioinformatics hub. Funded by the National Institute of Dental and Craniofacial Research (NIDCR) and currently focused on studying the development of the middle region of the face, the Consortium will produce comprehensive datasets of global gene expression patterns, regulatory elements and sequencing; will generate anatomical and molecular atlases; will provide human normative facial data and other phenotypes; will conduct follow-up studies of a completed genome-wide association study; will generate independent data on the genetics of craniofacial development; will build repositories of animal models and of human samples and data for community access and analysis; and will develop software tools and animal models for analyzing, functionally testing, and integrating these data. The FaceBase website (http://www.facebase.org) will serve as a web home for these efforts, providing interactive tools for exploring these datasets, together with discussion forums and other services to support and foster collaboration within the craniofacial research community. PMID:21458441

  4. Integrating digital information for coastal and marine sciences

    USGS Publications Warehouse

    Marincioni, Fausto; Lightsom, Frances L.; Riall, Rebecca L.; Linck, Guthrie A.; Aldrich, Thomas C.; Caruso, Michael J.

    2004-01-01

    A pilot distributed geolibrary, the Marine Realms Information Bank (MRIB), was developed by the U.S. Geological Survey Coastal and Marine Geology Program and the Woods Hole Oceanographic Institution, to classify, integrate, and facilitate access to scientific information about oceans, coasts, and lakes. The MRIB is composed of a categorization scheme, a metadata database, and a specialized software backend, capable of drawing together information from remote sources without modifying their original format or content. Twelve facets are used to classify information: location, geologic time, feature type, biota, discipline, research method, hot topics, project, agency, author, content type, and file type. The MRIB approach allows easy and flexible organization of large or growing document collections for which centralized repositories would be impractical. Geographic searching based on the gazetteer and map interface is the centerpiece of the MRIB distributed geolibrary. The MRIB is one of a very few digital libraries that employ georeferencing -- a fundamentally different way to structure information from the traditional author/title/subject/keyword approach employed by most digital libraries. Lessons learned in developing the MRIB will be useful as other digital libraries confront the challenges of georeferencing.
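
    The twelve-facet classification can be pictured as a simple keyed record with facet-based filtering; the values and the match helper below are illustrative only, not MRIB's actual data structures.

```python
# Illustrative faceted record using the twelve MRIB facets listed above; values are made up.
record = {
    "location": "Cape Cod Bay",
    "geologic_time": "Holocene",
    "feature_type": "estuary",
    "biota": "eelgrass",
    "discipline": "coastal geology",
    "research_method": "side-scan sonar",
    "hot_topics": "sea-level rise",
    "project": "Example Mapping Project",
    "agency": "USGS",
    "author": "Example, A.",
    "content_type": "dataset",
    "file_type": "GeoTIFF",
}

def match(records, **facets):
    """Return records whose facet values match every requested facet."""
    return [r for r in records if all(r.get(k) == v for k, v in facets.items())]

print(match([record], discipline="coastal geology", agency="USGS"))
```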

  5. Earth Observation Data Quality Monitoring and Control: A Case Study of STAR Central Data Repository

    NASA Astrophysics Data System (ADS)

    Han, W.; Jochum, M.

    2017-12-01

    Earth observation data quality is very important for researchers and decision makers involved in weather forecasting, severe weather warning, disaster and emergency response, environmental monitoring, etc. Monitoring and controlling earth observation data quality, especially accuracy, completeness, and timeliness, is very useful in data management and governance to optimize data flow, discover potential transmission issues, and better connect data providers and users. Taking a centralized near real-time satellite data repository, the STAR (Center for Satellite Applications and Research of NOAA) Central Data Repository (SCDR), as an example, this paper describes how to develop new mechanisms to verify data integrity, check data completeness, and monitor data latency in an operational data management system. Such quality monitoring and control of large-volume satellite data helps data providers and managers improve the transmission of near real-time satellite data, enhance its acquisition and management, and overcome performance and management issues to better serve research and development activities.
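
    A minimal sketch of the three checks described above (integrity, completeness, latency) for files arriving in a near-real-time repository might look as follows; the function, thresholds, and use of MD5 checksums are assumptions for illustration, not SCDR's actual mechanism.

```python
import hashlib
import pathlib
from datetime import datetime, timezone

def md5(path: pathlib.Path) -> str:
    return hashlib.md5(path.read_bytes()).hexdigest()

def check_granule(path: pathlib.Path, expected_md5: str, observed_at: datetime,
                  max_latency_s: int = 3600) -> dict:
    """Report integrity, completeness, and timeliness for one incoming file."""
    received_at = datetime.now(timezone.utc)
    exists = path.exists()
    latency_s = (received_at - observed_at).total_seconds()
    return {
        "integrity_ok": exists and md5(path) == expected_md5,  # accuracy/integrity
        "complete": exists and path.stat().st_size > 0,        # completeness
        "latency_s": latency_s,                                 # timeliness
        "timely": latency_s <= max_latency_s,
    }
```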

  6. Protocols for Scholarly Communication

    NASA Astrophysics Data System (ADS)

    Pepe, A.; Yeomans, J.

    2007-10-01

    CERN, the European Organization for Nuclear Research, has operated an institutional preprint repository for more than 10 years. The repository contains over 850,000 records of which more than 450,000 are full-text OA preprints, mostly in the field of particle physics, and it is integrated with the library's holdings of books, conference proceedings, journals and other grey literature. In order to encourage effective propagation and open access to scholarly material, CERN is implementing a range of innovative library services into its document repository: automatic keywording, reference extraction, collaborative management tools and bibliometric tools. Some of these services, such as user reviewing and automatic metadata extraction, could make up an interesting testbed for future publishing solutions and certainly provide an exciting environment for e-science possibilities. The future protocol for scientific communication should guide authors naturally towards OA publication, and CERN wants to help reach a full open access publishing environment for the particle physics community and related sciences in the next few years.

  7. Geological repository for nuclear high level waste in France from feasibility to design within a legal framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voizard, Patrice; Mayer, Stefan; Ouzounian, Gerald

    Over the past 15 years, the French program on deep geologic disposal of high level and long-lived radioactive waste has benefited from a clear legal framework as the result of the December 30, 1991 French Waste Act. To fulfil its obligations stipulated in this law, ANDRA has submitted the 'Dossier 2005 Argile' (clay) and 'Dossier 2005 Granite' to the French Government. The first of those reports presents a concept for the underground disposal of nuclear waste at a specific clay site and focuses on a feasibility study. Knowledge of the host rock characteristics is based on the investigations carried out at the Meuse/Haute Marne Underground Research Laboratory. The repository concept addresses various issues, the most important of which relates to the large amount of waste, the clay host rock and the reversibility requirement. This phase has ended upon review and evaluation of the 'Dossier 2005' made by different organisations including the National Review Board, the National Safety Authority and the NEA International Review Team. By passing the 'new', June 28, 2006 Planning Act on the sustainable management of radioactive materials and waste, the French parliament has further defined a clear legal framework for future work. This June 28 Planning Act thus sets a schedule and defines the objectives for the next phase of repository design in requesting the submission of a construction authorization application by 2015. The law calls for the repository program to be in a position to commission disposal installations by 2025. (authors)

  8. Unified Database Development Program. Final Report.

    ERIC Educational Resources Information Center

    Thomas, Everett L., Jr.; Deem, Robert N.

    The objective of the unified database (UDB) program was to develop an automated information system that would be useful in the design, development, testing, and support of new Air Force aircraft weapon systems. Primary emphasis was on the development of: (1) a historical logistics data repository system to provide convenient and timely access to…

  9. Simulator sickness research program at NASA-Ames Research Center

    NASA Technical Reports Server (NTRS)

    Mccauley, Michael E.; Cook, Anthony M.

    1987-01-01

    The simulator sickness syndrome is receiving increased attention in the simulation community. NASA-Ames Research Center has initiated a program to facilitate the exchange of information on this topic among the tri-services and other interested government organizations. The program objectives are to identify priority research issues, promote efficient research strategies, serve as a repository of information, and disseminate information to simulator users.

  10. Formalization, Annotation and Analysis of Diverse Drug and Probe Screening Assay Datasets Using the BioAssay Ontology (BAO)

    PubMed Central

    Vempati, Uma D.; Przydzial, Magdalena J.; Chung, Caty; Abeyruwan, Saminda; Mir, Ahsan; Sakurai, Kunie; Visser, Ubbo; Lemmon, Vance P.; Schürer, Stephan C.

    2012-01-01

    Huge amounts of high-throughput screening (HTS) data for probe and drug development projects are being generated in the pharmaceutical industry and more recently in the public sector. The resulting experimental datasets are increasingly being disseminated via publicly accessible repositories. However, existing repositories lack sufficient metadata to describe the experiments and are often difficult to navigate by non-experts. The lack of standardized descriptions and semantics of biological assays and screening results hinders targeted data retrieval, integration, aggregation, and analyses across different HTS datasets, for example to infer mechanisms of action of small molecule perturbagens. To address these limitations, we created the BioAssay Ontology (BAO). BAO has been developed with a focus on data integration and analysis enabling the classification of assays and screening results by concepts that relate to format, assay design, technology, target, and endpoint. Previously, we reported on the higher-level design of BAO and on the semantic querying capabilities offered by the ontology-indexed triple store of HTS data. Here, we report on our detailed design, annotation pipeline, substantially enlarged annotation knowledgebase, and analysis results. We used BAO to annotate assays from the largest public HTS data repository, PubChem, and demonstrate its utility to categorize and analyze diverse HTS results from numerous experiments. BAO is publicly available from the NCBO BioPortal at http://bioportal.bioontology.org/ontologies/1533. BAO provides controlled terminology and uniform scope to report probe and drug discovery screening assays and results. BAO leverages description logic to formalize the domain knowledge and facilitate the semantic integration with diverse other resources. As a consequence, BAO offers the potential to infer new knowledge from a corpus of assay results, for example molecular mechanisms of action of perturbagens. PMID:23155465
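
    A small sketch of the kind of semantic query over ontology-annotated assay records that the abstract describes, using the rdflib library. The namespace, class names, and properties below are placeholders for illustration, not actual BAO terms, which would be looked up in BioPortal.

        from rdflib import Graph, Literal, Namespace, RDF

        # Placeholder namespace; real BAO IRIs differ and would be resolved via BioPortal.
        EX = Namespace("http://example.org/bao-sketch#")

        g = Graph()
        # Two toy assay annotations covering format, target, and endpoint, in the spirit of BAO facets.
        g.add((EX.assay1, RDF.type, EX.LuminescenceAssay))
        g.add((EX.assay1, EX.hasTarget, EX.KinaseX))
        g.add((EX.assay1, EX.hasEndpoint, Literal("IC50")))
        g.add((EX.assay2, RDF.type, EX.FluorescenceAssay))
        g.add((EX.assay2, EX.hasTarget, EX.GPCRY))
        g.add((EX.assay2, EX.hasEndpoint, Literal("percent inhibition")))

        # Retrieve all assays against a given target together with their reported endpoint.
        query = """
        PREFIX ex: <http://example.org/bao-sketch#>
        SELECT ?assay ?endpoint WHERE {
            ?assay ex:hasTarget ex:KinaseX ;
                   ex:hasEndpoint ?endpoint .
        }
        """
        for row in g.query(query):
            print(row.assay, row.endpoint)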

  11. Rolling Deck to Repository (R2R): Standards and Semantics for Open Access to Research Data

    NASA Astrophysics Data System (ADS)

    Arko, Robert; Carbotte, Suzanne; Chandler, Cynthia; Smith, Shawn; Stocks, Karen

    2015-04-01

    In recent years, a growing number of funding agencies and professional societies have issued policies calling for open access to research data. The Rolling Deck to Repository (R2R) program is working to ensure open access to the environmental sensor data routinely acquired by the U.S. academic research fleet. Currently 25 vessels deliver 7 terabytes of data to R2R each year, acquired from a suite of geophysical, oceanographic, meteorological, and navigational sensors on over 400 cruises worldwide. R2R is working to ensure these data are preserved in trusted repositories, discoverable via standard protocols, and adequately documented for reuse. R2R maintains a master catalog of cruises for the U.S. academic research fleet, currently holding essential documentation for over 3,800 expeditions including vessel and cruise identifiers, start/end dates and ports, project titles and funding awards, science parties, dataset inventories with instrument types and file formats, data quality assessments, and links to related content at other repositories. A Digital Object Identifier (DOI) is published for 1) each cruise, 2) each original field sensor dataset, 3) each post-field data product such as quality-controlled shiptrack navigation produced by the R2R program, and 4) each document such as a cruise report submitted by the science party. Scientists are linked to personal identifiers, such as the Open Researcher and Contributor ID (ORCID), where known. Using standard global identifiers such as DOIs and ORCIDs facilitates linking with journal publications and generation of citation metrics. Since its inception, the R2R program has worked in close collaboration with other data repositories in the development of shared semantics for oceanographic research. The R2R cruise catalog uses community-standard terms and definitions hosted by the NERC Vocabulary Server, and publishes ISO metadata records for each cruise that use community-standard profiles developed with the NOAA Data Centers and the EU SeaDataNet project. R2R is a partner in the Ocean Data Interoperability Platform (ODIP), working to strengthen links among regional and national data systems, as well as a lead partner in the EarthCube "GeoLink" project, developing a standard set of ontology design patterns for publishing research data using Semantic Web protocols.
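
    A brief sketch of how standard identifiers of the kind the abstract mentions (cruise and dataset DOIs, ORCIDs) can be carried in a cruise record and rendered as a data citation. The field names and identifier values are illustrative assumptions, not the R2R catalog schema.

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class CruiseRecord:
            """Illustrative cruise entry; field names are assumptions, not the R2R schema."""
            cruise_id: str
            cruise_doi: str
            vessel: str
            dates: str
            scientists: List[dict]        # each: {"name": ..., "orcid": ...}
            dataset_dois: List[str]

        def data_citation(rec: CruiseRecord, dataset_doi: str) -> str:
            """Render a simple citation string; any DOI resolves via https://doi.org/<doi>."""
            authors = "; ".join(s["name"] for s in rec.scientists)
            return (f"{authors} ({rec.dates}). Underway sensor data, cruise {rec.cruise_id}, "
                    f"R/V {rec.vessel}. https://doi.org/{dataset_doi}")

        # Hypothetical usage with made-up identifiers.
        rec = CruiseRecord(
            cruise_id="XX1234", cruise_doi="10.0000/example-cruise",
            vessel="Example Explorer", dates="2014",
            scientists=[{"name": "A. Scientist", "orcid": "0000-0000-0000-0000"}],
            dataset_dois=["10.0000/example-nav"])
        print(data_citation(rec, rec.dataset_dois[0]))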

  12. The NASA Scientific and Technical Information (STI) Program's Implementation of Open Archives Initiative (OAI) for Data Interoperability and Data Exchange

    NASA Technical Reports Server (NTRS)

    Rocker, JoAnne; Roncaglia, George J.; Heimerl, Lynn N.; Nelson, Michael L.

    2002-01-01

    Interoperability and data-exchange are critical for the survival of government information management programs. E-government initiatives are transforming the way the government interacts with the public. More information is to be made available through web-enabled technologies. Programs such as NASA's Scientific and Technical Information (STI) Program Office are tasked to find more effective ways to disseminate information to the public. The NASA STI Program is an agency-wide program charged with gathering, organizing, storing, and disseminating NASA-produced information for research and public use. The program is investigating the use of a new protocol called the Open Archives Initiative (OAI) as a means to improve data interoperability and data collection. OAI promotes the use of the OAI harvesting protocol as a simple way for data sharing among repositories. In two separate initiatives, the STI Program is implementing OAI. In collaboration with the Air Force, Department of Energy, and Old Dominion University, the NASA STI Program has funded research on implementing the OAI to exchange data between the three organizations. The second initiative is the deployment of OAI for the NASA technical report server (TRS) environment. The NASA TRS environment is comprised of distributed technical report servers with a centralized search interface. This paper focuses on the implementation of OAI to promote interoperability among diverse data repositories.
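
    A hedged sketch of OAI-PMH harvesting of the general kind the abstract describes: a ListRecords request is issued, Dublin Core titles are extracted, and resumption tokens are followed. The endpoint URL is hypothetical; the verb, namespaces, and token flow follow the public OAI-PMH specification.

        import urllib.parse
        import urllib.request
        import xml.etree.ElementTree as ET

        OAI_NS = {"oai": "http://www.openarchives.org/OAI/2.0/",
                  "dc": "http://purl.org/dc/elements/1.1/"}

        def harvest_titles(base_url, metadata_prefix="oai_dc"):
            """Walk an OAI-PMH ListRecords result set, following resumptionTokens, yielding dc:title values."""
            params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
            while True:
                url = base_url + "?" + urllib.parse.urlencode(params)
                with urllib.request.urlopen(url, timeout=30) as resp:
                    root = ET.fromstring(resp.read())
                for record in root.findall(".//oai:record", OAI_NS):
                    title = record.find(".//dc:title", OAI_NS)
                    if title is not None:
                        yield title.text
                token = root.find(".//oai:resumptionToken", OAI_NS)
                if token is None or not (token.text or "").strip():
                    break
                # Subsequent requests carry only the verb and the resumption token.
                params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

        # Hypothetical endpoint; any OAI-PMH data provider exposing Dublin Core would work.
        # for t in harvest_titles("https://example.org/oai"):
        #     print(t)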

  13. Development of anomaly detection models for deep subsurface monitoring

    NASA Astrophysics Data System (ADS)

    Sun, A. Y.

    2017-12-01

    Deep subsurface repositories are used for waste disposal and carbon sequestration. Monitoring deep subsurface repositories for potential anomalies is challenging, not only because the number of sensor networks and the quality of data are often limited, but also because of the lack of labeled data needed to train and validate machine learning (ML) algorithms. Although physical simulation models may be applied to predict anomalies (or, for that matter, the system's nominal state), the accuracy of such predictions may be limited by inherent conceptual and parameter uncertainties. The main objective of this study was to demonstrate the potential of data-driven models for leakage detection in carbon sequestration repositories. Monitoring data collected during an artificial CO2 release test at a carbon sequestration repository were used, which include both scalar time series (pressure) and vector time series (distributed temperature sensing). For each type of data, separate online anomaly detection algorithms were developed using the baseline experiment data (no leak) and then tested on the leak experiment data. Performance of a number of different online algorithms was compared. Results show the importance of including contextual information in the dataset to mitigate the impact of reservoir noise and reduce the false positive rate. The developed algorithms were integrated into a generic Web-based platform for real-time anomaly detection.
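
    A minimal sketch of an online anomaly detector in the spirit of the study: a rolling z-score on a scalar sensor stream, with a contextual driver removed by a simple linear baseline to reduce false positives. The window size, threshold, and context coefficient are illustrative, not taken from the cited experiments.

        from collections import deque
        import math

        class OnlineZScoreDetector:
            """Rolling z-score detector for a scalar sensor stream (e.g., downhole pressure).

            A contextual driver (e.g., injection rate) is removed with a simple linear
            baseline so that routine operational changes are not flagged as anomalies.
            """
            def __init__(self, window=100, threshold=4.0, context_gain=0.0):
                self.window = deque(maxlen=window)
                self.threshold = threshold
                self.context_gain = context_gain   # expected sensor change per unit of context

            def update(self, value, context=0.0):
                residual = value - self.context_gain * context
                if len(self.window) >= 10:
                    mean = sum(self.window) / len(self.window)
                    var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
                    std = math.sqrt(var) or 1e-9
                    is_anomaly = abs(residual - mean) / std > self.threshold
                else:
                    is_anomaly = False               # not enough baseline data yet
                self.window.append(residual)         # sketch: anomalous samples also enter the window
                return is_anomaly

        # Hypothetical usage: noisy baseline samples, then a pressure excursion that should be flagged.
        det = OnlineZScoreDetector(window=50, threshold=4.0, context_gain=0.5)
        stream = [(100.0 + 0.1 * (i % 5), 2.0) for i in range(60)] + [(130.0, 2.0)]
        flags = [det.update(p, ctx) for p, ctx in stream]
        print("anomaly at last sample:", flags[-1])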

  14. Calculating the quality of public high-throughput sequencing data to obtain a suitable subset for reanalysis from the Sequence Read Archive

    PubMed Central

    Nakazato, Takeru; Bono, Hidemasa

    2017-01-01

    It is important for public data repositories to promote the reuse of archived data. In the growing field of omics science, however, the increasing number of submissions of high-throughput sequencing (HTSeq) data to public repositories prevents users from choosing a suitable data set from among the large number of search results. Repository users need to be able to set a threshold to reduce the number of results to obtain a suitable subset of high-quality data for reanalysis. We calculated the quality of sequencing data archived in a public data repository, the Sequence Read Archive (SRA), by using the quality control software FastQC. We obtained quality values for 1 171 313 experiments, which can be used to evaluate the suitability of data for reuse. We also visualized the data distribution in SRA by integrating the quality information and metadata of experiments and samples. We provide quality information for all of the archived sequencing data, which enables users to obtain sequencing data of sufficient quality for reanalyses. The calculated quality data are available to the public in various formats. Our data also provide an example of enhancing the reuse of public data by adding metadata to published research data by a third party. PMID:28449062
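
    A short sketch of the threshold-based selection the abstract motivates: given per-experiment quality summaries, keep only accessions meeting a minimum mean Phred quality. The column names and values are assumptions for illustration, not the published table format.

        import csv
        import io

        def filter_by_quality(tsv_text, min_mean_quality=30.0):
            """Return SRA experiment accessions whose mean Phred quality meets the threshold."""
            reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
            return [row["accession"] for row in reader
                    if float(row["mean_quality"]) >= min_mean_quality]

        # Hypothetical per-experiment summary derived from FastQC output.
        table = """accession\tmean_quality
        SRX0000001\t36.2
        SRX0000002\t22.8
        SRX0000003\t31.5
        """.replace("        ", "")   # strip the indentation of this inline example
        print(filter_by_quality(table, min_mean_quality=30.0))   # ['SRX0000001', 'SRX0000003']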

  15. Role of geophysics in identifying and characterizing sites for high-level nuclear waste repositories.

    USGS Publications Warehouse

    Wynn, J.C.; Roseboom, E.H.

    1987-01-01

    Evaluation of potential high-level nuclear waste repository sites is an area where geophysical capabilities and limitations may significantly impact a major governmental program. Since there is concern that extensive exploratory drilling might degrade most potential disposal sites, geophysical methods become crucial as the only nondestructive means to examine large volumes of rock in three dimensions. Characterization of potential sites requires geophysicists to alter their usual mode of thinking: no longer are anomalies being sought, as in mineral exploration, but rather their absence. Thus the size of features that might go undetected by a particular method takes on new significance. Legal and regulatory considerations that stem from this different outlook, most notably the requirements of quality assurance (necessary for any data used in support of a repository license application), are forcing changes in the manner in which geophysicists collect and document their data. -Authors

  16. National Programs | Frederick National Laboratory for Cancer Research

    Cancer.gov

    The Frederick National Laboratory is a shared national resource that offers access to a suite of advanced biomedical technologies, provides selected science and technology services, and maintains vast repositories of research materials available

  17. National Programs | FNLCR Staging

    Cancer.gov

    The Frederick National Lab (FNL) is a shared national resource that offers access to a suite of advanced biomedical technologies, provides selected science and technology services, and maintains vast repositories of research materials available to bi

  18. The Nanomaterial Data Curation Initiative: A Collaborative Approach to Assessing, Evaluating, and Advancing the State of the Field

    EPA Science Inventory

    The Nanomaterial Data Curation Initiative (NDCI) explores the critical aspect of data curation within the development of informatics approaches to understanding nanomaterial behavior. Data repositories and tools for integrating and interrogating complex nanomaterial datasets are...

  19. SoyBase, The USDA-ARS Soybean Genetics and Genomics Database

    USDA-ARS?s Scientific Manuscript database

    SoyBase, the USDA-ARS soybean genetic database, is a comprehensive repository for professionally curated genetics, genomics and related data resources for soybean. SoyBase contains the most current genetic, physical and genomic sequence maps integrated with qualitative and quantitative traits. The...

  20. Future ice ages and the challenges related to final disposal of nuclear waste: The Greenland Ice Sheet Hydrology Project

    NASA Astrophysics Data System (ADS)

    Lehtinen, A.; Claesson-Liljedahl, L.; Näslund, J.-O.; Ruskeeniemi, T.

    2009-04-01

    A deep geological repository for nuclear waste is designed to keep radiotoxic material separated from mankind and the environment for several hundreds of thousands of years. Within this time perspective, glacial conditions are expected at high latitudes in Canada and northern Europe. Climate-induced changes such as the growth of ice sheets and permafrost will influence and alter the ground surface and subsurface environment, which may impact repository safety. In order to understand how climate change, particularly cooling and glaciation, might affect a repository in the long term, the use of present-day analogues helps to reduce the uncertainties and support the assumptions made in safety assessments. There are major uncertainties concerning hydrological processes related to glacial conditions. The impact of glaciations on any planned repository is a key consideration when performing safety assessments, as it is one of the strongest perturbations related to climate change in the long term. The main aspects that need to be further investigated include: 1) to what extent the meltwater produced by an ice sheet penetrates into the bedrock; 2) what the pressure situation is under an ice sheet, driving groundwater flow; 3) how much oxygenated water will reach repository depth; 4) to what depth glacial meltwater penetrates into the bedrock; 5) what chemical composition such water has when and if it reaches repository depth; and 6) whether taliks (unfrozen ground in a permafrost area) can act as concentrated discharge points of deep groundwater, potentially transporting radionuclides in case of repository failure. Field data are needed in order to achieve a better and integrated understanding of the problems discussed above. Thus, research at a natural analogue site in Greenland has been planned and initiated by the Finnish (Posiva), Swedish (SKB) and Canadian (NWMO) nuclear waste management companies. The Greenland ice sheet and the Kangerlussuaq area (west Greenland) provide a good analogue for this purpose due to similarities in geology (in the selected study area), and the climate conditions and ice sheet size in Kangerlussuaq resemble the expected conditions in Fennoscandia during future glaciations. In 2005 and 2008, reconnaissance field trips were made to Kangerlussuaq, which confirmed the suitability of the area for the planned studies. According to the present Work Programme, the investigations will be carried out in 2009-2012. The project is divided into four subprojects (SPA, SPB, SPC and SPD) addressing specific and different topics at or in relation to the ice margin: SPA (ice sheet hydrology and glacial groundwater formation); SPB (subglacial ice sheet hydrology); SPC (hydrogeochemistry and hydrogeology); and SPD (periglacial environment: biosphere and permafrost). The main objectives of SPA and SPB are to gain a better process understanding of supra- and subglacial hydrology. Qualitative and quantitative knowledge of the mechanisms, rates and distribution of the meltwater recharge through the ice down to the bed, the location and extent of warm-based areas, and the hydraulic pressure conditions at the base are the key issues to be studied. This will be done through meteorological observations, GPS measurements, radar surveys, drilling through the ice sheet and ice sheet modelling. SPC will further study the fate of meltwater by extending the investigations into the bedrock.
It is assumed that the high hydraulic pressures at the ice sheet bed force water into the fracture network prevailing in the bedrock. However, it is not known how the fracture network behaves under loading, what the proportion of recharging water is compared to the drainage through the bed sediments, what the intrusion depth is, how long the meltwater can sustain its oxic nature, and what chemical composition the recharging water has when and if it reaches repository depth (400-700 m). SPC seeks to answer these questions by drilling and instrumenting boreholes drilled into the bedrock and below the ice sheet. SPD aims at describing and studying processes acting in the periglacial environment affected by permafrost conditions. The observations will be used within the safety assessment biosphere programs. From the acquired results we will obtain data which will allow us to develop better conceptual and numerical models for quantitative analysis of ice sheet hydrology and dynamics, groundwater flow, groundwater chemistry and hydro-mechanical couplings during glacial periods, by reducing uncertainties and better constraining the boundary conditions used in the models. Finally, this project concerns the first in situ investigation of the vital parameters needed to achieve a holistic and realistic understanding of how an ice sheet may impact a deep geological repository for spent nuclear waste and will provide the necessary integrated view of ice sheet hydrology and groundwater flow/chemistry needed when executing safety assessments for the geological repositories in Sweden, Finland and Canada.

  1. Thermal-Hydraulic-Mechanical (THM) Coupled Simulation of a Generic Site for Disposal of High Level Nuclear Waste in Claystone in Germany: Exemplary Proof of the Integrity of the Geological Barrier

    NASA Astrophysics Data System (ADS)

    Massmann, J.; Ziefle, G.; Jobmann, M.

    2016-12-01

    Claystone is investigated as a potential host rock for the disposal of high level nuclear waste (HLW). In Germany, DBE TECHNOLOGY GmbH, the BGR and the "Gesellschaft für Anlagen- und Reaktorsicherheit (GRS)" are developing an integrated methodology for safety assessment within the R&D project "ANSICHT". One part herein is the demonstration of integrity of the geological barrier to ensure safe containment of radionuclides over 1 million years. The mechanical excavation of an underground repository, the exposure of claystone to atmospheric air, the insertion of backfill, buffer, sealing and supporting material as well as the deposition of heat-producing waste constitute a significant disturbance of the underground system. A complex interacting scheme of thermal, hydraulic and mechanical (THM) processes can be expected. In this work, the finite element software OpenGeoSys, mainly developed at the "Helmholtz Centre for Environmental Research GmbH (UFZ)", is used to simulate and evaluate several THM coupled effects in the repository surroundings up to the surface over a time span of 1 million years. The numerical setup is based on two generic geological models inspired by the representative geology of potentially suitable regions in North and South Germany. The results give an insight into the evolution of temperature, pore pressure, stresses as well as deformation, and enable statements concerning the extent of the significantly influenced area. One important effect among others is the temperature-driven change in the densities of the solid and liquid phases and its influence on the stress field. In a further step, integrity criteria have been quantified, based on specifications of the German federal ministry of the environment. The exemplary numerical evaluation of these criteria demonstrates how numerical simulations can be used to prove the integrity of the geological barrier and detect potential vulnerabilities. Fig.: Calculated zone of increased temperature (blue bubble) around a generic repository of HLW in a representative geological setting, 1000 years after emplacement of HLW.

  2. Mont Terri Underground Rock Laboratory, Switzerland-Research Program And Key Results

    NASA Astrophysics Data System (ADS)

    Nussbaum, C. O.; Bossart, P. J.

    2012-12-01

    Argillaceous formations generally act as aquitards because of their low hydraulic conductivities. This property, together with the large retention capacity of clays for cationic contaminants and the potential for self-sealing, has brought clay formations into focus as potential host rocks for the geological disposal of radioactive waste. Excavated in the Opalinus Clay formation, the Mont Terri underground rock laboratory in the Jura Mountains of NW Switzerland is an important international test site for researching clay formations. Research is carried out in the underground facility, which is located adjacent to the security gallery of the Mont Terri motorway tunnel. Fifteen partners from European countries, USA, Canada and Japan participate in the project. The objectives of the research program are to analyze the hydrogeological, geochemical and rock mechanical properties of the Opalinus Clay, to determine the changes induced by the excavation of galleries and by heating of the rock formation, to test sealing and container emplacement techniques and to evaluate and improve suitable investigation techniques. For the safety of deep geological disposal, it is of key importance to understand the processes occurring in the undisturbed argillaceous environment, as well as the processes in a disturbed system, during the operation of the repository. The objectives are related to: 1. Understanding processes and mechanisms in undisturbed clays and 2. Experiments related to repository-induced perturbations. Experiments of the first group are dedicated to: i) Improvement of drilling and excavation technologies and sampling methods; ii) Estimation of hydrogeological, rock mechanical and geochemical parameters of the undisturbed Opalinus Clay. Upscaling of parameters from laboratory to in situ scale; iii) Geochemistry of porewater and natural gases; evolution of porewater over time scales; iv) Assessment of long-term hydraulic transients associated with erosion and thermal scenarios and v) Evaluation of diffusion and retention parameters for long-lived radionuclides. Experiments related to repository-induced perturbations are focused on: i) Influence of rock liner on the disposal system and the buffering potential of the host rock; ii) Self-sealing processes in the excavation damaged zone; iii) Hydro-mechanical coupled processes (e.g. stress redistributions and pore pressure evolution during excavation); iv) Thermo-hydro-mechanical-chemical coupled processes (e.g. heating of bentonite and host rock) and v) Gas-induced transport of radionuclides in porewater and along interfaces in the engineered barrier system. A third research direction is to demonstrate the feasibility of repository construction and long-term safety after repository closure. Demonstration experiments can contribute to improving the reliability of the scientific basis for the safety assessment of future geological repositories, particularly if they are performed on a large scale and with a long duration. These experiments include the construction and installation of engineered barriers on a 1:1 scale: i) Horizontal emplacement of canisters; ii) Evaluation of the corrosion of container materials; repository re-saturation; iii) Sealing of boreholes and repository access tunnels and iv) Long-term monitoring of the repository. References Bossart, P. & Thury, M. (2008): Mont Terri Rock Laboratory. Project, Programme 1996 to 2007 and Results. - Rep. Swiss Geol. Surv. 3.

  3. Analytical model for screening potential CO2 repositories

    USGS Publications Warehouse

    Okwen, R.T.; Stewart, M.T.; Cunningham, J.A.

    2011-01-01

    Assessing potential repositories for geologic sequestration of carbon dioxide using numerical models can be complicated, costly, and time-consuming, especially when faced with the challenge of selecting a repository from a multitude of potential repositories. This paper presents a set of simple analytical equations (model), based on the work of previous researchers, that could be used to evaluate the suitability of candidate repositories for subsurface sequestration of carbon dioxide. We considered the injection of carbon dioxide at a constant rate into a confined saline aquifer via a fully perforated vertical injection well. The validity of the analytical model was assessed via comparison with the TOUGH2 numerical model. The metrics used in comparing the two models include (1) spatial variations in formation pressure and (2) vertically integrated brine saturation profile. The analytical model and TOUGH2 show excellent agreement in their results when similar input conditions and assumptions are applied in both. The analytical model neglects capillary pressure and the pressure dependence of fluid properties. However, simulations in TOUGH2 indicate that little error is introduced by these simplifications. Sensitivity studies indicate that the agreement between the analytical model and TOUGH2 depends strongly on (1) the residual brine saturation, (2) the difference in density between carbon dioxide and resident brine (buoyancy), and (3) the relationship between relative permeability and brine saturation. The results achieved suggest that the analytical model is valid when the relationship between relative permeability and brine saturation is linear or quasi-linear and when the irreducible saturation of brine is zero or very small. © 2011 Springer Science+Business Media B.V.

  4. Midwest FreightView and the Great Lakes Maritime Information Delivery System : a resource for the regional analysis of intermodal freight flows.

    DOT National Transportation Integrated Search

    2011-03-01

    Midwest FreightView and the Great Lakes Maritime Information Delivery System is a comprehensive data repository and information clearinghouse in support of Great Lakes maritime commerce. This multifunctional resource integrated in a geographic info...

  5. Investigating the Thermal Limit of Clay Minerals for Applications in Nuclear Waste Repository Design

    NASA Astrophysics Data System (ADS)

    Matteo, E. N.; Miller, A. W.; Kruichak, J.; Mills, M.; Tellez, H.; Wang, Y.

    2013-12-01

    Clay minerals are likely candidates to aid in nuclear waste isolation due to their low permeability, favorable swelling properties, and high cation sorption capacities. Establishing the thermal limit for clay minerals in a nuclear waste repository is a potentially important component of repository design, as flexibility of the heat load within the repository can have a major impact on the selection of repository design. For example, the thermal limit plays a critical role in the time that waste packages would need to cool before being transferred to the repository. Understanding the chemical and physical changes that occur in clay minerals at various temperatures above the current thermal limit (of 100 °C) can provide decision-makers with information critical to evaluating the potential trade-offs of increasing the thermal limit within the repository. Most critical is gaining understanding of how varying thermal conditions in the repository will impact radionuclide sorption and transport in clay materials either as engineered barriers or as disposal media. A variety of clays (illite, mixed layer illite/smectite, montmorillonite, and palygorskite) were heated for a range of temperatures between 100-500 °C. These samples were characterized by a variety of methods, including nitrogen adsorption, x-ray diffraction, thermogravimetric analysis, barium chloride exchange for cation exchange capacity (CEC), and iodide sorption. The nitrogen porosimetry shows that for all the clays, thermally-induced changes in BET surface area are dominated by collapse/creation of the microporosity, i.e. pore diameters < 17 angstroms. Changes in microporosity (relative to no heat treatment) are most significant for heat treatments of 300 °C and above. Alterations are also seen in the chemical properties (CEC, XRD, iodide sorption) of clays, and like pore size distribution changes, are most significant above 300 °C. Overall, the results imply that changes seen in pore size distribution correlate with cation exchange capacity and cation exchange processes. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND Number: 2013-6352A.

  6. DSA-WDS Common Requirements: Developing a New Core Data Repository Certification

    NASA Astrophysics Data System (ADS)

    Minster, J. B. H.; Edmunds, R.; L'Hours, H.; Mokrane, M.; Rickards, L.

    2016-12-01

    The Data Seal of Approval (DSA) and the International Council for Science - World Data System (ICSU-WDS) have both developed minimally intensive core certification standards whereby digital repositories supply evidence that they are trustworthy and have a long-term outlook. Both DSA and WDS applicants have found core certification to be beneficial: building stakeholder confidence, enhancing the repository's reputation, and demonstrating that it is following good practices; as well as stimulating the repository to focus on processes and procedures, thereby achieving ever higher levels of professionalism over time. The DSA and WDS core certifications evolved independently, initially serving different communities, but both initiatives are multidisciplinary with catalogues of criteria and review procedures based on the same principles. Hence, to realize efficiencies, simplify assessment options, stimulate more certifications, and increase impact on the community, the Repository Audit and Certification DSA-WDS Partnership Working Group (WG) was established under the umbrella of the Research Data Alliance (RDA). The WG conducted a side-by-side analysis of both frameworks to unify the wording and criteria, ultimately leading to a harmonized Catalogue of Common Requirements for core certification of repositories, as well as a set of Common Procedures for their assessment. This presentation will focus on the collaborative effort by DSA and WDS to establish (1) a testbed comprising DSA and WDS certified data repositories to validate both the new Catalogue and Procedures, and (2) a joint Certification Board towards their practical implementation. We will describe:
    • The purpose and methodology of the testbed, including selection of repositories to be assessed against the common standard.
    • The results of the testbed, with an in-depth look at some of the comments received and issues highlighted.
    • General insights gained from evaluating the testbed results, the subsequent changes to the Common Requirements and Procedures, and an assessment of the success of these enhancements.
    • Steps by the two organizations to integrate the Common Certification into their tools and systems. In particular, the creation of Terms of Reference for the nascent DSA-WDS Certification Board.

  7. iReceptor: A platform for querying and analyzing antibody/B-cell and T-cell receptor repertoire data across federated repositories.

    PubMed

    Corrie, Brian D; Marthandan, Nishanth; Zimonja, Bojan; Jaglale, Jerome; Zhou, Yang; Barr, Emily; Knoetze, Nicole; Breden, Frances M W; Christley, Scott; Scott, Jamie K; Cowell, Lindsay G; Breden, Felix

    2018-07-01

    Next-generation sequencing allows the characterization of the adaptive immune receptor repertoire (AIRR) in exquisite detail. These large-scale AIRR-seq data sets have rapidly become critical to vaccine development, understanding the immune response in autoimmune and infectious disease, and monitoring novel therapeutics against cancer. However, at present there is no easy way to compare these AIRR-seq data sets across studies and institutions. The ability to combine and compare information for different disease conditions will greatly enhance the value of AIRR-seq data for improving biomedical research and patient care. The iReceptor Data Integration Platform (gateway.ireceptor.org) provides one implementation of the AIRR Data Commons envisioned by the AIRR Community (airr-community.org), an initiative that is developing protocols to facilitate sharing and comparing AIRR-seq data. The iReceptor Scientific Gateway links distributed (federated) AIRR-seq repositories, allowing sequence searches or metadata queries across multiple studies at multiple institutions, returning sets of sequences fulfilling specific criteria. We present a review of the development of iReceptor, and how it fits in with the general trend toward sharing genomic and health data, and the development of standards for describing and reporting AIRR-seq data. Researchers interested in integrating their repositories of AIRR-seq data into the iReceptor Platform are invited to contact support@ireceptor.org. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
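
    A hedged sketch of a federated metadata query of the general kind the platform supports: the same filter is posted to several repository endpoints and the returned repertoire lists are pooled. The endpoint URLs are hypothetical and the filter fields are assumptions modeled on AIRR-style metadata; this is not a definitive client for the iReceptor API.

        import json
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        # Hypothetical repository endpoints; real services define their own base URLs.
        REPOSITORIES = [
            "https://repo-a.example.org/airr/v1/repertoire",
            "https://repo-b.example.org/airr/v1/repertoire",
        ]

        # Example metadata filter: repertoires from a given disease condition (field name assumed).
        QUERY = {"filters": {"op": "=",
                             "content": {"field": "subject.diagnosis.disease_diagnosis",
                                         "value": "multiple sclerosis"}}}

        def query_repository(url, query):
            """POST one metadata query to a single repository and return its parsed JSON reply."""
            req = urllib.request.Request(
                url, data=json.dumps(query).encode("utf-8"),
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req, timeout=60) as resp:
                return json.load(resp)

        def federated_query(urls, query):
            """Fan the same query out to all repositories in parallel and pool the repertoire lists."""
            with ThreadPoolExecutor(max_workers=len(urls)) as pool:
                replies = pool.map(lambda u: query_repository(u, query), urls)
            pooled = []
            for reply in replies:
                pooled.extend(reply.get("Repertoire", []))
            return pooled

        # print(len(federated_query(REPOSITORIES, QUERY)))   # requires reachable endpoints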

  8. Database Resources of the BIG Data Center in 2018

    PubMed Central

    Xu, Xingjian; Hao, Lili; Zhu, Junwei; Tang, Bixia; Zhou, Qing; Song, Fuhai; Chen, Tingting; Zhang, Sisi; Dong, Lili; Lan, Li; Wang, Yanqing; Sang, Jian; Hao, Lili; Liang, Fang; Cao, Jiabao; Liu, Fang; Liu, Lin; Wang, Fan; Ma, Yingke; Xu, Xingjian; Zhang, Lijuan; Chen, Meili; Tian, Dongmei; Li, Cuiping; Dong, Lili; Du, Zhenglin; Yuan, Na; Zeng, Jingyao; Zhang, Zhewen; Wang, Jinyue; Shi, Shuo; Zhang, Yadong; Pan, Mengyu; Tang, Bixia; Zou, Dong; Song, Shuhui; Sang, Jian; Xia, Lin; Wang, Zhennan; Li, Man; Cao, Jiabao; Niu, Guangyi; Zhang, Yang; Sheng, Xin; Lu, Mingming; Wang, Qi; Xiao, Jingfa; Zou, Dong; Wang, Fan; Hao, Lili; Liang, Fang; Li, Mengwei; Sun, Shixiang; Zou, Dong; Li, Rujiao; Yu, Chunlei; Wang, Guangyu; Sang, Jian; Liu, Lin; Li, Mengwei; Li, Man; Niu, Guangyi; Cao, Jiabao; Sun, Shixiang; Xia, Lin; Yin, Hongyan; Zou, Dong; Xu, Xingjian; Ma, Lina; Chen, Huanxin; Sun, Yubin; Yu, Lei; Zhai, Shuang; Sun, Mingyuan; Zhang, Zhang; Zhao, Wenming; Xiao, Jingfa; Bao, Yiming; Song, Shuhui; Hao, Lili; Li, Rujiao; Ma, Lina; Sang, Jian; Wang, Yanqing; Tang, Bixia; Zou, Dong; Wang, Fan

    2018-01-01

    The BIG Data Center at Beijing Institute of Genomics (BIG) of the Chinese Academy of Sciences provides freely open access to a suite of database resources in support of worldwide research activities in both academia and industry. With the vast amounts of omics data generated at ever-greater scales and rates, the BIG Data Center is continually expanding, updating and enriching its core database resources through big-data integration and value-added curation, including BioCode (a repository archiving bioinformatics tool codes), BioProject (a biological project library), BioSample (a biological sample library), Genome Sequence Archive (GSA, a data repository for archiving raw sequence reads), Genome Warehouse (GWH, a centralized resource housing genome-scale data), Genome Variation Map (GVM, a public repository of genome variations), Gene Expression Nebulas (GEN, a database of gene expression profiles based on RNA-Seq data), Methylation Bank (MethBank, an integrated databank of DNA methylomes), and Science Wikis (a series of biological knowledge wikis for community annotations). In addition, three featured web services are provided, viz., BIG Search (search as a service; a scalable inter-domain text search engine), BIG SSO (single sign-on as a service; a user access control system to gain access to multiple independent systems with a single ID and password) and Gsub (submission as a service; a unified submission service for all relevant resources). All of these resources are publicly accessible through the home page of the BIG Data Center at http://bigd.big.ac.cn. PMID:29036542

  9. Accessing and Integrating Data and Knowledge for Biomedical Research

    PubMed Central

    Burgun, A.; Bodenreider, O.

    2008-01-01

    Objectives: To review the issues that have arisen with the advent of translational research in terms of integration of data and knowledge, and survey current efforts to address these issues. Methods: Using examples from the biomedical literature, we identified new trends in biomedical research and their impact on bioinformatics. We analyzed the requirements for effective knowledge repositories and studied issues in the integration of biomedical knowledge. Results: New diagnostic and therapeutic approaches based on gene expression patterns have brought about new issues in the statistical analysis of data, and new workflows are needed to support translational research. Interoperable data repositories based on standard annotations, infrastructures and services are needed to support the pooling and meta-analysis of data, as well as their comparison to earlier experiments. High-quality, integrated ontologies and knowledge bases serve as a source of prior knowledge used in combination with traditional data mining techniques and contribute to the development of more effective data analysis strategies. Conclusion: As biomedical research evolves from traditional clinical and biological investigations towards omics sciences and translational research, specific needs have emerged, including integrating data collected in research studies with patient clinical data, linking omics knowledge with medical knowledge, modeling the molecular basis of diseases, and developing tools that support in-depth analysis of research data. As such, translational research illustrates the need to bridge the gap between bioinformatics and medical informatics, and opens new avenues for biomedical informatics research. PMID:18660883

  10. Rolling Deck to Repository (R2R): Building the Data Pipeline - Initial Results

    NASA Astrophysics Data System (ADS)

    Arko, R. A.; Clark, P. D.; Rioux, M. A.; McGovern, T. M.; Deering, T. W.; Hagg, R. K.; Payne, A. A.; Fischman, D. E.; Ferrini, V.

    2009-12-01

    The NSF-funded Rolling Deck to Repository (R2R) project is working with U.S. academic research vessel operators to ensure the documentation and preservation of data from routine “underway” (meteorological, geophysical, and oceanographic) sensor systems. A standard pipeline is being developed in which data are submitted by vessel operators directly to a central repository; inventoried in an integrated fleet-wide catalog; organized into discrete data sets with persistent unique identifiers; associated with essential cruise-level metadata; and delivered to the National Data Centers for archiving and dissemination. Several vessels including Atlantis, Healy, Hugh R. Sharp, Ka'imikai-O-Kanaloa, Kilo Moana, Knorr, Marcus G. Langseth, Melville, Oceanus, Roger Revelle, and Thomas G. Thompson began submitting data and documentation to R2R during the project’s pilot phase, and a repository infrastructure has been established. Cruise metadata, track maps, and data inventories are published at the R2R Web portal, with controlled vocabularies drawn from community standards (e.g. International Council for the Exploration of the Sea (ICES) ship codes). A direct connection has been established to the University-National Oceanographic Laboratory System (UNOLS) Ship Time Request and Scheduling System (STRS) via Web services to synchronize port codes and cruise schedules. A secure portal is being developed where operators may login to upload sailing orders, review data inventories, and create vessel profiles. R2R has established a standard procedure for submission of data to the National Geophysical Data Center (NGDC) that incorporates persistent unique identifiers for cruises, data sets, and individual files, using multibeam data as a test bed. Once proprietary holds are cleared and a data set is delivered to NGDC, the R2R catalog record is updated with the URL for direct download and it becomes immediately available to integration and synthesis projects such as the NSF-funded Global Multi-Resolution Topography (GMRT) synthesis. Similar procedures will be developed for delivery of data to other National Data Centers as appropriate.

  11. Hydrologic and geologic characteristics of the Yucca Mountain site relevant to the performance of a potential repository

    USGS Publications Warehouse

    Levich, R.A.; Linden, R.M.; Patterson, R.L.; Stuckless, J.S.

    2000-01-01

    Yucca Mountain, located ~100 mi northwest of Las Vegas, Nevada, has been designated by Congress as a site to be characterized for a potential mined geologic repository for high-level radioactive waste. This field trip will examine the regional geologic and hydrologic setting for Yucca Mountain, as well as specific results of the site characterization program. The first day focuses on the regional setting with emphasis on current and paleo hydrology, which are both of critical concern for predicting future performance of a potential repository. Morning stops will be in southern Nevada and afternoon stops will be in Death Valley. The second day will be spent at Yucca Mountain. The field trip will visit the underground testing sites in the "Exploratory Studies Facility" and the "Busted Butte Unsaturated Zone Transport Field Test" plus several surface-based testing sites. Much of the work at the site has concentrated on studies of the unsaturated zone, an element of the hydrologic system that historically has received little attention. Discussions during the second day will comprise selected topics of Yucca Mountain geology, hydrology and geochemistry and will include the probabilistic volcanic hazard analysis and the seismicity and seismic hazard in the Yucca Mountain area. Evening discussions will address modeling of regional groundwater flow, the results of recent hydrologic studies by the Nye County Nuclear Waste Program Office, and the relationship of the geology and hydrology of Yucca Mountain to the performance of a potential repository. Day 3 will examine the geologic framework and hydrology of the Pahute Mesa-Oasis Valley Groundwater Basin and then will continue to Reno via Hawthorne, Nevada and the Walker Lake area.

  12. Multidisciplinary hydrologic investigations at Yucca Mountain, Nevada

    USGS Publications Warehouse

    Dudley, William W.

    1990-01-01

    Future climatic conditions and tectonic processes have the potential to cause significant changes of the hydrologic system in the southern Great Basin, where a nuclear-waste repository is proposed for construction above the water table at Yucca Mountain, Nevada. Geothermal anomalies in the vicinity of Yucca Mountain probably result from the local and regional transport of heat by ground-water flow. Regionally and locally irregular patterns of hydraulic potential, local marsh and pond deposits, and calcite veins in faults and fractures probably are related principally to climatically imposed hydrologic conditions within the geologic and topographic framework. However, tectonic effects on the hydrologic system have also been proposed as the causes of these features, and existing data limitations preclude a full evaluation of these competing hypotheses. A broad program that integrates many disciplines of earth science is required in order to understand the relation of hydrology to past, present and future climates and tectonism.

  13. Centered On The Pole: NCEI Interdisciplinary Arctic Data Stewardship

    NASA Astrophysics Data System (ADS)

    Zweng, M.

    2016-02-01

    In 2014, NOAA's data centers (National Climatic Data Center, National Oceanographic Data Center, National Geophysical Data Center and its affiliated program within the National Snow and Ice Data Center, and the National Coastal Data Development Center) merged to form NCEI, the National Centers for Environmental Information- the largest repository of publicly accessible earth system science data in the world. The merger has forced a reconciling of different workflows, data types, and cultures. However, the Arctic has emerged as a common area where the different centers can integrate their expertise, data assets, and services, and use this information to better align the entire organization. The centers face a unique challenge as they move forward: how to archive, steward and provide access to environmental data to fulfil their mission of providing the best information to help protect life and property. A pressing national need for information that supports policy decisions drives our work.

  14. A Rich Metadata Filesystem for Scientific Data

    ERIC Educational Resources Information Center

    Bui, Hoang

    2012-01-01

    As scientific research becomes more data intensive, there is an increasing need for scalable, reliable, and high performance storage systems. Such data repositories must provide both data archival services and rich metadata, and cleanly integrate with large scale computing resources. ROARS is a hybrid approach to distributed storage that provides…

  15. Software Hardware Asset Reuse Enterprise (SHARE) Repository Framework Final Report: Component Specification and Ontology

    DTIC Science & Technology

    2009-08-19

    SSDS: Ship Self Defense System; TSTS: Total Ship Training System; UDDI: Universal Description, Discovery, and Integration; UML: Unified Modeling Language. The record also includes a truncated XML schema fragment declaring a "ContractorOrganization" element of type "ContractorOrganizationType", with documentation beginning "Identifies a contractor organization responsible for the…".

  16. Intestinal meta-transcriptome comparison reveals disparate antiviral transcriptional response and its association with Mitochondria in chicken immunity development

    USDA-ARS?s Scientific Manuscript database

    Background: Availability of a large number of data sets in public repositories and the advances in integrating multi-omics methods have greatly advanced our understanding of biological organisms and microbial associates, as well as large subcellular organelles, such as mitochondria. Mitochondrial ...

  17. Gene Expression Browser: Large-Scale and Cross-Experiment Microarray Data Management, Search & Visualization

    USDA-ARS?s Scientific Manuscript database

    The amount of microarray gene expression data in public repositories has been increasing exponentially for the last couple of decades. High-throughput microarray data integration and analysis has become a critical step in exploring the large amount of expression data for biological discovery. Howeve...

  18. Using Semantic Templates to Study Vulnerabilities Recorded in Large Software Repositories

    ERIC Educational Resources Information Center

    Wu, Yan

    2011-01-01

    Software vulnerabilities allow an attacker to reduce a system's Confidentiality, Availability, and Integrity by exposing information, executing malicious code, and undermining system functionalities that contribute to the overall system purpose and need. With new vulnerabilities discovered every day in a variety of applications and user environments,…

  19. Tank 241-AY-102 Leak Assessment Supporting Documentation: Miscellaneous Reports, Letters, Memoranda, And Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engeman, J. K.; Girardot, C. L.; Harlow, D. G.

    2012-12-20

    This report contains reference materials cited in RPP-ASMT-53793, Tank 241-AY-102 Leak Assessment Report, that were obtained from the National Archives Federal Records Repository in Seattle, Washington, or from other sources including the Hanford Site's Integrated Data Management System database (IDMS).

  20. Incorporation of Bioinformatics Exercises into the Undergraduate Biochemistry Curriculum

    ERIC Educational Resources Information Center

    Feig, Andrew L.; Jabri, Evelyn

    2002-01-01

    The field of bioinformatics is developing faster than most biochemistry textbooks can adapt. Supplementing the undergraduate biochemistry curriculum with data-mining exercises is an ideal way to expose the students to the common databases and tools that take advantage of this vast repository of biochemical information. An integrated collection of…

  1. Long-Term Modeling of Coupled Processes in a Generic Salt Repository for Heat-Generating Nuclear Waste: Analysis of the Impacts of Halite Solubility Constraints

    NASA Astrophysics Data System (ADS)

    Blanco Martin, L.; Rutqvist, J.; Battistelli, A.; Birkholzer, J. T.

    2015-12-01

    Rock salt is a potential medium for the underground disposal of nuclear waste because it has several assets, such as its ability to creep and heal fractures and its water and gas tightness in the undisturbed state. In this research, we focus on disposal of heat-generating nuclear waste and we consider a generic salt repository with in-drift emplacement of waste packages and crushed salt backfill. As the natural salt creeps, the crushed salt backfill gets progressively compacted and an engineered barrier system is subsequently created [1]. The safety requirements for such a repository impose that long time scales be considered, during which the integrity of the natural and engineered barriers have to be demonstrated. In order to evaluate this long-term integrity, we perform numerical modeling based on state-of-the-art knowledge. Here, we analyze the impacts of halite dissolution and precipitation within the backfill and the host rock. For this purpose, we use an enhanced equation-of-state module of TOUGH2 that properly includes temperature-dependent solubility constraints [2]. We perform coupled thermal-hydraulic-mechanical modeling and we investigate the influence of the mentioned impacts. The TOUGH-FLAC simulator, adapted for large strains and creep, is used [3]. In order to quantify the importance of salt dissolution and precipitation on the effective porosity, permeability, pore pressure, temperature and stress field, we compare numerical results that include or disregard fluids of variable salinity. The sensitivity of the results to some parameters, such as the initial saturation within the backfill, is also addressed. References: [1] Bechthold, W. et al. Backfilling and Sealing of Underground Repositories for Radioactive Waste in Salt (BAMBUS II Project). Report EUR20621 EN: European Atomic Energy Community, 2004. [2] Battistelli A. Improving the treatment of saline brines in EWASG for the simulation of hydrothermal systems. Proceedings, TOUGH Symposium 2012, Lawrence Berkeley National Laboratory, Berkeley, California, Sept. 17-19, 2012. [3] Blanco-Martín L, Rutqvist J, Birkholzer JT. Long-term modelling of the thermal-hydraulic-mechanical response of a generic salt repository for heat generating nuclear waste. Eng Geol 2015;193:198-211. doi:10.1016/j.enggeo.2015.04.014.

  2. Geologic Framework Model Analysis Model Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. Clayton

    2000-12-19

    The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and the repository design. These downstream models include the hydrologic flow models and the radionuclide transport models. All the models and the repository design, in turn, will be incorporated into the Total System Performance Assessment (TSPA) of the potential radioactive waste repository block and vicinity to determine the suitability of Yucca Mountain as a host for the repository. The interrelationship of the three components of the ISM and their interface with downstream uses are illustrated in Figure 2.

  3. Calculating the quality of public high-throughput sequencing data to obtain a suitable subset for reanalysis from the Sequence Read Archive.

    PubMed

    Ohta, Tazro; Nakazato, Takeru; Bono, Hidemasa

    2017-06-01

    It is important for public data repositories to promote the reuse of archived data. In the growing field of omics science, however, the increasing number of submissions of high-throughput sequencing (HTSeq) data to public repositories prevents users from choosing a suitable data set from among the large number of search results. Repository users need to be able to set a threshold to reduce the number of results to obtain a suitable subset of high-quality data for reanalysis. We calculated the quality of sequencing data archived in a public data repository, the Sequence Read Archive (SRA), by using the quality control software FastQC. We obtained quality values for 1 171 313 experiments, which can be used to evaluate the suitability of data for reuse. We also visualized the data distribution in SRA by integrating the quality information and metadata of experiments and samples. We provide quality information for all of the archived sequencing data, which enables users to obtain sequencing data of sufficient quality for reanalysis. The calculated quality data are available to the public in various formats. Our data also provide an example of enhancing the reuse of public data by adding metadata to published research data by a third party. © The Authors 2017. Published by Oxford University Press.
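
    The threshold-based selection described above can be pictured with a short sketch: given a table of per-experiment quality summaries of the kind FastQC produces, keep only accessions whose mean Phred quality clears a user-chosen cutoff. The tab-separated layout and the column names "accession" and "mean_quality" are hypothetical, not the authors' published format.

```python
# Hypothetical sketch: filter SRA experiments by a mean-quality threshold.
# The TSV layout (accession, mean Phred quality) is an assumption, not the
# published data format of the cited study.
import csv

def filter_experiments(tsv_path: str, min_mean_quality: float = 30.0) -> list:
    """Return accessions whose mean Phred quality meets the threshold."""
    kept = []
    with open(tsv_path, newline="") as handle:
        for row in csv.DictReader(handle, delimiter="\t"):
            if float(row["mean_quality"]) >= min_mean_quality:
                kept.append(row["accession"])
    return kept

if __name__ == "__main__":
    for accession in filter_experiments("sra_quality.tsv", min_mean_quality=30.0):
        print(accession)
```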

  4. Utilizing the Antarctic Master Directory to find orphan datasets

    NASA Astrophysics Data System (ADS)

    Bonczkowski, J.; Carbotte, S. M.; Arko, R. A.; Grebas, S. K.

    2011-12-01

    While most Antarctic data are housed at an established disciplinary-specific data repository, there are data types for which no suitable repository exists. In some cases, these "orphan" data, without an appropriate national archive, are served from local servers by the principal investigators who produced the data. There are many pitfalls with data served privately, including the frequent lack of adequate documentation to ensure the data can be understood by others for re-use and the impermanence of personal web sites. For example, if an investigator leaves an institution and the data moves, the published link is no longer accessible. To ensure continued availability of data, submission to long-term national data repositories is needed. As stated in the National Science Foundation Office of Polar Programs (NSF/OPP) Guidelines and Award Conditions for Scientific Data, investigators are obligated to submit their data for curation and long-term preservation; this includes the registration of a dataset description into the Antarctic Master Directory (AMD), http://gcmd.nasa.gov/Data/portals/amd/. The AMD is a Web-based, searchable directory of thousands of dataset descriptions, known as DIF records, submitted by scientists from over 20 countries. It serves as a node of the International Directory Network/Global Change Master Directory (IDN/GCMD). The US Antarctic Program Data Coordination Center (USAP-DCC), http://www.usap-data.org/, funded through NSF/OPP, was established in 2007 to help streamline the process of data submission and DIF record creation. When data does not quite fit within any existing disciplinary repository, it can be registered within the USAP-DCC as the fallback data repository. Within the scope of the USAP-DCC we undertook the challenge of discovering and "rescuing" orphan datasets currently registered within the AMD. In order to find which DIF records led to data served privately, all records relating to US data within the AMD were parsed. After identifying the records containing a URL leading to a national data center or other disciplinary data repository, the remaining records were individually inspected for data type, format, and quality of metadata and then assessed to determine how best to preserve them. Of the records reviewed, those for which appropriate repositories could be identified were submitted. An additional 35 were deemed acceptable in quality of metadata to register in the USAP-DCC. The content of these datasets varied in nature, ranging from penguin counts to paleo-geologic maps to results of meteorological models, all of which are discoverable through our search interface, http://www.usap-data.org/search.php. The remaining 40 records either linked to no data or had inadequate documentation for preservation, highlighting the danger of serving datasets on local servers where minimal metadata standards cannot be enforced and long-term access cannot be ensured.
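
    The record triage described above, separating records whose data URL points at a recognized repository from those served privately, can be sketched as below. The domain list and the record dictionaries are assumptions for illustration, not the actual parsing workflow used by the USAP-DCC.

```python
# Illustrative sketch: partition dataset records by where their data URL points.
# The repository domain list and sample records are assumptions for illustration.
from urllib.parse import urlparse

KNOWN_REPOSITORY_DOMAINS = {
    "usap-data.org",     # USAP-DCC
    "ncei.noaa.gov",     # example national data center (assumed entry)
}

def is_repository_url(url: str) -> bool:
    """True if the URL's host matches a known national/disciplinary repository."""
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in KNOWN_REPOSITORY_DOMAINS)

def partition_records(records):
    """Split records into (archived, orphan_candidates) by their data URL."""
    archived, orphans = [], []
    for rec in records:
        (archived if is_repository_url(rec.get("url", "")) else orphans).append(rec)
    return archived, orphans

if __name__ == "__main__":
    sample = [
        {"id": "DIF-001", "url": "http://www.usap-data.org/search.php"},
        {"id": "DIF-002", "url": "http://my-lab-server.example.edu/data.zip"},
    ]
    kept, review = partition_records(sample)
    print(len(kept), "archived;", len(review), "need review")
```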

  5. Suitability of Palestine salt dome, Anderson Co. , Texas for disposal of high-level radioactive waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patchick, P.F.

    1980-01-01

    The suitability of Palestine salt dome, in Anderson County, Texas, is in serious doubt for a repository to isolate high-level nuclear waste because of abandoned salt brining operations. The random geographic and spatial occurrence of 15 collapse sinks over the dome may prevent safe construction of the necessary surface installations for a repository. The dissolution of salt between the caprock and dome, from at least 15 brine wells up to 500 feet deep, may permit increased rates of salt dissolution long into future geologic time. The subsurface dissolution is occurring at a rate difficult, if not impossible, to assess or to calculate. It cannot be shown that this dissolution rate is insignificant to the integrity of a future repository or to ancillary features. The most recent significant collapse was 36 feet in diameter and took place in 1972. The other collapses ranged from 27 to 105 feet in diameter and from 1.5 to more than 15 feet in depth. ONWI recommends that this dome be removed from consideration as a candidate site.

  6. BioAcoustica: a free and open repository and analysis platform for bioacoustics

    PubMed Central

    Baker, Edward; Price, Ben W.; Rycroft, S. D.; Smith, Vincent S.

    2015-01-01

    We describe an online open repository and analysis platform, BioAcoustica (http://bio.acousti.ca), for recordings of wildlife sounds. Recordings can be annotated using a crowdsourced approach, allowing voice introductions and sections with extraneous noise to be removed from analyses. This system is based on the Scratchpads virtual research environment, the BioVeL portal and the Taverna workflow management tool, which allows for analysis of recordings using a grid computing service. At present the analyses include spectrograms, oscillograms and dominant frequency analysis. Further analyses can be integrated to meet the needs of specific researchers or projects. Researchers can upload and annotate their recordings to supplement traditional publication. Database URL: http://bio.acousti.ca PMID:26055102
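
    The dominant frequency analysis mentioned above can be illustrated with a brief sketch: window a mono signal, take its magnitude spectrum, and report the strongest bin. This is a generic NumPy example under assumed inputs, not the BioAcoustica/Taverna workflow itself.

```python
# Minimal sketch of dominant-frequency analysis (not BioAcoustica's implementation):
# locate the frequency bin with the largest magnitude in a mono signal.
import numpy as np

def dominant_frequency(signal: np.ndarray, sample_rate: float) -> float:
    """Return the frequency (Hz) of the strongest spectral component."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])

if __name__ == "__main__":
    sr = 44100.0
    t = np.arange(0, 1.0, 1.0 / sr)
    test_tone = np.sin(2 * np.pi * 4000.0 * t)   # a 4 kHz cricket-like test tone
    print(f"dominant frequency: {dominant_frequency(test_tone, sr):.1f} Hz")
```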

  7. MODPATH-LGR; documentation of a computer program for particle tracking in shared-node locally refined grids by using MODFLOW-LGR

    USGS Publications Warehouse

    Dickinson, Jesse; Hanson, R.T.; Mehl, Steffen W.; Hill, Mary C.

    2011-01-01

    The computer program described in this report, MODPATH-LGR, is designed to allow simulation of particle tracking in locally refined grids. The locally refined grids are simulated by using MODFLOW-LGR, which is based on MODFLOW-2005, the three-dimensional groundwater-flow model published by the U.S. Geological Survey. The documentation includes brief descriptions of the methods used and detailed descriptions of the required input files and how the output files are typically used. The code for this model is available for downloading from the World Wide Web from a U.S. Geological Survey software repository. The repository is accessible from the U.S. Geological Survey Water Resources Information Web page at http://water.usgs.gov/software/ground_water.html. The performance of the MODPATH-LGR program has been tested in a variety of applications. Future applications, however, might reveal errors that were not detected in the test simulations. Users are requested to notify the U.S. Geological Survey of any errors found in this document or the computer program by using the email address available on the Web site. Updates might occasionally be made to this document and to the MODPATH-LGR program, and users should check the Web site periodically.

  8. EHR-based disease registries to support integrated care in a health neighbourhood: an ontology-based methodology.

    PubMed

    Liaw, Siaw-Teng; Taggart, Jane; Yu, Hairong

    2014-01-01

    Disease registries derived from Electronic Health Records (EHRs) are widely used for chronic disease management. We approached registries from the perspective of integrated care in a health neighbourhood, considering data quality issues such as semantic interoperability (consistency), accuracy, completeness and duplication. Our proposition is that a realist ontological approach is required to accurately identify patients in an EHR or data repository, assess data quality and fitness for use by the multidisciplinary integrated care team. We report on this approach with routinely collected data in a practice based research network in Australia.

  9. The MMI Device Ontology: Enabling Sensor Integration

    NASA Astrophysics Data System (ADS)

    Rueda, C.; Galbraith, N.; Morris, R. A.; Bermudez, L. E.; Graybeal, J.; Arko, R. A.; Mmi Device Ontology Working Group

    2010-12-01

    The Marine Metadata Interoperability (MMI) project has developed an ontology for devices to describe sensors and sensor networks. This ontology is implemented in the W3C Web Ontology Language (OWL) and provides an extensible conceptual model and controlled vocabularies for describing heterogeneous instrument types, with different data characteristics, and their attributes. It can help users populate metadata records for sensors; associate devices with their platforms, deployments, measurement capabilities and restrictions; aid in discovery of sensor data, both historic and real-time; and improve the interoperability of observational oceanographic data sets. We developed the MMI Device Ontology following a community-based approach. By building on and integrating other models and ontologies from related disciplines, we sought to facilitate semantic interoperability while avoiding duplication. Key concepts and insights from various communities, including the Open Geospatial Consortium (eg., SensorML and Observations and Measurements specifications), Semantic Web for Earth and Environmental Terminology (SWEET), and W3C Semantic Sensor Network Incubator Group, have significantly enriched the development of the ontology. Individuals ranging from instrument designers, science data producers and consumers to ontology specialists and other technologists contributed to the work. Applications of the MMI Device Ontology are underway for several community use cases. These include vessel-mounted multibeam mapping sonars for the Rolling Deck to Repository (R2R) program and description of diverse instruments on deepwater Ocean Reference Stations for the OceanSITES program. These trials involve creation of records completely describing instruments, either by individual instances or by manufacturer and model. Individual terms in the MMI Device Ontology can be referenced with their corresponding Uniform Resource Identifiers (URIs) in sensor-related metadata specifications (e.g., SensorML, NetCDF). These identifiers can be resolved through a web browser, or other client applications via HTTP against the MMI Ontology Registry and Repository (ORR), where the ontology is maintained. SPARQL-based query capabilities, which are enhanced with reasoning, along with several supported output formats, allow the effective interaction of diverse client applications with the semantic information associated with the device ontology. In this presentation we describe the process for the development of the MMI Device Ontology and illustrate extensions and applications that demonstrate the benefits of adopting this semantic approach, including example queries involving inference. We also highlight the issues encountered and future work.
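
    A minimal sketch of the kind of SPARQL query mentioned above, run with rdflib against a locally saved copy of a device ontology: list every OWL class and its label. The file name and the assumption that the file is RDF/XML are illustrative; the authoritative copy of the ontology is served by the ORR.

```python
# Hypothetical sketch: query a locally downloaded copy of a device ontology
# for class labels with rdflib. The file name and serialization format are
# assumptions; the real ontology is served by the MMI ORR.
from rdflib import Graph

def list_class_labels(ontology_path: str):
    g = Graph()
    g.parse(ontology_path, format="xml")   # assumes an RDF/XML file on disk
    query = """
        SELECT ?cls ?label WHERE {
            ?cls a <http://www.w3.org/2002/07/owl#Class> ;
                 <http://www.w3.org/2000/01/rdf-schema#label> ?label .
        }
        ORDER BY ?label
    """
    return [(str(row.cls), str(row.label)) for row in g.query(query)]

if __name__ == "__main__":
    for uri, label in list_class_labels("mmi_device.owl"):
        print(label, "->", uri)
```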

  10. An Integrated Approach to Risk Assessment for Concurrent Design

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Voss, Luke; Feather, Martin; Cornford, Steve

    2005-01-01

    This paper describes an approach to risk assessment and analysis suited to the early phase, concurrent design of a space mission. The approach integrates an agile, multi-user risk collection tool, a more in-depth risk analysis tool, and repositories of risk information. A JPL developed tool, named RAP, is used for collecting expert opinions about risk from designers involved in the concurrent design of a space mission. Another in-house developed risk assessment tool, named DDP, is used for the analysis.

  11. Object linking in repositories

    NASA Technical Reports Server (NTRS)

    Eichmann, David (Editor); Beck, Jon; Atkins, John; Bailey, Bill

    1992-01-01

    This topic is covered in three sections. The first section explores some of the architectural ramifications of extending the Eichmann/Atkins lattice-based classification scheme to encompass the assets of the full life cycle of software development. A model is considered that provides explicit links between objects in addition to the edges connecting classification vertices in the standard lattice. The second section gives a description of the efforts to implement the repository architecture using a commercially available object-oriented database management system. Some of the features of this implementation are described, and some of the next steps to be taken to produce a working prototype of the repository are pointed out. In the final section, it is argued that design and instantiation of reusable components have competing criteria (design-for-reuse strives for generality, design-with-reuse strives for specificity) and that providing mechanisms for each can be complementary rather than antagonistic. In particular, it is demonstrated how program slicing techniques can be applied to customization of reusable components.

  12. Completeness and overlap in open access systems: Search engines, aggregate institutional repositories and physics-related open sources.

    PubMed

    Tsay, Ming-Yueh; Wu, Tai-Luan; Tseng, Ling-Li

    2017-01-01

    This study examines the completeness and overlap of coverage in physics of six open access scholarly communication systems, including two search engines (Google Scholar and Microsoft Academic), two aggregate institutional repositories (OAIster and OpenDOAR), and two physics-related open sources (arXiv.org and Astrophysics Data System). The 2001-2013 Nobel Laureates in Physics served as the sample. Bibliographic records of their publications were retrieved and downloaded from each system, and a computer program was developed to perform the analytical tasks of sorting, comparison, elimination, aggregation and statistical calculations. Quantitative analyses and cross-referencing were performed to determine the completeness and overlap of the system coverage of the six open access systems. The results may enable scholars to select an appropriate open access system as an efficient scholarly communication channel, and academic institutions may build institutional repositories or independently create citation index systems in the future. Suggestions on indicators and tools for academic assessment are presented based on the comprehensiveness assessment of each system.
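
    The sorting, comparison, and aggregation steps described above can be sketched as a set computation: normalize titles, then report the pairwise overlap between systems. The normalization rule and the toy records below are assumptions; the study's own matching program was more elaborate.

```python
# Illustrative sketch: compute pairwise overlap of publication coverage between
# open access systems. Title normalization is a simplifying assumption.
import re
from itertools import combinations

def normalize(title: str) -> str:
    """Lower-case a title and collapse punctuation so near-duplicates match."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def overlap_report(systems: dict) -> None:
    normalized = {name: {normalize(t) for t in titles} for name, titles in systems.items()}
    for a, b in combinations(normalized, 2):
        shared = normalized[a] & normalized[b]
        union = normalized[a] | normalized[b]
        pct = 100.0 * len(shared) / len(union) if union else 0.0
        print(f"{a} vs {b}: {len(shared)} shared records ({pct:.1f}% of union)")

if __name__ == "__main__":
    overlap_report({
        "arXiv": ["Topological insulators", "Graphene transport"],
        "ADS": ["Graphene Transport", "Exoplanet atmospheres"],
    })
```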

  13. Report to Congress on the potential use of lead in the waste packages for a geologic repository at Yucca Mountain, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1989-12-01

    In the Report of the Senate Committee on Appropriations accompanying the Energy and Water Appropriation Act for 1989, the Committee directed the Department of Energy (DOE) to evaluate the use of lead in the waste packages to be used in geologic repositories for spent nuclear fuel and high-level waste. The evaluation that was performed in response to this directive is presented in this report. This evaluation was based largely on a review of the technical literature on the behavior of lead, reports of work conducted in other countries, and work performed for the waste-management program being conducted by the DOE. The initial evaluation was limited to the potential use of lead in the packages to be used in the repository. Also, the focus of this report is on postclosure performance rather than on retrievability and handling aspects of the waste package. 100 refs., 8 figs., 15 tabs.

  14. The United States Polar Rock Repository: A geological resource for the Earth science community

    USGS Publications Warehouse

    Grunow, Annie M.; Elliot, David H.; Codispoti, Julie E.

    2007-01-01

    The United States Polar Rock Repository (USPRR) is a U. S. national facility designed for the permanent curatorial preservation of rock samples, along with associated materials such as field notes, annotated air photos and maps, raw analytic data, paleomagnetic cores, ground rock and mineral residues, thin sections, and microfossil mounts, microslides and residues from Polar areas. This facility was established by the Office of Polar Programs at the U. S. National Science Foundation (NSF) to minimize redundant sample collecting, and also because the extreme cold and hazardous field conditions make fieldwork costly and difficult. The repository provides, along with an on-line database of sample information, an essential resource for proposal preparation, pilot studies and other sample based research that should make fieldwork more efficient and effective. This latter aspect should reduce the environmental impact of conducting research in sensitive Polar Regions. The USPRR also provides samples for educational outreach. Rock samples may be borrowed for research or educational purposes as well as for museum exhibits.

  15. Perceived risk, stigma, and potential economic impacts of a high-level nuclear waste repository in Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slovic, P.; Layman, M.; Kraus, N.N.

    1989-07-01

    This paper describes a program of research designed to assess the potential impacts of a high-level nuclear waste repository at Yucca Mountain, Nevada, upon tourism, retirement and job-related migration, and business development in Las Vegas and the state. Adverse economic impacts may be expected to result from two related social processes. One has to do with perceptions of risk and socially amplified reactions to "unfortunate events" associated with the repository (major and minor accidents, discoveries of radiation releases, evidence of mismanagement, attempts to sabotage or disrupt the facility, etc.). The second process that may trigger significant adverse impacts is that of stigmatization. The conceptual underpinnings of risk perception, social amplification, and stigmatization are discussed in this paper and empirical data are presented to demonstrate how nuclear images associated with Las Vegas and the State of Nevada might trigger adverse effects on tourism, migration, and business development.

  16. Site characterization plan: Yucca Mountain Site, Nevada Research and Development Area, Nevada: Volume 8, Part B: Chapter 8, Sections 8.3.5 through 8.3.5.20

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1988-12-01

    This site characterization plan (SCP) has been developed for the candidate repository site at Yucca Mountain in the State of Nevada. The SCP includes a description of the Yucca Mountain site (Chapters 1-5), a conceptual design for the repository (Chapter 6), a description of the packaging to be used for the waste to be emplaced in the repository (Chapter 7), and a description of the planned site characterization activities (Chapter 8). The schedules and milestones presented in Sections 8.3 and 8.5 of the SCP were developed to be consistent with the June 1988 draft Amendment to the DOE's Mission Plan for the Civilian Radioactive Waste Management Program. The five month delay in the scheduled start of exploratory shaft construction that was announced recently is not reflected in these schedules. 68 figs., 102 tabs.

  17. Ayurgenomics for stratified medicine: TRISUTRA consortium initiative across ethnically and geographically diverse Indian populations.

    PubMed

    Prasher, Bhavana; Varma, Binuja; Kumar, Arvind; Khuntia, Bharat Krushna; Pandey, Rajesh; Narang, Ankita; Tiwari, Pradeep; Kutum, Rintu; Guin, Debleena; Kukreti, Ritushree; Dash, Debasis; Mukerji, Mitali

    2017-02-02

    Genetic differences in the target proteins, metabolizing enzymes and transporters that contribute to inter-individual differences in drug response are not integrated in contemporary drug development programs. Ayurveda, which has propelled many drug discovery programs, albeit in the search for new chemical entities, incorporates inter-individual variability ("Prakriti") in the development and administration of drugs in an individualized manner. The Prakriti of an individual largely determines responsiveness to the external environment, including drugs, as well as susceptibility to diseases. Prakriti has also been shown to have molecular and genomic correlates. We highlight how integration of Prakriti concepts can augment the efficiency of drug discovery and development programs through a unique initiative of the Ayurgenomics TRISUTRA consortium. Five aspects that have been carried out are (1) analysis of variability in FDA approved pharmacogenomics genes/SNPs in exomes of 72 healthy individuals, including predominant Prakriti types and matched controls from a North Indian Indo-European cohort; (2) establishment of a consortium network and development of five genetically homogeneous cohorts from diverse ethnic and geo-climatic backgrounds; (3) identification of parameters and development of uniform standard protocols for objective assessment of Prakriti types; (4) development of protocols for Prakriti evaluation and its application in more than 7500 individuals in the five cohorts; (5) development of a data and sample repository and integrative omics pipelines for identification of genomic correlates. Highlights of the study are (1) Exome sequencing revealed significant differences between Prakriti types in 28 SNPs of 11 FDA approved genes of pharmacogenomics relevance viz. CYP2C19, CYP2B6, ESR1, F2, PGR, HLA-B, HLA-DQA1, HLA-DRB1, LDLR, CFTR, CPS1. These variations are polymorphic in diverse Indian and world populations included in the 1000 Genomes Project. (2) Based on the phenotypic attributes of Prakriti, we identified anthropometry for anatomical features, biophysical parameters for skin types, HRV for autonomic function tests, spirometry for vital capacity and gustometry for taste thresholds as objective parameters. (3) Comparison of Prakriti phenotypes across different ethnic, age and gender groups led to identification of invariant features as well as some that require weighted considerations across the cohorts. Considering the molecular and genomic differences underlying Prakriti and their relevance in disease pharmacogenomics studies, this novel integrative platform would help in the identification of differently susceptible and drug-responsive populations. Additionally, integrated analysis of phenomic and genomic variations would not only allow identification of clinical and genomic markers of Prakriti for application in personalized medicine but also its integration in drug discovery and development programs. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Mineralogic Model (MM3.0) Analysis Model Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C. Lum

    2002-02-12

    The purpose of this report is to document the Mineralogic Model (MM), Version 3.0 (MM3.0) with regard to data input, modeling methods, assumptions, uncertainties, limitations and validation of the model results, qualification status of the model, and the differences between Version 3.0 and previous versions. A three-dimensional (3-D) Mineralogic Model was developed for Yucca Mountain to support the analyses of hydrologic properties, radionuclide transport, mineral health hazards, repository performance, and repository design. Version 3.0 of the MM was developed from mineralogic data obtained from borehole samples. It consists of matrix mineral abundances as a function of x (easting), y (northing), and z (elevation), referenced to the stratigraphic framework defined in Version 3.1 of the Geologic Framework Model (GFM). The MM was developed specifically for incorporation into the 3-D Integrated Site Model (ISM). The MM enables project personnel to obtain calculated mineral abundances at any position, within any region, or within any stratigraphic unit in the model area. The significance of the MM for key aspects of site characterization and performance assessment is explained in the following subsections. This work was conducted in accordance with the Development Plan for the MM (CRWMS M&O 2000). The planning document for this Rev. 00, ICN 02 of this AMR is Technical Work Plan, TWP-NBS-GS-000003, Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01 (CRWMS M&O 2000). The purpose of this ICN is to record changes in the classification of input status by the resolution of the use of TBV software and data in this report. Constraints and limitations of the MM are discussed in the appropriate sections that follow. The MM is one component of the ISM, which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). The ISM merges the detailed stratigraphy and structural features of the site into a 3-D model that will be useful in primary downstream models and repository design. These downstream models include the hydrologic flow models and the radionuclide transport models. All the models and the repository design, in turn, will be incorporated into the Total System Performance Assessment (TSPA) of the potential nuclear waste repository block and vicinity to determine the suitability of Yucca Mountain as a host for a repository. The interrelationship of the three components of the ISM and their interface with downstream uses are illustrated in Figure 1. The lateral boundaries of the ISM and its three component models are shown in Figure 2.
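
    A toy sketch of the kind of query the MM supports, obtaining a mineral abundance at an (easting, northing, elevation) position, is shown below using a nearest-node lookup over a small invented grid. The node coordinates and abundance values are assumptions, not data from MM3.0.

```python
# Illustrative sketch: nearest-node lookup of a mineral abundance at an
# (easting, northing, elevation) point. The node table and values are invented
# for demonstration; the actual MM3.0 grids differ.
import math

# (x, y, z) -> fraction of a mineral of interest; all values assumed
NODES = {
    (170000.0, 230000.0, 1000.0): 0.12,
    (170500.0, 230000.0, 1000.0): 0.15,
    (170000.0, 230500.0,  950.0): 0.08,
}

def abundance_at(x: float, y: float, z: float) -> float:
    """Return the abundance stored at the model node nearest the query point."""
    nearest = min(NODES, key=lambda node: math.dist(node, (x, y, z)))
    return NODES[nearest]

if __name__ == "__main__":
    print(abundance_at(170200.0, 230100.0, 990.0))
```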

  19. Applying the institutional review board data repository approach to manage ethical considerations in evaluating and studying medical education

    PubMed Central

    Thayer, Erin K.; Rathkey, Daniel; Miller, Marissa Fuqua; Palmer, Ryan; Mejicano, George C.; Pusic, Martin; Kalet, Adina; Gillespie, Colleen; Carney, Patricia A.

    2016-01-01

    Issue: Medical educators and educational researchers continue to improve their processes for managing medical student and program evaluation data using sound ethical principles. This is becoming even more important as curricular innovations are occurring across undergraduate and graduate medical education. Dissemination of findings from this work is critical, and peer-reviewed journals often require an institutional review board (IRB) determination. Approach: IRB data repositories, originally designed for the longitudinal study of biological specimens, can be applied to medical education research. The benefits of such an approach include obtaining expedited review for multiple related studies within a single IRB application and allowing for more flexibility when conducting complex longitudinal studies involving large datasets from multiple data sources and/or institutions. In this paper, we inform educators and educational researchers on our analysis of the use of the IRB data repository approach to manage ethical considerations as part of best practices for amassing, pooling, and sharing data for educational research, evaluation, and improvement purposes. Implications: Fostering multi-institutional studies while following sound ethical principles in the study of medical education is needed, and the IRB data repository approach has many benefits, especially for longitudinal assessment of complex multi-site data. PMID:27443407

  20. Integration of HTML documents into an XML-based knowledge repository.

    PubMed

    Roemer, Lorrie K; Rocha, Roberto A; Del Fiol, Guilherme

    2005-01-01

    The Emergency Patient Instruction Generator (EPIG) is an electronic content compiler / viewer / editor developed by Intermountain Health Care. The content is vendor-licensed HTML patient discharge instructions. This work describes the process by which discharge instructions were converted from ASCII-encoded HTML to XML, then loaded into a database for use by EPIG.
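
    A rough sketch of such a conversion step is shown below: extract a title and body text from an HTML instruction and emit a small XML document. The output element names are assumptions for illustration; the actual EPIG/Intermountain schema is not reproduced here.

```python
# Hypothetical sketch of an HTML-to-XML conversion step. The output elements
# (<instruction>, <title>, <body>) are assumed, not the EPIG schema.
from html.parser import HTMLParser
import xml.etree.ElementTree as ET

class TextExtractor(HTMLParser):
    """Collect heading text and remaining body text from an HTML page."""
    def __init__(self):
        super().__init__()
        self.title_parts, self.body_parts = [], []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "title"):
            self._in_title = True

    def handle_endtag(self, tag):
        if tag in ("h1", "title"):
            self._in_title = False

    def handle_data(self, data):
        (self.title_parts if self._in_title else self.body_parts).append(data.strip())

def html_to_xml(html_text: str) -> str:
    parser = TextExtractor()
    parser.feed(html_text)
    root = ET.Element("instruction")
    ET.SubElement(root, "title").text = " ".join(p for p in parser.title_parts if p)
    ET.SubElement(root, "body").text = " ".join(p for p in parser.body_parts if p)
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    print(html_to_xml("<html><h1>Sprained Ankle</h1><p>Rest and apply ice.</p></html>"))
```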

  1. Software Hardware Asset Reuse Enterprise (SHARE) Repository Framework Final Report: Component Specification and Ontology

    DTIC Science & Technology

    2008-09-30

    Integrated Surface Ship ASW Combat System (AN/SQQ-89); SSDS Ship Self Defense System; TSTS Total Ship Training System; UDDI Universal Description... "ContractorOrganization" type="ContractorOrganizationType"> <xs:annotation> <xs:documentation>Identifies a contractor organization responsible for the

  2. A Framework to Integrate Public, Dynamic Metrics into an OER Platform

    ERIC Educational Resources Information Center

    Cohen, Jaclyn Zetta; Omollo, Kathleen Ludewig; Malicke, Dave

    2014-01-01

    The usage metrics for open educational resources (OER) are often either hidden behind an authentication system or shared intermittently in static, aggregated format at the repository level. This paper discusses the first year of University of Michigan's project to share its OER usage data dynamically, publicly, to synthesize it across different…

  3. Content Integration: Creating a Scalable Common Platform for Information Resources

    ERIC Educational Resources Information Center

    Berenstein, Max; Katz, Demian

    2012-01-01

    Academic, government, and corporate librarians organize and leverage internal resources and content through institutional repositories and library catalogs. Getting more value and usage from the content they license is a key goal. However, the ever-growing amount of content and shifting user demands for new materials or features has made the…

  4. Electronic Scientific Data & Literature Aggregation: A Review for Librarians

    ERIC Educational Resources Information Center

    Losoff, Barbara

    2009-01-01

    The advent of large-scale digital repositories, along with the need for sharing useful data world-wide, demands change to the current information structure. The merging of digital scientific data with scholarly literature has the potential to fulfill the Semantic Web design principles. This paper will identify factors leading to integration of…

  5. 78 FR 28111 - Making Open and Machine Readable the New Default for Government Information

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-14

    ... warning systems, location-based applications, precision farming tools, and much more, improving Americans... repository of tools and best practices to assist agencies in integrating the Open Data Policy into their... needed to ensure it remains a resource to facilitate the adoption of open data practices. (b) Within 90...

  6. Multimedia Open Educational Resources in Mathematics for High School Students with Learning Disabilities

    ERIC Educational Resources Information Center

    Park, Sanghoon; McLeod, Kenneth

    2018-01-01

    Open Educational Resources (OER) can offer educators the necessary flexibility for tailoring educational resources to better fit their educational goals. Although the number of OER repositories is growing fast, few studies have been conducted to empirically test the effectiveness of OER integration in the classroom. Furthermore, very little is…

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, Haeryong; Lee, Eunyong; Jeong, YiYeong

    Korea Radioactive-waste Management Corporation (KRMC), established in 2009, has started a new project to collect information on long-term stability of deep geological environments on the Korean Peninsula. The information has been built up in the integrated natural barrier database system available on the web (www.deepgeodisposal.kr). The database system also includes socially and economically important information, such as land use, mining area, natural conservation area, population density, and industrial complex, because some of this information is used as exclusionary criteria during the site selection process for a deep geological repository for safe and secure containment and isolation of spent nuclear fuel and other long-lived radioactive waste in Korea. Although the official site selection process has not yet been started in Korea, it is believed that the current integrated natural barrier database system and socio-economic database will be effectively utilized to narrow down the number of sites where future investigation is most promising in the site selection process for a deep geological repository and to enhance public acceptance by providing readily-available relevant scientific information on deep geological environments in Korea. (authors)

  8. Waste Isolation Safety Assessment Program. Task 4. Third Contractor Information Meeting. [Adsorption-desorption on geological media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1980-06-01

    The study subject of this meeting was the adsorption and desorption of radionuclides on geologic media under repository conditions. This volume contains eight papers. Separate abstracts were prepared for all eight papers. (DLC)

  9. NCTN/NCORP Data Archive: Expanding Access to Clinical Trial Data

    Cancer.gov

    NCI is launching the NCTN/NCORP Data Archive, a centralized repository of patient-level data from phase III clinical trials conducted by NCI’s NCTN and NCORP trials programs and the National Cancer Institute of Canada-Clinical Trials Group.

  10. Dynameomics: design of a computational lab workflow and scientific data repository for protein simulations.

    PubMed

    Simms, Andrew M; Toofanny, Rudesh D; Kehl, Catherine; Benson, Noah C; Daggett, Valerie

    2008-06-01

    Dynameomics is a project to investigate and catalog the native-state dynamics and thermal unfolding pathways of representatives of all protein folds using solvated molecular dynamics simulations, as described in the preceding paper. Here we introduce the design of the molecular dynamics data warehouse, a scalable, reliable repository that houses simulation data that vastly simplifies management and access. In the succeeding paper, we describe the development of a complementary multidimensional database. A single protein unfolding or native-state simulation can take weeks to months to complete, and produces gigabytes of coordinate and analysis data. Mining information from over 3000 completed simulations is complicated and time-consuming. Even the simplest queries involve writing intricate programs that must be built from low-level file system access primitives and include significant logic to correctly locate and parse data of interest. As a result, programs to answer questions that require data from hundreds of simulations are very difficult to write. Thus, organization and access to simulation data have been major obstacles to the discovery of new knowledge in the Dynameomics project. This repository is used internally and is the foundation of the Dynameomics portal site http://www.dynameomics.org. By organizing simulation data into a scalable, manageable and accessible form, we can begin to address substantial questions that move us closer to solving biomedical and bioengineering problems.
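
    To illustrate the kind of cross-simulation question a relational warehouse makes trivial, the sketch below builds a toy SQLite schema and asks for the mean RMSD per simulation. The table and column names are assumptions, not the actual Dynameomics schema.

```python
# Illustrative sketch: a toy SQLite schema standing in for a simulation data
# warehouse, and one query spanning several simulations. Table and column
# names are assumptions, not the Dynameomics schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE simulation (sim_id INTEGER PRIMARY KEY, fold TEXT, temp_k REAL);
    CREATE TABLE frame_analysis (sim_id INTEGER, time_ns REAL, rmsd_a REAL);
    INSERT INTO simulation VALUES (1, 'all-alpha', 298), (2, 'all-alpha', 498);
    INSERT INTO frame_analysis VALUES (1, 0.1, 1.2), (1, 0.2, 1.4), (2, 0.1, 2.8), (2, 0.2, 3.5);
""")

# Mean RMSD per simulation -- the sort of question that would otherwise require
# parsing many per-simulation coordinate and analysis files.
for fold, temp_k, mean_rmsd in conn.execute("""
        SELECT s.fold, s.temp_k, AVG(f.rmsd_a)
        FROM simulation s JOIN frame_analysis f USING (sim_id)
        GROUP BY s.sim_id
        ORDER BY s.temp_k"""):
    print(f"{fold} @ {temp_k:.0f} K: mean RMSD {mean_rmsd:.2f} A")
```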

  11. Libraries program

    USGS Publications Warehouse

    2011-01-01

    The U.S. Congress authorized a library for the U.S. Geological Survey (USGS) in 1879. The library was formally established in 1882 with the naming of the first librarian and began with a staff of three and a collection of 1,400 books. Today, the USGS Libraries Program is one of the world's largest Earth and natural science repositories and a resource of national significance used by researchers and the public worldwide.

  12. Telecommunications Network Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1989-05-01

    The Office of Civilian Radioactive Waste Management (OCRWM) must, among other things, be equipped to readily produce, file, store, access, retrieve, and transfer a wide variety of technical and institutional data and information. The data and information regularly produced by members of the OCRWM Program supports, and will continue to support, a wide range of program activities. Some of the more important of these information communication-related activities include: supporting the preparation, submittal, and review of a license application to the Nuclear Regulatory Commission (NRC) to authorize the construction of a geologic repository; responding to requests for information from parties affected by and/or interested in the program; and providing evidence of compliance with all relevant Federal, State, local, and Indian Tribe regulations, statutes, and/or treaties. The OCRWM Telecommunications Network Plan (TNP) is intended to identify, as well as to present the current strategy for satisfying, the telecommunications requirements of the civilian radioactive waste management program. The TNP will set forth the plan for integrating OCRWM's information resources among major program sites. Specifically, this plan will introduce a telecommunications network designed to establish communication linkages across the program's Washington, DC; Chicago, Illinois; and Las Vegas, Nevada, sites. The linkages across these and associated sites will comprise Phase I of the proposed OCRWM telecommunications network. The second phase will focus on the modification and expansion of the Phase I network to fully accommodate access to the OCRWM Licensing Support System (LSS). The primary components of the proposed OCRWM telecommunications network include local area networks; extended local area networks; and remote extended (wide) area networks. 10 refs., 6 figs.

  13. Database Resources of the BIG Data Center in 2018.

    PubMed

    2018-01-04

    The BIG Data Center at Beijing Institute of Genomics (BIG) of the Chinese Academy of Sciences provides freely open access to a suite of database resources in support of worldwide research activities in both academia and industry. With the vast amounts of omics data generated at ever-greater scales and rates, the BIG Data Center is continually expanding, updating and enriching its core database resources through big-data integration and value-added curation, including BioCode (a repository archiving bioinformatics tool codes), BioProject (a biological project library), BioSample (a biological sample library), Genome Sequence Archive (GSA, a data repository for archiving raw sequence reads), Genome Warehouse (GWH, a centralized resource housing genome-scale data), Genome Variation Map (GVM, a public repository of genome variations), Gene Expression Nebulas (GEN, a database of gene expression profiles based on RNA-Seq data), Methylation Bank (MethBank, an integrated databank of DNA methylomes), and Science Wikis (a series of biological knowledge wikis for community annotations). In addition, three featured web services are provided, viz., BIG Search (search as a service; a scalable inter-domain text search engine), BIG SSO (single sign-on as a service; a user access control system to gain access to multiple independent systems with a single ID and password) and Gsub (submission as a service; a unified submission service for all relevant resources). All of these resources are publicly accessible through the home page of the BIG Data Center at http://bigd.big.ac.cn. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  14. Generation of open biomedical datasets through ontology-driven transformation and integration processes.

    PubMed

    Carmen Legaz-García, María Del; Miñarro-Giménez, José Antonio; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2016-06-03

    Biomedical research usually requires combining large volumes of data from multiple heterogeneous sources, which makes the integrated exploitation of such data difficult. The Semantic Web paradigm offers a natural technological space for data integration and exploitation by generating content readable by machines. Linked Open Data is a Semantic Web initiative that promotes the publication and sharing of data in machine-readable semantic formats. We present an approach for the transformation and integration of heterogeneous biomedical data with the objective of generating open biomedical datasets in Semantic Web formats. The transformation of the data is based on the mappings between the entities of the data schema and the ontological infrastructure that provides meaning to the content. Our approach permits different types of mappings and includes the possibility of defining complex transformation patterns. Once the mappings are defined, they can be automatically applied to datasets to generate logically consistent content and the mappings can be reused in further transformation processes. The results of our research are (1) a common transformation and integration process for heterogeneous biomedical data; (2) the application of Linked Open Data principles to generate interoperable, open, biomedical datasets; (3) a software tool, called SWIT, that implements the approach. In this paper, we also describe how we have applied SWIT in different biomedical scenarios and present some lessons learned. We have presented an approach that is able to generate open biomedical repositories in Semantic Web formats. SWIT is able to apply the Linked Open Data principles in the generation of the datasets, thus allowing their content to be linked to external repositories and creating linked open datasets. SWIT datasets may contain data from multiple sources and schemas, thus becoming integrated datasets.
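
    The mapping-driven transformation can be pictured with a brief sketch: a column-to-property mapping is applied to tabular records to emit RDF triples with rdflib. The namespace, class, and property URIs are invented for illustration and are not SWIT's mapping language, which supports much richer transformation patterns.

```python
# Illustrative sketch: apply a simple column-to-property mapping to tabular
# records and emit RDF. All URIs are assumptions, not SWIT's actual mappings.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/biomed/")   # assumed namespace

MAPPING = {                      # column name -> ontology property (assumed)
    "gene_symbol": EX.geneSymbol,
    "expression": EX.expressionLevel,
}

def records_to_rdf(records) -> Graph:
    """Turn a list of flat dictionaries into an RDF graph via the mapping."""
    g = Graph()
    for i, rec in enumerate(records):
        subject = URIRef(f"http://example.org/biomed/sample/{i}")
        g.add((subject, RDF.type, EX.Sample))
        for column, prop in MAPPING.items():
            if column in rec:
                g.add((subject, prop, Literal(rec[column])))
    return g

if __name__ == "__main__":
    graph = records_to_rdf([{"gene_symbol": "CFTR", "expression": 7.3}])
    print(graph.serialize(format="turtle"))
```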

  15. Principles for Integrating Mars Analog Science, Operations, and Technology Research

    NASA Technical Reports Server (NTRS)

    Clancey, William J.

    2003-01-01

    During the Apollo program, the scientific community and NASA used terrestrial analog sites for understanding planetary features and for training astronauts to be scientists. Human factors studies (Harrison, Clearwater, & McKay 1991; Stuster 1996) have focused on the effects of isolation in extreme environments. More recently, with the advent of wireless computing, we have prototyped advanced EVA technologies for navigation, scheduling, and science data logging (Clancey 2002b; Clancey et al., in press). Combining these interests in a single expedition enables tremendous synergy and authenticity, as pioneered by Pascal Lee's Haughton-Mars Project (Lee 2001; Clancey 2000a) and the Mars Society's research stations on a crater rim on Devon Island in the High Canadian Arctic (Clancey 2000b; 2001b) and the Morrison Formation of southeast Utah (Clancey 2002a). Based on this experience, the following principles are proposed for conducting an integrated science, operations, and technology research program at analog sites: 1) Authentic work; 2) PI-based projects; 3) Unencumbered baseline studies; 4) Closed simulations; and 5) Observation and documentation. Following these principles, we have been integrating field science, operations research, and technology development at analog sites on Devon Island and in Utah over the past five years. Analytic methods include work practice simulation (Clancey 2002c; Sierhuis et al., 2000a;b), by which the interaction of human behavior, facilities, geography, tools, and procedures is formalized in computer models. These models are then converted into the runtime EVA system we call mobile agents (Clancey 2002b; Clancey et al., in press). Furthermore, we have found that the Apollo Lunar Surface Journal (Jones, 1999) provides a vast repository for understanding astronaut and CapCom interactions, serving as a baseline for Mars operations and quickly highlighting opportunities for computer automation (Clancey, in press).

  16. The OceanLink Project

    NASA Astrophysics Data System (ADS)

    Narock, T.; Arko, R. A.; Carbotte, S. M.; Chandler, C. L.; Cheatham, M.; Finin, T.; Hitzler, P.; Krisnadhi, A.; Raymond, L. M.; Shepherd, A.; Wiebe, P. H.

    2014-12-01

    A wide spectrum of maturing methods and tools, collectively characterized as the Semantic Web, is helping to vastly improve the dissemination of scientific research. Creating semantic integration requires input from both domain and cyberinfrastructure scientists. OceanLink, an NSF EarthCube Building Block, is demonstrating semantic technologies through the integration of geoscience data repositories, library holdings, conference abstracts, and funded research awards. Meeting project objectives involves applying semantic technologies to support data representation, discovery, sharing and integration. Our semantic cyberinfrastructure components include ontology design patterns, Linked Data collections, semantic provenance, and associated services to enhance data and knowledge discovery, interoperation, and integration. We discuss how these components are integrated, the continued automated and semi-automated creation of semantic metadata, and techniques we have developed to integrate ontologies, link resources, and preserve provenance and attribution.

  17. So You Want to Be Trustworthy: A Repository's Guide to Taking Reasonable Steps Towards Achieving ISO 16363

    NASA Astrophysics Data System (ADS)

    Stall, S.

    2016-12-01

    To be trustworthy is to be reliable, dependable, honest, principled, ethical, incorruptible, and more. A trustworthy person demonstrates these qualities over time and under all circumstances. A trustworthy repository demonstrates these qualities through the team that manages the repository and its responsible organization. The requirements of a Trusted Digital Repository (TDR) in ISO 16363 can be tough to reach and tough to maintain. Challenges include: limited funds, limited resources and/or skills, and an unclear path to successfully achieve the requirements. The ISO standard defines each requirement separately, but a successful certification recognizes that there are many cross-dependencies among the requirements. Understanding these dependencies leads to a more efficient path towards success. At AGU we recognize that reaching the goal of the TDR ISO standard, or any set of data management objectives defined by an organization, has a better chance at success if the organization clearly knows their current capability, the improvements that are needed, and the best way to make (and maintain) those changes. AGU has partnered with the CMMI® Institute to adapt their Data Management Maturity (DMM)SM model within the Earth and space sciences. Using the DMM, AGU developed a new Data Management Assessment Program aimed at helping data repositories, large and small, domain-specific to general, assess and improve data management practices to meet their goals - including becoming a Trustworthy Digital Repository. The requirements to achieve the TDR ISO standard are aligned to the data management best practices defined in the Data Management Maturity (DMM)SM model. Using the DMM as a process improvement tool in conjunction with the Data Management Assessment method, a team seeking the objective of the TDR ISO standard receives a clear road map to achieving their goal as an outcome of the assessment. Publishers and agencies are beginning to recommend or even require that repositories demonstrate that they are practicing best practices or meeting certain standards. Data preserved in a data facility that is working on achieving a TDR standard will have the level of care desired by the publishing community as well as the science community. Better Data Management results in Better Science.

  18. International Collaboration in Data Management for Scientific Ocean Drilling: Preserving Legacy Data While Implementing New Requirements.

    NASA Astrophysics Data System (ADS)

    Rack, F. R.

    2005-12-01

    The Integrated Ocean Drilling Program (IODP: 2003-2013 initial phase) is the successor to the Deep Sea Drilling Project (DSDP: 1968-1983) and the Ocean Drilling Program (ODP: 1985-2003). These earlier scientific drilling programs amassed collections of sediment and rock cores (over 300 kilometers stored in four repositories) and data organized in distributed databases and in print or electronic publications. International members of the IODP have established, through memoranda, the right to have access to: (1) all data, samples, scientific and technical results, all engineering plans, data or other information produced under contract to the program; and, (2) all data from geophysical and other site surveys performed in support of the program which are used for drilling planning. The challenge that faces the individual platform operators and management of IODP is to find the right balance and appropriate synergies among the needs, expectations and requirements of stakeholders. The evolving model for IODP database services consists of the management and integration of data collected onboard the various IODP platforms (including downhole logging and syn-cruise site survey information), legacy data from DSDP and ODP, data derived from post-cruise research and publications, and other IODP-relevant information types, to form a common, program-wide IODP information system (e.g., IODP Portal) which will be accessible to both researchers and the public. The JANUS relational database of ODP was introduced in 1997 and the bulk of ODP shipboard data has been migrated into this system, which is comprised of a relational data model consisting of over 450 tables. The JANUS database includes paleontological, lithostratigraphic, chemical, physical, sedimentological, and geophysical data from a global distribution of sites. For ODP Legs 100 through 210, and including IODP Expeditions 301 through 308, JANUS has been used to store data from 233,835 meters of core recovered, which are comprised of 38,039 cores, with 202,281 core sections stored in repositories, which have resulted in the taking of 2,299,180 samples for scientists and other users (http://iodp.tamu.edu/janusweb/general/dbtable.cgi). JANUS and other IODP databases are viewed as components of an evolving distributed network of databases, supported by metadata catalogs and middleware with XML workflows, that are intended to provide access to DSDP/ODP/IODP cores and sample-based data as well as other distributed geoscience data collections (e.g., CHRONOS, PetDB, SedDB). These data resources can be explored through the use of emerging data visualization environments, such as GeoWall, CoreWall (http://www.evl.uic.edu/cavern/corewall), a multi-screen display for viewing cores and related data, GeoWall-2 and LambdaVision, a very-high resolution, networked environment for data exploration and visualization, and others. The U.S. Implementing Organization (USIO) for the IODP, also known as the JOI Alliance, is a partnership between Joint Oceanographic Institutions (JOI), Texas A&M University, and Lamont-Doherty Earth Observatory of Columbia University. JOI is a consortium of 20 premier oceanographic research institutions that serves the U.S. scientific community by leading large-scale, global research programs in scientific ocean drilling and ocean observing. For more than 25 years, JOI has helped facilitate discovery and advance global understanding of the Earth and its oceans through excellence in program management.

  19. Principles of Product Quality Control of German Radioactive Waste Forms from the Reprocessing of Spent Fuel: Vitrification, Compaction and Numerical Simulation - 12529

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tietze-Jaensch, Holger; Schneider, Stephan; Aksyutina, Yuliya

    2012-07-01

    The German product quality control is responsible, inter alia, for control of two radioactive waste forms of heat-generating waste: a) homogeneous vitrified HLW and b) heterogeneous compacted hulls, end-pieces and technological metallic waste. In either case, significantly different metrology is employed at the site of the conditioning plant for the obligatory nuclide inventory declaration. To facilitate an independent evaluation and checking of the accompanying documentation, numerical simulations are carried out. The physical and chemical properties of radioactive waste residues are used to assess the data consistency and uncertainty margins, as well as to predict the long-term behavior of the radioactive waste. This is relevant for repository acceptance and safety considerations. Our new numerical approach follows a bottom-up simulation starting from the burn-up behavior of the fuel elements in the reactor core. The output of these burn-up calculations is then coupled with a program that simulates the material separation in the subsequent dissolution and extraction processes, normalized to the mass balance. Follow-up simulations of the separated reprocessing lines of a) the vitrification of highly-active liquid and b) the compaction of residual intermediate-active metallic hulls remaining after fuel pellet dissolution, end-pieces and technological waste, allow calculating expectation values for the various repository-relevant properties of either waste stream. The principles of the German product quality control of radioactive waste residues from spent fuel reprocessing have been introduced and explained. Namely, heat-generating homogeneous vitrified HLW and heterogeneous compacted metallic MLW have been discussed. The advantages of a complementary numerical property simulation have been made clear and examples of benefits are presented. We have compiled a new program suite to calculate the physical and radio-chemical properties of common nuclear waste residues. The immediate benefit is the independent assessment of radioactive inventory declarations and much facilitated product quality control of waste residues that need to be returned to Germany and submitted to German HLW-repository requirements. Wherever possible, internationally accepted standard programs are used and embedded. The innovative coupling of burn-up calculations (SCALE) with neutron and gamma transport codes (MCNP-X) allows an application in the world of virtual waste properties. If-then-else scenarios of hypothetical waste material compositions and distributions provide valuable information on long-term nuclide property propagation under repository conditions over a very long time span. Benchmarking the program with real residue data demonstrates the power and remarkable accuracy of this numerical approach, boosting the reliability and confidence of the aforementioned applications, namely the proof tool set for on-the-spot production quality checking, data evaluation and independent verification. Moreover, using the numerical bottom-up approach helps to avoid the accumulation of fake activities that may gradually build up in a repository from the so-called conservative or penalizing nuclide inventory declarations. The radioactive waste properties and the hydrolytic and chemical stability can be predicted. The interaction with invasive chemicals can be assessed and propagation scenarios can be developed from reliable and sound data and HLW properties. Hence, the appropriate design of a future HLW repository can be based upon predictable and quality-assured waste characteristics. (authors)
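
    As a much-simplified illustration of the long-term inventory propagation discussed above, the sketch below decays a declared nuclide inventory over time and estimates the associated decay heat. Half-lives and per-decay energies are rough textbook-level values, and the whole example stands in for, rather than reproduces, the coupled SCALE/MCNP-X workflow described in the paper.

```python
# Minimal sketch of long-term inventory propagation by radioactive decay.
# Half-lives and per-decay energies are approximate textbook values; this is
# not the SCALE/MCNP-X coupled workflow described in the paper.
import math

SECONDS_PER_YEAR = 3.156e7
EV_TO_JOULE = 1.602e-19

# nuclide -> (half-life in years, mean energy deposited per decay in MeV), approximate
NUCLIDES = {
    "Cs-137": (30.1, 0.8),
    "Sr-90":  (28.8, 1.1),
    "Am-241": (432.6, 5.5),
}

def activity_bq(n_atoms: float, half_life_y: float) -> float:
    """Activity A = lambda * N, with lambda = ln(2) / half-life."""
    lam = math.log(2) / (half_life_y * SECONDS_PER_YEAR)
    return lam * n_atoms

def propagate(inventory_atoms: dict, years: float) -> None:
    """Decay each nuclide for `years` and report remaining activity and heat."""
    for nuclide, n0 in inventory_atoms.items():
        half_life, energy_mev = NUCLIDES[nuclide]
        n = n0 * 0.5 ** (years / half_life)
        a = activity_bq(n, half_life)
        heat_w = a * energy_mev * 1e6 * EV_TO_JOULE
        print(f"{nuclide} after {years:.0f} y: {a:.3e} Bq, {heat_w:.3e} W")

if __name__ == "__main__":
    propagate({"Cs-137": 1e24, "Sr-90": 8e23, "Am-241": 5e22}, years=300.0)
```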

  20. 78 FR 63455 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-24

    ..., Building 23, Columbus, OH 43213-1152. Defense Manpower Data Center, 400 Gigling Road, Seaside CA 93955... web-based system providing a repository of military, Government civilian and contractor personnel and..., tracking, reporting, evaluating program effectiveness and conducting research. The Total Operational...

  1. 10 CFR 60.140 - General requirements.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and it will continue until permanent closure. (c) The program shall include in situ monitoring, laboratory and field testing, and in situ experiments, as may be appropriate to accomplish the objective as... REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES...

  2. 10 CFR 60.140 - General requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and it will continue until permanent closure. (c) The program shall include in situ monitoring, laboratory and field testing, and in situ experiments, as may be appropriate to accomplish the objective as... REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES...

  3. 10 CFR 60.140 - General requirements.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... and it will continue until permanent closure. (c) The program shall include in situ monitoring, laboratory and field testing, and in situ experiments, as may be appropriate to accomplish the objective as... REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES...

  4. 10 CFR 60.140 - General requirements.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... and it will continue until permanent closure. (c) The program shall include in situ monitoring, laboratory and field testing, and in situ experiments, as may be appropriate to accomplish the objective as... REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES...

  5. 10 CFR 60.140 - General requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and it will continue until permanent closure. (c) The program shall include in situ monitoring, laboratory and field testing, and in situ experiments, as may be appropriate to accomplish the objective as... REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES...

  6. Storage, transportation and disposal system for used nuclear fuel assemblies

    DOEpatents

    Scaglione, John M.; Wagner, John C.

    2017-01-10

    An integrated storage, transportation and disposal system for used fuel assemblies is provided. The system includes a plurality of sealed canisters and a cask sized to receive the sealed canisters in side by side relationship. The plurality of sealed canisters include an internal basket structure to receive a plurality of used fuel assemblies. The internal basket structure includes a plurality of radiation-absorbing panels and a plurality of hemispherical ribs generally perpendicular to the canister sidewall. The sealed canisters are received within the cask for storage and transportation and are removed from the cask for disposal at a designated repository. The system of the present invention allows the handling of sealed canisters separately or collectively, while allowing storage and transportation of high burnup fuel and damaged fuel to the designated repository.

  7. Death Valley Lower Carbonate Aquifer Monitoring Program Wells Down Gradient of the Proposed Yucca Mountain Nuclear Waste Repository, U. S. Department of Energy Grant DE-RW0000233 2010 Project Report, prepared by The Hydrodynamics Group, LLC for Inyo County Yucca Mountain Repository Assessment Office

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Michael J; Bredehoeft, John D., Dr.

    2010-09-03

    Inyo County completed the first year of the U.S. Department of Energy Grant Agreement No. DE-RW0000233. This report presents the results of research conducted within this Grant agreement in the context of Inyo County's Yucca Mountain oversight program goals and objectives. The Hydrodynamics Group, LLC prepared this report for the Inyo County Yucca Mountain Repository Assessment Office. The overall goal of Inyo County's Yucca Mountain research program is the evaluation of far-field issues related to potential transport, by ground water, of radionuclides into Inyo County, including Death Valley, and the evaluation of a connection between the Lower Carbonate Aquifer (LCA) and the biosphere. Data collected within the Grant are included in interpretive illustrations and discussions of the results of our analysis. The central elements of this Grant program were the drilling of exploratory wells, geophysical surveys, and geological mapping of the Southern Funeral Mountain Range. The culmination of this research was 1) a numerical ground water model of the Southern Funeral Mountain Range demonstrating the potential of a hydraulic connection between the LCA and the major springs in the Furnace Creek area of Death Valley, and 2) a numerical ground water model of the Amargosa Valley to evaluate the potential for radionuclide transport from Yucca Mountain to Inyo County, California. The report provides a description of research and activities performed by The Hydrodynamics Group, LLC on behalf of Inyo County, and copies of key work products in attachments to this report.

  8. International Collaboration Activities on Engineered Barrier Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jove-Colon, Carlos F.

    The Used Fuel Disposition Campaign (UFDC) within the DOE Fuel Cycle Technologies (FCT) program has been engaging in international collaborations between repository R&D programs for high-level waste (HLW) disposal to leverage the gathered knowledge and laboratory/field data on near- and far-field processes from experiments at underground research laboratories (URLs). Heater test experiments at URLs provide a unique opportunity to study, under representative conditions, the thermal effects of heat-generating nuclear waste in subsurface repository environments. Different configurations of these experiments have been carried out at various URLs according to the disposal design concepts of the hosting country's repository program. The FEBEX (Full-scale Engineered Barrier Experiment in Crystalline Host Rock) project is a large-scale heater test experiment originated by the Spanish radioactive waste management agency (Empresa Nacional de Residuos Radiactivos S.A. – ENRESA) at the Grimsel Test Site (GTS) URL in Switzerland. The project was subsequently managed by CIEMAT. FEBEX-DP is a concerted effort of various international partners working on the evaluation of sensor data and the characterization of samples obtained during the course of this field test and its subsequent dismantling. The main purpose of these field-scale experiments is to evaluate the feasibility of creating an engineered barrier system (EBS) in a horizontal configuration according to the Spanish concept of deep geological disposal of high-level radioactive waste in crystalline rock. Another key aspect of this project is to improve the knowledge of coupled processes such as thermal-hydro-mechanical (THM) and thermal-hydro-chemical (THC) processes operating in the near-field environment. The focus of these efforts is on model development and the validation of predictions through model implementation in computational tools that simulate coupled THM and THC processes.

  9. Tourism impacts of Three Mile Island and other adverse events: Implications for Lincoln County and other rural counties bisected by radioactive wastes intended for Yucca Mountain

    NASA Astrophysics Data System (ADS)

    Himmelberger, Jeffery J.; Baughman, Mike; Ogneva-Himmelberger, Yelena A.

    1995-11-01

    Whether the proposed Yucca Mountain nuclear waste repository system will adversely impact tourism in southern Nevada is an open question of particular importance to visitor-oriented rural counties bisected by planned waste transportation corridors (highway or rail). As part of one such county's repository impact assessment program, the tourism implications of Three Mile Island (TMI) and other major hazard events have been revisited to inform ongoing county-wide socioeconomic assessments and contingency planning efforts. This paper summarizes the key implications of such research as applied to Lincoln County, Nevada. Implications for other rural counties are discussed in light of the research findings.

  10. CSpace: an integrated workplace for the graphical and algebraic analysis of phase assemblages on 32-bit wintel platforms

    NASA Astrophysics Data System (ADS)

    Torres-Roldan, Rafael L.; Garcia-Casco, Antonio; Garcia-Sanchez, Pedro A.

    2000-08-01

    CSpace is a program for the graphical and algebraic analysis of composition relations within chemical systems. The program is particularly suited to the needs of petrologists, but could also prove useful for mineralogists, geochemists and other environmental scientists. A few examples of what can be accomplished with CSpace are the mapping of compositions into some desired set of system/phase components, the estimation of reaction/mixing coefficients and assessment of phase-rule compatibility relations within or between complex mineral assemblages. The program also allows dynamic inspection of compositional relations by means of barycentric plots. CSpace provides an integrated workplace for data management, manipulation and plotting. Data management is done through a built-in spreadsheet-like editor, which also acts as a data repository for the graphical and algebraic procedures. Algebraic capabilities are provided by a mapping engine and a matrix analysis tool, both of which are based on singular-value decomposition. The mapping engine uses a general approach to linear mapping, capable of handling determined, underdetermined and overdetermined problems. The matrix analysis tool is implemented as a task "wizard" that guides the user through a number of steps to perform matrix approximation (finding nearest rank-deficient models of an input composition matrix), and inspection of null-reaction space relationships (i.e. of implicit linear relations among the elements of the composition matrix). Graphical capabilities are provided by a graph engine that directly links with the contents of the data editor. The graph engine can generate sophisticated 2-D ternary (triangular) and 3D quaternary (tetrahedral) barycentric plots and includes features such as interactive re-sizing and rotation, on-the-fly coordinate scaling and support for automated drawing of tie lines.
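
    The algebra described above can be illustrated with a short, hedged NumPy sketch (not CSpace's own code): a least-squares mapping of a bulk composition onto a set of phases, and a singular-value decomposition that exposes an implicit reaction relation among them. The compositions are invented for the example.

```python
import numpy as np

# rows = phases, columns = system components (hypothetical molar compositions)
phases = np.array([
    [1.0, 0.0, 2.0],   # phase A
    [0.0, 1.0, 1.0],   # phase B
    [1.0, 1.0, 3.0],   # phase C, deliberately equal to A + B
])

# (a) least-squares mapping of a bulk composition onto the phases
bulk = np.array([2.0, 1.0, 5.0])
coeffs, *_ = np.linalg.lstsq(phases.T, bulk, rcond=None)
print("mixing coefficients:", np.round(coeffs, 3))

# (b) left null space of the composition matrix: coefficients c with c @ phases = 0,
# i.e. an implicit reaction relation among the phases (here A + B - C = 0)
u, s, _ = np.linalg.svd(phases)
rank = int((s > max(phases.shape) * np.finfo(float).eps * s[0]).sum())
reactions = u[:, rank:]
print("reaction-space vectors (per-phase coefficients):\n", np.round(reactions, 3))
```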

  11. Geneious Basic: An integrated and extendable desktop software platform for the organization and analysis of sequence data

    PubMed Central

    Kearse, Matthew; Moir, Richard; Wilson, Amy; Stones-Havas, Steven; Cheung, Matthew; Sturrock, Shane; Buxton, Simon; Cooper, Alex; Markowitz, Sidney; Duran, Chris; Thierer, Tobias; Ashton, Bruce; Meintjes, Peter; Drummond, Alexei

    2012-01-01

    Summary: The two main functions of bioinformatics are the organization and analysis of biological data using computational resources. Geneious Basic has been designed to be an easy-to-use and flexible desktop software application framework for the organization and analysis of biological data, with a focus on molecular sequences and related data types. It integrates numerous industry-standard discovery analysis tools, with interactive visualizations to generate publication-ready images. One key contribution to researchers in the life sciences is the Geneious public application programming interface (API) that affords the ability to leverage the existing framework of the Geneious Basic software platform for virtually unlimited extension and customization. The result is an increase in the speed and quality of development of computation tools for the life sciences, due to the functionality and graphical user interface available to the developer through the public API. Geneious Basic represents an ideal platform for the bioinformatics community to leverage existing components and to integrate their own specific requirements for the discovery, analysis and visualization of biological data. Availability and implementation: Binaries and public API freely available for download at http://www.geneious.com/basic, implemented in Java and supported on Linux, Apple OSX and MS Windows. The software is also available from the Bio-Linux package repository at http://nebc.nerc.ac.uk/news/geneiousonbl. Contact: peter@biomatters.com PMID:22543367

  12. Geneious Basic: an integrated and extendable desktop software platform for the organization and analysis of sequence data.

    PubMed

    Kearse, Matthew; Moir, Richard; Wilson, Amy; Stones-Havas, Steven; Cheung, Matthew; Sturrock, Shane; Buxton, Simon; Cooper, Alex; Markowitz, Sidney; Duran, Chris; Thierer, Tobias; Ashton, Bruce; Meintjes, Peter; Drummond, Alexei

    2012-06-15

    The two main functions of bioinformatics are the organization and analysis of biological data using computational resources. Geneious Basic has been designed to be an easy-to-use and flexible desktop software application framework for the organization and analysis of biological data, with a focus on molecular sequences and related data types. It integrates numerous industry-standard discovery analysis tools, with interactive visualizations to generate publication-ready images. One key contribution to researchers in the life sciences is the Geneious public application programming interface (API) that affords the ability to leverage the existing framework of the Geneious Basic software platform for virtually unlimited extension and customization. The result is an increase in the speed and quality of development of computation tools for the life sciences, due to the functionality and graphical user interface available to the developer through the public API. Geneious Basic represents an ideal platform for the bioinformatics community to leverage existing components and to integrate their own specific requirements for the discovery, analysis and visualization of biological data. Binaries and public API freely available for download at http://www.geneious.com/basic, implemented in Java and supported on Linux, Apple OSX and MS Windows. The software is also available from the Bio-Linux package repository at http://nebc.nerc.ac.uk/news/geneiousonbl.

  13. Enabling Long-Term Oceanographic Research: Changing Data Practices, Information Management Strategies and Informatics

    NASA Astrophysics Data System (ADS)

    Baker, K. S.; Chandler, C. L.

    2008-12-01

    Data management and informatics research are in a state of change in terms of data practices, information strategies, and roles. New ways of thinking about data and data management can facilitate interdisciplinary global ocean science. To meet contemporary expectations for local data use and reuse by a variety of audiences, collaborative strategies involving diverse teams of information professionals are developing. Such changes are fostering the growth of information infrastructures that support multi-scale sampling, data integration, and nascent networks of data repositories. In this retrospective, two examples of oceanographic projects incorporating data management in partnership with long-term science programs are reviewed: the Palmer Station Long-Term Ecological Research program (Palmer LTER) and the United States Joint Global Ocean Flux Study (US JGOFS). Lessons learned - short-term and long-term - from a decade of data management within these two communities will be presented. A conceptual framework called Ocean Informatics provides one example for managing the complexities inherent to sharing oceanographic data. Elements are discussed that address the economies-of-scale as well as the complexities-of-scale pertinent to a broad vision of information management and scientific research.

  14. Towards an Integrated Framework for Designing Effective ICT-Supported Learning Environments: The Challenge to Better Link Technology and Pedagogy

    ERIC Educational Resources Information Center

    Richards, Cameron

    2006-01-01

    For various reasons many teachers struggle to harness the powerful informational, communicative and interactive learning possibilities of information and communication technologies (ICTs) in general. This is perhaps typified by how e-learning platforms and web portals are often used mainly as repositories for content and related online discussion…

  15. Integration of HTML Documents into an XML-Based Knowledge Repository

    PubMed Central

    Roemer, Lorrie K; Rocha, Roberto A; Del Fiol, Guilherme

    2005-01-01

    The Emergency Patient Instruction Generator (EPIG) is an electronic content compiler/viewer/editor developed by Intermountain Health Care. The content is vendor-licensed HTML patient discharge instructions. This work describes the process by which discharge instructions were converted from ASCII-encoded HTML to XML and then loaded into a database for use by EPIG. PMID:16779384
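
    A hedged sketch of that kind of conversion pipeline, assuming a hypothetical XML structure and a local SQLite table rather than EPIG's actual vendor content, schema, or database:

```python
# Illustration only: convert an ASCII HTML discharge instruction into a simple XML
# wrapper and load it into a database. Element names and the table schema are
# hypothetical, not EPIG's.
import sqlite3
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the visible text of an HTML fragment."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

html_doc = "<html><body><h1>Wound Care</h1><p>Keep the wound clean and dry.</p></body></html>"

parser = TextExtractor()
parser.feed(html_doc)

# wrap the extracted content in a (hypothetical) XML structure
root = ET.Element("instruction", attrib={"title": parser.chunks[0]})
for paragraph in parser.chunks[1:]:
    ET.SubElement(root, "paragraph").text = paragraph
xml_doc = ET.tostring(root, encoding="unicode")

# load into a database table for the viewer/editor to query
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instructions (title TEXT, xml TEXT)")
conn.execute("INSERT INTO instructions VALUES (?, ?)", (parser.chunks[0], xml_doc))
conn.commit()
print(conn.execute("SELECT title FROM instructions").fetchall())
```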

  16. Web-Based Learning Materials for Higher Education: The MERLOT Repository

    ERIC Educational Resources Information Center

    Orhun, Emrah

    2004-01-01

    MERLOT (Multimedia Educational Resource for Learning and Online Teaching) is a web-based open resource designed primarily for faculty and students in higher education. The resources in MERLOT include over 8,000 learning materials and support materials from a wide variety of disciplines that can be integrated within the context of a larger course.…

  17. A web-based biosignal data management system for U-health data integration.

    PubMed

    Ro, Dongwoo; Yoo, Sooyoung; Choi, Jinwook

    2008-11-06

    In the ubiquitous healthcare environment, the biosignal data should be easily accessed and properly maintained. This paper describes a web-based data management system. It consists of a device interface, a data upload control, a central repository, and a web server. For the user-specific web services, a MFER Upload ActiveX Control was developed.
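
    As a rough illustration of the upload path such a system implies, the sketch below posts an MFER biosignal file to a hypothetical repository endpoint over HTTP; the URL, form fields, and token-based authentication are assumptions, and the system described in the abstract actually performs uploads through an ActiveX control in the browser.

```python
# Sketch under assumptions: client-side upload of an MFER biosignal file to a central
# repository over HTTP. Endpoint, fields, and authentication are hypothetical.
import requests

REPOSITORY_URL = "https://uhealth.example.org/api/biosignals"   # hypothetical endpoint

def upload_biosignal(path, patient_id, token):
    with open(path, "rb") as fh:
        response = requests.post(
            REPOSITORY_URL,
            headers={"Authorization": f"Bearer {token}"},
            data={"patient_id": patient_id, "format": "MFER"},
            files={"signal": (path, fh, "application/octet-stream")},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()   # e.g. the repository's identifier for the stored record

# upload_biosignal("ecg_20080101.mwf", patient_id="P-0001", token="...")
```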

  18. 75 FR 72660 - Extension of Temporary Exemptions for Eligible Credit Default Swaps To Facilitate Operation of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-26

    ... Integrity Act of 2009 (S. 272) (introduced by Senator Tom Harkin in January 2009); The Derivatives Markets... establishing central trade repositories for OTC derivatives markets (August 2009); and Over-the-Counter Derivatives Markets Act of 2009 (prepared by Treasury and sent to Congress in August 2009). On July 21, 2010...

  19. Extending software repository hosting to code review and testing

    NASA Astrophysics Data System (ADS)

    Gonzalez Alvarez, A.; Aparicio Cotarelo, B.; Lossent, A.; Andersen, T.; Trzcinska, A.; Asbury, D.; Høimyr, N.; Meinhard, H.

    2015-12-01

    We will describe how CERN's services around Issue Tracking and Version Control have evolved, and what the plans for the future are. We will describe the services' main design, integration and structure, giving special attention to new requirements from the community of users in terms of collaboration and integration tools, and to how we address this challenge when defining new services based on GitLab for collaboration and Code Review, replacing our current Gitolite service, and on Jenkins for Continuous Integration. These new services complement the existing ones to create a new global "development tool stack" where each working group can place its particular development work-flow.

  20. 10 CFR 63.143 - Implementation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Implementation. 63.143 Section 63.143 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Quality Assurance § 63.143 Implementation. DOE shall implement a quality assurance program...

  1. Scrubchem: Building Bioactivity Datasets from Pubchem Bioassay Data (SOT)

    EPA Science Inventory

    The PubChem Bioassay database is a non-curated public repository with data from 64 sources, including: ChEMBL, BindingDb, DrugBank, EPA Tox21, NIH Molecular Libraries Screening Program, and various other academic, government, and industrial contributors. Methods for extracting th...

  2. 10 CFR 63.132 - Confirmation of geotechnical and design parameters.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Commission. (e) In situ monitoring of the thermomechanical response of the underground facility must be... IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Performance Confirmation Program § 63.132... engineered systems and components, must be identified in the performance confirmation plan. (d) These...

  3. 10 CFR 63.132 - Confirmation of geotechnical and design parameters.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Commission. (e) In situ monitoring of the thermomechanical response of the underground facility must be... IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Performance Confirmation Program § 63.132... engineered systems and components, must be identified in the performance confirmation plan. (d) These...

  4. 10 CFR 63.132 - Confirmation of geotechnical and design parameters.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Commission. (e) In situ monitoring of the thermomechanical response of the underground facility must be... IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Performance Confirmation Program § 63.132... engineered systems and components, must be identified in the performance confirmation plan. (d) These...

  5. 10 CFR 60.141 - Confirmation of geotechnical and design parameters.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... reported to the Commission. (e) In situ monitoring of the thermomechanical response of the underground... IN GEOLOGIC REPOSITORIES Performance Confirmation Program § 60.141 Confirmation of geotechnical and... needed in design to accommodate actual field conditions encountered. (b) Subsurface conditions shall be...

  6. 10 CFR 60.141 - Confirmation of geotechnical and design parameters.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... reported to the Commission. (e) In situ monitoring of the thermomechanical response of the underground... IN GEOLOGIC REPOSITORIES Performance Confirmation Program § 60.141 Confirmation of geotechnical and... needed in design to accommodate actual field conditions encountered. (b) Subsurface conditions shall be...

  7. 10 CFR 60.141 - Confirmation of geotechnical and design parameters.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... reported to the Commission. (e) In situ monitoring of the thermomechanical response of the underground... IN GEOLOGIC REPOSITORIES Performance Confirmation Program § 60.141 Confirmation of geotechnical and... needed in design to accommodate actual field conditions encountered. (b) Subsurface conditions shall be...

  8. 10 CFR 63.132 - Confirmation of geotechnical and design parameters.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Commission. (e) In situ monitoring of the thermomechanical response of the underground facility must be... IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Performance Confirmation Program § 63.132... engineered systems and components, must be identified in the performance confirmation plan. (d) These...

  9. 10 CFR 63.132 - Confirmation of geotechnical and design parameters.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Commission. (e) In situ monitoring of the thermomechanical response of the underground facility must be... IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Performance Confirmation Program § 63.132... engineered systems and components, must be identified in the performance confirmation plan. (d) These...

  10. U.S. Virgin Islands Petroleum Price-Spike Preparation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, C.

    2012-06-01

    This NREL technical report details a plan for the U.S. Virgin Islands (USVI) to minimize the economic damage caused by major petroleum price increases. The assumptions for this plan are that the USVI will have very little time and money to implement it and that the population will be highly motivated to follow it because of high fuel prices. The plan's success, therefore, is highly dependent on behavior change. This plan was derived largely from a review of the actions taken and behavior changes made by companies and commuters throughout the United States in response to the oil price spike of 2008. Many of these solutions were coordinated by or reported through the 88 local representatives of the U.S. Department of Energy's Clean Cities program. The National Renewable Energy Laboratory provides technical and communications support for the Clean Cities program and therefore serves as a de facto repository of these solutions. This plan is the first publication that has tapped this repository.

  11. JingleBells: A Repository of Immune-Related Single-Cell RNA-Sequencing Datasets.

    PubMed

    Ner-Gaon, Hadas; Melchior, Ariel; Golan, Nili; Ben-Haim, Yael; Shay, Tal

    2017-05-01

    Recent advances in single-cell RNA-sequencing (scRNA-seq) technology increase the understanding of immune differentiation and activation processes, as well as the heterogeneity of immune cell types. Although the number of available immune-related scRNA-seq datasets increases rapidly, their large size and various formats render them hard for the wider immunology community to use, and read-level data are practically inaccessible to the non-computational immunologist. To facilitate datasets reuse, we created the JingleBells repository for immune-related scRNA-seq datasets ready for analysis and visualization of reads at the single-cell level (http://jinglebells.bgu.ac.il/). To this end, we collected the raw data of publicly available immune-related scRNA-seq datasets, aligned the reads to the relevant genome, and saved aligned reads in a uniform format, annotated for cell of origin. We also added scripts and a step-by-step tutorial for visualizing each dataset at the single-cell level, through the commonly used Integrated Genome Viewer (www.broadinstitute.org/igv/). The uniform scRNA-seq format used in JingleBells can facilitate reuse of scRNA-seq data by computational biologists. It also enables immunologists who are interested in a specific gene to visualize the reads aligned to this gene to estimate cell-specific preferences for splicing, mutation load, or alleles. Thus JingleBells is a resource that will extend the usefulness of scRNA-seq datasets outside the programming aficionado realm. Copyright © 2017 by The American Association of Immunologists, Inc.
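
    A brief sketch (not one of the repository's own scripts) of the kind of read-level, per-cell query this uniform format enables, using pysam; the BAM file name, gene coordinates, and the "CB" cell-barcode tag are assumptions that depend on how a particular dataset was aligned and annotated.

```python
# Count reads overlapping a gene of interest, grouped by cell barcode.
# Requires a coordinate-sorted, indexed BAM file (.bai alongside it).
from collections import Counter
import pysam

bam = pysam.AlignmentFile("immune_dataset.bam", "rb")
per_cell = Counter()
for read in bam.fetch("chr1", 1_000_000, 1_050_000):   # hypothetical gene coordinates
    if read.is_unmapped:
        continue
    if read.has_tag("CB"):                              # cell-of-origin barcode tag
        per_cell[read.get_tag("CB")] += 1

for barcode, n_reads in per_cell.most_common(5):
    print(barcode, n_reads)
```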

  12. CranialVault and its CRAVE tools: a clinical computer assistance system for Deep Brain Stimulation (DBS) therapy

    PubMed Central

    D’Haese, Pierre-François; Pallavaram, Srivatsan; Li, Rui; Remple, Michael S.; Kao, Chris; Neimat, Joseph S.; Konrad, Peter E.; Dawant, Benoit M.

    2010-01-01

    A number of methods have been developed to assist surgeons at various stages of deep brain stimulation (DBS) therapy. These include construction of anatomical atlases, functional databases, and electrophysiological atlases and maps. But a complete system that can be integrated into the clinical workflow has not been developed. In this paper we present a system designed to assist physicians in pre-operative target planning, intra-operative target refinement and implantation, and post-operative DBS lead programming. The purpose of this system is to centralize the data acquired at the various stages of the procedure, reduce the amount of time needed at each stage of the therapy, and maximize the efficiency of the entire process. The system consists of a central repository (CranialVault), a suite of software modules called CRAVE (CRAnialVault Explorer) that permit data entry and data visualization at each stage of the therapy, and a series of algorithms that permit the automatic processing of the data. The central repository contains image data for more than 400 patients with the related pre-operative plans and positions of the final implants, and about 10,550 electrophysiological data points (micro-electrode recordings or responses to stimulation) recorded from 222 of these patients. The system has reached the stage of a clinical prototype that is being evaluated clinically at our institution. A preliminary quantitative validation of the planning component of the system, performed on 80 patients who underwent the procedure between January 2009 and December 2009, shows that the system provides both timely and valuable information. PMID:20732828

  13. Multisite Semiautomated Clinical Data Repository for Duplication 15q Syndrome: Study Protocol and Early Uses.

    PubMed

    Ajayi, Oluwaseun Jessica; Smith, Ebony Jeannae; Viangteeravat, Teeradache; Huang, Eunice Y; Nagisetty, Naga Satya V Rao; Urraca, Nora; Lusk, Laina; Finucane, Brenda; Arkilo, Dimitrios; Young, Jennifer; Jeste, Shafali; Thibert, Ronald; Reiter, Lawrence T

    2017-10-18

    Chromosome 15q11.2-q13.1 duplication syndrome (Dup15q syndrome) is a rare disorder caused by duplications of chromosome 15q11.2-q13.1, resulting in a wide range of developmental disabilities in affected individuals. The Dup15q Alliance is an organization that provides family support and promotes research to improve the quality of life of patients living with Dup15q syndrome. Because of the low prevalence of this condition, the establishment of a single research repository would have been difficult and more time consuming without collaboration across multiple institutions. The goal of this project is to establish a national deidentified database with clinical and survey information on individuals diagnosed with Dup15q syndrome. The development of a multiclinic-site repository for clinical and survey data on individuals with Dup15q syndrome was initiated and supported by the Dup15q Alliance. Using collaborative workflows, communication protocols, and stakeholder engagement tools, a comprehensive database of patient-centered information was built. We successfully established a centralized repository, populated by participant self-report, for Dup15q syndrome research. This repository also resulted in the development of standardized instruments that can be used for other studies relating to developmental disorders. Standardizing the data collection instruments allows us to integrate our data with other national databases, such as the National Database for Autism Research. Collection of a substantial portion of the questionnaire data was facilitated through direct engagement of participants and their families. This allowed a more complete set of information to be collected with a minimal turnaround time. We developed a repository that can efficiently be mined for shared clinical phenotypes observed at multiple clinic sites and used as a springboard for future clinical and basic research studies. ©Oluwaseun Jessica Ajayi, Ebony Jeannae Smith, Teeradache Viangteeravat, Eunice Y Huang, Naga Satya V Rao Nagisetty, Nora Urraca, Laina Lusk, Brenda Finucane, Dimitrios Arkilo, Jennifer Young, Shafali Jeste, Ronald Thibert, The Dup15q Alliance, Lawrence T Reiter. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 18.10.2017.

  14. The Environmental Data Initiative data repository: Trustworthy practices that foster preservation, fitness, and reuse for environmental and ecological data

    NASA Astrophysics Data System (ADS)

    Servilla, M. S.; Brunt, J.; Costa, D.; Gries, C.; Grossman-Clarke, S.; Hanson, P. C.; O'Brien, M.; Smith, C.; Vanderbilt, K.; Waide, R.

    2017-12-01

    The Environmental Data Initiative (EDI) is an outgrowth of more than 30 years of information management experience and technology from LTER Network data practitioners. EDI builds upon the PASTA data repository software used by the LTER Network Information System and manages more than 42,000 data packages, containing tabular data, imagery, and other formats. Development of the repository was a community process beginning in 2009 that included numerous working groups for generating use cases, system requirements, and testing of completed software, thereby creating a vested interest in its success and transparency in design. All software is available for review on GitHub, and refinements and new features are ongoing. Documentation is also available on Read-the-docs, including a comprehensive description of all web-service API methods. PASTA is metadata driven and uses the Ecological Metadata Language (EML) standard for describing environmental and ecological data; a simplified Dublin Core document is also available for each data package. Data are aggregated into packages consisting of metadata and other related content described by an OAI-ORE document. Once archived, each data package becomes immutable and permanent; updates are possible through the addition of new revisions. Components of each data package are accessible through a unique identifier, while the entire data package receives a DOI that is registered in DataCite. Preservation occurs through a combination of DataONE synchronization/replication and a series of local and remote backup strategies, including daily uploads to AWS Glacier storage. Checksums are computed for all data at initial upload, with random verification occurring on a continuous basis, thus ensuring the integrity of data. PASTA incorporates a series of data quality tests to ensure that data are correctly documented with EML before data are archived; data packages that fail any test are not accepted into the repository. These tests are a measure of data fitness, which ultimately increases confidence in data reuse and synthesis. The EDI data repository is recognized by multiple organizations, including EarthCube's Council of Data Facilities, the United States Geological Survey, FAIRsharing.org, and re3data.org, and is a PLOS- and Nature-recommended data repository.
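
    The checksum-at-ingest and random-reverification pattern mentioned above can be reduced to a few lines; the sketch below is illustrative only, with a hypothetical manifest format rather than PASTA's internal bookkeeping.

```python
# Illustrative only: compute checksums for a data package at upload, then randomly
# re-verify files later, as a repository might do on a continuous basis.
import hashlib
import json
import random
from pathlib import Path

def sha256(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def register_package(data_dir, manifest_path):
    """Record a checksum for every file in a data package at initial upload."""
    manifest = {str(p): sha256(p) for p in Path(data_dir).rglob("*") if p.is_file()}
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

def spot_check(manifest_path, sample_size=5):
    """Randomly re-verify a few files against the recorded checksums."""
    manifest = json.loads(Path(manifest_path).read_text())
    for path in random.sample(list(manifest), min(sample_size, len(manifest))):
        assert sha256(path) == manifest[path], f"integrity failure: {path}"
```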

  15. The Index to Marine and Lacustrine Geological Samples: Improving Sample Accessibility and Enabling Current and Future Research

    NASA Astrophysics Data System (ADS)

    Moore, C.

    2011-12-01

    The Index to Marine and Lacustrine Geological Samples is a community designed and maintained resource enabling researchers to locate and request sea floor and lakebed geologic samples archived by partner institutions. Conceived in the dawn of the digital age by representatives from U.S. academic and government marine core repositories and the NOAA National Geophysical Data Center (NGDC) at a 1977 meeting convened by the National Science Foundation (NSF), the Index is based on core concepts of community oversight, common vocabularies, consistent metadata and a shared interface. Form and content of underlying vocabularies and metadata continue to evolve according to the needs of the community, as do supporting technologies and access methodologies. The Curators Consortium, now international in scope, meets at partner institutions biennially to share ideas and discuss best practices. NGDC serves the group by providing database access and maintenance, a list server, digitizing support and long-term archival of sample metadata, data and imagery. Over three decades, participating curators have performed the herculean task of creating and contributing metadata for over 195,000 sea floor and lakebed cores, grabs, and dredges archived in their collections. Some partners use the Index for primary web access to their collections while others use it to increase exposure of more in-depth institutional systems. The Index is currently a geospatially-enabled relational database, publicly accessible via Web Feature and Web Map Services, and text- and ArcGIS map-based web interfaces. To provide as much knowledge as possible about each sample, the Index includes curatorial contact information and links to related data, information and images; 1) at participating institutions, 2) in the NGDC archive, and 3) at sites such as the Rolling Deck to Repository (R2R) and the System for Earth Sample Registration (SESAR). Over 34,000 International GeoSample Numbers (IGSNs) linking to SESAR are included in anticipation of opportunities for interconnectivity with Integrated Earth Data Applications (IEDA) systems. To promote interoperability and broaden exposure via the semantic web, NGDC is publishing lithologic classification schemes and terminology used in the Index as Simple Knowledge Organization System (SKOS) vocabularies, coordinating with R2R and the Consortium for Ocean Leadership for consistency. Availability in SKOS form will also facilitate use of the vocabularies in International Standards Organization (ISO) 19115-2 compliant metadata records. NGDC provides stewardship for the Index on behalf of U.S. repositories as the NSF designated "appropriate National Data Center" for data and metadata pertaining to sea floor samples as specified in the 2011 Division of Ocean Sciences Sample and Data Policy, and on behalf of international partners via a collocated World Data Center. NGDC operates on the Open Archival Information System (OAIS) reference model. Active Partners: Antarctic Marine Geology Research Facility, Florida State University; British Ocean Sediment Core Research Facility; Geological Survey of Canada; Integrated Ocean Drilling Program; Lamont-Doherty Earth Observatory; National Lacustrine Core Repository, University of Minnesota; Oregon State University; Scripps Institution of Oceanography; University of Rhode Island; U.S. Geological Survey; Woods Hole Oceanographic Institution.
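
    Because the Index is exposed through standard OGC Web Feature Services, a spatial query can be scripted with ordinary HTTP calls. The sketch below is a hedged example: the service URL, feature type name, and property names are placeholders, not the actual NGDC endpoint or schema.

```python
# Query a (hypothetical) WFS endpoint for sample records inside a bounding box.
import requests

WFS_URL = "https://example.gov/geoserver/wfs"           # hypothetical endpoint
params = {
    "service": "WFS",
    "version": "1.1.0",
    "request": "GetFeature",
    "typeName": "samples:seafloor_cores",               # hypothetical feature type
    "bbox": "-70.0,35.0,-60.0,45.0",                    # lon/lat box of interest
    "outputFormat": "application/json",                 # depends on server support
    "maxFeatures": "100",
}
response = requests.get(WFS_URL, params=params, timeout=60)
response.raise_for_status()
for feature in response.json().get("features", []):
    props = feature["properties"]
    print(props.get("igsn"), props.get("device"), props.get("water_depth"))
```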

  16. The Cambridge Centre for Ageing and Neuroscience (Cam-CAN) data repository: Structural and functional MRI, MEG, and cognitive data from a cross-sectional adult lifespan sample.

    PubMed

    Taylor, Jason R; Williams, Nitin; Cusack, Rhodri; Auer, Tibor; Shafto, Meredith A; Dixon, Marie; Tyler, Lorraine K; Cam-Can; Henson, Richard N

    2017-01-01

    This paper describes the data repository for the Cambridge Centre for Ageing and Neuroscience (Cam-CAN) initial study cohort. The Cam-CAN Stage 2 repository contains multi-modal (MRI, MEG, and cognitive-behavioural) data from a large (approximately N=700), cross-sectional adult lifespan (18-87years old) population-based sample. The study is designed to characterise age-related changes in cognition and brain structure and function, and to uncover the neurocognitive mechanisms that support healthy cognitive ageing. The database contains raw and preprocessed structural MRI, functional MRI (active tasks and resting state), and MEG data (active tasks and resting state), as well as derived scores from cognitive behavioural experiments spanning five broad domains (attention, emotion, action, language, and memory), and demographic and neuropsychological data. The dataset thus provides a depth of neurocognitive phenotyping that is currently unparalleled, enabling integrative analyses of age-related changes in brain structure, brain function, and cognition, and providing a testbed for novel analyses of multi-modal neuroimaging data. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  17. Enhancing the Reuse of Digital Resources for Integrated Systems to Represent, Understand and Dynamize Complex Interactions in Architectural Cultural Heritage Environments

    NASA Astrophysics Data System (ADS)

    Delgado, F. J.; Martinez, R.; Finat, J.; Martinez, J.; Puche, J. C.; Finat, F. J.

    2013-07-01

    In this work we develop a multiply interconnected system that involves objects, agents and the interactions between them, using ICT applied to open repositories, user communities and web services. Our approach is applied to Architectural Cultural Heritage Environments (ACHE). It includes components relative to digital accessibility (to augmented ACHE repositories), content management (ontologies for the semantic web), semiautomatic recognition (to ease the reuse of materials) and serious videogames (for interaction in urban environments). Their combination provides support for local real/remote virtual tourism (including some tools for low-level RT display of rendering on portable devices), mobile-based smart interactions (with special regard to monitored environments) and CH-related games (as extended web services). The main contributions to AR models on common GIS applied to architectural environments concern an interactive support, performed directly on digital files, which gives access to CH contents referred to GIS of urban districts (involving facades, historical or preindustrial buildings) and/or CH repositories in a ludic way, transversal to the acquisition of cognitive, medial and social abilities in collaborative environments.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sobes, Vladimir; Scaglione, John M; Wagner, John C

    Spent nuclear fuel (SNF) management practices in the United States rely on dry storage systems that include both canister- and cask-based systems. The United States Department of Energy Used Fuel Disposition Campaign is examining the feasibility of direct disposal of dual-purpose (storage and transportation) canisters (DPCs) in a geological repository. One of the major technical challenges for direct disposal is the ability to demonstrate the subcriticality of the DPCs loaded with SNF for the repository performance period (e.g., 10,000 years or more) as the DPCs may undergo degradation over time. Specifically, groundwater ingress into the DPC (i.e., flooding) could allow the system to achieve criticality in scenarios where the neutron absorber plates in the DPC basket have degraded. However, as was shown by Banerjee et al., some aqueous species in the groundwater provide noticeable reactivity reduction for these systems. For certain amounts of particular aqueous species (e.g., chlorine, lithium) in the groundwater, subcriticality can be demonstrated even for DPCs with complete degradation of the neutron absorber plates or a degraded fuel basket configuration. It has been demonstrated that chlorine is the leading impurity, as indicated by significant neutron absorption in the water, that is available in reasonable quantities for the deep geological repository media under consideration. This paper presents the results of an investigation of the available integral experiments worldwide that could be used to validate DPC disposal criticality evaluations, including credit for chlorine. Due to the small number of applicable critical configurations, validation through traditional trending analysis was not possible. The bias in the eigenvalue of the application systems due only to the chlorine was calculated using TSURFER analysis and found to be on the order of 100 per cent mille (1 pcm = 10^-5 keff). This study investigated the design of a series of critical configurations with varying amounts of chlorine to address validation gaps. Such integral experiments would support the crediting of the chlorine neutron-absorption properties in groundwater and the demonstration of subcriticality for DPCs in deep geologic repositories with sufficient chlorine availability.

  19. Validation Study for Crediting Chlorine in Criticality Analyses for US Spent Nuclear Fuel Disposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sobes, Vladimir; Scaglione, John M.; Wagner, John C.

    2015-01-01

    Spent nuclear fuel (SNF) management practices in the United States rely on dry storage systems that include both canister- and cask-based systems. The United States Department of Energy Used Fuel Disposition Campaign is examining the feasibility of direct disposal of dual-purpose (storage and transportation) canisters (DPCs) in a geological repository. One of the major technical challenges for direct disposal is the ability to demonstrate the subcriticality of the DPCs loaded with SNF for the repository performance period (e.g., 10,000 years or more) as the DPCs may undergo degradation over time. Specifically, groundwater ingress into the DPC (i.e., flooding) could allow the system to achieve criticality in scenarios where the neutron absorber plates in the DPC basket have degraded. However, as was shown by Banerjee et al., some aqueous species in the groundwater provide noticeable reactivity reduction for these systems. For certain amounts of particular aqueous species (e.g., chlorine, lithium) in the groundwater, subcriticality can be demonstrated even for DPCs with complete degradation of the neutron absorber plates or a degraded fuel basket configuration. It has been demonstrated that chlorine is the leading impurity, as indicated by significant neutron absorption in the water, that is available in reasonable quantities for the deep geological repository media under consideration. This paper presents the results of an investigation of the available integral experiments worldwide that could be used to validate DPC disposal criticality evaluations, including credit for chlorine. Due to the small number of applicable critical configurations, validation through traditional trending analysis was not possible. The bias in the eigenvalue of the application systems due only to the chlorine was calculated using TSURFER analysis and found to be on the order of 100 per cent mille (1 pcm = 10^-5 keff). This study investigated the design of a series of critical configurations with varying amounts of chlorine to address validation gaps. Such integral experiments would support the crediting of the chlorine neutron-absorption properties in groundwater and the demonstration of subcriticality for DPCs in deep geologic repositories with sufficient chlorine availability.
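
    For reference, a short worked conversion of the bias unit used in this abstract (pcm, per cent mille):

```latex
1\,\mathrm{pcm} = 10^{-5}\,\Delta k_{\mathrm{eff}}
\quad\Longrightarrow\quad
100\,\mathrm{pcm} = 100 \times 10^{-5} = 1 \times 10^{-3}\,\Delta k_{\mathrm{eff}}
```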

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forslund, D.W.; Cook, J.L.

    One of the most powerful tools available for telemedicine is a multimedia medical record accessible over a wide area and simultaneously editable by multiple physicians. The ability to do this through an intuitive interface linking multiple distributed data repositories while maintaining full data integrity is a fundamental enabling technology in healthcare. The authors discuss the role of distributed object technology using Java and CORBA in providing this capability including an example of such a system (TeleMed) which can be accessed through the World Wide Web. Issues of security, scalability, data integrity, and usability are emphasized.

  1. The role of CORBA in enabling telemedicine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forslund, D.W.

    1997-07-01

    One of the most powerful tools available for telemedicine is a multimedia medical record accessible over a wide area and simultaneously editable by multiple physicians. The ability to do this through an intuitive interface linking multiple distributed data repositories while maintaining full data integrity is a fundamental enabling technology in healthcare. The author discusses the role of distributed object technology using CORBA in providing this capability, including an example of such a system (TeleMed) which can be accessed through the World Wide Web. Issues of security, scalability, data integrity, and usability are emphasized.

  2. The siting program of geological repository for spent fuel/high-level waste in Czech Republic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novotny, P.

    1993-12-31

    The management of high-level waste in the Czech Republic has a very short history, because before 1989 spent nuclear fuel was re-exported back to the USSR. The project "Geological research of a HLW repository in the Czech Republic" was initiated during 1990 by the Ministry of the Environment of the Czech Republic, which delegated it to the Czech Geological Survey (CGU) Prague. The first CGU project, late in 1990, proposed a multibarrier concept with a geological repository located at a depth of about 500 m. Screening and studies of potential sites for the repository started in 1991. The first stage comprised regional siting across the Czech Republic for prospective rock types and massifs. In cooperation with GEOPHYSICS Co., the Geophysical Institute of the Czech Academy of Sciences and Charles University Prague, 27 prospective regions were selected using IAEA criteria. This work in the Czech Republic was possible thanks to the detailed geological studies done in the past and to the numerous archive data concentrated in the central geological archive GEOFOND. Selection of prospective sites also respected nature conservation regions and regions protecting water and mineral water resources. CGU opened up contacts with countries with a similar geological situation and started cooperation with SKB (Swedish Nuclear Fuel and Waste Management Co.). The Project of geological research for the next 10 years is a result of these activities.

  3. Preservation of Earth Science Data History with Digital Content Repository Technology

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Pan, J.; Shrestha, B.; Cook, R. B.

    2011-12-01

    An increasing need for derived and on-demand data products in Earth Science research makes digital content more difficult for providers to manage and preserve and for users to locate, understand, and consume. Specifically, this increasing need presents additional challenges in managing data processing history information and delivering such information to end users. For example, the North American Carbon Program (NACP) Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP) chose a modified SYNMAP land cover data set as one of the input drivers for the participating terrestrial biospheric models. The global 1 km resolution SYNMAP land cover data set was created by harmonizing three remote sensing-based land cover products: GLCC, GLC2000, and the MODIS land cover product. The original SYNMAP land cover data was aggregated into half and quarter degree resolution. It was then enhanced with more detailed grassland and cropland types. Currently, there is no effective mechanism to convey this data processing information to the different modeling teams so that they can determine whether a data product meets their needs; the process still relies heavily on offline human interaction. The NASA-sponsored ORNL DAAC has leveraged contemporary digital object repository technology to promote the representation, management, and delivery of data processing history and provenance information. Within a digital object repository, different data products are managed as objects, with metadata as attributes and content delivery and management services as dissemination methods. Derivation relationships among data products can be semantically referenced between digital objects. Within the repository, data users can easily track a derived data product back to its origin, explore metadata and documents about each intermediate data product, and discover the processing details involved in each derivation step. Coupled with the Drupal Web Content Management System, the digital repository interface was enhanced to provide an intuitive graphic representation of the data processing history. Each data product is also associated with a formal metadata record in the FGDC standard, and the main fields of the FGDC record are indexed for search and displayed as attributes of the data product. These features enable data users to better understand and consume a data product. The representation of data processing history in a digital repository can further promote long-term data preservation. Lineage information is a major factor in making digital data understandable and usable long into the future. Derivation references can be set up between digital objects not only within a single digital repository, but also across multiple distributed digital repositories. Along with emerging identification mechanisms, such as the Digital Object Identifier (DOI), a flexible distributed digital repository network can be set up to better preserve digital content. In this presentation, we describe how digital content repository technology can be used to manage, preserve, and deliver data processing history information in the Earth Science research domain, with selected data archived at the ORNL DAAC and the Model and Synthesis Thematic Data Center (MAST-DC) as testing targets.
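
    A toy sketch of the derivation-chain idea described above: each data product object carries metadata plus a reference to the product it was derived from, so processing history can be walked back to the origin. The class names and identifiers are illustrative assumptions, not the DAAC repository's actual schema.

```python
# Illustration only: a minimal provenance chain of data products.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataProduct:
    identifier: str                       # e.g. a DOI or repository object id (placeholder values below)
    description: str
    derived_from: Optional["DataProduct"] = None
    processing_step: str = ""             # what was done to produce this product

def lineage(product: DataProduct):
    """Walk a derived product back to its origin, yielding each node along the way."""
    node = product
    while node is not None:
        yield node
        node = node.derived_from

synmap = DataProduct("doi:10.xxxx/synmap-1km", "SYNMAP global 1 km land cover")
halfdeg = DataProduct("doi:10.xxxx/synmap-0.5deg", "SYNMAP aggregated to 0.5 degree",
                      derived_from=synmap, processing_step="spatial aggregation")
mstmip = DataProduct("doi:10.xxxx/mstmip-landcover", "MsTMIP driver with refined crop/grass types",
                     derived_from=halfdeg, processing_step="class refinement")

for step in lineage(mstmip):
    print(step.identifier, "-", step.description)
```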

  4. Fundamentals of the NEA Thermochemical Database and its influence over national nuclear programs on the performance assessment of deep geological repositories.

    PubMed

    Ragoussi, Maria-Eleni; Costa, Davide

    2017-03-14

    For the last 30 years, the NEA Thermochemical Database (TDB) Project (www.oecd-nea.org/dbtdb/) has been developing a chemical thermodynamic database for elements relevant to the safety of radioactive waste repositories, providing data that are vital to support the geochemical modeling of such systems. The recommended data are selected on the basis of strict review procedures and are characterized by their consistency. The results of these efforts are freely available, and have become an international point of reference in the field. As a result, a number of important national initiatives with regard to waste management programs have used the NEA TDB as their basis, both in terms of recommended data and guidelines. In this article we describe the fundamentals and achievements of the project together with the characteristics of some databases developed in national nuclear waste disposal programs that have been influenced by the NEA TDB. We also give some insights on how this work could be seen as an approach to be used in broader areas of environmental interest. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. SINGLE HEATER TEST FINAL REPORT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J.B. Cho

    The Single Heater Test is the first of the in-situ thermal tests conducted by the U.S. Department of Energy as part of its program of characterizing Yucca Mountain in Nevada as the potential site for a proposed deep geologic repository for the disposal of spent nuclear fuel and high-level nuclear waste. The Site Characterization Plan (DOE 1988) contained an extensive plan of in-situ thermal tests aimed at understanding specific aspects of the response of the local rock mass around the potential repository to the heat from the radioactive decay of the emplaced waste. With the refocusing of the Site Characterization Plan by the "Civilian Radioactive Waste Management Program Plan" (DOE 1994), a consolidated thermal testing program emerged by 1995, as documented in the reports "In-Situ Thermal Testing Program Strategy" (DOE 1995) and "Updated In-Situ Thermal Testing Program Strategy" (CRWMS M&O 1997a). The concept of the Single Heater Test took shape in the summer of 1995, and detailed planning and design of the test started at the beginning of fiscal year 1996. The overall objective of the Single Heater Test was to gain an understanding of the coupled thermal, mechanical, hydrological, and chemical processes that are anticipated to occur in the local rock mass in the potential repository as a result of heat from the radioactive decay of the emplaced waste. This included making a priori predictions of the test results using existing models and subsequently refining or modifying the models on the basis of comparative and interpretive analyses of the measurements and predictions. A second, no less important, objective was to try out, in a full-scale field setting, the various instruments and equipment to be employed in the future on a much larger, more complex thermal test of longer duration, such as the Drift Scale Test. This "shake down" or trial aspect of the Single Heater Test applied not just to the hardware, but also to the teamwork and cooperation between the multiple organizations performing their parts in the test.

  6. Personalized reminiscence therapy M-health application for patients living with dementia: Innovating using open source code repository.

    PubMed

    Zhang, Melvyn W B; Ho, Roger C M

    2017-01-01

    Dementia is known to be an illness that brings forth marked disability amongst elderly individuals. At times, patients living with dementia also experience non-cognitive symptoms, including hallucinations, delusional beliefs, emotional lability, sexualized behaviours and aggression. According to the National Institute of Clinical Excellence (NICE) guidelines, non-pharmacological techniques are typically the first-line option before adjuvant pharmacological options are considered. Reminiscence and music therapy are thus viable options. Lazar et al. [3] previously performed a systematic review of the utilization of technology to deliver reminiscence-based therapy to individuals living with dementia and highlighted that technology does have benefits in the delivery of reminiscence therapy. However, to date, there has been a paucity of M-health innovations in this area. In addition, most of the current innovations are not personalized for each person living with dementia. Prior research has highlighted the utility of open source repositories in bioinformatics research. The authors hope to explain how they tapped into and made use of an open source code repository in the development of a personalized M-health reminiscence therapy innovation for patients living with dementia. The availability of open source code repositories has changed the way healthcare professionals and developers develop smartphone applications today. Conventionally, a long iterative process is needed in the development of a native application, mainly because of the need for native programming and coding, especially if the application needs interactive features or features that can be personalized. Such repositories enable the rapid and cost-effective development of applications. Moreover, developers are also able to innovate further, as less time is spent on the iterative process.

  7. Images of a place and vacation preferences: Implications of the 1989 surveys for assessing the economic impacts of a nuclear waste repository in Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slovic, P.; Layman, M.; Flynn, J.H.

    1990-11-01

    In July 1989 the authors produced a report titled Perceived Risk, Stigma, and Potential Economic Impacts of a High-Level Nuclear-Waste Repository in Nevada (Slovic et al., 1989). That report described a program of research designed to assess the potential impacts of a high-level nuclear waste repository at Yucca Mountain, Nevada upon tourism, retirement and job-related migration, and business development in Las Vegas and the state. It was concluded that adverse economic impacts may potentially result from two related social processes. Specifically, the study by Slovic et al. employed analyses of imagery in order to overcome concerns about the validity of direct questions regarding the influence of a nuclear-waste repository at Yucca Mountain upon a person's future behaviors. During the latter months of 1989, data were collected in three major telephone surveys, designed to achieve the following objectives: (1) to replicate the results from the Phoenix, Arizona, surveys using samples from other populations that contribute to tourism, migration, and development in Nevada; (2) to retest the original Phoenix respondents to determine the stability of their images across an 18-month time period and to determine whether their vacation choices subsequent to the first survey were predictable from the images they produced in that original survey; (3) to elicit additional word-association images for the stimulus underground nuclear waste repository in order to determine whether the extreme negative images generated by the Phoenix respondents would occur with other samples of respondents; and (4) to develop and test a new method for imagery elicitation, based upon a rating technique rather than on word associations. 2 refs., 8 figs., 13 tabs.

  8. 75 FR 71133 - National Institute of Mental Health; Notice of Closed Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-22

    ... Emphasis Panel; Competitive Revision for Stem Cell Repository Relevant to Mental Disorders. Date: December... Domestic Assistance Program Nos. 93.242, Mental Health Research Grants; 93.281, Scientist Development Award, Scientist Development Award for Clinicians, and Research Scientist Award; 93.282, Mental Health National...

  9. 77 FR 13135 - Agency Information Collection Activities: Submission for Review; Information Collection Request...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-05

    ... (DHS), Science and Technology, Protected Repository for the Defense of Infrastructure Against Cyber Threats (PREDICT) Program AGENCY: Science and Technology Directorate, DHS. ACTION: 30-Day notice and request for comment. SUMMARY: The Department of Homeland Security (DHS), Science & Technology (S&T...

  10. 10 CFR 2.1003 - Availability of material.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... months in advance of submitting its license application for a geologic repository, the NRC shall make... of privilege in § 2.1006, graphic-oriented documentary material that includes raw data, computer runs, computer programs and codes, field notes, laboratory notes, maps, diagrams and photographs, which have been...

  11. 10 CFR 2.1003 - Availability of material.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... months in advance of submitting its license application for a geologic repository, the NRC shall make... of privilege in § 2.1006, graphic-oriented documentary material that includes raw data, computer runs, computer programs and codes, field notes, laboratory notes, maps, diagrams and photographs, which have been...

  12. 10 CFR 60.142 - Design testing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... construction, a program for in situ testing of such features as borehole and shaft seals, backfill, and the... 10 Energy 2 2010-01-01 2010-01-01 false Design testing. 60.142 Section 60.142 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES...

  13. 10 CFR 60.142 - Design testing.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... construction, a program for in situ testing of such features as borehole and shaft seals, backfill, and the... 10 Energy 2 2013-01-01 2013-01-01 false Design testing. 60.142 Section 60.142 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES...

  14. 10 CFR 60.142 - Design testing.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... construction, a program for in situ testing of such features as borehole and shaft seals, backfill, and the... 10 Energy 2 2012-01-01 2012-01-01 false Design testing. 60.142 Section 60.142 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES...

  15. 10 CFR 60.142 - Design testing.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... construction, a program for in situ testing of such features as borehole and shaft seals, backfill, and the... 10 Energy 2 2014-01-01 2014-01-01 false Design testing. 60.142 Section 60.142 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES...

  16. 10 CFR 60.142 - Design testing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... construction, a program for in situ testing of such features as borehole and shaft seals, backfill, and the... 10 Energy 2 2011-01-01 2011-01-01 false Design testing. 60.142 Section 60.142 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES...

  17. Brine and Gas Flow Patterns Between Excavated Areas and Disturbed Rock Zone in the 1996 Performance Assessment for the Waste Isolation Pilot Plant for a Single Drilling Intrusion that Penetrates Repository and Castile Brine Reservoir

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ECONOMY,KATHLEEN M.; HELTON,JON CRAIG; VAUGHN,PALMER

    1999-10-01

    The Waste Isolation Pilot Plant (WIPP), which is located in southeastern New Mexico, is being developed for the geologic disposal of transuranic (TRU) waste by the U.S. Department of Energy (DOE). Waste disposal will take place in panels excavated in a bedded salt formation approximately 2000 ft (610 m) below the land surface. The BRAGFLO computer program, which solves a system of nonlinear partial differential equations for two-phase flow, was used to investigate brine and gas flow patterns in the vicinity of the repository for the 1996 WIPP performance assessment (PA). The present study examines the implications of modeling assumptions used in conjunction with BRAGFLO in the 1996 WIPP PA that affect brine and gas flow patterns involving two waste regions in the repository (i.e., a single waste panel and the remaining nine waste panels), a disturbed rock zone (DRZ) that lies just above and below these two regions, and a borehole that penetrates the single waste panel and a brine pocket below this panel. The two waste regions are separated by a panel closure. The following insights were obtained from this study. First, the impediment to flow between the two waste regions provided by the panel closure model is reduced due to the permeable and areally extensive nature of the DRZ adopted in the 1996 WIPP PA, which results in the DRZ becoming an effective pathway for gas and brine movement around the panel closures and thus between the two waste regions. Brine and gas flow between the two waste regions via the DRZ causes pressures between the two to equilibrate rapidly, with the result that processes in the intruded waste panel are not isolated from the rest of the repository. Second, the connection between intruded and unintruded waste panels provided by the DRZ increases the time required for repository pressures to equilibrate with the overlying and/or underlying units subsequent to a drilling intrusion. Third, the large and areally extensive DRZ void volume is a significant source of brine to the repository; this brine is consumed in the corrosion of iron and thus contributes to increased repository pressures. Fourth, the DRZ itself lowers repository pressures by providing storage for gas and access to additional gas storage in areas of the repository. Fifth, given the pathway that the DRZ provides for gas and brine to flow around the panel closures, isolation of the waste panels by the panel closures was not essential to compliance with the U.S. Environmental Protection Agency's regulations in the 1996 WIPP PA.

  18. Completeness and overlap in open access systems: Search engines, aggregate institutional repositories and physics-related open sources

    PubMed Central

    Wu, Tai-luan; Tseng, Ling-li

    2017-01-01

    This study examines the completeness and overlap of coverage in physics of six open access scholarly communication systems, including two search engines (Google Scholar and Microsoft Academic), two aggregate institutional repositories (OAIster and OpenDOAR), and two physics-related open sources (arXiv.org and Astrophysics Data System). The 2001–2013 Nobel Laureates in Physics served as the sample. Bibliographic records of their publications were retrieved and downloaded from each system, and a computer program was developed to perform the analytical tasks of sorting, comparison, elimination, aggregation and statistical calculations. Quantitative analyses and cross-referencing were performed to determine the completeness and overlap of the system coverage of the six open access systems. The results may enable scholars to select an appropriate open access system as an efficient scholarly communication channel, and academic institutions may build institutional repositories or independently create citation index systems in the future. Suggestions on indicators and tools for academic assessment are presented based on the comprehensiveness assessment of each system. PMID:29267327
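
    The abstract does not give implementation details of that program; the following is a minimal Python sketch, under assumed record formats, of the kind of normalization, deduplication and pairwise-overlap calculation it describes (the system names and titles in the demo are purely illustrative).

        # Minimal sketch (not the authors' code) of a coverage-overlap analysis:
        # normalize bibliographic titles per system, deduplicate, and compute the
        # fraction of each system's records found in every other system.
        import itertools
        import re

        def normalize(title):
            """Collapse case, punctuation and whitespace so near-duplicate titles match."""
            return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

        def overlap_matrix(systems):
            """Fraction of records in system A also found in system B (by normalized title)."""
            sets = {name: {normalize(t) for t in titles} for name, titles in systems.items()}
            result = {}
            for a, b in itertools.permutations(sets, 2):
                result[(a, b)] = len(sets[a] & sets[b]) / len(sets[a]) if sets[a] else 0.0
            return result

        if __name__ == "__main__":
            demo = {
                "arXiv": ["Topological insulators", "Graphene transport"],
                "ADS": ["Graphene Transport", "Neutron star mergers"],
            }
            for pair, fraction in overlap_matrix(demo).items():
                print(pair, f"{fraction:.2f}")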

  19. search GenBank: interactive orchestration and ad-hoc choreography of Web services in the exploration of the biomedical resources of the National Center For Biotechnology Information

    PubMed Central

    2013-01-01

    Background Due to the growing number of biomedical entries in data repositories of the National Center for Biotechnology Information (NCBI), it is difficult to collect, manage and process all of these entries in one place by third-party software developers without significant investment in hardware and software infrastructure, its maintenance and administration. Web services allow development of software applications that integrate in one place the functionality and processing logic of distributed software components, without integrating the components themselves and without integrating the resources to which they have access. This is achieved by appropriate orchestration or choreography of available Web services and their shared functions. After the successful application of Web services in the business sector, this technology can now be used to build composite software tools that are oriented towards biomedical data processing. Results We have developed a new tool for efficient and dynamic data exploration in GenBank and other NCBI databases. A dedicated search GenBank system makes use of NCBI Web services and a package of Entrez Programming Utilities (eUtils) in order to provide extended searching capabilities in NCBI data repositories. In search GenBank users can use one of the three exploration paths: simple data searching based on the specified user’s query, advanced data searching based on the specified user’s query, and advanced data exploration with the use of macros. search GenBank orchestrates calls of particular tools available through the NCBI Web service providing requested functionality, while users interactively browse selected records in search GenBank and traverse between NCBI databases using available links. On the other hand, by building macros in the advanced data exploration mode, users create choreographies of eUtils calls, which can lead to the automatic discovery of related data in the specified databases. Conclusions search GenBank extends standard capabilities of the NCBI Entrez search engine in querying biomedical databases. The possibility of creating and saving macros in the search GenBank is a unique feature and has a great potential. The potential will further grow in the future with the increasing density of networks of relationships between data stored in particular databases. search GenBank is available for public use at http://sgb.biotools.pl/. PMID:23452691

  20. search GenBank: interactive orchestration and ad-hoc choreography of Web services in the exploration of the biomedical resources of the National Center For Biotechnology Information.

    PubMed

    Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Siążnik, Artur

    2013-03-01

    Due to the growing number of biomedical entries in data repositories of the National Center for Biotechnology Information (NCBI), it is difficult to collect, manage and process all of these entries in one place by third-party software developers without significant investment in hardware and software infrastructure, its maintenance and administration. Web services allow development of software applications that integrate in one place the functionality and processing logic of distributed software components, without integrating the components themselves and without integrating the resources to which they have access. This is achieved by appropriate orchestration or choreography of available Web services and their shared functions. After the successful application of Web services in the business sector, this technology can now be used to build composite software tools that are oriented towards biomedical data processing. We have developed a new tool for efficient and dynamic data exploration in GenBank and other NCBI databases. A dedicated search GenBank system makes use of NCBI Web services and a package of Entrez Programming Utilities (eUtils) in order to provide extended searching capabilities in NCBI data repositories. In search GenBank users can use one of the three exploration paths: simple data searching based on the specified user's query, advanced data searching based on the specified user's query, and advanced data exploration with the use of macros. search GenBank orchestrates calls of particular tools available through the NCBI Web service providing requested functionality, while users interactively browse selected records in search GenBank and traverse between NCBI databases using available links. On the other hand, by building macros in the advanced data exploration mode, users create choreographies of eUtils calls, which can lead to the automatic discovery of related data in the specified databases. search GenBank extends standard capabilities of the NCBI Entrez search engine in querying biomedical databases. The possibility of creating and saving macros in the search GenBank is a unique feature and has a great potential. The potential will further grow in the future with the increasing density of networks of relationships between data stored in particular databases. search GenBank is available for public use at http://sgb.biotools.pl/.
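
    For readers unfamiliar with the underlying services, the Python sketch below chains two public Entrez Programming Utilities (eUtils) calls, ESearch followed by ESummary; it is a generic illustration of the kind of orchestration search GenBank automates, not the search GenBank implementation itself.

        # Generic eUtils orchestration sketch: find IDs with ESearch, then fetch
        # document summaries for those IDs with ESummary.
        import requests

        EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

        def search_then_summarize(db, term, retmax=5):
            # Step 1: ESearch returns the IDs matching the query in the chosen database.
            ids = requests.get(
                f"{EUTILS}/esearch.fcgi",
                params={"db": db, "term": term, "retmax": retmax, "retmode": "json"},
                timeout=30,
            ).json()["esearchresult"]["idlist"]
            if not ids:
                return []
            # Step 2: ESummary fetches short document summaries for those IDs.
            summaries = requests.get(
                f"{EUTILS}/esummary.fcgi",
                params={"db": db, "id": ",".join(ids), "retmode": "json"},
                timeout=30,
            ).json()["result"]
            return [summaries[i] for i in ids]

        if __name__ == "__main__":
            for doc in search_then_summarize("nucleotide", "BRCA1[Gene] AND human[Organism]"):
                print(doc.get("caption"), "-", doc.get("title"))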

  1. French Geological Repository Project for High Level and Long-Lived Waste: Scientific Programme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landais, P.; Lebon, P.; Ouzounian, G.

    2008-07-01

    The feasibility study presented in the Dossier 2005 Argile set out to evaluate the conditions for building, operating and managing a reversible disposal facility. The research was directed at demonstrating a potential for confining long-lived radioactive waste in a deep clay formation by establishing the feasibility of the disposal principle. Results were sufficiently convincing, and a Planning Act was passed on 28 June 2006. A decision in principle has been taken to dispose of intermediate and high level long-lived radioactive waste in a geological repository. An application file for a license to construct a disposal facility is requested by the end of 2014, and its commissioning is planned for 2025. Based on previous results as well as on recommendations made by various Dossier 2005 evaluators, a new scientific programme has been defined giving details of what will be covered over the 2006-2015 period. Particular emphasis is placed on consolidating scientific data, increasing understanding of certain mechanisms and using a scientific and technical integration approach. It aims at integrating scientific developments and engineering advances. The scientific work envisaged beyond 2006 has the benefit of a unique context, which is direct access to the geological medium over long timescales. It naturally extends the research carried out to date, and incorporates additional investigations of the geological medium and the preparation of demonstration work, especially through full-scale tests. Results will aim at improving the representation of repository evolution over time, extracting the relevant parameters for monitoring during the reversibility phases, reducing the parametric uncertainties and enhancing the robustness of models for performance calculations and safety analyses. The structure and main orientations of the ongoing scientific programme are presented. (author)

  2. Storage, transportation and disposal system for used nuclear fuel assemblies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scaglione, John M.; Wagner, John C.

    An integrated storage, transportation and disposal system for used fuel assemblies is provided. The system includes a plurality of sealed canisters and a cask sized to receive the sealed canisters in side by side relationship. The plurality of sealed canisters include an internal basket structure to receive a plurality of used fuel assemblies. The internal basket structure includes a plurality of radiation-absorbing panels and a plurality of hemispherical ribs generally perpendicular to the canister sidewall. The sealed canisters are received within the cask for storage and transportation and are removed from the cask for disposal at a designated repository. The system of the present invention allows the handling of sealed canisters separately or collectively, while allowing storage and transportation of high burnup fuel and damaged fuel to the designated repository.

  3. Facilitating Cohort Discovery by Enhancing Ontology Exploration, Query Management and Query Sharing for Large Clinical Data Repositories.

    PubMed

    Tao, Shiqiang; Cui, Licong; Wu, Xi; Zhang, Guo-Qiang

    2017-01-01

    To help researchers better access clinical data, we developed a prototype query engine called DataSphere for exploring large-scale integrated clinical data repositories. DataSphere expedites data importing using a NoSQL data management system and dynamically renders its user interface for concept-based querying tasks. DataSphere provides an interactive query-building interface together with query translation and optimization strategies, which enable users to build and execute queries effectively and efficiently. We successfully loaded a dataset of one million patients for University of Kentucky (UK) Healthcare into DataSphere with more than 300 million clinical data records. We evaluated DataSphere by comparing it with an instance of i2b2 deployed at UK Healthcare, demonstrating that DataSphere provides enhanced user experience for both query building and execution.

  4. Facilitating Cohort Discovery by Enhancing Ontology Exploration, Query Management and Query Sharing for Large Clinical Data Repositories

    PubMed Central

    Tao, Shiqiang; Cui, Licong; Wu, Xi; Zhang, Guo-Qiang

    2017-01-01

    To help researchers better access clinical data, we developed a prototype query engine called DataSphere for exploring large-scale integrated clinical data repositories. DataSphere expedites data importing using a NoSQL data management system and dynamically renders its user interface for concept-based querying tasks. DataSphere provides an interactive query-building interface together with query translation and optimization strategies, which enable users to build and execute queries effectively and efficiently. We successfully loaded a dataset of one million patients for University of Kentucky (UK) Healthcare into DataSphere with more than 300 million clinical data records. We evaluated DataSphere by comparing it with an instance of i2b2 deployed at UK Healthcare, demonstrating that DataSphere provides enhanced user experience for both query building and execution. PMID:29854239
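
    The abstracts above do not describe DataSphere's schema or storage backend; the hypothetical Python sketch below shows what a concept-based cohort query against a NoSQL document store could look like (the collection name, field names and diagnosis codes are invented for illustration).

        # Hypothetical concept-based cohort query against a MongoDB-style store.
        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")
        records = client["clinical_repo"]["observations"]   # assumed collection layout

        # Distinct patients with an (illustrative) diabetes code recorded after 2015,
        # analogous to the queries a cohort-discovery interface would generate.
        patient_ids = records.distinct(
            "patient_id",
            {
                "concept_code": {"$in": ["E11.9", "E11.65"]},   # illustrative ICD-10 codes
                "observation_date": {"$gte": "2016-01-01"},
            },
        )
        print(f"{len(patient_ids)} patients matched the cohort definition")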

  5. An infrastructure for ontology-based information systems in biomedicine: RICORDO case study.

    PubMed

    Wimalaratne, Sarala M; Grenon, Pierre; Hoehndorf, Robert; Gkoutos, Georgios V; de Bono, Bernard

    2012-02-01

    The article presents an infrastructure for supporting the semantic interoperability of biomedical resources based on the management (storing and inference-based querying) of their ontology-based annotations. This infrastructure consists of: (i) a repository to store and query ontology-based annotations; (ii) a knowledge base server with an inference engine to support the storage of and reasoning over ontologies used in the annotation of resources; (iii) a set of applications and services allowing interaction with the integrated repository and knowledge base. The infrastructure is being prototyped, developed and evaluated by the RICORDO project in support of the knowledge management of biomedical resources, including physiology and pharmacology models and associated clinical data. The RICORDO toolkit and its source code are freely available from http://ricordo.eu/relevant-resources. sarala@ebi.ac.uk.
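
    As an illustration of querying such a store of ontology-based annotations, the Python sketch below issues a SPARQL query through SPARQLWrapper; the endpoint URL, annotation predicate and filter are assumptions made for the example, not the actual RICORDO deployment.

        # Sketch of a SPARQL query over ontology-based annotations (hypothetical endpoint).
        from SPARQLWrapper import SPARQLWrapper, JSON

        endpoint = SPARQLWrapper("http://example.org/ricordo/sparql")   # placeholder URL
        endpoint.setQuery("""
            PREFIX dcterms: <http://purl.org/dc/terms/>
            SELECT ?resource ?ontologyTerm
            WHERE {
                ?resource dcterms:subject ?ontologyTerm .      # assumed annotation predicate
                FILTER(CONTAINS(STR(?ontologyTerm), "FMA"))    # e.g. anatomy ontology terms
            }
            LIMIT 10
        """)
        endpoint.setReturnFormat(JSON)

        for row in endpoint.query().convert()["results"]["bindings"]:
            print(row["resource"]["value"], "->", row["ontologyTerm"]["value"])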

  6. InvestigationOrganizer: The Development and Testing of a Web-based Tool to Support Mishap Investigations

    NASA Technical Reports Server (NTRS)

    Carvalho, Robert F.; Williams, James; Keller, Richard; Sturken, Ian; Panontin, Tina

    2004-01-01

    InvestigationOrganizer (IO) is a collaborative web-based system designed to support the conduct of mishap investigations. IO provides a common repository for a wide range of mishap related information, and allows investigators to make explicit, shared, and meaningful links between evidence, causal models, findings and recommendations. It integrates the functionality of a database, a common document repository, a semantic knowledge network, a rule-based inference engine, and causal modeling and visualization. Thus far, IO has been used to support four mishap investigations within NASA, ranging from a small property damage case to the loss of the Space Shuttle Columbia. This paper describes how the functionality of IO supports mishap investigations and the lessons learned from the experience of supporting two of the NASA mishap investigations: the Columbia Accident Investigation and the CONTOUR Loss Investigation.

  7. moocRP: Enabling Open Learning Analytics with an Open Source Platform for Data Distribution, Analysis, and Visualization

    ERIC Educational Resources Information Center

    Pardos, Zachary A.; Whyte, Anthony; Kao, Kevin

    2016-01-01

    In this paper, we address issues of transparency, modularity, and privacy with the introduction of an open source, web-based data repository and analysis tool tailored to the Massive Open Online Course community. The tool integrates data request/authorization and distribution workflow features as well as provides a simple analytics module upload…

  8. An Architecture Based on Linked Data Technologies for the Integration and Reuse of OER in MOOCs Context

    ERIC Educational Resources Information Center

    Piedra, Nelson; Chicaiza, Janneth Alexandra; López, Jorge; Tovar, Edmundo

    2014-01-01

    The Linked Data initiative is considered as one of the most effective alternatives for creating global shared information spaces, it has become an interesting approach for discovering and enriching open educational resources data, as well as achieving semantic interoperability and re-use between multiple OER repositories. The notion of Linked Data…

  9. Genome Variation Map: a data repository of genome variations in BIG Data Center

    PubMed Central

    Tian, Dongmei; Li, Cuiping; Tang, Bixia; Dong, Lili; Xiao, Jingfa; Bao, Yiming; Zhao, Wenming; He, Hang

    2018-01-01

    The Genome Variation Map (GVM; http://bigd.big.ac.cn/gvm/) is a public data repository of genome variations. As a core resource in the BIG Data Center, Beijing Institute of Genomics, Chinese Academy of Sciences, GVM is dedicated to collecting, integrating and visualizing genome variations for a wide range of species; it accepts submissions of different types of genome variations from all over the world and provides free open access to all publicly available data in support of worldwide research activities. Unlike existing related databases, GVM features integration of a large number of genome variations for a broad diversity of species including human, cultivated plants and domesticated animals. Specifically, the current implementation of GVM not only houses a total of ∼4.9 billion variants for 19 species including chicken, dog, goat, human, poplar, rice and tomato, but also incorporates 8669 individual genotypes and 13,262 manually curated high-quality genotype-to-phenotype associations for non-human species. In addition, GVM provides friendly, intuitive web interfaces for data submission, browse, search and visualization. Collectively, GVM serves as an important resource for archiving genomic variation data, helpful for better understanding population genetic diversity and deciphering complex mechanisms associated with different phenotypes. PMID:29069473

  10. Acute Kidney Injury and Big Data.

    PubMed

    Sutherland, Scott M; Goldstein, Stuart L; Bagshaw, Sean M

    2018-01-01

    The recognition of a standardized, consensus definition for acute kidney injury (AKI) has been an important milestone in critical care nephrology, which has facilitated innovation in prevention, quality of care, and outcomes research among the growing population of hospitalized patients susceptible to AKI. Concomitantly, there have been substantial advances in "big data" technologies in medicine, including electronic health records (EHR), data registries and repositories, and data management and analytic methodologies. EHRs are increasingly being adopted, clinical informatics is constantly being refined, and the field of EHR-enabled care improvement and research has grown exponentially. While these fields have matured independently, integrating the two has the potential to redefine and integrate AKI-related care and research. AKI is an ideal condition to exploit big data health care innovation for several reasons: AKI is common, increasingly encountered in hospitalized settings, imposes meaningful risk for adverse events and poor outcomes, has incremental cost implications, and has been plagued by suboptimal quality of care. In this concise review, we discuss the potential applications of big data technologies, particularly modern EHR platforms and health data repositories, to transform our capacity for AKI prediction, detection, and care quality. © 2018 S. Karger AG, Basel.

  11. Save medical personnel's time by improved user interfaces.

    PubMed

    Kindler, H

    1997-01-01

    Common objectives in the industrial countries are the improvement of quality of care, clinical effectiveness, and cost control. Cost control, in particular, has been addressed through the introduction of case mix systems for reimbursement by social-security institutions. More data is required to enable quality improvement and increases in clinical effectiveness, and for juridical reasons. At first glance, this documentation effort appears to contradict cost reduction. However, integrated services for resource management based on better documentation should help to reduce costs. The clerical effort for documentation should be decreased by providing a co-operative working environment for healthcare professionals applying sophisticated human-computer interface technology. Additional services, e.g., automatic report generation, increase the efficiency of healthcare personnel. Modelling the medical work flow forms an essential prerequisite for integrated resource management services and for co-operative user interfaces. A user interface aware of the work flow provides intelligent assistance by offering the appropriate tools at the right moment. Nowadays there is a trend toward client/server systems with relational or object-oriented databases as the repository. The work flows used for controlling purposes and to steer the user interfaces must be represented in the repository.

  12. The tropical germplasm repository program at the USDA-ARS, Tropical Agriculture Research Station, Mayaguez, Puerto Rico

    USDA-ARS?s Scientific Manuscript database

    The USDA-ARS Tropical Agriculture Research Station is the only research entity within the National Plant Germplasm system in the insular Caribbean region. It houses germplasm collections of cultivated tropical/subtropical germplasm of bananas/plantains, cacao, mamey sapote, sapodilla, Spanish lime,...

  13. USAF Hearing Conservation Program, DOEHRS Data Repository Annual Report: CY2014

    DTIC Science & Technology

    2016-02-01

    ...tinnitus. The goal was to align the DOEHRS-HC DR data with DoD Hearing Conservation and Readiness Working Group initiatives and Government...Accountability Office recommendations [3]. The data collected from the standardized tinnitus questions are projected to be mined by the DoD in future studies

  14. At the Creation: Chaos, Control, and Automation--Commercial Software Development for Archives.

    ERIC Educational Resources Information Center

    Drr, W. Theodore

    1988-01-01

    An approach to the design of flexible text-based management systems for archives includes tiers for repository, software, and user management systems. Each tier has four layers--objective, program, result, and interface. Traps awaiting software development companies involve the market, competition, operations, and finance. (10 references) (MES)

  15. Jean C. Zenklusen, M.S., Ph.D., Discusses the NCI Genomics Data Commons at AACR 2014 - TCGA

    Cancer.gov

    At the AACR 2014 meeting, Dr. Jean C. Zenklusen, Director of The Cancer Genome Atlas Program Office, highlights the Genomics Data Commons, a harmonized data repository that will allow simultaneous access and analysis of NCI genomics data, including The Ca

  16. 75 FR 35087 - Violent Criminal Apprehension Program; Agency Information Collection Activities: Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-21

    ... in 1985, ViCAP serves as the national repository for violent crimes; specifically: Homicides and attempted homicides, especially those that (a) involve an abduction, (b) are apparently random, motiveless... homicide. Comprehensive case information submitted to ViCAP is maintained in the ViCAP Web National Crime...

  17. 75 FR 52027 - Violent Criminal Apprehension Program: Agency Information Collection Activities: Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-24

    ... 1985, ViCAP serves as the national repository for violent crimes; specifically: Homicides and attempted homicides, especially those that (a) involve an abduction, (b) are apparently random, motiveless, or... missing. Unidentified human remains, where the manner of death is known or suspected to be homicide...

  18. System Description and Status Report: California Education Information System.

    ERIC Educational Resources Information Center

    California State Dept. of Education, Sacramento.

    The California Education Information System (CEIS) consists of two subsystems of computer programs designed to process business and pupil data for local school districts. Creating and maintaining records concerning the students in the schools, the pupil subsystem provides for a central repository of school district identification information and a…

  19. 76 FR 72426 - Agency Information Collection Activities: Submission for Review; Information Collection Request...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-23

    ... (DHS), Science and Technology, Protected Repository for the Defense of Infrastructure Against Cyber... the Defense of Infrastructure against Cyber Threats (PREDICT) program, and is a revision of a... operational data for use in cyber security research and development through the establishment of distributed...

  20. Defense Technical Information Center (DTIC) - Its role in the USAF Scientific and Technical Information Program

    NASA Technical Reports Server (NTRS)

    Kuhn, Allan D.

    1991-01-01

    The Defense Technical Information Center (DTIC), the central repository for DOD scientific and technical information concerning studies and research and engineering efforts, is discussed. The present makeup of DTIC is described and its functions in producing technical reports and technical report bibliographies are examined. DTIC's outreach services are reviewed, as are its DTIC information and technology transfer programs. DTIC's plans for the year 2000 and its relation to the mission of the U.S. Air Force, including the Air Force's STINFO program, are addressed.

  1. The NSF Arctic Data Center: Leveraging the DataONE Federation to Build a Sustainable Archive for the NSF Arctic Research Community

    NASA Astrophysics Data System (ADS)

    Budden, A. E.; Arzayus, K. M.; Baker-Yeboah, S.; Casey, K. S.; Dozier, J.; Jones, C. S.; Jones, M. B.; Schildhauer, M.; Walker, L.

    2016-12-01

    The newly established NSF Arctic Data Center plays a critical support role in archiving and curating the data and software generated by Arctic researchers from diverse disciplines. The Arctic community, comprising Earth science, archaeology, geography, anthropology, and other social science researchers, is supported through data curation services and domain-agnostic tools and infrastructure, ensuring data are accessible in the most transparent and usable way possible. This interoperability across diverse disciplines within the Arctic community facilitates collaborative research and is mirrored by interoperability between the Arctic Data Center infrastructure and other large-scale cyberinfrastructure initiatives. The Arctic Data Center leverages the DataONE federation to standardize access to and replication of data and metadata to other repositories, specifically NOAA's National Centers for Environmental Information (NCEI). This approach promotes long-term preservation of the data and metadata, as well as opening the door for other data repositories to leverage this replication infrastructure with NCEI and other DataONE member repositories. The Arctic Data Center uses rich, detailed metadata following widely recognized standards. In particular, measurement-level and provenance metadata provide scientists the details necessary to integrate datasets across studies and across repositories while enabling a full understanding of the provenance of data used in the system. The Arctic Data Center gains this deep metadata and provenance support by simply adopting DataONE services, which results in significant efficiency gains by eliminating the need to develop systems de novo. Similarly, the advanced search tool developed by the Knowledge Network for Biocomplexity and extended for data submission by the Arctic Data Center can be used by other DataONE-compliant repositories without further development. By standardizing interfaces and leveraging the DataONE federation, the Arctic Data Center has advanced rapidly and can itself contribute to raising the capabilities of all members of the federation.
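
    As an illustration of the standardized interfaces mentioned above, the Python sketch below runs a search against the DataONE Solr query endpoint; the endpoint path and index field names are given as assumptions to verify against the DataONE documentation before use.

        # Sketch of a search against a DataONE coordinating node's Solr index.
        import requests

        QUERY_URL = "https://cn.dataone.org/cn/v2/query/solr/"   # assumed endpoint path

        params = {
            "q": 'title:"sea ice" AND abstract:Arctic',   # free-text Solr query
            "fl": "identifier,title,datasource",          # fields assumed to exist in the index
            "rows": 5,
            "wt": "json",
        }
        resp = requests.get(QUERY_URL, params=params, timeout=30)
        resp.raise_for_status()

        for doc in resp.json()["response"]["docs"]:
            print(doc.get("identifier"), "-", doc.get("title"))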

  2. The SpeX Prism Library for Ultracool Dwarfs: A Resource for Stellar, Exoplanet and Galactic Science and Student-Led Research

    NASA Astrophysics Data System (ADS)

    Burgasser, Adam

    The NASA Infrared Telescope Facility's (IRTF) SpeX spectrograph has been an essential tool in the discovery and characterization of ultracool dwarf (UCD) stars, brown dwarfs and exoplanets. Over ten years of SpeX data have been collected on these sources, and a repository of low-resolution (R ~100) SpeX prism spectra has been maintained by the PI at the SpeX Prism Spectral Libraries website since 2008. As the largest existing collection of NIR UCD spectra, this repository has facilitated a broad range of investigations in UCD, exoplanet, Galactic and extragalactic science, contributing to over 100 publications in the past 6 years. However, this repository remains highly incomplete, has not been uniformly calibrated, lacks sufficient contextual data for observations and sources, and most importantly provides no data visualization or analysis tools for the user. To fully realize the scientific potential of these data for community research, we propose a two-year program to (1) calibrate and expand existing repository and archival data, and make it virtual-observatory compliant; (2) serve the data through a searchable web archive with basic visualization tools; and (3) develop and distribute an open-source, Python-based analysis toolkit for users to analyze the data. These resources will be generated through an innovative, student-centered research model, with undergraduate and graduate students building and validating the analysis tools through carefully designed coding challenges and research validation activities. The resulting data archive, the SpeX Prism Library, will be a legacy resource for IRTF and SpeX, and will facilitate numerous investigations using current and future NASA capabilities. These include deep/wide surveys of UCDs to measure Galactic structure and chemical evolution, and probe UCD populations in satellite galaxies (e.g., JWST, WFIRST); characterization of directly imaged exoplanet spectra (e.g., FINESSE), and development of low-temperature theoretical models of UCD and exoplanet atmospheres. Our program will also serve to validate the IRTF data archive during its development, by reducing and disseminating non-proprietary archival observations of UCDs to the community. The proposed program directly addresses NASA's strategic goals of exploring the origin and evolution of stars and planets that make up our universe, and discovering and studying planets around other stars.

  3. Oceanographic Research Capacity in the US Virgin Islands

    NASA Astrophysics Data System (ADS)

    Jobsis, P.; Habtes, S. Y.

    2016-02-01

    The University of the Virgin Islands (UVI), a small HBCU with campuses on both St Thomas and St Croix, has a growing marine science department that is quickly increasing its capacity for oceanographic monitoring and research due to VI-EPSCoR (National Science Foundation's Experimental Program to Stimulate Competitive Research in the Virgin Islands) and associations with CariCOOS (the Caribbean Coastal Ocean Observing System). CariCOOS is managed through the University of Puerto Rico Mayaguez, with funding from NOAA's Integrated Ocean Observing System (IOOS). Over the past five years two oceanographic data buoys have been deployed increasing the real-time oceanographic data available for the northeastern Caribbean. In addition, researchers at UVI have deployed ADCPs and conducted CTD casts at relevant research sites as part of routine territorial monitoring programs. With VI-EPSCoR funding UVI has developed an Institute for Geocomputational Analysis and Statistic (GeoCAS) to conduct geospatial analysis and to act as a data repository and hosting/serving center for research, environmental and other relevant data. Much of the oceanographic data is available at www.caricoos.org and www.geocas.uvi.edu. As the marine research infrastructure at UVI continues to grow, the oceanographic and marine biology research program at the University's Center for Marine and Environmental Studies will continue to expand. This will benefit not only UVI researchers but also any researcher with interests in this region of the Caribbean.

  4. ACToR Chemical Structure processing using Open Source ...

    EPA Pesticide Factsheets

    ACToR (Aggregated Computational Toxicology Resource) is a centralized database repository developed by the National Center for Computational Toxicology (NCCT) at the U.S. Environmental Protection Agency (EPA). Free and open source tools were used to compile toxicity data from over 1,950 public sources. ACToR contains chemical structure information and toxicological data for over 558,000 unique chemicals. The database primarily includes data from NCCT research programs, in vivo toxicity data from ToxRef, human exposure data from ExpoCast, high-throughput screening data from ToxCast and high quality chemical structure information from the EPA DSSTox program. The DSSTox database is a chemical structure inventory for the NCCT programs and currently has about 16,000 unique structures. Included are also data from PubChem, ChemSpider, USDA, FDA, NIH and several other public data sources. ACToR has been a resource to various international and national research groups. Most of our recent efforts on ACToR are focused on improving the structural identifiers and Physico-Chemical properties of the chemicals in the database. Organizing this huge collection of data and improving the chemical structure quality of the database has posed some major challenges. Workflows have been developed to process structures, calculate chemical properties and identify relationships between CAS numbers. The Structure processing workflow integrates web services (PubChem and NIH NCI Cactus) to d
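
    The EPA workflow itself is not reproduced here; as a hedged stand-in for the structure-resolution step it describes, the Python sketch below looks up an example CAS number in the public PubChem PUG REST service and the NCI Cactus resolver and prints the SMILES each returns.

        # Illustrative CAS-number-to-structure lookup via two public resolvers.
        import requests

        def pubchem_smiles(identifier):
            url = ("https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/name/"
                   f"{identifier}/property/CanonicalSMILES/JSON")
            r = requests.get(url, timeout=30)
            if r.status_code != 200:
                return None
            return r.json()["PropertyTable"]["Properties"][0]["CanonicalSMILES"]

        def cactus_smiles(identifier):
            url = f"https://cactus.nci.nih.gov/chemical/structure/{identifier}/smiles"
            r = requests.get(url, timeout=30)
            return r.text.strip() if r.status_code == 200 else None

        if __name__ == "__main__":
            cas = "50-00-0"   # formaldehyde, used purely as an example identifier
            print("PubChem:", pubchem_smiles(cas))
            print("Cactus: ", cactus_smiles(cas))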

  5. Determination of Uncertainties for +III and +IV Actinide Solubilities in the WIPP Geochemistry Model for the 2009 Compliance Recertification Application

    NASA Astrophysics Data System (ADS)

    Ismail, A. E.; Xiong, Y.; Nowak, E. J.; Brush, L. H.

    2009-12-01

    The Waste Isolation Pilot Plant (WIPP) is a U.S. Department of Energy (DOE) repository in southeast New Mexico for defense-related transuranic (TRU) waste. Every five years, the DOE is required to submit an application to the Environmental Protection Agency (EPA) demonstrating the WIPP’s continuing compliance with the applicable EPA regulations governing the repository. Part of this recertification effort involves a performance assessment—a probabilistic evaluation of the repository performance with respect to regulatory limits on the amount of releases from the repository to the accessible environment. One of the models used as part of the performance assessment process is a geochemistry model, which predicts solubilities of the radionuclides in the brines that may enter the repository in the different scenarios considered by the performance assessment. The dissolved actinide source term comprises actinide solubilities, which are input parameters for modeling the transport of radionuclides as a result of brine flow through and from the repository. During a performance assessment, the solubilities are modeled as the product of a “base” solubility determined from calculations based on the chemical conditions expected in the repository, and an uncertainty factor that describes the potential deviations of the model from expected behavior. We will focus here on a discussion of the uncertainties. To compute a cumulative distribution function (CDF) for the uncertainties, we compare published, experimentally measured solubility data to predictions made using the established WIPP geochemistry model. The differences between the solubilities observed for a given experiment and the calculated solubilities from the model are used to form the overall CDF, which is then sampled as part of the performance assessment. We will discuss the methodology used to update the CDF’s for the +III actinides, obtained from data for Nd, Am, and Cm, and the +IV actinides, obtained from data for Th, and present results for the calculations of the updated CDF’s. We compare the CDF’s to the distributions computed for the previous recertification, and discuss the potential impact of the changes on the geochemistry model. This research is funded by WIPP programs administered by the U.S. Department of Energy. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000.
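
    As a hedged illustration of the uncertainty-factor construction described above, the Python sketch below builds an empirical CDF from placeholder measured-minus-predicted solubility differences (assumed to be in log10 units) and samples it by inverse transform, as a performance assessment would; none of the numbers are WIPP data.

        # Empirical CDF of model-vs-measurement differences, sampled by inverse transform.
        import numpy as np

        rng = np.random.default_rng(42)

        # Placeholder log10(measured) - log10(predicted) differences for one actinide group.
        log_differences = np.array([-0.8, -0.3, -0.1, 0.0, 0.2, 0.4, 0.9, 1.1])

        # Empirical CDF: sorted values versus cumulative probability.
        values = np.sort(log_differences)
        probabilities = np.arange(1, len(values) + 1) / len(values)

        # Sample the CDF (linear interpolation between the empirical points).
        uniform_draws = rng.uniform(size=1000)
        sampled_log_uncertainty = np.interp(uniform_draws, probabilities, values)

        # Multiplicative uncertainty factors applied to a base solubility.
        uncertainty_factors = 10.0 ** sampled_log_uncertainty
        print("median uncertainty factor:", round(float(np.median(uncertainty_factors)), 3))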

  6. Central Satellite Data Repository Supporting Research and Development

    NASA Astrophysics Data System (ADS)

    Han, W.; Brust, J.

    2015-12-01

    Near real-time satellite data is critical to many research and development activities of atmosphere, land, and ocean processes. Acquiring and managing huge volumes of satellite data without (or with less) latency in an organization is always a challenge in the big data age. An organization level data repository is a practical solution to meeting this challenge. The STAR (Center for Satellite Applications and Research of NOAA) Central Data Repository (SCDR) is a scalable, stable, and reliable repository to acquire, manipulate, and disseminate various types of satellite data in an effective and efficient manner. SCDR collects more than 200 data products, which are commonly used by multiple groups in STAR, from NOAA, GOES, Metop, Suomi NPP, Sentinel, Himawari, and other satellites. The processes of acquisition, recording, retrieval, organization, and dissemination are performed in parallel. Multiple data access interfaces, like FTP, FTPS, HTTP, HTTPS, and RESTful, are supported in the SCDR to obtain satellite data from their providers through high speed internet. The original satellite data in various raster formats can be parsed in the respective adapter to retrieve data information. The data information is ingested to the corresponding partitioned tables in the central database. All files are distributed equally on the Network File System (NFS) disks to balance the disk load. SCDR provides consistent interfaces (including Perl utility, portal, and RESTful Web service) to locate files of interest easily and quickly and access them directly by over 200 compute servers via NFS. SCDR greatly improves collection and integration of near real-time satellite data, addresses satellite data requirements of scientists and researchers, and facilitates their primary research and development activities.

  7. De-identification of Medical Images with Retention of Scientific Research Value

    PubMed Central

    Maffitt, David R.; Smith, Kirk E.; Kirby, Justin S.; Clark, Kenneth W.; Freymann, John B.; Vendt, Bruce A.; Tarbox, Lawrence R.; Prior, Fred W.

    2015-01-01

    Online public repositories for sharing research data allow investigators to validate existing research or perform secondary research without the expense of collecting new data. Patient data made publicly available through such repositories may constitute a breach of personally identifiable information if not properly de-identified. Imaging data are especially at risk because some intricacies of the Digital Imaging and Communications in Medicine (DICOM) format are not widely understood by researchers. If imaging data still containing protected health information (PHI) were released through a public repository, a number of different parties could be held liable, including the original researcher who collected and submitted the data, the original researcher’s institution, and the organization managing the repository. To minimize these risks through proper de-identification of image data, one must understand what PHI exists and where that PHI resides, and one must have the tools to remove PHI without compromising the scientific integrity of the data. DICOM public elements are defined by the DICOM Standard. Modality vendors use private elements to encode acquisition parameters that are not yet defined by the DICOM Standard, or the vendor may not have updated an existing software product after DICOM defined new public elements. Because private elements are not standardized, a common de-identification practice is to delete all private elements, removing scientifically useful data as well as PHI. Researchers and publishers of imaging data can use the tools and process described in this article to de-identify DICOM images according to current best practices. ©RSNA, 2015 PMID:25969931
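
    As a minimal illustration of the practice discussed above, the Python sketch below uses pydicom to strip private elements and blank a few obvious PHI-bearing public elements; a production de-identification workflow, such as one following the DICOM confidentiality profiles, handles far more elements than shown here.

        # Minimal DICOM de-identification sketch (illustrative subset of elements only).
        import pydicom

        def deidentify(in_path, out_path):
            ds = pydicom.dcmread(in_path)

            # Drop all private (vendor-specific) elements, the blunt approach the
            # article notes can also discard scientifically useful parameters.
            ds.remove_private_tags()

            # Blank a few obvious PHI-bearing public elements.
            for keyword in ("PatientName", "PatientID", "PatientBirthDate",
                            "ReferringPhysicianName", "InstitutionName"):
                if hasattr(ds, keyword):
                    setattr(ds, keyword, "")

            ds.save_as(out_path)

        # deidentify("input.dcm", "deidentified.dcm")   # example usage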

  8. Enthalpies of formation of polyhalite: A mineral relevant to salt repository

    DOE PAGES

    Guo, Xiaofeng; Xu, Hongwu

    2017-06-02

    Polyhalite is an important mineral coexisting with halite in salt repositories for nuclear waste disposal, such as the Waste Isolation Pilot Plant (WIPP) in Carlsbad, New Mexico. The thermal stability of this mineral is key to evaluating the long-term integrity of a salt repository, as water may be released by thermal decomposition of polyhalite. Previous studies on the structural evolution of polyhalite at elevated temperatures laid the basis for detailed calorimetric measurements. Using high-temperature oxide-melt drop-solution calorimetry at 975 K with sodium molybdate as the solvent, we have determined the standard enthalpies of formation from constituent sulfates (ΔH°f,sul), oxides (ΔH°f,ox) and elements (ΔH°f,ele) of a polyhalite sample with the composition K2Ca2Mg(SO4)4·1.95H2O from the Salado formation at the WIPP site. The obtained results are: ΔH°f,sul = -152.5 ± 5.3 kJ/mol, ΔH°f,ox = -1926.1 ± 10.5 kJ/mol, and ΔH°f,ele = -6301.2 ± 9.9 kJ/mol. Furthermore, based on the estimated formation entropies of polyhalite, its standard Gibbs free energy of formation has been derived to be in the range of -5715.3 ± 9.9 kJ/mol to -5739.3 ± 9.9 kJ/mol. These thermodynamic properties provide fundamental parameters for modeling the stability behavior of polyhalite in salt repositories.
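
    As a quick consistency check on the derivation described above, the short Python sketch below back-calculates the formation entropy implied by the reported enthalpy and Gibbs free energy range using the standard relation ΔG°f = ΔH°f - T·ΔS°f; the temperature T = 298.15 K is an assumption, since the abstract does not state the value used.

        # Back-calculate the formation entropy implied by the reported values.
        T = 298.15                        # K, assumed standard temperature
        dH_f = -6301.2                    # kJ/mol, enthalpy of formation from the elements
        dG_f_range = (-5739.3, -5715.3)   # kJ/mol, derived Gibbs free energy range

        for dG_f in dG_f_range:
            dS_f = (dH_f - dG_f) / T      # kJ/(mol*K)
            print(f"dG_f = {dG_f} kJ/mol  ->  implied dS_f = {dS_f * 1000:.0f} J/(mol*K)")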

  9. EuroPhenome and EMPReSS: online mouse phenotyping resource

    PubMed Central

    Mallon, Ann-Marie; Hancock, John M.

    2008-01-01

    EuroPhenome (http://www.europhenome.org) and EMPReSS (http://empress.har.mrc.ac.uk/) form an integrated resource to provide access to data and procedures for mouse phenotyping. EMPReSS describes 96 Standard Operating Procedures for mouse phenotyping. EuroPhenome contains data resulting from carrying out EMPReSS protocols on four inbred laboratory mouse strains. As well as web interfaces, both resources support web services to enable integration with other mouse phenotyping and functional genetics resources, and are committed to initiatives to improve integration of mouse phenotype databases. EuroPhenome will be the repository for a recently initiated effort to carry out large-scale phenotyping on a large number of knockout mouse lines (EUMODIC). PMID:17905814

  10. EuroPhenome and EMPReSS: online mouse phenotyping resource.

    PubMed

    Mallon, Ann-Marie; Blake, Andrew; Hancock, John M

    2008-01-01

    EuroPhenome (http://www.europhenome.org) and EMPReSS (http://empress.har.mrc.ac.uk/) form an integrated resource to provide access to data and procedures for mouse phenotyping. EMPReSS describes 96 Standard Operating Procedures for mouse phenotyping. EuroPhenome contains data resulting from carrying out EMPReSS protocols on four inbred laboratory mouse strains. As well as web interfaces, both resources support web services to enable integration with other mouse phenotyping and functional genetics resources, and are committed to initiatives to improve integration of mouse phenotype databases. EuroPhenome will be the repository for a recently initiated effort to carry out large-scale phenotyping on a large number of knockout mouse lines (EUMODIC).

  11. ReNE: A Cytoscape Plugin for Regulatory Network Enhancement

    PubMed Central

    Politano, Gianfranco; Benso, Alfredo; Savino, Alessandro; Di Carlo, Stefano

    2014-01-01

    One of the biggest challenges in the study of biological regulatory mechanisms is the integration, modeling, and analysis of the complex interactions that take place in biological networks. Although post-transcriptional regulatory elements (i.e., miRNAs) are widely investigated in current research, their usage and visualization in biological networks are very limited. Regulatory networks are commonly limited to gene entities. To integrate networks with post-transcriptional regulatory data, researchers are therefore forced to manually resort to specific third-party databases. In this context, we introduce ReNE, a Cytoscape 3.x plugin designed to automatically enrich a standard gene-based regulatory network with more detailed transcriptional, post-transcriptional, and translational data, resulting in an enhanced network that more precisely models the actual biological regulatory mechanisms. ReNE can automatically import a network layout from the Reactome or KEGG repositories, or work with custom pathways described using a standard OWL/XML data format that the Cytoscape import procedure accepts. Moreover, ReNE allows researchers to merge multiple pathways coming from different sources. The merged network structure is normalized to guarantee a consistent and uniform description of the network nodes and edges and to enrich all integrated data with additional annotations retrieved from genome-wide databases like NCBI, thus producing a pathway fully manageable through the Cytoscape environment. The normalized network is then analyzed to include missing transcription factors, miRNAs, and proteins. The resulting enhanced network is still a fully functional Cytoscape network where each regulatory element (transcription factor, miRNA, gene, protein) and regulatory mechanism (up-regulation/down-regulation) is clearly visually identifiable, thus enabling a better visual understanding of its role and effect in the network's behavior. The enhanced network produced by ReNE is exportable in multiple formats for further analysis via third-party applications. ReNE can be freely installed from the Cytoscape App Store (http://apps.cytoscape.org/apps/rene) and the full source code is freely available for download through an SVN repository accessible at http://www.sysbio.polito.it/tools_svn/BioInformatics/Rene/releases/. ReNE enhances a network by only integrating data from public repositories, without any inference or prediction. The reliability of the introduced interactions therefore depends only on the reliability of the source data, which is outside the control of the ReNE developers. PMID:25541727

  12. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    DOE PAGES

    King, Zachary A.; Lu, Justin; Drager, Andreas; ...

    2015-10-17

    Genome-scale metabolic models are mathematically structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data.

  13. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    PubMed Central

    King, Zachary A.; Lu, Justin; Dräger, Andreas; Miller, Philip; Federowicz, Stephen; Lerman, Joshua A.; Ebrahim, Ali; Palsson, Bernhard O.; Lewis, Nathan E.

    2016-01-01

    Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data. PMID:26476456

  14. Integrating In Silico Resources to Map a Signaling Network

    PubMed Central

    Liu, Hanqing; Beck, Tim N.; Golemis, Erica A.; Serebriiskii, Ilya G.

    2013-01-01

    The abundance of publicly available life science databases offers a wealth of information that can support interpretation of experimentally derived data and greatly enhance hypothesis generation. Protein interaction and functional networks are not simply new renditions of existing data: they provide the opportunity to gain insights into the specific physical and functional role a protein plays as part of the biological system. In this chapter, we describe different in silico tools that can quickly and conveniently retrieve data from existing data repositories and discuss how the available tools are best utilized for different purposes. While emphasizing protein-protein interaction databases (e.g., BioGrid and IntAct), we also introduce metasearch platforms such as STRING and GeneMANIA, pathway databases (e.g., BioCarta and Pathway Commons), text mining approaches (e.g., PubMed and Chilibot), and resources for drug-protein interactions, genetic information for model organisms and gene expression information based on microarray data mining. Furthermore, we provide a simple step-by-step protocol for building customized protein-protein interaction networks in Cytoscape, a powerful network assembly and visualization program, integrating data retrieved from these various databases. As we illustrate, generation of composite interaction networks enables investigators to extract significantly more information about a given biological system than utilization of a single database or sole reliance on primary literature. PMID:24233784
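
    As a small illustration of programmatically pulling interaction data from one of the mentioned metasearch platforms before assembling a composite network in Cytoscape, the sketch below queries the STRING web service. The endpoint and parameters are assumptions based on STRING's public REST interface; consult string-db.org for the current documentation.

```python
# Sketch of retrieving protein-protein interactions from STRING for later
# import into Cytoscape. Endpoint and parameters are assumed for illustration.
import requests

url = "https://string-db.org/api/tsv/network"
params = {"identifiers": "TP53", "species": 9606}  # 9606 = Homo sapiens

response = requests.get(url, params=params, timeout=30)
response.raise_for_status()

# Each non-header line describes one interaction edge that could be loaded
# into Cytoscape (e.g. via File > Import > Network from File).
lines = response.text.strip().splitlines()
print(f"{len(lines) - 1} interactions retrieved for TP53")
```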

  15. GENESI-DR: Discovery, Access and on-Demand Processing in Federated Repositories

    NASA Astrophysics Data System (ADS)

    Cossu, Roberto; Pacini, Fabrizio; Parrini, Andrea; Santi, Eliana Li; Fusco, Luigi

    2010-05-01

    GENESI-DR (Ground European Network for Earth Science Interoperations - Digital Repositories) is a European Commission (EC)-funded project, kicked off in early 2008 and led by ESA; partners include Space Agencies (DLR, ASI, CNES), both space and non-space data providers such as ENEA (I), Infoterra (UK), K-SAT (N), NILU (N) and JRC (EU), and industrial partners such as Elsag Datamat (I), CS (F) and TERRADUE (I). GENESI-DR addresses the challenge of reducing the "time to science" for different Earth Science disciplines in the discovery, access and use (combining, integrating, processing, …) of historical and recent Earth-related data from space, airborne and in-situ sensors, which are archived in large distributed repositories. In fact, a common dedicated infrastructure such as GENESI-DR permits the Earth Science communities to derive objective information and to share knowledge in all environmentally sensitive domains over a continuum of time and a variety of geographical scales, thus addressing urgent challenges such as Global Change. GENESI-DR federates data, information and knowledge for the management of our fragile planet, in line with one of the major goals of the many international environmental programmes such as GMES and GEO/GEOSS. As of today, 12 different Digital Repositories hosting more than 60 heterogeneous dataset series are federated in GENESI-DR. Series include satellite data, in situ data, images acquired by airborne sensors, digital elevation models and model outputs. ESA has started providing access to: Category-1 data systematically available on the Internet; level 3 data (e.g., GlobCover map, MERIS Global Vegetation Index); and ASAR products available in the ESA Virtual Archive and related to the Supersites initiatives. In all cases, existing data policies and security constraints are fully respected. GENESI-DR also gives access to Grid and Cloud computing resources, allowing authorized users to run a number of different processing services on the available data. The GENESI-DR operational platform is currently being validated against several applications from different domains, such as: automatic orthorectification of SPOT data; SAR interferometry; GlobModel results visualization and verification by comparison with satellite observations; ozone estimation from ERS-GOME products and comparison with in-situ LIDAR measures; and access to ocean-related heterogeneous data and on-the-fly generated products. The project adopts ISO 19115, ISO 19139 and OGC standards for geospatial metadata discovery and processing, is compliant with the basis of the INSPIRE Implementing Rules for Metadata and Discovery, and uses the OpenSearch protocol with Geo extensions for data and services discovery. OpenSearch is now considered by OGC a mass-market standard for providing a machine-accessible search interface to data repositories. GENESI-DR is gaining momentum in the Earth Science community thanks to its active participation in the GEO task force "Data Integration and Analysis Systems" and to several collaborations with EC projects. It is now extending international cooperation agreements, specifically with NASA (Goddard Earth Sciences Data Information Services), CEODE (the Center of Earth Observation for Digital Earth of Beijing), the APN (Asia-Pacific Network), and the University of Tokyo (Japanese GeoGrid and Data Integration and Analysis System).
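
    To make the discovery mechanism concrete, the sketch below constructs an OpenSearch query URL with a Geo-extension bounding box, of the kind a GENESI-DR client might issue. The base URL and parameter names are placeholders (the real values come from the service's OpenSearch description document); this is not the project's actual interface.

```python
# Illustrative construction of an OpenSearch query with Geo extensions.
# Base URL and parameter names are placeholders, not GENESI-DR's real API.
from urllib.parse import urlencode

base_url = "https://example-genesi-dr.org/opensearch/search"  # placeholder
params = {
    "searchTerms": "MERIS vegetation index",
    # OpenSearch Geo extension bounding box: west,south,east,north (degrees)
    "bbox": "5.0,40.0,20.0,50.0",
    "startIndex": 1,
    "count": 10,
}
query_url = f"{base_url}?{urlencode(params)}"
print(query_url)
```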

  16. Grid Application Meta-Repository System: Repository Interconnectivity and Cross-domain Application Usage in Distributed Computing Environments

    NASA Astrophysics Data System (ADS)

    Tudose, Alexandru; Terstyansky, Gabor; Kacsuk, Peter; Winter, Stephen

    Grid Application Repositories vary greatly in terms of access interface, security system, implementation technology, communication protocols and repository model. This diversity has become a significant limitation in terms of interoperability and inter-repository access. This paper presents the Grid Application Meta-Repository System (GAMRS) as a solution that offers better options for the management of Grid applications. GAMRS proposes a generic repository architecture that allows any Grid Application Repository (GAR) to be connected to the system regardless of its underlying technology. It also presents applications in a uniform manner and makes applications from all connected repositories visible to web search engines, OGSI/WSRF Grid Services and other OAI (Open Archives Initiative)-compliant repositories. GAMRS can also function as a repository in its own right and can store applications under a new repository model. With the help of this model, applications can be presented as embedded in virtual machines (VMs), and therefore they can run in their native environments and be easily deployed on virtualized infrastructures, allowing interoperability with new-generation technologies such as cloud computing, application-on-demand, automatic service/application deployment and automatic VM generation.

  17. 48 CFR 227.7207 - Contractor data repositories.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Computer Software and Computer Software Documentation 227.7207 Contractor data repositories. Follow 227.7108 when it is in the Government's interests to have a data repository include computer software or to have a separate computer software repository. Contractual instruments establishing the repository...

  18. 48 CFR 227.7207 - Contractor data repositories.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Computer Software and Computer Software Documentation 227.7207 Contractor data repositories. Follow 227.7108 when it is in the Government's interests to have a data repository include computer software or to have a separate computer software repository. Contractual instruments establishing the repository...

  19. 48 CFR 227.7207 - Contractor data repositories.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Computer Software and Computer Software Documentation 227.7207 Contractor data repositories. Follow 227.7108 when it is in the Government's interests to have a data repository include computer software or to have a separate computer software repository. Contractual instruments establishing the repository...

  20. 48 CFR 227.7207 - Contractor data repositories.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Computer Software and Computer Software Documentation 227.7207 Contractor data repositories. Follow 227.7108 when it is in the Government's interests to have a data repository include computer software or to have a separate computer software repository. Contractual instruments establishing the repository...

  1. Accident/Mishap Investigation System

    NASA Technical Reports Server (NTRS)

    Keller, Richard; Wolfe, Shawn; Gawdiak, Yuri; Carvalho, Robert; Panontin, Tina; Williams, James; Sturken, Ian

    2007-01-01

    InvestigationOrganizer (IO) is a Web-based collaborative information system that integrates the generic functionality of a database, a document repository, a semantic hypermedia browser, and a rule-based inference system with specialized modeling and visualization functionality to support accident/mishap investigation teams. This accessible, online structure is designed to support investigators by allowing them to make explicit, shared, and meaningful links among evidence, causal models, findings, and recommendations.

  2. The International Solar Terrestrial Physics Program: A Model for International Cooperation in Space Research

    NASA Astrophysics Data System (ADS)

    Acuña, M.

    The International Solar Terrestrial Physics Program (ISTP) evolved from the individual plans of US, Japanese and European countries to develop space missions to expand our knowledge of the Sun-Earth connection as a "system". Previous experience with independent missions amply illustrated the critical need for coordinated and simultaneous observations in key regions of Sun-Earth space in order to resolve time-space ambiguities and cause-effect relationships. Mission studies such as the US Origins of Plasmas in the Earth's Neighborhood (OPEN), Geotail in Japan, the Solar Heliospheric Observatory in Europe, and the Regatta and other magnetospheric missions in the former Soviet Union formed the early conceptual elements that eventually led to the ISTP program. The coordinating role developed by the Inter-Agency Consultative Group (IACG), comprising NASA, ESA, ISAS and IKI, and demonstrated during the comet Halley apparition in 1986, was extended to include solar-terrestrial research and the mission elements described above. In addition to the space elements, a most important component of the coordination effort was the inclusion of data networks, analysis and planning tools, as well as data sets globally accessible to the scientific community at large. This approach enabled the active and direct participation of scientists in developing countries in one of the most comprehensive solar-terrestrial research programs implemented to date. The creation of multiple ISTP data repositories throughout the world has enabled a large number of scientists in developing countries to have direct access to the latest spacecraft observations and a most fruitful interaction with fellow researchers throughout the world. This paper will present a review of the evolution of the ISTP program, its products, analysis tools, databases, infrastructure and lessons learned applicable to future international collaborative programs.

  3. Analysis of water flow paths: methodology and example calculations for a potential geological repository in Sweden.

    PubMed

    Werner, Kent; Bosson, Emma; Berglund, Sten

    2006-12-01

    Safety assessment related to the siting of a geological repository for spent nuclear fuel deep in the bedrock requires identification of potential flow paths and the associated travel times for radionuclides originating at repository depth. Using the Laxemar candidate site in Sweden as a case study, this paper describes modeling methodology, data integration, and the resulting water flow models, focusing on the Quaternary deposits and the upper 150 m of the bedrock. Example simulations identify flow paths to groundwater discharge areas and flow paths in the surface system. The majority of the simulated groundwater flow paths end up in the main surface waters and along the coastline, even though the particles used to trace the flow paths are introduced with a uniform spatial distribution at a relatively shallow depth. The calculated groundwater travel time, determining the time available for decay and retention of radionuclides, is on average longer to the coastal bays than to other biosphere objects at the site. Further, it is demonstrated how GIS-based modeling can be used to limit the number of surface flow paths that need to be characterized for safety assessment. Based on the results, the paper discusses an approach for coupling the present models to a model for groundwater flow in the deep bedrock.

  4. PipelineDog: a simple and flexible graphic pipeline construction and maintenance tool.

    PubMed

    Zhou, Anbo; Zhang, Yeting; Sun, Yazhou; Xing, Jinchuan

    2018-05-01

    Analysis pipelines are an essential part of bioinformatics research, and ad hoc pipelines are frequently created by researchers for prototyping and proof-of-concept purposes. However, most existing pipeline management systems or workflow engines are too complex for rapid prototyping or for learning the pipeline concept. A lightweight, user-friendly and flexible solution is thus desirable. In this study, we developed a new pipeline construction and maintenance tool, PipelineDog. This is a web-based integrated development environment with a modern web graphical user interface. It offers cross-platform compatibility, project management capabilities, code formatting and error checking functions, and an online repository. It uses an easy-to-read/write script system that encourages code reuse. With the online repository, it also encourages sharing of pipelines, which enhances analysis reproducibility and accountability. For most users, PipelineDog requires no software installation. Overall, this web application provides a way to rapidly create and easily manage pipelines. The PipelineDog web app is freely available at http://web.pipeline.dog. The command line version is available at http://www.npmjs.com/package/pipelinedog and the online repository at http://repo.pipeline.dog. ysun@kean.edu or xing@biology.rutgers.edu or ysun@diagnoa.com. Supplementary data are available at Bioinformatics online.

  5. A cloud-based information repository for bridge monitoring applications

    NASA Astrophysics Data System (ADS)

    Jeong, Seongwoon; Zhang, Yilan; Hou, Rui; Lynch, Jerome P.; Sohn, Hoon; Law, Kincho H.

    2016-04-01

    This paper describes an information repository to support bridge monitoring applications on a cloud computing platform. Bridge monitoring, with instrumentation of sensors in particular, collects a significant amount of data. In addition to sensor data, a wide variety of information such as bridge geometry, analysis models and sensor descriptions needs to be stored. Data management plays an important role in facilitating data utilization and data sharing. While bridge information modeling (BrIM) technologies and standards have been proposed and provide a means to enable integration and facilitate interoperability, current BrIM standards mostly support information about bridge geometry. In this study, we extend the BrIM schema to include analysis models and sensor information. Specifically, using the OpenBrIM standards as the base, we draw on CSI Bridge, a commercial software package widely used for bridge analysis and design, and SensorML, a standard schema for sensor definition, to define the data entities necessary for bridge monitoring applications. NoSQL database systems are employed for the data repository. A cloud service infrastructure is deployed to enhance the scalability, flexibility and accessibility of the data management system. The data model and systems are tested using the bridge model and the sensor data collected at the Telegraph Road Bridge, Monroe, Michigan.
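
    As a minimal sketch of the kind of document such an extended schema might hold in a NoSQL store, the example below inserts a bridge/sensor record into MongoDB via pymongo. The field names, identifiers and references are illustrative assumptions, not the paper's actual schema extension.

```python
# Illustrative document for a bridge-monitoring repository in MongoDB.
# Field names and identifiers are invented for this sketch.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local instance
db = client["bridge_monitoring"]

sensor_doc = {
    "bridge_id": "telegraph-road-bridge",        # illustrative identifier
    "sensor": {
        "id": "ACC-03",
        "type": "accelerometer",
        "description_format": "SensorML",         # per the paper's approach
        "location": {"girder": "G2", "station_m": 24.5},
    },
    "analysis_model_ref": "csibridge/model_v2",   # hypothetical reference
    "recorded_at": datetime.now(timezone.utc),
}

result = db.sensors.insert_one(sensor_doc)
print("stored document id:", result.inserted_id)
```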

  6. 40 CFR 124.33 - Information repository.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Information repository. 124.33 Section... FOR DECISIONMAKING Specific Procedures Applicable to RCRA Permits § 124.33 Information repository. (a... basis, for an information repository. When assessing the need for an information repository, the...

  7. 10 CFR 60.130 - General considerations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... REPOSITORIES Technical Criteria Design Criteria for the Geologic Repository Operations Area § 60.130 General... for a high-level radioactive waste repository at a geologic repository operations area, and an... geologic repository operations area, must include the principal design criteria for a proposed facility...

  8. Annual Historical Summary, Defense Documentation Center, 1 July 1968 to 30 June 1969.

    ERIC Educational Resources Information Center

    Defense Documentation Center, Alexandria, VA.

    This summary describes the more significant activities and achievements of the Defense Documentation Center (DDC) including: DDC and the scientific and technical community. The DDC role in the Department of Defense Scientific and Technical Information Program continued to shift from the traditional concept of an archival repository and a…

  9. 10 CFR 60.111 - Performance of the geologic repository operations area through permanent closure.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... retrieval throughout the period during which wastes are being emplaced and, thereafter, until the completion of a performance confirmation program and Commission review of the information obtained from such a... retrievability. (3) For purposes of this paragraph, a reasonable schedule for retrieval is one that would permit...

  10. 10 CFR 60.111 - Performance of the geologic repository operations area through permanent closure.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... retrieval throughout the period during which wastes are being emplaced and, thereafter, until the completion of a performance confirmation program and Commission review of the information obtained from such a... retrievability. (3) For purposes of this paragraph, a reasonable schedule for retrieval is one that would permit...

  11. 10 CFR 60.111 - Performance of the geologic repository operations area through permanent closure.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... retrieval throughout the period during which wastes are being emplaced and, thereafter, until the completion of a performance confirmation program and Commission review of the information obtained from such a... retrievability. (3) For purposes of this paragraph, a reasonable schedule for retrieval is one that would permit...

  12. 10 CFR 60.111 - Performance of the geologic repository operations area through permanent closure.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... retrieval throughout the period during which wastes are being emplaced and, thereafter, until the completion of a performance confirmation program and Commission review of the information obtained from such a... retrievability. (3) For purposes of this paragraph, a reasonable schedule for retrieval is one that would permit...

  13. 75 FR 66136 - Agency Information Collection Activities: Reinstatement, With Change, of a Previously Approved...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-27

    ... Criminal History Information Systems. The Department of Justice (DOJ), Office of Justice Programs, Bureau... collection for which approval has expired. (2) Title of the Form/Collection: Survey of State Criminal History... history records and on the increasing number of operations and services provided by state repositories. (5...

  14. Harvesting Alternative Credit Transfer Students: Redefining Selectivity in Your Online Learning Program Enrollment Leads

    ERIC Educational Resources Information Center

    Corlett, Bradly

    2014-01-01

    Several recent issues and trends in online education have resulted in consolidation of efforts for Massive Open Online Courses (MOOCs), increased Open Educational Resources (OER) in the form of asynchronous course repositories, with noticeable increases in governance and policy amplification. These emerging enrollment trends in alternative online…

  15. Extraction and analysis of signatures from the Gene Expression Omnibus by the crowd

    PubMed Central

    Wang, Zichen; Monteiro, Caroline D.; Jagodnik, Kathleen M.; Fernandez, Nicolas F.; Gundersen, Gregory W.; Rouillard, Andrew D.; Jenkins, Sherry L.; Feldmann, Axel S.; Hu, Kevin S.; McDermott, Michael G.; Duan, Qiaonan; Clark, Neil R.; Jones, Matthew R.; Kou, Yan; Goff, Troy; Woodland, Holly; Amaral, Fabio M R.; Szeto, Gregory L.; Fuchs, Oliver; Schüssler-Fiorenza Rose, Sophia M.; Sharma, Shvetank; Schwartz, Uwe; Bausela, Xabier Bengoetxea; Szymkiewicz, Maciej; Maroulis, Vasileios; Salykin, Anton; Barra, Carolina M.; Kruth, Candice D.; Bongio, Nicholas J.; Mathur, Vaibhav; Todoric, Radmila D; Rubin, Udi E.; Malatras, Apostolos; Fulp, Carl T.; Galindo, John A.; Motiejunaite, Ruta; Jüschke, Christoph; Dishuck, Philip C.; Lahl, Katharina; Jafari, Mohieddin; Aibar, Sara; Zaravinos, Apostolos; Steenhuizen, Linda H.; Allison, Lindsey R.; Gamallo, Pablo; de Andres Segura, Fernando; Dae Devlin, Tyler; Pérez-García, Vicente; Ma'ayan, Avi

    2016-01-01

    Gene expression data are accumulating exponentially in public repositories. Reanalysis and integration of themed collections from these studies may provide new insights, but requires further human curation. Here we report a crowdsourcing project to annotate and reanalyse a large number of gene expression profiles from Gene Expression Omnibus (GEO). Through a massive open online course on Coursera, over 70 participants from over 25 countries identify and annotate 2,460 single-gene perturbation signatures, 839 disease versus normal signatures, and 906 drug perturbation signatures. All these signatures are unique and are manually validated for quality. Global analysis of these signatures confirms known associations and identifies novel associations between genes, diseases and drugs. The manually curated signatures are used as a training set to develop classifiers for extracting similar signatures from the entire GEO repository. We develop a web portal to serve these signatures for query, download and visualization. PMID:27667448
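
    The abstract describes using the curated signatures as a training set for classifiers that extract similar signatures from GEO. A rough sketch of that idea with scikit-learn is given below; the feature vectors and labels are random placeholders, not GEO data, and the actual features and model used in the study may differ.

```python
# Toy sketch: train a classifier on signature-like feature vectors.
# Data are synthetic placeholders, not curated GEO signatures.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_signatures, n_genes = 200, 500

X = rng.normal(size=(n_signatures, n_genes))   # one row per signature
y = rng.integers(0, 2, size=n_signatures)      # 1 = e.g. drug perturbation

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 2))
```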

  16. Digital Library Storage using iRODS Data Grids

    NASA Astrophysics Data System (ADS)

    Hedges, Mark; Blanke, Tobias; Hasan, Adil

    Digital repository software provides a powerful and flexible infrastructure for managing and delivering complex digital resources and metadata. However, issues can arise in managing the very large, distributed data files that may constitute these resources. This paper describes an implementation approach that combines the Fedora digital repository software with a storage layer implemented as a data grid, using the iRODS middleware developed by DICE (Data Intensive Cyber Environments) as the successor to SRB. This approach allows us to use Fedora's flexible architecture to manage the structure of resources and to provide application-layer services to users. The grid-based storage layer provides efficient support for managing and processing the underlying distributed data objects, which may be very large (e.g. audio-visual material). The Rule Engine built into iRODS is used to integrate complex workflows at the data level that need not be visible to users, e.g. digital preservation functionality.
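
    A minimal sketch of depositing a large master file into an iRODS data grid using the python-irodsclient package is shown below. The connection details, zone and paths are placeholders, and the Fedora-to-iRODS wiring described in the paper is not shown; this only illustrates the kind of storage-layer interaction involved.

```python
# Sketch only: deposit a large file into an iRODS collection.
# Host, credentials and paths are placeholders for illustration.
from irods.session import iRODSSession

with iRODSSession(host="irods.example.org", port=1247,
                  user="repo_user", password="secret",
                  zone="archiveZone") as session:
    # Store the audio-visual master; preservation rules attached to the
    # collection by the iRODS Rule Engine can then act on it server-side.
    session.data_objects.put(
        "/local/masters/interview_0001.mxf",
        "/archiveZone/home/repo_user/masters/interview_0001.mxf",
    )
```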

  17. Extraction and analysis of signatures from the Gene Expression Omnibus by the crowd.

    PubMed

    Wang, Zichen; Monteiro, Caroline D; Jagodnik, Kathleen M; Fernandez, Nicolas F; Gundersen, Gregory W; Rouillard, Andrew D; Jenkins, Sherry L; Feldmann, Axel S; Hu, Kevin S; McDermott, Michael G; Duan, Qiaonan; Clark, Neil R; Jones, Matthew R; Kou, Yan; Goff, Troy; Woodland, Holly; Amaral, Fabio M R; Szeto, Gregory L; Fuchs, Oliver; Schüssler-Fiorenza Rose, Sophia M; Sharma, Shvetank; Schwartz, Uwe; Bausela, Xabier Bengoetxea; Szymkiewicz, Maciej; Maroulis, Vasileios; Salykin, Anton; Barra, Carolina M; Kruth, Candice D; Bongio, Nicholas J; Mathur, Vaibhav; Todoric, Radmila D; Rubin, Udi E; Malatras, Apostolos; Fulp, Carl T; Galindo, John A; Motiejunaite, Ruta; Jüschke, Christoph; Dishuck, Philip C; Lahl, Katharina; Jafari, Mohieddin; Aibar, Sara; Zaravinos, Apostolos; Steenhuizen, Linda H; Allison, Lindsey R; Gamallo, Pablo; de Andres Segura, Fernando; Dae Devlin, Tyler; Pérez-García, Vicente; Ma'ayan, Avi

    2016-09-26

    Gene expression data are accumulating exponentially in public repositories. Reanalysis and integration of themed collections from these studies may provide new insights, but requires further human curation. Here we report a crowdsourcing project to annotate and reanalyse a large number of gene expression profiles from Gene Expression Omnibus (GEO). Through a massive open online course on Coursera, over 70 participants from over 25 countries identify and annotate 2,460 single-gene perturbation signatures, 839 disease versus normal signatures, and 906 drug perturbation signatures. All these signatures are unique and are manually validated for quality. Global analysis of these signatures confirms known associations and identifies novel associations between genes, diseases and drugs. The manually curated signatures are used as a training set to develop classifiers for extracting similar signatures from the entire GEO repository. We develop a web portal to serve these signatures for query, download and visualization.

  18. Ubiquitous-Severance Hospital Project: Implementation and Results

    PubMed Central

    Chang, Bung-Chul; Kim, Young-A; Kim, Jee Hea; Jung, Hae Kyung; Kang, Eun Hae; Kang, Hee Suk; Lee, Hyung Il; Kim, Yong Ook; Yoo, Sun Kook; Sunwoo, Ilnam; An, Seo Yong; Jeong, Hye Jeong

    2010-01-01

    Objectives The purpose of this study was to review an implementation of u-Severance information system with focus on electronic hospital records (EHR) and to suggest future improvements. Methods Clinical Data Repository (CDR) of u-Severance involved implementing electronic medical records (EMR) as the basis of EHR and the management of individual health records. EHR were implemented with service enhancements extending to the clinical decision support system (CDSS) and expanding the knowledge base for research with a repository for clinical data and medical care information. Results The EMR system of Yonsei University Health Systems (YUHS) consists of HP integrity superdome servers using MS SQL as a database management system and MS Windows as its operating system. Conclusions YUHS is a high-performing medical institution with regards to efficient management and customer satisfaction; however, after 5 years of implementation of u-Severance system, several limitations with regards to expandability and security have been identified. PMID:21818425

  19. Depleted uranium as a backfill for nuclear fuel waste package

    DOEpatents

    Forsberg, Charles W.

    1998-01-01

    A method for packaging spent nuclear fuel for long-term disposal in a geological repository. At least one spent nuclear fuel assembly is first placed in an unsealed waste package and a depleted uranium fill material is added to the waste package. The depleted uranium fill material comprises flowable particles having a size sufficient to substantially fill any voids in and around the assembly and contains isotopically-depleted uranium in the +4 valence state in an amount sufficient to inhibit dissolution of the spent nuclear fuel from the assembly into a surrounding medium and to lessen the potential for nuclear criticality inside the repository in the event of failure of the waste package. Last, the waste package is sealed, thereby substantially reducing the release of radionuclides into the surrounding medium, while simultaneously providing radiation shielding and increased structural integrity of the waste package.

  20. A Distributed Multi-Agent System for Collaborative Information Management and Learning

    NASA Technical Reports Server (NTRS)

    Chen, James R.; Wolfe, Shawn R.; Wragg, Stephen D.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In this paper, we present DIAMS, a system of distributed, collaborative agents to help users access, manage, share and exchange information. A DIAMS personal agent helps its owner find information most relevant to current needs. It provides tools and utilities for users to manage their information repositories with dynamic organization and virtual views. Flexible hierarchical display is integrated with indexed query search to support effective information access. Automatic indexing methods are employed to support user queries and communication between agents. Contents of a repository are kept in object-oriented storage to facilitate information sharing. Collaboration between users is aided by easy sharing utilities as well as automated information exchange. Matchmaker agents are designed to establish connections between users with similar interests and expertise. DIAMS agents provide needed services for users to share and learn information from one another on the World Wide Web.

  1. Evolutions in Metadata Quality

    NASA Astrophysics Data System (ADS)

    Gilman, J.

    2016-12-01

    Metadata Quality is one of the chief drivers of discovery and use of NASA EOSDIS (Earth Observing System Data and Information System) data. Issues with metadata such as lack of completeness, inconsistency, and use of legacy terms directly hinder data use. As the central metadata repository for NASA Earth Science data, the Common Metadata Repository (CMR) has a responsibility to its users to ensure the quality of CMR search results. This talk will cover how we encourage metadata authors to improve the metadata through the use of integrated rubrics of metadata quality and outreach efforts. In addition we'll demonstrate Humanizers, a technique for dealing with the symptoms of metadata issues. Humanizers allow CMR administrators to identify specific metadata issues that are fixed at runtime when the data is indexed. An example Humanizer is the aliasing of processing level "Level 1" to "1" to improve consistency across collections. The CMR currently indexes 35K collections and 300M granules.
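
    The Humanizer example in the abstract (aliasing processing level "Level 1" to "1" at index time) can be illustrated with a toy alias table applied to a record before indexing. The rule table, field names and record below are made up for illustration and are not the CMR's actual implementation.

```python
# Toy illustration of Humanizer-style aliasing applied at indexing time.
# The stored provider metadata is left untouched; only the indexed copy changes.
HUMANIZER_ALIASES = {
    "processing_level": {"Level 1": "1", "Level-1": "1", "L1": "1"},
}

def humanize(record: dict) -> dict:
    """Return a copy of the record with alias rules applied for indexing."""
    indexed = dict(record)
    for field, aliases in HUMANIZER_ALIASES.items():
        if field in indexed:
            indexed[field] = aliases.get(indexed[field], indexed[field])
    return indexed

provider_record = {"short_name": "MODIS_L1B", "processing_level": "Level 1"}
print(humanize(provider_record))  # processing_level becomes "1"
```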

  2. A Routing Mechanism for Cloud Outsourcing of Medical Imaging Repositories.

    PubMed

    Godinho, Tiago Marques; Viana-Ferreira, Carlos; Bastião Silva, Luís A; Costa, Carlos

    2016-01-01

    Web-based technologies have been increasingly used in picture archive and communication systems (PACS), in services related to storage, distribution, and visualization of medical images. Nowadays, many healthcare institutions are outsourcing their repositories to the cloud. However, managing communications between multiple geo-distributed locations is still challenging due to the complexity of dealing with huge volumes of data and bandwidth requirements. Moreover, standard methodologies still do not take full advantage of outsourced archives, namely because their integration with other in-house solutions is troublesome. In order to improve the performance of distributed medical imaging networks, a smart routing mechanism was developed. This includes an innovative cache system based on splitting and dynamic management of Digital Imaging and Communications in Medicine (DICOM) objects. The proposed solution was successfully deployed in a regional PACS archive. The results obtained proved that it is better than conventional approaches, as it reduces remote access latency and also the required cache storage space.
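
    To make the caching idea concrete, the sketch below keeps recently requested DICOM objects, keyed by SOP Instance UID, in a bounded local cache in front of a remote cloud archive. The fetch function and UIDs are placeholders, and the published mechanism additionally splits objects and manages the cache dynamically, which is not modeled here.

```python
# Simplified local cache for DICOM objects in front of a remote archive.
# Keys, payloads and the fetch callback are placeholders for illustration.
from collections import OrderedDict

class DicomCache:
    def __init__(self, capacity: int = 1000):
        self.capacity = capacity
        self._store = OrderedDict()   # sop_instance_uid -> object bytes

    def get(self, uid: str, fetch_remote):
        if uid in self._store:
            self._store.move_to_end(uid)         # mark as recently used
            return self._store[uid]
        data = fetch_remote(uid)                  # e.g. a DICOMweb request
        self._store[uid] = data
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)       # evict least recently used
        return data

cache = DicomCache(capacity=2)
fake_remote = lambda uid: b"dicom-bytes-for-" + uid.encode()
print(cache.get("1.2.840.99999.1", fake_remote)[:24])
```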

  3. The Materials Commons: A Collaboration Platform and Information Repository for the Global Materials Community

    NASA Astrophysics Data System (ADS)

    Puchala, Brian; Tarcea, Glenn; Marquis, Emmanuelle. A.; Hedstrom, Margaret; Jagadish, H. V.; Allison, John E.

    2016-08-01

    Accelerating the pace of materials discovery and development requires new approaches and means of collaborating and sharing information. To address this need, we are developing the Materials Commons, a collaboration platform and information repository for use by the structural materials community. The Materials Commons has been designed to be a continuous, seamless part of the scientific workflow process. Researchers upload the results of experiments and computations as they are performed, automatically where possible, along with the provenance information describing the experimental and computational processes. The Materials Commons website provides an easy-to-use interface for uploading and downloading data and data provenance, as well as for searching and sharing data. This paper provides an overview of the Materials Commons. Concepts are also outlined for integrating the Materials Commons with the broader Materials Information Infrastructure that is evolving to support the Materials Genome Initiative.

  4. EIR: enterprise imaging repository, an alternative imaging archiving and communication system.

    PubMed

    Bian, Jiang; Topaloglu, Umit; Lane, Cheryl

    2009-01-01

    The enormous number of studies performed at the Nuclear Medicine Department of the University of Arkansas for Medical Sciences (UAMS) generates a huge volume of PET/CT images daily. A DICOM workstation had been used as a "mini-PACS" to route all studies, which has proven to be slow for various reasons. However, replacing the workstation with a commercial PACS server is not only cost inefficient; moreover, PACS vendors are often reluctant to take responsibility for the final integration of these components. Therefore, in this paper, we propose an alternative imaging archiving and communication system called the Enterprise Imaging Repository (EIR). EIR consists of two distinct components: an image processing daemon and a user-friendly web interface. EIR not only reduces the overall waiting time of transferring a study from the modalities to radiologists' workstations, but also provides a better presentation.

  5. Ubiquitous-severance hospital project: implementation and results.

    PubMed

    Chang, Bung-Chul; Kim, Nam-Hyun; Kim, Young-A; Kim, Jee Hea; Jung, Hae Kyung; Kang, Eun Hae; Kang, Hee Suk; Lee, Hyung Il; Kim, Yong Ook; Yoo, Sun Kook; Sunwoo, Ilnam; An, Seo Yong; Jeong, Hye Jeong

    2010-03-01

    The purpose of this study was to review an implementation of u-Severance information system with focus on electronic hospital records (EHR) and to suggest future improvements. Clinical Data Repository (CDR) of u-Severance involved implementing electronic medical records (EMR) as the basis of EHR and the management of individual health records. EHR were implemented with service enhancements extending to the clinical decision support system (CDSS) and expanding the knowledge base for research with a repository for clinical data and medical care information. The EMR system of Yonsei University Health Systems (YUHS) consists of HP integrity superdome servers using MS SQL as a database management system and MS Windows as its operating system. YUHS is a high-performing medical institution with regards to efficient management and customer satisfaction; however, after 5 years of implementation of u-Severance system, several limitations with regards to expandability and security have been identified.

  6. Depleted uranium as a backfill for nuclear fuel waste package

    DOEpatents

    Forsberg, C.W.

    1998-11-03

    A method is described for packaging spent nuclear fuel for long-term disposal in a geological repository. At least one spent nuclear fuel assembly is first placed in an unsealed waste package and a depleted uranium fill material is added to the waste package. The depleted uranium fill material comprises flowable particles having a size sufficient to substantially fill any voids in and around the assembly and contains isotopically-depleted uranium in the +4 valence state in an amount sufficient to inhibit dissolution of the spent nuclear fuel from the assembly into a surrounding medium and to lessen the potential for nuclear criticality inside the repository in the event of failure of the waste package. Last, the waste package is sealed, thereby substantially reducing the release of radionuclides into the surrounding medium, while simultaneously providing radiation shielding and increased structural integrity of the waste package. 6 figs.

  7. Can shale safely host US nuclear waste?

    USGS Publications Warehouse

    Neuzil, C.E.

    2013-01-01

    "Even as cleanup efforts after Japan’s Fukushima disaster offer a stark reminder of the spent nuclear fuel (SNF) stored at nuclear plants worldwide, the decision in 2009 to scrap Yucca Mountain as a permanent disposal site has dimmed hope for a repository for SNF and other high-level nuclear waste (HLW) in the United States anytime soon. About 70,000 metric tons of SNF are now in pool or dry cask storage at 75 sites across the United States [Government Accountability Office, 2012], and uncertainty about its fate is hobbling future development of nuclear power, increasing costs for utilities, and creating a liability for American taxpayers [Blue Ribbon Commission on America’s Nuclear Future, 2012].However, abandoning Yucca Mountain could also result in broadening geologic options for hosting America’s nuclear waste. Shales and other argillaceous formations (mudrocks, clays, and similar clay-rich media) have been absent from the U.S. repository program. In contrast, France, Switzerland, and Belgium are now planning repositories in argillaceous formations after extensive research in underground laboratories on the safety and feasibility of such an approach [Blue Ribbon Commission on America’s Nuclear Future, 2012; Nationale Genossenschaft für die Lagerung radioaktiver Abfälle (NAGRA), 2010; Organisme national des déchets radioactifs et des matières fissiles enrichies, 2011]. Other nations, notably Japan, Canada, and the United Kingdom, are studying argillaceous formations or may consider them in their siting programs [Japan Atomic Energy Agency, 2012; Nuclear Waste Management Organization (NWMO), (2011a); Powell et al., 2010]."

  8. Pretest characterization of WIPP experimental waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, J.; Davis, H.; Drez, P.E.

    The Waste Isolation Pilot Plant (WIPP) near Carlsbad, New Mexico, is an underground repository designed for the storage and disposal of transuranic (TRU) wastes from US Department of Energy (DOE) facilities across the country. The Performance Assessment (PA) studies for WIPP address compliance of the repository with applicable regulations, and include full-scale experiments to be performed at the WIPP site. These experiments are the bin-scale and alcove tests to be conducted by Sandia National Laboratories (SNL). Prior to conducting these experiments, the waste to be used in these tests needs to be characterized to provide data on the initial conditions for these experiments. This characterization is referred to as the Pretest Characterization of WIPP Experimental Waste, and is also expected to provide input to other programmatic efforts related to waste characterization. The purpose of this paper is to describe the pretest waste characterization activities currently in progress for the WIPP bin-scale waste, and to discuss the program plan and specific analytical protocols being developed for this characterization. The relationship between different programs and documents related to waste characterization efforts is also highlighted in this paper.

  9. 48 CFR 227.7108 - Contractor data repositories.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Technical Data 227.7108 Contractor data repositories. (a) Contractor data repositories may be established... procedures for protecting technical data delivered to or stored at the repository from unauthorized release... disclosure of technical data from the repository to third parties consistent with the Government's rights in...

  10. Tourism impacts of Three Mile Island and other adverse events: Implications for Lincoln County and other rural counties bisected by radioactive wastes intended for Yucca Mountain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Himmelberger, J.J.; Ogneva-Himmelberger, Y.A.; Baughman, M.

    Whether the proposed Yucca Mountain nuclear waste repository system will adversely impact tourism in southern Nevada is an open question of particular importance to visitor-oriented rural counties bisected by planned waste transportation corridors (highway or rail). As part of one such county's repository impact assessment program, tourism implications of Three Mile Island (TMI) and other major hazard events have been revisited to inform ongoing county-wide socioeconomic assessments and contingency planning efforts. This paper summarizes the key implications of such research as applied to Lincoln County, Nevada. Implications for other rural counties are discussed in light of the research findings. 29 refs., 3 figs., 1 tab.

  11. A clinical data repository enhances hospital infection control.

    PubMed Central

    Samore, M.; Lichtenberg, D.; Saubermann, L.; Kawachi, C.; Carmeli, Y.

    1997-01-01

    We describe the benefits of a relational database of hospital clinical data (Clinical Data Repository; CDR) for an infection control program. The CDR consists of > 40 Sybase tables, and is directly accessible for ad hoc queries by members of the infection control unit who have been granted access privileges by the Information Systems Department. The data elements and functional requirements most useful for surveillance of nosocomial infections, antibiotic use, and resistant organisms are characterized. Specific applications of the CDR are presented, including the use of automated definitions of nosocomial infection, graphical monitoring of resistant organisms with quality control limits, and prospective detection of inappropriate antibiotic use. Hospital surveillance and quality improvement activities benefit significantly from the availability of a queryable set of tables containing diverse clinical data. PMID:9357588
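
    A hedged sketch of the kind of ad hoc query such a repository supports is shown below: positive cultures collected more than 48 hours after admission as a crude screen for nosocomial infection. The table and column names are invented for this example (the paper's CDR used Sybase; SQLite is used here only so the snippet is self-contained).

```python
# Illustrative ad hoc surveillance query against a toy clinical data repository.
# Schema and data are invented; not the CDR described in the paper.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE admissions (patient_id TEXT, admitted_at TEXT);
CREATE TABLE cultures  (patient_id TEXT, collected_at TEXT, organism TEXT, positive INTEGER);
INSERT INTO admissions VALUES ('p1', '2024-01-01 08:00');
INSERT INTO cultures  VALUES ('p1', '2024-01-04 10:00', 'MRSA', 1);
""")

query = """
SELECT c.patient_id, c.organism, c.collected_at
FROM cultures c
JOIN admissions a ON a.patient_id = c.patient_id
WHERE c.positive = 1
  AND julianday(c.collected_at) - julianday(a.admitted_at) > 2.0;
"""
for row in conn.execute(query):
    print(row)   # candidate nosocomial infection for manual review
```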

  12. Microstructural and mineralogical characterization of selected shales in support of nuclear waste repository studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S.Y.; Hyder, L.K.; Alley, P.D.

    1988-01-01

    Five shales were examined as part of the Sedimentary Rock Program evaluation of this medium as a potential host for a US civilian nuclear waste repository. The units selected for characterization were the Chattanooga Shale from Fentress County, Tennessee; the Pierre Shale from Gregory County, South Dakota; the Green River Formation from Garfield County, Colorado; and the Nolichucky Shale and Pumpkin Valley Shale from Roane County, Tennessee. The micromorphology and structure of the shales were examined by petrographic, scanning electron, and high-resolution transmission electron microscopy. Chemical and mineralogical compositions were studied through the use of energy-dispersive x-ray, neutron activation, atomic absorption, thermal, and x-ray diffraction analysis techniques. 18 refs., 12 figs., 2 tabs.

  13. Cryopreservation in fish: current status and pathways to quality assurance and quality control in repository development

    PubMed Central

    Torres, Leticia; Hu, E.; Tiersch, Terrence R.

    2017-01-01

    Cryopreservation in aquatic species in general has been constrained to research activities for more than 60 years. Although the need for application and commercialisation pathways has become clear, the lack of comprehensive quality assurance and quality control programs has impeded the progress of the field, delaying the establishment of germplasm repositories and commercial-scale applications. In this review we focus on the opportunities for standardisation in the practices involved in the four main stages of the cryopreservation process: (1) source, housing and conditioning of fish; (2) sample collection and preparation; (3) freezing and cryogenic storage of samples; and (4) egg collection and use of thawed sperm samples. In addition, we introduce some key factors that would assist the transition to commercial-scale, high-throughput application. PMID:26739583

  14. Data model, dictionaries, and desiderata for biomolecular simulation data indexing and sharing

    PubMed Central

    2014-01-01

    Background Few environments have been developed or deployed to widely share biomolecular simulation data or to enable collaborative networks to facilitate data exploration and reuse. As the amount and complexity of data generated by these simulations is dramatically increasing and the methods are being more widely applied, the need for new tools to manage and share this data has become obvious. In this paper we present the results of a process aimed at assessing the needs of the community for data representation standards to guide the implementation of future repositories for biomolecular simulations. Results We introduce a list of common data elements, inspired by previous work, and updated according to feedback from the community collected through a survey and personal interviews. These data elements integrate the concepts for multiple types of computational methods, including quantum chemistry and molecular dynamics. The identified core data elements were organized into a logical model to guide the design of new databases and application programming interfaces. Finally, a set of dictionaries was implemented to be used via SQL queries or locally via a Java API built upon the Apache Lucene text-search engine. Conclusions The model and its associated dictionaries provide a simple yet rich representation of the concepts related to biomolecular simulations, which should guide future developments of repositories and more complex terminologies and ontologies. The model still remains extensible through the decomposition of virtual experiments into tasks and parameter sets, and via the use of extended attributes. The benefits of a common logical model for biomolecular simulations were illustrated through various use cases, including data storage, indexing, and presentation. All the models and dictionaries introduced in this paper are available for download at http://ibiomes.chpc.utah.edu/mediawiki/index.php/Downloads. PMID:24484917
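
    To illustrate the "virtual experiment decomposed into tasks and parameter sets, with extended attributes" idea, the sketch below renders a tiny version of such a logical model as Python dataclasses. The field names and values are illustrative only; the published common data elements are more extensive.

```python
# Toy rendering of a virtual-experiment logical model with extensible attributes.
# Field names and values are invented for illustration.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ParameterSet:
    force_field: str
    timestep_fs: float
    extended: Dict[str, str] = field(default_factory=dict)  # extensibility hook

@dataclass
class SimulationTask:
    method: str                      # e.g. "molecular dynamics", "quantum chemistry"
    software: str
    parameters: ParameterSet

@dataclass
class VirtualExperiment:
    title: str
    tasks: List[SimulationTask]

experiment = VirtualExperiment(
    title="Solvated protein equilibration",
    tasks=[SimulationTask(
        method="molecular dynamics",
        software="AMBER",
        parameters=ParameterSet("ff14SB", 2.0, {"thermostat": "Langevin"}),
    )],
)
print(experiment.tasks[0].parameters.force_field)
```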

  15. Community-Supported Data Repositories in Paleobiology: A 'Middle Tail' Between the Geoscientific and Informatics Communities

    NASA Astrophysics Data System (ADS)

    Williams, J. W.; Ashworth, A. C.; Betancourt, J. L.; Bills, B.; Blois, J.; Booth, R.; Buckland, P.; Charles, D.; Curry, B. B.; Goring, S. J.; Davis, E.; Grimm, E. C.; Graham, R. W.; Smith, A. J.

    2015-12-01

    Community-supported data repositories (CSDRs) in paleoecology and paleoclimatology have a decades-long tradition and serve multiple critical scientific needs. CSDRs facilitate synthetic large-scale scientific research by providing open-access and curated data that employ community-supported metadata and data standards. CSDRs serve as a 'middle tail' or boundary organization between information scientists and the long-tail community of individual geoscientists collecting and analyzing paleoecological data. Over the past decades, a distributed network of CSDRs has emerged, each serving a particular suite of data and research communities, e.g. Neotoma Paleoecology Database, Paleobiology Database, International Tree Ring Database, NOAA NCEI for Paleoclimatology, Morphobank, iDigPaleo, and Integrated Earth Data Alliance. Recently, these groups have organized into a common Paleobiology Data Consortium dedicated to improving interoperability and sharing best practices and protocols. The Neotoma Paleoecology Database offers one example of an active and growing CSDR, designed to facilitate research into ecological and evolutionary dynamics during recent past global change. Neotoma combines a centralized database structure with distributed scientific governance via multiple virtual constituent data working groups. The Neotoma data model is flexible and can accommodate a variety of paleoecological proxies from many depositional contexts. Data input into Neotoma is done by trained Data Stewards, drawn from their communities. Neotoma data can be searched, viewed, and returned to users through multiple interfaces, including the interactive Neotoma Explorer map interface, RESTful Application Programming Interfaces (APIs), the neotoma R package, and the Tilia stratigraphic software. Neotoma is governed by geoscientists and provides community engagement through training workshops for data contributors, stewards, and users. Neotoma is engaged in the Paleobiological Data Consortium and other efforts to improve interoperability among cyberinfrastructure in the paleogeosciences.
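
    The abstract notes that Neotoma data are served through RESTful APIs and the neotoma R package. The sketch below issues a simple site query with Python's requests library; the endpoint, parameters and response fields are assumptions based on the database's public v2.0 API and should be checked against Neotoma's own documentation.

```python
# Sketch of querying the Neotoma REST API. Endpoint and response fields
# are assumed for illustration; consult Neotoma's API docs for specifics.
import requests

url = "https://api.neotomadb.org/v2.0/data/sites"   # assumed endpoint
resp = requests.get(url, params={"sitename": "Devils Lake", "limit": 5}, timeout=30)
resp.raise_for_status()

payload = resp.json()
for site in payload.get("data", []):
    print(site.get("siteid"), site.get("sitename"))
```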

  16. CranialVault and its CRAVE tools: a clinical computer assistance system for deep brain stimulation (DBS) therapy.

    PubMed

    D'Haese, Pierre-François; Pallavaram, Srivatsan; Li, Rui; Remple, Michael S; Kao, Chris; Neimat, Joseph S; Konrad, Peter E; Dawant, Benoit M

    2012-04-01

    A number of methods have been developed to assist surgeons at various stages of deep brain stimulation (DBS) therapy. These include construction of anatomical atlases, functional databases, and electrophysiological atlases and maps. But, a complete system that can be integrated into the clinical workflow has not been developed. In this paper we present a system designed to assist physicians in pre-operative target planning, intra-operative target refinement and implantation, and post-operative DBS lead programming. The purpose of this system is to centralize the data acquired at the various stages of the procedure, reduce the amount of time needed at each stage of the therapy, and maximize the efficiency of the entire process. The system consists of a central repository (CranialVault), of a suite of software modules called CRAnialVault Explorer (CRAVE) that permit data entry and data visualization at each stage of the therapy, and of a series of algorithms that permit the automatic processing of the data. The central repository contains image data for more than 400 patients with the related pre-operative plans and position of the final implants and about 10,550 electrophysiological data points (micro-electrode recordings or responses to stimulations) recorded from 222 of these patients. The system has reached the stage of a clinical prototype that is being evaluated clinically at our institution. A preliminary quantitative validation of the planning component of the system performed on 80 patients who underwent the procedure between January 2009 and December 2009 shows that the system provides both timely and valuable information. Copyright © 2010 Elsevier B.V. All rights reserved.

  17. Online molecular image repository and analysis system: A multicenter collaborative open-source infrastructure for molecular imaging research and application.

    PubMed

    Rahman, Mahabubur; Watabe, Hiroshi

    2018-05-01

    Molecular imaging serves as an important tool for researchers and clinicians to visualize and investigate complex biochemical phenomena using specialized instruments; these instruments are either used individually or in combination with targeted imaging agents to obtain images related to specific diseases with high sensitivity, specificity, and signal-to-noise ratios. However, molecular imaging, which is a multidisciplinary research field, faces several challenges, including the integration of imaging informatics with bioinformatics and medical informatics, the requirement for reliable and robust image analysis algorithms, effective quality control of imaging facilities, and challenges related to individualized disease mapping, data sharing, software architecture, and knowledge management. As a cost-effective and open-source approach to address these challenges related to molecular imaging, we develop a flexible, transparent, and secure infrastructure, named MIRA, which stands for Molecular Imaging Repository and Analysis, primarily using the Python programming language and a MySQL relational database system deployed on a Linux server. MIRA is designed with a centralized image archiving infrastructure and information database so that a multicenter collaborative informatics platform can be built. The capability of dealing with metadata, normalizing image file formats, and storing and viewing different types of documents and multimedia files makes MIRA considerably flexible. With features like logging, auditing, commenting, sharing, and searching, MIRA is useful as an Electronic Laboratory Notebook for effective knowledge management. In addition, the centralized approach for MIRA facilitates on-the-fly access to all its features remotely through any web browser. Furthermore, the open-source approach provides the opportunity for sustainable continued development. MIRA offers an infrastructure that can be used as a cross-boundary collaborative molecular imaging research platform, supporting rapid advances in cancer diagnosis and therapeutics. Copyright © 2018 Elsevier Ltd. All rights reserved.
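
    In the spirit of the Python/MySQL web stack the abstract describes, the sketch below shows a tiny upload endpoint that accepts an image file and records its metadata. It uses Flask and SQLite purely so the snippet is self-contained; the endpoint, fields and schema are illustrative assumptions, not MIRA's actual code.

```python
# Minimal illustrative upload endpoint for an imaging repository.
# Framework, schema and field names are assumptions, not MIRA's implementation.
import sqlite3
from pathlib import Path
from flask import Flask, request, jsonify

app = Flask(__name__)
STORAGE = Path("uploads")
STORAGE.mkdir(exist_ok=True)
db = sqlite3.connect("mira_demo.db", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS images (filename TEXT, modality TEXT)")

@app.post("/images")
def upload_image():
    f = request.files["file"]
    modality = request.form.get("modality", "unknown")   # e.g. PET, SPECT
    f.save(str(STORAGE / f.filename))                     # archive the file
    db.execute("INSERT INTO images VALUES (?, ?)", (f.filename, modality))
    db.commit()
    return jsonify({"stored": f.filename, "modality": modality}), 201

if __name__ == "__main__":
    app.run(debug=True)
```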

  18. A Series of MATLAB Learning Modules to Enhance Numerical Competency in Applied Marine Sciences

    NASA Astrophysics Data System (ADS)

    Fischer, A. M.; Lucieer, V.; Burke, C.

    2016-12-01

    Enhanced numerical competency to navigate massive data landscapes is a critical skill students need to effectively explore, analyse and visualize complex patterns in high-dimensional data and to address the complexity of many of the world's problems. This is especially the case for interdisciplinary, undergraduate applied marine science programs, where students are required to demonstrate competency in methods and ideas across multiple disciplines. In response to this challenge, we have developed a series of repository-based data exploration, analysis and visualization modules in MATLAB for integration across various face-to-face and online classes within the University of Tasmania. The primary focus of these modules is to teach students to collect, aggregate and interpret data from large online marine scientific data repositories in order to 1) gain technical skills in discovering, accessing, managing and visualising large, numerous data sources, 2) interpret, analyse and design approaches to visualise these data, and 3) address, through numerical approaches, complex, real-world problems that traditional scientific methods cannot address. All modules, implemented through a MATLAB live script, include a short recorded lecture to introduce the topic, a handout that gives an overview of the activities, an instructor's manual with a detailed methodology and discussion points, a student assessment (quiz and level-specific challenge task), and a survey. The marine science themes addressed through these modules include biodiversity, habitat mapping, algal blooms and sea surface temperature change, and utilize a series of marine science and oceanographic data portals. Through these modules, students with minimal experience in MATLAB or numerical methods are introduced to array indexing, concatenation, sorting, and reshaping, principal component analysis, spectral analysis and unsupervised classification within the context of oceanographic processes, marine geology and marine community ecology.
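
    The modules themselves are written in MATLAB; as a rough Python analogue of one such exercise (reshaping a gridded field and running a principal component analysis), the sketch below uses synthetic sea-surface-temperature-like data. It is only meant to convey the kind of array manipulation and analysis the modules teach, not their actual content.

```python
# Rough Python analogue of a module exercise: reshape a gridded field and
# compute principal components. Data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
# 24 monthly maps on a 10 x 12 grid, flattened to (time, space)
sst = rng.normal(loc=15.0, scale=2.0, size=(24, 10, 12)).reshape(24, -1)

anomalies = sst - sst.mean(axis=0)            # remove the time-mean field
cov = np.cov(anomalies, rowvar=False)         # spatial covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)        # PCA via eigendecomposition

explained = eigvals[::-1] / eigvals.sum()     # variance fraction, descending
print("variance explained by first 3 modes:", np.round(explained[:3], 3))
```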

  19. Variable thickness transient ground-water flow model. Volume 3. Program listings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reisenauer, A.E.

    1979-12-01

    The Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Program is developing and applying the methodology for assessing the far-field, long-term post-closure safety of deep geologic nuclear waste repositories. AEGIS is being performed by Pacific Northwest Laboratory (PNL) under contract with the Office of Nuclear Waste Isolation (ONWI) for the Department of Energy (DOE). One task within AEGIS is the development of methodology for analysis of the consequences (water pathway) from loss of repository containment as defined by various release scenarios. Analysis of the long-term, far-field consequences of release scenarios requires the application of numerical codes which simulate the hydrologic systems, model the transport of released radionuclides through the hydrologic systems to the biosphere, and, where applicable, assess the radiological dose to humans. Hydrologic and transport models are available at several levels of complexity or sophistication. Model selection and use are determined by the quantity and quality of input data. Model development under AEGIS and related programs provides three levels of hydrologic models, two levels of transport models, and one level of dose models (with several separate models). This is the third of three volumes describing the VTT (Variable Thickness Transient) Groundwater Hydrologic Model, a second-level (intermediate complexity) model of two-dimensional saturated groundwater flow.

  20. The RAND Online Measure Repository for Evaluating Psychological Health and Traumatic Brain Injury Programs. The RAND Toolkit, Volume 2

    DTIC Science & Technology

    2014-01-01

    tempo may raise the risk for mental health challenges. During this time, the U.S. Department of Defense (DoD) has implemented numerous programs to... and were based on the constraints of each electronic database. However, most searches were variations on a basic three-category format: The first...
